Search results for: manufacturing optimization
3552 Optimized Deep Learning-Based Facial Emotion Recognition System
Authors: Erick C. Valverde, Wansu Lim
Abstract:
Facial emotion recognition (FER) systems have recently been developed for more advanced computer vision applications. The ability to identify human emotions would enable smart healthcare facilities to diagnose mental health illnesses (e.g., depression and stress) as well as better human social interactions with smart technologies. The FER system involves two steps: 1) a face detection task and 2) a facial emotion recognition task. It classifies human expressions into various categories such as angry, disgust, fear, happy, sad, surprise, and neutral. This system requires intensive research to address issues with human diversity, various unique human expressions, and the variety of human facial features due to age differences. These issues generally affect the ability of the FER system to detect human emotions with high accuracy. Early FER systems used simple supervised classification algorithms like K-nearest neighbors (KNN) and artificial neural networks (ANN). These conventional FER systems suffer from low accuracy due to their inability to extract the significant features of several human emotions. To increase the accuracy of FER systems, deep learning (DL)-based methods, like convolutional neural networks (CNN), have been proposed. These methods can find more complex features in the human face by means of the deeper connections within their architectures. However, the inference speed and computational cost of a DL-based FER system are often disregarded in exchange for higher accuracy. To cope with this drawback, an optimized DL-based FER system is proposed in this study. An extreme version of Inception V3, known as the Xception model, is leveraged by applying different network optimization methods. Specifically, network pruning and quantization are used to lower computational costs and reduce memory usage, respectively. To support low resource requirements, a 68-landmark face detector from Dlib is used in the early step of the FER system. Furthermore, a DL compiler is utilized to incorporate advanced optimization techniques into the Xception model to improve the inference speed of the FER system. In comparison to VGG-Net and ResNet50, the proposed optimized DL-based FER system experimentally demonstrates the benefits of the network optimization methods used. As a result, the proposed approach can be used to create an efficient, real-time FER system.
Keywords: deep learning, face detection, facial emotion recognition, network optimization methods
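A minimal sketch of the two optimization steps named in the abstract, using the TensorFlow Model Optimization Toolkit; this illustrates the general technique, not the authors' exact pipeline, and all hyperparameters are assumptions:

```python
# Sketch: magnitude-based pruning followed by post-training quantization
# of an Xception backbone with a 7-class emotion head. Illustrative only.
import tensorflow as tf
import tensorflow_model_optimization as tfmot

model = tf.keras.applications.Xception(
    weights=None, input_shape=(71, 71, 3), classes=7)

# 1) Pruning: zero out 50% of the smallest-magnitude weights.
pruned = tfmot.sparsity.keras.prune_low_magnitude(
    model,
    pruning_schedule=tfmot.sparsity.keras.ConstantSparsity(
        target_sparsity=0.5, begin_step=0))
pruned.compile(optimizer="adam", loss="categorical_crossentropy")
# ... fine-tune `pruned` on the FER dataset (with the UpdatePruningStep
# callback), then strip the pruning wrappers:
final = tfmot.sparsity.keras.strip_pruning(pruned)

# 2) Quantization: convert to an 8-bit TFLite model to cut memory usage.
converter = tf.lite.TFLiteConverter.from_keras_model(final)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
open("fer_xception_int8.tflite", "wb").write(tflite_model)
```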
Procedia PDF Downloads 120
3551 Modified Bat Algorithm for Economic Load Dispatch Problem
Authors: Daljinder Singh, J. S. Dhillon, Balraj Singh
Abstract:
According to the no free lunch theorem, a single search technique cannot perform best in all conditions. A metaheuristic optimization method can be an attractive choice for solving optimization problems, offering advantages such as robust and reliable performance, global search capability, little information requirement, ease of implementation, parallelism, and no requirement for a differentiable and continuous objective function. In order to synergize exploration and exploitation and to further enhance the performance of the bat algorithm, this paper proposes a modified bat algorithm that adds an additional search procedure based on the bat's previous experience. The proposed algorithm is used for solving the economic load dispatch (ELD) problem. Practical constraints such as valve-point loading, along with the power balance constraint and generator limits, are undertaken. To take care of the power demand constraint, the variable elimination method is exploited. The proposed algorithm is tested on various ELD problems. The results obtained show that the proposed algorithm performs better in the majority of the ELD problems considered and is at par with existing algorithms for the remaining ones.
Keywords: bat algorithm, economic load dispatch, penalty method, variable elimination method
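A self-contained sketch of the approach's core mechanics: a basic bat algorithm minimizing a valve-point ELD cost, with the power balance handled by variable elimination (the last unit absorbs the residual demand). The 3-unit data are illustrative assumptions, not the paper's test systems:

```python
import numpy as np

rng = np.random.default_rng(0)
a = np.array([561.0, 310.0, 78.0])      # cost coefficients (assumed)
b = np.array([7.92, 7.85, 7.97])
c = np.array([0.00156, 0.00194, 0.00482])
e = np.array([300.0, 200.0, 150.0])     # valve-point terms
f = np.array([0.0315, 0.042, 0.063])
pmin = np.array([100.0, 100.0, 50.0])
pmax = np.array([600.0, 400.0, 200.0])
demand = 850.0

def cost(p):
    return np.sum(a + b * p + c * p**2
                  + np.abs(e * np.sin(f * (pmin - p))))

def evaluate(free):                      # free = outputs of units 1..n-1
    p = np.empty(3)
    p[:2] = np.clip(free, pmin[:2], pmax[:2])
    p[2] = demand - p[:2].sum()          # variable elimination
    if not (pmin[2] <= p[2] <= pmax[2]): # penalize infeasible slack unit
        return 1e9 + abs(p[2])
    return cost(p)

n_bats, iters = 30, 500
x = rng.uniform(pmin[:2], pmax[:2], (n_bats, 2))
v = np.zeros_like(x)
fit = np.array([evaluate(xi) for xi in x])
best = x[fit.argmin()].copy()

for t in range(iters):
    freq = rng.uniform(0.0, 2.0, (n_bats, 1))   # pulse frequencies
    v += (x - best) * freq
    x_new = x + v
    # local random walk around the best bat (loudness fixed for brevity)
    walk = best + 0.01 * rng.standard_normal((n_bats, 2))
    x_new = np.where(rng.random((n_bats, 1)) > 0.5, walk, x_new)
    fit_new = np.array([evaluate(xi) for xi in x_new])
    improved = fit_new < fit
    x[improved], fit[improved] = x_new[improved], fit_new[improved]
    best = x[fit.argmin()].copy()

print("best dispatch (units 1, 2):", best, "cost:", fit.min())
```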
Procedia PDF Downloads 462
3550 Enhancing Dents through Lean Six Sigma
Authors: Prateek Guleria, Shubham Sharma, Rakesh Kumar Shukla, Harshit Sharma
Abstract:
Performance measurement of small and medium-sized businesses is a primary need for all companies to survive and thrive in a dynamic global economy. A structured, systematic, and integrated organization increases employee reliability, sustainability, and loyalty. This paper is a case study of a gear manufacturing industry that was facing the problem of rejection due to dents and damages in gears. The DMAIC cycle was applied, along with different tools including SIPOC (Suppliers, Inputs, Process, Outputs, Customers), Pareto analysis, root cause analysis, and FMEA (Failure Mode and Effects Analysis). The sigma level was improved from 3.46 to 4.06, and the rejection rate was reduced from 7.44% to 1.56%. These findings highlight the influence of a Lean Six Sigma module in the gear manufacturing unit, which has already increased operational quality and continuity to improve market success and meet customer expectations. According to the findings, applying Lean Six Sigma tools will result in increased productivity. The results could assist businesses in deciding which quality tools are likely to improve efficiency and competitiveness and reduce expense.
Keywords: six sigma, DMAIC, SIPOC, failure mode, effect analysis
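For readers wanting to reproduce the sigma arithmetic, a small check of the sigma levels implied by the reported rejection rates, under the conventional 1.5-sigma shift assumption (the paper's exact figures may rest on a different defect-opportunity basis):

```python
from scipy.stats import norm

def sigma_level(defect_rate):
    """Short-term sigma level for a given defect fraction (1.5-sigma shift)."""
    return norm.ppf(1.0 - defect_rate) + 1.5

for rate in (0.0744, 0.0156):
    print(f"{rate:.2%} rejection -> {sigma_level(rate):.2f} sigma")
# 7.44% rejection -> ~2.94 sigma; 1.56% -> ~3.66 sigma
```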
Procedia PDF Downloads 114
3549 Method for Selecting and Prioritising Smart Services in Manufacturing Companies
Authors: Till Gramberg, Max Kellner, Erwin Gross
Abstract:
This paper presents a comprehensive investigation into the topic of smart services and IIoT platforms, focusing on their selection and prioritization in manufacturing organizations. First, a literature review is conducted to provide a basic understanding of the current state of research in the area of smart services. Based on discussed and established definitions, a definition approach for this paper is developed. In addition, value propositions for smart services are identified based on the literature and expert interviews. Furthermore, the general requirements for the provision of smart services are presented. Subsequently, existing approaches for the selection and development of smart services are identified and described. In order to determine the requirements for the selection of smart services, expert opinions from successful companies that have already implemented smart services are collected through semi-structured interviews. Based on the results, criteria for the evaluation of existing methods are derived. The existing methods are then evaluated according to the identified criteria. Furthermore, a novel method for the selection of smart services in manufacturing companies is developed, taking into account the identified criteria and the existing approaches. The developed concept for the method is verified in expert interviews. The method includes a collection of relevant smart services identified in the literature. The actual relevance of the use cases in the industrial environment was validated in an online survey. The required data and sensors are assigned to the smart service use cases. The value proposition of the use cases is evaluated in an expert workshop using different indicators. Based on this, a comparison is made between the identified value proposition and the required data, leading to a prioritization process. The prioritization process follows an established procedure for evaluating technical decision-making processes. In addition to the technical requirements, the prioritization process includes other evaluation criteria such as the economic benefit, the conformity of the new service offering with the company strategy, or the customer retention enabled by the smart service. Finally, the method is applied and validated in an industrial environment. The results of these experiments are critically reflected upon and an outlook on future developments in the area of smart services is given. This research contributes to a deeper understanding of the selection and prioritization process as well as the technical considerations associated with smart service implementation in manufacturing organizations. The proposed method serves as a valuable guide for decision makers, helping them to effectively select the most appropriate smart services for their specific organizational needs.
Keywords: smart services, IIoT, Industrie 4.0, IIoT platform, big data
Procedia PDF Downloads 90
3548 Parameter Selection for Computationally Efficient Use of the BFVrns Fully Homomorphic Encryption Scheme
Authors: Cavidan Yakupoglu, Kurt Rohloff
Abstract:
In this study, we aim to provide a novel parameter selection model for the BFVrns scheme, which is one of the prominent FHE schemes. Parameter selection in lattice-based FHE schemes is a practical challenge for experts and non-experts alike. Towards a solution to this problem, we introduce a hybrid principles-based approach that combines theoretical and experimental analyses. To begin, we use regression analysis to examine the effect of the parameters on performance and security. The fact that the FHE parameters induce different behaviors in performance, security, and the ciphertext expansion factor (CEF) makes the process of parameter selection more challenging. To address this issue, we use a multi-objective optimization algorithm to select the optimum parameter set for performance, CEF, and security at the same time. As a result of this optimization, we obtain an improved parameter set for better performance at a given security level, ensuring correctness and security against lattice attacks by providing at least 128-bit security. Our result enables an average ~5x smaller CEF and mostly better performance in comparison to the parameter sets given in [1]. This approach can be considered a semi-automated parameter selection. These studies are conducted using the PALISADE homomorphic encryption library, which is a well-known HE library.
Keywords: lattice cryptography, fully homomorphic encryption, parameter selection, LWE, RLWE
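A minimal sketch of the scalarized multi-objective selection step; the candidate parameter sets and their attributes below are placeholders, not real benchmarks or security estimates:

```python
# Candidate BFVrns parameter sets are treated as given input data:
# (ring_dim, log2_q, security_bits, runtime_ms, cef) -- placeholder values.
candidates = [
    (8192, 218, 128, 14.0, 6.1),
    (16384, 438, 128, 31.0, 4.9),
    (16384, 330, 192, 42.0, 5.5),
    (32768, 880, 128, 95.0, 4.2),
]

def feasible(c):
    return c[2] >= 128          # enforce at least 128-bit security

def score(c, w_time=0.5, w_cef=0.5):
    # normalize each objective by the worst feasible value, then combine
    worst_t = max(x[3] for x in candidates if feasible(x))
    worst_c = max(x[4] for x in candidates if feasible(x))
    return w_time * c[3] / worst_t + w_cef * c[4] / worst_c

best = min(filter(feasible, candidates), key=score)
print("selected parameter set:", best)
```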
Procedia PDF Downloads 163
3547 Engineered Bio-Coal from Pressed Seed Cake for Removal of 2, 4, 6-Trichlorophenol with Parametric Optimization Using Box–Behnken Method
Authors: Harsha Nagar, Vineet Aniya, Alka Kumari, Satyavathi B.
Abstract:
In the present study, engineered bio-coal was produced from pressed seed cake, which is otherwise non-edible in origin. The production process involves slow pyrolysis wherein, based on the optimization of process parameters, a substantial reduction in H/C and O/C of 77% was achieved with respect to the original ratios of 1.67 and 0.8, respectively. The bio-coal so produced was found to have a higher heating value of 29,899 kJ/kg with a surface area of 17 m²/g and a pore volume of 0.002 cc/g. The functional characterization of the bio-coal and its subsequent modification were carried out to enhance its active sites, and it was then used as an adsorbent material for the removal of the herbicide 2,4,6-trichlorophenol (2,4,6-TCP) from an aqueous stream. The point of zero charge for the bio-coal was found to be at pH < 3, where its surface is positively charged and attracts anions, resulting in maximum 2,4,6-TCP adsorption at pH 2.0. The parametric optimization of the adsorption process was studied based on the Box-Behnken design with the desirability approach. The results showed optimum values of adsorption efficiency of 74.04% and uptake capacity of 118.336 mg/g for an initial adsorbate concentration of 250 mg/l and a particle size of 0.12 mm at pH 2.0 and 1 g/L of bio-coal loading. Negative Gibbs free energy change values indicated the feasibility of 2,4,6-TCP adsorption on the bio-coal. Decreasing ΔG values with the rise in temperature indicated high favourability at low temperatures. The equilibrium modeling results showed that both isotherms (Langmuir and Freundlich) accurately predicted the equilibrium data, which may be attributed to the different affinities of the functional groups of the bio-coal for 2,4,6-TCP removal. Based on the kinetics data modeling, the possible mechanisms for 2,4,6-TCP adsorption are physisorption (pore diffusion, π–π electron donor-acceptor interaction, H-bonding, and van der Waals dispersion forces) and chemisorption (chemical bonding with phenolic and amine groups).
Keywords: engineered bio-coal, 2,4,6-trichlorophenol, Box-Behnken design, biosorption
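A sketch of fitting the two isotherms named above with SciPy; the equilibrium data pairs are fabricated placeholders for illustration only:

```python
import numpy as np
from scipy.optimize import curve_fit

Ce = np.array([10.0, 25.0, 50.0, 100.0, 150.0, 250.0])   # mg/L at equilibrium
qe = np.array([30.0, 55.0, 80.0, 100.0, 110.0, 118.0])   # mg/g uptake

def langmuir(Ce, qm, KL):
    return qm * KL * Ce / (1.0 + KL * Ce)

def freundlich(Ce, KF, n):
    return KF * Ce ** (1.0 / n)

(qm, KL), _ = curve_fit(langmuir, Ce, qe, p0=(120.0, 0.01))
(KF, n), _ = curve_fit(freundlich, Ce, qe, p0=(10.0, 2.0))

for name, pred in (("Langmuir", langmuir(Ce, qm, KL)),
                   ("Freundlich", freundlich(Ce, KF, n))):
    ss_res = np.sum((qe - pred) ** 2)
    ss_tot = np.sum((qe - qe.mean()) ** 2)
    print(f"{name}: R^2 = {1 - ss_res / ss_tot:.3f}")
print(f"qm={qm:.1f} mg/g, KL={KL:.4f} L/mg, KF={KF:.2f}, n={n:.2f}")
```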
Procedia PDF Downloads 117
3546 Application of Rapid Prototyping to Create Additive Prototype Using Computer System
Authors: Meftah O. Bashir, Fatma A. Karkory
Abstract:
Rapid prototyping is a new group of manufacturing processes that allows the fabrication of physical parts of any complexity using a layer-by-layer deposition technique directly from a computer system. The rapid prototyping process greatly reduces the time and cost necessary to bring a new product to market. The prototypes made by these systems are used in a range of industrial applications, including design evaluation, verification, testing, and as patterns for casting processes. These processes employ a variety of materials and mechanisms to build up the layers that form the part. The aim of the present work was to build an FDM prototyping machine that could control the X-Y motion and material deposition to generate two-dimensional and three-dimensional complex shapes. This study focused on the deposition of wax and set out to determine the properties of the wax materials used in order to enable better control of the FDM process. The study also looks at the integration of a computer-controlled electro-mechanical system with the traditional FDM additive prototyping process. The characteristics of the wax were analysed in order to optimize the model production process; these included the wax phase change temperature, wax viscosity, and wax droplet shape during processing.
Keywords: rapid prototyping, wax, manufacturing processes, shape
Procedia PDF Downloads 466
3545 Entropic Minimization Applied to Rotary Dryers to Reduce the Energy Consumption
Authors: I. O. Nascimento, J. T. Manzi
Abstract:
The drying process is an important operation in the chemical industry, and it is widely used in the food, grain, and fertilizer industries. However, because it demands a considerable consumption of energy, such a process requires a deep energetic analysis in order to reduce operating costs. This paper deals with thermodynamic optimization applied to rotary dryers based on entropy production minimization, aiming to reduce energy consumption. To do this, the mass, energy, and entropy balances were used to develop a relationship that represents the rate of entropy production. The use of the Second Law of Thermodynamics is essential because it takes into account the constraints of nature. Once the entropy production rate is minimized, optimal operating conditions can be established and the process can obtain a substantial gain in energy saving. The minimization strategy was carried out using classical methods such as Lagrange multipliers and implemented on the MATLAB platform. As expected, the preliminary results reveal a significant energy saving from the application of the optimal parameters found by the entropy minimization procedure. It is important to note that this method has shown easy implementation and low cost.
Keywords: thermodynamic optimization, drying, entropy minimization, modeling dryers
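A toy sketch of the minimization strategy (in Python rather than the paper's MATLAB): minimize an entropy-production expression subject to an operating constraint with a Lagrange-multiplier-based SQP solver. The dryer model below is a hypothetical stand-in, not the paper's:

```python
import numpy as np
from scipy.optimize import minimize

def entropy_rate(x):
    T_air, m_air = x             # inlet air temperature [K], air flow [kg/s]
    T0, cp = 298.15, 1.005
    # hypothetical entropy generation: heating irreversibility + flow losses
    heating = m_air * cp * (np.log(T_air / T0) - (T_air - T0) / T_air)
    losses = 0.002 * m_air**3
    return heating + losses

def drying_duty(x):              # required heat duty of 150 kW (assumed)
    T_air, m_air = x
    return m_air * 1.005 * (T_air - 298.15) - 150.0

res = minimize(entropy_rate, x0=np.array([400.0, 2.0]),
               constraints=[{"type": "eq", "fun": drying_duty}],
               bounds=[(320.0, 500.0), (0.5, 10.0)], method="SLSQP")
print("optimal (T_air, m_air):", res.x, "entropy rate:", res.fun)
```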
Procedia PDF Downloads 260
3544 Sulfanilamide/Epoxy Resin and Its Application as Tackifier in Epoxy Adhesives
Authors: Oiane Ruiz de Azua, Salvador Borros, Nuria Agullo, Jordi Arbusa
Abstract:
Tackiness is described as the ability to spontaneously form a bond to another material under light pressure within a short application time. During the first few minutes of the adhesive's curing, enough tack is necessary to keep the substrates together while cohesion is increasing within the adhesive. This property plays a key role in the manufacturing process of pieces. Epoxy adhesives, unlike other adhesives, usually present low tackiness before curing; however, there is very little literature about the use of tackifiers in epoxy adhesives, except for high-molecular-weight epoxy additives. In the present work, a tetrafunctional epoxy resin based on bisphenol A and sulfanilamide has been synthesized in order to be used as a tackifier. This additive offers improved specific adhesion to two-component (2K) epoxy adhesives. The tackifier has to be dosed carefully so as not to alter the mechanical and rheological properties of the adhesive. The synthesized product has been analyzed by FTIR and ¹H-NMR, and the effect of the addition of 1 wt% of the tackifier on the rheological properties, viscoelastic behavior, and mechanical properties has been studied. On one hand, the addition of the product to the epoxy resin part showed a significant increase in tackiness with respect to the neat epoxy resin. On the other hand, the tackiness of the whole formulation was also increased. The curing time of the adhesive did not undergo any relevant change with the tackifier addition. Regarding viscoelastic properties, the storage modulus (G') and loss modulus (G'') also remain unchanged at ambient temperature. If a higher tackifier concentration were added, differences in viscoelastic properties would probably be observed. The study of mechanical properties shows that hardness and tensile strength also keep their values unchanged with respect to the neat two-component adhesive. In conclusion, the addition of 1 wt% of sulfanilamide/epoxy enhances the tack of the epoxy resin part without significantly modifying the rheological, mechanical, or viscoelastic properties of the product. Thus, the sulfanilamide presented could be a good candidate as an additive to 2K epoxy formulations for the manufacturing process of pieces.
Keywords: epoxy adhesive, manufacturing process of pieces, sulfanilamide, tackifiers
Procedia PDF Downloads 185
3543 Design of a Graphical User Interface for Data Preprocessing and Image Segmentation Process in 2D MRI Images
Authors: Enver Kucukkulahli, Pakize Erdogmus, Kemal Polat
Abstract:
2D image segmentation is a significant process for finding a suitable region in medical images such as MRI, PET, CT, etc. In this study, we have focused on 2D MRI images for the image segmentation process. We have designed a GUI (graphical user interface) written in MATLAB™ for 2D MRI images. In this program, there are two different interfaces, covering data pre-processing and image clustering or segmentation. In the data pre-processing section, there are a median filter, an average filter, an unsharp mask filter, a Wiener filter, and a custom filter (a filter that is designed by the user in MATLAB). As for image clustering, there are seven different image segmentation algorithms for 2D MR images: PSO (particle swarm optimization), GA (genetic algorithm), Lloyd's algorithm, k-means, the combination of Lloyd's algorithm and k-means, mean shift clustering, and finally BBO (biogeography-based optimization). To find the suitable cluster number in a 2D MRI, we have designed a histogram-based cluster estimation method and then supplied the estimated numbers to the image segmentation algorithms to cluster an image automatically. Also, we have selected the best hybrid method for each 2D MR image thanks to this GUI software.
Keywords: image segmentation, clustering, GUI, 2D MRI
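A Python analogue (the GUI itself is MATLAB) of the histogram-based cluster-number estimation followed by k-means segmentation; the smoothing and peak-prominence settings are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import find_peaks
from sklearn.cluster import KMeans

def segment(image_2d):
    # estimate k as the number of peaks in the smoothed intensity histogram
    hist, _ = np.histogram(image_2d, bins=256, range=(0, 255))
    smooth = gaussian_filter1d(hist.astype(float), sigma=3)
    peaks, _ = find_peaks(smooth, prominence=smooth.max() * 0.05)
    k = max(2, len(peaks))
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(
        image_2d.reshape(-1, 1).astype(float))
    return labels.reshape(image_2d.shape), k

mri = (np.random.rand(128, 128) * 255).astype(np.uint8)  # stand-in image
seg, k = segment(mri)
print("estimated clusters:", k)
```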
Procedia PDF Downloads 377
3542 Solving Nonconvex Economic Load Dispatch Problem Using Particle Swarm Optimization with Time Varying Acceleration Coefficients
Authors: Alireza Alizadeh, Hossein Ghadimi, Oveis Abedinia, Noradin Ghadimi
Abstract:
Particle Swarm Optimization with Time Varying Acceleration Coefficients (PSO-TVAC) is proposed in this paper to solve the optimal economic load dispatch (ELD) problem. The proposed methodology easily takes care of non-convex economic load dispatch problems along with different constraints like transmission losses, dynamic operation constraints, and prohibited operating zones. The proposed approach has been implemented on the 3-machine 6-bus, IEEE 5-machine 14-bus, and IEEE 6-machine 30-bus systems and a 13-thermal-unit power system. The proposed technique is compared with a hybrid approach for solving the ELD problem with the valve-point effect. The comparison results prove the capability of the proposed method, giving significant improvements in the generation cost for the economic load dispatch problem.
Keywords: PSO-TVAC, economic load dispatch, non-convex cost function, prohibited operating zone, transmission losses
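A compact sketch of PSO-TVAC; the coefficient schedule (c1: 2.5 → 0.5, c2: 0.5 → 2.5) follows the commonly used TVAC formulation, and the quadratic test objective is a placeholder for the ELD cost function:

```python
import numpy as np

rng = np.random.default_rng(1)
dim, n_particles, T = 3, 30, 400

def objective(p):                       # stand-in for the ELD cost
    return np.sum((p - np.array([300.0, 150.0, 75.0]))**2, axis=-1)

lo, hi = 50.0, 600.0
x = rng.uniform(lo, hi, (n_particles, dim))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), objective(x)
gbest = pbest[pbest_f.argmin()].copy()

for t in range(T):
    c1 = (0.5 - 2.5) * t / T + 2.5      # cognitive: decreases 2.5 -> 0.5
    c2 = (2.5 - 0.5) * t / T + 0.5      # social:    increases 0.5 -> 2.5
    w = 0.9 - 0.5 * t / T               # linearly decreasing inertia weight
    r1, r2 = rng.random((2, n_particles, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)
    f = objective(x)
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("best solution:", gbest, "cost:", pbest_f.min())
```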
Procedia PDF Downloads 388
3541 Statistical Optimization of Distribution Coefficient for Reactive Extraction of Lactic Acid Using Tri-n-octyl Amine in Oleyl Alcohol and n-Hexane
Authors: Avinash Thakur, Parmjit S. Panesar, Manohar Singh
Abstract:
The distribution coefficient, KD, for the reactive extraction of lactic acid from aqueous solutions using 10-30% (v/v) tri-n-octyl amine (extractant) dissolved in n-hexane (inert diluent) and 20% (v/v) oleyl alcohol (modifier) was optimized by using response surface methodology (RSM). A three-level Box-Behnken design was employed for the experimental design, the analysis of the results, and to depict the combined interactive effect of seven independent variables, viz. lactic acid concentration (cl), pH, TOA concentration in the organic phase (ψ), treat ratio (φ), temperature (T), agitation speed (ω), and batch agitation time (τ), on the distribution coefficient of lactic acid. The regression analysis indicated that the quadratic model is significant (R² and adjusted R² are 98.72% and 98.69%, respectively). A numerical optimization resulted in a maximum lactic acid distribution coefficient (KD) of 3.16 at the optimized values of the test variables cl, pH, ψ, φ, T, ω, and τ of 0.15 [M], 3.0, 22.75% (v/v), 1.0 (v/v), 26°C, 145 rpm, and 23 min, respectively. Good agreement between the predicted values and those obtained experimentally under the optimized conditions was exhibited.
Keywords: distribution coefficient, tri-n-octylamine, lactic acid, response surface methodology
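A sketch of fitting and numerically optimizing a full quadratic response-surface model, as done for KD; the design matrix and responses below are synthetic placeholders (a real Box-Behnken run would supply them):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from scipy.optimize import minimize

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (60, 3))                 # 3 coded factors (of the 7)
y = (3.0 - 0.8 * X[:, 0]**2 - 0.5 * X[:, 1]**2 + 0.4 * X[:, 2]
     + 0.3 * X[:, 0] * X[:, 1] + rng.normal(0, 0.05, 60))   # synthetic K_D

poly = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(poly.fit_transform(X), y)
print("R^2 =", model.score(poly.transform(X), y))

# numerical optimization of the fitted surface over the coded region
res = minimize(lambda x: -model.predict(poly.transform(x.reshape(1, -1)))[0],
               x0=np.zeros(3), bounds=[(-1, 1)] * 3)
print("predicted max K_D:", -res.fun, "at coded levels", res.x)
```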
Procedia PDF Downloads 458
3540 Order Optimization of a Telecommunication Distribution Center through Service Lead Time
Authors: Tamás Hartványi, Ferenc Tóth
Abstract:
The performance of a European telecommunication distribution center is measured by service lead time and quality. The operating model is CTO (customized to order), namely high-mix customization of telecommunication network equipment and parts. CTO operation comprises material receiving, warehousing, and network and server assembly to order, configured based on customer specifications. The variety of products and orders does not support a mass production structure. One of the success factors in satisfying customers is a proper aggregate planning method for the operation, in order to optimize human resources and achieve highly efficient asset utilization. This research investigates several methods to find a proper way to build an order book simulation, where the practical optimization problem may contain thousands of variables; the simulation running times of the developed algorithms were therefore taken into account with high importance. Two operations research models were developed: in the first, customer demand is given in orders with no changeover time; in the second, customer demands are given for product types and the changeover time is constant.
Keywords: CTO, aggregated planning, demand simulation, changeover time
Procedia PDF Downloads 269
3539 Cash Flow Optimization on Synthetic CDOs
Authors: Timothée Bligny, Clément Codron, Antoine Estruch, Nicolas Girodet, Clément Ginet
Abstract:
Collateralized debt obligations (CDOs) are not as widely used nowadays as they were before the 2007 subprime crisis. Nonetheless, there remains an enthralling challenge in optimizing the cash flows associated with synthetic CDOs. A Gaussian-based model is used here, in which default correlation and unconditional probabilities of default are highlighted. Numerous simulations are then performed based on this model for different scenarios in order to evaluate the associated cash flows given a specific number of defaults at different periods of time. Cash flows are not calculated solely on a single bought or sold tranche but rather on a combination of bought and sold tranches. Under some assumptions, the simplex algorithm gives a way to find the maximum cash flow according to the correlation of defaults and the maturities. The Gaussian model used is not realistic in crisis situations. Besides, the present system does not handle buying or selling a portion of a tranche, only the whole tranche. However, the work provides the investor with relevant elements on what and when to buy and sell.
Keywords: synthetic collateralized debt obligation (CDO), credit default swap (CDS), cash flow optimization, probability of default, default correlation, strategies, simulation, simplex
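A linear-programming sketch of the tranche-selection step: choose tranche notionals to maximize expected cash flow under a budget cap. The per-tranche figures are placeholders standing in for Gaussian-copula simulation outputs, not market data (SciPy's "highs" solver is used here where the paper names the classic simplex method):

```python
import numpy as np
from scipy.optimize import linprog

# expected simulated cash flow and upfront cost per unit notional,
# for 4 candidate bought/sold tranche positions (placeholder numbers)
exp_cf = np.array([0.012, 0.007, -0.004, 0.009])
premium = np.array([0.35, 0.22, 0.10, 0.28])

budget = 1.0
# linprog minimizes, so negate the objective to maximize expected cash flow
res = linprog(c=-exp_cf,
              A_ub=premium.reshape(1, -1), b_ub=[budget],
              bounds=[(0, 1)] * 4, method="highs")
print("notional per tranche:", res.x, "expected cash flow:", -res.fun)
```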
Procedia PDF Downloads 276
3538 Process Optimization of Mechanochemical Synthesis for the Production of 4,4 Bipyridine Based MOFs Using Twin Screw Extrusion and Multivariate Analysis
Authors: Ahmed Metawea, Rodrigo Soto, Majeida Kharejesh, Gavin Walker, Ahmad B. Albadarin
Abstract:
In this study, towards a green approach, we have investigated the effect of the operating conditions of solvent-assisted twin-screw extrusion (TSE) on the production of a 4,4′-bipyridine-based one-dimensional (1D) coordination polymer, using cobalt nitrate as the metal precursor with a 1:1 molar ratio. Different operating parameters such as solvent percentage, screw speed, and feeding rate are considered. The resultant product is characterized using offline methods, namely powder X-ray diffraction (PXRD), Raman spectroscopy, and scanning electron microscopy (SEM), in order to investigate the product purity and surface morphology. A lower feeding rate increased the product's quality, as more residence time was provided for the reaction to take place. The most important influencing factor was the amount of liquid added: the addition of water facilitated the reaction inside the TSE by increasing the reaction surface area of the particles.
Keywords: MOFs, multivariate analysis, process optimization, chemometrics
Procedia PDF Downloads 160
3537 Pharmaceutical Scale-up for Solid Dosage Forms
Authors: A. Shashank Tiwari, S. P. Mahapatra
Abstract:
Scale-up is defined as the process of increasing the batch size. Scale-up of a process can also be viewed as a procedure for applying the same process to different output volumes. There is a subtle difference between these two definitions: batch size enlargement does not always translate into a size increase of the processing volume. In mixing applications, scale-up is indeed concerned with increasing the linear dimensions from the laboratory to the plant size. On the other hand, processes exist (e.g., tableting) where the term 'scale-up' simply means enlarging the output by increasing the speed. To complete the picture, one should point out special procedures where an increase of the scale is counterproductive and 'scale-down' is required to improve the quality of the product. In moving from Research and Development (R&D) to production scale, it is sometimes essential to have an intermediate batch scale. This is achieved at the so-called pilot scale, which is defined as the manufacture of drug product by a procedure fully representative of and simulating that used for full manufacturing scale. This scale also makes it possible to produce enough product for clinical testing and to manufacture samples for marketing. However, inserting an intermediate step between R&D and production scales does not, in itself, guarantee a smooth transition. A well-defined process may generate a perfect product both in the laboratory and the pilot plant and then fail quality assurance tests in production.
Keywords: scale-up, research, size, batch
Procedia PDF Downloads 414
3536 Performance Evaluation and DEAR-Based Optimization on Machining Leather Specimens to Reduce Carbonization
Authors: Khaja Moiduddin, Tamer Khalaf, Muthuramalingam Thangaraj
Abstract:
Due to its variety of benefits over traditional cutting techniques, the usage of laser cutting technology has risen substantially in recent years. Hot wire machining can cut leather into the required shape by controlling the wire and the thermal energy it generates. In the present study, an attempt has been made to investigate the performance measures of the hot wire machining process in cutting leather specimens. Carbonization and material removal rate were considered as quality indicators. Burning the leather during machining may produce carbon particles, reducing product quality. Minimizing the effect of carbon particles is crucial for assuring operator and environmental safety, health, and product quality. Hot wire machining can efficiently cut the specimens by controlling the current through the wire. Taguchi-DEAR-based optimization was also performed on the process, which resulted in the required carbonization and material removal rate. Using the DEAR approach, the optimal parameters of the present study were found with 3.7% prediction error accuracy.
Keywords: carbonization, leather, MRR, current
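A sketch of one common formulation of the Taguchi-DEAR ranking for trading off the two responses (MRR larger-the-better, carbonization smaller-the-better); the trial data below are illustrative placeholders for an L9-style experiment:

```python
import numpy as np

mrr = np.array([1.8, 2.4, 2.1, 2.9, 2.6, 2.2, 3.0, 2.7, 2.5])   # mm^3/min
carb = np.array([0.42, 0.55, 0.38, 0.61, 0.47, 0.35, 0.66, 0.51, 0.44])

# weights: each trial's share of the response total
w_mrr = mrr / mrr.sum()
w_carb = (1.0 / carb) / (1.0 / carb).sum()   # inverted: smaller is better

# multi-response performance index (MRPI) per trial
mrpi = w_mrr * mrr + w_carb * (1.0 / carb)
best = int(np.argmax(mrpi))
print("MRPI per trial:", np.round(mrpi, 4))
print("best trial:", best + 1)
```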
Procedia PDF Downloads 65
3535 A Method Development for Improving the Efficiency of Solid Waste Collection System Using Network Analyst
Authors: Dhvanidevi N. Jadeja, Daya S. Kaul, Anurag A. Kandya
Abstract:
Municipal solid waste (MSW) collection in a city is often performed in a less than effective manner, which results in poor management of the environment and natural resources. Municipal corporations do not possess efficient waste management and recycling programs because of the complexity of the task, which involves many factors. The solid waste collection system depends upon various factors such as manpower, the number and size of vehicles, transfer station size, dustbin size and weight, on-road traffic, and many others. These factors affect the collection cost, energy, and the overall municipal tax for the city. Generally, different types of waste are scattered throughout the city in a heterogeneous way, which poses challenges for the efficient collection of solid waste. An efficient waste collection and transportation strategy must be undertaken, which includes the optimization of routes, waste volumes, and manpower. With these optimized, the overall cost can be reduced, as the fuel and energy requirements would be lower, and the municipal waste taxes levied would also be less. To carry out the optimization study of the collection system, various data need to be collected from the Ahmedabad Municipal Corporation, such as the amount of waste generated per day, number of workers, collection schedule, road maps, number of transfer stations, location of transfer stations, number of equipment (tractors, machinery), number of zones, collection routes, etc. ArcGIS Network Analyst is introduced for best-route identification applied to municipal waste collection. The simulation consists of scenarios of visiting loading spots in the municipality of Ahmedabad, considering dynamic factors like network traffic changes and roads closed due to natural or technical causes. Different routes were selected in a particular area of Ahmedabad city, and the present routes were optimized to reduce their length using ArcGIS Network Analyst. The result indicates up to a 35% reduction in route length.
Keywords: collection routes, efficiency, municipal solid waste, optimization
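An open-source sketch of the route-shortening idea (the study itself uses ArcGIS Network Analyst, whose API is not shown here): model the roads as a weighted graph and solve an approximate collection tour over the loading spots. The toy network and distances are illustrative assumptions:

```python
import networkx as nx
from networkx.algorithms.approximation import traveling_salesman_problem

G = nx.Graph()
edges = [("depot", "A", 2.1), ("depot", "B", 3.4), ("A", "B", 1.2),
         ("A", "C", 2.7), ("B", "C", 1.9), ("C", "D", 2.3),
         ("B", "D", 3.8), ("D", "depot", 4.0)]
G.add_weighted_edges_from(edges)

stops = ["depot", "A", "B", "C", "D"]            # loading spots to visit
route = traveling_salesman_problem(G, nodes=stops, cycle=True)
length = sum(nx.shortest_path_length(G, u, v, weight="weight")
             for u, v in zip(route, route[1:]))
print("route:", route, "length:", round(length, 1), "km")
```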
Procedia PDF Downloads 137
3534 Formulation Development, Process Optimization and Comparative Study of Poorly Compressible Drugs Ibuprofen, Acetaminophen Using Direct Compression and Top Spray Granulation Technique
Authors: Abhishek Pandey
Abstract:
Ibuprofen and acetaminophen are widely used as prescription and non-prescription medicines. Ibuprofen is mainly used in the treatment of mild to moderate pain related to headache, migraine, and postoperative conditions, and in the management of spondylitis, osteoarthritis, and rheumatoid arthritis. Acetaminophen is used as an analgesic and antipyretic drug. Ibuprofen has a high tendency to stick to the punches of the tablet punching machine, while acetaminophen is not ordinarily compressible into a tablet formulation because acetaminophen crystals are very hard and brittle in nature and fracture very easily when compressed, producing capping and lamination defects; therefore, the wet granulation method is used to make them compressible. The aim of the study was to prepare ibuprofen and acetaminophen tablets by direct compression and the top spray granulation technique. In this investigation, tablets were prepared using directly compressible grade excipients: dibasic calcium phosphate, lactose anhydrous (DCL21), and microcrystalline cellulose (Avicel PH 101). In order to obtain the best or optimized formulation, nine different formulations were generated, among which batches F7, F8, and F9 showed good results within the acceptable limits. Formulation F7 was selected as the optimized product on the basis of the dissolution study. Further, directly compressible granules of both drugs were prepared using the top spray granulation technique in a fluidized bed processor and compressed. In order to obtain the best product, process optimization was carried out by performing four trials in which various parameters like inlet air temperature, spray rate, peristaltic pump rpm, % LOD, granule properties, blending time, and hardness were optimized. Batch T3 was chosen as the optimized batch on the basis of physical and chemical evaluation. Finally, the formulations prepared by both techniques were compared.
Keywords: direct compression, top spray granulation, process optimization, blending time
Procedia PDF Downloads 364
3533 Data-Driven Strategies for Enhancing Food Security in Vulnerable Regions: A Multi-Dimensional Analysis of Crop Yield Predictions, Supply Chain Optimization, and Food Distribution Networks
Authors: Sulemana Ibrahim
Abstract:
Food security remains a paramount global challenge, with vulnerable regions grappling with issues of hunger and malnutrition. This study embarks on a comprehensive exploration of data-driven strategies aimed at ameliorating food security in such regions. Our research employs a multifaceted approach, integrating data analytics to predict crop yields, optimize supply chains, and enhance food distribution networks. The study unfolds as a multi-dimensional analysis, commencing with the development of robust machine learning models harnessing remote sensing data, historical crop yield records, and meteorological data to forecast crop yields. These predictive models, underpinned by convolutional and recurrent neural networks, furnish critical insights into anticipated harvests, empowering proactive measures to confront food insecurity. Subsequently, the research scrutinizes supply chain optimization to address food security challenges, capitalizing on linear programming and network optimization techniques. These strategies intend to mitigate loss and wastage while streamlining the distribution of agricultural produce from field to fork. In conjunction, the study investigates food distribution networks with a particular focus on network efficiency, accessibility, and equitable food resource allocation. Network analysis tools, complemented by data-driven simulation methodologies, unveil opportunities for augmenting the efficacy of these critical lifelines. This study also considers the ethical implications and privacy concerns associated with the extensive use of data in the realm of food security; the proposed methodology outlines guidelines for responsible data acquisition, storage, and usage. The ultimate aspiration of this research is to forge a nexus between data science and food security policy, bestowing actionable insights to mitigate the ordeal of food insecurity. The holistic approach, converging data-driven crop yield forecasts, optimized supply chains, and improved distribution networks, aspires to revitalize food security in the most vulnerable regions, elevating the quality of life for millions worldwide.
Keywords: data-driven strategies, crop yield prediction, supply chain optimization, food distribution networks
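A sketch of the supply-chain optimization step as a classic transportation linear program: ship food from surplus depots to deficit regions at minimum cost. Supplies, demands, and unit costs are illustrative placeholders:

```python
import numpy as np
from scipy.optimize import linprog

supply = np.array([120.0, 80.0])          # tonnes available at 2 depots
demand = np.array([50.0, 70.0, 60.0])     # tonnes needed in 3 regions
cost = np.array([[4.0, 6.0, 9.0],         # unit shipping cost depot->region
                 [5.0, 3.0, 7.0]])

n_d, n_r = cost.shape
c = cost.ravel()                          # decision vars: x[d, r] flattened
# each depot ships at most its supply
A_ub = np.zeros((n_d, n_d * n_r))
for d in range(n_d):
    A_ub[d, d * n_r:(d + 1) * n_r] = 1.0
# each region receives exactly its demand
A_eq = np.zeros((n_r, n_d * n_r))
for r in range(n_r):
    A_eq[r, r::n_r] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand,
              bounds=[(0, None)] * (n_d * n_r), method="highs")
print("shipment plan:\n", res.x.reshape(n_d, n_r), "\ncost:", res.fun)
```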
Procedia PDF Downloads 63
3532 Optimal Maintenance Clustering for Rail Track Components Subject to Possession Capacity Constraints
Authors: Cuong D. Dao, Rob J.I. Basten, Andreas Hartmann
Abstract:
This paper studies the optimal planning of preventive maintenance and renewal activities for components in a single railway track when the available time for maintenance is limited. The rail-track system consists of several types of components, such as rails, ballast, and switches, with different preventive maintenance and renewal intervals. To perform maintenance or renewal on the track, a train-free period for maintenance, called a possession, is required. Since a major possession directly affects the regular train schedule, maintenance and renewal activities are clustered as much as possible. In a highly dense and utilized railway network, the possession time on the track is critical, since the demand for train operations is very high and a long possession has a severe impact on the regular train schedule. We present an optimization model and investigate the maintenance schedules with and without the possession capacity constraint. In addition, we also integrate the socio-economic cost related to the effects of the maintenance time into the variable possession cost in the optimization model. A numerical example is provided to illustrate the model.
Keywords: rail-track components, maintenance, optimal clustering, possession capacity
Procedia PDF Downloads 264
3531 Impact of Process Parameters on Tensile Strength of Fused Deposition Modeling Printed Crisscross Polylactic Acid
Authors: Shilpesh R. Rajpurohit, Harshit K. Dave
Abstract:
Additive manufacturing has gained popularity in recent times due to its capability to create prototypes as well as functional end-use products directly from CAD data without any specific tooling requirement. Fused deposition modeling (FDM) is one of the widely used additive manufacturing techniques used to create functional end-use polymer parts that are comparable with injection-molded parts. FDM-printed parts have applications in various fields such as automobile, aerospace, medical, electronics, etc. However, the application of FDM parts is greatly affected by their poor mechanical properties. Proper selection of the process parameters can enhance the mechanical performance of the printed parts. In the present study, an experimental investigation has been carried out to study the mechanical performance of the printed parts with respect to the process variables. Three process variables, viz. raster angle, raster width, and layer height, have been varied to understand their effect on tensile strength. Further, the effect of the process variables on the fractured surface has also been investigated.
Keywords: 3D printing, fused deposition modeling, layer height, raster angle, raster width, tensile strength
Procedia PDF Downloads 198
3530 Trajectory Optimization for Autonomous Deep Space Missions
Authors: Anne Schattel, Mitja Echim, Christof Büskens
Abstract:
Trajectory planning for deep space missions has become a recent topic of great interest. Flying to space objects like asteroids provides two main opportunities: one is to find rare earth elements, the other to gain scientific knowledge of the origin of the world. Due to the enormous spatial distances, such explorer missions have to be performed unmanned and autonomously. The mathematical field of optimization and optimal control can be used to realize autonomous missions while protecting resources and making them safer. The resulting algorithms may be applied to other, earth-bound applications as well, e.g. deep sea navigation and autonomous driving. The project KaNaRiA ('Kognitionsbasierte, autonome Navigation am Beispiel des Ressourcenabbaus im All'; cognition-based autonomous navigation on the example of resource mining in space) investigates the possibilities of cognitive autonomous navigation on the example of an asteroid mining mission, including the cruise phase and approach as well as the asteroid rendezvous, landing, and surface exploration. To verify and test all methods, an interactive, real-time capable simulation using virtual reality is developed under KaNaRiA. This paper focuses on the specific challenge of guidance during the cruise phase of the spacecraft, i.e. trajectory optimization and optimal control, including first solutions and results. In principle, there exist two ways to solve optimal control problems (OCPs), the so-called indirect and direct methods. The indirect methods have been studied for several decades, and their usage requires advanced skills in optimal control theory. The main idea of direct approaches, also known as transcription techniques, is to transform the infinite-dimensional OCP into a finite-dimensional non-linear optimization problem (NLP) via discretization of states and controls. These direct methods are applied in this paper. The resulting high-dimensional NLP with constraints can be solved efficiently by special NLP methods, e.g. sequential quadratic programming (SQP) or interior point (IP) methods. The movement of the spacecraft due to gravitational influences of the sun and other planets, as well as the thrust commands, is described through ordinary differential equations (ODEs). The competing mission aims of short flight times and low energy consumption are considered by using a multi-criteria objective function. The resulting non-linear high-dimensional optimization problems are solved using the software package WORHP ('We Optimize Really Huge Problems'), a software routine combining SQP at an outer level and IP to solve underlying quadratic subproblems. An application-adapted model of impulsive thrusting, as well as a model of an electrically powered spacecraft propulsion system, is introduced. Different priorities and possibilities of a space mission regarding energy cost and flight time duration are investigated by choosing different weighting factors for the multi-criteria objective function. Varying mission trajectories are analyzed and compared, both aiming at different destination asteroids and using different propulsion systems. For the transcription, the robust method of full discretization is used. The results strengthen the need for trajectory optimization as a foundation for autonomous decision making during deep space missions.
Simultaneously, they show the enormous increase in possibilities for flight maneuvers enabled by being able to consider different and opposing mission objectives.
Keywords: deep space navigation, guidance, multi-objective, non-linear optimization, optimal control, trajectory planning
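A minimal sketch of the full-discretization transcription on a deliberately simple stand-in problem: a double-integrator "spacecraft" must reach a target state with minimum control effort. States and controls at all grid points become NLP variables, the dynamics appear as equality (defect) constraints, and SciPy's SLSQP plays the role of the SQP solver (WORHP in the paper):

```python
import numpy as np
from scipy.optimize import minimize

N, h = 30, 1.0 / 30                    # grid points, step size
nx, nu = 2, 1                          # state (pos, vel), control (accel)

def unpack(z):
    x = z[:N * nx].reshape(N, nx)
    u = z[N * nx:].reshape(N - 1, nu)
    return x, u

def objective(z):                      # minimize control energy
    _, u = unpack(z)
    return h * np.sum(u**2)

def defects(z):                        # explicit-Euler dynamics constraints
    x, u = unpack(z)
    d = []
    for k in range(N - 1):
        f = np.array([x[k, 1], u[k, 0]])          # xdot = (vel, accel)
        d.append(x[k + 1] - (x[k] + h * f))
    return np.concatenate(d)

def boundary(z):
    x, _ = unpack(z)
    return np.concatenate([x[0] - [0.0, 0.0],     # start at rest at 0
                           x[-1] - [1.0, 0.0]])   # arrive at rest at 1

z0 = 0.1 * np.ones(N * nx + (N - 1) * nu)
res = minimize(objective, z0, method="SLSQP",
               constraints=[{"type": "eq", "fun": defects},
                            {"type": "eq", "fun": boundary}])
x_opt, u_opt = unpack(res.x)
print("success:", res.success, "effort:", round(res.fun, 4))
```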
Procedia PDF Downloads 412
3529 A Construction Scheduling Model by Applying Pedestrian and Vehicle Simulation
Authors: Akhmad F. K. Khitam, Yi Tai, Hsin-Yun Lee
Abstract:
In modern construction management research, the goals of scheduling are not only to finish the project within the limited duration but also to reduce the impact on people and the environment. Especially for the impact on pedestrians and vehicles, the considerable social cost should be estimated in the total performance of a construction project. However, the site environment differs greatly between projects, and these interactions affect the requirements and goals of scheduling. It is difficult for schedule planners to quantify these interactions. Therefore, this study uses 3D dynamic simulation technology to plan the schedules of construction engineering projects that affect the current space users (i.e., pedestrians and vehicles). The proposed model can help the project manager find the optimal schedule to minimize the inconvenience brought to the space users. In addition, a roadwork project and a building renovation project were analyzed to reflect the practical situation of engineering and operations. This study then integrates appropriate optimization algorithms and computer technology to establish a decision support model. The proposed model can generate a near-optimal schedule solution for project planners.
Keywords: scheduling, simulation, optimization, pedestrian and vehicle behavior
Procedia PDF Downloads 142
3528 A Monocular Measurement for 3D Objects Based on Distance Area Number and New Minimize Projection Error Optimization Algorithms
Authors: Feixiang Zhao, Shuangcheng Jia, Qian Li
Abstract:
High-precision measurement of a target's position and size is one of the hotspots in the field of vision inspection. This paper proposes a three-dimensional object positioning and measurement method using a monocular camera and GPS, namely the Distance Area Number-New Minimize Projection Error (DAN-NMPE) method. Our algorithm contains two parts: DAN and NMPE; specifically, DAN is a picture-sequence algorithm, and NMPE is a projection-error minimization algorithm, which greatly improves the measurement accuracy of the target's position and size. Comprehensive experiments validate the effectiveness of our proposed method on a self-made traffic sign dataset. The results show that, with the laser point cloud as the ground truth, the size and position errors of the traffic signs measured by this method are ±5% and 0.48 ± 0.3 m, respectively. In addition, we also compared it with the current mainstream method, which uses a monocular camera to locate and measure traffic signs. DAN-NMPE attains significant improvements compared to existing state-of-the-art methods, improving the measurement accuracy of size and position by 50% and 15.8%, respectively.
Keywords: monocular camera, GPS, positioning, measurement
Procedia PDF Downloads 144
3527 Corporate Social Responsibility (CSR) and Energy Efficiency: Empirical Evidence from the Manufacturing Sector of India
Authors: Baikunthanath Sahoo, Santosh Kumar Sahu, Krishna Malakar
Abstract:
With the rise of global environmental sustainability and green business management, business research has moved towards corporate social responsibility (CSR). In addition to international and national treaties, businesses have also started pursuing environmental protection and energy efficiency through CSR as part of business strategy in response to climate change. Considering the ambitious emission reduction target and rapid economic development of India, this study is an attempt to explore the effect of CSR on the energy efficiency management of manufacturing firms in India. Using firm-level data, a panel fixed effects model shows that the CSR dummy variable negatively influences energy intensity; technically, CSR firms are energy efficient. The results demonstrate that, in the presence of CSR, all the production economic variables are significant. The results also show that environmental expenditure does not improve energy efficiency, which might be because very few firms are motivated to make such expenditure and it is not common to all sectors. The interaction-effect model confirms that, without considering the CSR dummy as an intervening variable, only firms manufacturing chemicals and chemical products and those manufacturing pharmaceutical, medicinal chemical, and botanical products have low energy intensity, but after considering CSR in their business practices, firms in all six sub-sectors become energy efficient. The empirical results also validate that firms continuously engaged in CSR activities are highly energy efficient. This is an important motivational factor for firms to become economically and environmentally sustainable in the corporate world. This analysis would help business practitioners know how to manage today's profitability and tomorrow's sustainability to achieve a comparative advantage in an emerging market economy. The paper concludes that reducing energy consumption as part of firms' social responsibility to care for the environment will require the collaborative efforts of business, society, and policy bodies.
Keywords: CSR, energy efficiency, Indian manufacturing sector, business strategy
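A sketch of the fixed-effects specification described above, on synthetic data; the variable names and coefficients are placeholders, not the study's dataset:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_firms, n_years = 50, 8
df = pd.DataFrame({
    "firm": np.repeat(np.arange(n_firms), n_years),
    "year": np.tile(np.arange(2010, 2010 + n_years), n_firms),
})
df["csr"] = (rng.random(len(df)) < 0.4).astype(int)
df["log_size"] = rng.normal(5, 1, len(df))
df["log_energy_intensity"] = (0.5 - 0.15 * df["csr"] - 0.1 * df["log_size"]
                              + rng.normal(0, 0.2, len(df)))

# firm and year dummies absorb time-invariant heterogeneity;
# standard errors are clustered by firm
fe = smf.ols("log_energy_intensity ~ csr + log_size + C(firm) + C(year)",
             data=df).fit(cov_type="cluster", cov_kwds={"groups": df["firm"]})
print(fe.params[["csr", "log_size"]])   # expect a negative CSR coefficient
```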
Procedia PDF Downloads 85
3526 Optimization of Traffic Agent Allocation for Minimizing Bus Rapid Transit Cost on Simplified Jakarta Network
Authors: Gloria Patricia Manurung
Abstract:
The Jakarta Bus Rapid Transit (BRT) system, established in 2009 to reduce private vehicle usage and ease the rush-hour gridlock throughout the Greater Jakarta area, has failed to achieve its purpose. With gradually increasing private vehicle ownership and road space reduced by BRT lane construction, private vehicle users intuitively invade the exclusive BRT lanes, creating local traffic along the BRT network. The cost of invaded BRT lanes becomes the same as that of the road network, making the BRT, which is supposed to be the main public transportation in the city, unreliable. Efforts have been expended to guard critical lanes and prevent the invasion by allocating traffic agents at several intersections, leading to improved congestion levels along the lanes. Given a set number of traffic agents, this study uses an analytical approach to find the best deployment strategy for traffic agents on a simplified Jakarta road network that minimizes the BRT link cost, which is expected to lead to an improvement in the BRT system's time reliability. A user-equilibrium traffic assignment model is used to reproduce the origin-destination demand flow on the network, and the optimum solution can conventionally be obtained with a brute-force algorithm. This method's main constraint is that the traffic assignment simulation time escalates exponentially with increases in the number of agents and the network size. Our proposed metaheuristic and heuristic algorithms exhibit a linear increase in simulation time and result in a minimized BRT cost approaching that of the brute-force optimization. Further analysis of the overall network link cost should be performed to see the impact of traffic agent deployment on the network system.
Keywords: traffic assignment, user equilibrium, greedy algorithm, optimization
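A sketch of the greedy heuristic referenced in the keywords: place agents one by one at the intersection with the largest marginal reduction in total BRT cost. The cost oracle below is a toy stand-in for the user-equilibrium assignment run; all numbers are assumptions:

```python
import itertools

intersections = ["I1", "I2", "I3", "I4", "I5"]

def brt_cost(guarded):
    # toy stand-in: each guarded intersection cuts invasion delay, with
    # diminishing returns for some adjacent pairs (placeholder numbers)
    base = 100.0
    saving = {"I1": 18, "I2": 12, "I3": 25, "I4": 9, "I5": 15}
    overlap = {frozenset(("I2", "I3")): 6, frozenset(("I1", "I5")): 4}
    total = sum(saving[i] for i in guarded)
    total -= sum(v for pair, v in overlap.items() if pair <= set(guarded))
    return base - total

def greedy_allocate(n_agents):
    guarded = set()
    for _ in range(n_agents):
        best = min((i for i in intersections if i not in guarded),
                   key=lambda i: brt_cost(guarded | {i}))
        guarded.add(best)
    return guarded, brt_cost(guarded)

print(greedy_allocate(3))
# compare against brute force for small instances:
print(min(((set(c), brt_cost(set(c)))
           for c in itertools.combinations(intersections, 3)),
          key=lambda t: t[1]))
```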
Procedia PDF Downloads 232
3525 Optimization-Based Design Improvement of Synchronizer in Transmission System for Efficient Vehicle Performance
Authors: Sanyka Banerjee, Saikat Nandi, P. K. Dan
Abstract:
The synchronizer, as an integral part of the gearbox, is a key element in the automotive transmission system. The performance of the synchronizer affects transmission efficiency and driving comfort. The synchronizing mechanism, as a major component of the transmission system, must be capable of preventing vibration and noise in the gears. Gear-shifting efficiency improvement, with the aim of achieving smooth, quick, and energy-efficient power transmission, remains a challenge for the automotive industry. The performance of the synchronizer is dependent on the features and characteristics of its sub-components, and therefore an analysis of the contribution of such characteristics is necessary. An important exercise involved is to identify all such characteristics or factors associated with the modeling and analysis; for this purpose, the literature was reviewed rather extensively to study the mathematical models formulated from them. It has been observed that certain factors are rather common across models; however, there are a few factors which have been specifically selected for individual models, as reported. In order to obtain a more realistic model, an attempt has been made here to identify and assimilate practically all possible factors which may be considered in formulating the model more comprehensively. A simulation study for such an analysis, formulated as a block model, has been carried out in a reliable environment like MATLAB. Lower synchronization time is desirable, and hence it has been considered here as the output factor in the simulation modeling for evaluating transmission efficiency. An improved synchronizer model requires optimized values of the sub-component design parameters. A parametric optimization utilizing Taguchi's design-of-experiments-based response data and their analysis has been carried out for this purpose. The effectiveness of the optimized parameters for improved synchronizer performance has been validated by the simulation study of the synchronizer block model with the improved parameter values as inputs, showing better transmission efficiency and driver comfort.
Keywords: design of experiments, modeling, parametric optimization, simulation, synchronizer
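A sketch of the Taguchi response analysis underlying the parametric optimization: smaller-the-better S/N ratios for synchronization time over an L9 orthogonal array, with factor main effects. The three factors, their levels, and the measured times are illustrative assumptions:

```python
import numpy as np

# L9 orthogonal array: 3 factors at 3 levels (rows = experiments)
L9 = np.array([[0, 0, 0], [0, 1, 1], [0, 2, 2],
               [1, 0, 1], [1, 1, 2], [1, 2, 0],
               [2, 0, 2], [2, 1, 0], [2, 2, 1]])
sync_time = np.array([0.42, 0.39, 0.45, 0.37, 0.41, 0.40,
                      0.36, 0.44, 0.38])          # seconds per trial

sn = -10.0 * np.log10(sync_time**2)   # smaller-the-better S/N ratio

for f, name in enumerate(["cone angle", "friction coeff", "gear inertia"]):
    effects = [sn[L9[:, f] == lvl].mean() for lvl in range(3)]
    best = int(np.argmax(effects))    # highest S/N = lowest sync time
    print(f"{name}: mean S/N per level {np.round(effects, 2)}, "
          f"best level {best + 1}")
```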
Procedia PDF Downloads 315
3524 Isolation and Identification of Biosurfactant Producing Microorganism for Bioaugmentation
Authors: Karthick Gopalan, Selvamohan Thankiah
Abstract:
Biosurfactants are lipid compounds produced by microbes; they are amphipathic molecules consisting of hydrophobic and hydrophilic domains. In the present investigation, ten bacterial strains were isolated from petroleum-oil-contaminated sites near a petrol bunk. The oil collapsing test and haemolytic activity were used as criteria for the primary isolation of biosurfactant-producing bacteria, and all ten strains gave positive results. Among the ten strains, two were observed to be good biosurfactant producers, and they utilize diesel as a sole carbon source. The optimization of biosurfactant production by the bacteria isolated from the petroleum-oil-contaminated sites was carried out using different parameters such as temperature (20ºC, 25ºC, 30ºC, 37ºC and 45ºC), pH (5, 6, 7, 8 & 9), and nitrogen sources (ammonium chloride, ammonium carbonate and sodium nitrate). The biosurfactants produced by the bacteria were extracted, dried, and quantified. As a result of the parameter optimization, the most suitable values for the production of the largest amount of biosurfactant by the isolated bacterial species were observed to be 30ºC (0.543 g/l) at pH 7 (0.537 g/l) with ammonium nitrate (0.431 g/l) as the nitrogen source.
Keywords: isolation and identification, biosurfactant, microorganism, bioaugmentation
Procedia PDF Downloads 351
3523 Aerodynamic Design of Axisymmetric Supersonic Nozzle Used by an Optimization Algorithm
Authors: Mohammad Mojtahedpoor
Abstract:
This paper studies a method for the optimal design of supersonic nozzles, capable of producing viscous axisymmetric nozzles with the desired outlet flow quality. In this method, the divergent section of the nozzle is optimized first. The initial divergent contour is designed through the method of characteristics, with a suitable boundary layer added to the inviscid contour. After that, a proper grid is generated and the flow is simulated numerically with the AUSM+ method, using the operating boundary conditions. Finally, the solution outputs are investigated and optimized. The numerical method has been validated against experimental results. Also, in order to evaluate the effectiveness of the present method, the nozzles were compared with those of previous studies. The comparisons show that the nozzles obtained through this method are appreciably better in some respects, such as flow uniformity, the size of the boundary layer, and the obtained axial length of the nozzle. The design of the convergent section affects flow uniformity through changes in its axial length and inlet diameter. The results show that increasing the length of the convergent section improves the outlet flow uniformity.
Keywords: nozzle, supersonic, optimization, characteristic method, CFD
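A sketch of the isentropic sizing relation that underlies divergent-nozzle design: solving the area-Mach relation for the supersonic exit Mach number; gamma and the area ratio are example values, not the paper's design point:

```python
import numpy as np
from scipy.optimize import brentq

def area_ratio(M, gamma=1.4):
    """A/A* from the isentropic area-Mach relation."""
    t = (2.0 / (gamma + 1.0)) * (1.0 + 0.5 * (gamma - 1.0) * M**2)
    return t ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))) / M

target = 4.0                         # desired A_exit / A_throat
M_exit = brentq(lambda M: area_ratio(M) - target, 1.01, 10.0)
print(f"exit Mach for A/A* = {target}: {M_exit:.3f}")   # ~2.94
```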
Procedia PDF Downloads 201