Search results for: product optimization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6780

4650 Optimization Of Biogas Production Using Co-digestion Feedstocks Via Anaerobic Technology

Authors: E Tolufase

Abstract:

The demand, high cost, and health implications of using energy derived from hydrocarbon compounds have necessitated the continuous search for alternative sources of energy. The world energy market faces several challenges: depletion of fossil fuel reserves, population explosion, lack of energy security, and economic and urbanization growth; in Nigeria, some rural areas still depend largely on wood, charcoal, kerosene, and petrol, among others, as their sources of energy. The need to overcome these shortfalls in energy supply and demand, together with the risks of global climate change due to greenhouse gas emissions and other pollutants from fossil fuel combustion, has brought much attention to efficiently harnessing renewable energy sources. Among the renewable energy resources, biogas is a very promising clean energy technology for power production, vehicle, and domestic usage. Therefore, optimization of biogas yield and quality is imperative. Hence, this study investigated the yield and quality of biogas using low-cost bio-digesters and a combination of various feedstocks, referred to as co-digestion. A batch (discontinuous) bio-digester type was used because it was cheap, easy, plausible, and appropriate for the different substrates used to obtain the desired results. Three substrates were used: cow dung, chicken droppings, and lemon grass, digested in five separate 21-litre digesters, A, B, C, D, and E, and the gas collection system was designed using locally available materials. For single digestion, cow dung, chicken droppings, and lemon grass were loaded in bio-digesters A, B, and C, respectively; the three substrates were co-digested in digester D in the mixing ratio 7:1:2 and in digester E in the ratio 5:3:2. The respective feedstock materials were collected locally, digested, and analyzed in accordance with standard procedures. They were pre-fermented for a period of 10 days before being introduced into the digesters and then digested for a retention period of 28 days. The physicochemical parameters, namely pressure, temperature, pH, volume of the gas collection system, and volume of biogas produced, were all closely monitored and recorded daily. The values of pH and temperature ranged over 6.0-8.0 and 22°C-35°C, respectively. For the single substrates, bio-digester A (cow dung only) produced biogas of total volume 0.1607 m³ (average of 0.0054 m³ daily), while B (chicken droppings) produced 0.1722 m³ (average of 0.0057 m³ daily) and C (lemon grass) produced 0.1035 m³ (average of 0.0035 m³ daily). For the co-digested substrates, bio-digester D produced a total of 0.2007 m³ (average of 0.0067 m³ daily) and bio-digester E produced 0.1991 m³ (average of 0.0066 m³ daily). It is evident from the results that combining different substrates gave higher yields than a single feedstock and that the mixing ratio also played a role in the yield improvement: bio-digesters D and E contained the same substrates mixed in different ratios, but a higher yield was observed in D with a mixing ratio of 7:1:2 than in E with a ratio of 5:3:2. Therefore, co-digestion of substrates and mixing proportions are important factors for biogas production optimization.
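
As a quick illustration of the comparison reported above, a short Python sketch (not part of the original study) tabulates the reported totals and computes the relative gain of the best co-digestion mix over the best single substrate:

```python
# Reported total biogas volumes (m^3) over the retention period, from the abstract.
totals = {
    "A (cow dung)": 0.1607,
    "B (chicken droppings)": 0.1722,
    "C (lemon grass)": 0.1035,
    "D (co-digested 7:1:2)": 0.2007,
    "E (co-digested 5:3:2)": 0.1991,
}

singles = ["A (cow dung)", "B (chicken droppings)", "C (lemon grass)"]
best_single = max(totals[k] for k in singles)

# Relative improvement of the best co-digestion mix over the best single substrate.
gain = (totals["D (co-digested 7:1:2)"] / best_single - 1) * 100
print(f"Co-digestion (D) outperforms the best single substrate by {gain:.1f}%")
```

Run as-is, this prints a gain of about 16.6%, consistent with the authors' conclusion that co-digestion outperforms single-substrate digestion.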

Keywords: anaerobic, batch, biogas, biodigester, digestion, fermentation, optimization

Procedia PDF Downloads 31
4649 Monitoring the Drying and Grinding Process during Production of Celitement through a NIR-Spectroscopy Based Approach

Authors: Carolin Lutz, Jörg Matthes, Patrick Waibel, Ulrich Precht, Krassimir Garbev, Günter Beuchle, Uwe Schweike, Peter Stemmermann, Hubert B. Keller

Abstract:

Online measurement of product quality is a challenging task in cement production, especially in the production of Celitement, a novel environmentally friendly hydraulic binder. The mineralogy and chemical composition of clinker in ordinary Portland cement production are measured by X-ray diffraction (XRD) and X-ray fluorescence (XRF), where only crystalline constituents can be detected. However, only a small part of the Celitement components can be measured via XRD, because most constituents have an amorphous structure. This paper describes the development of algorithms suitable for the online monitoring of the final processing step of Celitement based on NIR data. For calibration, intermediate products were dried at different temperatures and ground for variable durations. The products were analyzed using XRD and thermogravimetric analyses together with NIR spectroscopy to investigate the dependency between the drying and milling processes on the one side and the NIR signal on the other. As a result, different characteristic parameters have been defined. A short overview of the Celitement process and the challenging tasks of the online measurement and evaluation of product quality will be presented. Subsequently, methods for the systematic development of near-infrared calibration models and the determination of the final calibration model will be introduced. The application of the model to experimental data illustrates that NIR spectroscopy allows for a quick and sufficiently exact determination of crucial process parameters.
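
As an illustration of the calibration-model step described above, the following Python sketch fits a partial least squares (PLS) regression, a common choice for NIR calibration, and scores it by cross-validation. The spectra, reference values, and component count are random placeholders, not the Celitement data:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Placeholder data: rows are NIR spectra, y is a reference property per sample
# (e.g., a value obtained from XRD or thermogravimetric analysis).
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 400))      # 60 samples x 400 wavelength channels
y = rng.normal(size=60)             # reference values

pls = PLSRegression(n_components=5)                 # latent variables to tune
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()  # cross-validated predictions

rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))          # root mean squared error of CV
print(f"RMSECV: {rmsecv:.3f}")
```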

Keywords: calibration model, Celitement, cementitious material, NIR spectroscopy

Procedia PDF Downloads 503
4648 Fermented Fruit and Vegetable Discard as a Source of Feeding Ingredients and Functional Additives

Authors: Jone Ibarruri, Mikel Manso, Marta Cebrián

Abstract:

A high amount of food is lost or discarded in the world every year. In addition, in recent decades an increasing demand for new, alternative, and sustainable sources of protein and other valuable compounds has been observed in the food and feed sectors; therefore, the use of food by-products as nutrients for these purposes is very interesting from the environmental and economic point of view. However, the direct use of discarded fruit and vegetables, which present in general a low protein content, is not interesting as a feed ingredient except as a source of fiber for ruminants. Especially in the case of aquaculture, several alternatives to the use of fish meal and other vegetable protein sources have been extensively explored due to the scarcity of fish stocks and the unsustainability of fishing for these purposes. Fish mortality is also of great concern in this sector, as this problem greatly reduces its economic feasibility; so the development of new functional and natural ingredients that could reduce the need for vaccination is also of great interest. In this work, several fermentation tests were performed at lab scale using a selected mixture of fruit and vegetable discards from a wholesale market located in the Basque Country, to increase their protein content and also to produce bioactive extracts that could be used as additives in aquaculture. Fruit and vegetable mixtures (60/40, w/w) were centrifuged to reduce humidity and crushed to a 2-5 mm particle size. Samples were inoculated with a selected Rhizopus oryzae strain and fermented for 7 days under controlled conditions (humidity between 65 and 75%, 28°C) in Petri plates (120 mm) in triplicate. The results obtained indicated that the final fermented product presented a twofold protein content (from 13 to 28% d.w.). The fermented product was further processed to determine its possible functionality as a feed additive. Extraction tests were carried out to obtain an ethanolic extract (60:40 ethanol:water, v/v) and a remaining biomass that could also find applications in the food or feed sectors. The extract presented a polyphenol content of about 27 mg GAE/g d.w. with an antioxidant activity of 8.4 mg TEAC/g d.w. The remaining biomass is mainly composed of fiber (51%), protein (24%), and fat (10%). The extracts also presented antibacterial activity according to the results obtained in agar diffusion and Minimum Inhibitory Concentration (MIC) tests against several food and fish pathogen strains. In vitro digestibility was also assessed to obtain preliminary information about the expected effect of the extraction procedure on the digestibility of the fermented product. First results indicated that the remaining biomass after extraction does not seem to improve digestibility in comparison to the initial fermented product. These preliminary results show that fermented fruit and vegetables can be a useful source of functional ingredients for aquaculture applications and a substitute for other protein sources in the feed sector. Further validation will also be carried out through in vivo tests with trout and bass.

Keywords: fungal solid state fermentation, protein increase, functional extracts, feed ingredients

Procedia PDF Downloads 65
4647 Optimization of Fused Deposition Modeling 3D Printing Process via Preprocess Calibration Routine Using Low-Cost Thermal Sensing

Authors: Raz Flieshman, Adam Michael Altenbuchner, Jörg Krüger

Abstract:

This paper presents an approach to optimizing the Fused Deposition Modeling (FDM) 3D printing process through a preprocess calibration routine of printing parameters. The core of this method involves the use of a low-cost thermal sensor capable of measuring temperatures within the range of -20 to 500 degrees Celsius for detailed process observation. The calibration process is conducted by printing a predetermined path while varying the process parameters through machine instructions (g-code). This enables the extraction of critical thermal, dimensional, and surface properties along the printed path. The calibration routine utilizes computer vision models to extract features and metrics from the thermal images, including temperature distribution, layer adhesion quality, surface roughness, and dimensional accuracy and consistency. These extracted properties are then analyzed to optimize the process parameters to achieve the desired qualities of the printed material. A significant benefit of this calibration method is its potential to create printing parameter profiles for new polymer and composite materials, thereby enhancing the versatility and application range of FDM 3D printing. The proposed method demonstrates significant potential in enhancing the precision and reliability of FDM 3D printing, making it a valuable contribution to the field of additive manufacturing.
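
A simplified sketch of the feature-extraction step: given one thermal frame, segment the (hottest) printed path and compute temperature-distribution metrics. The frame is synthetic and the segmentation rule is a stand-in; a real pipeline would read frames from the thermal sensor and use the computer vision models mentioned above:

```python
import numpy as np

# Synthetic stand-in for one thermal frame in degrees Celsius; the real sensor
# covers roughly -20 to 500 degrees.
frame = np.random.default_rng(1).uniform(20, 240, size=(120, 160))

# Crude segmentation assumption: the printed path is the hottest 5% of pixels.
mask = frame > np.percentile(frame, 95)
path_temps = frame[mask]

features = {
    "t_mean": float(path_temps.mean()),   # average path temperature
    "t_std": float(path_temps.std()),     # spread -> cooling uniformity
    "t_max": float(path_temps.max()),
    "hot_area_px": int(mask.sum()),       # extent of the hot zone
}
print(features)
```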

Keywords: FDM 3D printing, preprocess calibration, thermal sensor, process optimization, additive manufacturing, computer vision, material profiles

Procedia PDF Downloads 47
4646 Fluid-Structure Interaction Study of Fluid Flow past Marine Turbine Blade Designed by Using Blade Element Theory and Momentum Theory

Authors: Abu Afree Andalib, M. Mezbah Uddin, M. Rafiur Rahman, M. Abir Hossain, Rajia Sultana Kamol

Abstract:

This paper deals with the analysis of flow past a marine turbine blade designed using blade element theory and momentum theory for use in the field of renewable energy. The designed blade is analyzed for various parameters using the FSI module of Ansys. Computational Fluid Dynamics (CFD) is used to study the fluid flow past the blade and other fluidic phenomena such as lift, drag, pressure differentials, and energy dissipation in water. The Finite Element Analysis (FEA) module of Ansys was used to analyze structural parameters such as stress and stress density, localization points, deflection, and force propagation. A fine mesh is considered in every case for greater accuracy of results, within the limits of the available computational power. The relevance of design, search, and optimization with respect to complex fluid flow and structural modeling is considered and analyzed. The relevance of design and optimization with respect to complex fluids for minimum drag force is analyzed as well, using the Ansys Adjoint Solver module. A graphical comparison of the above-mentioned parameters using CFD and FEA, and subsequently the FSI technique, is illustrated, and significant conformity between the results was found.
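
For readers unfamiliar with the design method, the following is a compact sketch of the classic blade element momentum (BEM) fixed-point iteration for a single blade section; the inflow data, pitch, and airfoil polar are illustrative placeholders, not the blade studied in the paper:

```python
import math

# Illustrative section data for a marine current turbine (placeholders).
V, omega, r = 2.0, 2.5, 1.2      # inflow speed (m/s), rotor speed (rad/s), radius (m)
B, chord = 3, 0.25               # blade count, chord (m)
sigma = B * chord / (2 * math.pi * r)    # local solidity

def polar(alpha):
    """Toy airfoil polar: thin-airfoil lift slope, constant drag (assumption)."""
    return 2 * math.pi * alpha, 0.01     # Cl, Cd

a, ap = 0.0, 0.0                 # axial and tangential induction factors
for _ in range(200):
    phi = math.atan2((1 - a) * V, (1 + ap) * omega * r)   # inflow angle
    alpha = phi - math.radians(2.0)      # section pitch/twist assumed at 2 degrees
    cl, cd = polar(alpha)
    cn = cl * math.cos(phi) + cd * math.sin(phi)          # normal force coeff.
    ct = cl * math.sin(phi) - cd * math.cos(phi)          # tangential force coeff.
    a_new = 1.0 / (4 * math.sin(phi) ** 2 / (sigma * cn) + 1)
    ap_new = 1.0 / (4 * math.sin(phi) * math.cos(phi) / (sigma * ct) - 1)
    if abs(a_new - a) < 1e-9 and abs(ap_new - ap) < 1e-9:
        break
    a, ap = a_new, ap_new

print(f"a = {a:.4f}, a' = {ap:.4f}, inflow angle = {math.degrees(phi):.2f} deg")
```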

Keywords: blade element theory, computational fluid dynamics, finite element analysis, fluid-structure interaction, momentum theory

Procedia PDF Downloads 304
4645 Multi Response Optimization in Drilling Al6063/SiC/15% Metal Matrix Composite

Authors: Hari Singh, Abhishek Kamboj, Sudhir Kumar

Abstract:

This investigation proposes a grey-based Taguchi method to solve multi-response problems. The grey-based Taguchi method is based on Taguchi's design-of-experiments method and adopts Grey Relational Analysis (GRA) to transform multi-response problems into single-response problems. In this investigation, an attempt has been made to optimize the drilling process parameters considering weighted output response characteristics using grey relational analysis. The output response characteristics considered are surface roughness, burr height, and hole diameter error under the experimental conditions of cutting speed, feed rate, step angle, and cutting environment. The drilling experiments were conducted using an L27 orthogonal array. A combination of orthogonal array, design of experiments, and grey relational analysis was used to ascertain the best possible drilling process parameters that give minimum surface roughness, burr height, and hole diameter error. The results reveal that the combination of the Taguchi design of experiments and grey relational analysis improves the surface quality of the drilled hole.
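
A short sketch of the grey relational analysis step that collapses the three responses into a single grade (illustrative response values, equal weights, and the customary distinguishing coefficient of 0.5 assumed):

```python
import numpy as np

# Rows: experimental runs; columns: surface roughness, burr height,
# hole diameter error (all smaller-the-better; values are illustrative).
Y = np.array([
    [1.8, 0.30, 0.050],
    [2.4, 0.42, 0.080],
    [1.5, 0.25, 0.040],
    [2.0, 0.35, 0.065],
])

# Smaller-the-better normalization to [0, 1].
norm = (Y.max(axis=0) - Y) / (Y.max(axis=0) - Y.min(axis=0))

# Grey relational coefficients with distinguishing coefficient zeta = 0.5.
delta = 1.0 - norm                       # deviation from the ideal sequence
zeta = 0.5
grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# Grey relational grade: equally weighted mean across the three responses.
grade = grc.mean(axis=1)
print("grades:", np.round(grade, 3), "-> best run:", int(grade.argmax()) + 1)
```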

Keywords: metal matrix composite, drilling, optimization, step drill, surface roughness, burr height, hole diameter error

Procedia PDF Downloads 323
4644 Relay-Augmented Bottleneck Throughput Maximization for Correlated Data Routing: A Game Theoretic Perspective

Authors: Isra Elfatih Salih Edrees, Mehmet Serdar Ufuk Türeli

Abstract:

In this paper, an energy-aware method is presented, integrating energy-efficient relay-augmented techniques for correlated data routing with the goal of optimizing bottleneck throughput in wireless sensor networks. The system tackles the dual challenge of throughput optimization while considering sensor network energy consumption. A unique routing metric has been developed that enables throughput maximization while minimizing energy consumption by utilizing data correlation patterns. The paper introduces a game theoretic framework to address the NP-complete optimization problem inherent in throughput-maximizing, correlation-aware routing with energy limitations. By creating an algorithm that blends energy-aware route selection strategies with best response dynamics, this framework provides a local solution. The suggested technique considerably raises the bottleneck throughput for each source in the network while reducing energy consumption by choosing the best routes that strike a compromise between throughput enhancement and energy efficiency. Extensive numerical analyses verify the efficiency of the method. The outcomes demonstrate the significant decrease in energy consumption attained by the energy-efficient relay-augmented bottleneck throughput maximization technique, in addition to confirming the anticipated throughput benefits.
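
A toy illustration of the best response dynamics underlying the framework: each source repeatedly switches to the route that minimizes its own cost given the others' current choices, until no source wants to deviate. The congestion-style cost below is a stand-in for the paper's throughput/energy routing metric:

```python
# Three sources, each choosing one of two relay routes. A route's cost grows
# with the number of sources sharing it (a stand-in for the routing metric).
base_cost = [1.0, 1.4]          # intrinsic cost of route 0 and route 1
choice = [0, 0, 0]              # initial route choice of each source

def cost(route, choices, player):
    load = sum(1 for p, r in enumerate(choices) if r == route and p != player)
    return base_cost[route] * (1 + load)    # congestion penalty

changed = True
while changed:                   # iterate until a pure Nash equilibrium
    changed = False
    for p in range(3):
        best = min((0, 1), key=lambda r: cost(r, choice, p))
        if best != choice[p]:
            choice[p], changed = best, True

print("equilibrium routes:", choice)
```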

Keywords: correlated data aggregation, energy efficiency, game theory, relay-augmented routing, throughput maximization, wireless sensor networks

Procedia PDF Downloads 86
4643 Dog's Chest Homogeneous Phantom for Image Optimization

Authors: Maris Eugênia Dela Rosa, Ana Luiza Menegatti Pavan, Marcela De Oliveira, Diana Rodrigues De Pina, Luis Carlos Vulcano

Abstract:

In veterinary as well as in human medicine, radiological study is essential for a safe diagnosis in clinical practice; thus, the quality of the radiographic image is crucial. In recent years there has been an increasing substitution of screen-film image acquisition systems with computed radiography (CR) equipment without adequacy of the technical charts. Furthermore, carrying out a radiographic examination on a veterinary patient requires human assistance to restrain the animal, which can compromise image quality by increasing the dose to the animal and to occupationally exposed staff, and also increases costs to the institution. Image optimization procedures and the construction of radiographic techniques are performed with the use of homogeneous phantoms. In this study, we sought to develop a homogeneous phantom of the canine chest to be applied to the optimization of these images for the CR system. To build the simulator, a database was created with retrospective chest computed tomography (CT) images from the Veterinary Hospital of the Faculty of Veterinary Medicine and Animal Science - UNESP (FMVZ/Botucatu). Images were divided into four groups according to animal weight, employing the size classification proposed by Hoskins & Goldston. The thicknesses of biological tissues were quantified in 80 animals, separated into groups of 20 animals according to their weights: (S) Small - equal to or less than 9.0 kg, (M) Medium - between 9.0 and 23.0 kg, (L) Large - between 23.1 and 40.0 kg, and (G) Giant - over 40.1 kg. The mean weight for group (S) was 6.5±2.0 kg, (M) 15.0±5.0 kg, (L) 32.0±5.5 kg, and (G) 50.0±12.0 kg. An algorithm was developed in Matlab in order to classify and quantify the biological tissues present in the CT images and convert them into simulator materials. To classify the tissues present, membership functions were created from the retrospective CT scans according to the type of tissue (adipose, muscle, trabecular or cortical bone, and lung tissue). After conversion of the biological tissue thicknesses into equivalent material thicknesses (acrylic simulating soft tissues, bone tissues simulated by aluminum, and air for the lung), four different homogeneous phantoms were obtained, with (S) 5 cm of acrylic, 0.14 cm of aluminum, and 1.8 cm of air; (M) 8.7 cm of acrylic, 0.2 cm of aluminum, and 2.4 cm of air; (L) 10.6 cm of acrylic, 0.27 cm of aluminum, and 3.1 cm of air; and (G) 14.8 cm of acrylic, 0.33 cm of aluminum, and 3.8 cm of air. The developed canine homogeneous phantom is a practical tool that will be employed in future work to optimize veterinary X-ray procedures.
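
A simplified stand-in for the classification step (the study used Matlab membership functions; here crisp thresholds are used instead): Hounsfield unit cutoffs assign each CT voxel along a ray to a tissue class, and per-tissue thickness follows from the voxel size. The cutoffs and voxel size are illustrative assumptions, not the fitted values from the study:

```python
import numpy as np

def classify_hu(hu):
    """Map CT Hounsfield units to coarse tissue classes (illustrative cutoffs)."""
    out = np.full(hu.shape, "other", dtype=object)
    out[hu < -500] = "lung/air"
    out[(hu >= -150) & (hu < -30)] = "adipose"
    out[(hu >= 10) & (hu < 60)] = "muscle"
    out[hu >= 300] = "bone"
    return out

# Toy 1D sequence of HU values along a ray through the chest.
ray = np.array([-800, -700, -60, 25, 40, 500, 35, -90, -750])
classes = classify_hu(ray)

voxel_cm = 0.1     # assumed voxel length along the ray
for tissue in ("lung/air", "adipose", "muscle", "bone"):
    print(tissue, round((classes == tissue).sum() * voxel_cm, 2), "cm")
```

The summed thicknesses per class would then be converted to the equivalent acrylic, aluminum, and air layers, as described above.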

Keywords: radiation protection, phantom, veterinary radiology, computed radiography

Procedia PDF Downloads 419
4642 The Role of Metaheuristic Approaches in Engineering Problems

Authors: Ferzat Anka

Abstract:

Many types of problems can be solved using traditional analytical methods. However, these methods take a long time and cause inefficient use of resources. In particular, different approaches may be required for solving the complex and global engineering problems that we frequently encounter in real life. The bigger and more complex a problem, the harder it is to solve. Such problems are called NP-hard (non-deterministic polynomial-time hard) in the literature. The main reasons for recommending different metaheuristic algorithms for various problems are their use of simple concepts, simple mathematical equations and structures, and non-derivative mechanisms, their avoidance of local optima, and their fast convergence. They are also flexible, as they can be applied to different problems without very specific modifications. Thanks to these features, they can easily be embedded even in many hardware devices. Accordingly, this approach can also be used in trending application areas such as IoT, big data, and parallel structures. Indeed, metaheuristic approaches are algorithms that return near-optimal results for solving large-scale optimization problems. This study focuses on a new metaheuristic method that has been merged with a chaotic approach. It is based on chaos theory and helps the relevant algorithm to improve the diversity of the population and achieve fast convergence. The approach builds on the Chimp Optimization Algorithm (ChOA), a recently introduced nature-inspired metaheuristic. That algorithm identifies four types of chimpanzee group: attacker, barrier, chaser, and driver, and proposes a suitable mathematical model for them based on the various intelligence and sexual motivations of chimpanzees. However, it is less successful in convergence rate and in escaping the local-optimum trap when solving high-dimensional problems. Although it and some of its variants use strategies to overcome these problems, these are observed to be insufficient. Therefore, in this study, a newly expanded variant is described. In the algorithm, called Ex-ChOA, hybrid models are proposed for the position updates of search agents, and a dynamic switching mechanism is provided for transition phases. This flexible structure solves the slow convergence problem of ChOA and improves its accuracy in multidimensional problems; it thus aims for success in solving global, complex, and constrained problems. The main contributions of this study are: 1) it improves the accuracy and solves the slow convergence problem of ChOA; 2) it proposes new hybrid movement strategy models for the position updates of search agents; 3) it achieves success in solving global, complex, and constrained problems; 4) it provides a dynamic switching mechanism between phases. The performance of the Ex-ChOA algorithm is analyzed on a total of 8 benchmark functions, as well as 2 classical and constrained engineering problems. The proposed algorithm is compared with ChOA and several well-known variants (Weighted-ChOA, Enhanced-ChOA). In addition, the Improved Grey Wolf Optimizer (I-GWO) method is chosen for comparison since its working model is similar. The obtained results show that the proposed algorithm performs better than or equivalently to the compared algorithms.
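
The chaotic ingredient can be illustrated independently of the full Ex-ChOA algorithm: a logistic map produces a deterministic but non-repeating coefficient that replaces uniform random draws in a leader-following position update, which helps population diversity. This is a generic sketch, not the authors' exact update equations:

```python
import numpy as np

def logistic_map(x):
    """Chaotic logistic map at r = 4 (fully chaotic regime)."""
    return 4.0 * x * (1.0 - x)

def fitness(x):
    """Toy objective: the sphere function."""
    return float(np.sum(x ** 2))

rng = np.random.default_rng(2)
dim, pop = 5, 8
positions = rng.uniform(-10, 10, size=(pop, dim))
best = min(positions, key=fitness).copy()   # stand-in for the attacker/leader
chaos = 0.7                                 # seed in (0, 1), avoiding 0, 0.5, 1

for step in range(200):
    for i in range(pop):
        chaos = logistic_map(chaos)         # chaotic coefficient in (0, 1)
        positions[i] += chaos * (best - positions[i]) + 0.01 * rng.normal(size=dim)
        if fitness(positions[i]) < fitness(best):
            best = positions[i].copy()

print("best fitness:", round(fitness(best), 6))
```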

Keywords: optimization, metaheuristic, chimp optimization algorithm, engineering constrained problems

Procedia PDF Downloads 79
4641 Research on Public Space Optimization Strategies for Existing Settlements Based on Intergenerational Friendliness

Authors: Huanhuan Qiang, Sijia Jin

Abstract:

Population aging has become a global trend, and China has entered an aging society, implementing an active aging system focused on home- and community-based care. However, most urban communities where elderly people live face issues such as monotonous planning, unappealing landscapes, and inadequate aging infrastructure, which do not meet the requirements for active aging. Intergenerational friendliness and mutual assistance are key components in China's active aging policy framework; therefore, residential development should prioritize enhancing intergenerational friendliness. Residential and public spaces are central to community life and well-being, offering new and challenging venues to improve relationships among residents of different ages, and they are crucial for developing intergenerational communities with diverse generations and non-blood relationships. This paper takes the Maigaoqiao community in Nanjing, China, as a case study, examining intergenerational interactions in public spaces. Based on Maslow's hierarchy of needs and using time-geography analysis, it identifies the spatiotemporal behavior characteristics of intergenerational groups in outdoor activities. It then constructs an intergenerational-friendliness evaluation system and an IPA (importance-performance analysis) quadrant model for public spaces in residential areas. Lastly, it explores optimization strategies for public spaces to promote intergenerationally friendly interactions, focusing on five aspects: accessibility, safety, functionality, a sense of belonging, and interactivity.

Keywords: intergenerational friendliness, demand theory, spatiotemporal behavior, IPA analysis, existing residential public space

Procedia PDF Downloads 12
4640 Energetics of Photosynthesis with Respect to the Environment and Recently Reported New Balanced Chemical Equation

Authors: Suprit Pradhan, Sushil Pradhan

Abstract:

Photosynthesis is a physiological process whereby green plants prepare their food from carbon dioxide in the atmosphere and water absorbed from the soil in the presence of sunlight and chlorophyll. From this definition it is clear that four reactants (carbon dioxide, water, light, and chlorophyll) are essential for the process to proceed, and the product is a sugar or carbohydrate, ultimately stored as starch. The entire process has a 'Light Reaction' (photochemical) and a 'Dark Reaction' (biochemical). The biochemical reactions are very complicated, being catalysed by various enzymes, and the path of carbon is known as the 'Calvin Cycle' after the name of its discoverer. The overall reaction, which is now universally accepted, can be stated as follows: six molecules of carbon dioxide react with twelve molecules of water in the presence of chlorophyll and sunlight to give one molecule of sugar (carbohydrate), six molecules of water, and six molecules of oxygen evolved in gaseous form. This is the accepted equation, and it is chemically balanced. However, while teaching the subject the author came across a new balanced equation from among the students, who happened to be the daughter of the author. In the new balanced equation, seven water molecules can be written on the reactant side in place of twelve, and accordingly only one molecule of water is produced on the product side in place of six. The energetics of photosynthesis in relation to the environment and the newly reported balanced chemical equation are discussed in detail in this research paper presented at this international conference on energy, environmental and chemical engineering.
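
For clarity, the two balanced equations discussed above can be written out; both are stoichiometrically consistent, the second being the condensed form reported in this paper:

```latex
% Conventional balanced equation (twelve water molecules as reactant):
6\,\mathrm{CO_2} + 12\,\mathrm{H_2O}
  \;\xrightarrow{\text{light, chlorophyll}}\;
  \mathrm{C_6H_{12}O_6} + 6\,\mathrm{H_2O} + 6\,\mathrm{O_2}

% Newly reported condensed form (seven water molecules as reactant):
6\,\mathrm{CO_2} + 7\,\mathrm{H_2O}
  \;\xrightarrow{\text{light, chlorophyll}}\;
  \mathrm{C_6H_{12}O_6} + \mathrm{H_2O} + 6\,\mathrm{O_2}

% Balance check of the second form:
% C: 6 = 6;  H: 14 = 12 + 2;  O: 12 + 7 = 6 + 1 + 12.
```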

Keywords: biochemistry, enzyme, isotope, photosynthesis

Procedia PDF Downloads 512
4639 Development of an Asset Database to Enhance the Circular Business Models for the European Solar Industry: A Design Science Research Approach

Authors: Ässia Boukhatmi, Roger Nyffenegger

Abstract:

The expansion of solar energy as a means to address the climate crisis is undisputed, but the increasing number of new photovoltaic (PV) modules being put on the market is simultaneously leading to increased challenges in managing the growing waste stream. Many of the discarded modules are still fully functional but are often damaged by improper handling after disassembly or not properly tested to be considered for a second life. In addition, the collection rate for dismantled PV modules in several European countries is only a fraction of previous projections, partly due to the increased number of illegal exports. The underlying problem behind these market imperfections is insufficient data exchange between the different actors along the PV value chain, as well as the limited traceability of PV panels during their lifetime. As part of the Horizon 2020 project CIRCUSOL, an asset database prototype was developed to tackle the described problems. In an iterative process applying the design science research methodology, different business models, as well as the technical implementation of the database, were established and evaluated. To explore the requirements of different stakeholders for the development of the database, surveys and in-depth interviews were conducted with various representatives of the solar industry. The proposed database prototype maps the entire value chain of PV modules, beginning with the digital product passport, which provides information about the materials and components contained in every module. Product-related information can then be expanded with performance data of existing installations. This information forms the basis for the application of data analysis methods to forecast the appropriate end-of-life strategy, as well as the circular economy potential of PV modules, even before they arrive at the recycling facility. The database prototype could already be enriched with data from different data sources along the value chain. From a business model perspective, the database offers opportunities both in the area of reuse and with regard to the certification of sustainable modules. Here, participating actors have the opportunity to differentiate their business and exploit new revenue streams. Future research can apply this approach to further industry and product sectors and validate the database prototype in a practical context; it can also serve as a basis for standardization efforts to strengthen the circular economy.
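
To make the digital product passport idea concrete, here is a minimal sketch of what such a record might look like; the field names and the end-of-life heuristic are hypothetical, not the CIRCUSOL schema:

```python
from dataclasses import dataclass, field

@dataclass
class DigitalProductPassport:
    """Hypothetical PV module passport record (not the CIRCUSOL schema)."""
    module_id: str                  # unique, traceable module identifier
    manufacturer: str
    materials: dict                 # e.g. {"silicon_kg": 0.6, "silver_g": 9.0}
    nominal_power_w: float
    production_year: int
    performance_log: list = field(default_factory=list)   # (date, measured W)

    def reuse_candidate(self, threshold: float = 0.8) -> bool:
        """Crude second-life check: last measured output vs. nominal power."""
        if not self.performance_log:
            return False
        return self.performance_log[-1][1] / self.nominal_power_w >= threshold

passport = DigitalProductPassport("PV-0001", "ExampleCo",
                                  {"silicon_kg": 0.6}, 300.0, 2015)
passport.performance_log.append(("2024-06-01", 252.0))
print(passport.reuse_candidate())   # True -> route to reuse rather than recycling
```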

Keywords: business model, circular economy, database, design science research, solar industry

Procedia PDF Downloads 130
4638 Structural Damage Detection via Incomplete Model Data Using Output Data Only

Authors: Ahmed Noor Al-qayyim, Barlas Özden Çağlayan

Abstract:

Structural failure is caused mainly by damage that often occurs in structures. Many researchers focus on obtaining very efficient tools to detect damage in structures at an early stage. In the past decades, a subject that has received considerable attention in the literature is damage detection as determined by variations in the dynamic characteristics or response of structures. This study presents a new damage identification technique that detects the damage location for an incomplete structural system using output data only. The method indicates the damage based on free vibration test data by using the 'Two-Points Condensation (TPC) technique'. This method creates a set of matrices by reducing the structural system to two-degree-of-freedom systems. The current stiffness matrices are obtained from optimization of the equation of motion using the measured test data and are compared with the original (undamaged) stiffness matrices. High percentage changes in the matrices' coefficients indicate the location of the damage. The TPC technique is applied to experimental data from a simply supported steel beam model structure after inducing a thickness change in one element. Two cases are considered, and the method detects the damage and determines its location accurately in both. In addition, the results illustrate that these changes in the stiffness matrix can be a useful tool for continuous monitoring of structural safety using ambient vibration data. Furthermore, its efficiency proves that this technique can also be used for large structures.
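
The comparison at the heart of the TPC technique can be sketched in a few lines: given the identified (current) and baseline condensed 2x2 stiffness matrices, large relative changes in the coefficients flag damage. The matrices and the threshold below are illustrative only:

```python
import numpy as np

# Baseline (undamaged) and identified (current) condensed two-DOF stiffness
# matrices for one measurement pair; values are illustrative.
K_orig = np.array([[ 2.00e6, -1.00e6],
                   [-1.00e6,  2.00e6]])
K_curr = np.array([[ 1.60e6, -0.90e6],
                   [-0.90e6,  1.95e6]])

change_pct = 100.0 * np.abs(K_curr - K_orig) / np.abs(K_orig)
print(np.round(change_pct, 1))

# A change above a chosen threshold (assumption: 15%) marks the region
# associated with this condensation pair as a damage candidate.
if (change_pct > 15.0).any():
    print("damage indicated near this DOF pair")
```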

Keywords: damage detection, optimization, signals processing, structural health monitoring, two-points condensation

Procedia PDF Downloads 366
4637 First Order Moment Bounds on DMRL and IMRL Classes of Life Distributions

Authors: Debasis Sengupta, Sudipta Das

Abstract:

The class of life distributions with decreasing mean residual life (DMRL) is well known in the field of reliability modeling. It contains the IFR class of distributions and is contained in the NBUE class of distributions. While upper and lower bounds on the reliability function of aging classes such as IFR, IFRA, NBU, NBUE, and HNBUE have been discussed in the literature for a long time, there is no analogous result available for the DMRL class. We obtain upper and lower bounds for the reliability function of the DMRL class in terms of the first-order finite moment. The lower bound is obtained by showing that, for any fixed time, the minimization of the reliability function over the class of all DMRL distributions with a fixed mean is equivalent to its minimization over a smaller class of distributions of a special form; optimization over this restricted set can be done algebraically. Likewise, the maximization of the reliability function over the class of all DMRL distributions with a fixed mean turns out to be a parametric optimization problem over the class of DMRL distributions of a special form. The constructive proofs also establish that both the upper and lower bounds are sharp. Further, the DMRL upper bound coincides with the HNBUE upper bound, and the lower bound coincides with the IFR lower bound. We also obtain a pair of sharp upper and lower bounds for the reliability function when the distribution has increasing mean residual life (IMRL) and a fixed mean; this result is proved in a similar way. These inequalities fill a long-standing void in the literature on life distribution modeling.
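
As background for how such bounds arise, recall the mean residual life (MRL) and the standard identity expressing the survival function through it; the bounds of the paper come from optimizing this representation over all monotone MRL functions with a fixed mean:

```latex
% Mean residual life of a nonnegative lifetime X with survival function \bar F:
m(t) = \mathbb{E}[X - t \mid X > t], \qquad m(0) = \mu = \mathbb{E}[X].

% Standard identity recovering the survival function from the MRL:
\bar F(t) = \frac{\mu}{m(t)} \exp\!\left( - \int_0^t \frac{\mathrm{d}u}{m(u)} \right),
\qquad t \ge 0.

% DMRL: m nonincreasing; IMRL: m nondecreasing. The sharp bounds follow from
% extremizing \bar F(t) over all such m with m(0) = \mu fixed.
```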

Keywords: DMRL, IMRL, reliability bounds, hazard functions

Procedia PDF Downloads 397
4636 Optimization of Fermentation Conditions for Extracellular Production of the Oncolytic Enzyme, L-Asparaginase, by New Subsp. Streptomyces Rochei Subsp. Chromatogenes NEAE-K Using Response Surface Methodology under Solid State Fermentation

Authors: Noura El-Ahmady El-Naggar

Abstract:

L-asparaginase is an important enzyme used as a therapeutic agent in combination therapy with other drugs in the treatment of acute lymphoblastic leukemia in children. An L-asparaginase-producing actinomycete strain, NEAE-K, was isolated from a soil sample and identified, on the basis of morphological, cultural, physiological, and biochemical properties together with its 16S rDNA sequence, as the new subsp. Streptomyces rochei subsp. chromatogenes NEAE-K; the sequencing product (1532 bp) was deposited in the GenBank database under accession number KJ200343. The study was conducted to screen parameters affecting the production of L-asparaginase by Streptomyces rochei subsp. chromatogenes NEAE-K under solid state fermentation using the Plackett-Burman experimental design. Sixteen different independent variables, including incubation time, moisture content, inoculum size, temperature, pH, soybean meal+wheat bran, dextrose, fructose, L-asparagine, yeast extract, KNO3, K2HPO4, MgSO4.7H2O, NaCl, FeSO4.7H2O, and CaCl2, together with three dummy variables, were screened in a Plackett-Burman experimental design of 20 trials. The most significant independent variables affecting enzyme production (dextrose, L-asparagine, and K2HPO4) were further optimized by the central composite design. As a result, a medium of the following formula is optimal for producing extracellular L-asparaginase by Streptomyces rochei subsp. chromatogenes NEAE-K under solid state fermentation: g/L (soybean meal+wheat bran 15, dextrose 3, fructose 4, L-asparagine 8, yeast extract 2, KNO3 1, K2HPO4 2, MgSO4.7H2O 0.5, NaCl 0.1, FeSO4.7H2O 0.02, CaCl2 0.01), incubation time 7 days, moisture content 50%, inoculum size 3 mL, temperature 30°C, pH 8.5.
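
For readers wishing to reproduce the screening layout, the sketch below generates a 20-run Plackett-Burman design and ranks factor effects; it assumes the pyDOE2 package is available, and the response values are placeholders for the measured enzyme activities:

```python
import numpy as np
import pyDOE2   # assumed available: pip install pyDOE2

factors = ["incubation time", "moisture content", "inoculum size", "temperature",
           "pH", "soybean meal+wheat bran", "dextrose", "fructose", "L-asparagine",
           "yeast extract", "KNO3", "K2HPO4", "MgSO4.7H2O", "NaCl",
           "FeSO4.7H2O", "CaCl2", "dummy1", "dummy2", "dummy3"]

design = pyDOE2.pbdesign(len(factors))   # coded -1 (low) / +1 (high) levels
print(design.shape)                      # expected: (20, 19) -> 20 trials

# Placeholder responses; in the study these are L-asparaginase activities.
response = np.random.default_rng(3).normal(size=design.shape[0])

# Main effect of each factor: mean response at +1 minus mean at -1.
effects = (design.T @ response) / (design.shape[0] / 2)
for name, eff in sorted(zip(factors, effects), key=lambda t: -abs(t[1]))[:3]:
    print(name, round(float(eff), 3))
```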

Keywords: Streptomyces rochei subsp. chromatogenes NEAE-K, 16S rRNA, identification, solid state fermentation, L-asparaginase production, Plackett-Burman design, central composite design

Procedia PDF Downloads 409
4635 Genetic Algorithm and Multi Criteria Decision Making Approach for Compressive Sensing Based Direction of Arrival Estimation

Authors: Ekin Nurbaş

Abstract:

One of the essential challenges in array signal processing, which has drawn enormous research interest over the past several decades, is estimating the direction of arrival (DOA) of plane waves impinging on an array of sensors. In recent years, Compressive Sensing-based DoA estimation methods have been proposed by researchers, and it has been discovered that Compressive Sensing (CS)-based algorithms achieve significant performance for DoA estimation even in scenarios where there are multiple coherent sources. On the other hand, the Genetic Algorithm, a method that provides a solution strategy inspired by natural selection, has been used in sparse representation problems in recent years and provides significant improvements in performance. With all of this in consideration, this paper proposes a method that combines the Genetic Algorithm (GA) and Multi-Criteria Decision Making (MCDM) approaches for Direction of Arrival (DoA) estimation in the Compressive Sensing (CS) framework. In this method, we generate a multi-objective optimization problem by splitting the Compressive Sensing algorithm into its norm-minimization and reconstruction-loss-minimization parts. With the help of the Genetic Algorithm, multiple non-dominated solutions are obtained for the defined multi-objective optimization problem. Among the Pareto-frontier solutions, the final solution is obtained with multiple MCDM methods. Moreover, the performance of the proposed method is compared with the CS-based methods in the literature.
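
The two-objective split can be written down directly: for a snapshot vector y and a steering dictionary A over a grid of candidate angles, each candidate sparse spectrum x is scored by its l1 norm (sparsity) and its reconstruction loss, and the GA keeps the non-dominated trade-offs. A minimal sketch of the objectives and the Pareto filter (not the full GA/MCDM pipeline):

```python
import numpy as np

def objectives(x, A, y):
    """The two CS objectives: l1 sparsity and squared reconstruction loss."""
    return np.linalg.norm(x, 1), float(np.linalg.norm(y - A @ x) ** 2)

def nondominated(scores):
    """Indices of Pareto-optimal points when both objectives are minimized."""
    keep = []
    for i, p in enumerate(scores):
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in scores)
        if not dominated:
            keep.append(i)
    return keep

rng = np.random.default_rng(4)
A = rng.normal(size=(8, 30))             # 8 sensors x 30 candidate angles (toy)
y = A @ (2.0 * np.eye(30)[3])            # a single source at grid index 3
pop = [rng.normal(scale=0.5, size=30) for _ in range(50)]   # stand-in population

scores = [objectives(x, A, y) for x in pop]
print("non-dominated individuals:", nondominated(scores))
```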

Keywords: genetic algorithm, direction of arrival estimation, multi criteria decision making, compressive sensing

Procedia PDF Downloads 149
4634 Neural Network Supervisory Proportional-Integral-Derivative Control of the Pressurized Water Reactor Core Power Load Following Operation

Authors: Derjew Ayele Ejigu, Houde Song, Xiaojing Liu

Abstract:

This work presents a particle swarm optimization-trained neural network (PSO-NN) supervisory proportional-integral-derivative (PID) control method to monitor pressurized water reactor (PWR) core power for safe operation. The proposed control approach is implemented on the transfer function of the PWR core, which is computed from the state-space model. The PWR core state-space model is designed from the neutronics, thermal-hydraulics, and reactivity models using perturbation around the equilibrium value. The proposed control approach computes the control rod speed to maneuver the core power to track the reference in a closed-loop scheme. The particle swarm optimization (PSO) algorithm is used to train the neural network (NN) and to tune the PID simultaneously. The controller performance is examined using the integral absolute error, integral time absolute error, integral square error, and integral time square error functions, and the stability of the system is analyzed using the Bode diagram. The simulation results indicate that the controller shows satisfactory performance in controlling and tracking the load power effectively and smoothly compared to the PSO-PID control technique. This study will benefit the design of supervisory controllers for control applications in nuclear engineering research.
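
To illustrate the simultaneous tuning loop, the sketch below runs a plain PSO over PID gains against a generic first-order plant (the PWR core transfer function is not reproduced here), minimizing the integral square error of a unit step response:

```python
import numpy as np

def ise(gains, dt=0.01, T=5.0):
    """Integral square error of a PID loop around a toy plant dy/dt = -y + u."""
    kp, ki, kd = gains
    y, integ, prev_err, cost = 0.0, 0.0, 1.0, 0.0
    for _ in range(int(T / dt)):
        err = 1.0 - y                      # unit step reference
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += dt * (-y + u)                 # explicit Euler step of the plant
        prev_err = err
        cost += err * err * dt
    return cost

rng = np.random.default_rng(5)
n = 20                                     # particles
pos = rng.uniform(0, 5, size=(n, 3)); vel = np.zeros((n, 3))
pbest = pos.copy(); pcost = np.array([ise(p) for p in pos])
g = pbest[pcost.argmin()].copy()

for _ in range(40):                        # standard PSO velocity/position update
    r1, r2 = rng.random((n, 3)), rng.random((n, 3))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = np.clip(pos + vel, 0, 10)
    cost = np.array([ise(p) for p in pos])
    better = cost < pcost
    pbest[better], pcost[better] = pos[better], cost[better]
    g = pbest[pcost.argmin()].copy()

print("tuned Kp, Ki, Kd:", np.round(g, 3), " ISE:", round(float(pcost.min()), 4))
```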

Keywords: machine learning, neural network, pressurized water reactor, supervisory controller

Procedia PDF Downloads 159
4633 A Recommender System for Job Seekers to Show up Companies Based on Their Psychometric Preferences and Company Sentiment Scores

Authors: A. Ashraff

Abstract:

The increasing importance of the web as a medium for electronic and business transactions has served as a catalyst, or rather a driving force, for the introduction and implementation of recommender systems. Recommender systems play a major role in processing and analyzing thousands of data rows or reviews and help humans make a purchase decision for a product or service. They also have the ability to predict whether a particular user would rate a product or service, based on the user's behavioral profile. At present, recommender systems are being used extensively in every domain known to us; they are said to be ubiquitous. However, in the field of recruitment, they are not being utilized extensively. Recent statistics show an increase in staff turnover, which has negatively impacted both the organization and the employee, the reasons being company culture, working flexibility (work-from-home opportunity), lack of learning advancement, and pay scale. Further investigation revealed that there is a lack of guidance or support to help a job seeker find the company that will suit him best, and though information about companies is available, job seekers cannot read all the reviews by themselves and reach an analytical decision. In this paper, we propose an approach that studies the available review data on IT companies (scoring the reviews based on user review sentiments) and gathers information on job seekers, including their psychometric evaluations, and then presents the job seeker with useful outputs on which company is most suitable for him. The theoretical approach, the algorithmic approach, and the importance of such a system are discussed in this paper.
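
A toy sketch of the matching idea: average each company's review sentiments into a score, represent both the seeker's psychometric preferences and the companies on shared dimensions, and rank companies by a blend of profile similarity and sentiment. All names, scores, and blend weights are invented for illustration:

```python
import numpy as np

# Shared preference dimensions (invented): culture, flexibility, learning, pay.
companies = {
    "AlphaSoft": {"profile": [0.9, 0.4, 0.8, 0.6], "sentiments": [0.8, 0.6, 0.9]},
    "BetaWorks": {"profile": [0.3, 0.9, 0.5, 0.8], "sentiments": [0.4, 0.5]},
}
seeker = np.array([0.8, 0.5, 0.9, 0.5])    # psychometric preference vector

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

ranking = []
for name, data in companies.items():
    sim = cosine(seeker, np.array(data["profile"]))
    sentiment = float(np.mean(data["sentiments"]))       # company sentiment score
    ranking.append((0.6 * sim + 0.4 * sentiment, name))  # blend weights assumed

for score, name in sorted(ranking, reverse=True):
    print(f"{name}: {score:.3f}")
```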

Keywords: psychometric tests, recommender systems, sentiment analysis, hybrid recommender systems

Procedia PDF Downloads 110
4632 Reducing the Frequency of Flooding Accompanied by Low pH Wastewater in the 100/200 Unit of the Phosphate Fertilizer 1 Plant by Implementing the 3R Program (Reduce, Reuse and Recycle)

Authors: Pradipta Risang Ratna Sambawa, Driya Herseta, Mahendra Fajri Nugraha

Abstract:

In 2020, PT Petrokimia Gresik implemented a program to increase the ROP (Run of Pile) production rate at the Phosphate Fertilizer 1 plant, causing an increase in scrubbing water consumption in the 100/200 unit area. This increase in water consumption causes a higher discharge of wastewater, which can in turn cause local flooding, especially during the rainy season. The 100/200 area of the Phosphate Fertilizer 1 plant is close to the warehouse and is often a passing area for trucks transporting raw materials. This causes the pH of the wastewater to become acidic (at the worst point down to pH 1). The problems of flooding and exposure to acidic wastewater in the 100/200 area of the Phosphate Fertilizer 1 plant were then resolved by PT Petrokimia Gresik through wastewater optimization steps called the 3R program (Reduce, Reuse, and Recycle). The 3R program consists of a water consumption reduction program that considers the liquid/gas ratio in the scrubbing unit of the 100/200 Phosphate Fertilizer 1 plant, the creation of a wastewater interconnection line so that wastewater from unit 100/200 can be used as scrubbing water in the Phonska 1, Phonska 2, and Phonska 3 plants and in unit 300 of the Phosphate Fertilizer 1 plant, and an increase in scrubbing effectiveness through scrubbing effectiveness simulations. Through this series of wastewater optimization programs, PT Petrokimia Gresik has succeeded in reducing NaOH consumption for neutralization by up to 2,880 kg/day, equivalent to savings of up to 314,359.76 dollars/year, and reducing process water consumption by up to 600 m³/day, equivalent to savings of up to 63,739.62 dollars/year.

Keywords: fertilizer, phosphate fertilizer, wastewater, wastewater treatment, water management

Procedia PDF Downloads 30
4631 Simulation and Controller Tunning in a Photo-Bioreactor Applying by Taguchi Method

Authors: Hosein Ghahremani, MohammadReza Khoshchehre, Pejman Hakemi

Abstract:

This study involves numerical simulations of a vertical plate-type photo-bioreactor to investigate the performance of the microalga Spirulina, together with control and optimization of the parameters of a digital controller by the Taguchi method, carried out with MATLAB and the Qualitek-4 software. Since, in addition to parameters such as temperature, dissolved carbon dioxide, and biomass, new physical parameters such as light intensity and physiological conditions like photosynthetic efficiency and light inhibition are involved in the biological processes, control faces many challenges. Photo-bioreactors are efficient systems that not only facilitate the commercial production of microalgae as feed for aquaculture and as food supplements, but are also used as a possible platform for the production of active molecules such as antibiotics or innovative anti-tumor agents, for carbon dioxide removal, and for the removal of heavy metals from wastewater. A digital controller is designed to control the light of the bioreactor, and the microalgae growth rate and the carbon dioxide concentration inside the bioreactor are investigated. The optimal values of the controller parameters, obtained from S/N and ANOVA analysis in the Qualitek-4 software, were compared with those obtained by the reaction curve, Cohen-Coon, and Ziegler-Nichols methods. Based on the sum of squared errors obtained for each of the control methods mentioned, the Taguchi method was selected as the best method for controlling the light intensity of the photo-bioreactor. Compared to the listed control methods, this method showed higher stability and a shorter response time.
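
The classical reaction-curve tunings that the Taguchi result was compared against can be stated compactly: given a first-order-plus-dead-time fit (gain K, time constant tau, dead time L) of the process reaction curve, the textbook PID settings follow directly. The fit values below are illustrative:

```python
# First-order-plus-dead-time fit from the reaction curve (illustrative values).
K, tau, L = 1.2, 8.0, 1.5    # process gain, time constant, dead time

# Ziegler-Nichols open-loop (reaction curve) PID tuning rules.
zn = {"Kp": 1.2 * tau / (K * L), "Ti": 2.0 * L, "Td": 0.5 * L}

# Cohen-Coon PID tuning rules for the same fit.
cc = {
    "Kp": (tau / (K * L)) * (4.0 / 3.0 + L / (4.0 * tau)),
    "Ti": L * (32.0 + 6.0 * L / tau) / (13.0 + 8.0 * L / tau),
    "Td": 4.0 * L / (11.0 + 2.0 * L / tau),
}

for name, gains in (("Ziegler-Nichols", zn), ("Cohen-Coon", cc)):
    print(name, {k: round(v, 3) for k, v in gains.items()})
```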

Keywords: photo-bioreactor, control and optimization, light intensity, Taguchi method

Procedia PDF Downloads 396
4630 Supramolecular Chemistry and Packing of FAMEs in the Liquid Phase for Optimization of Combustion and Emission

Authors: Zeev Wiesman, Paula Berman, Nitzan Meiri, Charles Linder

Abstract:

Supramolecular chemistry refers to the domain of chemistry beyond that of molecules and focuses on chemical systems made up of a discrete number of assembled molecular subunits or components. The self-arrangement of biodiesel components is closely related to, and affects, their physical properties in combustion systems and emission. Due to technological difficulties, knowledge regarding the molecular packing of FAMEs (biodiesel) in the liquid phase is limited. Spectral tools such as X-ray and NMR are known to provide evidence related to molecular structure organization. Recently, it was reported by our research group that, using a 1H time-domain NMR methodology based on relaxation times and self-diffusion coefficients, FAME clusters with different mobilities can be accurately studied in the liquid phase. Head-to-head dimerization with a quasi-smectic cluster organization, based on molecular motion analysis, was clearly demonstrated. These findings about the assembly/packing of the FAME components are directly associated with the fluidity/viscosity of the biodiesel. Furthermore, they may provide information on the micro/nano-particles that are formed in the delivery and injection systems of various combustion systems (affected by thermodynamic conditions). Various parameters relevant to combustion, such as distillation/liquid-gas phase transition, cetane number/ignition delay, soot, and oxidation/NOx emission, may be predicted. These data may open the window for further optimization of FAME/diesel mixtures in terms of combustion and emission.

Keywords: supramolecular chemistry, FAMEs, liquid phase, fluidity, LF-NMR

Procedia PDF Downloads 341
4629 Production of Biocomposites Using Chars Obtained by Co-Pyrolysis of Olive Pomace with Plastic Wastes

Authors: Esra Yel, Tabriz Aslanov, Merve Sogancioglu, Suheyla Kocaman, Gulnare Ahmetli

Abstract:

The disposal of waste plastics has become a major worldwide environmental problem. Pyrolysis of waste plastics is one of the routes to waste minimization and recycling that has been gaining interest. In pyrolysis, the pyrolysed material is separated into gas, liquid (both are fuels), and solid (char) products. All fractions have utility and economic value depending upon their characteristics. The first objective of this study is to determine the co-pyrolysis product fractions of waste HDPE (high-density polyethylene) and LDPE (low-density polyethylene) with olive pomace (OP) and to determine the quality of the solid char product. Chars obtained from pyrolysis at 700 °C were used as an additive in biocomposite preparation. As the second objective, the effects of the char on biocomposite quality were investigated. Pyrolysis runs were performed at a temperature of 700 °C with a heating rate of 5 °C/min. Biocomposites were prepared by mixing the chars with bisphenol-F type epoxy resin at various wt%. Biocomposite properties were determined by measuring the electrical conductivity, surface hardness, Young's modulus, and tensile strength of the composites. The best electrical conductivity results were obtained with the HDPE-OP char. For the HDPE-OP and LDPE-OP chars, compared to neat epoxy, the tensile strength values of the composites increased by 102% and 78%, respectively, at a 10% char dose. The hardness measurements showed results similar to the tensile tests, since there is a correlation between hardness and tensile strength.

Keywords: biocomposite, char, olive pomace, pyrolysis

Procedia PDF Downloads 253
4628 The Impact of Vertical Velocity Parameter Conditions and Its Relationship with Weather Parameters in the Hail Event

Authors: Nadine Ayasha

Abstract:

Hail occurred in Sukabumi (August 23, 2020), Sekadau (August 22, 2020), and Bogor (September 23, 2020); in each case this extreme weather phenomenon occurred in the dry season. This study uses ERA5 reanalysis model data and aims to examine the impact of vertical velocity on hail occurrence in the dry season, as well as its relation to other weather parameters such as relative humidity, streamlines, and wind velocity. Moreover, HCAI product satellite data are used as supporting data for the convective cloud development analysis. Based on the graphs, contours, and Hovmöller vertical cross-sections from the ERA5 modeling, the vertical velocity values in the 925-300 mb layer in Sukabumi, Sekadau, and Bogor before the hail events ranged between -1.2 and -0.2, -1.5 and -0.2, and -1 and 0 Pa/s, respectively. A negative value indicates upward motion of the air mass, which triggers the convective cloud growth that produces hail. This is evidenced by the presence of cumulonimbus cloud in the HCAI product when the hail fell. Therefore, vertical velocity has a significant effect on hail events. In addition, the relative humidity in the 850-700 mb layer was quite wet, ranging from 80-90%. Meanwhile, the streamlines and wind velocity in the three regions show convergence with slowing wind speeds of 2-4 knots. These results show that the upward motion associated with the vertical velocity was sufficient to moisten the atmosphere and form the convergence needed for the growth of the convective clouds that produce hail in the dry season.

Keywords: hail, extreme weather, vertical velocity, relative humidity, streamline

Procedia PDF Downloads 161
4627 Meeting the Energy Balancing Needs in a Fully Renewable European Energy System: A Stochastic Portfolio Framework

Authors: Iulia E. Falcan

Abstract:

The transition of the European power sector towards a clean, renewable energy (RE) system faces the challenge of meeting power demand in times of low wind speed and low solar radiation, at a reasonable cost. This is likely to be achieved through a combination of 1) energy storage technologies, 2) development of the cross-border power grid, 3) installed overcapacity of RE, and 4) dispatchable power sources, such as biomass. This paper uses NASA-derived hourly data on weather patterns of sixteen European countries for the past twenty-five years, and load data from the European Network of Transmission System Operators for Electricity (ENTSO-E), to develop a stochastic optimization model. This model aims to understand the synergies between the four classes of technologies mentioned above and to determine the optimal configuration of the energy technologies portfolio. While this issue has been addressed before, it was done using deterministic models that extrapolated historic data on weather patterns and power demand and ignored the risk of an unbalanced grid, a risk stemming from both the supply and the demand side. This paper aims to explicitly account for the inherent uncertainty in the energy system transition. It articulates two levels of uncertainty: a) the inherent uncertainty in future weather patterns and b) the uncertainty of fully meeting power demand. The first level of uncertainty is addressed by developing probability distributions for future weather data, and thus expected power output from RE technologies, rather than assuming known future power output. The latter level of uncertainty is operationalized by introducing a Conditional Value at Risk (CVaR) constraint in the portfolio optimization problem. By setting the risk threshold at different levels (1%, 5%, and 10%), important insights are revealed regarding the synergies of the different energy technologies, i.e., the circumstances under which they behave as either complements or substitutes to each other. The paper concludes that allowing for uncertainty in expected power output, rather than extrapolating historic data, paints a more realistic picture and reveals important departures from the results of deterministic models. In addition, explicitly acknowledging the risk of an unbalanced grid, and assigning it different thresholds, reveals non-linearity in the cost functions of different technology portfolio configurations. This finding has significant implications for the design of the European energy mix.
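
The CVaR constraint is typically linearized with the Rockafellar-Uryasev formulation, which keeps the capacity problem a linear program over the weather scenarios. A minimal sketch with an invented scenario matrix, assuming scipy is available (per-scenario shortfall is demand minus weighted technology output):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(6)
S, T = 200, 4                            # scenarios; wind/solar/storage/biomass
G = rng.uniform(0.1, 1.0, size=(S, T))   # per-unit output per scenario (toy)
d = np.ones(S)                           # normalized demand per scenario
c = np.array([1.0, 0.8, 1.5, 2.0])       # per-unit capacity cost (invented)
alpha, cap = 0.95, 0.02                  # CVaR level and allowed shortfall

# Variables z = [w (T capacities), t (1 VaR proxy), u (S excess shortfalls)].
obj = np.concatenate([c, [0.0], np.zeros(S)])

# u_s >= d_s - G_s @ w - t  rewritten as  -G_s @ w - t - u_s <= -d_s
A1 = np.hstack([-G, -np.ones((S, 1)), -np.eye(S)])
b1 = -d
# CVaR constraint: t + sum(u) / ((1 - alpha) * S) <= cap
A2 = np.concatenate([np.zeros(T), [1.0], np.full(S, 1.0 / ((1 - alpha) * S))])
b2 = [cap]

res = linprog(obj, A_ub=np.vstack([A1, A2[None, :]]),
              b_ub=np.concatenate([b1, b2]),
              bounds=[(0, None)] * T + [(None, None)] + [(0, None)] * S)
print("capacities:", np.round(res.x[:T], 3), " cost:", round(res.fun, 3))
```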

Keywords: cross-border grid extension, energy storage technologies, energy system transition, stochastic portfolio optimization

Procedia PDF Downloads 171
4626 Research on the Function Optimization of China-Hungary Economic and Trade Cooperation Zone

Authors: Wenjuan Lu

Abstract:

China and Hungary have in recent years risen from a friendly and comprehensive cooperative relationship to a comprehensive strategic partnership, and the economic and trade relations between the two countries have developed smoothly. As an important country along the 'Belt and Road', Hungary has strong economic complementarity with China and unique advantages in receiving China's industrial transfer and supporting its economic transformation and development. The construction of the China-Hungary Economic and Trade Cooperation Zone, initiated with the 'Sino-Hungarian Borsod Industrial Zone' and the 'Hungarian Central European Trade and Logistics Cooperation Park', has promoted infrastructure construction, optimized production capacity, promoted industrial restructuring, and formed brand and agglomeration effects. While enhancing the influence of Chinese companies in the European market, it has also promoted economic development in Hungary and even in Central and Eastern Europe. However, as the China-Hungary Economic and Trade Cooperation Zone is still in its infancy, there are still shortcomings such as small scale, single function, and the lack of a prominent platform. In the future, based on the needs of China's cooperation with the '17+1' framework and of China-Hungary cooperation, and on the basis of appropriately expanding the scale and number of the economic and trade cooperation zones, priority should be given to optimizing and adjusting their functions: highlighting the differentiated functions of the trade zones, strengthening the multi-faceted cooperation of the economic and trade cooperation zones, and emphasizing their role as platforms for cooperation in information, capital, and services.

Keywords: ‘One Belt, One Road’ Initiative, China-Hungary economic and trade cooperation zone, function optimization, Central and Eastern Europe

Procedia PDF Downloads 182
4625 A User-Directed Approach to Optimization via Metaprogramming

Authors: Eashan Hatti

Abstract:

In software development, programmers often must make a choice between high-level programming and high-performance programs. High-level programming encourages the use of complex, pervasive abstractions. However, the use of these abstractions degrades performance; high performance demands that programs be low-level. In a compiler, the optimizer attempts to let the user have both. The optimizer takes high-level, abstract code as an input and produces low-level, performant code as an output. However, there is a problem with having the optimizer be a built-in part of the compiler. Domain-specific abstractions implemented as libraries are common in high-level languages. As a language’s library ecosystem grows, so does the number of abstractions that programmers will use. If these abstractions are to be performant, the optimizer must be extended with new optimizations to target them, or these abstractions must rely on existing general-purpose optimizations. The latter is often not as effective as needed. The former presents too significant an effort for the compiler developers, as they are the only ones who can extend the language with new optimizations. Thus, the language becomes more high-level, yet the optimizer – and, in turn, program performance – falls behind. Programmers are again confronted with a choice between high-level programming and high-performance programs. To investigate a potential solution to this problem, we developed Peridot, a prototype programming language. Peridot’s main contribution is that it enables library developers to easily extend the language with new optimizations themselves. This allows the optimization workload to be taken off the compiler developers’ hands and given to a much larger set of people who can specialize in each problem domain. Because of this, optimizations can be much more effective while also being much more numerous. To enable this, Peridot supports metaprogramming designed for implementing program transformations. The language is split into two fragments or “levels”, one for metaprogramming, the other for high-level general-purpose programming. The metaprogramming level supports logic programming. Peridot’s key idea is that optimizations are simply implemented as metaprograms. The meta level supports several specific features which make it particularly suited to implementing optimizers. For instance, metaprograms can automatically deduce equalities between the programs they are optimizing via unification, deal with variable binding declaratively via higher-order abstract syntax, and avoid the phase-ordering problem via non-determinism. We have found that this design centered around logic programming makes optimizers concise and easy to write compared to their equivalents in functional or imperative languages. Overall, implementing Peridot has shown that its design is a viable solution to the problem of writing code which is both high-level and performant.
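
The 'optimizations as metaprograms' idea can be illustrated outside Peridot with a tiny extensible term rewriter in Python (Peridot would express the same rules as logic programs; this is not Peridot syntax):

```python
# Library authors register rewrite rules ("metaprograms") that the optimizer
# applies bottom-up to an expression tree. Illustrative, not Peridot syntax.
from dataclasses import dataclass

@dataclass(frozen=True)
class Add:
    left: object
    right: object

@dataclass(frozen=True)
class Mul:
    left: object
    right: object

def fold_constants(e):
    """Rule 1: evaluate operations whose operands are both literals."""
    if isinstance(e, Add) and isinstance(e.left, int) and isinstance(e.right, int):
        return e.left + e.right
    if isinstance(e, Mul) and isinstance(e.left, int) and isinstance(e.right, int):
        return e.left * e.right
    return e

def mul_one(e):
    """Rule 2: x * 1 -> x, a domain-specific simplification."""
    return e.left if isinstance(e, Mul) and e.right == 1 else e

RULES = [fold_constants, mul_one]        # extensible by library developers

def optimize(e):
    if isinstance(e, (Add, Mul)):        # rewrite children first (bottom-up)
        e = type(e)(optimize(e.left), optimize(e.right))
    for rule in RULES:
        e = rule(e)
    return e

print(optimize(Mul(Add(2, 3), Mul("x", 1))))   # Mul(left=5, right='x')
```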

Keywords: optimization, metaprogramming, logic programming, abstraction

Procedia PDF Downloads 89
4624 Optimization of Lead Bioremediation by Marine Halomonas sp. ES015 Using Statistical Experimental Methods

Authors: Aliaa M. El-Borai, Ehab A. Beltagy, Eman E. Gadallah, Samy A. ElAssar

Abstract:

Bioremediation technology is now used for treatment instead of traditional metal removal methods. A strain isolated from Marsa Alam, Red Sea, Egypt, showed high resistance to high lead concentrations and was identified by 16S rRNA gene sequencing as Halomonas sp. ES015. Medium optimization was carried out using a Plackett-Burman design, and the most significant factors were yeast extract, casamino acid, and inoculum size. The medium optimized by the statistical design raised the removal efficiency from 84% to 99% at an initial lead concentration of 250 ppm. Moreover, a Box-Behnken experimental design was applied to study the relationship between yeast extract concentration, casamino acid concentration, and inoculum size. The optimized medium increased removal efficiency to 97% at an initial lead concentration of 500 ppm. Halomonas sp. ES015 cells immobilized on sponge cubes and grown on the optimized medium in a loop bioremediation column showed relatively constant lead removal efficiency when reused for six successive cycles, and the removal efficiency was not affected by changes in flow rate. Finally, the results of this research point to the feasibility of lead bioremediation by free or immobilized cells of Halomonas sp. ES015, in batch cultures as well as in semi-continuous cultures using column technology.
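To make the statistical workflow concrete, here is a minimal NumPy sketch of a three-factor Box-Behnken design with a fitted second-order response surface, mirroring the three significant factors named above. The coded design matrix is standard, but the response values are synthetic placeholders, not the study's measurements.

```python
import itertools
import numpy as np

def box_behnken_3(centres=3):
    """15-run Box-Behnken design for 3 factors in coded units (-1, 0, +1)."""
    runs = []
    for i, j in itertools.combinations(range(3), 2):
        for a, b in itertools.product((-1, 1), repeat=2):
            row = [0.0, 0.0, 0.0]
            row[i], row[j] = a, b
            runs.append(row)
    runs += [[0.0, 0.0, 0.0]] * centres      # replicated centre points
    return np.array(runs)

def quadratic_terms(X):
    """Model matrix: intercept, linear, interaction, and squared terms."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3,
                            x1**2, x2**2, x3**2])

# Coded factors: yeast extract, casamino acid, inoculum size.
X = box_behnken_3()
# Synthetic removal efficiencies (%), generated from a made-up surface
# purely to exercise the fit; real values would come from the assays.
rng = np.random.default_rng(0)
true_coef = np.array([90, 3, 2, 1.5, 0.5, -0.4, 0.3, -2.0, -1.5, -1.0])
y = quadratic_terms(X) @ true_coef + rng.normal(0, 0.5, len(X))

coef, *_ = np.linalg.lstsq(quadratic_terms(X), y, rcond=None)
print(np.round(coef, 2))   # fitted second-order model coefficients
```

The fitted quadratic surface is what would then be maximized, analytically or by a grid search over the coded region, to locate the optimal medium composition.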

Keywords: bioremediation, lead, Box-Behnken, Halomonas sp. ES015, loop bioremediation, Plackett-Burman

Procedia PDF Downloads 200
4623 Exploring the Impact of Dual Brand Image on Continuous Smartphone Usage Intention

Authors: Chiao-Chen Chang, Yang-Chieh Chin

Abstract:

The mobile phone is no longer confined to communication; in the case of smartphones, consumers are only willing to pay for products whose added value corresponds to their appetites, such as a wide range of applications, camera upgrades, and the appearance of the phone. Moreover, with the smartphone industry now in its maturity stage, the strategy manufacturers used to gain competitive advantage through hardware and software differentiation is no longer valid. This research therefore starts from brand image to examine whether consumers’ buying intention focuses on the smartphone brand or on the operating system; at the same time, perceived value and customer satisfaction are added between brand image and continuous usage intention to investigate the impact of these two facets on continuous usage intention. This study verifies the correlations, fit, and relationships between the variables that lie within the conceptual framework. The results of structural equation modeling show that brand image has a positive impact on continuous usage intention, so firms can affect consumer perceived value and customer satisfaction through the creation of brand image. They also show that the brand image of the smartphone and the brand image of the operating system each have a positive impact on customer perceived value and customer satisfaction. Furthermore, perceived value has a positive impact on satisfaction, and both satisfaction and perceived value positively affect continuous usage intention. Last but not least, the brand image of the smartphone has a more remarkable impact on customers than the brand image of the operating system. This study extends these results to management practice and suggests that manufacturers provide fine product design and hardware.
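For readers who want to reproduce this kind of analysis, the sketch below shows one way to specify such a structural model in Python with the semopy SEM package. The indicator names and the survey DataFrame are hypothetical placeholders; the path structure follows the relationships reported above.

```python
import pandas as pd
import semopy

# lavaan-style model description; indicator names (bi1, os1, ...) are
# invented placeholders for the survey items.
MODEL_DESC = """
# measurement model
PhoneImage   =~ bi1 + bi2 + bi3
OSImage      =~ os1 + os2 + os3
Value        =~ pv1 + pv2 + pv3
Satisfaction =~ sa1 + sa2 + sa3
Intention    =~ ci1 + ci2 + ci3

# structural paths implied by the abstract
Value        ~ PhoneImage + OSImage
Satisfaction ~ PhoneImage + OSImage + Value
Intention    ~ Satisfaction + Value + PhoneImage
"""

def fit_model(survey: pd.DataFrame) -> pd.DataFrame:
    """Fit the SEM to survey responses whose columns match the indicators."""
    model = semopy.Model(MODEL_DESC)
    model.fit(survey)
    return model.inspect()    # parameter estimates and p-values
```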

Keywords: smartphone, brand image, perceived value, continuous usage intention

Procedia PDF Downloads 201
4622 The Application and Relevance of Costing Techniques in Service-Oriented Business Organizations: A Review of the Activity-Based Costing (ABC) Technique

Authors: Udeh Nneka Evelyn

Abstract:

The shortcomings of traditional costing systems in terms of validity, accuracy, consistency, and relevance have increased the need for modern management accounting systems. Activity-Based Costing (ABC) can be used as a modern tool for planning, control, and decision making by management. Past studies on ABC systems have focused on manufacturing firms, leaving the literature on service firms comparatively scanty. This paper reviewed the application and relevance of the activity-based costing technique in service-oriented business organizations by employing a qualitative research method that relied heavily on a literature review of past and current relevant articles focusing on ABC. Findings suggest that ABC is not only appropriate for use in a manufacturing environment; it is also well suited to service organizations such as financial institutions, the healthcare industry, and government organizations. In fact, some banking and financial institutions have been applying the concept for years under other names. One of them is unit costing, which is used to calculate the cost of banking services by determining the cost and consumption of each unit of output of the functions required to deliver the service. In very basic terms, ABC may provide very good payback for businesses. Some of the benefits that relate directly to the financial services industry are: identification of the most profitable customers, more accurate product and service pricing, increased product profitability, and well-organized process costs.
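As a concrete illustration of the unit-costing idea mentioned above, the toy sketch below prices one unit of a banking service as the sum of the activities it consumes. The activity rates and the service "recipe" are invented for the example.

```python
# Cost per unit of each activity driver (invented figures).
ACTIVITY_RATE = {
    "teller_minute": 0.80,
    "document_check": 2.50,
    "system_transaction": 0.15,
}

def service_unit_cost(recipe):
    """Unit cost of a service = sum over activities of rate x usage."""
    return sum(ACTIVITY_RATE[activity] * qty
               for activity, qty in recipe.items())

# e.g. opening an account: 12 teller minutes, 3 document checks,
# 5 system transactions.
opening_cost = service_unit_cost(
    {"teller_minute": 12, "document_check": 3, "system_transaction": 5})
print(opening_cost)   # 12*0.80 + 3*2.50 + 5*0.15 = 17.85
```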

Keywords: business, costing, organizations, planning, techniques

Procedia PDF Downloads 244
4621 Heuristic Algorithms for the Time-Based Weapon-Target Assignment Problem

Authors: Hyun Seop Uhm, Yong Ho Choi, Ji Eun Kim, Young Hoon Lee

Abstract:

Weapon-target assignment (WTA) is the problem of assigning available launchers to appropriate targets in order to defend assets. Various algorithms for WTA have been developed over the past years for both static and dynamic environments (denoted SWTA and DWTA, respectively). Because the problem must be solved within an operationally relevant computation time, WTA has suffered from limited solution efficiency, and as a result SWTA and DWTA problems have been solved only for restricted battlefield situations. In this paper, the general situation under continuous time is considered as the Time-based Weapon-Target Assignment (TWTA) problem. TWTA is studied using a mixed integer programming model, and three heuristic algorithms are suggested: a decomposed opt-opt algorithm, a decomposed opt-greedy algorithm, and a greedy algorithm. Although the TWTA optimization model works inefficiently at large problem sizes, the decomposed opt-opt algorithm, based on linearization and decomposition, extracted efficient solutions in a reasonable computation time. Because the computation time of the scheduling part is too long for the optimization model alone, several greedy-based algorithms are proposed; these yield lower performance values than the decomposed opt-opt algorithm but require very little computation time. Hence, this paper proposes an improved method that applies decomposition to TWTA, from which more practical and effective methods can be developed for using TWTA on the battlefield.
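As an illustration of the greedy flavour of algorithm mentioned above (and only of that flavour: the authors' exact procedures and the TWTA time dimension are not reproduced here), the sketch below assigns each weapon to the target where it removes the most expected surviving value. All numbers are invented.

```python
def greedy_wta(value, p_kill):
    """Greedy static WTA: value[t] is target worth, p_kill[w][t] the
    probability that weapon w destroys target t."""
    survive = list(value)              # expected surviving value per target
    assignment = []
    for w, probs in enumerate(p_kill):
        # marginal gain of weapon w on target t is survive[t] * p_kill[w][t]
        best = max(range(len(survive)), key=lambda t: survive[t] * probs[t])
        assignment.append((w, best))
        survive[best] *= 1.0 - probs[best]   # target survives with prob 1-p
    return assignment, sum(survive)

# Invented scenario: 3 weapons, 3 targets.
value = [10.0, 6.0, 8.0]
p_kill = [[0.6, 0.3, 0.5],
          [0.4, 0.7, 0.2],
          [0.5, 0.5, 0.5]]
plan, leaked = greedy_wta(value, p_kill)
print(plan, round(leaked, 2))   # [(0, 0), (1, 1), (2, 2)] 9.8
```

This myopic rule is the trade of solution quality for speed that the abstract describes; the decomposed opt-opt approach would instead solve linearized subproblems exactly.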

Keywords: air and missile defense, weapon target assignment, mixed integer programming, piecewise linearization, decomposition algorithm, military operations research

Procedia PDF Downloads 337