Search results for: complex optimization method
23155 Implementation of Green Deal Policies and Targets in Energy System Optimization Models: The TEMOA-Europe Case
Authors: Daniele Lerede, Gianvito Colucci, Matteo Nicoli, Laura Savoldi
Abstract:
The European Green Deal is the first internationally agreed set of measures to counter climate change and environmental degradation. Besides the main target of reducing emissions by at least 55% by 2030, it sets the target of accompanying European countries through an energy transition to make the European Union a modern, resource-efficient, and competitive net-zero emissions economy by 2050, decoupling growth from the use of resources and ensuring a fair adaptation of all social categories to the transformation process. While the general framework of the Green Deal dates back to 2019, strategies and policies keep being developed to cope with recent circumstances and achievements. However, general long-term measures like the Circular Economy Action Plan, the proposals to shift from fossil natural gas to renewable and low-carbon gases, in particular biomethane and hydrogen, and the plan to end the sale of gasoline and diesel cars by 2035, will all have significant effects on the evolution of energy supply and demand across the next decades. The interactions between energy supply and demand over long-term time frames are usually assessed via energy system models to derive useful insights for policymaking and to address technological choices and research and development. TEMOA-Europe is a newly developed energy system optimization model instance based on the minimization of the total cost of the system under analysis, adopting a technologically integrated, detailed, and explicit formulation and considering the evolution of the system in partial equilibrium in competitive markets with perfect foresight. TEMOA-Europe is developed on the TEMOA platform, an open-source modeling framework fully implemented in Python, therefore ensuring third-party verification even on large and complex models.
TEMOA-Europe is based on a single-region representation of the European Union and EFTA countries on a time scale between 2005 and 2100, relying on a set of assumptions for socio-economic developments based on projections by the International Energy Outlook and a large technological dataset including 7 sectors: the upstream and power sectors for the production of all energy commodities and the end-use sectors, including industry, transport, residential, commercial and agriculture. TEMOA-Europe also includes an updated hydrogen module considering its production, storage, transportation, and utilization. Besides, it can rely on a wide set of innovative technologies, ranging from nuclear fusion and electricity plants equipped with CCS in the power sector to electrolysis-based steel production processes in the industrial sector – with a techno-economic characterization based on public literature – to produce insightful energy scenarios and especially to cope with the very long analyzed time scale. The aim of this work is to examine in detail the scheme of measures and policies for the realization of the purposes of the Green Deal and to transform them into a set of constraints and new socio-economic development pathways. Based on them, TEMOA-Europe will be used to produce and comparatively analyze scenarios to assess the consequences of Green Deal-related measures on the future evolution of the energy mix over the whole energy system in an economic optimization environment.
Keywords: European Green Deal, energy system optimization modeling, scenario analysis, TEMOA-Europe
Procedia PDF Downloads 104
23154 Comparison of Monte Carlo Simulations and Experimental Results for the Measurement of Complex DNA Damage Induced by Ionizing Radiations of Different Quality
Authors: Ifigeneia V. Mavragani, Zacharenia Nikitaki, George Kalantzis, George Iliakis, Alexandros G. Georgakilas
Abstract:
Complex DNA damage, consisting of a combination of DNA lesions such as Double Strand Breaks (DSBs) and non-DSB base lesions occurring in a small volume, is considered one of the most important biological endpoints regarding ionizing radiation (IR) exposure. Strong theoretical (Monte Carlo simulations) and experimental evidence suggests an increase in the complexity of DNA damage, and therefore in repair resistance, with increasing linear energy transfer (LET). Experimental detection of complex (clustered) DNA damage is often associated with technical deficiencies limiting its measurement, especially in cellular or tissue systems. Our groups have recently made significant improvements towards the identification of key parameters relating to the efficient detection of complex DSBs and non-DSBs in human cellular systems exposed to IR of varying quality (γ-, X-rays 0.3-1 keV/μm, α-particles 116 keV/μm and 36Ar ions 270 keV/μm). The induction and processing of DSB and non-DSB-oxidative clusters were measured using adaptations of immunofluorescence (γH2AX or 53BP1 foci staining as DSB probes, and the human repair enzymes OGG1 or APE1 as probes for oxidized purines and abasic sites, respectively). In the current study, Relative Biological Effectiveness (RBE) values for DSB and non-DSB induction have been measured in different human normal (FEP18-11-T1) and cancerous cell lines (MCF7, HepG2, A549, MO59K/J). The experimental results are compared to simulation data obtained using a validated microdosimetric fast Monte Carlo DNA Damage Simulation code (MCDS). Moreover, this simulation approach is implemented in two realistic clinical cases, i.e. prostate cancer treatment using X-rays generated by a linear accelerator and a pediatric osteosarcoma case using a 200.6 MeV proton pencil beam. RBE values for complex DNA damage induction are calculated for the tumor areas.
These results reveal a disparity between theory and experiment and underline the necessity for implementing highly precise and more efficient experimental and simulation approaches.
Keywords: complex DNA damage, DNA damage simulation, protons, radiotherapy
Procedia PDF Downloads 324
23153 Characterization of the State of Pollution by Nitrates in the Groundwater in Arid Zones: Case of Eloued District (South-East of Algeria)
Authors: Zair Nadje, Attoui Badra, Miloudi Abdelmonem
Abstract:
This study aims to assess sensitivity to nitrate pollution, monitor the temporal evolution of nitrate contents in groundwater using statistical models, and map their spatial distribution. The nitrate levels observed in the waters of the town of El-Oued differ from one aquifer to another. Indeed, the waters of the Quaternary aquifer are the richest in nitrates, with average annual contents varying from 6 mg/l to 85 mg/l, for an average of 37 mg/l. These levels are higher than the WHO standard (50 mg/l) for drinking water. In the waters of the Terminal Complex (CT) aquifer, the annual average nitrate levels vary from 14 mg/l to 37 mg/l, with an average of 18 mg/l. In the Terminal Complex, excessive nitrate levels are observed in the central localities of the study area. The spatial distribution of nitrates in the waters of the Quaternary aquifer shows that the majority of the catchment points of this aquifer are subject to nitrate pollution. This study shows that in the waters of the Terminal Complex aquifer, nitrate pollution evolves along two major axes. The first runs South-North, following the direction of underground flow. The second runs West-East, progressing towards the East zone. The temporal distribution of nitrate contents in the water of the Terminal Complex aquifer in the city of El-Oued showed that, over the decades, nitrate contents declined after an initial increase. This evolution of nitrate levels is linked to demographic growth and the rapid urbanization of the city of El-Oued.
Keywords: anthropogenic activities, groundwater, nitrates, pollution, arid zones, city of El-Oued, Algeria
Procedia PDF Downloads 54
23152 Design Optimization of Chevron Nozzles for Jet Noise Reduction
Authors: E. Manikandan, C. Chilambarasan, M. Sulthan Ariff Rahman, S. Kanagaraj, V. R. Sanal Kumar
Abstract:
The noise regulations around major airports and rocket launching stations, driven by environmental concern, have made jet noise a crucial problem in present-day aero-acoustics research. The three main acoustic sources in jet nozzles are aerodynamic noise, noise from craft systems, and engine and mechanical noise. Note that the majority of engine noise is due to the jet noise coming out of the exhaust nozzle. Previous studies reveal that the potential of chevron nozzles for aircraft engine noise reduction is promising, owing to the fact that jet noise continues to be the dominant noise component, especially during take-off. In this paper, parametric analytical studies have been carried out for optimizing the number of chevron lobes, the lobe length and tip shape, and the level of penetration of the chevrons into the flow, over a variety of flow conditions for various aerospace applications. The numerical studies have been carried out using a validated steady 3D density-based SST k-ω turbulence model with enhanced wall functions. In the numerical study, a fully implicit finite volume scheme of the compressible Navier–Stokes equations is employed. We inferred that the geometry optimization of an environmentally friendly chevron nozzle, with a suitable number of chevron lobes and aerodynamically efficient tip contours for facilitating silent exit flow, will enable a commendable sound reduction without much thrust penalty compared with conventional supersonic nozzles of the same area ratio.
Keywords: chevron nozzle, jet acoustic level, jet noise suppression, shape optimization of chevron nozzles
Procedia PDF Downloads 309
23151 The Influence of Culture on Manifestations of Animus
Authors: Anahit Khananyan
Abstract:
The results of long-term Jungian analysis with female clients from Eastern and Asian countries, which belong to collectivist cultures, are summarised in the article. The goal of the paper is to describe the cultural complex which was found by the author in the analysis of women of collectivist culture. It was named “the repression of the Animus”. Generally, C. G. Jung himself and the post-Jungians studied conditions caused by possession by the Animus. Conditions and cases of the repressed Animus, depending on the type of culture and cultural complexes, have, as far as we know, not been widely studied. C. G. Jung discovered and recognized the Animus as the second component of a pair of opposites of the psyche of women – femininity and the Animus. On the way of individuation, an awareness of the manifestations of the Animus plays an important role: understanding the differences between the negative and positive Animus, as well as between the Animus and the Shadow; then standing the tension of the presence of the pair of opposites – femininity and the Animus – accepting that tension, finding the balance between them, and reconciling these opposites. All of the above are steps towards the realization of the Animus, its release, and the healing of the psyche. In the paper, the author will share her experience of analyzing women of different collectivist cultures and her experience of recognizing the repressed Animus during the analysis. Also, she will describe some peculiarities of upbringing and cultural traditions which reflect the cultural complex of the repression of the Animus.
This complex is manifested in the traditions of girls' upbringing, in accordance with which an image of a woman with overly developed femininity and an absent or underdeveloped Animus is idealized and encouraged, as well as an evaluating attitude towards females, who have to correspond to this image and fulfill the role prescribed in this way in the family and society.
Keywords: analysis, cultural complex, animus, manifestation, culture
Procedia PDF Downloads 82
23150 Optimizing CNC Production Line Efficiency Using NSGA-II: Adaptive Layout and Operational Sequence for Enhanced Manufacturing Flexibility
Authors: Yi-Ling Chen, Dung-Ying Lin
Abstract:
In the manufacturing process, computer numerical control (CNC) machining plays a crucial role. CNC enables precise machinery control through computer programs, achieving automation in the production process and significantly enhancing production efficiency. However, traditional CNC production lines often require manual intervention for loading and unloading operations, which limits the production line's operational efficiency and production capacity. Additionally, existing CNC automation systems frequently lack sufficient intelligence and fail to achieve optimal configuration efficiency, resulting in the need for substantial time to reconfigure production lines when producing different products, thereby impacting overall production efficiency. Using the NSGA-II algorithm, we generate production line layout configurations that consider field constraints and select robotic arm specifications from an arm list. This allows us to calculate loading and unloading times for each job order, perform demand allocation, and assign processing sequences. The NSGA-II algorithm is further employed to determine the optimal processing sequence, with the aim of minimizing demand completion time and maximizing average machine utilization. These objectives are used to evaluate the performance of each layout, ultimately determining the optimal layout configuration. By employing this method, we enhance the configuration efficiency of CNC production lines and establish an adaptive capability that allows the production line to respond promptly to changes in demand. This minimizes production losses caused by the need to reconfigure the layout, ensuring that the CNC production line can maintain optimal efficiency even when adjustments are required due to fluctuating demands.
Keywords: evolutionary algorithms, multi-objective optimization, Pareto optimality, layout optimization, operations sequence
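The selection step of NSGA-II described above ranks candidate layouts by Pareto dominance over the two objectives (demand completion time, to be minimized, and average machine utilization, to be maximized). As a hedged illustration, not the authors' implementation, the non-dominated sorting at the heart of that ranking can be sketched in Python (utilization is negated so both objectives are minimized; the layout values are hypothetical):

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and
    strictly better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(objs):
    """Return fronts as lists of indices; front 0 is the Pareto set."""
    fronts, remaining = [], set(range(len(objs)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(objs[j], objs[i]) for j in remaining if j != i)]
        fronts.append(sorted(front))
        remaining -= set(front)
    return fronts

# Hypothetical (completion time, -utilization) pairs for four layouts:
layouts = [(10.0, -0.80), (12.0, -0.90), (11.0, -0.70), (14.0, -0.60)]
print(non_dominated_sort(layouts))  # → [[0, 1], [2], [3]]
```

Front 0 contains the trade-off layouts that no other layout beats on both objectives; NSGA-II then fills the next generation front by front.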
Procedia PDF Downloads 18
23149 Solving Directional Overcurrent Relay Coordination Problem Using Artificial Bees Colony
Authors: M. H. Hussain, I. Musirin, A. F. Abidin, S. R. A. Rahim
Abstract:
This paper presents the implementation of the Artificial Bees Colony (ABC) algorithm in solving the Directional OverCurrent Relays (DOCRs) coordination problem for near-end faults occurring in a fixed network topology. The coordination optimization of DOCRs is formulated as a linear programming (LP) problem. The objective function is introduced to minimize the operating time of the associated relay, which depends on the time multiplier setting. A conventional technique is taken for comparison purposes in order to highlight the superiority of the proposed approach. The proposed algorithm has been tested successfully on an 8-bus test system. The simulation results demonstrated that the ABC algorithm, which has been proved to have good search ability, is capable of dealing with constrained optimization problems.
Keywords: artificial bees colony, directional overcurrent relay coordination problem, relay settings, time multiplier setting
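For illustration only, the LP formulation named above can be reproduced on a toy two-relay instance with SciPy: with the pickup settings and fault currents fixed, each relay's operating time is linear in its time multiplier setting (TMS), and the backup relay must trip at least one coordination time interval (CTI) after the primary. The coefficients below are assumptions, not values from the 8-bus system:

```python
import numpy as np
from scipy.optimize import linprog

a = np.array([2.0, 3.0])   # seconds of operating time per unit TMS (hypothetical)
cti = 0.3                  # coordination time interval, seconds

# minimize total operating time a1*TMS1 + a2*TMS2
c = a
# coordination: a2*TMS2 - a1*TMS1 >= CTI  ->  a1*TMS1 - a2*TMS2 <= -CTI
A_ub = np.array([[a[0], -a[1]]])
b_ub = np.array([-cti])
bounds = [(0.05, 1.1)] * 2  # a common TMS range

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
tms = res.x  # primary relay sits at its lower TMS bound, backup just clears the CTI
```

The real problem adds one such constraint per primary/backup relay pair over all fault locations, but the structure is the same.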
Procedia PDF Downloads 328
23148 Performance Analysis and Multi-Objective Optimization of a Kalina Cycle for Low-Temperature Applications
Authors: Sadegh Sadeghi, Negar Shabani
Abstract:
From a thermal point of view, zeotropic mixtures are likely to be more efficient than azeotropic fluids in low-temperature thermodynamic cycles due to their suitable boiling characteristics. In this study, the performance of a low-temperature Kalina cycle with an R717/water working fluid, used in several existing power plants, is mathematically investigated. To analyze the behavior of the cycle, mass conservation, energy conservation, and exergy balance equations are presented. With regard to the similarity in molar mass of R717 (17.03 g/mol) and water (18.01 g/mol), there is no need to alter the size of Kalina system components such as the turbine and pump. To optimize the cycle energy and exergy efficiencies simultaneously, a constrained multi-objective optimization is carried out applying an Artificial Bee Colony algorithm. The main motivation behind using this algorithm lies in its robustness, reliability, remarkable precision, and high-speed convergence rate in dealing with complicated constrained multi-objective problems. Convergence rates of the algorithm for calculating the optimal energy and exergy efficiencies are presented. Subsequently, due to the importance of the exergy concept in Kalina cycles, the exergy destructions occurring in the components are computed. Finally, the impacts of pressure, temperature, mass fraction, and mass flow rate on the energy and exergy efficiencies are elaborately studied.
Keywords: artificial bee colony algorithm, binary zeotropic mixture, constrained multi-objective optimization, energy efficiency, exergy efficiency, Kalina cycle
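The two objective functions named above have compact first- and second-law definitions. A minimal sketch, with illustrative numbers rather than the paper's R717/water state-point data:

```python
def energy_efficiency(w_net, q_in):
    """First-law efficiency: net work output over heat input."""
    return w_net / q_in

def exergy_efficiency(w_net, q_in, t0, t_source):
    """Second-law efficiency: net work over the exergy of the heat
    input from a source at t_source with dead state t0 (kelvin)."""
    return w_net / (q_in * (1.0 - t0 / t_source))

# Illustrative low-temperature case (not the paper's data):
w, q = 12.0, 100.0        # net work and heat input, kW
t0, ts = 298.15, 393.15   # ambient and heat-source temperatures, K
print(energy_efficiency(w, q))           # → 0.12
print(exergy_efficiency(w, q, t0, ts))   # larger, since low-grade heat carries little exergy
```

The multi-objective optimizer trades these two numbers against each other over the cycle's pressure, temperature, and ammonia mass-fraction variables.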
Procedia PDF Downloads 152
23147 Characterization on Molecular Weight of Polyamic Acids Using GPC Coupled with Multiple Detectors
Authors: Mei Hong, Wei Liu, Xuemin Dai, Yanxiong Pan, Xiangling Ji
Abstract:
Polyamic acid (PAA) is the precursor of polyimide (PI) prepared by a two-step method; its molecular weight and molecular weight distribution not only play an important role during preparation and processing, but also influence the final performance of PI. However, precise characterization of the molecular weight of PAA is still a challenge because of the very complicated interactions in the solution system, including electrostatic interactions, hydrogen bonding, dipole-dipole interactions, etc. Thus, it is necessary to establish a suitable strategy which can completely suppress these complex effects and yield reasonable molecular weight data. Herein, gel permeation chromatography (GPC) coupled with differential refractive index (RI) and multi-angle laser light scattering (MALLS) detectors was applied to measure the molecular weight of (6FDA-DMB) PAA using different mobile phases: LiBr/DMF, LiBr/H3PO4/THF/DMF, LiBr/HAc/THF/DMF, and LiBr/HAc/DMF, respectively. It was found that the combination of LiBr with HAc can shield the above-mentioned complex interactions and is more conducive to the separation of PAA than the addition of LiBr in DMF alone. LiBr/HAc/DMF was employed for the first time as a mild mobile phase to effectively separate PAA and determine its molecular weight. After a series of conditional experiments, 0.02 M LiBr/0.2 M HAc/DMF was fixed as the optimized mobile phase to measure the relative and absolute molecular weights of the (6FDA-DMB) PAA prepared, and the Mw values obtained from GPC-MALLS and GPC-RI were 35,300 g/mol and 125,000 g/mol, respectively. Notably, such a mobile phase is also applicable to other PAA samples with different structures, and the final molecular weight results are reproducible.
Keywords: polyamic acids, polyelectrolyte effects, gel permeation chromatography, mobile phase, molecular weight
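For readers less familiar with the averages discussed above, Mn and Mw (and the dispersity a GPC run reports) are simple weighted sums over the chromatogram slices. A hedged sketch with invented slice data, not the (6FDA-DMB) PAA measurements:

```python
def molecular_weight_averages(slices):
    """Number- and weight-average molecular weights from GPC slice
    data given as (molecular weight M_i, weight-fraction signal w_i)."""
    sw = sum(w for _, w in slices)
    mn = sw / sum(w / m for m, w in slices)   # Mn = Σw / Σ(w/M)
    mw = sum(w * m for m, w in slices) / sw   # Mw = Σ(wM) / Σw
    return mn, mw, mw / mn                    # third value is the dispersity (PDI)

# Hypothetical three-slice chromatogram (g/mol, relative signal):
mn, mw, pdi = molecular_weight_averages([(50_000, 1.0), (100_000, 2.0), (200_000, 1.0)])
print(round(mn), round(mw), pdi)  # → 88889 112500 1.265625
```

Mw weighs heavy chains more strongly than Mn, which is why the two detectors' disagreement above directly signals unresolved polyelectrolyte effects.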
Procedia PDF Downloads 52
23146 Mechanical Study Printed Circuit Boards Bonding for Jefferson Laboratory Detector
Authors: F. Noto, F. De Persio, V. Bellini, G. Costa, F. Mammoliti, F. Meddi, C. Sutera, G. M. Urcioli
Abstract:
One X plane and one Y plane of silicon microstrip detectors will constitute the front part of the Super Bigbite Spectrometer that is under construction and will be installed in experimental Hall A of the Thomas Jefferson National Accelerator Facility (Jefferson Laboratory), located in Newport News, Virginia, USA. Each plane will be made up of two nearly identical, 300 μm thick, 10 cm x 10.3 cm wide silicon microstrip detectors with 50 μm pitch, whose electronic signals will be transferred to the front-end electronics, based on APV25 chips, through C-shaped FR4 Printed Circuit Boards (PCB). A total of about 10000 strips are read out. This paper treats the optimization of the detector support structure and of the materials used, through a finite element simulation. A very important aspect of the study also covers the optimization of the bonding parameters between detector and electronics.
Keywords: FEM analysis, bonding, SBS tracker, mechanical structure
Procedia PDF Downloads 337
23145 A Matheuristic Algorithm for the School Bus Routing Problem
Authors: Cagri Memis, Muzaffer Kapanoglu
Abstract:
The school bus routing problem (SBRP) is a variant of the Vehicle Routing Problem (VRP) classified as a location-allocation-routing problem. In this study, the SBRP is decomposed into two sub-problems, (1) bus route generation and (2) bus stop selection, to solve large instances of the SBRP in reasonable computational times. To solve the first sub-problem, we propose a genetic algorithm to generate bus routes. Once the routes have been fixed, a sub-problem remains of allocating students to stops, considering the capacity of the buses and the walkability constraints of the students. While the exact method solves small-scale problems, treating large-scale problems with the exact method becomes complex due to computational problems, a deficiency that the genetic algorithm can overcome. Results obtained from the proposed approach on 150 instances with up to 250 stops show that the matheuristic algorithm provides better solutions in reasonable computational times with respect to benchmark algorithms.
Keywords: genetic algorithm, matheuristic, school bus routing problem, vehicle routing problem
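A genetic algorithm over bus routes needs a crossover operator that preserves permutation feasibility. As one hedged possibility (the abstract does not specify the operators used), order crossover (OX), a standard choice for permutation-encoded routes, looks like this:

```python
import random

def order_crossover(p1, p2, rng):
    """Order crossover (OX): copy a random slice of stops from parent 1,
    then fill the remaining positions with the missing stops in the
    order they appear in parent 2, so the child is still a valid route."""
    n = len(p1)
    i, j = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[i:j + 1] = p1[i:j + 1]
    fill = [s for s in p2 if s not in child]
    for k in range(n):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

rng = random.Random(0)
route = order_crossover([0, 1, 2, 3, 4, 5], [5, 3, 1, 0, 4, 2], rng)
# route visits each of the six stops exactly once
```

In the matheuristic, such GA-evolved routes are then handed to the exact student-to-stop allocation step.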
Procedia PDF Downloads 70
23144 Cybernetic Model-Based Optimization of a Fed-Batch Process for High Cell Density Cultivation of E. Coli In Shake Flasks
Authors: Snehal D. Ganjave, Hardik Dodia, Avinash V. Sunder, Swati Madhu, Pramod P. Wangikar
Abstract:
Batch cultivation of recombinant bacteria in shake flasks results in low cell density due to nutrient depletion. Previous protocols on high cell density cultivation in shake flasks have relied mainly on controlled release mechanisms and extended cultivation protocols. In the present work, we report an optimized fed-batch process for high cell density cultivation of recombinant E. coli BL21(DE3) for protein production. A cybernetic model-based, multi-objective optimization strategy was implemented to obtain the optimum operating variables that achieve maximum biomass with a minimized substrate feed rate. A syringe pump was used to feed a mixture of glycerol and yeast extract into the shake flask. Preliminary experiments were conducted with online monitoring of dissolved oxygen (DO) and offline measurements of biomass and glycerol to estimate the model parameters. Multi-objective optimization was performed to obtain the Pareto front surface. The selected optimized recipe was tested on a range of proteins that show different extents of soluble expression in E. coli. These included eYFP and LkADH, which are largely expressed in soluble fractions; CbFDH and GcanADH, which are partially soluble; and human PDGF, which forms inclusion bodies. The biomass concentrations achieved in 24 h were in the range 19.9-21.5 g/L, while the model-predicted value was 19.44 g/L. The process was successfully reproduced in a standard laboratory shake flask without online monitoring of DO and pH. The optimized fed-batch process showed significant improvement in both the biomass and protein production of the tested recombinant proteins compared to batch cultivation. The proposed process will have significant implications for the routine cultivation of E. coli for various applications.
Keywords: cybernetic model, E. coli, high cell density cultivation, multi-objective optimization
Procedia PDF Downloads 252
23143 Optimization of Enzymatic Hydrolysis of Cooked Porcine Blood to Obtain Hydrolysates with Potential Biological Activities
Authors: Miguel Pereira, Lígia Pimentel, Manuela Pintado
Abstract:
Animal blood is a major by-product of slaughterhouses and still represents a cost and an environmental problem in some countries. To be eliminated, blood must be stabilised by cooking, after which the slaughterhouses have to pay for its incineration. In order to reduce the elimination costs and valorise the high protein content, the aim of this study was the optimization of hydrolysis conditions, in terms of enzyme ratio and time, to obtain hydrolysates with biological activity. Two enzymes were tested in this assay: pepsin and proteases from Cynara cardunculus (cardosins). The latter has the advantage of being widely used in the Portuguese dairy industry and has a low price. The screening assays were carried out over reaction times between 0 and 10 h, using a ratio of enzyme to reaction volume between 0 and 5%. The assays were performed at the optimal conditions of pH and temperature for each enzyme: 55 °C at pH 5.2 for cardosins and 37 °C at pH 2.0 for pepsin. After the reaction, the hydrolysates were evaluated by FPLC (Fast Protein Liquid Chromatography) and tested for their antioxidant activity by the ABTS method. FPLC chromatograms showed different profiles when comparing the enzymatic reactions with the control (no enzyme added). The chromatograms exhibited new peaks with lower MW that were not present in the control samples, demonstrating hydrolysis by both enzymes. Regarding the antioxidant activity, the best results for both enzymes were obtained using an enzyme/reaction volume ratio of 5% during 5 h of hydrolysis. However, extending the reaction did not significantly affect the antioxidant activity, which is industrially relevant with respect to the process cost.
In conclusion, enzymatic blood hydrolysis can be a better alternative to the current elimination process, allowing the industry to reuse an ingredient with biological properties and economic value.
Keywords: antioxidant activity, blood, by-products, enzymatic hydrolysis
Procedia PDF Downloads 507
23142 Adaptive Anchor Weighting for Improved Localization with Levenberg-Marquardt Optimization
Authors: Basak Can
Abstract:
This paper introduces an iterative, weighted localization method that utilizes a unique cost function formulation to significantly enhance the performance of positioning systems. The system employs locators, such as Gateways (GWs), to estimate and track the position of an End Node (EN). Performance is evaluated relative to the number of locators, whose known locations are determined through calibration. The performance evaluation is presented using low-cost, single-antenna Bluetooth Low Energy (BLE) devices. The proposed approach can be applied to alternative Internet of Things (IoT) modulation schemes, as well as to Ultra WideBand (UWB) or millimeter-wave (mmWave) based devices. In non-line-of-sight (NLOS) scenarios, using four or eight locators yields a 95th percentile localization performance of 2.2 meters and 1.5 meters, respectively, in a 4,305 square foot indoor area with BLE 5.1 devices. This method outperforms conventional RSSI-based techniques, achieving a 51% improvement with four locators and a 52% improvement with eight locators. Future work involves modeling the impact of interference and implementing data curation across multiple channels to mitigate such effects.
Keywords: lateration, least squares, Levenberg-Marquardt algorithm, localization, path-loss, RMS error, RSSI, sensors, shadow fading, weighted localization
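The weighted lateration step described above can be sketched with SciPy's Levenberg-Marquardt solver. Everything here is illustrative (the gateway coordinates, the log-distance path-loss constants P0 and n, and the uniform weights in a noise-free toy case); the paper's unique cost function and calibration procedure are not reproduced:

```python
import numpy as np
from scipy.optimize import least_squares

P0, N = -40.0, 2.0  # RSSI at 1 m and path-loss exponent (assumed values)
gws = np.array([[0.0, 0.0], [20.0, 0.0], [0.0, 20.0], [20.0, 20.0]])  # gateway positions, m

def rssi_to_distance(rssi):
    """Invert the log-distance path-loss model RSSI = P0 - 10*N*log10(d)."""
    return 10.0 ** ((P0 - rssi) / (10.0 * N))

def residuals(p, anchors, dists, weights):
    """Weighted range residuals: each gateway's error scaled by its weight."""
    return weights * (np.linalg.norm(anchors - p, axis=1) - dists)

# Synthesize noise-free RSSI from a known end-node position, then solve:
true_pos = np.array([6.0, 9.0])
rssi = P0 - 10.0 * N * np.log10(np.linalg.norm(gws - true_pos, axis=1))
d = rssi_to_distance(rssi)
w = np.ones(len(gws))  # uniform weights; adaptive weighting would vary these per link
est = least_squares(residuals, x0=[10.0, 10.0], args=(gws, d, w), method="lm").x
```

With noisy RSSI and NLOS bias, the choice of per-gateway weights is exactly what the adaptive scheme above is optimizing.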
Procedia PDF Downloads 23
23141 Probabilistic Slope Stability Analysis of Excavation Induced Landslides Using Hermite Polynomial Chaos
Authors: Schadrack Mwizerwa
Abstract:
The characterization and prediction of landslides are crucial for assessing geological hazards and mitigating risks to infrastructure and communities. This research develops a probabilistic framework for analyzing excavation-induced landslides. The study uses Hermite polynomial chaos to represent the non-stationary random field, analyze the stability of a slope, and characterize the failure probability of a real landslide induced by highway construction excavation. The correlation within the data is captured using Karhunen-Loève (KL) expansion theory, and the finite element method is used to analyze the slope's stability. The research contributes to the field of landslide characterization by employing advanced random field approaches, providing valuable insights into the complex nature of landslide behavior and into the effectiveness of advanced probabilistic models for risk assessment and management. Data collected from the Baiyuzui landslide, induced by highway construction, is used as an illustrative example. The findings highlight the importance of considering the probabilistic nature of landslides.
Keywords: Hermite polynomial chaos, Karhunen-Loève expansion, slope stability, probabilistic analysis
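The KL expansion mentioned above can be illustrated in discrete form: build a covariance matrix for an assumed correlation model on a grid, eigendecompose it, and synthesize random-field samples from the leading modes. The grid size, variance, and correlation length below are assumptions for illustration, not the Baiyuzui site parameters:

```python
import numpy as np

# Exponential covariance of a soil-property field on a 1-D grid (illustrative):
n, corr_len, sigma2 = 100, 10.0, 1.0
x = np.linspace(0.0, 50.0, n)
cov = sigma2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

# KL modes = eigenpairs of the covariance matrix, largest first:
vals, vecs = np.linalg.eigh(cov)          # eigh returns ascending eigenvalues
vals, vecs = vals[::-1], vecs[:, ::-1]

# Truncate to m leading modes and draw one field realization:
m = 10
rng = np.random.default_rng(0)
xi = rng.standard_normal(m)               # independent standard normal coefficients
field = vecs[:, :m] @ (np.sqrt(vals[:m]) * xi)
```

Each such realization would feed one finite element stability run; the failure probability is then estimated over many realizations (or via the polynomial chaos surrogate).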
Procedia PDF Downloads 76
23140 Improving the Frequency Response of a Circular Dual-Mode Resonator with a Reconfigurable Bandwidth
Authors: Muhammad Haitham Albahnassi, Adnan Malki, Shokri Almekdad
Abstract:
In this paper, a method for reconfiguring the bandwidth of a circular dual-mode resonator is presented. The method concerns the optimized geometry of a structure that may be used to host the tuning elements, which are typically RF (Radio Frequency) switches. The tuning elements themselves, and their performance during tuning, are not the focus of this paper. The designed resonator is able to reconfigure its fractional bandwidth by adjusting the inter-coupling level between the degenerate modes, while at the same time improving its response by adjusting the external-coupling level and keeping the center frequency fixed. The inter-coupling level has been adjusted by changing the dimensions of the perturbation element, while the external-coupling level has been adjusted by changing one of the feeder dimensions. The design was arrived at via optimization. Simulation and measurement results of the designed and implemented filters agree, and showed good improvements in return loss values and in the stability of the center frequency.
Keywords: dual-mode resonators, perturbation theory, reconfigurable filters, software defined radio, cognitive radio
Procedia PDF Downloads 166
23139 Design of Digital IIR Filter Using Opposition Learning and Artificial Bee Colony Algorithm
Authors: J. S. Dhillon, K. K. Dhaliwal
Abstract:
In almost all digital filtering applications, digital infinite impulse response (IIR) filters are preferred over finite impulse response (FIR) filters because they provide much better performance, lower computational cost, and smaller memory requirements for similar magnitude specifications. However, digital IIR filters are generally multimodal with respect to the filter coefficients and, therefore, reliable methods that can provide globally optimal solutions are required. The artificial bee colony (ABC) algorithm is one such recently introduced meta-heuristic optimization algorithm. But in some cases it searches the solution space insufficiently, resulting in a weak exchange of information, and hence is not able to return better solutions. To overcome this deficiency, an opposition-based learning strategy is incorporated into ABC, and hence a modified version called the oppositional artificial bee colony (OABC) algorithm is proposed in this paper. Duplication of members is avoided during the run, which also augments the exploration ability. The developed algorithm is then applied to the design of optimal and stable digital IIR filter structures, where the design of low-pass (LP) and high-pass (HP) filters is carried out. Fuzzy theory is applied to maximize the satisfaction of the minimum magnitude error and stability constraints. To check the effectiveness of OABC, the results are compared with some well-established filter design techniques, and it is observed that in most cases OABC returns better or at least comparable results.
Keywords: digital infinite impulse response filter, artificial bee colony optimization, opposition based learning, digital filter design, multi-parameter optimization
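The opposition-based learning step that distinguishes OABC from plain ABC admits a compact sketch: for each candidate coefficient vector x in [lb, ub], also evaluate its opposite lb + ub - x, and retain the better half of the combined pool. The sphere function below stands in for the actual filter magnitude-error objective, which this toy does not model:

```python
import numpy as np

def opposition_init(fitness, lb, ub, n_sources, dim, rng):
    """Opposition-based initialization: evaluate each random food source
    and its opposite, keep the best n_sources of the combined pool."""
    pop = lb + (ub - lb) * rng.random((n_sources, dim))
    opp = lb + ub - pop                       # opposite solutions
    pool = np.vstack([pop, opp])
    scores = np.apply_along_axis(fitness, 1, pool)
    best = np.argsort(scores)[:n_sources]     # lower fitness = better here
    return pool[best]

rng = np.random.default_rng(1)
lb, ub = -5.0, 5.0
colony = opposition_init(lambda x: np.sum(x**2), lb, ub, 20, 4, rng)
# colony holds the 20 best of 40 candidates; the ABC employed/onlooker/scout
# phases would then iterate from this improved starting population.
```

The same opposite-point trick can also be reapplied periodically during the run, which is one common variant of opposition-based learning.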
Procedia PDF Downloads 477
23138 Recognizing Juxtaposition Patterns of the Dwelling Units in Housing Cluster: The Case Study of Aghayan Complex: An Example of Rural Residential Development in Qajar Era in Iran
Authors: Outokesh Fatemeh, Jourabchi Keivan, Talebi Maryam, Nikbakht Fatemeh
Abstract:
Mayamei is a small town in Iran, located on the Silk Road between the cities of Shahrud and Sabzevar, with a history of approximately 1000 years. An alley entitled ‘Aghayan’ exists in this town that comprises the residential buildings of a famous family; a bathhouse, mosque, telegraph center, and cistern are all associated with this alley. The architectural complex belongs to Sadat Mousavi, one of Mayamei's major grandees and a religious household, and since its construction the alley has been passed down from generation to generation within the family. The purpose of this study, which was conducted on the Aghayan alley and its associated complex, was to elucidate the Iranian vernacular domestic architecture of the Qajar era in small towns and villages. We searched for large, medium, and small architectural patterns in the complex and tried to trace their evolution from the past to the present. A further objective of this project was to find a correlation between changes in the lifestyle of the alley's inhabitants and the form of the buildings' architecture. Our investigation methods included a literature review with particular attention to historical travelogues, site visits, mapping, interviews with elderly members of the Mousavi family (the owners), and examination of the available documents, especially the four-meter scroll-type testament written about 150 years ago. In analyzing these data, an effort was made to discover (1) the patterns of placement of the different buildings with respect to one another, (2) the relation between the function of the buildings and their relative location in the complex, as considered in the original design, and (3) possible changes in the functions of the buildings over time. In this investigation, special attention was paid to the chronological changes in the residents' lifestyles.
In addition, we tried to take all the residents' different activities into account, including daily life activities, religious ceremonies, etc. By combining these methods, we were able to obtain a picture of the buildings in their original (construction) state, along with knowledge of the temporal evolution of the architecture. An interesting finding is that the Aghayan complex appears to be a large structure of horizontal-type apartments placed next to each other. The houses made in this way are connected to the adjacent neighbors both through the bifacial rooms and via the roofs.
Keywords: Iran, Qajar period, vernacular domestic architecture, lifestyle, residential complex
Procedia PDF Downloads 161
23137 Data-Driven Dynamic Overbooking Model for Tour Operators
Authors: Kannapha Amaruchkul
Abstract:
We formulate a dynamic overbooking model for a tour operator, in which most reservations contain at least two people. The cancellation rate and the timing of cancellation may depend on the group size. We propose two overbooking policies, namely economic-based and service-based. In the economic-based policy, we minimize the expected cost of oversold and underused capacity, whereas in the service-based policy, we ensure that the probability of an oversold situation does not exceed a pre-specified threshold. To illustrate the applicability of our approach, we use tour package data from 2016-2018 from a tour operator in Thailand to build a data-driven robust optimization model, and we tested the proposed overbooking policy on 2019 data. We also compare the data-driven approach to the conventional approach of fitting the data to a probability distribution.
Keywords: applied stochastic model, data-driven robust optimization, overbooking, revenue management, tour operator
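The service-based policy can be illustrated with a simple booking-limit calculation. The sketch below assumes independent show-ups with a common show probability, ignoring the group-size-dependent cancellation behavior that the model above captures; all numbers are illustrative.

```python
import math

def oversold_prob(bookings, capacity, show_prob):
    # P(number of shows exceeds capacity) when each booking shows up
    # independently with probability show_prob (binomial tail)
    return sum(
        math.comb(bookings, k) * show_prob**k * (1 - show_prob)**(bookings - k)
        for k in range(capacity + 1, bookings + 1)
    )

def booking_limit(capacity, show_prob, threshold):
    # largest number of accepted bookings keeping oversold risk below threshold
    b = capacity
    while oversold_prob(b + 1, capacity, show_prob) <= threshold:
        b += 1
    return b

limit = booking_limit(capacity=40, show_prob=0.9, threshold=0.05)
```

Accepting up to `limit` reservations then guarantees that the chance of an oversold departure stays within the pre-specified service threshold.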
Procedia PDF Downloads 131
23136 Optimization of Process Parameters for Copper Extraction from Wastewater Treatment Sludge by Sulfuric Acid
Authors: Usarat Thawornchaisit, Kamalasiri Juthaisong, Kasama Parsongjeen, Phonsiri Phoengchan
Abstract:
In this study, sludge samples collected from the wastewater treatment plant of a printed circuit board manufacturing industry in Thailand were subjected to acid extraction using sulfuric acid as the chemical extracting agent. The effects of sulfuric acid concentration (A), the ratio of the volume of acid to the quantity of sludge (B), and extraction time (C) on the efficiency of copper extraction were investigated with the aim of finding the optimal conditions for maximum removal of copper from the wastewater treatment sludge. A factorial experimental design was employed to model the copper extraction process. The results were analyzed statistically using analysis of variance to identify the process variables that significantly affected the copper extraction efficiency. Results showed that all linear terms, as well as the interaction term between the acid-to-sludge ratio and extraction time (BC), had a statistically significant influence on the efficiency of copper extraction under the tested conditions; the most significant effect was ascribed to the acid-to-sludge ratio (B), followed by sulfuric acid concentration (A), extraction time (C), and the interaction term BC, respectively. The remaining two-way interaction terms (AB, AC) and the three-way interaction term (ABC) were not statistically significant at the significance level of 0.05. A model equation was derived for the copper extraction process, and the process was optimized using a multiple-response method, the desirability (D) function, to target maximum removal. The optimum conditions, extracting 99% of the copper, were found to be a sulfuric acid concentration of 0.9 M and a ratio of acid volume (mL) to sludge quantity (g) of 100:1, with an extraction time of 80 min. Experiments under the optimized conditions were carried out to validate the accuracy of the model.
Keywords: acid treatment, chemical extraction, sludge, waste management
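The desirability-based optimization step can be sketched as follows. The fitted model coefficients below are illustrative placeholders, not the paper's regression results; only the structure (linear terms A, B, C plus the BC interaction, maximized through a larger-is-better desirability function over the tested factor levels) follows the abstract.

```python
import itertools

def extraction(A, B, C):
    # hypothetical fitted model: efficiency (%) from acid concentration A (M),
    # acid-to-sludge ratio B (mL/g), and time C (min), with a BC interaction
    return 20 + 40*A + 0.35*B + 0.1*C + 0.001*B*C

def desirability(y, low=50.0, target=100.0):
    # one-sided "larger is better" desirability in [0, 1]
    if y <= low:
        return 0.0
    if y >= target:
        return 1.0
    return (y - low) / (target - low)

# grid search over the tested factor levels for the most desirable setting
levels = itertools.product([0.5, 0.7, 0.9], [50, 75, 100], [40, 60, 80])
best = max(levels, key=lambda x: desirability(extraction(*x)))
```

With these placeholder coefficients, the search lands on the high acid concentration and high acid-to-sludge ratio, mirroring the direction of the reported optimum.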
Procedia PDF Downloads 197
23135 An Approach to Capture, Evaluate and Handle Complexity of Engineering Change Occurrences in New Product Development
Authors: Mohammad Rostami Mehr, Seyed Arya Mir Rashed, Arndt Lueder, Magdalena Missler-Behr
Abstract:
This paper presents the conception that complex problems do not necessarily need a similarly complex solution in order to cope with the complexity; rather, a simple solution based on established methods can provide a sufficient way to deal with it. To verify this conception, the paper focuses on the field of change management as part of the new product development process in the automotive sector. In this field, dealing with increasing complexity is essential, while only inflexible, rigid processes that are not designed to handle complexity are available. The basic methodology of this paper can be divided into four main sections: 1) analyzing the complexity of change management, 2) a literature review to identify potential solutions and methods, 3) capturing and implementing the expertise of change management experts at an automobile manufacturing company, and 4) a systematic comparison of the methods identified in the literature, connecting them with the defined complexity requirements of change management in order to develop a solution. As a practical outcome, this paper provides a method to capture the complexity of engineering changes (EC) and to include it within the EC evaluation process, following case-related process guidance to cope with the complexity. Furthermore, this approach supports the conception that dealing with complexity is possible while utilizing rather simple and established methods by combining them into a powerful tool.
Keywords: complexity management, new product development, engineering change management, flexibility
Procedia PDF Downloads 195
23134 An Investigation of How Salad Rocket May Provide Its Own Defence Against Spoilage Bacteria
Authors: Huda Aldossari
Abstract:
Members of the Brassicaceae family, such as rocket species, have high concentrations of glucosinolates (GSLs). GSLs and isothiocyanates (ITCs), the products of GSL hydrolysis, are the most influential compounds affecting flavour in rocket species. Aside from their contribution to flavour, GSLs and ITCs are of particular interest due to their potential ability to inhibit the growth of human pathogenic bacteria such as E. coli O157. Quantitative and qualitative analysis of the glucosinolate compounds in rocket extracts was performed by Liquid Chromatography-Mass Spectrometry (LC-MS). Each individual component of the non-volatile GSLs and ITCs was isolated by High-Performance Liquid Chromatography (HPLC) fractionation, and the identity and purity of each fraction were confirmed using Ultra High-Performance Liquid Chromatography (UPLC). The separation of glucosinolates in the complex rocket extracts was achieved by optimizing an HPLC fractionation method through changes to the mobile phase composition, the solvent gradient, and the flow rate. As a result, six glucosinolate compounds (glucosativin, 4-methoxyglucobrassicin, glucotropaeolin (GTP), glucoiberin (GIB), diglucothiobenin, and sinigrin) were isolated, identified, and quantified in the complex samples. The next step is to evaluate the antibacterial activity of the glucosinolates and their enzymatic hydrolysis products against the growth of E. coli K12; fractions from this study will therefore be used to determine the most active compounds by investigating the efficacy of each GSL and ITC component at inhibiting bacterial growth.
Keywords: rocket, glucosinolates, E. coli K12, HPLC fractionation
Procedia PDF Downloads 94
23133 Soft Computing Employment to Optimize Safety Stock Levels in Supply Chain Dairy Product under Supply and Demand Uncertainty
Authors: Riyadh Jamegh, Alla Eldin Kassam, Sawsan Sabih
Abstract:
In order to overcome uncertainty and the resulting inability to meet customers' requests, organizations tend to reserve a certain safety stock level (SSL). This level must be chosen carefully in order to avoid the increased holding cost of an excessive SSL or the shortage cost of one that is too low. This paper uses soft-computing fuzzy logic to identify the optimal SSL; the fuzzy model uses a dynamic concept to cope with a highly complex environment. The proposed model deals with three input variables, i.e., demand stability level, raw material availability level, and on-hand inventory level, using dynamic fuzzy logic to obtain the best SSL as an output. In this model, the demand stability, raw material, and on-hand inventory levels are described linguistically and then treated by the inference rules of the fuzzy model to extract the best safety stock level. The aim of this research is to provide a dynamic approach for identifying the safety stock level that can be implemented in different industries. A numerical case study in the dairy industry, with a 200 g yogurt cup product, is presented to validate the proposed model. The obtained results are compared with the current safety stock level, calculated using the traditional approach; the significance of the proposed model is demonstrated by the substantial reduction in the safety stock level.
Keywords: inventory optimization, soft computing, safety stock optimization, dairy industries inventory optimization
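A stripped-down version of such a fuzzy inference step can be sketched as follows. The membership functions, rule base, and normalized scales are invented for illustration; the paper's actual model uses its own linguistic terms and rules for the three inputs.

```python
def tri(x, a, b, c):
    # triangular membership function peaking at b
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def safety_stock_level(demand_stability, material_availability, on_hand):
    # all inputs normalized to [0, 1]; output is a normalized SSL in [0, 1]
    low = lambda x: tri(x, -0.5, 0.0, 1.0)
    high = lambda x: tri(x, 0.0, 1.0, 1.5)
    # illustrative rule base: (firing strength, rule output)
    rules = [
        (min(low(demand_stability), low(material_availability)), 0.9),  # risky -> high SSL
        (min(high(demand_stability), high(on_hand)), 0.1),              # safe -> low SSL
        (high(material_availability), 0.3),                             # supply secure -> lower SSL
    ]
    # weighted-average defuzzification
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.5
```

An unstable-demand, scarce-material situation then yields a higher recommended SSL than a stable, well-supplied one, which is the qualitative behavior the model above formalizes.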
Procedia PDF Downloads 123
23132 A Reduced Ablation Model for Laser Cutting and Laser Drilling
Authors: Torsten Hermanns, Thoufik Al Khawli, Wolfgang Schulz
Abstract:
In laser cutting, as well as in long-pulsed laser drilling of metals, it can be demonstrated that the ablation shape that forms (the shape of the cut faces or of the hole, respectively) approaches a so-called asymptotic shape, such that it changes only slightly, or not at all, with further irradiation. These findings are already known from the ultrashort pulse (USP) ablation of dielectric and semiconducting materials. The explanation for the occurrence of an asymptotic shape in laser cutting and long-pulse drilling of metals is identified, its underlying mechanism numerically implemented, tested, and clearly confirmed by comparison with experimental data. In detail, there is now a model that allows the simulation of the temporal (pulse-resolved) evolution of the hole shape in laser drilling as well as of the final (asymptotic) shape of the cut faces in laser cutting. This simulation requires far fewer computational resources, such that it can run even on common desktop PCs or laptops. Individual parameters can be adjusted using sliders, and the simulation result appears in an adjacent window and changes in real time. This is made possible by an application-specific reduction of the underlying ablation model. Because this reduction dramatically decreases the computational effort, it produces a result much more quickly, which means the simulation can be carried out directly at the laser machine. Time-intensive experiments can be reduced, and set-up processes can be completed much faster. The high speed of the simulation also opens up a range of entirely different options, such as metamodeling. Suitable for complex applications with many parameters, metamodeling involves generating high-dimensional data sets with the parameters and several evaluation criteria for process and product quality. These sets can then be used to create individual process maps that show the dependency of individual parameter pairs. This advanced simulation makes it possible to find global and local extreme values through mathematical manipulation; such simultaneous optimization of multiple parameters is scarcely possible by experimental means, so new manufacturing methods such as self-optimization can be executed much faster. The software's potential does not stop there: time-intensive calculations exist in many areas of industry. In laser welding or laser additive manufacturing, for example, the simulation of thermally induced residual stresses still consumes considerable computing capacity or is not yet possible at all. Transferring the principle of reduced models promises substantial savings there, too.
Keywords: asymptotic ablation shape, interactive process simulation, laser drilling, laser cutting, metamodeling, reduced modeling
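The metamodeling idea can be sketched as follows: sample the (already reduced) simulation once over a parameter grid, fit a cheap surrogate, and then query the surrogate instead of the simulation when building process maps or optimizing. The stand-in "simulation" and the quadratic response surface below are illustrative assumptions, not the authors' ablation model.

```python
import numpy as np

def simulate(p, q):
    # stand-in for one run of the (reduced) process simulation
    return np.sin(p) * q + 0.1 * q**2

# sample the two-parameter space once (the expensive part)
P, Q = np.meshgrid(np.linspace(0.0, 3.0, 15), np.linspace(0.0, 2.0, 15))
p, q, y = P.ravel(), Q.ravel(), simulate(P, Q).ravel()

# fit a quadratic response surface as the metamodel (least squares)
X = np.column_stack([np.ones_like(p), p, q, p**2, q**2, p*q])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def metamodel(p, q):
    # near-instant surrogate evaluation for process maps or optimization
    return coef @ np.array([1.0, p, q, p**2, q**2, p*q])
```

Evaluating `metamodel` over a fine grid of a parameter pair then yields the kind of process map described above, without rerunning the simulation.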
Procedia PDF Downloads 213
23131 Small Molecule Inhibitors of PD1-PDL1 Interaction
Authors: K. Żak, S. Przetocka, R. Kitel, K. Guzik, B. Musielak, S. Malicki, G. Dubin, T. A. Holak
Abstract:
Studies on tumorigenesis have revealed a number of factors that may potentially serve as molecular targets for immunotherapies. One such promising pair of targets is the PD1 and PDL1 proteins. PD1 (Programmed cell death protein 1) is expressed by activated T cells and plays a critical role in modulating the host's immune response. One of the PD1 ligands, PDL1, is expressed by macrophages, monocytes, and cancer cells, which exploit it to avoid immune attack. This understanding of the mechanisms used by cancer cells to block the immune response has been utilized in the development of therapies blocking the PD1-PDL1 interaction. To date, the human PD1-PDL1 complex has not been crystallized, and the structure of the mouse-human complex does not provide a complete view of the molecular basis of PD1-PDL1 interactions. The purpose of this study is to obtain the crystal structure of the human PD1-PDL1 complex, which shall allow the rational design of small-molecule inhibitors of the interaction. In addition, the study presents results on the binding of small molecules to PD1 and on fragment docking against the PD1 protein, which will facilitate the design and development of small-molecule inhibitors of the PD1-PDL1 interaction.
Keywords: PD1, PDL1, cancer, small molecule, drug discovery
Procedia PDF Downloads 393
23130 Enhancing Spatial Interpolation: A Multi-Layer Inverse Distance Weighting Model for Complex Regression and Classification Tasks in Spatial Data Analysis
Authors: Yakin Hajlaoui, Richard Labib, Jean-François Plante, Michel Gamache
Abstract:
This study introduces the Multi-Layer Inverse Distance Weighting model (ML-IDW), inspired by the mathematical formulation of both multi-layer neural networks (ML-NNs) and the inverse distance weighting (IDW) model. ML-IDW leverages the processing capabilities of ML-NNs, characterized by compositions of learnable non-linear functions applied to input features, and incorporates IDW's ability to learn anisotropic spatial dependencies, presenting a promising solution for nonlinear spatial interpolation and learning from complex spatial data. We employ gradient descent and backpropagation to train ML-IDW, comparing its performance against conventional spatial interpolation models such as Kriging and standard IDW on regression and classification tasks using simulated spatial datasets of varying complexity. The results highlight the efficacy of ML-IDW, particularly in handling complex spatial datasets, exhibiting lower mean square error in regression and higher F1 score in classification.
Keywords: deep learning, multi-layer neural networks, gradient descent, spatial interpolation, inverse distance weighting
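Classic IDW, the base model that ML-IDW generalizes, can be sketched as follows; a trainable version would replace the fixed power (and the isotropic Euclidean distance) with learnable parameters updated by gradient descent. The function below is a standard textbook formulation, with query handling simplified for illustration.

```python
import numpy as np

def idw_predict(x_known, y_known, x_query, power=2.0, eps=1e-12):
    # weights decay with distance: w_i = 1 / (d_i^power + eps)
    d = np.linalg.norm(np.asarray(x_known) - np.asarray(x_query), axis=1)
    w = 1.0 / (d**power + eps)
    return float(np.sum(w * np.asarray(y_known)) / np.sum(w))

# two known stations; by symmetry the midpoint gets the average value
pts = [[0.0, 0.0], [1.0, 0.0]]
vals = [0.0, 2.0]
mid = idw_predict(pts, vals, [0.5, 0.0])
```

Near a known point, the prediction collapses to that point's value, since its weight dominates; this is the interpolating behavior that ML-IDW's learnable layers build on.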
Procedia PDF Downloads 52
23129 The Value and Role of Higher Education in the Police Profession
Authors: Habib Ahmadi, Mohamad Ali Ameri
Abstract:
In this research, police officers' perception and understanding of the value of higher education were investigated. A qualitative research approach and a phenomenological method were used, and the data were analyzed with Colaizzi's method. Seventeen people with different degrees and occupations were selected by purposive sampling until saturation and were studied using semi-structured interviews. After the data were collected, recorded, and coded in the ATLAS.ti software, they were formulated into main categories and concepts. The general views of the police officers participating in this research show the importance of university education in police jobs (76%). The analysis of the participants' experiences led to the identification of seven main categories concerning the value and role of higher education: 1) improvement of behavior and social skills, 2) opportunities to improve job performance, 3) professionalization of police work, 4) financial motivation, 5) public satisfaction with police services, 6) improvement of writing and technical reporting skills, and 7) raised expectations (a negative perception). The findings of this study support the positive attitude and professionalism of educated police officers. Therefore, considering the paradigm shift in society, changing technologies, more complex organizational designs, and the perceptions of police officers, it is concluded that the police field needs officers with higher education to enable them to understand the new global environment.
Keywords: lived experience, higher education, police professionalization, perceptions of police officers
Procedia PDF Downloads 81
23128 The Effect of Land Cover on Movement of Vehicles in the Terrain
Authors: Krisstalova Dana, Mazal Jan
Abstract:
This article deals with geographical conditions in terrain and their effect on the movement of vehicles, in particular on the speed and safety of the movement of people and vehicles. Finding optimal routes off the road network is studied mainly in the military environment, but it occurs in civilian settings as well, primarily in crisis situations or in the provision of assistance after natural disasters such as floods, fires, or storms. These movements require route optimization in which the effects of geographical factors are included. The most important factor is the surface of the terrain, which depends on several geographical factors such as slope, soil conditions, micro-relief, surface type, and meteorological conditions. Their combined impact is expressed by a coefficient of deceleration, which can support the commander's decision-making. New approaches and methods of terrain testing, mathematical computing, mathematical statistics, and cartometric investigation are necessary parts of this evaluation.
Keywords: movement in a terrain, geographical factors, surface of a field, mathematical evaluation, optimization and searching paths
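The abstract does not give the formula for combining the factors, but a common and simple assumption is multiplicative: the vehicle's nominal speed is scaled by a deceleration coefficient in (0, 1] for each geographical factor. The factor names and values below are purely illustrative.

```python
def effective_speed(base_speed_kmh, factors):
    # scale the nominal road speed by each terrain deceleration coefficient
    speed = base_speed_kmh
    for name, coeff in factors.items():
        assert 0.0 < coeff <= 1.0, f"invalid coefficient for {name}"
        speed *= coeff
    return speed

# hypothetical off-road segment: moderate slope, wet soil, rough micro-relief
v = effective_speed(60.0, {"slope": 0.8, "soil": 0.6, "micro_relief": 0.9})
```

A route optimizer can then weight each terrain cell's crossing time by 1/v, so cells with low combined coefficients are naturally avoided.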
Procedia PDF Downloads 423
23127 The Effects of Key Factors in Traffic-Oriented Road Alignment Adjustment for Low Emissions Profile: A Case Study in Norway
Authors: Gaylord K. Booto, Marinelli Giuseppe, Helge Brattebø, Rolf A. Bohne
Abstract:
Emissions reduction has emerged as one of the principal targets in planning and designing road alignments today. Intelligent road design methods that can produce optimized alignments constitute concrete and innovative responses towards better alternatives and more sustainable road infrastructure. As the largest share of a road infrastructure's emissions occurs in the operation stage, it becomes very important to consider traffic weight and distribution in the alignment design process. This study analyzes the effects of four traffic factors (i.e., operating speed, vehicle category, technology, and fuel type) on adjusting the vertical alignment of a given road, using optimization techniques. The factors' effects are then assessed qualitatively and quantitatively, and the emission profiles of the resulting alignment alternatives are compared.
Keywords: alignment adjustment, emissions reduction, optimization, traffic-oriented
Procedia PDF Downloads 368
23126 Reduction of Differential Column Shortening in Tall Buildings
Authors: Hansoo Kim, Seunghak Shin
Abstract:
Differential column shortening in tall buildings can be reduced by improving the material and structural characteristics of the structural systems. This paper proposes structural methods to reduce differential column shortening in reinforced concrete tall buildings: connecting columns with rigidly jointed horizontal members, using outriggers, and placing additional reinforcement at the columns. Rigidly connected horizontal members, including outriggers, reduce the differential shortening between adjacent vertical members. The axial stiffness of columns with greater shortening can be effectively increased by placing additional reinforcement at those columns, so the differential column shortening can be reduced at the design stage. The optimum distribution of the additional reinforcement can be determined by applying a gradient-based optimization technique.
Keywords: column shortening, long-term behavior, optimization, tall building
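The effect of added reinforcement on axial stiffness can be illustrated with a transformed-section calculation for the elastic component of shortening, delta = N L / (E_c A_tr) with A_tr = A_c + (n - 1) A_s and modular ratio n = E_s / E_c. Creep and shrinkage, which dominate the long-term behavior studied above, are omitted here, and the numbers are illustrative.

```python
def elastic_shortening(load_n, length_m, e_conc_pa, e_steel_pa,
                       area_conc_m2, area_steel_m2):
    # transformed-section axial stiffness: steel counted via the modular ratio
    n = e_steel_pa / e_conc_pa
    area_tr = area_conc_m2 + (n - 1.0) * area_steel_m2
    return load_n * length_m / (e_conc_pa * area_tr)

# illustrative storey column: 5 MN load, 4 m height, 0.6 m x 0.6 m section,
# comparing 2% and 4% reinforcement ratios
base = elastic_shortening(5e6, 4.0, 30e9, 200e9, 0.36, 0.0072)
more = elastic_shortening(5e6, 4.0, 30e9, 200e9, 0.36, 0.0144)
```

Doubling the steel area reduces the elastic shortening of the storey, which is the mechanism by which redistributing reinforcement equalizes shortening between adjacent columns.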
Procedia PDF Downloads 248