Search results for: dual phase lag model
5470 Assessment of Carbon Dioxide Separation by Amine Solutions Using Electrolyte Non-Random Two-Liquid and Peng-Robinson Models: Carbon Dioxide Absorption Efficiency
Authors: Arash Esmaeili, Zhibang Liu, Yang Xiang, Jimmy Yun, Lei Shao
Abstract:
High-pressure absorption of carbon dioxide (CO2) from a specified gas stream in a conventional column has been evaluated with the Aspen HYSYS simulator, using a wide range of single absorbents and blended solutions, to estimate the outlet CO2 concentration, absorption efficiency, and CO2 loading, and thereby to select the most suitable solution for CO2 capture with respect to environmental concerns. The property package (Acid Gas-Chemical Solvent), which is compatible with all solutions applied in this simulation study, estimates properties based on an electrolyte non-random two-liquid (E-NRTL) model for electrolyte thermodynamics and the Peng-Robinson equation of state for the vapor and liquid hydrocarbon phases. Among all the investigated single amines and blended solutions, piperazine (PZ) and the mixture of piperazine and monoethanolamine (MEA) were found to be the most effective absorbents for CO2 absorption, showing high reactivity under the simulated operational conditions.
Keywords: absorption, amine solutions, Aspen HYSYS, carbon dioxide, simulation
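For orientation, the two quantities such simulations report reduce to simple mole balances. A minimal sketch, with made-up stream values rather than results from the study:

```python
def absorption_efficiency(y_in: float, y_out: float) -> float:
    """Fraction of inlet CO2 removed, from inlet/outlet CO2 mole fractions."""
    return (y_in - y_out) / y_in

def co2_loading(mol_co2_absorbed: float, mol_amine: float) -> float:
    """CO2 loading: moles of CO2 absorbed per mole of amine in solution."""
    return mol_co2_absorbed / mol_amine

# Hypothetical inlet/outlet compositions, not values from the paper
eff = absorption_efficiency(y_in=0.10, y_out=0.005)
loading = co2_loading(mol_co2_absorbed=0.45, mol_amine=1.0)
```

Comparing solvents then amounts to comparing these two numbers at matched operating conditions.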
Procedia PDF Downloads 192
5469 Learning Curve Effect on Materials Procurement Schedule of Multiple Sister Ships
Authors: Vijaya Dixit, Aasheesh Dixit
Abstract:
The shipbuilding industry operates in an Engineer-Procure-Construct (EPC) context. The product mix of a shipyard comprises various types of ships, such as bulk carriers, tankers, barges, coast guard vessels, and submarines. Each order is unique based on the type of ship and customized requirements, which are engineered into the product right from the design stage. Thus, to execute every new project, a shipyard needs to upgrade its production expertise. As a result, over the long run, holistic learning occurs across different types of projects, which contributes to the knowledge base of the shipyard. Simultaneously, in the short term, during execution of a project comprising multiple sister ships, repetition of similar tasks leads to learning at the activity level. This research aims to capture both kinds of learning and incorporate the learning curve effect in project scheduling and materials procurement to improve project performance. Extant literature supports the existence of such learning in an organization. In shipbuilding, there are sequences of similar activities which are expected to exhibit learning curve behavior, for example, the nearly identical structural sub-blocks which are successively fabricated, erected, and outfitted with piping and electrical systems. A learning curve representation can model not only a decrease in the mean completion time of an activity, but also a decrease in the uncertainty of activity duration. Sister ships have similar material requirements, and the same supplier base supplies materials for all the sister ships within a project. On one hand, this provides an opportunity to reduce transportation cost by batching the order quantities of multiple ships. On the other hand, it increases the inventory holding cost at the shipyard and the risk of obsolescence. Further, due to the learning curve effect, the production schedule of each subsequent ship gets compressed.
Thus, the material requirement schedule of every next ship differs from that of its predecessor. As more and more ships get constructed, compressed production schedules increase the possibility of batching the orders of sister ships. This work aims at integrating materials management with project scheduling of long-duration projects for the manufacturing of multiple sister ships. It incorporates the learning curve effect on progressively compressing material requirement schedules and addresses the above trade-off between transportation cost and inventory holding and shortage costs while satisfying the budget constraints of the various stages of the project. The activity durations and the lead times of items are not crisp and are available in the form of probability distributions. A Stochastic Mixed Integer Programming (SMIP) model is formulated and solved using an evolutionary algorithm. Its output provides ordering dates and the degree of order batching for all types of items. Sensitivity analysis determines the threshold number of sister ships required in a project to leverage the advantage of the learning curve effect in materials management decisions. This analysis will help materials managers gain insights into when, and to what degree, it is beneficial to treat a multiple-ship project as an integrated one by batching the order quantities, and when, and to what degree, to practice distinctive procurement for individual ships.
Keywords: learning curve, materials management, shipbuilding, sister ships
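The activity-level learning described above is commonly modeled with Wright's learning curve, under which the n-th repetition of an activity takes T1 * n^(log2 r) time units for a learning rate r. A minimal sketch (the 80% learning rate and the 100-unit first-ship duration are illustrative assumptions, not data from the study):

```python
import math

def unit_time(t_first: float, n: int, learning_rate: float) -> float:
    """Wright's learning curve: duration of the n-th repetition of an activity.

    learning_rate is the fraction of the previous time kept each time
    cumulative output doubles (e.g. 0.8 for an 80% curve).
    """
    b = math.log(learning_rate) / math.log(2)
    return t_first * n ** b

# Fabrication time of the same sub-block on four successive sister ships
times = [unit_time(100.0, n, 0.8) for n in range(1, 5)]
# Each doubling of repetitions cuts the time to 80%: 100, 80, ~70.2, 64
```

It is this progressive compression of durations that shifts the material requirement schedule of each subsequent ship.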
5468 Evaluation Methods for Question Decomposition Formalism
Authors: Aviv Yaniv, Ron Ben Arosh, Nadav Gasner, Michael Konviser, Arbel Yaniv
Abstract:
This paper introduces two methods for the evaluation of Question Decomposition Meaning Representation (QDMR) as predicted by a sequence-to-sequence model and the COPYNET parser for natural language question processing, motivated by the fact that previous evaluation metrics used for this task do not take into account some characteristics of the representation, such as its partial ordering structure. To this end, several heuristics to extract such partial dependencies are formulated, followed by the proposed evaluation methods, denoted Proportional Graph Matcher (PGM) and Conversion to Normal String Representation (Nor-Str), designed to better capture the accuracy of QDMR predictions. Experiments demonstrate the efficacy of the proposed evaluation methods and show the added value of one of them, Nor-Str, for better distinguishing between high- and low-quality QDMR when predicted by models such as COPYNET. This work represents an important step forward in the development of better evaluation methods for QDMR predictions, which will be critical for improving the accuracy and reliability of natural language question-answering systems.
Keywords: NLP, question answering, question decomposition meaning representation, QDMR evaluation metrics
5467 A Weighted K-Medoids Clustering Algorithm for Effective Stability in Vehicular Ad Hoc Networks
Authors: Rejab Hajlaoui, Tarek Moulahi, Hervé Guyennet
Abstract:
In a highway scenario, vehicle speeds can exceed 120 km/h. Therefore, any vehicle can enter or leave the network within a very short time. This mobility adversely affects network connectivity and decreases the lifetime of all established links. To ensure effective stability in vehicular ad hoc networks with minimal broadcast storms, we have developed a weighted algorithm based on the k-medoids clustering algorithm (WKCA). The number of clusters and the initial cluster heads are not selected randomly, as is usual, but by considering the available transmission range and the environment size. Then, to ensure optimal assignment of nodes to clusters in both k-medoids phases, the combined weight of each node is computed from additional metrics, including direction, relative speed, and proximity. Empirical results prove that, in addition to the convergence speed that characterizes the k-medoids algorithm, our proposed model outperforms both the AODV-Clustering and OLSR-Clustering protocols under different densities and velocities in terms of end-to-end delay, packet delivery ratio, and throughput.
Keywords: communication, clustering algorithm, k-medoids, sensor, vehicular ad hoc network
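The combined-weight assignment in the two k-medoids phases can be sketched with a plain k-medoids loop over a weighted dissimilarity; the weighting coefficients and the toy vehicle features below are illustrative assumptions, not the exact WKCA metrics:

```python
import random

def combined_distance(a, b, w_pos=0.5, w_speed=0.3, w_dir=0.2):
    """Weighted dissimilarity between two vehicles.

    Each node is (x, y, speed, direction), with direction in {+1, -1};
    the weights blending proximity, relative speed, and direction are
    illustrative choices.
    """
    prox = ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    rel_speed = abs(a[2] - b[2])
    dir_penalty = 0.0 if a[3] == b[3] else 1.0
    return w_pos * prox + w_speed * rel_speed + w_dir * dir_penalty

def k_medoids(nodes, k, iters=20, seed=0):
    rng = random.Random(seed)
    medoids = rng.sample(range(len(nodes)), k)
    clusters = {}
    for _ in range(iters):
        # Assignment phase: each node joins its nearest medoid (cluster head)
        clusters = {m: [] for m in medoids}
        for i, n in enumerate(nodes):
            best = min(medoids, key=lambda m: combined_distance(n, nodes[m]))
            clusters[best].append(i)
        # Update phase: the member minimizing total in-cluster distance
        # becomes the new medoid of its cluster
        new_medoids = []
        for m, members in clusters.items():
            best = min(members, key=lambda c: sum(
                combined_distance(nodes[c], nodes[o]) for o in members))
            new_medoids.append(best)
        if set(new_medoids) == set(medoids):
            break
        medoids = new_medoids
    return medoids, clusters

# Two well-separated traffic streams: (x, y, speed, direction)
nodes = [(0, 0, 100, 1), (1, 0, 100, 1), (0, 1, 105, 1),
         (500, 0, 60, -1), (501, 0, 62, -1), (500, 1, 60, -1)]
medoids, clusters = k_medoids(nodes, k=2)
```

With the two streams far apart in position, speed, and direction, the algorithm settles on one cluster head per stream.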
5466 Optimization of Effecting Parameters for the Removal of H₂S Gas in Self Priming Venturi Scrubber Using Response Surface Methodology
Authors: Manisha Bal, B. C. Meikap
Abstract:
The highly toxic and corrosive gas H₂S is recognized as one of the hazardous air pollutants with a significant effect on human health, and its abatement from air is very necessary. H₂S gas is mainly released from industries such as the paper and leather industries, as well as during the production of crude oil, wastewater treatment, etc. Emission of H₂S at high concentrations may cause immediate death, while lower concentrations can cause various respiratory problems. In the present study, a self-priming venturi scrubber is used to remove H₂S gas from air. Response surface methodology with a central composite design has been chosen to observe the effect of process parameters on the removal efficiency of H₂S. Experiments were conducted by varying the throat gas velocity, the liquid level in the outer cylinder, and the inlet H₂S concentration. An ANOVA test confirmed the significant effect of the parameters on the removal efficiency. A quadratic equation has been obtained which predicts the removal efficiency very well; the suitability of the developed model is judged by the high R² value obtained from the regression analysis. From the investigation, it was found that the throat gas velocity has the most significant effect, and the inlet H₂S concentration the least effect, on H₂S removal efficiency.
Keywords: desulfurization, pollution control, response surface methodology, venturi scrubber
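The quadratic response model at the core of RSM is fit by ordinary least squares. A minimal one-factor sketch in pure Python (the data points are synthetic, generated from a known quadratic, not the scrubber measurements):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_quadratic(xs, ys):
    """Least-squares fit of y = b0 + b1*x + b2*x^2 via the normal equations."""
    X = [[1.0, x, x * x] for x in xs]
    XtX = [[sum(X[i][r] * X[i][c] for i in range(len(X))) for c in range(3)]
           for r in range(3)]
    Xty = [sum(X[i][r] * ys[i] for i in range(len(X))) for r in range(3)]
    return solve(XtX, Xty)

# Synthetic removal-efficiency data generated from 60 + 8x - 0.5x^2
xs = [2.0, 4.0, 6.0, 8.0, 10.0]
ys = [60 + 8 * x - 0.5 * x * x for x in xs]
b0, b1, b2 = fit_quadratic(xs, ys)   # recovers ~ (60, 8, -0.5)
```

The real study fits a multi-factor quadratic over the central composite design and then screens terms with ANOVA, but the least-squares machinery is the same.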
5465 Assessing the Physical Conditions of Motorcycle Taxi Stands and Comfort Conditions of the Drivers in the Central Business District of Bangkok
Authors: Nissa Phloimontri
Abstract:
This research explores the current physical conditions of motorcycle taxi stands located near BTS stations in the central business district (CBD) of Bangkok and the comfort conditions of motorcycle taxi drivers. The criteria for the physical stand survey and assessment integrate multimodal access design guidelines. After the survey, stands that share similar characteristics are classified into a series of typologies. Based on the environmental comfort model, questionnaires and in-depth interviews are conducted to evaluate the comfort levels of drivers, including physical, functional, and psychological comfort. The results indicate that a number of motorcycle taxi stands are not up to standard and are not conducive to the work-related activities of drivers. The study concludes by recommending public policy for integrated paratransit stops that supports the multimodal transportation and seamless mobility concepts within the specific context of Bangkok, as well as promoting the quality of work life of motorcycle taxi drivers.
Keywords: motorcycle taxi, paratransit stops, environmental comfort, quality of work life
5464 Prediction of Structural Response of Reinforced Concrete Buildings Using Artificial Intelligence
Authors: Juan Bojórquez, Henry E. Reyes, Edén Bojórquez, Alfredo Reyes-Salazar
Abstract:
This paper addresses the use of artificial intelligence to obtain the structural reliability of reinforced concrete buildings. For this purpose, artificial neural networks (ANN) are developed to predict seismic demand hazard curves. In order to have enough input-output data to train the ANN, a set of reinforced concrete buildings (low-, mid-, and high-rise) is designed, and a probabilistic seismic hazard analysis is performed to obtain the seismic demand hazard curves. The results are then used as input-output data to train the ANN in a feedforward backpropagation model. The seismic demand hazard curves predicted by the ANN are then compared with those obtained by conventional methods. It is concluded that the computation time is significantly lower and that the predictions obtained from the ANN are accurate in comparison to the values obtained from the conventional methods.
Keywords: structural reliability, seismic design, machine learning, artificial neural network, probabilistic seismic hazard analysis, seismic demand hazard curves
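A feedforward network trained by backpropagation, as used here for curve prediction, can be sketched in miniature. The architecture, learning rate, and the one-dimensional decaying target curve below are illustrative stand-ins, not the networks or hazard data of the study:

```python
import math, random

def train_ann(data, hidden=8, lr=0.05, epochs=2000, seed=1):
    """Tiny 1-input, 1-output feedforward net trained with backpropagation."""
    rng = random.Random(seed)
    w1 = [rng.uniform(-1, 1) for _ in range(hidden)]   # input -> hidden weights
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-1, 1) for _ in range(hidden)]   # hidden -> output weights
    b2 = 0.0

    def forward(x):
        h = [math.tanh(w1[j] * x + b1[j]) for j in range(hidden)]
        y = sum(w2[j] * h[j] for j in range(hidden)) + b2
        return h, y

    for _ in range(epochs):
        for x, t in data:
            h, y = forward(x)
            err = y - t                              # dLoss/dy for 0.5*(y-t)^2
            for j in range(hidden):
                dh = err * w2[j] * (1 - h[j] ** 2)   # backprop through tanh
                w2[j] -= lr * err * h[j]
                b1[j] -= lr * dh
                w1[j] -= lr * dh * x
            b2 -= lr * err
    return lambda x: forward(x)[1]

# Learn a smooth decaying 1-D curve standing in for a seismic demand hazard curve
data = [(x / 10, math.exp(-x / 10)) for x in range(11)]
model = train_ann(data)
```

Once trained, evaluating the surrogate is essentially free, which is the source of the computation-time saving the paper reports.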
5463 A Study on Characteristics of Hedonic Price Models in Korea Based on Meta-Regression Analysis
Authors: Minseo Jo
Abstract:
The purpose of this paper is to examine which factors in hedonic price models have a significant impact in determining the price of apartments. Many variables are employed in hedonic price models, and their effectiveness varies according to the researchers and the regions being analysed. In order to consider these various conditions, meta-regression analysis has been selected for the study. Four meta-independent variables are drawn from 65 hedonic price models for analysis: the factors that influence apartment prices, the regions studied, the years in which the research was performed, and the coefficients of the functions employed. The covariance between the four meta-variables and the p-values of the coefficients, and between the four meta-variables and the number of data points used in the 65 hedonic price models, is analyzed in this study. The six factors most important in determining apartment prices are the positioning of the apartments, noise, orientation and views from the apartments, proximity to public transportation, the companies that constructed the apartments, and the social environment (such as schools).
Keywords: hedonic price model, housing price, meta-regression analysis, characteristics
5462 Interaction between Unsteady Supersonic Jet and Vortex Rings
Authors: Kazumasa Kitazono, Hiroshi Fukuoka, Nao Kuniyoshi, Minoru Yaga, Eri Ueno, Naoaki Fukuda, Toshio Takiya
Abstract:
The unsteady supersonic jet formed by a shock tube with a small high-pressure chamber was used as a simple alternative model for pulsed laser ablation. Understanding the vortex ring formed by the shock wave is crucial in clarifying the behavior of the unsteady supersonic jet discharged from an elliptical cell. Therefore, this study investigated the behavior of vortex rings and of the jet. The experiment and the numerical calculation were conducted using the schlieren method and by solving the axisymmetric two-dimensional compressible Navier–Stokes equations, respectively. In both the calculation and the experiment, ablation is conducted for a certain duration, followed by discharge through the exit. Moreover, a parametric study was performed to demonstrate the effect of the pressure ratio on the interaction between the vortex rings and the supersonic jet. The interaction between the supersonic jet and the vortex rings increased the velocity of the supersonic jet up to the magnitude of the velocity at the center of the vortex rings, and the interaction between the vortex rings increased the velocity at the center of each vortex ring.
Keywords: computational fluid dynamics, shock-wave, unsteady jet, vortex ring
5461 Non-Population Search Algorithms for Capacitated Material Requirement Planning in Multi-Stage Assembly Flow Shop with Alternative Machines
Authors: Watcharapan Sukkerd, Teeradej Wuttipornpun
Abstract:
This paper presents non-population search algorithms, namely tabu search (TS), simulated annealing (SA), and variable neighborhood search (VNS), to minimize the total cost of the capacitated MRP problem in a multi-stage assembly flow shop with two alternative machines. The algorithm has three main steps. Firstly, an initial sequence of orders is constructed by a simple due-date-based dispatching rule. Secondly, the sequence of orders is repeatedly improved to reduce the total cost by applying TS, SA, and VNS separately. Finally, the total cost is further reduced by optimizing the start time of each operation using a linear programming (LP) model. The parameters of the algorithm are tuned using real data from automotive companies. The results show that VNS significantly outperforms TS, SA, and the existing algorithm.
Keywords: capacitated MRP, tabu search, simulated annealing, variable neighborhood search, linear programming, assembly flow shop, application in industry
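Of the three metaheuristics, simulated annealing is the easiest to sketch: swap two orders, accept improvements always and worse moves with a temperature-dependent probability. The swap move and the toy due-date cost below are illustrative assumptions, not the paper's cost model:

```python
import math, random

def anneal(seq, cost, t0=10.0, cooling=0.995, iters=5000, seed=0):
    """Simulated annealing over order permutations; one move = swap two orders."""
    rng = random.Random(seed)
    best = cur = seq[:]
    best_c = cur_c = cost(cur)
    t = t0
    for _ in range(iters):
        i, j = rng.sample(range(len(cur)), 2)
        cand = cur[:]
        cand[i], cand[j] = cand[j], cand[i]
        cand_c = cost(cand)
        # Accept improvements always, worse moves with Boltzmann probability
        if cand_c < cur_c or rng.random() < math.exp((cur_c - cand_c) / t):
            cur, cur_c = cand, cand_c
            if cur_c < best_c:
                best, best_c = cur, cur_c
        t *= cooling
    return best, best_c

# Toy cost: total tardiness for unit-time orders with due dates
due = {0: 1, 1: 2, 2: 3, 3: 4}
def tardiness(seq):
    return sum(max(0, pos + 1 - due[o]) for pos, o in enumerate(seq))

best, c = anneal([3, 2, 1, 0], tardiness)
# The optimum sequences orders by due date (EDD), with zero tardiness
```

TS and VNS differ only in how the neighborhood is explored (a tabu list of recent moves, or systematically switched neighborhood structures) around the same sequence representation.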
5460 Effects of Dividend Policy on Firm Profitability and Growth in Light of Present Economic Conditions
Authors: Madani Chahinaz
Abstract:
This study aims to shed light on the impact of dividend policy on corporate profitability and its relationship to growth, considering current economic developments. The study was conducted on a sample of seven companies for the period from 2014 to 2020, based on a set of determinants used to select the variables affecting dividend distribution, with a descriptive analytical approach relying on graphical data models. The study concluded that companies that follow a well-studied dividend distribution policy enjoy higher profitability rates, which contributes to enhancing their growth under the prevailing economic conditions. There is no statistically significant relationship between profitability and the variables of total asset growth and fixed asset growth. The study also found a statistically significant relationship between the sales volume growth variable, the self-financing ratio variable, and dividend distribution at a significance level of 0.05; the random effects model was able to explain 68% of the variation in dividend distribution policy.
Keywords: dividend distribution policy, profitability, growth, self-financing ratio
5459 Modeling the Effect of Scale Deposition on Heat Transfer in Desalination Multi-Effect Distillation Evaporators
Authors: K. Bourouni, M. Chacha, T. Jaber, A. Tchantchane
Abstract:
In Multi-Effect Distillation (MED) desalination evaporators, the scale deposited on the outside of the tubes presents a barrier to heat transfer, reducing the overall heat transfer coefficient and causing a decrease in water production, hence a loss of efficiency and an increase in operating and maintenance costs. Scale removal (by acid cleaning) is the main maintenance operation and constitutes the major reason for periodic plant shutdowns. A better understanding of scale deposition mechanisms leads to an accurate determination of the variation of scale thickness around the tubes and improved accuracy in calculating the overall heat transfer coefficient. In this paper, a coupled heat transfer and calcium carbonate scale deposition model for a horizontal tube bundle is presented. The developed tool is used to determine precisely the heat transfer area, leading to a significant cost reduction for a given water production capacity. Simulations are carried out to investigate the influence of different parameters, such as water salinity and temperature, on the heat transfer.
Keywords: multi-effect-evaporator, scale deposition, water desalination, heat transfer coefficient
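The effect of a scale layer on the overall heat transfer coefficient follows from thermal resistances in series. A hedged flat-wall sketch (the film coefficients and conductivities are typical illustrative values, not the paper's data):

```python
def overall_htc(h_in, h_out, wall_t, k_wall, scale_t, k_scale):
    """Overall heat transfer coefficient (W/m^2.K), flat-wall approximation.

    Resistances in series: inside film, tube wall, scale layer, outside film.
    Tube curvature (area-ratio corrections) is neglected for simplicity.
    """
    r = 1 / h_in + wall_t / k_wall + scale_t / k_scale + 1 / h_out
    return 1 / r

# Illustrative values: condensing steam inside, evaporating brine outside,
# 1 mm steel wall, CaCO3 scale with k ~ 2.2 W/m.K
clean = overall_htc(5000, 9000, 1e-3, 16, 0.0, 2.2)
scaled = overall_htc(5000, 9000, 1e-3, 16, 0.5e-3, 2.2)  # 0.5 mm scale layer
# A half-millimetre scale layer already cuts U by roughly a third
```

Coupling this resistance calculation to a deposition-rate model is what lets the tool track U as the scale layer grows around the tube bundle.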
5458 The Application of FSI Techniques in Modeling of Realistic Pulmonary Systems
Authors: Abdurrahim Bolukbasi, Hassan Athari, Dogan Ciloglu
Abstract:
Modeling the lung respiratory system, which has complex anatomy and biophysics, presents several challenges, including tissue-driven flow patterns and wall motion. Moreover, because the lung structures stretch and recoil with each breath, the pulmonary system does not have static walls. The direct relationship between airflow and tissue motion in the lung structures naturally favors an FSI simulation technique. Therefore, the development of a coupled fluid-structure interaction (FSI) computational model is an important step toward realistic simulation of pulmonary breathing mechanics. A simple but physiologically relevant three-dimensional deep-lung geometry is designed, and an FSI coupling technique is utilized to simulate the deformation of the lung parenchyma tissue, which produces the airflow fields. The respiratory tissue system is investigated as a complex phenomenon with respect to respiratory patterns, fluid dynamics, tissue visco-elasticity, and tidal breathing period.
5457 Stripping of Flavour-Active Compounds from Aqueous Food Streams: Effect of Liquid Matrix on Vapour-Liquid Equilibrium in a Beer-Like Solution
Authors: Ali Ammari, Karin Schroen
Abstract:
In the brewing industry, stripping is a downstream process used to separate volatiles from beer. Due to physicochemical similarities between flavour components, the selectivity of this method is not favourable. Besides, the presence of non-volatile compounds such as proteins and carbohydrates may affect the separation of flavours due to their retaining properties. Using a stripping column with structured packing coupled with gas chromatography, the overall mass transfer coefficients, along with the corresponding equilibrium data, were investigated in this work for a model solution consisting of water, ethanol, ethyl acetate, and isoamyl acetate. Static headspace analysis was also employed to derive equilibrium data for the flavours in the presence of beer dry matter. As expected, ethanol and dry matter showed retention properties; however, the effect of viscosity on the mass transfer coefficient was discounted, given that the viscosity of the solution decreased during stripping. The effects of ethanol and beer dry matter were mapped for use in stripping column design.
Keywords: flavour, headspace, Henry’s coefficient, mass transfer coefficient, stripping
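Stripping performance for each flavour compound is governed by its vapour-liquid partition (Henry) coefficient through the stripping factor S = K·G/L. A minimal sketch using the classical Kremser equation (the stripping factors and stage count are hypothetical, not measured values from this work):

```python
def kremser_removal(stripping_factor: float, n_stages: int) -> float:
    """Fraction of a volatile removed in a countercurrent stripping column.

    Kremser equation with solute-free entering gas; the stripping factor is
    S = K*G/L (partition coefficient K, gas and liquid molar flows G and L).
    """
    s = stripping_factor
    if abs(s - 1.0) < 1e-12:
        return n_stages / (n_stages + 1)
    return (s ** (n_stages + 1) - s) / (s ** (n_stages + 1) - 1)

# Hypothetical comparison: a volatile flavour ester vs. ethanol in one column
ester_removal = kremser_removal(stripping_factor=3.0, n_stages=5)
ethanol_removal = kremser_removal(stripping_factor=0.8, n_stages=5)
# The more volatile ester is stripped far more completely than ethanol
```

Matrix effects such as beer dry matter enter this picture by shifting the effective partition coefficient K, which is exactly what the headspace measurements quantify.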
5456 Benders Decomposition Approach to Solve the Hybrid Flow Shop Scheduling Problem
Authors: Ebrahim Asadi-Gangraj
Abstract:
The hybrid flow shop scheduling problem (HFS) involves sequencing in a flow shop where, at any stage, there exist one or more related or unrelated parallel machines. This production system is a common manufacturing environment in many real industries, such as the steel, ceramic tile, and car assembly industries. In this research, a mixed integer linear programming (MILP) model is presented for the hybrid flow shop scheduling problem, in which the objective is to minimize the maximum completion time (makespan). For this purpose, a Benders Decomposition (BD) method is developed to solve the problem. The proposed approach is tested on test problems of small to moderate scale. The experimental results show that the Benders decomposition approach can solve the hybrid flow shop scheduling problem in a reasonable time, especially for small and moderate-size test problems.
Keywords: hybrid flow shop, mixed integer linear programming, Benders decomposition, makespan
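Evaluating the makespan of a candidate schedule is the basic building block of any HFS method. A greedy list-scheduling sketch (a heuristic illustration of the problem structure, not the MILP or the Benders decomposition itself):

```python
import heapq

def hfs_makespan(proc, machines):
    """Greedy makespan estimate for a hybrid flow shop.

    proc[j][s] is the processing time of job j at stage s; machines[s] is
    the number of identical parallel machines at stage s. Jobs are taken
    in index order and assigned to the earliest-free machine at each stage.
    """
    n_jobs, n_stages = len(proc), len(machines)
    ready = [0.0] * n_jobs                # completion time of previous stage
    for s in range(n_stages):
        free = [0.0] * machines[s]        # min-heap of machine-free times
        heapq.heapify(free)
        for j in range(n_jobs):
            t = heapq.heappop(free)       # earliest-available machine
            start = max(t, ready[j])      # wait for machine AND previous stage
            ready[j] = start + proc[j][s]
            heapq.heappush(free, ready[j])
    return max(ready)

# Three jobs, two stages; stage 1 has two parallel machines, stage 2 has one
proc = [[2, 3], [2, 2], [4, 1]]
makespan = hfs_makespan(proc, [2, 1])    # 8.0 for this instance
```

In the exact approach, a master problem fixes the discrete assignment/sequencing decisions and a subproblem of this timing type generates the Benders cuts.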
5455 Intelligent Quality Management System on the Example of Bread Baking
Authors: Irbulat Utepbergenov, Lyazzat Issabekova, Shara Toybayeva
Abstract:
This article discusses quality management using the bread baking process as an example. The baking process must be strictly controlled and repeatable; automation and monitoring through a quality management system (QMS) can help. After baking, quality control of the finished product should be carried out, which may include evaluation of appearance, weight, texture, and flavor. It is important to work continuously on improving processes and products based on data and feedback from the quality management system. A method and model of automated quality management and an intelligent automated management system based on intelligent technologies are proposed, which allow the processes of QMS implementation and support to be automated and improve the validity, efficiency, and effectiveness of management decisions by automating a number of functions of decision-makers and staff. This project is supported by the grant of the Ministry of Education and Science of the Republic of Kazakhstan (Zhas Galym project No. AR 13268939 Research and development of digital technologies to ensure consistency of the carriers of normative documents of the quality management system).
Keywords: automated control system, quality management, efficiency evaluation, bakery oven, intelligent system
5454 A Generative Adversarial Framework for Bounding Confounded Causal Effects
Authors: Yaowei Hu, Yongkai Wu, Lu Zhang, Xintao Wu
Abstract:
Causal inference from observational data is finding wide application in many fields. However, unidentifiable situations, where causal effects cannot be uniquely computed from observational data, pose critical barriers to applying causal inference to complicated real applications. In this paper, we develop a bounding method for estimating the average causal effect (ACE) under unidentifiability due to hidden confounders. We propose to parameterize the unknown exogenous random variables and structural equations of a causal model using neural networks and implicit generative models. Then, within an adversarial learning framework, we search the parameter space to explicitly traverse causal models that agree with the given observational distribution and find those that minimize or maximize the ACE, obtaining its lower and upper bounds. The proposed method makes no assumptions about the data-generating process or the types of the variables. Experiments using both synthetic and real-world datasets show the effectiveness of the method.
Keywords: average causal effect, hidden confounding, bound estimation, generative adversarial learning
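As a far simpler point of comparison, the classical Manski no-assumptions bounds show how an unidentifiable ACE can still be bracketed from observational quantities alone. This is a swapped-in textbook illustration of the bounding idea, not the adversarial method of the paper:

```python
def manski_bounds(p_y1_t1, p_t1, p_y1_t0):
    """Manski's no-assumptions bounds on the ACE for binary treatment/outcome.

    Inputs are the observable quantities P(Y=1|T=1), P(T=1), P(Y=1|T=0).
    The unobserved counterfactual outcome probabilities are replaced by
    their extreme values 0 and 1, giving worst-case bounds on the ACE.
    """
    p_t0 = 1 - p_t1
    # E[Y(1)] lies in [P(Y=1|T=1)P(T=1), P(Y=1|T=1)P(T=1) + P(T=0)]
    ey1_lo, ey1_hi = p_y1_t1 * p_t1, p_y1_t1 * p_t1 + p_t0
    # Symmetrically for E[Y(0)]
    ey0_lo, ey0_hi = p_y1_t0 * p_t0, p_y1_t0 * p_t0 + p_t1
    return ey1_lo - ey0_hi, ey1_hi - ey0_lo

lo, hi = manski_bounds(p_y1_t1=0.7, p_t1=0.5, p_y1_t0=0.4)
# For binary outcomes the no-assumptions interval always has width 1
```

The adversarial framework in the paper tightens such intervals by searching only over causal models that reproduce the full observational distribution.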
5453 Measurement of Operational and Environmental Performance of the Coal-Fired Power Plants in India by Using Data Envelopment Analysis
Authors: Vijay Kumar Bajpai, Sudhir Kumar Singh
Abstract:
In this study, performance analyses of twenty-five coal-fired power plants (CFPPs) used for electricity generation are carried out through various data envelopment analysis (DEA) models. Three efficiency indices are defined and pursued. In the calculation of operational performance, energy and non-energy variables are used as inputs, and net electricity produced is used as the desired output. CO2 emitted to the environment is used as the undesired output in the computation of pure environmental performance, while in Model 3, CO2 emissions are considered a detrimental input in the calculation of combined operational and environmental performance. Empirical results show that most of the plants are operating in the increasing-returns-to-scale region and that the Mettur plant is the efficient one with regard to energy use and the environment. The results also indicate that the undesirable-output effect is insignificant in the research sample. The present study will provide clues to plant operators for raising the operational and environmental performance of CFPPs.
Keywords: coal fired power plants, environmental performance, data envelopment analysis, operational performance
5452 Navigating the Future: Evaluating the Market Potential and Drivers for High-Definition Mapping in the Autonomous Vehicle Era
Authors: Loha Hashimy, Isabella Castillo
Abstract:
In today's rapidly evolving technological landscape, the importance of precise navigation and mapping systems cannot be overstated. As various sectors undergo transformative changes, the market potential for Advanced Mapping and Management Systems (AMMS) emerges as a critical focus area. The Galileo/GNSS-Based Autonomous Mobile Mapping System (GAMMS) project, specifically targeted toward high-definition mapping (HDM), endeavours to provide insights into this market within the broader context of the geomatics and navigation fields. With the growing integration of Autonomous Vehicles (AVs) into our transportation systems, the relevance and demand for sophisticated mapping solutions like HDM have become increasingly pertinent. The research employed a meticulous, lean, stepwise, and interconnected methodology to ensure a comprehensive assessment. Beginning with the identification of pivotal project results, the study progressed into a systematic market screening. This was complemented by an exhaustive desk research phase that delved into existing literature, data, and trends. To ensure the holistic validity of the findings, extensive consultations were conducted. Academia and industry experts provided invaluable insights through interviews, questionnaires, and surveys. This multi-faceted approach facilitated a layered analysis, juxtaposing secondary data with primary inputs, ensuring that the conclusions were both accurate and actionable. Our investigation unearthed a plethora of drivers steering the HD maps landscape. These ranged from technological leaps, nuanced market demands, and influential economic factors to overarching socio-political shifts. The meteoric rise of AVs and the shift towards app-based transportation solutions, such as Uber, stood out as significant market pull factors.
A nuanced PESTEL analysis further enriched our understanding, shedding light on political, economic, social, technological, environmental, and legal facets influencing the HD maps market trajectory. Simultaneously, potential roadblocks were identified. Notable among these were barriers related to high initial costs, concerns around data quality, and the challenges posed by a fragmented and evolving regulatory landscape. The GAMMS project serves as a beacon, illuminating the vast opportunities that lie ahead for the HD mapping sector. It underscores the indispensable role of HDM in enhancing navigation, ensuring safety, and providing pinpoint, accurate location services. As our world becomes more interconnected and reliant on technology, HD maps emerge as a linchpin, bridging gaps and enabling seamless experiences. The research findings accentuate the imperative for stakeholders across industries to recognize and harness the potential of HD mapping, especially as we stand on the cusp of a transportation revolution heralded by Autonomous Vehicles and advanced geomatic solutions.
Keywords: high-definition mapping (HDM), autonomous vehicles, PESTEL analysis, market drivers
5451 Implicit Force Control of a Position Controlled Robot - A Comparison with Explicit Algorithms
Authors: Alexander Winkler, Jozef Suchý
Abstract:
This paper investigates simple implicit force control algorithms realizable with industrial robots. Many previously published approaches are difficult to implement in commercial robot controllers, because access to the robot joint torques is necessary or the complete dynamic model of the manipulator is used. In the past, we already dealt with explicit force control of a position-controlled robot. Well-known schemes of implicit force control are stiffness control, damping control, and impedance control. Using such algorithms, the contact force cannot be set directly; rather, it results from the controller impedance, the environment impedance, and the commanded robot motion/position. The relationships between these properties are worked out in detail in this paper for the chosen implicit approaches, which have been adapted to be implementable on a position-controlled robot. The behaviors of stiffness control and damping control are verified by practical experiments; for this purpose, a suitable test bed was configured. Using the full mechanical impedance within the controller structure is not practical when the robot is in physical contact with the environment; this fact is verified by simulation.
Keywords: robot force control, stiffness control, damping control, impedance control, stability
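For stiffness control, the way the contact force "results from" controller and environment impedance is easy to see in one dimension at steady state: the controller spring and the environment spring act in series. A minimal sketch (all stiffness values and positions are illustrative assumptions, not the paper's experimental setup):

```python
def contact_force(k_ctrl, k_env, x_cmd, x_wall):
    """Steady-state contact force under 1-D stiffness control.

    The controller behaves as a spring of stiffness k_ctrl pulling the
    end-effector toward the commanded position x_cmd; the environment is
    a spring of stiffness k_env located at x_wall. The force follows from
    the equilibrium of the two springs in series.
    """
    if x_cmd <= x_wall:
        return 0.0                      # no contact established
    # Equilibrium position x: k_ctrl*(x_cmd - x) = k_env*(x - x_wall)
    x = (k_ctrl * x_cmd + k_env * x_wall) / (k_ctrl + k_env)
    return k_env * (x - x_wall)

# Commanding 5 mm "into" a stiff wall with a compliant controller
f = contact_force(k_ctrl=1e4, k_env=1e6, x_cmd=0.005, x_wall=0.0)
# f is close to k_ctrl * 5 mm, since the stiff environment barely deflects
```

This illustrates the central point of the abstract: the achieved force is set indirectly, through the commanded position and the two stiffnesses, not as a direct setpoint.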
5450 A New Distributed Computing Environment Based On Mobile Agents for Massively Parallel Applications
Authors: Fatéma Zahra Benchara, Mohamed Youssfi, Omar Bouattane, Hassan Ouajji, Mohamed Ouadi Bensalah
Abstract:
In this paper, we propose a new distributed environment for High Performance Computing (HPC) based on mobile agents. It allows us to execute parallel programs as distributed ones over a flexible grid constituted by a cooperative team of mobile agents. The distributed program to be performed is encapsulated in a team-leader agent, which deploys its team workers as Agent Virtual Processing Units (AVPUs). Each AVPU is asked to perform its assigned tasks and to provide the computational results; this makes the management of data and team tasks difficult for the team-leader agent and influences the computing performance. In this work, we focused on the implementation of the Mobile Provider Agent (MPA) in order to manage the distribution of data and instructions and to ensure a load-balancing model. It also grants some interesting mechanisms for managing the other computing challenges, thanks to the several skills of the mobile agents.
Keywords: image processing, distributed environment, mobile agents, parallel and distributed computing
Procedia PDF Downloads 4125449 Examining Influence of The Ultrasonic Power and Frequency on Microbubbles Dynamics Using Real-Time Visualization of Synchrotron X-Ray Imaging: Application to Membrane Fouling Control
Authors: Masoume Ehsani, Ning Zhu, Huu Doan, Ali Lohi, Amira Abdelrasoul
Abstract:
Membrane fouling poses severe challenges in membrane-based wastewater treatment applications. Ultrasound (US) has been considered an effective fouling remediation technique in filtration processes. Bubble cavitation in the liquid medium results from the alternating rarefaction and compression cycles during US irradiation at sufficiently high acoustic pressure. Cavitation microbubbles generated under US irradiation can cause eddy currents and turbulent flow within the medium, either by oscillating or by discharging energy into the system through microbubble explosion. The turbulent flow regime and the shear forces created close to the membrane surface disturb the cake layer and dislodge the foulants, which in turn improves the cleaning efficiency and filtration performance. Therefore, the number, size, velocity, and oscillation pattern of the microbubbles created in the liquid medium play a crucial role in foulant detachment and permeate flux recovery. The goal of the current study is to gain an in-depth understanding of the influence of the US power intensity and frequency on the dynamics and characteristics of the microbubbles generated under US irradiation. In comparison with other imaging techniques, the synchrotron in-line Phase Contrast Imaging technique at the Canadian Light Source (CLS) allows in-situ observation and real-time visualization of microbubble dynamics. At the CLS biomedical imaging and therapy (BMIT) polychromatic beamline, the effective parameters were optimized to enhance the contrast of the gas/liquid interface and ensure the accuracy of the qualitative and quantitative analysis of bubble cavitation within the system. With the high photon flux and the high-speed camera, a high projection speed was achieved, and each projection of microbubbles in water was captured in 0.5 ms. ImageJ software was used for post-processing the raw images for the detailed quantitative analyses of the microbubbles.
The imaging was performed at US power intensity levels of 50 W, 60 W, and 100 W, and at US frequency levels of 20 kHz, 28 kHz, and 40 kHz. Over an imaging duration of 2 seconds, the effect of US power and frequency on the average number and size of bubbles and on the fraction of the area occupied by bubbles was analyzed. The dynamics of the microbubbles, in terms of their velocity in water, were also investigated. As the US power increased from 50 W to 100 W, the average bubble number increased from 746 to 880 and the average bubble diameter from 36.7 µm to 48.4 µm. In terms of the influence of US frequency, fewer bubbles were created at 20 kHz (an average of 176 bubbles rather than the 808 bubbles observed at 40 kHz), while the average bubble size was significantly larger than at 40 kHz (almost seven times). The majority of bubbles were captured close to the membrane surface in the filtration unit. According to these observations, membrane cleaning efficiency is expected to improve at higher US power and lower US frequency, due to the higher energy released into the system by increasing the number of bubbles or growing their size during oscillation (the optimum condition is expected to be at 20 kHz and 100 W). Keywords: bubble dynamics, cavitational bubbles, membrane fouling, ultrasonic cleaning
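The per-frame quantities reported above (bubble count, bubble size, occupied area fraction) come from segmenting each projection into connected bright regions. A minimal stand-in for that ImageJ post-processing step, using plain connected-component labeling on a toy binary frame (thresholding and the pixel-to-µm calibration are omitted; this is an illustration, not the study's pipeline):

```python
from collections import deque

def label_bubbles(img):
    """Count connected bright regions (candidate bubbles) in a binary
    image and return their pixel areas (4-connectivity assumed)."""
    rows, cols = len(img), len(img[0])
    seen = [[False] * cols for _ in range(rows)]
    areas = []
    for r in range(rows):
        for c in range(cols):
            if img[r][c] and not seen[r][c]:
                area, q = 0, deque([(r, c)])
                seen[r][c] = True
                while q:                      # breadth-first flood fill
                    y, x = q.popleft()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                           and img[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                areas.append(area)
    return areas

frame = [[0, 1, 1, 0, 0],
         [0, 1, 1, 0, 1],
         [0, 0, 0, 0, 1],
         [1, 0, 0, 0, 0]]
areas = label_bubbles(frame)                        # pixel area per bubble
fraction = sum(areas) / (len(frame) * len(frame[0]))  # occupied area fraction
```

Given the pixel size from beamline calibration, each area A converts to an equivalent bubble diameter via d = 2·sqrt(A/π).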
Procedia PDF Downloads 1555448 Determination of LS-DYNA MAT162 Material input Parameters for Low Velocity Impact Analysis of Layered Composites
Authors: Mustafa Albayrak, Mete Onur Kaman, Ilyas Bozkurt
Abstract:
In this study, the material parameters necessary for conducting progressive damage analysis of layered composites under low velocity impact with the MAT162 material module of the LS-DYNA program were determined. The MAT162 material module, based on the Hashin failure criterion, requires 34 parameters in total. Some of these parameters were obtained directly from dynamic and quasi-static mechanical tests, and the remainder were calibrated by comparing numerical and experimental results. Woven glass/epoxy, produced by the vacuum infusion method, was used as the composite material. In the numerical model, the composites are modeled as three-dimensional and layered. As a result, the acquisition of the MAT162 material module parameters that enable progressive damage analysis is presented in detail, step by step, and the methods for selecting the parameters are explained. Numerical results consistent with the experimental ones are given in graphs. Keywords: composite impact, finite element simulation, progressive damage analysis, LS-DYNA, MAT162
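The calibration part of the workflow, in which the remaining MAT162 parameters are tuned by comparing numerical and experimental results, can be sketched as a simple search loop. The `response` function below is a hypothetical closed-form stand-in for an LS-DYNA run, and all parameter names and values are illustrative assumptions:

```python
def response(softening_m, load):
    """Hypothetical stand-in for one LS-DYNA run: predicted peak force
    for a candidate softening parameter. In the real workflow this
    value comes from a finite element simulation, not a formula."""
    return load / (1.0 + softening_m)

def calibrate(experimental_peak, load, candidates):
    """Pick the candidate parameter whose simulated response best
    matches the experiment -- the calibration loop in essence."""
    return min(candidates,
               key=lambda m: abs(response(m, load) - experimental_peak))

best = calibrate(experimental_peak=2.0, load=8.0,
                 candidates=[0.5, 1.0, 2.0, 3.0, 4.0])
```

In practice each candidate evaluation is a full impact simulation, so the search is usually a coarse sweep refined by hand rather than a dense grid.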
Procedia PDF Downloads 1135447 Performances of Type-2 Fuzzy Logic Control and Neuro-Fuzzy Control Based on DPC for Grid Connected DFIG with Fixed Switching Frequency
Authors: Fayssal Amrane, Azeddine Chaiba
Abstract:
In this paper, type-2 fuzzy logic control (T2FLC) and neuro-fuzzy control (NFC) of a doubly fed induction generator (DFIG), based on direct power control (DPC) with a fixed switching frequency, are proposed for wind generation applications. First, a mathematical model of the doubly fed induction generator implemented in the d-q reference frame is derived. Then, a DPC algorithm for controlling the active and reactive power of the DFIG at a fixed switching frequency is implemented using PID control. The performances of the T2FLC and NFC, both based on the DPC algorithm, are investigated and compared to those obtained with the PID controller. Finally, simulation results demonstrate that the NFC is more robust and offers superior dynamic performance for wind power generation applications. Keywords: doubly fed induction generator (DFIG), direct power control (DPC), neuro-fuzzy control (NFC), maximum power point tracking (MPPT), space vector modulation (SVM), type-2 fuzzy logic control (T2FLC)
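The PID loop that the DPC scheme uses to drive the measured power toward its reference can be sketched as a discrete update; the first-order plant standing in for the DFIG power response and all gains below are illustrative assumptions, not the paper's tuned controller:

```python
def pid_step(error, state, kp, ki, kd, dt):
    """One discrete PID update: proportional, integral, and
    derivative terms computed from the tracking error."""
    integral = state["i"] + error * dt
    deriv = (error - state["e"]) / dt
    state["i"], state["e"] = integral, error
    return kp * error + ki * integral + kd * deriv

# Toy first-order plant standing in for the DFIG active-power response.
state = {"i": 0.0, "e": 0.0}
p, p_ref, dt = 0.0, 1.0, 1e-3       # per-unit power, reference, step
for _ in range(5000):               # 5 s of simulated regulation
    u = pid_step(p_ref - p, state, kp=2.0, ki=50.0, kd=0.0, dt=dt)
    p += dt * (u - p)               # plant: dp/dt = u - p
```

The integral term removes the steady-state error, so the regulated power settles on the reference; in the paper, the T2FLC and NFC replace exactly this PID block inside the DPC structure.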
Procedia PDF Downloads 4235446 An Analysis of a Queueing System with Heterogeneous Servers Subject to Catastrophes
Authors: M. Reni Sagayaraj, S. Anand Gnana Selvam, R. Reynald Susainathan
Abstract:
This study analyzes a queueing system with blocking and no waiting line. Customers arrive according to a Poisson process, and the service times follow an exponential distribution. There are two non-identical servers in the system. The queue discipline is FCFS, and customers select the servers on a fastest-server-first (FSF) basis. The service times are exponentially distributed with parameters μ1 and μ2 at servers I and II, respectively. In addition, catastrophes occur in the system in a Poisson manner with rate γ. When server I is busy or blocked, a customer who arrives in the system leaves without being served; such customers are called lost customers. The probability of losing a customer is computed for the system. The explicit time-dependent probabilities of the system size are obtained, and a numerical example is presented to show the managerial insights of the model. Finally, the probability that an arriving customer finds the system busy and the average number of busy servers in steady state are obtained numerically. Keywords: queueing system, blocking, Poisson process, heterogeneous servers, queue discipline FCFS, busy period
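One illustrative reading of such a model is a four-state continuous-time Markov chain (empty, server I busy, server II busy, both busy) with FSF routing and catastrophes that empty the system at rate γ. The sketch below builds the generator and solves for the steady state; it is an assumption-laden simplification for illustration, not the authors' exact formulation:

```python
def steady_state(lam, mu1, mu2, gamma):
    """Steady state of a 4-state CTMC sketch of the heterogeneous
    two-server loss system with catastrophes: states 0 = empty,
    1 = server I busy, 2 = server II busy, 3 = both busy; FSF routing
    sends an arrival to server I first."""
    n = 4
    Q = [[0.0] * n for _ in range(n)]
    def add(i, j, rate):
        Q[i][j] += rate
        Q[i][i] -= rate
    add(0, 1, lam); add(1, 3, lam); add(2, 3, lam)   # arrivals
    add(1, 0, mu1); add(3, 2, mu1)                   # server I completes
    add(2, 0, mu2); add(3, 1, mu2)                   # server II completes
    for s in (1, 2, 3):                              # catastrophe empties
        add(s, 0, gamma)
    # Solve pi Q = 0 with sum(pi) = 1 by Gauss-Jordan elimination.
    A = [[Q[j][i] for j in range(n)] for i in range(n)]  # transpose
    A[n - 1] = [1.0] * n                             # normalization row
    b = [0.0] * (n - 1) + [1.0]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(n):
            if r != col and A[r][col]:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * c for a, c in zip(A[r], A[col])]
                b[r] -= f * b[col]
    return [b[i] / A[i][i] for i in range(n)]

pi = steady_state(lam=1.0, mu1=2.0, mu2=1.0, gamma=0.1)
p_loss = pi[3]      # an arriving customer finds both servers busy
```

With λ = 1, μ1 = 2, μ2 = 1, and γ = 0.1, roughly 10% of arrivals find both servers busy in this simplified chain.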
Procedia PDF Downloads 5075445 Decoding Kinematic Characteristics of Finger Movement from Electrocorticography Using Classical Methods and Deep Convolutional Neural Networks
Authors: Ksenia Volkova, Artur Petrosyan, Ignatii Dubyshkin, Alexei Ossadtchi
Abstract:
Brain-computer interfaces are a growing research field that has produced many implementations finding use in research and practical applications. Despite the popularity of implementations based on non-invasive neuroimaging methods, a radical improvement in channel bandwidth and, thus, decoding accuracy is only possible with invasive techniques. Electrocorticography (ECoG) is a minimally invasive neuroimaging method that provides highly informative brain activity signals, whose effective analysis requires machine learning methods able to learn representations of complex patterns. Deep learning is a family of machine learning algorithms that learn representations of data with multiple levels of abstraction. This study explores the potential of deep learning approaches for ECoG processing, decoding movement intentions and the perception of proprioceptive information. To obtain synchronous recordings of kinematic movement characteristics and the corresponding electrical brain activity, a series of experiments was carried out in which subjects performed finger movements at their own pace. Finger movements were recorded with a three-axis accelerometer, while ECoG was synchronously registered from electrode strips implanted over the contralateral sensorimotor cortex. Multichannel ECoG signals were then used to track the finger movement trajectory characterized by the accelerometer signal. This was done both causally and non-causally, using different positions of the ECoG data segment with respect to the accelerometer data stream. The recorded data were split into training and testing sets containing continuous non-overlapping fragments of the multichannel ECoG. A deep convolutional neural network was implemented and trained using 1-second segments of ECoG data from the training dataset as input.
To assess the decoding accuracy, the correlation coefficient r between the output of the model and the accelerometer readings was computed. After hyperparameter optimization and training, the deep learning model allowed reasonably accurate causal decoding of finger movement, with a correlation coefficient of r = 0.8. In contrast, the classical Wiener-filter-like approach achieved only r = 0.56 in the causal decoding mode. In the non-causal case, the traditional approach reached an accuracy of r = 0.69, which may be due to the presence of additional proprioceptive information. This result demonstrates that the deep neural network was able to effectively find a representation of the complex top-down information related to the actual movement rather than proprioception. The sensitivity analysis shows physiologically plausible pictures of the extent to which individual features (channel, wavelet subband) are utilized during the decoding procedure. In conclusion, the results of this study demonstrate that combining a minimally invasive neuroimaging technique such as ECoG with advanced machine learning approaches allows decoding motion with high accuracy. Such a setup provides means for the control of devices with a large number of degrees of freedom, as well as for exploratory studies of the complex neural processes underlying movement execution. Keywords: brain-computer interface, deep learning, ECoG, movement decoding, sensorimotor cortex
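The accuracy metric used throughout, the Pearson correlation coefficient r between the decoded trajectory and the accelerometer signal, is straightforward to compute; the toy arrays below are illustrative, not the study's data:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length
    sequences: covariance divided by the product of standard
    deviations (up to the common 1/n factors, which cancel)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# A decoder that tracks the true movement up to small noise scores
# close to 1 (hypothetical values, not experimental data).
truth   = [0.0, 0.2, 0.8, 1.0, 0.7, 0.1]
decoded = [0.1, 0.3, 0.7, 0.9, 0.8, 0.0]
r = pearson_r(decoded, truth)
```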
Procedia PDF Downloads 1835444 An Evolutionary Multi-Objective Optimization for Airport Gate Assignment Problem
Authors: Seyedmirsajad Mokhtarimousavi, Danial Talebi, Hamidreza Asgari
Abstract:
The Gate Assignment Problem (GAP) is one of the most substantial issues in airport operation. In principle, GAP intends to maintain the maximum capacity of the airport through the best possible allocation of resources (gates) in order to reach the optimum outcome. The problem involves a wide range of dependent and independent resources and their limitations, which add to the complexity of GAP from both theoretical and practical perspectives. In this study, GAP was mathematically formulated as a three-objective problem. The goal of the multi-objective formulation was to address a higher number of objectives that can be optimized simultaneously and thereby increase the practical efficiency of the final solution. The problem is solved by applying the second version of the Non-dominated Sorting Genetic Algorithm (NSGA-II). Results showed that the proposed mathematical model could address most of the major criteria in the decision-making process of airport management, in terms of minimizing both airport/airline costs and passenger walking distance. Moreover, the proposed approach was able to find acceptable feasible solutions. Keywords: airport management, gate assignment problem, mathematical modeling, genetic algorithm, NSGA-II
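The ranking step at the heart of NSGA-II, extracting the non-dominated (Pareto) front from a set of objective vectors, can be sketched in a few lines; the (cost, walking distance) pairs below are hypothetical, not solutions from the study:

```python
def dominates(a, b):
    """a dominates b if it is no worse on every objective and strictly
    better on at least one (all objectives minimized)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def first_front(points):
    """Extract the first non-dominated front -- the core ranking step
    of NSGA-II applied to the objective vectors of a population."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (airline/airport cost, passenger walking distance) pairs.
solutions = [(4, 1), (2, 3), (3, 2), (5, 5), (2, 4)]
front = first_front(solutions)
```

NSGA-II repeats this sorting to assign each individual a front rank, then uses crowding distance within each front to preserve diversity along the trade-off surface.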
Procedia PDF Downloads 3035443 Techno-Economic Assessment of Distributed Heat Pumps Integration within a Swedish Neighborhood: A Cosimulation Approach
Authors: Monica Arnaudo, Monika Topel, Bjorn Laumert
Abstract:
Within the Swedish context, the current trend of relatively low electricity prices promotes the electrification of the energy infrastructure. The residential heating sector takes part in this transition through a proposed switch from a centralized district heating system towards a distributed heat-pump-based setting. When it comes to urban environments, two issues arise. The first, seen from an electricity-sector perspective, is that existing networks are limited with regard to their installed capacities: additional electric loads, such as heat pumps, can cause severe overloads on crucial network elements. The second, seen from a heating-sector perspective, is that indoor comfort conditions can become difficult to maintain when the operation of the heat pumps is limited by the risk of overloading the distribution grid. Furthermore, the uncertainty of future electricity market prices introduces an additional variable. This study aims to assess the extent to which distributed heat pumps can penetrate an existing heat energy network while respecting the technical limitations of the electricity grid and the thermal comfort levels in the buildings. In order to account for the multi-disciplinary nature of this research question, a cosimulation modeling approach was adopted, in which each energy technology is modeled in its own customized simulation environment. As part of the cosimulation methodology, a steady-state power flow analysis in pandapower was used to model the electrical distribution grid, a thermal balance model of a reference building was implemented in EnergyPlus to account for space heating, and a fluid-cycle model of a heat pump was implemented in JModelica to account for the actual heating technology.
With the models in place, different scenarios based on forecasted electricity market prices were developed for both present and future conditions of Hammarby Sjöstad, a neighborhood located in the south-east of Stockholm (Sweden). For each scenario, the technical and comfort conditions were assessed. Additionally, the average cost of heat generation was estimated in terms of the levelized cost of heat. This indicator enables a techno-economic comparison among the different scenarios. In order to evaluate the levelized cost of heat, a yearly performance simulation of the energy infrastructure was implemented. The scenarios based on current electricity prices show that distributed heat pumps can replace the district heating system by covering up to 30% of the heating demand. By lowering the minimum accepted indoor temperature of the apartments by 2°C, this level of penetration can increase up to 40%. In the future scenarios, if electricity prices increase, as is most likely expected within the next decade, the penetration of distributed heat pumps may be limited to 15%. In terms of the levelized cost of heat, a residential heat pump technology becomes competitive only in a scenario of decreasing electricity prices; in that case, the district heating system has an average cost of heat generation 7% higher than the distributed heat pumps option. Keywords: cosimulation, distributed heat pumps, district heating, electrical distribution grid, integrated energy systems
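The levelized cost of heat used for the techno-economic comparison is the ratio of discounted lifetime costs to discounted heat delivered. A minimal sketch of the standard definition, with illustrative inputs rather than the study's data:

```python
def lcoh(capex, annual_opex, annual_heat_mwh, years, discount_rate):
    """Levelized cost of heat: discounted lifetime costs divided by
    discounted heat delivered, in currency per MWh."""
    costs = capex                     # investment paid at year 0
    heat = 0.0
    for t in range(1, years + 1):
        d = (1.0 + discount_rate) ** t
        costs += annual_opex / d      # discounted operating cost
        heat += annual_heat_mwh / d   # discounted heat output
    return costs / heat

# Hypothetical residential heat pump: 12 kEUR capex, 900 EUR/yr opex
# (dominated by electricity), 15 MWh heat per year, 20-year life, 5%.
hp = lcoh(capex=12000.0, annual_opex=900.0,
          annual_heat_mwh=15.0, years=20, discount_rate=0.05)
```

Because the annual opex of a heat pump is dominated by electricity purchases, the resulting LCOH moves almost directly with the electricity price scenarios, which is why the competitiveness conclusion flips between the price trajectories.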
Procedia PDF Downloads 1555442 Imputation of Urban Movement Patterns Using Big Data
Authors: Eusebio Odiari, Mark Birkin, Susan Grant-Muller, Nicolas Malleson
Abstract:
Big data typically refers to consumer datasets revealing some detailed heterogeneity in human behavior which, if harnessed appropriately, could potentially revolutionize our understanding of the collective phenomena of the physical world. Inadvertent missing values skew these datasets and compromise the validity of analyses built on them. Here we discuss a conceptually consistent strategy for identifying other relevant datasets to combine with the available big data, to plug the gaps and create the rich, comprehensive dataset required for subsequent analysis. Specifically, the emphasis is on how these methodologies can, for the first time, enable the construction of more detailed pictures of passenger demand and the drivers of mobility on the railways. These methodologies can predict the influence of changes within the network (like a change in the timetable or the impact of a new station), explain local phenomena outside the network (like rail-heading), and capture other impacts of urban morphology. Our analysis also reveals that our new imputation data model provides for more equitable revenue sharing amongst network operators who manage different parts of the integrated UK railways. Keywords: big data, micro-simulation, mobility, ticketing data, commuters, transport, synthetic population
Procedia PDF Downloads 2315441 Water Supply and Utility Management to Address Urban Sanitation Issues
Authors: Akshaya P., Priyanjali Prabhkaran
Abstract:
The paper examines the formulation of strategies to develop a comprehensive model of city-level water utility management for addressing urban sanitation issues. Water is a prime life-sustaining natural resource and nature's gift to all living beings on Earth, and multiple urban sanitation issues are tied to the supply of water in a city. Many of these issues are linked to population expansion and economic inequity. Increased water usage and development have caused water scarcity, and the lack of water supply increases the chance of unhygienic conditions in cities. In this study, urban sanitation issues are identified with respect to water supply and utility management. Cases are compared based on their best practices and initiatives, and from this comparison suitable sustainable measures to address water supply issues at the city level are identified. The paper concludes by listing the provisions that should be considered as suitable measures for city-level water supply and utility management to address urban sanitation issues. Keywords: water, benchmarking water supply, water supply networks, water supply management
Procedia PDF Downloads 113