Search results for: optimization intelligence strategy
5505 Effect of Fatiguing Hip Muscles on Dynamic Posture Control in Recurrent Ankle Sprain
Authors: Radwa El Shorbagy, Alaa El Din Balbaa, Khaled Ayad, Waleed Reda
Abstract:
Ankle sprain is a common lower limb injury complicated by a high recurrence rate. The cause of recurrence is not clear; however, changes in motor control have been postulated. Objective: To determine the contribution of the proximal hip strategy to dynamic posture control in patients with recurrent ankle sprain. Methods: Fifteen subjects with recurrent ankle sprain (Group A) and fifteen healthy control subjects (Group B) participated in this study. Abductor-adductor as well as flexor-extensor hip musculature control was abolished by fatigue using the Biodex Isokinetic System. Dynamic posture control was measured before and after fatigue by the Biodex Balance System. Results: Repeated-measures MANOVA was used to compare within-group differences. In Group A, fatiguing of the hip muscles (flexors-extensors and abductors-adductors) significantly lowered the overall stability index (OASI), anteroposterior stability index (APSI) and mediolateral stability index (MLSI) (p=0.00), whereas in Group B fatiguing of the hip flexors-extensors significantly lowered only the OASI and APSI (p=0.017 and 0.010, respectively), while fatiguing of the hip abductors-adductors had no significant effect on these variables. Conclusion: Fatiguing of the hip muscles has a significant deleterious effect on dynamic posture control in patients with recurrent ankle sprain, indicating their increased dependence on the hip strategy. Keywords: ankle sprain, hip muscle fatigue, dynamic balance
Procedia PDF Downloads 355
5504 Multi-Objective Optimization for Aircraft Fleet Management: A Parametric Approach
Authors: Xin-Yu Li, Dung-Ying Lin
Abstract:
Fleet availability is a crucial indicator for an aircraft fleet. In practice, however, fleet planning involves many resource and safety constraints, such as annual and monthly flight-training targets and maximum engine usage limits. For safety reasons, engines must be removed for mandatory maintenance and replacement of key components; this situation is known as a "threshold." The annual number of thresholds is a key factor in maintaining fleet availability. However, the traditional method relies heavily on experience and manual planning, which may result in ineffective engine usage and affect flight missions. This study addresses the challenges of fleet planning and availability maintenance in aircraft fleets with resource and safety constraints; the goal is to effectively optimize engine usage and maintenance tasks. The study has four objectives: minimizing the number of engine thresholds, minimizing the monthly shortfall of flight hours, minimizing the monthly excess of flight hours, and minimizing engine disassembly frequency. To solve the resulting formulation, the study uses parametric programming techniques and the ϵ-constraint method to reformulate the multi-objective problem into a series of single-objective problems, efficiently generating the Pareto front. This method is advantageous when handling multiple conflicting objectives, as it allows an effective trade-off between the competing goals. Empirical results and managerial insights will be provided. Keywords: aircraft fleet, engine utilization planning, multi-objective optimization, parametric method, Pareto optimality
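The ϵ-constraint idea described above can be sketched in a few lines: one objective is minimized while the other is capped at a sweep of ϵ values, and each solve contributes one point to the Pareto front. The candidate set and the two objectives below are illustrative stand-ins, not the fleet model from the abstract.

```python
def epsilon_constraint(f1, f2, candidates, eps_values):
    """Minimize f1 subject to f2(x) <= eps, sweeping eps to trace a Pareto front."""
    front = []
    for eps in eps_values:
        feasible = [x for x in candidates if f2(x) <= eps]
        if not feasible:
            continue  # this eps admits no feasible solution
        best = min(feasible, key=f1)
        point = (f1(best), f2(best))
        if point not in front:
            front.append(point)
    return front

# Toy conflicting objectives standing in for, e.g., engine-threshold count
# versus flight-hour shortfall: improving one worsens the other.
f1 = lambda x: x
f2 = lambda x: 10 - x
pareto = epsilon_constraint(f1, f2, range(11), range(11))
```

Each ϵ value fixes a tolerable level of the capped objective, so the decision-maker can read the trade-off directly off the resulting front.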
Procedia PDF Downloads 23
5503 Integrated Performance Management System: A Conceptual Design for PT. XYZ
Authors: Henrie Yunianto, Dermawan Wibisono
Abstract:
PT. XYZ is a family-owned private company in Indonesia that provides educational programs and consultation services. Since its establishment in 2011, the company has run without any strategic management system, yet it has survived to date. The management of PT. XYZ sees a large business opportunity for its products: even though the targeted market is very specific (a niche), the volume is large (owing to Indonesia's large population) and the number of competitors is currently low. The product life cycle can be said to lie between the "introduction" and "growth" stages. It is observed that new entrants (competitors) are now increasing, so PT. XYZ must respond to the intensifying business rivalry by conducting its business in an appropriate manner. A performance management system is important to implement for business sustainability and growth. The framework chosen is the Integrated Performance Management System (IPMS). The IPMS framework has the advantages of simplicity and explicit linkage between business variables and indicators, so the company can see the connections between all measured factors. The IPMS framework consists of three perspectives: (1) Business Results, (2) Internal Processes, and (3) Resource Availability. Variables and indicators were examined through deep analysis of the external and internal business environments, Strength-Weakness-Opportunity-Threat (SWOT) analysis, and Porter's five forces analysis. Analytical Hierarchy Process (AHP) analysis was then used to quantify the weight of each variable/indicator. AHP was needed because, in this study of PT. XYZ, data on existing performance indicators were not available. Later, once the IPMS is implemented, the measured data can be examined to determine the weight of each indicator using correlation analysis (or other methods). In this IPMS design for PT. XYZ, the analysis shows that, with the current company goals and the AHP methodology, the critical indicators for each perspective are: (1) Business Results: customer satisfaction and employee satisfaction; (2) Internal Processes: marketing performance, supplier quality, production quality, and continuous improvement; (3) Resource Availability: leadership and company culture and values, personnel competences, and productivity. Companies and organizations require a performance management system to help them achieve their vision and mission, and company strategy is effectively defined and addressed by using one. The IPMS framework and AHP analysis help quantify the factors that influence the expected business output. Keywords: analytical hierarchy process, business strategy, differentiation strategy, integrated performance management system
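As a concrete illustration of the AHP weighting step, the snippet below extracts priority weights from a pairwise comparison matrix via power iteration on the principal eigenvector, which is the standard AHP procedure. The 3×3 matrix is a made-up example on Saaty's 1-9 scale, not PT. XYZ's actual judgments.

```python
def ahp_weights(matrix, iters=100):
    """Priority weights = normalized principal eigenvector of the pairwise matrix."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [vi / s for vi in v]
    return w

# Hypothetical pairwise judgments among three indicators (Saaty's 1-9 scale);
# A[i][j] states how much more important indicator i is than indicator j.
A = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 3.0],
     [1/5, 1/3, 1.0]]
weights = ahp_weights(A)

# Consistency check: lambda_max near n signals consistent judgments.
lam_max = sum(sum(A[i][j] * weights[j] for j in range(3)) / weights[i]
              for i in range(3)) / 3
consistency_index = (lam_max - 3) / 2  # CR = CI / 0.58 for n = 3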
Procedia PDF Downloads 307
5502 The Evaluation of Event Sport Tourism on Regional Economic Development
Authors: Huei-Wen Lin, Huei-Fu Lu
Abstract:
Event sport tourism (EST) has become an especially important economic sector around the world. As its magnitude continues to grow, attracting more tourists, media, and investment for the host community, many local areas, regions, and states have identified visitor expenditure as a potential source of economic or employment growth. The main purposes of this study are to investigate stakeholders' insights into the features of hosting EST and its use as a regional development strategy. Continuing the focus of previous literature on regional development and the economic benefits of hosting EST, a total of five semi-structured interview questions were designed, and a thematic analysis was conducted with eight key sport and tourism decision-makers in Atlanta from July to August 2016. Through these in-depth interviews, the study contributes to a better understanding of stakeholders' decision-making, identifying benefits and constraints as well as ways to leverage the impacts of hosting EST. The findings provide stakeholders' perspectives on hosting EST and serve as a reference for regional development in emerging sport tourism markets in the US. Additionally, the study examines key considerations and issues that are critical to a reliable understanding of the economic impacts of hosting EST on regional development, which can benefit future management authorities (i.e., governments and communities) in defining and hosting successful EST in their sport tourism development endeavors. Furthermore, the insights gained from the qualitative analysis could help other cities and regions analyze the economic impacts of hosting EST and use it as an instrument of city development strategy. Keywords: economic impacts, event sport tourism, regional economic development, longitudinal analysis
Procedia PDF Downloads 313
5501 Flow Field Optimization for Proton Exchange Membrane Fuel Cells
Authors: Xiao-Dong Wang, Wei-Mon Yan
Abstract:
The flow field design in the bipolar plates affects the performance of the proton exchange membrane (PEM) fuel cell. This work adopted a combined optimization procedure, consisting of a simplified conjugate-gradient method and a complete three-dimensional, two-phase, non-isothermal fuel cell model, to seek the optimal flow field design for a single serpentine fuel cell of size 9×9 mm with five channels. For the direct solution, the two-fluid method was adopted to incorporate heat effects, using energy equations for the entire cell. The model assumes that the system is steady, the inlet reactants are ideal gases, the flow is laminar, and the porous layers (the diffusion layer, catalyst layer and PEM) are isotropic. The model includes continuity, momentum and species equations for gaseous species; liquid water transport equations in the channels, gas diffusion layers, and catalyst layers; a water transport equation in the membrane; and electron and proton transport equations. The Butler-Volmer equation was used to describe the electrochemical reactions in the catalyst layers. The cell output power density Pcell is maximized subject to an optimal set of channel heights, H1-H5, and channel widths, W2-W5. The basic case, with all channel heights and widths set at 1 mm, yields Pcell = 7260 W m⁻². The optimal design displays a tapered characteristic for channels 1, 3 and 4, and a diverging characteristic in height for channels 2 and 5, producing Pcell = 8894 W m⁻², an increase of about 22.5%. The reduced heights of channels 2-4 significantly increase sub-rib convection, which effectively removes liquid water and enhances oxygen transport in the gas diffusion layer. The final diverging channel minimizes the leakage of fuel to the outlet via sub-rib convection from channel 4 to channel 5. A near-optimal design that is easily manufactured, without a large loss in cell performance, was also tested.
The use of a straight final channel of 0.1 mm height led to a 7.37% power loss, while the design with all channel widths set to 1 mm, using the optimal channel heights obtained above, yields only a 1.68% loss of current density. The presence of a final diverging channel has a greater impact on cell performance than fine adjustment of channel width under the simulation conditions studied herein. Keywords: optimization, flow field design, simplified conjugate-gradient method, serpentine flow field, sub-rib convection
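The paper's simplified conjugate-gradient method is coupled to the full 3-D CFD model; the sketch below shows only the bare Fletcher-Reeves skeleton on a toy objective, with a central-difference gradient and a coarse grid line search standing in for the CFD solve and sensitivity evaluation.

```python
def grad(f, x, h=1e-6):
    """Central-difference gradient, standing in for CFD-derived sensitivities."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def conjugate_gradient(f, x0, iters=50):
    """Fletcher-Reeves conjugate gradient with a coarse grid line search."""
    x = list(x0)
    g = grad(f, x)
    d = [-gi for gi in g]
    for _ in range(iters):
        # pick the best step along d from a geometric grid (0 keeps x unchanged)
        steps = [0.0] + [1e-4 * 1.5**k for k in range(40)]
        t = min(steps, key=lambda s: f([xi + s * di for xi, di in zip(x, d)]))
        x = [xi + t * di for xi, di in zip(x, d)]
        g_new = grad(f, x)
        beta = sum(gi * gi for gi in g_new) / max(sum(gi * gi for gi in g), 1e-30)
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x

# Toy stand-in for -Pcell(channel geometry): a smooth bowl with optimum at (1, 3).
objective = lambda x: (x[0] - 1.0)**2 + (x[1] - 3.0)**2
optimum = conjugate_gradient(objective, [0.0, 0.0])
```

In the actual procedure each objective evaluation is one full two-phase, non-isothermal cell simulation, which is why a gradient method needing few evaluations is preferred over population-based search here.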
Procedia PDF Downloads 296
5500 A Mixed-Integer Nonlinear Program to Optimally Pace and Fuel Ultramarathons
Authors: Kristopher A. Pruitt, Justin M. Hill
Abstract:
The purpose of this research is to determine the pacing and nutrition strategies that minimize completion time and carbohydrate intake for athletes competing in ultramarathon races. The model formulation consists of a two-phase optimization. The first-phase mixed-integer nonlinear program (MINLP) determines the minimum completion time subject to the altitude, terrain, and distance of the race, as well as the mass and cardiovascular fitness of the athlete. The second-phase MINLP determines the minimum total carbohydrate intake required for the athlete to achieve the completion time prescribed by the first phase, subject to the flow of carbohydrates through the stomach, liver, and muscles. Consequently, the second-phase model provides the optimal pacing and nutrition strategies for a particular athlete for each kilometer of a particular race. Validation of the model results over a wide range of athlete parameters against completion times for real competitive events suggests strong agreement. Additionally, the kilometer-by-kilometer pacing and nutrition strategies the model prescribes for a particular athlete suggest that unconventional approaches could result in lower completion times. Thus, the MINLP provides prescriptive guidance that athletes can leverage when developing pacing and nutrition strategies prior to competing in ultramarathon races. Given the highly variable topographical characteristics common to many ultramarathon courses and the potential inexperience of many athletes with such courses, the model provides valuable insight to competitors who might otherwise fail to complete the event due to exhaustion or carbohydrate depletion. Keywords: nutrition, optimization, pacing, ultramarathons
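The two-phase structure can be illustrated with a toy discrete version: phase one finds the minimum completion time, and phase two minimizes carbohydrate intake subject to finishing within that time (plus any slack). The pace and fueling numbers below are invented for illustration and brute-force enumeration replaces the MINLP solver; they are not taken from the paper's physiological model.

```python
from itertools import product

# Hypothetical pace options: (minutes per km, grams of carbohydrate per km).
# Faster paces burn more carbohydrate.
PACES = [(4.0, 9.0), (5.0, 6.0), (6.0, 4.0)]

def phase1_min_time(n_km):
    """Phase 1: the minimum achievable completion time (fastest pace everywhere)."""
    return min(t for t, _ in PACES) * n_km

def phase2_min_carbs(n_km, time_limit):
    """Phase 2: minimize total carbohydrate intake subject to the time budget."""
    best = None
    for plan in product(PACES, repeat=n_km):
        t_total = sum(t for t, _ in plan)
        carbs = sum(c for _, c in plan)
        if t_total <= time_limit and (best is None or carbs < best):
            best = carbs
    return best

t_star = phase1_min_time(5)                    # toy 5 km "race"
carbs_tight = phase2_min_carbs(5, t_star)      # no slack: fastest pace throughout
carbs_slack = phase2_min_carbs(5, t_star + 2)  # 2 min of slack allows easier paces
```

Even in this toy, allowing a little slack over the phase-one time lets the athlete substitute cheaper paces on some kilometers and cut total carbohydrate intake, which mirrors the trade-off the two-phase MINLP formalizes.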
Procedia PDF Downloads 189
5499 The Impact of Cooperative Learning on EFL Learners' Oral Performance
Authors: Narimen Hamdini
Abstract:
The mastery of a foreign language often implies adequate speaking competency and communication. However, it has been noted that Algerian students' oral performance is affected by the lack of language practice opportunities. The present study investigates the impact of cooperative learning strategies on learners' oral performance by integrating such strategies into oral expression classes. Thus, a quasi-experimental study with a one-group pretest-posttest design was conducted. A convenience sample of 27 second-year students from the University of Jijel, Algeria, was taught for three consecutive weeks through cooperative learning activities in conjunction with regular language instruction in oral expression classes. For data collection, the study makes use of a students' questionnaire, a semi-structured interview with teachers of oral expression, and orally scored pre- and posttests. While the students' questionnaire explores the learners' speaking difficulties and attitudes towards the implementation of the strategy, the semi-structured interview reveals the teachers' instructional practices and attitudes toward the integration of cooperative learning activities. Finally, the oral tests were conducted before and after the intervention to measure the effect of the strategy on the learners' oral production. The findings showed that the experimental group scored higher in the posttest. Cooperative learning promotes not only learners' oral performance but also motivation and social skills. Consequently, its implementation in oral expression classes is validated and recommended. Keywords: cooperative learning, learning, oral performance, teaching
Procedia PDF Downloads 129
5498 High Aspect Ratio Micropillar Array Based Microfluidic Viscometer
Authors: Ahmet Erten, Adil Mustafa, Ayşenur Eser, Özlem Yalçın
Abstract:
We present a new viscometer based on a microfluidic chip with elastic high-aspect-ratio micropillar arrays. The displacement of the pillar tips in the flow direction can be used to analyze the viscosity of a liquid. In our work, computational fluid dynamics (CFD) is used to analyze the pillar displacement of various micropillar array configurations in the flow direction at different viscosities. Following CFD optimization, micro-CNC-based rapid prototyping is used to fabricate molds for the microfluidic chips. The chips are fabricated out of polydimethylsiloxane (PDMS) using soft lithography, with molds machined out of aluminum. Tip displacements of the micropillar array (300 µm in diameter and 1400 µm in height) in the flow direction are recorded using a microscope-mounted camera, and the displacements are analyzed by image processing with an algorithm written in MATLAB. Experiments are performed with water-glycerol solutions mixed at four different ratios to attain viscosities of 1 cP, 5 cP, 10 cP and 15 cP at room temperature. The prepared solutions are injected into the microfluidic chips using a syringe pump at flow rates from 10 to 100 mL/hr, and the displacement versus flow rate is plotted for different viscosities. A displacement of around 1.5 µm was observed for the 15 cP solution at 60 mL/hr, while only a 1 µm displacement was observed for the 10 cP solution. Optimization of the presented viscometer design is still in progress to improve sensitivity and accuracy. Our microfluidic viscometer platform has potential for tailor-made microfluidic chips enabling real-time observation and control of viscosity changes in biological or chemical reactions. Keywords: computational fluid dynamics (CFD), high aspect ratio, micropillar array, viscometer
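The displacement-to-viscosity readout described above amounts to a calibration curve. Assuming a linear relation at a fixed flow rate (which the reported 60 mL/hr points roughly suggest, but which is an assumption, not a claim of the abstract), a least-squares fit can invert a measured displacement back to a viscosity estimate:

```python
def fit_line(points):
    """Ordinary least-squares fit of y = a*x + b over (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Calibration at 60 mL/hr: (viscosity in cP, tip displacement in um),
# using the two values reported in the abstract.
calibration = [(10.0, 1.0), (15.0, 1.5)]
a, b = fit_line(calibration)

def viscosity_from_displacement(d_um):
    """Invert the calibration curve to read viscosity off a measured displacement."""
    return (d_um - b) / a
```

In practice the calibration would use many more points per flow rate, and its residuals would quantify the sensitivity and accuracy the authors are still optimizing.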
Procedia PDF Downloads 246
5497 Machine Learning in Agriculture: A Brief Review
Authors: Aishi Kundu, Elhan Raza
Abstract:
"Necessity is the mother of invention": the rapid increase in the global human population has directed the agricultural domain toward machine learning. Food, the basic need of human beings, is satisfied through farming, which is one of the major revenue generators for the Indian economy. Agriculture is thus both a source of employment and a pillar of the economy in developing countries like India. This paper provides a brief review of the progress made in implementing machine learning in the agricultural sector. Accurate predictions are necessary at the right time to boost production and to aid the timely, systematic distribution of agricultural commodities, making their availability in the market faster and more effective. The paper includes a thorough analysis of machine learning algorithms applied in different aspects of agriculture (crop management, soil management, water management, yield tracking, livestock management, etc.). Crop production is affected by climate change; machine learning can analyse the changing patterns and suggest suitable approaches to minimize loss and maximize yield. Machine learning models (regression, support vector machines, Bayesian models, artificial neural networks, decision trees, etc.) are used in smart agriculture to analyze and predict specific outcomes, which can be vital in increasing the productivity of the agricultural food industry. The review also illustrates how machine learning is applied to sensor data in agricultural work. Machine learning is an ongoing technology that helps farmers improve gains in agriculture and minimize losses. This paper discusses how irrigation and farming management systems evolve to operate efficiently in real time.
Artificial intelligence (AI)-enabled programs are emerging that support farmers through extensive analysis of their data. Keywords: machine learning, artificial intelligence, crop management, precision farming, smart farming, pre-harvesting, harvesting, post-harvesting
Procedia PDF Downloads 105
5496 Coupling of Microfluidic Droplet Systems with ESI-MS Detection for Reaction Optimization
Authors: Julia R. Beulig, Stefan Ohla, Detlev Belder
Abstract:
In contrast to off-line analytical methods, lab-on-a-chip technology delivers direct information about the observed reaction. Microfluidic devices therefore make an important scientific contribution, e.g., in the field of synthetic chemistry, where the rapid generation of analytical data can be applied to the optimization of chemical reactions. These devices enable fast changes of reaction conditions as well as a resource-saving mode of operation. In the presented work, we focus on the investigation of multiphase regimes, more specifically on biphasic microfluidic droplet systems, in which every single droplet is a reaction container with customized conditions. The biggest challenge is the rapid qualitative and quantitative readout of information, as most detection techniques for droplet systems are non-specific, time-consuming, or too slow. An exception is electrospray ionization mass spectrometry (ESI-MS). Combining a reaction screening platform with a rapid and specific detection method is an important step in droplet-based microfluidics. In this work, we present a novel approach for synthesis optimization on the nanoliter scale with direct ESI-MS detection. We show the development of a droplet-based microfluidic device that enables the modification of different parameters while simultaneously monitoring their effect on the reaction within a single run. Using common soft- and photolithographic techniques, a polydimethylsiloxane (PDMS) microfluidic chip with different functionalities was developed. As an interface for MS detection, we use a steel capillary for ESI and improve the spray stability with a Teflon siphon tubing inserted underneath the steel capillary. By optimizing the flow rates, it is possible to screen the parameters of various reactions, as exemplified by a Domino Knoevenagel Hetero-Diels-Alder reaction. Different starting materials, catalyst concentrations, and solvent compositions are investigated.
Due to the high repetition rate of droplet production, each set of reaction conditions is examined hundreds of times. As a result of the investigation, we obtain suitable reagents, the ideal water-methanol ratio of the solvent, and the most effective catalyst concentration. The developed system can help determine important information about the optimal parameters of a reaction within a short time. With this novel tool, we take an important step in the field of combining droplet-based microfluidics with organic reaction screening. Keywords: droplet, mass spectrometry, microfluidics, organic reaction, screening
Procedia PDF Downloads 301
5495 Artificial Neural Network Based Parameter Prediction of Miniaturized Solid Rocket Motor
Authors: Hao Yan, Xiaobing Zhang
Abstract:
The working mechanism of miniaturized solid rocket motors (SRMs) is not yet fully understood, and it is imperative to explore their unique features. However, there are many disadvantages to using common multi-objective evolutionary algorithms (MOEAs) for predicting the parameters of a miniaturized SRM during its conceptual design phase. First, the design variables and objectives are constrained in a lumped parameter model (LPM) of the SRM, which leads to local optima in MOEAs. In addition, MOEAs require a large number of calculations due to their population strategy: although the calculation time for simulating an LPM once is usually less than that of a CFD simulation, the number of function evaluations (NFEs) in MOEAs is usually large, which makes the total time cost unacceptably long. Moreover, the accuracy of the LPM is relatively low compared to that of a CFD model because of its simplifying assumptions, so CFD simulations or experiments are required to compare and verify the optimal results obtained by MOEAs with an LPM. The conceptual design phase based on MOEAs is therefore a lengthy process, and its results are not precise enough. An artificial neural network (ANN) based parameter prediction is proposed as a way to reduce time costs and improve prediction accuracy. In this method, an ANN is used to build a surrogate model trained on 3D numerical simulations, and in the design process the original LPM is replaced by this surrogate model. Each case uses the same MOEAs; the calculation times of the two models are compared, and their optimization results are compared with 3D simulation results. Using the surrogate model for parameter prediction of miniaturized SRMs results in a significant increase in computational efficiency and an improvement in prediction accuracy. Thus, the ANN-based surrogate model provides faster and more accurate parameter prediction for an initial design scheme.
Moreover, even when the MOEAs converge to local optima, the time cost of the ANN-based surrogate model is much lower than that of the simplified physical model (LPM). This means that designers can save considerable time during code debugging and parameter tuning in a complex design process. Designers can reduce repeated calculation costs and obtain accurate optimal solutions by combining an ANN-based surrogate model with MOEAs. Keywords: artificial neural network, solid rocket motor, multi-objective evolutionary algorithm, surrogate model
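The surrogate idea is independent of the ANN itself: any cheap model fitted to a handful of expensive evaluations can stand in for the simulation during optimization. The sketch below substitutes a quadratic least-squares surrogate for the ANN and a simple analytic test function for the 3-D SRM simulation, both labeled simplifications; the workflow (sample, fit, optimize the cheap model) is the same.

```python
def solve3(A, b):
    """Gauss-Jordan elimination for a 3x3 linear system."""
    n = 3
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_quadratic(xs, ys):
    """Least-squares fit of y ~ c0 + c1*x + c2*x^2 via the normal equations."""
    cols = [[1.0, x, x * x] for x in xs]
    A = [[sum(c[i] * c[j] for c in cols) for j in range(3)] for i in range(3)]
    b = [sum(c[i] * y for c, y in zip(cols, ys)) for i in range(3)]
    return solve3(A, b)

# "Expensive" function standing in for one 3-D simulation run; optimum at x = 2.
expensive = lambda x: (x - 2.0)**2 + 1.0
xs = [0.0, 1.0, 3.0, 4.0]               # four costly samples
c0, c1, c2 = fit_quadratic(xs, [expensive(x) for x in xs])
x_opt = -c1 / (2 * c2)                  # minimizer of the cheap surrogate
```

Once fitted, every MOEA function evaluation hits the surrogate instead of the simulation, which is where the reported efficiency gain comes from; the fidelity of the surrogate near the optimum then determines prediction accuracy.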
Procedia PDF Downloads 90
5494 Use of Galileo Advanced Features in Maritime Domain
Authors: Olivier Chaigneau, Damianos Oikonomidis, Marie-Cecile Delmas
Abstract:
GAMBAS (Galileo Advanced features for the Maritime domain: Breakthrough Applications for Safety and security) is a project funded by the European Union Agency for the Space Programme (EUSPA) that aims to identify the search-and-rescue and ship security alert system needs of maritime users (including operators and fishing stakeholders) and to develop operational concepts to answer these needs. The general objective of the GAMBAS project is to support the deployment of Galileo exclusive features in the maritime domain in order to improve safety and security at sea, detection of illegal activities and associated surveillance means, and resilience to natural and human-induced emergency situations, and to develop, integrate, demonstrate, standardize and disseminate these new associated capabilities. The project aims to demonstrate: improvement of SAR (Search And Rescue) and SSAS (Ship Security Alert System) detection and response to maritime distress through the integration of new features into the SSAS beacon, in terms of cost optimization, user-friendliness, integration of Galileo and OS-NMA (Open Service Navigation Message Authentication) reception for improved authenticated localization performance and reliability, and at-sea triggering capabilities; optimization of the responsiveness of RCCs (Rescue Co-ordination Centres) to distress situations affecting vessels; and the adaptation of the MCCs (Mission Control Centres) and MEOLUT (Medium Earth Orbit Local User Terminal) to the data distribution of SSAS alerts. Keywords: Galileo new advanced features, maritime, safety, security
Procedia PDF Downloads 92
5493 An Integrated Approach for Optimal Selection of Machining Parameters in Laser Micro-Machining Process
Authors: A. Gopala Krishna, M. Lakshmi Chaitanya, V. Kalyana Manohar
Abstract:
In the present analysis, laser micro-machining (LMM) of silicon carbide particle (SiCp) reinforced aluminum 7075 metal matrix composite (Al7075/SiCp MMC) was studied. During machining, the intense heat generated forms a layer on the workpiece surface, called the recast layer, which is detrimental to the surface quality of the component; for precise applications, the recast layer needs to be as small as possible. Therefore, the height of the recast layer and the depth of the groove, which are conflicting in nature, were considered as the significant manufacturing criteria determining the performance of the LMM process for the Al7075/10%SiCp composite. The present work formulates the depth of groove and the height of the recast layer in relation to the machining parameters using response surface methodology (RSM), and the resulting mathematical models were used for optimization. Since the effects of the machining parameters on the depth of groove and the height of the recast layer were contradictory, the problem was formulated as a multi-objective optimization problem. The evolutionary non-dominated sorting genetic algorithm (NSGA-II) was employed to optimize the models established by RSM and to obtain the Pareto-optimal set of solutions, which provides a detailed basis for selecting among the optimal trade-offs. Finally, experiments were conducted to confirm the results obtained from RSM and NSGA-II. Keywords: laser micro machining (LMM), depth of groove, height of recast layer, response surface methodology (RSM), non-dominated sorting genetic algorithm
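At the core of NSGA-II is non-dominated sorting: candidate solutions are ranked into successive Pareto fronts. A minimal sketch (the simple quadratic-time peeling version, not NSGA-II's "fast" bookkeeping variant), with made-up objective pairs standing in for (recast-layer height, negated groove depth), both to be minimized:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Peel off successive Pareto fronts (all objectives minimized)."""
    fronts = []
    remaining = list(points)
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q is not p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

# Invented (recast-layer height, negated groove depth) pairs for five candidates.
objs = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
fronts = non_dominated_sort(objs)
```

Rank-1 solutions (the first front) are the Pareto-optimal trade-offs the paper reports; lower-ranked fronts only survive selection when diversity pressure requires them.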
Procedia PDF Downloads 345
5492 Effects of AI-driven Applications on Bank Performance in West Africa
Authors: Ani Wilson Uchenna, Ogbonna Chikodi
Abstract:
This study examined the impact of artificial-intelligence-driven applications on banks' performance in West Africa, using Nigeria and Ghana as case studies. Specifically, the study examined the extent to which the deployment of smart automated teller machines impacted banks' net worth within the reference period in Nigeria and Ghana; ascertained the impact of point-of-sale services on banks' net worth; verified the extent to which web pay services influence banks' performance; and determined the impact of mobile pay services on banks' performance in the two countries. The study used automated teller machines (ATM), point-of-sale services (POS), mobile pay services (MOP) and web pay services (WBP) as proxies for the explanatory variables, while bank net worth was the explained variable. The data were sourced from the Central Bank of Nigeria (CBN) Statistical Bulletin, the Bank of Ghana Statistical Bulletin, the Ghana payment systems oversight annual report, and the World Development Indicators (WDI). The mixed order of integration observed in the panel unit root test justified the autoregressive distributed lag (ARDL) approach to data analysis, which the study adopted. The cointegration test showed the existence of cointegration among the studied variables, and the bounds test confirmed a long-run relationship among the series. The ARDL error correction estimate established a satisfactory (13.92%) speed of adjustment from long-run disequilibrium back to the short-run dynamic relationship.
The study found that automated teller machines (ATM) had a statistically significant impact on the bank net worth (BNW) of Nigeria and Ghana; point-of-sale services (POS) had a statistically significant impact on bank net worth within the study period; mobile pay services had a statistically significant impact on changes in bank net worth; while web pay services (WBP) had no statistically significant impact on the bank net worth of the countries of reference. The study concluded that artificial-intelligence-driven applications have a significant and positive impact on bank performance, with the exception of web pay, which had a negative impact on bank net worth. The study recommended that the management of banks in both Nigeria and Ghana encourage more investment in AI-powered smart ATMs aimed at delivering more secure banking services, in order to increase revenue, discourage excessive queuing in the banking hall, reduce fraud, and minimize errors in processing transactions. Banks within the scope of this study should leverage modern technologies to check the excesses of private POS operators in order to build more confidence among potential customers. Government should turn mobile pay services into a counter-terrorism tool by maintaining restrictions that cap over-the-counter withdrawals at a minimum amount and by sanctioning withdrawals above that limit. Keywords: artificial intelligence (AI), bank performance, automated teller machines (ATM), point of sale (POS)
Procedia PDF Downloads 7
5491 Application of Data Driven Based Models as Early Warning Tools of High Stream Flow Events and Floods
Authors: Mohammed Seyam, Faridah Othman, Ahmed El-Shafie
Abstract:
Early warning of high stream flow events (HSF) and floods is an important aspect of the management of surface water and river systems. This process can be performed using either process-based models or data-driven models such as artificial intelligence (AI) techniques. The main goal of this study is to develop an efficient AI-based model for predicting the real-time hourly stream flow (Q) and to apply it as an early warning tool for HSF and floods in the downstream area of the Selangor River basin, taken here as a paradigm of humid tropical rivers in Southeast Asia. The performance of the AI-based models was improved through the integration of lag time (Lt) estimation into the modelling process. A total of 8753 hourly records of Q, water level, and rainfall, representing a one-year period (2011), were utilized in the modelling process. Six hydrological scenarios were arranged through hypothetical cases of input variables to investigate how changes in rainfall intensity at upstream stations can lead to the formation of floods, and the initial stream flow was changed for each scenario in order to cover a wide range of hydrological situations. The performance evaluation of the developed AI-based model shows that a high correlation coefficient (R) between the observed and predicted Q is achieved. The model has been successfully employed in early warning through the advance detection of hydrological conditions that could lead to the formation of floods and HSF, represented by three levels of severity (i.e., alert, warning, and danger). Based on the scenario results, reaching the danger level in the downstream area required high rainfall intensity in at least two upstream areas. It can be concluded that AI-based models are beneficial tools for local authorities in flood control and awareness. Keywords: floods, stream flow, hydrological modelling, hydrology, artificial intelligence
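The three-level warning scheme amounts to mapping each predicted flow onto alert/warning/danger bands. A minimal sketch of that final classification step, with entirely hypothetical discharge thresholds (the study calibrates its levels to the Selangor River; no numeric thresholds are given in the abstract):

```python
# Hypothetical severity thresholds for predicted hourly stream flow Q (m^3/s),
# ordered from most to least severe.
THRESHOLDS = [(400.0, "danger"), (300.0, "warning"), (200.0, "alert")]

def severity(predicted_q):
    """Map a predicted flow to the highest severity band it reaches."""
    for limit, label in THRESHOLDS:
        if predicted_q >= limit:
            return label
    return "normal"

warnings = [severity(q) for q in (150.0, 250.0, 350.0, 450.0)]
```

In the deployed system the input to this step would be the AI model's lag-time-aware hourly forecast, so an alarm fires before the flow physically reaches the downstream gauge.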
Procedia PDF Downloads 248
5490 Developing the Principal Change Leadership Non-Technical Competencies Scale: An Exploratory Factor Analysis
Authors: Tai Mei Kin, Omar Abdull Kareem
Abstract:
In light of globalization, educational reform has become a top priority for many countries. However, the task of leading change effectively requires a multidimensional set of competencies. Over the past two decades, the technical competencies of principal change leadership have been extensively analysed and discussed. Comparatively little research has been conducted in the Malaysian education context on non-technical competencies, popularly known as emotional intelligence, which are equally crucial for the success of change. This article provides a validation of the Principal Change Leadership Non-Technical Competencies (PCLnTC) Scale, a tool that practitioners can easily use to assess school principals’ level of change leadership non-technical competencies that facilitate change and maximize change effectiveness. The overall coherence of the PCLnTC model was constructed by incorporating three theories: a) change leadership theory, whereby leading change is the fundamental role of a leader; b) competency theory, in which leadership can be taught and learned; and c) the concept of emotional intelligence, whereby it can be developed, fostered and taught. An exploratory factor analysis (EFA) was used to determine the underlying factor structure of the PCLnTC model. Before conducting the EFA, five pilot test steps were carried out to ensure the validity and reliability of the instrument: a) review by academic colleagues; b) verification and comments from a panel; c) evaluation of questionnaire format, syntax, design, and completion time; d) evaluation of item clarity; and e) assessment of internal consistency reliability. A total of 335 teachers from 12 High Performing Secondary Schools in Malaysia completed the survey. The PCLnTC Scale, with a six-point Likert-type scale, was subjected to Principal Components Analysis. The analysis yielded a three-factor solution, namely: a) Interpersonal Sensitivity; b) Flexibility; and c) Motivation, explaining a total of 74.326 per cent of the variance.
Based on the results, implications for instrument revision are discussed, and specifications for future confirmatory factor analysis are delineated. Keywords: exploratory factor analysis, principal change leadership non-technical competencies (PCLnTC), interpersonal sensitivity, flexibility, motivation
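The variance-explained figure reported in this abstract comes from the eigenvalues of the item correlation matrix; a minimal sketch of that calculation, with hypothetical eigenvalues rather than the study's data, is:

```python
def variance_explained(eigenvalues, k):
    """Percent of total variance explained by the k largest factors.

    In PCA on a correlation matrix, each eigenvalue is the variance carried
    by one component, so the explained share is the top-k sum over the total.
    """
    total = sum(eigenvalues)
    top_k = sum(sorted(eigenvalues, reverse=True)[:k])
    return 100.0 * top_k / total

# Hypothetical eigenvalues of a 7-item correlation matrix (not the study's):
eigs = [6.2, 3.1, 2.4, 0.9, 0.7, 0.5, 0.2]
print(round(variance_explained(eigs, 3), 3))
```

A three-factor solution is retained when the leading eigenvalues dominate, exactly the quantity the abstract reports as 74.326 per cent for its own data.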
Procedia PDF Downloads 425
5489 A Study on the Measurement of Spatial Mismatch and the Influencing Factors of “Job-Housing” in Affordable Housing from the Perspective of Commuting
Authors: Daijun Chen
Abstract:
Affordable housing is subsidized by the government to meet the housing demand of low- and middle-income urban residents in the process of urbanization and to alleviate the housing inequality caused by market-based housing reforms. It is a recognized fact that the living conditions of the subsidized residents have improved as affordable housing has been constructed. However, affordable housing sites are mostly located in the suburbs, where the surrounding urban functions and infrastructure are incomplete, resulting in a "jobs-housing" spatial mismatch in affordable housing. The main reason for this problem is that residents of affordable housing are more sensitive to the spatial location of their residence, yet their ability to select and control that location is relatively weak, which leads to higher commuting costs. Their real cost of living has not been effectively reduced. In this regard, 92 affordable housing communities in Nanjing, China, are selected as the research sample in this paper. The residents of the affordable housing and their spatio-temporal commuting behavior characteristics are identified based on LBS (location-based service) data. Based on spatial mismatch theory, indicators such as commuting distance and commuting time are established to measure the degree of spatial mismatch of affordable housing in different districts of Nanjing. Furthermore, a geographically weighted regression model is used to analyze the factors influencing the spatial mismatch of affordable housing in terms of the provision of employment opportunities, traffic accessibility and supporting service facilities, using spatial, functional and other multi-source spatio-temporal big data. The results show that the spatial mismatch of affordable housing in Nanjing generally presents a "concentric circle" pattern, decreasing from the central urban area to the periphery.
The factors affecting the spatial mismatch of affordable housing differ across spatial zones. The main factors are the number of enterprises within 1 km of the affordable housing district and the shortest distance to a subway station, while low spatial mismatch is associated with the diversity of services and facilities. Based on this, a spatial optimization strategy for different levels of spatial mismatch in affordable housing is proposed, and feasible suggestions for the future site selection of affordable housing are provided. The study hopes to avoid or mitigate the impact of "spatial mismatch," promote the "spatial adaptation" of "jobs-housing," and truly improve the overall welfare level of affordable housing residents. Keywords: affordable housing, spatial mismatch, commuting characteristics, spatial adaptation, welfare benefits
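A toy version of a commuting-based mismatch indicator like those in this abstract can be sketched as a district-to-city ratio; the commute times and the exact index form below are illustrative assumptions, not the paper's indicator:

```python
def mismatch_index(commute_minutes, city_mean):
    """Simple mismatch ratio: a community's mean commute time relative to
    the city-wide mean. Values above 1 indicate worse-than-average
    job-housing separation for that community.
    """
    community_mean = sum(commute_minutes) / len(commute_minutes)
    return community_mean / city_mean

# Hypothetical commute times (minutes) for one affordable-housing community:
times = [55, 62, 48, 70, 65]
print(mismatch_index(times, city_mean=40.0))  # prints 1.5
```

Computing such a ratio per community, then regressing it on employment, transit, and facility variables, mirrors the measure-then-explain structure of the study.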
Procedia PDF Downloads 108
5488 Sustainable Environmental Management through the Comparative Study of Two Recreational Parks in Nigeria
Authors: Oluwagbemiga Paul Agboola, Cornelius Olatunji Omojola, Dayo Martins Oyeshomo
Abstract:
The role of recreational parks in human and environmental development has attracted much interest in recent times. Recreational park development can act as an effective planning strategy to enhance environmental sustainability, social cohesiveness, and users' quality of life. Similarly, parks enhance a neighbourhood's aesthetics, refresh the air and increase humans' contact with nature. In this connection, recreational parks create natural surroundings in rural areas for leisure, relaxation, recreation, and the psychological and physical comfort of the people. The purpose of this paper is to investigate the effectiveness of two recreational parks' development as a strategy for neighbourhood environmental improvement, sustainability and recreationists' cohesiveness. A total of 158 survey questionnaires were distributed to tourists at the Ikogosi cold and warm spring in Ekiti State and the Olumirin waterfalls, Erin-Ijesa, Osun State, in South-West Nigeria. The quantitative results of the data, analyzed with the Relative Importance Index (RII), revealed that recreational parks provide optimum opportunities for users' social cohesiveness and well-being, while a park's environmental sustainability can be enhanced through the provision of essential facilities, services, and future development plans. It is recommended that, for recreational parks to realize their full potential in environmental sustainability, adequate maintenance and the provision of essential facilities are imperative. Keywords: environmental sustainability, neighbourhood development, recreational park, Nigeria
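The Relative Importance Index used in this abstract is conventionally computed as RII = ΣW / (A × N), where W are the respondents' ratings for an item, A is the highest possible rating, and N is the number of respondents. A minimal sketch with hypothetical ratings:

```python
def relative_importance_index(ratings, highest_weight=5):
    """RII = sum(W) / (A * N), the standard survey formula.

    Returns a value in (0, 1]; items are then ranked by RII, with higher
    values indicating greater perceived importance.
    """
    return sum(ratings) / (highest_weight * len(ratings))

# Hypothetical 5-point ratings from ten respondents for one survey item:
ratings = [5, 4, 4, 3, 5, 4, 5, 3, 4, 5]
print(relative_importance_index(ratings))  # prints 0.84
```

Ranking each questionnaire item by its RII is what lets the study identify which park attributes respondents weight most heavily.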
Procedia PDF Downloads 234
5487 Design of an Artificial Oil Body-Cyanogen Bromide Technology Platform for the Expression of Small Bioactive Peptide, Mastoparan B
Authors: Tzyy-Rong Jinn, Sheng-Kuo Hsieh, Yi-Ching Chung, Feng-Chia Hsieh
Abstract:
In this study, we attempted to develop a recombinant oleosin-based fusion expression strategy in Escherichia coli (E. coli), coupled with an artificial oil body (AOB)-cyanogen bromide technology platform, to produce bioactive mastoparan B (MP-B). As reported, the oleosin in the AOB system acts as a carrier (fused with the target protein), since oleosin possesses two amphipathic regions (at the N-terminus and C-terminus), which means that both the N-terminus and C-terminus of oleosin can be arranged on the surface of the AOB. Thus, a target protein fused to the N-terminus or C-terminus of oleosin is also exposed on the surface of the AOB, which greatly facilitates the subsequent separation and purification of the target protein from the AOB. In addition, oleosin, a unique structural protein of seed oil bodies, has the added advantage of helping the fused MP-B be expressed in inclusion bodies, which protects it from proteolytic degradation. In this work, MP-B was fused to the C-terminus of oleosin and then expressed in E. coli as an insoluble recombinant protein. As a consequence, we successfully developed a reliable recombinant oleosin-based fusion expression strategy in E. coli, coupled with the AOB-cyanogen bromide technology platform, to produce the small peptide MP-B. Taken together, this platform provides insight into the production of active MP-B, which will facilitate studies and applications of this peptide in the future. Keywords: artificial oil bodies, Escherichia coli, oleosin-fusion protein, mastoparan-B
Procedia PDF Downloads 451
5486 The Relationship between Procurement Strategies and Sustainability Outcomes: A Systematic Literature Review
Authors: Cathy T. Mpanga Kowet, Aghaegbuna Obinna U. Ozumba
Abstract:
This study examined and identified the inconsistencies, relationships, gaps and recurring themes in the literature regarding the relationship between the procurement strategies employed in construction projects for sustainable buildings and the realization of sustainability goals. A systematic literature review of studies on the relationship between various procurement strategies and the attainment of sustainability outcomes was conducted. Using specific search terms, papers published between 2002 and 2018 were identified and screened according to inclusion and exclusion criteria. Current findings reveal that, although the attainment of sustainability goals is achievable with both traditional and contemporary procurement strategies, only projects delivered using modern procurement strategies are capable of meeting and exceeding targeted sustainability objectives. However, the traditional procurement strategy remains the preferred method for most green building construction projects. The results suggest implications for decision makers in considering the impact of the selected procurement strategy on targeted sustainability goals in the early stages of sustainable building construction projects. The study shows that there is a gap between the reported appropriate procurement strategies and what is currently practiced. Theoretically, the study expands the literature on the adoption and diffusion of contemporary procurement strategies by consolidating existing studies to highlight current gaps. While the study is at the literature review stage, its deductions will serve as a basis for fieldwork involving empirical data. Keywords: green buildings construction, procurement method, procurement strategy, sustainability objectives, sustainability outcomes
Procedia PDF Downloads 172
5485 A Preliminary Literature Review of Digital Transformation Case Studies
Authors: Vesna Bosilj Vukšić, Lucija Ivančić, Dalia Suša Vugec
Abstract:
While struggling to succeed in today’s complex market environment and to provide better customer experience and services, enterprises embrace digital transformation as a means of reaching competitiveness and fostering value creation. A digital transformation process consists of information technology implementation projects, as well as organizational factors such as top management support, digital transformation strategy, and organizational changes. However, to the best of our knowledge, there is little evidence about digital transformation endeavors in organizations and how they perceive it – is it only about the adoption of digital technologies, or is a true organizational shift needed? In order to address this issue, and as the first step in our research project, a literature review was conducted. The analysis included case study papers from the Scopus and Web of Science databases. The following attributes are considered for the classification and analysis of papers: time component; country of case origin; case industry; and digital transformation concept comprehension, i.e., focus. The research showed that organizations – public as well as private ones – are aware of the necessity of change and undertake digital transformation projects. Also, the changes concerning digital transformation affect both manufacturing and service-based industries. Furthermore, we discovered that organizations understand that, besides technology implementation, organizational changes must also be adopted. However, with only 29 relevant papers identified, the research positions digital transformation as an unexplored and emerging phenomenon in information systems research. The scarcity of evidence-based papers calls for further examination of this topic in cases from practice. Keywords: digital strategy, digital technologies, digital transformation, literature review
Procedia PDF Downloads 218
5484 Automation of Finite Element Simulations for the Design Space Exploration and Optimization of Type IV Pressure Vessel
Authors: Weili Jiang, Simon Cadavid Lopera, Klaus Drechsler
Abstract:
The fuel cell vehicle has become the most competitive solution for the transportation sector in the hydrogen economy. The Type IV pressure vessel is currently the most popular and widely developed technology for on-board storage, based on its high reliability and relatively low cost. Due to the stringent requirements on mechanical performance, the pressure vessel requires a great amount of composite material, a major cost driver for hydrogen tanks. Evidently, the optimization of the composite layup design shows great potential in reducing the overall material usage, yet it requires a comprehensive understanding of the underlying mechanisms as well as the influence of different design parameters on mechanical performance. Given the materials and manufacturing processes by which Type IV pressure vessels are produced, their design and optimization are a nuanced subject. The manifold of possible stacking sequences and fiber orientation variations has an outstanding effect on vessel strength due to the anisotropic properties of carbon fiber composites, which makes the design space high dimensional. Each variation of design parameters requires computational resources. Using finite element analysis to evaluate different designs is the most common method; however, the modeling, setup and simulation process can be very time consuming and result in high computational cost. For this reason, it is necessary to build a reliable automation scheme to set up and analyze the diverse composite layups. In this research, the simulation process for different tank designs with various parameters is conducted and automated in the commercial finite element analysis framework Abaqus. Worth mentioning, the model of the composite overwrap is automatically generated using the Abaqus-Python scripting interface.
The prediction of the winding angle of each layer and the corresponding thickness variation in the dome region is the most crucial step of the modeling, and it is calculated and implemented using analytical methods. Subsequently, these different composite layups are simulated as axisymmetric models to reduce the computational complexity and calculation time. Finally, the results are evaluated and compared with regard to the ultimate tank strength. By automatically modeling, evaluating and comparing various composite layups, this system is applicable to the optimization of tank structures. As mentioned above, the mechanical properties of the pressure vessel are highly dependent on the composite layup, which requires a large number of simulations. Consequently, automating the simulation process provides a rapid way to compare various designs and indicate the optimum one. Moreover, this automation process can also be used to create a data bank of layups and corresponding mechanical properties, with few preliminary configuration steps, for further case analysis. Machine learning could subsequently be used to obtain the optimum directly from this data pool without running the simulation process. Keywords: type IV pressure vessels, carbon composites, finite element analysis, automation of simulation process
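For the dome winding angle mentioned in this abstract, a common analytical starting point in filament-winding design is Clairaut's relation for geodesic paths, sin(α) = r_polar / r. Whether the study uses exactly this relation is an assumption, so the sketch below is illustrative only:

```python
import math

def geodesic_winding_angle(r, r_polar):
    """Clairaut's relation for geodesic filament winding on a dome:
    sin(alpha) = r_polar / r, where r is the local shell radius and
    r_polar is the polar opening radius. Returns the angle in degrees.
    The angle approaches 90 degrees as the fiber path nears the opening.
    """
    return math.degrees(math.asin(r_polar / r))

# Hypothetical geometry: 25 mm polar opening, dome point at radius 50 mm.
print(round(geodesic_winding_angle(50.0, 25.0), 1))  # prints 30.0
```

Evaluating this relation along the dome meridian, together with a thickness model, is the kind of analytical step a layup-generation script can run before handing the geometry to the finite element model.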
Procedia PDF Downloads 135
5483 On the Accuracy of Basic Modal Displacement Method Considering Various Earthquakes
Authors: Seyed Sadegh Naseralavi, Sadegh Balaghi, Ehsan Khojastehfar
Abstract:
Time history seismic analysis is considered the most accurate method to predict the seismic demand of structures. On the other hand, its main deficiency is the computational time required to reach a result. When applied in an optimization process, in which the structure must be analyzed thousands of times, reducing the computational time of seismic analysis makes the optimization algorithms more practical. Approximate methods inevitably produce some error in comparison with exact time history analysis, but methods such as the Complete Quadratic Combination (CQC) and the Square Root of the Sum of Squares (SRSS) drastically reduce the computational time by combining the peak responses of each mode. In the present research, the Basic Modal Displacement (BMD) method is introduced and applied to the estimation of the seismic demand of a main structure. The seismic demand of the sampled structure is estimated from the modal displacements of a basic structure (for which the modal displacements have been calculated). Shear steel structures are selected as case studies. The error of the introduced method is calculated by comparing the estimated seismic demands with exact time history dynamic analysis. The efficiency of the proposed method is demonstrated by the application of three types of earthquakes (classified by the time of peak ground acceleration). Keywords: time history dynamic analysis, basic modal displacement, earthquake-induced demands, shear steel structures
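The SRSS combination mentioned in this abstract can be sketched in a few lines; the modal peak values below are hypothetical:

```python
import math

def srss(modal_peaks):
    """Square Root of the Sum of Squares combination of modal peak responses.

    Valid when modal frequencies are well separated; CQC adds cross-modal
    correlation terms for closely spaced modes.
    """
    return math.sqrt(sum(p * p for p in modal_peaks))

# Hypothetical peak displacements (cm) of the first three modes:
print(srss([3.0, 4.0, 12.0]))  # prints 13.0
```

A single evaluation like this replaces integrating the equations of motion over the full ground-motion record, which is why modal combination is so much cheaper than time history analysis inside an optimization loop.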
Procedia PDF Downloads 355
5482 Graffiti as Intelligence: An Analysis of Encoded Messages in Gang Graffiti Renderings
Authors: Timothy Kephart
Abstract:
Many law enforcement officials believe that gangs communicate messages to both the community and rival gangs through graffiti. Some social scientists have documented this as well; however, no recent research has examined gang graffiti for its underlying meaning. Empirical research on gang graffiti and gang communication through graffiti is limited. This research can be described as an exploratory effort to better understand how, and perhaps why, gangs employ this medium for communication. Furthermore, this research showcases how law enforcement agencies can utilize this hidden form of communication to better direct resources and address gang violence. Keywords: gangs, graffiti, juvenile justice, policing
Procedia PDF Downloads 439
5481 A New Multi-Target, Multi-Agent Search and Rescue Path Planning Approach
Authors: Jean Berger, Nassirou Lo, Martin Noel
Abstract:
Perfectly suited for natural or man-made emergency and disaster management situations such as floods, earthquakes, tornadoes, or tsunamis, multi-target search path planning for a team of rescue agents is known to be computationally hard, and most techniques developed so far fall short of successfully estimating the optimality gap. A novel mixed-integer linear programming (MIP) formulation is proposed to optimally solve the multi-target, multi-agent discrete search and rescue (SAR) path planning problem. Aimed at maximizing the cumulative probability of successful target detection, it captures anticipated feedback information associated with possible observation outcomes resulting from projected path execution, while modeling agent discrete actions over all possible moving directions. Problem modeling further takes advantage of a network representation to encompass decision variables, expedite compact constraint specification, and lead to substantial problem-solving speed-up. The proposed MIP approach uses the CPLEX optimization machinery, efficiently computing near-optimal solutions for practical-size problems, while giving a robust upper bound obtained from Lagrangian relaxation of the integrality constraints. Should a target eventually be positively detected during plan execution, a new problem instance would simply be reformulated from the current state and then solved over the next decision cycle. A computational experiment shows the feasibility and the value of the proposed approach. Keywords: search path planning, search and rescue, multi-agent, mixed-integer linear programming, optimization
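The objective in this abstract, cumulative probability of successful detection, is typically expressed as P = 1 − Π(1 − pᵢ) over the cells visited along a path. A toy evaluation of that objective (not the MIP formulation itself, and with hypothetical per-cell detection probabilities) is:

```python
def cumulative_detection(glimpse_probs):
    """Probability of at least one successful detection along a visit
    sequence: P = 1 - prod(1 - p_i), assuming independent glimpses.
    """
    p_miss = 1.0
    for p in glimpse_probs:
        p_miss *= (1.0 - p)
    return 1.0 - p_miss

# Hypothetical per-cell detection probabilities along one candidate path:
print(round(cumulative_detection([0.5, 0.5, 0.2]), 2))  # prints 0.8
```

The MIP searches over all feasible agent paths for the one maximizing this quantity; the sketch only shows how a single candidate path would be scored.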
Procedia PDF Downloads 371
5480 Development of a Bioprocess Technology for the Production of Vibrio midae, a Probiotic for Use in Abalone Aquaculture
Authors: Ghaneshree Moonsamy, Nodumo N. Zulu, Rajesh Lalloo, Suren Singh, Santosh O. Ramchuran
Abstract:
The abalone industry of South Africa is under severe pressure due to the illegal harvesting and poaching of this seafood delicacy. These abalones are harvested excessively; as a result, the animals do not have a chance to replace themselves in their habitats, resulting in a drastic decrease in natural stocks of abalone. Abalone has an extremely slow growth rate and takes approximately four years to reach a market-acceptable size; therefore, it was imperative to investigate methods to boost the overall growth rate and immunity of the animal. Research begun at the University of Cape Town (UCT) resulted in the isolation of two microorganisms from the gut of the abalone, a yeast isolate Debaryomyces hansenii and a bacterial isolate Vibrio midae, which were characterised for their probiotic abilities. This work resulted in an internationally competitive concept technology that was patented. The next stage of research was to develop a suitable bioprocess to enable commercial production. Numerous steps were taken to develop an efficient production process for V. midae, one of the isolates found by UCT. The initial stages of research involved the development of a stable and robust inoculum and the optimization of physiological growth parameters such as temperature and pH. A range of temperature and pH conditions were evaluated, and the data obtained revealed an optimum growth temperature of 30ᵒC and a pH of 6.5. Once these critical growth parameters were established, further media optimization studies were performed. Corn steep liquor (CSL) and high test molasses (HTM) were selected as suitable alternatives to more expensive, conventionally used growth medium additives.
The optimization of the CSL (6.4 g.l⁻¹) and HTM (24 g.l⁻¹) concentrations in the growth medium resulted in a 180% increase in cell concentration, a 5716-fold increase in cell productivity and a 97.2% decrease in the material cost of production in comparison to the conventional growth conditions and parameters used at the onset of the study. In addition, a stable, market-ready liquid probiotic product, encompassing the viable but not culturable (VBNC) state of Vibrio midae cells, was developed during the downstream processing aspect of the study. The demonstration of this technology at full manufacturing scale has further enhanced the attractiveness and commercial feasibility of this production process. Keywords: probiotics, abalone aquaculture, bioprocess technology, manufacturing scale technology development
Procedia PDF Downloads 152
5479 Development of a Plug-In Hybrid Powertrain System with Double Continuously Variable Transmissions
Authors: Cheng-Chi Yu, Chi-Shiun Chiou
Abstract:
This study developed a plug-in hybrid powertrain system consisting of two continuously variable transmissions. By matching the engine, motor, generator, and dual continuously variable transmissions, this integrated power system can take advantage of each component. The hybrid vehicle can be driven by the internal combustion engine or the electric motor alone, or by the two power sources together when the vehicle is under hard acceleration or high load. The energy management of this integrated hybrid system controls the power sources based on a rule-based control strategy to achieve better fuel economy. When the vehicle's driving power demand is low, the internal combustion engine would operate in its low-efficiency region, so the engine is shut down and the vehicle is driven by the motor only. When the driving power demand is high, the internal combustion engine operates in its high-efficiency region, and the vehicle is driven by the engine. This strategy operates the internal combustion engine only in its optimal efficiency region to improve fuel economy. In this research, the vehicle simulation model was built in the MATLAB/Simulink environment. The analysis results showed that the power-coupled efficiency of the hybrid powertrain system with dual continuously variable transmissions was better than that of the Honda hybrid system on the market. Keywords: plug-in hybrid power system, fuel economy, performance, continuously variable transmission
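The rule-based strategy described in this abstract can be sketched as a simple threshold controller; the power bands below are illustrative assumptions, not the study's calibrated values:

```python
def select_power_source(p_demand_kw, p_engine_min_kw, p_engine_max_kw):
    """Toy rule-based energy management: run the motor alone below the
    engine's efficient band, the engine alone inside it, and both sources
    together above it (hard acceleration / high load).
    """
    if p_demand_kw < p_engine_min_kw:
        return "motor"
    if p_demand_kw <= p_engine_max_kw:
        return "engine"
    return "engine+motor"

# Hypothetical efficient band of 15-60 kW for the engine:
for demand in (5.0, 30.0, 80.0):
    print(demand, select_power_source(demand, p_engine_min_kw=15.0, p_engine_max_kw=60.0))
```

In the study's simulation model, rules of this kind would be evaluated at each time step of a drive cycle, with the thresholds chosen from the engine's measured efficiency map.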
Procedia PDF Downloads 289
5478 Quantum Statistical Machine Learning and Quantum Time Series
Authors: Omar Alzeley, Sergey Utev
Abstract:
Minimizing a constrained multivariate function is fundamental to machine learning, and such algorithms are at the core of data mining and data visualization techniques. The decision function that maps input points to output points is based on the result of optimization, and this optimization is central to learning theory. One approach to complex systems, in which the dynamics of the system are inferred from a statistical analysis of the fluctuations in time of some associated observable, is time series analysis. The purpose of this paper is a mathematical transition from the autoregressive model of classical time series to the matrix formalization of quantum theory. Firstly, we propose a quantum time series (QTS) model. Although the Hamiltonian technique has become an established tool to detect deterministic chaos, other approaches have emerged, and the quantum probabilistic technique is used to motivate the construction of our QTS model. The QTS model resembles the quantum dynamic model that has been applied to financial data. Secondly, various statistical methods, including machine learning algorithms such as the Kalman filter, are applied to estimate and analyse the unknown parameters of the model. Finally, simulation techniques such as Markov chain Monte Carlo have been used to support our investigations. The proposed model has been examined using real and simulated data. We establish the relation between quantum statistical machine learning and quantum time series via random matrix theory. It is interesting to note that the primary focus of applying QTS in the field of quantum chaos was to find a model that explains chaotic behaviour; perhaps this model will reveal further insight into quantum chaos. Keywords: machine learning, simulation techniques, quantum probability, tensor product, time series
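A scalar Kalman filter of the kind mentioned in this abstract, written here for a plain AR(1)-style state model rather than the paper's quantum formulation, can be sketched as follows; all noise variances are illustrative:

```python
def kalman_1d(observations, a=1.0, q=1e-3, r=0.5, x0=0.0, p0=1.0):
    """Scalar Kalman filter for x_t = a * x_{t-1} + w_t, y_t = x_t + v_t,
    where w_t ~ N(0, q) and v_t ~ N(0, r). Returns the filtered state
    estimates; parameter values here are illustrative only.
    """
    x, p = x0, p0
    estimates = []
    for y in observations:
        # Predict step: propagate the state and its variance.
        x = a * x
        p = a * a * p + q
        # Update step: blend prediction and observation via the Kalman gain.
        k = p / (p + r)
        x = x + k * (y - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

est = kalman_1d([1.2, 0.9, 1.1, 1.0])
print(len(est))  # prints 4
```

In a parameter-estimation setting, the same recursion is run inside a likelihood evaluation so that the unknown model parameters (here a, q, r) can be fitted to the observed series.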
Procedia PDF Downloads 469
5477 Distant Speech Recognition Using Laser Doppler Vibrometer
Authors: Yunbin Deng
Abstract:
Most existing applications of automatic speech recognition rely on cooperative subjects at a short distance from a microphone. Standoff speech recognition using microphone arrays can extend the subject-to-sensor distance somewhat, but it is still limited to only a few feet. As such, most deployed applications of standoff speech recognition are limited to indoor use at short range. Moreover, these applications require an air passage between the subject and the sensor to achieve a reasonable signal-to-noise ratio. This study reports long-range (50 feet) automatic speech recognition experiments using a Laser Doppler Vibrometer (LDV) sensor. The study shows that the LDV sensor modality can extend the speech acquisition standoff distance far beyond microphone arrays, to hundreds of feet. In addition, the LDV enables 'listening' through windows for uncooperative subjects. This enables new capabilities in automatic audio and speech intelligence, surveillance, and reconnaissance (ISR) for law enforcement, homeland security and counter-terrorism applications. The Polytec LDV model OFV-505 is used in this study. To investigate the impact of different vibrating materials, five parallel LDV speech corpora, each consisting of 630 speakers, were collected from the vibrations of a glass window, a metal plate, a plastic box, a wood slat, and a concrete wall. These are common materials the application could encounter in daily life. These data were compared with their microphone counterparts to manifest the impact of the various materials on the spectrum of the LDV speech signal. State-of-the-art deep neural network modeling approaches are used to conduct continuous, speaker-independent speech recognition on these LDV speech datasets. Preliminary phoneme recognition results using time-delay neural networks, bi-directional long short-term memory, and model fusion show great promise for using the LDV for long-range speech recognition.
To the authors’ best knowledge, this is the first time an LDV has been reported for a long-distance speech recognition application. Keywords: covert speech acquisition, distant speech recognition, DSR, laser Doppler vibrometer, LDV, speech intelligence surveillance and reconnaissance, ISR
Procedia PDF Downloads 179
5476 Simulation of Wet Scrubbers for Flue Gas Desulfurization
Authors: Anders Schou Simonsen, Kim Sorensen, Thomas Condra
Abstract:
Wet scrubbers are used for flue gas desulfurization by injecting water directly into the flue gas stream from a set of sprayers. The water droplets flow freely inside the scrubber and flow down along the scrubber walls as a thin wall film while reacting with the gas phase to remove SO₂. This complex multiphase phenomenon can be divided into three main contributions: the continuous gas phase, the liquid droplet phase, and the liquid wall film phase. This study proposes a complete model in which all three main contributions are taken into account and resolved, using OpenFOAM for the continuous gas phase and MATLAB for the liquid droplet and wall film phases. The 3D continuous gas phase is composed of five species: CO₂, H₂O, O₂, SO₂, and N₂, which are resolved along with momentum, energy, and turbulence. Source terms are present for four species, energy and momentum, which affect the steady-state solution. The liquid droplet phase experiences breakup, collisions, dynamics, internal chemistry, evaporation and condensation, species mass transfer, energy transfer and wall film interactions. Numerous sub-models have been implemented and coupled to realise the above-mentioned phenomena. The liquid wall film experiences impingement, acceleration, atomization, separation, internal chemistry, evaporation and condensation, species mass transfer, and energy transfer, all of which have been resolved using numerous sub-models as well. The continuous gas phase has been coupled with the liquid phases through source terms, with the two software packages coupled using a link structure. The complete CFD model has been verified using 16 experimental tests from an existing scrubber installation, where a gradient-based pattern search optimization algorithm has been used to tune numerous model parameters to match the experimental results.
The CFD model needed to be fast to evaluate in order to apply this optimization routine, as approximately 1000 simulations were required. The results show that the complex multiphase phenomena governing wet scrubbers can be resolved in a single model. The optimization routine was able to tune the model to accurately predict the performance of an existing installation. Furthermore, the study shows that a coupling between OpenFOAM and MATLAB is realizable, where the data and source term exchange increases the computational requirements by approximately 5%. This allows the benefits of both software programs to be exploited. Keywords: desulfurization, discrete phase, scrubber, wall film
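A generic derivative-free pattern search loop of the kind used for the parameter tuning in this abstract can be sketched as follows; this is a compass-search variant applied to a stand-in least-squares objective, and the study's exact algorithm may differ:

```python
def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=1000):
    """Minimal compass/pattern search: probe +/- step along each coordinate,
    move to any improving point, otherwise halve the step until it falls
    below tol. Derivative-free, so f may be an expensive black-box model run.
    """
    x = list(x0)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = x[:]
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5
            if step < tol:
                break
    return x, fx

# Tune two "model parameters" to match a target (stand-in objective):
best_x, best_f = pattern_search(lambda v: (v[0] - 3.0) ** 2 + (v[1] + 1.0) ** 2,
                                [0.0, 0.0])
print(round(best_x[0], 3), round(best_x[1], 3))  # prints 3.0 -1.0
```

In the scrubber study the objective would instead be the misfit between the roughly 1000 CFD runs and the 16 experimental tests, which is why a cheap-to-evaluate model matters so much for this loop.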
Procedia PDF Downloads 264