Search results for: process capability indices
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 16992

16422 The Optimization of Immobilization Conditions for Biohydrogen Production from Palm Industry Wastewater

Authors: A. W. Zularisam, Sveta Thakur, Lakhveer Singh, Mimi Sakinah Abdul Munaim

Abstract:

Clostridium sp. LS2 was immobilised by entrapment in polyethylene glycol (PEG) gel beads to improve the biohydrogen production rate from palm oil mill effluent (POME). We sought to explore and optimise the hydrogen production capability of the immobilised cells by studying the conditions for cell immobilisation, including PEG concentration, cell loading and curing times, as well as the effects of temperature and of K2HPO4 (500–2000 mg/L), NiCl2 (0.1–5.0 mg/L), FeCl2 (100–400 mg/L), and MgSO4 (50–200 mg/L) concentrations on the hydrogen production rate. The results showed that by optimising the PEG concentration (10% w/v), initial biomass (2.2 g dry weight), curing time (80 min) and temperature (37 °C), as well as the concentrations of K2HPO4 (2000 mg/L), NiCl2 (1 mg/L), FeCl2 (300 mg/L) and MgSO4 (100 mg/L), a maximum hydrogen production rate of 7.3 L/L-POME/day and a yield of 0.31 L H2/g chemical oxygen demand were obtained during continuous operation. We believe that this process may be potentially expanded for sustained and large-scale hydrogen production.

Keywords: hydrogen, polyethylene glycol, immobilised cell, fermentation, palm oil mill effluent

Procedia PDF Downloads 271
16421 Validation of Escherichia coli O157:H7 Inactivation on Apple-Carrot Juice Treated with Manothermosonication by Kinetic Models

Authors: Ozan Kahraman, Hao Feng

Abstract:

Non-linear survival models such as the Weibull, Modified Gompertz, Biphasic linear, and Log-logistic models have been proposed to describe non-linear inactivation kinetics and have been fitted to inactivation data for several microorganisms treated by heat, high-pressure processing, or pulsed electric fields. In contrast, most ultrasonic inactivation studies have relied on first-order kinetic parameters (D-values and z-values) to describe the reduction in microbial survival counts. This study analyzed Escherichia coli O157:H7 inactivation data obtained with manothermosonication (MTS) using five microbial survival models (First-order, Weibull, Modified Gompertz, Biphasic linear, and Log-logistic) fitted to the inactivation curves. The residual sum of squares and the total sum of squares were used as evaluation criteria. The statistical indices of the kinetic models were computed for E. coli O157:H7 inactivation by MTS at three temperatures (40, 50, and 60 °C) and three pressures (100, 200, and 300 kPa). Based on the statistical indices and visual inspection, the Weibull and Biphasic models fitted the MTS data best, as shown by high R² values; the Modified Gompertz, First-order, and Log-logistic models provided no better fit. The data did not follow first-order kinetics, possibly because cells sensitive to ultrasound were inactivated first, producing a fast initial inactivation period, while cells resistant to ultrasound were killed more slowly. The Weibull and Biphasic models were therefore found to be more flexible for describing the survival curves of E. coli O157:H7 treated by MTS in apple-carrot juice.
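
As a rough illustration of the model-fitting step described above, the sketch below fits the Weibull survival model (in the common Mafart form log10(N/N0) = -(t/delta)^p) with SciPy. The treatment times, log reductions, and initial guesses are invented placeholders, not the study's measurements.

```python
# A minimal sketch of fitting the Weibull survival model to inactivation
# data with SciPy. The times and log-reductions below are illustrative
# placeholders, not the study's measurements.
import numpy as np
from scipy.optimize import curve_fit

def weibull_survival(t, delta, p):
    """Mafart-style Weibull model: log10(N_t / N_0) = -(t / delta)**p."""
    return -(t / delta) ** p

# Hypothetical treatment times (min) and measured log10 survival ratios.
t = np.array([0.5, 1, 2, 3, 4, 5, 6])
log_s = np.array([-0.3, -0.8, -1.6, -2.2, -2.7, -3.1, -3.4])

(delta, p), _ = curve_fit(weibull_survival, t, log_s, p0=(1.0, 1.0))

# Goodness of fit via residual and total sums of squares, as in the study.
residuals = log_s - weibull_survival(t, delta, p)
ss_res = np.sum(residuals ** 2)
ss_tot = np.sum((log_s - log_s.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(f"delta = {delta:.2f} min, p = {p:.2f}, R^2 = {r_squared:.3f}")
```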

Keywords: Weibull, Biphasic, MTS, kinetic models, E. coli O157:H7

Procedia PDF Downloads 366
16420 Integrating Data Mining within a Strategic Knowledge Management Framework: A Platform for Sustainable Competitive Advantage within the Australian Minerals and Metals Mining Sector

Authors: Sanaz Moayer, Fang Huang, Scott Gardner

Abstract:

In the highly leveraged business world of today, an organisation’s success depends on how it can manage and organise its traditional and intangible assets. In the knowledge-based economy, knowledge as a valuable asset gives enduring capability to firms competing in rapidly shifting global markets. It can be argued that the ability to create unique knowledge assets by configuring ICT and human capabilities will be a defining factor for international competitive advantage in the mid-21st century. The concept of knowledge management (KM) is recognised in the strategy literature, and increasingly by senior decision-makers (particularly in large firms, which can achieve scalable benefits), as an important vehicle for stimulating innovation and organisational performance in the knowledge economy. This thinking has been evident in professional services and other knowledge-intensive industries for over a decade. It highlights the importance of social capital and the value of the intellectual capital embedded in social and professional networks, complementing the traditional focus on the creation of intellectual property assets. Despite the growing interest in KM within professional services, there has been limited discussion in relation to multinational resource-based industries such as mining and petroleum, where the focus has been principally on global portfolio optimisation with economies of scale, process efficiencies, and cost reduction. The Australian minerals and metals mining industry, although traditionally viewed as capital-intensive, employs a significant number of knowledge workers, notably engineers, geologists, highly skilled technicians, and legal, finance, accounting, ICT, and contracts specialists working in projects or functions, representing potential knowledge silos within the organisation. This silo effect arguably inhibits knowledge sharing and retention by disaggregating corporate memory, with increased operational and project continuity risk. It may also limit the potential for process, product, and service innovation. In this paper, the strategic application of knowledge management incorporating contemporary ICT platforms and data mining practices is explored as an important enabler for knowledge discovery, reduction of risk, and retention of corporate knowledge in resource-based industries. With reference to the relevant strategy, management, and information systems literature, this paper highlights possible connections (currently undergoing empirical testing) between a Strategic Knowledge Management (SKM) framework incorporating supportive Data Mining (DM) practices and competitive advantage for multinational firms operating within the Australian resource sector. Based on a review of the relevant literature, we also propose that more effective management of soft and hard systems knowledge is crucial for major Australian firms in all sectors seeking to improve organisational performance through the human and technological capability captured in organisational networks.

Keywords: competitive advantage, data mining, mining organisation, strategic knowledge management

Procedia PDF Downloads 415
16419 The Financial and Metallurgical Benefits of Niobium Grain Refined As-Rolled 460 MPa H-Beam to the Construction Industry in SE Asia

Authors: Michael Wright, Tiago Costa

Abstract:

The construction industry in SE Asia has relied on S355 MPa “as rolled” H-beams for many years. It is an easily sourced, metallurgically simple, reliable product that all designers, fabricators and constructors are familiar with. However, as the global demand to better use our finite resources grows stronger, the need for an as-rolled S460 MPa H-beam is becoming more apparent. The financial benefits of an “as-rolled” S460 MPa H-beam are obvious: the S460 MPa beam currently available and used is fabricated from rolled strip, and making an H-beam from three 460 MPa strips requires costly equipment, valuable welding skills and production time, all of which can be in short supply or better used for other purposes. The metallurgical benefit of an “as-rolled” S460 MPa H-beam is consistency in the product. Fabricated H-beams have inhomogeneous areas where the strips have been welded together, with parent metal, heat-affected zone and weld metal all in the one body. They also rely heavily on the skill of the welder to guarantee a perfect, defect-free weld; if this does not occur, the beam is intrinsically flawed and could fail in service. An as-rolled beam is a relatively homogeneous product, with the optimum strength and ductility produced by delivering steel with as fine and uniform a cross-sectional grain size as possible. This is done by cost-effective alloy design coupled with proper metallurgical process control implemented within an existing mill’s equipment capability and layout. This paper is designed to highlight the benefits of bringing an “as-rolled” S460 MPa H-beam to the construction marketplace in SE Asia, and hopefully to encourage the current “as-rolled” H-beam producers to rise to the challenge and produce an innovative, high-quality product for the local market.

Keywords: fine grained, as-rolled, long products, process control, metallurgy

Procedia PDF Downloads 300
16418 Regression Approach for Optimal Purchase of Hosts Cluster in Fixed Fund for Hadoop Big Data Platform

Authors: Haitao Yang, Jianming Lv, Fei Xu, Xintong Wang, Yilin Huang, Lanting Xia, Xuewu Zhu

Abstract:

Given a fixed fund, purchasing fewer hosts of higher capability or, inversely, more hosts of lower capability is a trade-off that must be made in practice when building a Hadoop big data platform. An exploratory study is presented for a Housing Big Data Platform project (HBDP), where typical big data computing consists of SQL queries with aggregates, joins, and space-time condition selections executed on massive data from more than 10 million housing units. In HBDP, an empirical formula was introduced to predict the performance of potential host clusters for the intended big data computing, and it was shaped via a regression approach. With this empirical formula, it is easy to suggest an optimal cluster configuration. The investigation was based on a typical Hadoop computing ecosystem, HDFS+Hive+Spark. A proper metric was defined to measure the performance of Hadoop clusters in HBDP, which was tested and compared with its predicted counterpart on three kinds of typical SQL query tasks. Tests were conducted with respect to the factors of CPU benchmark, memory size, virtual host division, and the number of physical hosts in the cluster. The research has been applied to practical cluster procurement for housing big data computing.
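
A minimal sketch of the regression step, assuming a simple linear form: the feature set (CPU benchmark, memory size, host count) follows the factors listed above, but the sample figures and the linear model choice are illustrative assumptions, not the paper's actual empirical formula.

```python
# A minimal sketch of shaping an empirical performance formula by
# regression, in the spirit of the approach described above. The feature
# names and sample numbers are invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

# Each row: [CPU benchmark score, memory size (GB), number of physical hosts]
X = np.array([
    [1200, 64, 4],
    [1200, 128, 4],
    [1500, 64, 6],
    [1500, 128, 8],
    [1800, 256, 8],
])
# Measured metric, e.g., mean completion time (s) of a typical SQL query task.
y = np.array([310.0, 265.0, 240.0, 190.0, 150.0])

model = LinearRegression().fit(X, y)

# Predict the metric for a candidate cluster configuration, so that
# alternative purchases under a fixed fund can be compared.
candidate = np.array([[1500, 256, 6]])
print("predicted completion time:", model.predict(candidate)[0])
```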

Keywords: Hadoop platform planning, optimal cluster scheme at fixed-fund, performance predicting formula, typical SQL query tasks

Procedia PDF Downloads 232
16417 Spring Water Quality Appraisement for Drinking and Irrigation Application in Nigeria: A Multi-Criteria Approach

Authors: Hillary Onyeka Abugu, Valentine Chinakwugwo Ezea, Janefrances Ngozi Ihedioha, Nwachukwu Romanus Ekere

Abstract:

The study assessed spring water quality in Igbo-Etiti, Nigeria, for drinking and irrigation application using physico-chemical parameters, a water quality index, mineral and trace elements, pollution indices, and risk assessment. Standard methods were used to determine the physico-chemical properties of the spring water in the rainy and dry seasons. Trace metals such as Pb, Cd, Zn and Cu were determined with an atomic absorption spectrophotometer. The results showed that most of the physico-chemical properties studied were within the guideline values set by the Nigeria Standard for Drinking Water Quality (NSDWQ), WHO and US EPA for drinking water purposes. However, the pH of all the spring water (4.27-4.73 and 4.95-5.73), lead (Pb) (0.01-1.08 mg/L) and cadmium (Cd) (0.01-0.15 mg/L) concentrations were outside the guideline values in both seasons. This could be attributed to the lithology of the study area, the Nsukka formation: leaching of lead and sulphides from the embedded coal deposits could have raised lead levels and made the water acidic. Two-way ANOVA showed significant differences in most of the parameters studied between dry and rainy seasons. Pearson correlation analysis and cluster analysis showed strong significant positive and negative correlations for some of the parameters studied in both seasons. The water quality index showed that none of the springs had excellent water status; one spring (Iyi Ase) had poor water status in the dry season and is considered unsafe for drinking. Iyi Ase was also considered unsuitable for irrigation application as predicted by most of the pollution indices, while the others were generally considered suitable for irrigation. Probable cancer and non-cancer risk assessment revealed a probable risk associated with the consumption of the spring water in the Igbo-Etiti area, Nigeria.
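
For illustration, the sketch below computes a weighted arithmetic water quality index, one common WQI formulation; the abstract does not specify which formulation was used, and the parameter set, guideline values, and sample measurements are invented.

```python
# A minimal sketch of a weighted arithmetic water quality index (WQI),
# one common formulation consistent with the kind of index used above.
# The parameters, guideline values, and measurements are illustrative only.

def wqi(measurements, standards, ideals):
    """WQI = sum(q_i * w_i) / sum(w_i), with unit weights w_i = k / S_i
    and quality ratings q_i = 100 * |C_i - I_i| / |S_i - I_i|."""
    k = 1.0  # proportionality constant
    total_w = total_qw = 0.0
    for name, c in measurements.items():
        s, ideal = standards[name], ideals[name]
        w = k / s
        q = 100.0 * abs(c - ideal) / abs(s - ideal)
        total_w += w
        total_qw += q * w
    return total_qw / total_w

# Hypothetical spring sample against WHO-style guideline values.
measured = {"pH": 4.7, "Pb": 0.05, "Cd": 0.01}      # mg/L except pH
standard = {"pH": 8.5, "Pb": 0.01, "Cd": 0.003}
ideal = {"pH": 7.0, "Pb": 0.0, "Cd": 0.0}

print(f"WQI = {wqi(measured, standard, ideal):.1f}")  # >100 suggests unsafe
```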

Keywords: water quality, pollution index, risk assessment, physico-chemical parameters

Procedia PDF Downloads 166
16416 Conceptualizing IoT Based Framework for Enhancing Environmental Accounting by ERP Systems

Authors: Amin Ebrahimi Ghadi, Morteza Moalagh

Abstract:

This research was carried out to find how a combination of IoT (Internet of Things) architecture and an ERP system can strengthen environmental accounting to incorporate both economic and environmental information. IoT devices (e.g., sensors, software, and other technologies) can be used across the company’s value chain, from raw material extraction through materials processing, manufacturing, distribution, use, repair, maintenance, and disposal or recycling (the cradle-to-grave model). The desired ERP software will then have the capability to track both midpoint and endpoint environmental impacts in a green supply chain system over the whole life cycle of a product. All these enable environmental accounting to calculate and analyze operational environmental impacts in real time, control costs, prepare for environmental legislation, and enhance decision-making. In this study, we developed a model of how to use IoT devices in life cycle assessment (LCA) to gather information on emissions, energy consumption, hazards, and wastes, to be processed in different modules of ERP systems in an integrated way for use in environmental accounting to achieve sustainability.

Keywords: ERP, environmental accounting, green supply chain, IoT, life cycle assessment, sustainability

Procedia PDF Downloads 172
16415 Linearization and Process Standardization of Construction Design Engineering Workflows

Authors: T. R. Sreeram, S. Natarajan, C. Jena

Abstract:

Civil engineering construction is a network of tasks with varying degrees of complexity, and streamlining and standardization are the only way to establish a systemic approach to design. While there are off-the-shelf tools such as AutoCAD that play a role in the realization of design, the repeatable process in which these tools are deployed is often ignored. The present paper addresses this challenge through a sustainable design process and effective standardization at all stages of the design workflow. This is demonstrated through a case study in the context of construction, and further improvement points are highlighted.

Keywords: system, lean, value stream, process improvement

Procedia PDF Downloads 123
16414 Simplified Measurement of Occupational Energy Expenditure

Authors: J. Wicks

Abstract:

Aim: To develop a simple methodology that allows heart rate (HR) data collected from inexpensive wearable devices to be expressed in a suitable format (METs) to quantitate occupational (and recreational) activity. Introduction: Assessment of occupational activity is commonly done by questionnaires combined with prescribed MET levels from a vast range of previously measured activities. However, for any individual, the intensity of performing a specific activity can vary significantly; ideally, objective measurement of individual activity is preferred. Though there is a wide range of HR recording devices, there is a distinct lack of methodology for processing the collected data to quantitate energy expenditure (EE). The HR index equation expresses METs in relation to relative HR, i.e., the ratio of activity HR to resting HR, and thus provides a simple utility for objective measurement of EE. Methods: During a typical occupational work period of approximately 8 hours, HR data was recorded using a Polar RS 400 wrist monitor. Recorded data was downloaded to a Windows PC, and non-HR data was stripped from the ASCII file using Notepad. The HR data was exported to a spreadsheet program and sorted by HR range into a histogram format. Three HRs were determined: a resting HR (the HR delimiting the lowest 30 minutes of recorded data), a mean HR, and a peak HR (the HR delimiting the highest 30 minutes of recorded data). HR indices were calculated (mean index equals mean HR divided by resting HR; peak index equals peak HR divided by resting HR), and the mean and peak indices were converted to METs using the HR index equation. Conclusion: Inexpensive HR recording devices can be utilized to make reasonable estimates of occupational (or recreational) EE suitable for large-scale demographic screening by utilizing the HR index equation. The intrinsic value of the HR index equation is that it is independent of factors that influence absolute HR, namely fitness, smoking, and beta-blockade.
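
A minimal sketch of the histogram-based workflow, assuming the commonly cited form of the HR index equation, METs = 6 x (HR/HRrest) - 5; the synthetic HR samples and sampling rate are invented for illustration.

```python
# A minimal sketch of the workflow described above: derive resting, mean,
# and peak HR from a shift's recording, then convert HR indices to METs.
# METs = 6 * (HR / HRrest) - 5 is the commonly cited HR index equation;
# the sample data and 5-second sampling interval are invented.
import numpy as np

def mets_from_hr_index(hr, hr_rest):
    return 6.0 * (hr / hr_rest) - 5.0

# Hypothetical HR samples recorded every 5 s over an 8-hour work shift.
rng = np.random.default_rng(0)
hr = rng.normal(95, 15, size=8 * 60 * 12).clip(55, 180)

samples_per_30min = 30 * 12                      # 12 samples per minute
hr_sorted = np.sort(hr)
rest_hr = hr_sorted[samples_per_30min]           # delimits lowest 30 min
peak_hr = hr_sorted[-samples_per_30min]          # delimits highest 30 min
mean_hr = hr.mean()

print(f"rest {rest_hr:.0f} bpm, mean {mean_hr:.0f} bpm, peak {peak_hr:.0f} bpm")
print(f"mean METs {mets_from_hr_index(mean_hr, rest_hr):.1f}, "
      f"peak METs {mets_from_hr_index(peak_hr, rest_hr):.1f}")
```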

Keywords: energy expenditure, heart rate histograms, heart rate index, occupational activity

Procedia PDF Downloads 296
16413 Elaboration of Ceramic Metal Accident Tolerant Fuels by Additive Manufacturing

Authors: O. Fiquet, P. Lemarignier

Abstract:

Additive manufacturing may find numerous applications in the nuclear industry, for the same reasons as in other industries: to enlarge design possibilities and performance and to develop fabrication methods as a flexible route for future innovation. Additive Manufacturing (AM) applications in the design of structural metallic components for reactors are already developed at a high Technology Readiness Level (TRL). In the case of a Pressurized Water Reactor using uranium oxide fuel pellets, which are ceramics, the transposition of already optimized AM processes to UO₂ remains a challenge, and progress remains slow because, to our best knowledge, only a few laboratories have the capability of developing processes applicable to UO₂. After the Fukushima accident, numerous research fields emerged with the study of ATF (Accident Tolerant Fuel) concepts, which aim to improve fuel behaviour. One item concerns increasing the pellet's thermal performance by, for example, adding a high-thermal-conductivity material to fissile UO₂. This additive phase may be metallic, and the end product then constitutes a CERMET composite. Innovative designs of an internal metallic framework are proposed based on predictive calculations. However, because the well-known reference pellet manufacturing methods impose many limitations, manufacturing such a composite remains an arduous task. Therefore, the AM process appears as a means of broadening the design possibilities of CERMET manufacturing. While the external form remains a standard cylindrical fuel pellet, the internal metallic design remains to be optimized based on process capabilities. The project also considers a limit of 10% metal by volume, a constraint imposed by neutron physics considerations. The AM technique chosen for this development is robocasting because of its simplicity and low-cost equipment. It remains, however, a challenge to adapt a ceramic 3D printing process to the fabrication of UO₂ fuel. The investigation starts with a surrogate material, and the optimization of the slurry feedstock is based on alumina. The paper will present the first printing of Al₂O₃-Mo CERMET and the expected transition from alumina-based ceramic to UO₂ CERMET.

Keywords: nuclear, fuel, CERMET, robocasting

Procedia PDF Downloads 68
16412 Crude Distillation Process Simulation Using Unisim Design Simulator

Authors: C. Patrascioiu, M. Jamali

Abstract:

The paper deals with the simulation of the crude distillation process using the Unisim Design simulator. The necessity of simulating this process is argued both by considerations related to the design of the crude distillation column and by considerations related to the design of advanced control systems. In order to use the Unisim Design simulator for the crude distillation process, the simulators used in Romania were identified and the PRO/II, HYSYS, and Aspen HYSYS simulators were analysed. This analysis allowed the authors to draw conclusions on successful crude oil modelling. A first aspect developed by the authors is the implementation of specific petroleum liquid-vapor equilibrium problems in the Unisim Design simulator. The second major element of the article is the development of the methodology and the simulation program for the crude distillation process using Unisim Design resources. The obtained results validate the proposed methodology and will allow dynamic simulation of the process.

Keywords: crude oil, distillation, simulation, Unisim Design, simulators

Procedia PDF Downloads 249
16411 The Study of Sintered Wick Structure of Heat Pipes with Excellent Heat Transfer Capabilities

Authors: Im-Nam Jang, Yong-Sik Ahn

Abstract:

In this study, a sintered wick was formed in a heat pipe by sintering a mixture of copper powder, with particle sizes of 100 μm and 200 μm, and a pore-forming agent. The heat pipe's thermal resistance, which affects its heat transfer efficiency, is determined during manufacturing by the powder type, the thickness of the sintered wick, and the filling rate of the working fluid. Heat transfer efficiency was then tested at various inclination angles (0°, 45°, 90°) to evaluate the performance of the heat pipes. Regardless of the filling amount and test angle, the 200 μm copper powder exhibited superior heat transfer efficiency compared to the 100 μm type. After analyzing heat transfer performance at filling rates between 20% and 50%, it was determined that the heat pipe's optimal heat transfer capability occurred at a working fluid filling rate of 30%. The width of the wick was directly related to the heat transfer performance.

Keywords: heat pipe, heat transfer performance, effective pore size, capillary force, sintered wick

Procedia PDF Downloads 64
16410 Land Lots and Shannon-Wiener Index in Sarpolzahab Agro Ecosystems-Western Iran

Authors: Ashkan Asgari, Korous Khoshbakht, Saeid Soufizadeh

Abstract:

Various factors, including land lots, can affect biodiversity indices in agricultural systems. A field study was conducted to evaluate factors affecting crop diversity in Sarpolzahab in 2012. The required data were collected through direct observation of farms and questionnaires; a total of 140 questionnaires were completed. SAS software was used to analyse the data, and the Ecological Methodology program was applied to calculate the Shannon-Wiener index. The results indicated that the average number of land lots per farmer was 2.78, varying from 2.2 in Rikhak Olia village to 4.31 in Golam Kaboud Olia village, which reflects the small size of land lots caused by the division of larger lots among the children of deceased farmers. The correlation between the number of land lots and species biodiversity (0.308**) was significant, as was the correlation with the Shannon-Wiener index (0.262**). Therefore, one can assume that an increase in the number of land lots improves the target index: multiple land lots allow farmers to cultivate various crops, increasing crop biodiversity in the agro-ecosystem. This increase, in turn, facilitates the economic sustainability of the farmers and the distribution of the workforce in the region throughout the year. The correlations of seasonal workers with crop species biodiversity (0.256**) and the Shannon-Wiener index (0.286**) were also statistically significant; increasing the number of seasonal workers improved crop biodiversity and reduced the dominance of single-crop farming systems. Vegetable farms, which have significant diversity, require a large workforce, which explains the correlation between the number of workers and species diversity.
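
For illustration, the Shannon-Wiener index H' = -sum(p_i ln p_i) can be computed from crop abundances as in the sketch below; the crops and areas are invented, not survey data from Sarpolzahab.

```python
# A minimal sketch of the Shannon-Wiener diversity index,
# H' = -sum(p_i * ln(p_i)), computed from hypothetical crop areas on one
# farm; the crops and areas are invented for illustration.
import math

def shannon_wiener(abundances):
    total = sum(abundances)
    return -sum((n / total) * math.log(n / total) for n in abundances if n > 0)

# Area (ha) under each crop on a hypothetical multi-lot farm.
crop_areas = {"wheat": 2.0, "chickpea": 1.0, "vegetables": 0.5, "maize": 0.5}
h = shannon_wiener(crop_areas.values())
print(f"H' = {h:.2f}")  # higher H' means more diverse cropping
```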

Keywords: agricultural systems, biodiversity indices, Shannon-Wiener index, sustainability, rural

Procedia PDF Downloads 538
16409 A Sustainable Pt/BaCe₁₋ₓ₋ᵧZrₓGdᵧO₃ Catalyst for Dry Reforming of Methane-Derived from Recycled Primary Pt

Authors: Alessio Varotto, Lorenzo Freschi, Umberto Pasqual Laverdura, Anastasia Moschovi, Davide Pumiglia, Iakovos Yakoumis, Marta Feroci, Maria Luisa Grilli

Abstract:

Dry reforming of methane (DRM) is considered one of the most valuable technologies for greenhouse gas valorization, since this reaction yields syngas, a mixture of H₂ and CO in an H₂/CO ratio suitable for the Fischer-Tropsch synthesis of high-value-added chemicals and fuels. Challenges of the DRM process include the costs arising from the high process temperature and the expensive precious metals in the catalyst, the sintering of metal particles, and carbon deposition on the catalyst surface. The aim of this study is to demonstrate the feasibility of synthesizing catalysts using a leachate solution containing Pt coming directly from the recovery of spent diesel oxidation catalysts (DOCs), without further purification. An unusual perovskite support for DRM, BaCe₁₋ₓ₋ᵧZrₓGdᵧO₃ (BCZG), was chosen as the catalyst support because of its high thermal stability and its capability to produce oxygen vacancies, which suppress carbon deposition and enhance the catalytic activity of the catalyst. BCZG perovskite was synthesized by a sol-gel modified Pechini process and calcined in air at 1100 °C. BCZG supports were impregnated with a Pt-containing leachate solution of DOC, obtained by a mild hydrometallurgical recovery process, as reported elsewhere by some of the authors of this manuscript. For comparison, a synthetic solution obtained by digesting commercial Pt-black powder in aqua regia was also used for BCZG support impregnation. The Pt nominal content was 2% in both BCZG-based catalysts formed from the real and synthetic solutions. The structure and morphology of the catalysts were characterized by X-Ray Diffraction (XRD) and Scanning Electron Microscopy (SEM). Thermogravimetric Analysis (TGA) was used to study the thermal stability of the catalyst samples, and Brunauer-Emmett-Teller (BET) analysis showed a high surface area of the catalysts. H₂-TPR (Temperature Programmed Reduction) analysis was used to study hydrogen consumption during reduction, and it was combined with H₂-TPD characterization to study the dispersion of Pt on the support surface and to calculate the number of active sites provided by the precious metal. The dry reforming of methane reaction, carried out in a fixed-bed reactor, showed high conversion efficiencies for CO₂ and CH₄. At 850 °C, CO₂ and CH₄ conversions were close to 100% for the catalyst obtained with the aqua regia-based solution of commercial Pt-black, and ~70% (CH₄) and ~80% (CO₂) for the real HCl-based leachate solution. H₂/CO ratios were ~0.9 and ~0.7 in the former and latter cases, respectively. As far as we know, this is the first pioneering work in which a BCZG catalyst and a real Pt-containing leachate solution have been successfully employed for the DRM reaction.

Keywords: dry reforming of methane, perovskite, PGM, recycled Pt, syngas

Procedia PDF Downloads 37
16408 Revolutionizing Financial Forecasts: Enhancing Predictions with Graph Convolutional Networks (GCN) - Long Short-Term Memory (LSTM) Fusion

Authors: Ali Kazemi

Abstract:

In volatile and interconnected global financial markets, accurately predicting market trends holds substantial value for traders and financial institutions. Traditional machine learning methods have made significant strides in forecasting market movements; however, the complex and networked nature of financial data calls for more sophisticated approaches. This study presents a method for financial market prediction that leverages the synergistic capability of Graph Convolutional Networks (GCNs) and Long Short-Term Memory (LSTM) networks. The proposed algorithm is designed to forecast the trends of stock market indices and cryptocurrency prices, utilizing a comprehensive dataset spanning January 1, 2015, to December 31, 2023. This era, marked by considerable volatility and transformation in financial markets, provides a solid basis for training and testing the predictive model. The algorithm integrates diverse data to construct a dynamic financial graph that reflects market intricacies. Daily opening, closing, high, and low prices are collected for key stock market indices (e.g., S&P 500, NASDAQ) and major cryptocurrencies (e.g., Bitcoin, Ethereum), ensuring a holistic view of market trends. Daily trading volumes are also incorporated to capture market activity and liquidity, providing critical insight into buying and selling dynamics. Furthermore, recognizing the profound influence of the macroeconomic environment on financial markets, key macroeconomic indicators (interest rates, inflation rates, GDP growth, and unemployment rates) are integrated into the model. The GCN learns the relational patterns among financial instruments, represented as nodes in a comprehensive market graph; edges encapsulate relationships based on co-movement patterns and sentiment correlations, enabling the model to grasp the complex network of influences governing market movements. Complementing this, the LSTM is trained on sequences of the spatial-temporal representation learned by the GCN, enriched with historical price and volume data, allowing it to capture and predict temporal market trends accurately. In a comprehensive evaluation across the stock market and cryptocurrency datasets, the GCN-LSTM model showed superior predictive accuracy and profitability compared to conventional and alternative machine learning benchmarks. Specifically, the model achieved a Mean Absolute Error (MAE) of 0.85%, indicating high precision in predicting daily price movements, and an RMSE of 1.2%, underscoring its effectiveness in minimizing large prediction errors, which is vital in volatile markets. When assessing directional market movements, it achieved an accuracy of 78%, significantly outperforming the benchmark models, which averaged 65%; this degree of accuracy is instrumental for strategies that predict the direction of price moves. This study showcases the efficacy of combining graph-based and sequential deep learning in financial market prediction and highlights the value of a comprehensive, data-driven analysis framework. The findings promise to inform investment strategies and risk management practices, offering investors and financial analysts a powerful tool to navigate the complexities of modern financial markets.
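
A minimal PyTorch sketch of the GCN-to-LSTM pattern described above, under simplifying assumptions: a single graph convolution layer, a random adjacency matrix, and toy dimensions; the paper's actual architecture, graph construction, and data are not reproduced.

```python
# A minimal sketch of the GCN -> LSTM pattern: a graph convolution mixes
# information across related instruments at each time step, and an LSTM
# models the resulting sequence. Sizes and the random adjacency are
# illustrative assumptions only.
import torch
import torch.nn as nn

class GCNLSTM(nn.Module):
    def __init__(self, n_nodes, in_dim, gcn_dim, lstm_dim):
        super().__init__()
        self.gcn_weight = nn.Linear(in_dim, gcn_dim)
        self.lstm = nn.LSTM(n_nodes * gcn_dim, lstm_dim, batch_first=True)
        self.head = nn.Linear(lstm_dim, n_nodes)  # next-day return per node

    def forward(self, x, a_hat):
        # x: (batch, time, nodes, features); a_hat: normalized adjacency.
        b, t, n, f = x.shape
        h = torch.einsum("ij,btjf->btif", a_hat, x)  # neighbor aggregation
        h = torch.relu(self.gcn_weight(h))           # per-node projection
        h = h.reshape(b, t, -1)                      # flatten nodes per step
        out, _ = self.lstm(h)
        return self.head(out[:, -1])                 # predict from last step

# Toy example: 6 instruments, 30 days, 5 features (OHLC + volume).
n, t, f = 6, 30, 5
adj = (torch.rand(n, n) > 0.6).float()
adj = adj + adj.T + torch.eye(n)                     # symmetric, self-loops
deg = adj.sum(1)
a_hat = adj / torch.sqrt(deg[:, None] * deg[None, :])  # D^-1/2 A D^-1/2

model = GCNLSTM(n_nodes=n, in_dim=f, gcn_dim=8, lstm_dim=16)
x = torch.randn(2, t, n, f)                          # batch of 2 windows
print(model(x, a_hat).shape)                         # torch.Size([2, 6])
```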

Keywords: financial market prediction, graph convolutional networks (GCNs), long short-term memory (LSTM), cryptocurrency forecasting

Procedia PDF Downloads 65
16407 Implementation of Deep Neural Networks for Pavement Condition Index Prediction

Authors: M. Sirhan, S. Bekhor, A. Sidess

Abstract:

In-service pavements deteriorate with time due to traffic wheel loads, environment, and climate conditions. Pavement deterioration leads to a reduction in serviceability and structural behavior. Consequently, proper maintenance and rehabilitation (M&R) are necessary to keep the in-service pavement network at the desired level of serviceability. Due to resource and financial constraints, the pavement management system (PMS) prioritizes the roads most in need of M&R action and recommends a suitable action for each pavement based on the performance and surface condition of each road in the network. Pavement performance and condition are usually quantified and evaluated by different types of roughness-based and stress-based indices, such as the Pavement Serviceability Index (PSI), Pavement Serviceability Ratio (PSR), Mean Panel Rating (MPR), Pavement Condition Rating (PCR), Ride Number (RN), Profile Index (PI), International Roughness Index (IRI), and Pavement Condition Index (PCI). PCI is commonly used in PMS as an indicator of the extent of distresses on the pavement surface. PCI values range between 0 and 100, where 0 represents a highly deteriorated pavement and 100 a newly constructed one. The PCI value is a function of distress type, severity, and density (measured as a percentage of the total pavement area), and is usually calculated iteratively using the 'Paver' program developed by the US Army Corps of Engineers. The use of soft computing techniques, especially Artificial Neural Networks (ANN), has become increasingly popular in modeling engineering problems. ANN techniques have successfully modeled the performance of in-service pavements, due to their efficiency in predicting and solving non-linear relationships and in dealing with uncertain, large amounts of data. Typical regression models, which require a pre-defined relationship, can be replaced by an ANN, which has been found to be an appropriate tool for predicting the different pavement performance indices from different factors. The objective of the present study is therefore to develop and train an ANN model that predicts PCI values. The model's input consists of the percentage areas of 11 different damage types (alligator cracking, swelling, rutting, block cracking, longitudinal/transverse cracking, edge cracking, shoving, raveling, potholes, patching, and lane drop-off) at three severity levels (low, medium, high) each. The developed model was trained using 536,000 samples and tested on 134,000 samples, collected and prepared by the National Transport Infrastructure Company. The predicted results showed satisfactory agreement with field measurements. The proposed model predicted PCI values with relatively low standard deviations, suggesting that it could be incorporated into the PMS for PCI determination. It is worth mentioning that the most influential variables for PCI prediction are damages related to alligator cracking, swelling, rutting, and potholes.
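
A minimal sketch of such a network with scikit-learn, assuming 33 inputs (11 distress types x 3 severity levels, as percentage areas) and a synthetic target; the layer sizes and training data are illustrative, not the study's 536,000-sample dataset or its chosen topology.

```python
# A minimal sketch of an ANN mapping distress percentage areas to a PCI
# value in [0, 100]. Layer sizes and data are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
n_samples, n_features = 2000, 33          # 11 distress types x 3 severities

X = rng.uniform(0, 20, size=(n_samples, n_features))   # % of pavement area
# Synthetic target: PCI drops as weighted distress coverage grows.
weights = rng.uniform(0.5, 3.0, size=n_features)
pci = np.clip(100 - X @ weights / 20, 0, 100)

model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X[:1600], pci[:1600])

pred = model.predict(X[1600:])
print("mean abs error:", np.mean(np.abs(pred - pci[1600:])))
```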

Keywords: artificial neural networks, computer programming, pavement condition index, pavement management, performance prediction

Procedia PDF Downloads 137
16406 A Prediction of Cutting Forces Using Extended Kienzle Force Model Incorporating Tool Flank Wear Progression

Authors: Wu Peng, Anders Liljerehn, Martin Magnevall

Abstract:

In metal cutting, tool wear gradually changes the micro geometry of the cutting edge. Today there is a significant gap in understanding the impact these geometrical changes have on the cutting forces, which govern tool deflection and heat generation in the cutting zone. Accurate models and understanding of the interaction between the workpiece and the cutting tool lead to improved accuracy in simulating the cutting process. These simulations are useful in several application areas, e.g., optimization of insert geometry and machine tool monitoring. This study aims to develop an extended Kienzle force model to account for the effects that rake angle variations and tool flank wear have on the cutting forces. The starting point is cutting force measurements from orthogonal turning tests on pre-machined flanges with well-defined width, using triangular coated inserts to assure orthogonal conditions. The cutting forces were measured by a dynamometer for a set of three different rake angles, and wear progression was monitored during machining by an optical measuring collaborative robot. The method combines the measured cutting forces with the inserts' flank wear progression to extend the mechanistic cutting force model with flank wear as an input parameter. The adapted cutting force model is validated in a turning process with commercial cutting tools and shows significant capability in predicting cutting forces while accounting for tool flank wear and inserts with different rake angles. The results of this study suggest that the nonlinear effect of tool flank wear and the interaction between the workpiece and the cutting tool can be captured by the developed cutting force model.
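
For orientation, the classic Kienzle model is F_c = k_c1.1 * b * h^(1 - m_c); the sketch below adds a simple additive flank-wear term purely for illustration, since the paper's actual extension and coefficients are not given here.

```python
# A minimal sketch of the classic Kienzle cutting force model,
# F_c = k_c1.1 * b * h**(1 - m_c), extended here with a simple additive
# flank-wear term for illustration. The wear term and all coefficient
# values below are assumptions, not the paper's published parameters.

def kienzle_force(b, h, kc11, mc, vb=0.0, k_wear=0.0):
    """Cutting force in N.
    b: width of cut (mm), h: uncut chip thickness (mm),
    kc11: specific cutting force for b = h = 1 mm (N/mm^2),
    mc: Kienzle exponent, vb: flank wear land width (mm),
    k_wear: extra force per mm of wear land per mm of width (N/mm^2)."""
    sharp = kc11 * b * h ** (1.0 - mc)
    worn = k_wear * vb * b          # illustrative wear contribution
    return sharp + worn

# Hypothetical steel-turning values.
f_new = kienzle_force(b=2.0, h=0.2, kc11=1700.0, mc=0.25)
f_worn = kienzle_force(b=2.0, h=0.2, kc11=1700.0, mc=0.25, vb=0.3, k_wear=800.0)
print(f"sharp tool: {f_new:.0f} N, worn tool: {f_worn:.0f} N")
```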

Keywords: cutting force, Kienzle model, predictive model, tool flank wear

Procedia PDF Downloads 108
16405 Simulation-Based Optimization Approach for an Electro-Plating Production Process Based on Theory of Constraints and Data Envelopment Analysis

Authors: Mayada Attia Ibrahim

Abstract:

Evaluating and improving the electroplating production process is a key challenge for this type of process, which is influenced by several factors such as process parameters, process costs, and the production environment. Analyzing and optimizing all these factors together requires extensive analytical techniques that are not available in real-case industrial settings. This paper presents a practice-based framework for evaluating and optimizing some of the crucial factors that affect the costs and production times of this type of process: energy costs, material costs, and product flow times. Design of Experiments, Discrete-Event Simulation, and the Theory of Constraints were used, respectively, to identify the most significant factors affecting the production process, to simulate a real production line and recognize the effect of these factors, and to locate possible bottlenecks. Several scenarios are generated as corrective strategies for improving the production line. Following that, the input-oriented CCR Data Envelopment Analysis (DEA) model is used to evaluate the suggested scenarios and select the most efficient one.
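
A minimal sketch of the input-oriented CCR DEA model as a linear program with SciPy; the scenario inputs and outputs are invented for illustration.

```python
# A minimal sketch of the input-oriented CCR DEA envelopment model, solved
# as a linear program: minimize theta subject to
#   sum_j lambda_j * x_ij <= theta * x_io   (each input i)
#   sum_j lambda_j * y_rj >= y_ro           (each output r), lambda >= 0.
# The scenario data (inputs: energy cost, material cost; output:
# throughput) are invented.
import numpy as np
from scipy.optimize import linprog

X = np.array([[100, 80], [90, 95], [120, 60], [85, 85]], dtype=float)  # inputs
Y = np.array([[50], [55], [48], [60]], dtype=float)                    # outputs

def ccr_efficiency(o):
    n, m, s = X.shape[0], X.shape[1], Y.shape[1]
    c = np.zeros(1 + n); c[0] = 1.0              # variables: [theta, lambdas]
    A_ub, b_ub = [], []
    for i in range(m):                           # input constraints
        A_ub.append(np.r_[-X[o, i], X[:, i]]); b_ub.append(0.0)
    for r in range(s):                           # output constraints
        A_ub.append(np.r_[0.0, -Y[:, r]]); b_ub.append(-Y[o, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (1 + n))
    return res.fun                               # theta* = efficiency score

for o in range(X.shape[0]):
    print(f"scenario {o}: efficiency = {ccr_efficiency(o):.3f}")
```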

Keywords: electroplating process, simulation, design of experiment, performance optimization, theory of constraints, data envelopment analysis

Procedia PDF Downloads 97
16404 Treatment of Cutting Oily-Wastewater by Sono-Fenton Process: Experimental Approach and Combined Process

Authors: Pisut Painmanakul, Thawatchai Chintateerachai, Supanid Lertlapwasin, Nusara Rojvilavan, Tanun Chalermsinsuwan, Nattawin Chawaloesphonsiya, Onanong Larpparisudthi

Abstract:

Conventional coagulation, advanced oxidation processes (AOPs), and a combined process were evaluated and compared for their suitability to treat stabilized cutting-oil wastewater. A 90% efficiency was obtained from coagulation at an Al2(SO4)3 dosage of 150 mg/L and pH 7. On the other hand, the efficiencies of AOPs for 30 minutes of oxidation time were 10% for acoustic oxidation, 12% for acoustic oxidation with hydrogen peroxide, 76% for the Fenton process, and 92% for the sono-Fenton process. Achieving the highest AOP efficiencies for effective oil removal required large amounts of chemicals. Therefore, AOPs were studied as a post-treatment after the conventional separation process. The efficiency was considerable, as the effluent COD could pass the standard required for industrial wastewater discharge with less chemical and energy consumption.

Keywords: cutting oily-wastewater, advanced oxidation process, sono-Fenton, combined process

Procedia PDF Downloads 355
16403 Parametric Studies of Ethylene Dichloride Purification Process

Authors: Sh. Arzani, H. Kazemi Esfeh, Y. Galeh Zadeh, V. Akbari

Abstract:

Ethylene dichloride (EDC) is a colorless liquid with a chloroform-like smell. EDC belongs to the chlorinated hydrocarbons and is obtained by chlorinating ethylene gas. Its chemical formula is C2H4Cl2, and it is used as the main intermediate in VCM production. Therefore, the purification of EDC is an important petrochemical process. In this study, the EDC purification unit was simulated, and then validation was performed. Finally, the impact of process parameters on the degree of EDC purity was studied. The results showed that increasing the feed flow increases the impure components in the reflux, resulting in a decrease in EDC purity.

Keywords: ethylene dichloride, EDC, purification, simulation

Procedia PDF Downloads 316
16402 Optimised Path Recommendation for a Real Time Process

Authors: Likewin Thomas, M. V. Manoj Kumar, B. Annappa

Abstract:

Traditional execution processes follow the path of execution drawn by the process analyst without observing the behaviour of resources and other real-time constraints. Identifying the process model, predicting the behaviour of resources, and recommending the optimal path of execution for a real-time process are challenging. The proposed AlfyMiner (αyMiner) gives a new dimension to process execution with two novel techniques, a Process Model Analyser (PMAMiner) and a Resource Behaviour Analyser (RBAMiner), for recommending the probable path of execution. PMAMiner discovers the next probable activity for the currently executing activity in an online process, using a variant-matching technique to identify the set of next probable activities, among which the next probable activity is selected using a decision tree model. RBAMiner identifies the resource suitable for performing the discovered next probable activity and observes its behaviour: load and performance using a polynomial regression model, and waiting time using queueing theory. Based on the observed behaviour, αyMiner recommends the probable path of execution, with the next probable activity and the best-suited resource for performing it. Experiments were conducted on process logs of the CoSeLoG Project, and 72% accuracy was obtained in identifying and recommending the next probable activity; the efficiency of resource performance was improved by 59% by decreasing resource load.
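
A minimal sketch of next-activity prediction with a decision tree, in the spirit of PMAMiner; the toy event log and the simple feature encoding (previous and current activity) are illustrative assumptions, not the published method's exact features.

```python
# A minimal sketch of next-activity prediction with a decision tree.
# The toy event log and feature encoding are invented for illustration.
from sklearn.preprocessing import LabelEncoder
from sklearn.tree import DecisionTreeClassifier

# Hypothetical traces: each is an ordered list of activities in one case.
traces = [
    ["register", "check", "approve", "notify"],
    ["register", "check", "reject", "notify"],
    ["register", "check", "approve", "notify"],
    ["register", "escalate", "approve", "notify"],
]

enc = LabelEncoder()
enc.fit([a for trace in traces for a in trace])

X, y = [], []
for trace in traces:
    for i in range(1, len(trace) - 1):
        # Features: previous and current activity; label: next activity.
        X.append([enc.transform([trace[i - 1]])[0],
                  enc.transform([trace[i]])[0]])
        y.append(enc.transform([trace[i + 1]])[0])

clf = DecisionTreeClassifier(random_state=0).fit(X, y)

# Predict the next activity after observing "register" -> "check".
pred = clf.predict([[enc.transform(["register"])[0],
                     enc.transform(["check"])[0]]])
print("next probable activity:", enc.inverse_transform(pred)[0])
```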

Keywords: cross-organization process mining, process behaviour, path of execution, polynomial regression model

Procedia PDF Downloads 334
16401 Supply Chain Logistics Integration in Bahrain's Construction Industry

Authors: Randolf Von N. Salindo

Abstract:

The study was conducted to measure the logistics integration capabilities of selected companies in the Bahrain construction industry using the Supply Chain 2000 framework, and to determine the extent and direction of influence of these logistics capabilities and integration competencies on the supply chain performance of the firm. A total of 50 executive respondents (from supervisor to managing director level) from 22 construction and construction supplier firms participated in the study from September to November 2014. The results reveal that the responding Bahraini construction firms have significantly lower levels of logistics capabilities, but higher levels of logistics integration competencies, compared to international benchmarks. Using stepwise multiple regression analysis, eight logistics capabilities of Bahraini construction firms were identified as positively associated with firm performance, with comprehensive metrics as the most positively dominant influential logistics capability; activity-based and total cost methodology was found to be the most negatively dominant. In terms of logistics integration competencies, the study revealed that customer integration, internal integration, and measurement integration are negatively associated with firm performance; no logistics integration competency was found to be positively associated with supply chain performance among the participating companies. The research reveals that there are areas for improvement in the supply chain capabilities and logistics integration competencies of construction firms in the Kingdom of Bahrain if they are to raise their supply chain performance to a global level.

Keywords: comprehensive metrics, customer integration, logistics integration capabilities, logistics integration competencies

Procedia PDF Downloads 641
16400 The Role of Information and Communication Technology to Enhance Transparency in Public Funds Management in the DR Congo

Authors: Itulelo Matiyabu Imaja, Manoj Maharaj, Patrick Ndayizigamiye

Abstract:

A lack of transparency in public funds management is observed in many African countries. The DR Congo is among the most corrupt countries in Africa, due mainly to a lack of transparency and accountability in public funds management. Corruption has a negative effect on the welfare of the country’s citizens and on national economic growth. Public funds collection and allocation are the major areas where malpractices such as bribery, extortion, embezzlement, nepotism, and other corruption-related practices are prevalent. Hence, there is a need to implement strong mechanisms to enforce transparency in public funds management. Many researchers have suggested control mechanisms for curbing corruption in public funds management, focusing mainly on law enforcement and administrative reforms, with little or no insight into the role that ICT can play in preventing and curbing corrupt behaviour. In the Democratic Republic of Congo (DRC), there are slight indications that the government is integrating ICT to fight corruption in public funds collection and allocation. However, such government initiatives are at an infancy stage, with no tangible evidence of how ICT could be used effectively to address corruption in the context of the country. Hence, this research assesses the role that ICT can play in bringing transparency to public funds management and suggests a framework for its adoption in the Democratic Republic of Congo. The research uses the revised Capability model (Capability, Empowerment, Sustainability model) as the guiding theoretical framework, with an exploratory design, a qualitative approach to data collection, and purposive sampling as the sampling strategy.

Keywords: corruption, DR Congo, ICT, management, public funds, transparency

Procedia PDF Downloads 348
16399 Molecular Topology and TLC Retention Behaviour of s-Triazines: QSRR Study

Authors: Lidija R. Jevrić, Sanja O. Podunavac-Kuzmanović, Strahinja Z. Kovačević

Abstract:

Quantitative structure-retention relationship (QSRR) analysis was used to predict the chromatographic behavior of s-triazine derivatives from theoretical descriptors computed from the chemical structure. The fundamental basis of the reported investigation is to relate molecular topological descriptors to the chromatographic behavior of s-triazine derivatives obtained by reversed-phase (RP) thin-layer chromatography (TLC) on silica gel impregnated with paraffin oil, using ethanol-water mobile phases (φ = 0.5-0.8, v/v). The retention parameter (RM0) of the 14 investigated s-triazine derivatives was used as the dependent variable, while simple connectivity indices of different orders were used as independent variables. The best QSRR model for predicting the RM0 value was obtained with the simple third-order connectivity index (³χ) in a second-degree polynomial equation. Numerical values of the correlation coefficient (r = 0.915), Fisher's value (F = 28.34), and root mean square error (RMSE = 0.36) indicate that the model is statistically significant. In order to test the predictive power of the QSRR model, the leave-one-out cross-validation technique was applied. The parameters of the internal cross-validation analysis (r²CV = 0.79, r²adj = 0.81, PRESS = 1.89) reflect the high predictive ability of the generated model and confirm that it can be used to predict the RM0 value. A multivariate classification technique, hierarchical cluster analysis (HCA), was applied to group molecules according to their molecular connectivity indices. HCA is a descriptive statistical method and is most frequently used for classification, an important area of data processing. The HCA performed on the simple molecular connectivity indices obtained from the 2D structures of the investigated s-triazine compounds resulted in two main clusters in which the molecules were grouped according to the number of atoms in the molecule. This is in agreement with the fact that these descriptors were calculated on the basis of the number of atoms in the molecule of the investigated s-triazine derivatives.
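
A minimal sketch of the model form reported above, a second-degree polynomial in the third-order connectivity index; the descriptor and retention values below are invented, so the fitted coefficients will not match the paper's.

```python
# A minimal sketch of a second-degree polynomial QSRR fit: RM0 as a
# function of the third-order connectivity index. Only the model form
# comes from the abstract; the data points are invented.
import numpy as np

chi3 = np.array([1.2, 1.5, 1.8, 2.0, 2.3, 2.6, 2.9, 3.1])  # hypothetical 3-chi
rm0 = np.array([1.05, 1.32, 1.61, 1.78, 2.02, 2.19, 2.31, 2.36])

coeffs = np.polyfit(chi3, rm0, deg=2)      # [a2, a1, a0]
pred = np.polyval(coeffs, chi3)

r = np.corrcoef(rm0, pred)[0, 1]
rmse = np.sqrt(np.mean((rm0 - pred) ** 2))
print(f"RM0 = {coeffs[0]:.3f}*chi^2 + {coeffs[1]:.3f}*chi + {coeffs[2]:.3f}")
print(f"r = {r:.3f}, RMSE = {rmse:.3f}")
```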

Keywords: s-triazines, QSRR, chemometrics, chromatography, molecular descriptors

Procedia PDF Downloads 393
16398 Effect of Impurities in the Chlorination Process of TiO2

Authors: Seok Hong Min, Tae Kwon Ha

Abstract:

With the increasing interest in Ti alloys, the extraction of Ti from its typical ore, TiO2, has long been and will remain an important issue. As an intermediate product for the production of pigment or titanium metal sponge, titanium tetrachloride (TiCl4) is produced in a fluidized bed from high-grade TiO2 feedstock. The purity of TiCl4 after chlorination depends on the quality of the titanium feedstock. Since the impurities in the TiCl4 product are carried over to the final products, a purification process for the crude TiCl4 is required. The purification process includes fractional distillation and chemical treatment, depending on the nature of the impurities present and the required quality of the final product. In this study, a thermodynamic analysis of the impurity effect in the chlorination process, the first step in the extraction of Ti from TiO2, has been conducted. All thermodynamic calculations were performed using the FactSage thermodynamic software.

Keywords: rutile, titanium, chlorination process, impurities, thermodynamic calculation, FactSage

Procedia PDF Downloads 308
16397 A Gauge Repeatability and Reproducibility Study for Multivariate Measurement Systems

Authors: Jeh-Nan Pan, Chung-I Li

Abstract:

Measurement system analysis (MSA) plays an important role in helping organizations to improve their product quality. Generally speaking, the gauge repeatability and reproducibility (GRR) study is performed according to the MSA handbook stated in the QS9000 standards, and a GRR study assessing the adequacy of gauge variation needs to be conducted prior to process capability analysis. Traditional MSA considers only a single quality characteristic. With the advent of modern technology, however, industrial products have become very sophisticated, with more than one quality characteristic. Thus, it becomes necessary to perform multivariate GRR analysis for a measurement system when collecting data with multiple responses. In this paper, we take the correlation coefficients among tolerances into account to revise the multivariate precision-to-tolerance (P/T) ratio proposed by Majeske (2008). We then compare the performance of our revised P/T ratio with that of the existing ratios. The simulation results show that our revised P/T ratio outperforms the others in terms of robustness and proximity to the actual value. Moreover, the optimal allocation of several parameters, such as the number of quality characteristics (v), the sample size of parts (p), the number of operators (o), and the number of replicate measurements (r), is discussed using the confidence interval of the revised P/T ratio. Finally, a standard operating procedure (S.O.P.) for performing the GRR study for multivariate measurement systems is proposed based on the research results. Hopefully, it can serve as a useful reference for quality practitioners conducting such studies in industry.
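
For orientation, the univariate precision-to-tolerance ratio that multivariate versions such as Majeske's generalize is P/T = 6*sigma_gauge/(USL - LSL); the sketch below computes it on invented data and deliberately does not reproduce the revised multivariate ratio proposed in the paper.

```python
# A minimal sketch of the univariate precision-to-tolerance ratio,
# P/T = 6 * sigma_gauge / (USL - LSL), the building block that multivariate
# extensions generalize. The measurement data and tolerance limits are
# invented; the paper's revised multivariate ratio is not reproduced here.
import numpy as np

# Hypothetical repeated measurements: 5 parts x 3 operators x 2 replicates.
rng = np.random.default_rng(1)
true_parts = rng.normal(10.0, 0.05, size=5)
meas = true_parts[:, None, None] + rng.normal(0, 0.01, size=(5, 3, 2))

# Gauge variation estimated from within-part spread (repeatability and
# reproducibility pooled, simplified for the sketch).
sigma_gauge = np.sqrt(np.mean(np.var(meas.reshape(5, -1), axis=1, ddof=1)))

usl, lsl = 10.15, 9.85
pt_ratio = 6 * sigma_gauge / (usl - lsl)
print(f"P/T = {pt_ratio:.3f}")   # values below ~0.1 usually deemed adequate
```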

Keywords: gauge repeatability and reproducibility, multivariate measurement system analysis, precision-to-tolerance ratio

Procedia PDF Downloads 262
16396 Controlling the Process of a Chicken Dressing Plant through Statistical Process Control

Authors: Jasper Kevin C. Dionisio, Denise Mae M. Unsay

Abstract:

In a manufacturing firm, controlling the process ensures that optimum efficiency, productivity, and quality are achieved. An operation with no standardized procedure yields poor productivity, inefficiency, and an out-of-control process. This study focuses on controlling the small-intestine processing of a chicken dressing plant through Statistical Process Control (SPC). Since the operation does not employ a standard procedure and has no established standard time, the process, assessed through the observed time of the overall small-intestine processing operation on an X-Bar R control chart, was found to be out of control. To solve this problem, the researchers conducted a motion and time study aiming to establish a standard procedure for the operation. The normal operator was selected through the Westinghouse Rating System. Instead of utilizing the traditional motion and time study alone, the researchers used the X-Bar R control chart to determine the process average that is used for establishing the standard time. The observed times of the normal operator were recorded and plotted on the X-Bar R control chart. Out-of-control points due to assignable causes were removed, and the process average (the average time in which the normal operator performed the process), now in control and free of outliers, was obtained. The process average was then used to determine the standard time of small-intestine processing. As a recommendation, the researchers suggest implementing the established standard time, which is consonant with the standard procedure adopted from the normal operator. With this recommendation, the whole operation will realize a 45.54% increase in productivity.
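
A minimal sketch of X-Bar and R control limits using the standard control chart constants for subgroups of size 5; the cycle-time data are invented, not the plant's observations.

```python
# A minimal sketch of X-Bar and R control limits using the standard
# constants for subgroups of size 5 (A2, D3, D4). The observed cycle
# times are invented; the plant's actual data are not shown here.
import numpy as np

# Hypothetical cycle times (s): 10 subgroups of 5 observations each.
rng = np.random.default_rng(7)
samples = rng.normal(60, 3, size=(10, 5))

xbar = samples.mean(axis=1)                         # subgroup means
r = samples.max(axis=1) - samples.min(axis=1)       # subgroup ranges

A2, D3, D4 = 0.577, 0.0, 2.114                      # constants for n = 5
xbar_bar, r_bar = xbar.mean(), r.mean()

ucl_x, lcl_x = xbar_bar + A2 * r_bar, xbar_bar - A2 * r_bar
ucl_r, lcl_r = D4 * r_bar, D3 * r_bar

out_of_control = np.where((xbar > ucl_x) | (xbar < lcl_x))[0]
print(f"X-bar limits: [{lcl_x:.2f}, {ucl_x:.2f}], "
      f"R limits: [{lcl_r:.2f}, {ucl_r:.2f}]")
print("out-of-control subgroups:", out_of_control)
```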

Keywords: motion and time study, process controlling, statistical process control, X-Bar R control chart

Procedia PDF Downloads 217
16395 Determining the Width and Depths of Cut in Milling on the Basis of a Multi-Dexel Model

Authors: Jens Friedrich, Matthias A. Gebele, Armin Lechler, Alexander Verl

Abstract:

Chatter vibrations and process instabilities are the most important factors limiting the productivity of the milling process. Chatter can lead to damage of the tool, the part, or the machine tool. Therefore, the estimation and prediction of process stability are very important. Process stability depends on the spindle speed, the depth of cut, and the width of cut. In milling, the process conditions are defined in the NC program. While the spindle speed is directly coded in the NC program, the depth and width of cut are unknown. This paper presents a new simulation-based approach for predicting the depth and width of cut of a milling process. The prediction is based on a material removal simulation with an analytically represented tool shape and a multi-dexel approach for the workpiece. The new calculation method allows the direct estimation of the depth and width of cut, which are the parameters influencing process stability, instead of the removed volume as existing approaches do. This knowledge can be used to predict the stability of new, unknown parts. Moreover, with an additional vibration sensor, the stability lobe diagram of a milling process can be estimated and improved based on the estimated depth and width of cut.
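
A minimal sketch of the dexel idea, assuming a single-direction dexel grid and a flat-end tool; a true multi-dexel model maintains three orthogonal grids, and the resolution and tool model here are illustrative only.

```python
# A minimal sketch of a single-direction dexel grid: each (x, y) cell
# stores the material height along z, and a tool sweep subtracts from it.
# The flat-end-tool model and grid resolution are illustrative assumptions.
import numpy as np

nx, ny = 50, 50
cell = 1.0                                   # mm per dexel
z_top = np.full((nx, ny), 20.0)              # stock top per dexel (bottom = 0)

def mill_flat_tool(cx, cy, radius, z_cut):
    """Remove material under a flat-end tool centered at (cx, cy)."""
    xs, ys = np.meshgrid(np.arange(nx) * cell, np.arange(ny) * cell,
                         indexing="ij")
    inside = (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2
    np.minimum(z_top, np.where(inside, z_cut, np.inf), out=z_top)

before = z_top.copy()
mill_flat_tool(cx=25.0, cy=25.0, radius=5.0, z_cut=15.0)

# Depth of cut: max material removed along z under the tool;
# width of cut: extent of touched dexels across the feed direction.
touched = before > z_top
depth = (before - z_top).max()
width = touched.any(axis=0).sum() * cell
print(f"depth of cut: {depth:.1f} mm, width of cut: {width:.1f} mm")
```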

Keywords: dexel, process stability, material removal, milling

Procedia PDF Downloads 525
16394 Tritium Activities in Romania, Potential Support for Development of ITER Project

Authors: Gheorghe Ionita, Sebastian Brad, Ioan Stefanescu

Abstract:

In any fusion device, tritium plays a key role both as a fuel component and, due to its radioactivity and its easy incorporation into water as tritiated water (HTO), as a radiological concern. As for the ITER project, to reduce the constant potential for tritium emission, a Water Detritiation System (WDS) and an Isotopic Separation System (ISS) will be implemented. At the same time, during the operation of CANDU fission reactors, the tritium content increases in the heavy water used as moderator and cooling agent (due to neutron activation), and it has to be reduced, too. In Romania, at the National Institute for Cryogenics and Isotopic Technologies (ICIT Rm-Valcea), there is an Experimental Pilot Plant for Tritium Removal (ExpTRF), with the aim of providing technical data on the design and operation of an industrial plant for the detritiation of heavy water from the CANDU reactors at Cernavoda NPP. The selected technology is based on the catalyzed isotopic exchange process between deuterium and liquid water (LPCE) combined with the cryogenic distillation (CD) process. This paper presents an updated review of activities in the field carried out in Romania after 2000, in particular those related to the development and operation of the Experimental Pilot Plant for Tritium Removal. A comparison between the experimental pilot plant and the industrial plant to be implemented at Cernavoda NPP is also presented. The similarities between the experimental pilot plant at ICIT Rm-Valcea and the water detritiation and isotopic separation systems of ITER are also presented and discussed. Many aspects or 'open issues' relating to the WDS and ISS could be checked and clarified by a special research program developed within ExpTRF. By these achievements and results, ICIT Rm-Valcea has proved its expertise and capability concerning tritium management; therefore, its competence may be used within the ITER project.

Keywords: ITER project, heavy water detritiation, tritium removal, isotopic exchange

Procedia PDF Downloads 413
16393 Ensuring Quality in DevOps Culture

Authors: Sagar Jitendra Mahendrakar

Abstract:

Integrating quality assurance (QA) practices into a DevOps culture has become increasingly important in modern software development environments. Collaboration, automation, and continuous feedback characterize the seamless integration of development and operations teams in DevOps, enabling rapid and reliable software delivery. In this context, quality assurance plays a key role in ensuring that software products meet the highest standards of quality, performance, and reliability throughout the development life cycle. This brief explores key principles, challenges, and best practices related to quality assurance in a DevOps culture. It emphasizes the importance of building quality in throughout the development process, with quality control integrated at every step of the DevOps pipeline. Automation is the cornerstone of DevOps quality assurance, enabling continuous testing, integration, and deployment, and providing rapid feedback for early problem identification and resolution. The summary also addresses the cultural and organizational challenges of implementing quality assurance in DevOps, emphasizing the need to foster collaboration, break down silos, and promote a culture of continuous improvement, as well as the importance of toolchain integration and skills development to support effective QA practices in DevOps environments. Overall, this work sits at the intersection of QA and DevOps culture, providing insights into how organizations can use DevOps principles to improve software quality, accelerate delivery, and meet the changing demands of today's dynamic software landscape.

Keywords: quality engineer, DevOps, automation, tools

Procedia PDF Downloads 58