Search results for: multimodal transportation optimization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4735

2965 Self-Organizing Maps for Credit Card Fraud Detection and Visualization

Authors: Peng Chun-Yi, Chen Wei-Hsuan, Ueng Shyh-Kuang

Abstract:

This study focuses on the application of self-organizing map (SOM) technology to the analysis of credit card transaction data, aiming to enhance the accuracy and efficiency of fraud detection. SOM, as an artificial neural network technique, is particularly suited to pattern recognition and data classification, making it highly effective for the complex and variable nature of credit card transaction data. By analyzing transaction characteristics with SOM, the research identifies abnormal transaction patterns that could indicate potentially fraudulent activities. Moreover, this study has developed a specialized visualization tool to intuitively present the relationships between SOM analysis outcomes and transaction data, helping financial institution personnel quickly identify and respond to potential fraud, thereby reducing financial losses. Additionally, the research explores the integration of SOM technology with composite intelligent system technologies (including finite state machines, fuzzy logic, and decision trees) to further improve fraud detection accuracy. This multimodal approach provides a comprehensive perspective for identifying and understanding various types of fraud within credit card transactions. In summary, by integrating SOM technology with visualization tools and composite intelligent system technologies, this research offers the financial industry a more effective method of fraud detection, not only enhancing detection accuracy but also deepening the overall understanding of fraudulent activities.
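
A minimal sketch of the idea (not the authors' implementation): train a small self-organizing map on transaction feature vectors and flag transactions whose quantization error is unusually large as candidate fraud. All feature dimensions, grid sizes, and thresholds below are illustrative assumptions.

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Train a rectangular SOM with exponentially decaying learning rate and radius."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    n_steps = epochs * len(data)
    for step in range(n_steps):
        x = data[rng.integers(len(data))]
        lr = lr0 * np.exp(-step / n_steps)
        sigma = sigma0 * np.exp(-step / n_steps)
        # Best-matching unit (BMU): node whose weight vector is closest to x.
        dists = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dists), (h, w))
        # Gaussian neighborhood pulls nearby nodes toward x.
        grid_d2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
        influence = np.exp(-grid_d2 / (2 * sigma ** 2))[..., None]
        weights += lr * influence * (x - weights)
    return weights

def quantization_errors(data, weights):
    """Distance of each sample to its BMU; large values suggest abnormal patterns."""
    flat = weights.reshape(-1, weights.shape[-1])
    return np.min(np.linalg.norm(data[:, None, :] - flat[None], axis=-1), axis=1)

# Toy usage: 1000 "normal" transactions plus a few injected outliers.
rng = np.random.default_rng(1)
normal = rng.normal(0, 1, size=(1000, 8))       # stand-in transaction features
outliers = rng.normal(6, 1, size=(5, 8))        # abnormal patterns
data = np.vstack([normal, outliers])
w = train_som(normal)
qe = quantization_errors(data, w)
flagged = np.where(qe > np.percentile(qe, 99))[0]  # illustrative threshold
```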

Keywords: self-organizing map technology, fraud detection, information visualization, data analysis, composite intelligent system technologies, decision support technologies

Procedia PDF Downloads 54
2964 The Role of Metaheuristic Approaches in Engineering Problems

Authors: Ferzat Anka

Abstract:

Many types of problems can be solved using traditional analytical methods. However, these methods take a long time and use resources inefficiently. In particular, different approaches may be required for the complex, global engineering problems that we frequently encounter in real life. The bigger and more complex a problem, the harder it is to solve; such problems are called NP-hard (nondeterministic polynomial time hard) in the literature. The main reasons for recommending metaheuristic algorithms for such problems are their use of simple concepts, simple mathematical equations and structures, and derivative-free mechanisms, their avoidance of local optima, and their fast convergence. They are also flexible, since they can be applied to different problems without very specific modifications. Thanks to these features, they can easily be embedded even in hardware devices, so the approach can also be used in trend application areas such as IoT, big data, and parallel architectures. Indeed, metaheuristic approaches are algorithms that return near-optimal results for large-scale optimization problems. This study focuses on a new metaheuristic method merged with a chaotic approach: it builds on chaos theory, which helps the algorithm improve population diversity and convergence speed. The base algorithm is the Chimp Optimization Algorithm (ChOA), a recently introduced nature-inspired metaheuristic. ChOA identifies four types of chimpanzee groups (attacker, barrier, chaser, and driver) and proposes a suitable mathematical model for them based on the varied intelligence and sexual motivations of chimpanzees. However, ChOA struggles with convergence rate and with escaping local-optimum traps in high-dimensional problems, and although it and some of its variants use strategies to overcome these issues, these are observed to be insufficient. Therefore, this study describes a newly expanded variant, called Ex-ChOA, in which hybrid models are proposed for the position updates of search agents and a dynamic switching mechanism is provided for the transition phases. This flexible structure addresses the slow convergence of ChOA and improves its accuracy on multidimensional problems, aiming at success on global, complex, and constrained problems. The main contributions of this study are: 1) it improves the accuracy of ChOA and solves its slow-convergence problem; 2) it proposes new hybrid movement-strategy models for the position updates of search agents; 3) it achieves success in solving global, complex, and constrained problems; and 4) it provides a dynamic switching mechanism between phases. The performance of the Ex-ChOA algorithm is analyzed on 8 benchmark functions as well as 2 classical and constrained engineering problems. The proposed algorithm is compared with ChOA and several well-known variants (Weighted-ChOA, Enhanced-ChOA); in addition, the Improved Grey Wolf Optimizer (I-GWO) is chosen for comparison since its working model is similar. The obtained results show that the proposed algorithm performs better than or equivalently to the compared algorithms.
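
As a hedged illustration of the chaotic ingredient (the abstract does not give Ex-ChOA's equations), the logistic map is a common way to inject chaos into a metaheuristic's random draws, improving population diversity; everything here is a generic sketch, not the paper's method:

```python
import numpy as np

def logistic_map_sequence(n, x0=0.7, r=4.0):
    """Chaotic sequence in (0, 1); r = 4 gives fully chaotic behavior."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def chaotic_init(pop_size, dim, lower, upper):
    """Initialize a population with chaotic rather than uniform random numbers."""
    seq = logistic_map_sequence(pop_size * dim).reshape(pop_size, dim)
    return lower + seq * (upper - lower)

pop = chaotic_init(pop_size=30, dim=10, lower=-100.0, upper=100.0)
```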

Keywords: optimization, metaheuristic, chimp optimization algorithm, engineering constrained problems

Procedia PDF Downloads 71
2963 Structural Damage Detection via Incomplete Model Data Using Output Data Only

Authors: Ahmed Noor Al-qayyim, Barlas Özden Çağlayan

Abstract:

Structural failure is caused mainly by damage that often occurs in structures, and many researchers therefore focus on developing efficient tools to detect such damage at an early stage. Over the past decades, a subject that has received considerable attention in the literature is damage detection based on variations in the dynamic characteristics or response of structures. This study presents a new damage identification technique that detects the damage location in an incompletely instrumented structural system using output data only. The method indicates the damage from free-vibration test data using a "Two Points Condensation" (TPC) technique, which creates a set of matrices by reducing the structural system to two-degree-of-freedom subsystems. The current stiffness matrices are obtained by optimization of the equation of motion using the measured test data and are then compared with the original (undamaged) stiffness matrices; large percentage changes in the matrix coefficients point to the damage location. The TPC technique is applied to experimental data from a simply supported steel beam model after inducing a thickness change in one element, with two cases considered; the method detects the damage and determines its location accurately in both cases. In addition, the results illustrate that these changes in the stiffness matrix can be a useful tool for continuous monitoring of structural safety using ambient vibration data, and the method's efficiency suggests that the technique can also be used for large structures.
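
A hedged sketch of the comparison step (the condensation itself is not reproduced here): estimate a 2-DOF stiffness matrix from measured response and excitation by least squares on the undamped equation of motion M·a + K·u = f, then report percentage changes against the baseline. Damping is omitted and all data below are synthetic.

```python
import numpy as np

def estimate_stiffness(M, u, a, f):
    """Least-squares fit of a symmetric 2x2 K from M*a + K*u = f.
    u, a, f: arrays of shape (n_samples, 2)."""
    r = f - a @ M.T                                  # residual force K*u must explain
    rows, rhs = [], []
    for (u1, u2), (r1, r2) in zip(u, r):
        rows += [[u1, u2, 0.0], [0.0, u1, u2]]       # unknowns: k11, k12, k22
        rhs += [r1, r2]
    k11, k12, k22 = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]
    return np.array([[k11, k12], [k12, k22]])

# Synthetic demo: baseline vs. "damaged" (reduced) stiffness.
rng = np.random.default_rng(0)
M = np.diag([2.0, 2.0])
K0 = np.array([[400.0, -150.0], [-150.0, 300.0]])
Kd = np.array([[300.0, -150.0], [-150.0, 300.0]])    # 25% loss in k11
u = rng.normal(size=(500, 2))
a = rng.normal(size=(500, 2))
f = a @ M.T + u @ Kd.T                               # data generated by the damaged K
K_est = estimate_stiffness(M, u, a, f)
pct_change = 100.0 * (K_est - K0) / K0               # large |change| flags damage
```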

Keywords: damage detection, optimization, signals processing, structural health monitoring, two points–condensation

Procedia PDF Downloads 357
2962 First Order Moment Bounds on DMRL and IMRL Classes of Life Distributions

Authors: Debasis Sengupta, Sudipta Das

Abstract:

The class of life distributions with decreasing mean residual life (DMRL) is well known in the field of reliability modeling. It contains the IFR class of distributions and is contained in the NBUE class. While upper and lower bounds on the reliability function of aging classes such as IFR, IFRA, NBU, NBUE, and HNBUE have been discussed in the literature for a long time, no analogous result is available for the DMRL class. We obtain upper and lower bounds for the reliability function of the DMRL class in terms of the finite first-order moment. The lower bound is obtained by showing that, for any fixed time, minimizing the reliability function over the class of all DMRL distributions with a fixed mean is equivalent to minimizing it over a smaller class of distributions with a special form; optimization over this restricted set can be carried out algebraically. Likewise, maximizing the reliability function over the class of all DMRL distributions with a fixed mean turns out to be a parametric optimization problem over DMRL distributions of a special form. The constructive proofs also establish that both the upper and lower bounds are sharp. Further, the DMRL upper bound coincides with the HNBUE upper bound, and the lower bound coincides with the IFR lower bound. We also obtain a pair of sharp upper and lower bounds for the reliability function when the distribution has increasing mean residual life (IMRL) with a fixed mean; this result is proved in a similar way. These inequalities fill a long-standing void in the literature on life distribution modeling.
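
For reference, the standard definitions behind the abstract (not taken from the paper itself):

```latex
% Mean residual life of a lifetime X with survival function \bar F(t) = P(X > t):
m(t) \;=\; \mathbb{E}\left[\,X - t \mid X > t\,\right]
      \;=\; \frac{\int_t^{\infty} \bar F(u)\, du}{\bar F(t)}, \qquad \bar F(t) > 0 .
% X is DMRL if m(t) is nonincreasing in t, and IMRL if m(t) is nondecreasing;
% in both cases m(0) = \mu, the finite first-order moment fixed in the bounds.
```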

Keywords: DMRL, IMRL, reliability bounds, hazard functions

Procedia PDF Downloads 393
2961 Genetic Algorithm and Multi Criteria Decision Making Approach for Compressive Sensing Based Direction of Arrival Estimation

Authors: Ekin Nurbaş

Abstract:

One of the essential challenges in array signal processing, which has drawn enormous research interest over the past several decades, is estimating the direction of arrival (DoA) of plane waves impinging on an array of sensors. In recent years, compressive sensing (CS)-based DoA estimation methods have been proposed, and CS-based algorithms have been found to achieve significant performance for DoA estimation even in scenarios with multiple coherent sources. On the other hand, the genetic algorithm, a solution strategy inspired by natural selection, has been used in sparse representation problems in recent years and provides significant performance improvements. With all of this in consideration, this paper proposes a method that combines the Genetic Algorithm (GA) and Multi-Criteria Decision Making (MCDM) approaches for DoA estimation in the CS framework. In this method, we generate a multi-objective optimization problem by splitting the norm-minimization and reconstruction-loss-minimization parts of the compressive sensing algorithm. With the help of the genetic algorithm, multiple non-dominated solutions are obtained for the defined multi-objective optimization problem. Among the Pareto-frontier solutions, the final solution is selected with multiple MCDM methods. Moreover, the performance of the proposed method is compared with the CS-based methods in the literature.
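
A hedged, much-simplified sketch of the core idea: evolve candidate sparse solutions x under the two split objectives (the l1 norm and the reconstruction loss ||y - Ax||2) and keep the non-dominated set. The GA operators, sizes, and problem data here are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 60                            # measurements, dictionary atoms
A = rng.normal(size=(m, n)) / np.sqrt(m) # stand-in CS measurement matrix
x_true = np.zeros(n); x_true[[5, 17, 42]] = [1.0, -0.8, 0.6]
y = A @ x_true

def objectives(x):
    return (np.sum(np.abs(x)), np.linalg.norm(y - A @ x))  # (l1 norm, recon loss)

def dominates(f, g):
    return f[0] <= g[0] and f[1] <= g[1] and f != g

pop = [rng.normal(scale=0.1, size=n) for _ in range(100)]
for gen in range(200):
    fits = [objectives(x) for x in pop]
    # Binary tournaments on Pareto dominance, then blend crossover + mutation.
    children = []
    while len(children) < len(pop):
        i, j = rng.integers(len(pop), size=2)
        parent = pop[i] if dominates(fits[i], fits[j]) else pop[j]
        i, j = rng.integers(len(pop), size=2)
        other = pop[i] if dominates(fits[i], fits[j]) else pop[j]
        w = rng.uniform()
        child = w * parent + (1 - w) * other
        child += rng.normal(scale=0.02, size=n) * (rng.uniform(size=n) < 0.1)
        children.append(child)
    pop = children

fits = [objectives(x) for x in pop]
pareto = [x for x, f in zip(pop, fits)
          if not any(dominates(g, f) for g in fits)]
# An MCDM step (e.g., TOPSIS or simple weighting) would pick the final x here.
```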

Keywords: genetic algorithm, direction of arrival estimation, multi criteria decision making, compressive sensing

Procedia PDF Downloads 140
2960 Neural Network Supervisory Proportional-Integral-Derivative Control of the Pressurized Water Reactor Core Power Load Following Operation

Authors: Derjew Ayele Ejigu, Houde Song, Xiaojing Liu

Abstract:

This work presents a particle swarm optimization trained neural network (PSO-NN) supervisory proportional-integral-derivative (PID) control method to regulate the pressurized water reactor (PWR) core power for safe operation. The proposed control approach is implemented on the transfer function of the PWR core, which is computed from the state-space model. The PWR core state-space model is derived from the neutronics, thermal-hydraulics, and reactivity models using perturbation around the equilibrium value. The proposed control approach computes the control rod speed to maneuver the core power to track the reference in a closed-loop scheme. The particle swarm optimization (PSO) algorithm is used to train the neural network (NN) and to tune the PID simultaneously. The controller performance is examined using the integral absolute error, integral time absolute error, integral square error, and integral time square error functions, and the stability of the system is analyzed using the Bode diagram. The simulation results indicate that the controller controls and tracks the load power effectively and smoothly compared to the PSO-PID control technique. This study can benefit the design of supervisory controllers for control applications in nuclear engineering research.
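
A hedged sketch of the PSO-for-PID-tuning ingredient on a stand-in second-order plant (the PWR core transfer function and the NN supervisory layer are not reproduced): each particle is a gain triple (Kp, Ki, Kd), scored by the integral squared error of the closed-loop step response.

```python
import numpy as np
from scipy import signal

plant_num, plant_den = [1.0], [1.0, 2.0, 1.0]      # stand-in plant G(s) = 1/(s+1)^2
t = np.linspace(0.0, 10.0, 1000)

def ise(gains):
    kp, ki, kd = gains
    # PID C(s) = (kd s^2 + kp s + ki) / s; closed loop T = CG / (1 + CG).
    num = np.polymul([kd, kp, ki], plant_num)
    den = np.polyadd(np.polymul([1.0, 0.0], plant_den), num)
    _, y = signal.step(signal.TransferFunction(num, den), T=t)
    val = np.trapz((1.0 - y) ** 2, t)
    return val if np.isfinite(val) else np.inf     # unstable gains get worst score

rng = np.random.default_rng(0)
n_particles, n_iters = 20, 40
pos = rng.uniform(0.0, 10.0, size=(n_particles, 3))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([ise(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)]

for _ in range(n_iters):
    r1, r2 = rng.uniform(size=(2, n_particles, 3))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 10.0)
    vals = np.array([ise(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)]
# gbest now holds the PSO-tuned (Kp, Ki, Kd) for the stand-in plant.
```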

Keywords: machine learning, neural network, pressurized water reactor, supervisory controller

Procedia PDF Downloads 149
2959 Reducing the Frequency of Flooding Accompanied by Low pH Wastewater in the 100/200 Unit of Phosphate Fertilizer 1 Plant by Implementing the 3R Program (Reduce, Reuse and Recycle)

Authors: Pradipta Risang Ratna Sambawa, Driya Herseta, Mahendra Fajri Nugraha

Abstract:

In 2020, PT Petrokimia Gresik implemented a program to increase the ROP (run of pile) production rate at the Phosphate Fertilizer 1 plant, which increased scrubbing-water consumption in the 100/200 area unit. The higher water consumption leads to a larger discharge of wastewater, which in turn can cause local flooding, especially during the rainy season. The 100/200 area of the Phosphate Fertilizer 1 plant is close to the warehouse and is a frequent passing area for trucks transporting raw materials, and the wastewater there becomes acidic (at the worst point down to pH 1). PT Petrokimia Gresik addressed the flooding and the exposure to acidic wastewater in this area through a set of wastewater optimization steps called the 3R program (Reduce, Reuse, and Recycle). The program consists of reducing scrubbing-water consumption by tuning the liquid/gas ratio in the scrubbing unit of the 100/200 unit of the Phosphate Fertilizer 1 plant, building a wastewater interconnection line so that wastewater from unit 100/200 can be reused as scrubbing water in the Phonska 1, Phonska 2, and Phonska 3 plants and in unit 300 of the Phosphate Fertilizer 1 plant, and increasing scrubbing effectiveness through scrubbing-effectiveness simulations. Through this series of wastewater optimization programs, PT Petrokimia Gresik has reduced NaOH consumption for neutralization by up to 2,880 kg/day, equivalent to savings of up to 314,359.76 dollars/year, and reduced process-water consumption by up to 600 m³/day, equivalent to savings of up to 63,739.62 dollars/year.

Keywords: fertilizer, phosphate fertilizer, wastewater, wastewater treatment, water management

Procedia PDF Downloads 20
2958 Simulation and Controller Tuning in a Photo-Bioreactor Applying the Taguchi Method

Authors: Hosein Ghahremani, MohammadReza Khoshchehre, Pejman Hakemi

Abstract:

This study involves numerical simulations of a vertical plate-type photo-bioreactor to investigate the performance of the microalga Spirulina, together with control and optimization of the digital controller parameters by the Taguchi method, carried out with MATLAB and Qualitek-4 software. Because biological processes involve parameters such as temperature, dissolved carbon dioxide, and biomass, as well as physical parameters such as light intensity and physiological conditions such as photosynthetic efficiency and light inhibition, control faces many challenges. Photo-bioreactors are efficient systems not only for the commercial production of microalgae as feed for aquaculture and as food supplements, but also as a possible platform for the production of active molecules such as antibiotics or innovative anti-tumor agents, for carbon dioxide removal, and for the removal of heavy metals from wastewater. A digital controller is designed to control the light of the bioreactor, and the microalgae growth rate and the carbon dioxide concentration inside the bioreactor are investigated. The optimal controller parameter values obtained from the S/N and ANOVA analyses in Qualitek-4 were compared with the reaction curve, Cohen-Coon, and Ziegler-Nichols methods. Based on the sum of squared errors obtained for each of the control methods mentioned, the Taguchi method was selected as the best method for controlling the light intensity of the photo-bioreactor; compared to the listed control methods, it showed higher stability and a shorter settling time.
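
For context, a minimal sketch of the classical open-loop (reaction-curve) Ziegler-Nichols rules that the Taguchi-tuned controller is compared against; K is the process gain, L the apparent dead time, and T the time constant read off the step-response reaction curve. The example numbers are illustrative.

```python
def ziegler_nichols_reaction_curve(K, L, T):
    """Open-loop Ziegler-Nichols rules: returns (Kp, Ti, Td) per controller type."""
    return {
        "P":   (T / (K * L), float("inf"), 0.0),
        "PI":  (0.9 * T / (K * L), L / 0.3, 0.0),
        "PID": (1.2 * T / (K * L), 2.0 * L, 0.5 * L),
    }

# Example: process with gain 2.0, dead time 5 s, time constant 20 s.
gains = ziegler_nichols_reaction_curve(K=2.0, L=5.0, T=20.0)
kp, ti, td = gains["PID"]    # Kp = 2.4, Ti = 10 s, Td = 2.5 s
```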

Keywords: photo-bioreactor, control and optimization, light intensity, Taguchi method

Procedia PDF Downloads 388
2957 Supramolecular Chemistry and Packing of FAMEs in the Liquid Phase for Optimization of Combustion and Emission

Authors: Zeev Wiesman, Paula Berman, Nitzan Meiri, Charles Linder

Abstract:

Supramolecular chemistry refers to the domain of chemistry beyond that of molecules, focusing on chemical systems made up of a discrete number of assembled molecular subunits or components. The self-arrangement of biodiesel components is closely related to, and affects, their physical properties in combustion systems and their emissions. Due to technological difficulties, knowledge regarding the molecular packing of FAMEs (biodiesel) in the liquid phase is limited. Spectral tools such as X-ray and NMR are known to provide evidence related to molecular structure organization. Recently, our research group reported that, using a 1H time-domain NMR methodology based on relaxation times and self-diffusion coefficients, FAME clusters with different mobilities can be accurately studied in the liquid phase. Head-to-head dimerization with a quasi-smectic cluster organization, based on molecular motion analysis, was clearly demonstrated. These findings about the assembly and packing of the FAME components are directly associated with the fluidity and viscosity of the biodiesel. Furthermore, they may provide information on the micro/nanoparticles that are formed in the delivery and injection systems of various combustion systems (affected by thermodynamic conditions). Various parameters relevant to combustion, such as distillation and liquid-gas phase transition, cetane number and ignition delay, soot, and oxidation/NOx emission, may be predicted. These data may open the window for further optimization of FAME/diesel mixtures in terms of combustion and emission.

Keywords: supramolecular chemistry, FAMEs, liquid phase, fluidity, LF-NMR

Procedia PDF Downloads 336
2956 Meeting the Energy Balancing Needs in a Fully Renewable European Energy System: A Stochastic Portfolio Framework

Authors: Iulia E. Falcan

Abstract:

The transition of the European power sector towards a clean, renewable energy (RE) system faces the challenge of meeting power demand in times of low wind speed and low solar radiation at a reasonable cost. This is likely to be achieved through a combination of 1) energy storage technologies, 2) development of the cross-border power grid, 3) installed overcapacity of RE, and 4) dispatchable power sources such as biomass. This paper uses NASA-derived hourly data on the weather patterns of sixteen European countries for the past twenty-five years, and load data from the European Network of Transmission System Operators for Electricity (ENTSO-E), to develop a stochastic optimization model. The model aims to understand the synergies between the four classes of technologies mentioned above and to determine the optimal configuration of the energy technology portfolio. While this issue has been addressed before, it was done using deterministic models that extrapolated historic data on weather patterns and power demand and ignored the risk of an unbalanced grid, a risk stemming from both the supply and the demand side. This paper explicitly accounts for the inherent uncertainty in the energy system transition. It articulates two levels of uncertainty: a) the inherent uncertainty in future weather patterns and b) the uncertainty of fully meeting power demand. The first level is addressed by developing probability distributions for future weather data, and thus for the expected power output of RE technologies, rather than assuming known future power output. The second is operationalized by introducing a Conditional Value at Risk (CVaR) constraint in the portfolio optimization problem. By setting the risk threshold at different levels (1%, 5%, and 10%), important insights are revealed regarding the synergies of the different energy technologies, i.e., the circumstances under which they behave as either complements or substitutes to each other. The paper concludes that allowing for uncertainty in expected power output, rather than extrapolating historic data, paints a more realistic picture and reveals important departures from the results of deterministic models. In addition, explicitly acknowledging the risk of an unbalanced grid, and assigning it different thresholds, reveals non-linearity in the cost functions of different technology portfolio configurations. This finding has significant implications for the design of the European energy mix.
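
A hedged sketch of the CVaR ingredient: given scenario values of unmet demand (the "loss"), the empirical CVaR at level alpha is the mean loss in the worst alpha tail, and the portfolio constraint caps it. The scenario data below are invented for illustration.

```python
import numpy as np

def empirical_cvar(losses, alpha=0.05):
    """Mean loss in the worst alpha-tail of the scenarios (expected shortfall)."""
    var = np.quantile(losses, 1.0 - alpha)     # Value-at-Risk threshold
    return losses[losses >= var].mean()

# Stand-in scenarios of unmet demand (MWh); zero means demand was fully met.
rng = np.random.default_rng(0)
shortfall = np.maximum(rng.normal(-50.0, 30.0, size=10_000), 0.0)
for alpha in (0.01, 0.05, 0.10):               # the paper's three risk thresholds
    print(f"CVaR_{alpha:.0%} = {empirical_cvar(shortfall, alpha):.1f} MWh")
# In the portfolio model, a constraint CVaR_alpha(shortfall) <= cap is imposed
# while minimizing expected system cost over candidate technology capacities.
```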

Keywords: cross-border grid extension, energy storage technologies, energy system transition, stochastic portfolio optimization

Procedia PDF Downloads 163
2955 Research on the Function Optimization of China-Hungary Economic and Trade Cooperation Zone

Authors: Wenjuan Lu

Abstract:

China and Hungary have risen from a friendly and comprehensive cooperative relationship to a comprehensive strategic partnership in recent years, and the economic and trade relations between the two countries have developed smoothly. As an important country along the 'Belt and Road', Hungary has strong economic complementarity with China and unique advantages for receiving China's industrial transfer and supporting its economic transformation and development. The construction of the China-Hungary Economic and Trade Cooperation Zone, initiated with the 'Sino-Hungarian Borsod Industrial Zone' and the 'Hungarian Central European Trade and Logistics Cooperation Park', has promoted infrastructure construction, optimized production capacity, promoted industrial restructuring, and formed brand and agglomeration effects. It has enhanced the influence of Chinese companies in the European market and promoted economic development in Hungary and in Central and Eastern Europe more broadly. However, as the China-Hungary Economic and Trade Cooperation Zone is still in its infancy, there remain shortcomings such as small scale, single function, and the lack of a prominent platform. In the future, based on the needs of China's cooperation with '17+1' and of China-Hungary cooperation, and on the basis of appropriately expanding the scale and number of economic and trade cooperation zones, it would be better to focus on optimizing and adjusting their functions, highlighting differentiated economic and trade functions across the zones, strengthening the multi-faceted cooperation of the zones, and emphasizing their role as platforms for cooperation in information, capital, and services.

Keywords: ‘One Belt, One Road’ Initiative, China-Hungary economic and trade cooperation zone, function optimization, Central and Eastern Europe

Procedia PDF Downloads 177
2954 A User-Directed Approach to Optimization via Metaprogramming

Authors: Eashan Hatti

Abstract:

In software development, programmers often must choose between high-level programming and high-performance programs. High-level programming encourages the use of complex, pervasive abstractions. However, the use of these abstractions degrades performance; high performance demands that programs be low-level. In a compiler, the optimizer attempts to let the user have both: it takes high-level, abstract code as input and produces low-level, performant code as output. However, there is a problem with having the optimizer be a built-in part of the compiler. Domain-specific abstractions implemented as libraries are common in high-level languages. As a language's library ecosystem grows, so does the number of abstractions that programmers will use. If these abstractions are to be performant, the optimizer must be extended with new optimizations to target them, or these abstractions must rely on existing general-purpose optimizations. The latter is often not as effective as needed; the former presents too significant an effort for the compiler developers, as they are the only ones who can extend the language with new optimizations. Thus, the language becomes more high-level, yet the optimizer, and in turn program performance, falls behind. Programmers are again confronted with a choice between high-level programming and high-performance programs. To investigate a potential solution to this problem, we developed Peridot, a prototype programming language. Peridot's main contribution is that it enables library developers to easily extend the language with new optimizations themselves. This takes the optimization workload off the compiler developers' hands and gives it to a much larger set of people who can specialize in each problem domain. Because of this, optimizations can be much more effective while also being much more numerous. To enable this, Peridot supports metaprogramming designed for implementing program transformations. The language is split into two fragments or "levels", one for metaprogramming and the other for high-level general-purpose programming, and the metaprogramming level supports logic programming. Peridot's key idea is that optimizations are simply implemented as metaprograms. The meta level supports several specific features which make it particularly suited to implementing optimizers. For instance, metaprograms can automatically deduce equalities between the programs they are optimizing via unification, deal with variable binding declaratively via higher-order abstract syntax, and avoid the phase-ordering problem via non-determinism. We have found that this design centered around logic programming makes optimizers concise and easy to write compared to their equivalents in functional or imperative languages. Overall, implementing Peridot has shown that its design is a viable solution to the problem of writing code which is both high-level and performant.
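
No Peridot code appears in the abstract; as a rough, language-agnostic illustration of "optimizations as metaprograms", here is a library-defined rewrite rule applied to a tiny expression IR, with Python standing in for the meta language (the IR and rules are invented for this sketch):

```python
# Tiny expression IR: ("mul", a, b), ("add", a, b), ("const", n), ("var", name)
def simplify(expr):
    """Apply library-supplied algebraic rewrites bottom-up until fixpoint."""
    if not isinstance(expr, tuple) or expr[0] in ("const", "var"):
        return expr
    op, *args = expr
    expr = (op, *[simplify(a) for a in args])        # rewrite children first
    match expr:
        case ("mul", e, ("const", 1)) | ("mul", ("const", 1), e):
            return simplify(e)                       # x * 1  ->  x
        case ("mul", _, ("const", 0)) | ("mul", ("const", 0), _):
            return ("const", 0)                      # x * 0  ->  0
        case ("add", e, ("const", 0)) | ("add", ("const", 0), e):
            return simplify(e)                       # x + 0  ->  x
        case ("add", ("const", a), ("const", b)):
            return ("const", a + b)                  # constant folding
    return expr

# ((x * 1) + 0) + (2 + 3)  ->  ("add", ("var", "x"), ("const", 5))
tree = ("add", ("add", ("mul", ("var", "x"), ("const", 1)), ("const", 0)),
               ("add", ("const", 2), ("const", 3)))
print(simplify(tree))
```

In a logic-programming meta level like Peridot's, such rules would be stated declaratively and matched by unification rather than hand-written traversal; the Python recursion here merely emulates that behavior.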

Keywords: optimization, metaprogramming, logic programming, abstraction

Procedia PDF Downloads 82
2953 Optimization of Lead Bioremediation by Marine Halomonas sp. ES015 Using Statistical Experimental Methods

Authors: Aliaa M. El-Borai, Ehab A. Beltagy, Eman E. Gadallah, Samy A. ElAssar

Abstract:

Bioremediation technology is now used for treatment in place of traditional metal removal methods. A strain isolated from Marsa Alam, Red Sea, Egypt, showed high resistance to high lead concentrations and was identified by 16S rRNA gene sequencing as Halomonas sp. ES015. Medium optimization was carried out using a Plackett-Burman design, and the most significant factors were yeast extract, casamino acid, and inoculum size. The optimized medium obtained by the statistical design raised the removal efficiency from 84% to 99% at an initial lead concentration of 250 ppm. Moreover, a Box-Behnken experimental design was applied to study the relationship between yeast extract concentration, casamino acid concentration, and inoculum size; the optimized medium increased the removal efficiency to 97% at an initial lead concentration of 500 ppm. Halomonas sp. ES015 cells immobilized on sponge cubes, used with the optimized medium in a loop bioremediation column, showed relatively constant lead removal efficiency when reused for six successive cycles over the time interval studied, and the removal efficiency was not affected by changes in flow rate. Finally, the results of this research point to the possibility of lead bioremediation by free or immobilized cells of Halomonas sp. ES015, and show that bioremediation can be carried out in batch cultures and in semicontinuous cultures using column technology.
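
A hedged sketch of the second-stage design: generate the 3-factor Box-Behnken runs in coded units (here standing in for yeast extract, casamino acid, and inoculum size) and fit a full quadratic response surface. The response values below are invented purely for illustration, not the paper's data.

```python
import numpy as np
from itertools import product

def box_behnken_3(n_center=3):
    """Box-Behnken design for 3 coded factors: edge midpoints plus center runs."""
    runs = []
    for i, j in [(0, 1), (0, 2), (1, 2)]:            # each factor pair at +/-1
        for a, b in product([-1, 1], repeat=2):
            row = [0, 0, 0]
            row[i], row[j] = a, b
            runs.append(row)
    runs += [[0, 0, 0]] * n_center
    return np.array(runs, dtype=float)

X = box_behnken_3()                                   # 15 runs x 3 factors
# Hypothetical measured removal efficiencies (%) for each run:
y = np.array([90, 92, 93, 97, 88, 91, 90, 96, 89, 93, 92, 98, 95, 95, 96.0])

# Fit y = b0 + sum(bi xi) + sum(bij xi xj) + sum(bii xi^2) by least squares.
x1, x2, x3 = X.T
D = np.column_stack([np.ones(len(X)), x1, x2, x3,
                     x1 * x2, x1 * x3, x2 * x3, x1**2, x2**2, x3**2])
coef, *_ = np.linalg.lstsq(D, y, rcond=None)          # RSM coefficients
```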

Keywords: bioremediation, lead, Box–Behnken, Halomonas sp. ES015, loop bioremediation, Plackett-Burman

Procedia PDF Downloads 190
2952 Heuristic Algorithms for Time Based Weapon-Target Assignment Problem

Authors: Hyun Seop Uhm, Yong Ho Choi, Ji Eun Kim, Young Hoon Lee

Abstract:

Weapon-target assignment (WTA) is a problem that assigns available launchers to appropriate targets in order to defend assets. Various algorithms for WTA have been developed over the past years for both static and dynamic environments (denoted SWTA and DWTA, respectively). Because the problem must be solved within an operationally relevant computational time, WTA has suffered from solution-efficiency issues; as a result, SWTA and DWTA problems have been solved only for limited battlefield situations. In this paper, the general continuous-time situation is considered as the Time-Based Weapon-Target Assignment (TWTA) problem. TWTA is studied using a mixed-integer programming model, and three heuristic algorithms are suggested: decomposed opt-opt, decomposed opt-greedy, and greedy. Although the TWTA optimization model works inefficiently for large instances, the decomposed opt-opt algorithm, based on linearization and decomposition, produced efficient solutions in a reasonable computation time. Because the computation time of the scheduling part is too long for the optimization model, several greedy-based algorithms are proposed; these show lower performance values than the decomposed opt-opt algorithm but need very short computation times. Hence, this paper proposes an improved method by applying decomposition to TWTA, and more practical and effective methods can be developed for using TWTA on the battlefield.
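
A hedged sketch of the greedy baseline (the decomposed variants and the time dimension are not reproduced): repeatedly give the next available weapon to the target where it most reduces expected surviving value, using kill probabilities p[w][t]. All numbers are illustrative.

```python
import numpy as np

def greedy_wta(p, value):
    """p: kill-probability matrix (weapons x targets); value: target values.
    Assign each weapon to the target with the largest marginal reduction in
    expected surviving value. Returns one target index per weapon."""
    survival = value.astype(float).copy()      # current expected surviving value
    assignment = []
    for w in range(p.shape[0]):
        gain = survival * p[w]                 # marginal damage per target
        t = int(np.argmax(gain))
        assignment.append(t)
        survival[t] *= (1.0 - p[w, t])         # target survives this shot w.p. 1-p
    return assignment

p = np.array([[0.7, 0.3, 0.5],
              [0.4, 0.6, 0.2],
              [0.5, 0.5, 0.5]])
value = np.array([10.0, 8.0, 6.0])
print(greedy_wta(p, value))                    # -> [0, 1, 2] for this toy instance
```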

Keywords: air and missile defense, weapon target assignment, mixed integer programming, piecewise linearization, decomposition algorithm, military operations research

Procedia PDF Downloads 334
2951 Robotic Arm-Automated Spray Painting with One-Shot Object Detection and Region-Based Path Optimization

Authors: Iqraq Kamal, Akmal Razif, Sivadas Chandra Sekaran, Ahmad Syazwan Hisaburi

Abstract:

Painting plays a crucial role in the aerospace manufacturing industry, serving both protective and cosmetic purposes for components. However, the traditional manual painting method is time-consuming and labor-intensive, posing challenges for the sector in achieving higher efficiency. Additionally, current automated robot path planning has been a bottleneck for spray painting processes, as typical manual teaching methods are time-consuming, error-prone, and skill-dependent. It is therefore essential to develop automated tool path planning methods to replace manual ones, reducing costs and improving product quality. Focusing on flat-panel painting in aerospace manufacturing, this study aims to address the unreliable part-identification techniques caused by the high-mixture, low-volume nature of the industry. The proposed solution uses a spray gun and a UR10 robotic arm with a vision system that applies one-shot object detection (OS2D) to identify parts accurately. Additionally, the research optimizes path planning by concentrating on the region of interest, specifically the identified part, rather than uniformly covering the entire painting tray.
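
A hedged sketch of the region-based path idea: given a detected part's bounding box (as an OS2D-style detector would return), generate a boustrophedon (back-and-forth) spray path covering only that region instead of the whole tray. The spacing and coordinates are hypothetical.

```python
def boustrophedon_path(x_min, y_min, x_max, y_max, spray_width):
    """Back-and-forth waypoints covering a bounding box with a given pass spacing."""
    waypoints, y, left_to_right = [], y_min, True
    while y <= y_max:
        if left_to_right:
            waypoints += [(x_min, y), (x_max, y)]
        else:
            waypoints += [(x_max, y), (x_min, y)]
        left_to_right = not left_to_right
        y += spray_width
    return waypoints

# Detected part bounding box in tray coordinates (mm), hypothetical values:
path = boustrophedon_path(120.0, 80.0, 320.0, 200.0, spray_width=40.0)
# The robot controller would then interpolate these waypoints at a fixed
# standoff height and spray-gun orientation.
```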

Keywords: aerospace manufacturing, one-shot object detection, automated spray painting, vision-based path optimization, deep learning, automation, robotic arm

Procedia PDF Downloads 75
2950 Stability Optimization of NaBH₄ via pH and H₂O:NaBH₄ Ratios for Large Scale Hydrogen Production

Authors: Parth Mehta, Vedasri Bai Khavala, Prabhu Rajagopal, Tiju Thomas

Abstract:

There is an increasing need for alternative clean fuels, and hydrogen (H₂) has long been considered a promising solution with a high calorific value (142 MJ/kg). However, the storage of H₂ and the expensive processes for its generation have hindered its usage. Sodium borohydride (NaBH₄) can potentially be used as an economically viable means of H₂ storage. Thus far, there have been attempts to optimize the lifetime (half-life) of NaBH₄ in aqueous media by stabilizing it with sodium hydroxide (NaOH) at various pH values. Other reports have shown that the H₂ yield and reaction kinetics remain constant for all H₂O:NaBH₄ ratios above 30:1, without any acidic catalysts. Here we highlight the importance of pH and of the H₂O:NaBH₄ ratio (80:1, 40:1, 20:1, and 10:1 by weight) for NaBH₄ stabilization (half-life reaction time at room temperature) and for minimizing corrosion of H₂ reactor components. It is interesting to observe that at any particular pH > 10 (e.g., pH 10, 11, and 12), stability does not show the expected linear dependence on the H₂O:NaBH₄ ratio; on the contrary, high stability is observed at the 10:1 H₂O:NaBH₄ ratio across all pH > 10. When the H₂O:NaBH₄ ratio is increased from 10:1 to 20:1 and beyond (up to 80:1), constant stability (% degradation) is observed with respect to time. For practical usage (consumption within 6 hours of making the NaBH₄ solution), 15% degradation at pH 11 and a 10:1 H₂O:NaBH₄ ratio is recommended. Increasing this ratio demands a higher NaOH concentration at the same pH, thus requiring a higher concentration or volume of acid (e.g., HCl) for H₂ generation. The reactions are done with tap water to render the results useful from an industrial standpoint. The observed stability regimes are rationalized on the basis of the complexes associated with NaBH₄ when solvated in water, which depend sensitively on both pH and the H₂O:NaBH₄ ratio.
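
For context, a small calculation (standard hydrolysis stoichiometry, NaBH₄ + 2 H₂O → NaBO₂ + 4 H₂, not from the paper) of why borohydride is attractive as an H₂ store:

```python
M_NaBH4, M_H2O, M_H2 = 37.83, 18.02, 2.016    # molar masses, g/mol

h2_per_kg_nabh4 = 4 * M_H2 / M_NaBH4          # ~0.213 kg H2 per kg NaBH4
system_wt_pct = 4 * M_H2 / (M_NaBH4 + 2 * M_H2O) * 100  # ~10.9 wt% incl. reaction water
energy_per_kg = h2_per_kg_nabh4 * 142.0       # ~30 MJ per kg NaBH4, using 142 MJ/kg H2
print(h2_per_kg_nabh4, system_wt_pct, energy_per_kg)
```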

Keywords: hydrogen, sodium borohydride, stability optimization, H₂O:NaBH₄ ratio

Procedia PDF Downloads 112
2949 Chaotic Sequence Noise Reduction and Chaotic Recognition Rate Improvement Based on Improved Local Geometric Projection

Authors: Rubin Dan, Xingcai Wang, Ziyang Chen

Abstract:

A chaotic time-series noise reduction method based on the fusion of the local projection method, the wavelet transform, and the particle swarm algorithm (the LW-PSO method) is proposed to address false recognition caused by noise in the identification of noisy chaotic time series. The method first uses phase-space reconstruction to recover the characteristics of the original dynamical system and removes the noise subspace by selecting the neighborhood radius; it then uses the wavelet transform to remove the D1-D3 high-frequency components, maximizing the retention of signal information, while least-squares optimization is performed by the particle swarm algorithm. The Lorenz system containing 30% Gaussian white noise is simulated for verification, and the phase space, SNR value, RMSE value, and K value of the 0-1 test before and after noise reduction are compared for the Schreiber method, the local projection method, the wavelet transform method, and the LW-PSO method, which proves that the LW-PSO method has a better noise-reduction effect than the other three common methods. The method is also applied to a classical system to evaluate the noise-reduction effect of the four methods and the identification of the original system, further verifying the superiority of the LW-PSO method. Finally, it is applied to the Chengdu rainfall chaotic sequence, and the results prove that the LW-PSO method can effectively reduce the noise and improve the chaos recognition rate.
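
A hedged sketch of the wavelet step only (the library and wavelet choice are assumptions, and the paper's phase-space projection and PSO-tuned least-squares refinement are not shown): decompose to level 3 with PyWavelets, zero the D1-D3 detail bands, and reconstruct.

```python
import numpy as np
import pywt

def remove_d1_d3(x, wavelet="db4"):
    """Zero the three finest detail bands (D1-D3) and reconstruct the signal."""
    coeffs = pywt.wavedec(x, wavelet, level=3)     # [A3, D3, D2, D1]
    coeffs[1:] = [np.zeros_like(d) for d in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

# Toy test: noisy sinusoid standing in for a noisy chaotic component.
t = np.linspace(0, 4 * np.pi, 1024)
noisy = np.sin(t) + 0.3 * np.random.default_rng(0).normal(size=t.size)
denoised = remove_d1_d3(noisy)
```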

Keywords: Schreiber noise reduction, wavelet transform, particle swarm optimization, 0-1 test method, chaotic sequence denoising

Procedia PDF Downloads 191
2948 Use of Magnesium as a Renewable Energy Source

Authors: Rafayel K. Kostanyan

Abstract:

The opportunities for using metallic magnesium as a generator of hydrogen gas, as well as of thermal and electric energy, are presented in this paper. Various schemes of magnesium application are discussed, and the power characteristics of the corresponding devices are presented. An economic estimation of the price of hydrogen obtained by different methods is made, including the use of magnesium as a source of hydrogen for transportation, in comparison with gasoline. Details and prospects of our new, inexpensive technology for magnesium production from magnesium hydroxide and magnesium-bearing rocks (which are available worldwide and in Armenia) are analyzed. The threshold cost of Mg production at which the application of this metal in power engineering becomes economically justified is estimated.

Keywords: energy, electrodialysis, magnesium, new technology

Procedia PDF Downloads 268
2947 A Robust Optimization of Chassis Durability/Comfort Compromise Using Chebyshev Polynomial Chaos Expansion Method

Authors: Hanwei Gao, Louis Jezequel, Eric Cabrol, Bernard Vitry

Abstract:

The chassis system is composed of complex elements that take up all the loads from the tire-ground contact area, and thus it plays an important role in numerous specifications such as durability, comfort, and crash. During the development of new vehicle projects at Renault, durability validation is always the main focus, while work on comfort comes later in the project; consequently, design choices sometimes have to be reconsidered because of the natural incompatibility between these two specifications. Robustness is also an important concern, as it is related to manufacturing costs as well as to performance after the ageing of components such as shock absorbers. In this paper, an approach is proposed for a multi-objective optimization between chassis endurance and comfort that takes random factors into consideration. The adaptive-sparse polynomial chaos expansion (PCE) method with Chebyshev polynomial series is applied to predict the uncertainty intervals of a system's responses according to its uncertain-but-bounded parameters. The approach can be divided into three steps. First, an initial design of experiments is carried out to build the response surfaces which statistically represent a black-box system. Secondly, within several iterations, an optimum set is proposed and validated, forming a Pareto front; at the same time, the robustness of each response, serving as an additional objective, is calculated from the pre-defined parameter intervals and the response surfaces obtained in the first step. Finally, an inverse strategy is carried out to determine the parameter tolerance combination with a maximally acceptable degradation of the responses in terms of manufacturing costs. A quarter-car model is tested as an example, applying road excitations from actual road measurements for both endurance and comfort calculations. One indicator based on Basquin's law is defined to compare the global chassis durability of different parameter settings; another indicator, related to comfort, is obtained from the vertical acceleration of the sprung mass. An optimum set with the best robustness is finally obtained, and reference tests prove a good robustness prediction of the Chebyshev PCE method. This example demonstrates the effectiveness and reliability of the approach, in particular its ability to save computational costs for a complex system.
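
A hedged, one-dimensional sketch of the Chebyshev-surrogate idea (the vehicle model is a stand-in): fit a Chebyshev series to a black-box response sampled over a bounded parameter interval, then use the cheap surrogate to bound the response over that interval.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def response(k):                      # stand-in black box, e.g. a comfort metric
    return 1.0 / (1.0 + 0.5 * k) + 0.05 * np.sin(5.0 * k)

# Uncertain-but-bounded parameter k in [k_lo, k_hi], mapped to [-1, 1].
k_lo, k_hi = 0.8, 1.2
xi = np.cos(np.pi * (np.arange(16) + 0.5) / 16)       # Chebyshev nodes in [-1, 1]
k_samples = 0.5 * (k_hi - k_lo) * xi + 0.5 * (k_hi + k_lo)
coeffs = C.chebfit(xi, response(k_samples), deg=8)    # spectral surrogate

# Cheap uncertainty interval from dense evaluation of the surrogate.
grid = np.linspace(-1.0, 1.0, 2001)
vals = C.chebval(grid, coeffs)
print("response interval:", vals.min(), vals.max())
```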

Keywords: chassis durability, Chebyshev polynomials, multi-objective optimization, polynomial chaos expansion, ride comfort, robust design

Procedia PDF Downloads 149
2946 Learning Preference in Nursing Students at Boromarajonani College of Nursing Chon Buri

Authors: B. Wattanakul, G. Ngamwongwan, S. Ngamkham

Abstract:

Exposure to different learning experiences contributes to changes in learning style, and addressing students' learning preferences could help teachers provide different learning activities that encourage students to learn effectively. Purpose: The purpose of this descriptive study was to describe the learning styles of nursing students at Boromarajonani College of Nursing Chon Buri. Sample: The purposive sample was 463 nursing students enrolled in the nursing program at different academic levels. The 16-item VARK questionnaire, each item with 4 choices corresponding to the Visual, Aural, Read/write, and Kinesthetic modalities measured by VARK, was administered at a single data collection point. Results: The majority learning preference of students at all levels was visual and read/write. Almost 67% of students have a multimodal preference, typically a visual preference combined with a read/write or kinesthetic preference; at all academic levels, multimodal preferences outnumber single preferences. Over 30% of students have one dominant learning preference: visual, read/write, or kinesthetic. Analysis of variance (ANOVA) with Bonferroni adjustment revealed a significant difference between students based on their academic level (p < 0.001). The learning style of first-grade nursing students differed from that of second-grade students (p < 0.001), and the learning style of second-grade students varied significantly from the 1st, 3rd, and 4th grades (p < 0.001), while the learning preference of the 3rd grade did not differ significantly from that of the 4th grade (p > 0.05). Conclusions: Nursing students have varied learning styles across academic levels, and learning preference is not a fixed attribute. This should help nursing teachers assess changes in students' learning preferences while developing teaching plans, in order to optimize students' learning environment, achieve the needs of the courses, and help students develop learning preferences that meet those needs.

Keywords: learning preference, VARK, learning style, nursing

Procedia PDF Downloads 352
2945 Multi-Objective Optimization for Aircraft Fleet Management: A Multi-Parametric Approach

Authors: Xin-Yu Li, Dung-Ying Lin

Abstract:

Fleet availability is a crucial indicator for an aircraft fleet. In practice, however, fleet planning involves many resource and safety constraints, such as annual and monthly flight training targets and maximum engine usage limits. For safety reasons, engines must be removed for mandatory maintenance and replacement of key components; this situation is known as a 'threshold', and the annual number of thresholds is a key factor in maintaining fleet availability. The traditional method relies heavily on experience and manual planning, which may result in ineffective engine usage and affect flight missions. This study addresses the challenges of fleet planning and availability maintenance for an aircraft fleet with resource and safety constraints, with the goal of effectively optimizing engine usage and maintenance tasks. The study has four objectives: minimizing the number of engine thresholds, minimizing the monthly shortfall of flight hours, minimizing the monthly excess of flight hours, and minimizing engine disassembly frequency. To solve the resulting formulation, this study uses parametric programming techniques and the ϵ-constraint method to reformulate the multi-objective problem into single-objective problems, efficiently generating Pareto fronts. This method is advantageous when handling multiple conflicting objectives, as it allows an effective trade-off between the competing objectives. Empirical results and managerial insights are provided.
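
A hedged toy illustration of the ϵ-constraint reformulation (nothing here is the paper's fleet model): minimize objective f1 while constraining f2 ≤ ϵ, then sweep ϵ to trace the Pareto front of a small linear program.

```python
import numpy as np
from scipy.optimize import linprog

# Toy bi-objective LP: minimize f1 = x1 and f2 = x2 subject to x1 + x2 >= 1.
# epsilon-constraint: minimize f1 with f2 <= eps, sweeping eps over a grid.
front = []
for eps in np.linspace(0.0, 1.0, 11):
    res = linprog(
        c=[1.0, 0.0],                  # objective f1 = x1
        A_ub=[[-1.0, -1.0],            # -x1 - x2 <= -1  (i.e., x1 + x2 >= 1)
              [0.0, 1.0]],             #  x2 <= eps       (the epsilon constraint)
        b_ub=[-1.0, eps],
        bounds=[(0.0, 1.0), (0.0, 1.0)],
    )
    if res.success:
        front.append((res.x[0], res.x[1]))   # one Pareto point per eps
print(front)   # traces x1 + x2 = 1, the Pareto front of this toy problem
```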

Keywords: aircraft fleet, engine utilization planning, multi-objective optimization, parametric method, pareto optimality

Procedia PDF Downloads 3
2944 Optimizing Data Transfer and Processing in Multi-Cloud Environments for Big Data Workloads

Authors: Gaurav Kumar Sinha

Abstract:

In an era defined by the proliferation of data and the utilization of cloud computing environments, the efficient transfer and processing of big data workloads across multi-cloud platforms have emerged as critical challenges. This research paper embarks on a comprehensive exploration of the complexities associated with managing and optimizing big data in a multi-cloud ecosystem.

The foundation of this study is rooted in the recognition that modern enterprises increasingly rely on multiple cloud providers to meet diverse business needs, enhance redundancy, and reduce vendor lock-in. As a consequence, managing data across these heterogeneous cloud environments has become intricate, necessitating innovative approaches to ensure data integrity, security, and performance.

The primary objective of this research is to investigate strategies and techniques for enhancing the efficiency of data transfer and processing in multi-cloud scenarios. It recognizes that big data workloads are characterized by their sheer volume, variety, velocity, and complexity, making traditional data management solutions insufficient for harnessing the full potential of multi-cloud architectures.

The study commences by elucidating the challenges posed by multi-cloud environments in the context of big data: data fragmentation, latency, security concerns, and cost optimization. To address these challenges, the research explores a range of methodologies and solutions. One of the key areas of focus is data transfer optimization. The paper delves into techniques for minimizing data movement latency, optimizing bandwidth utilization, and ensuring secure data transmission between different cloud providers. It evaluates the applicability of dedicated data transfer protocols, intelligent data routing algorithms, and edge computing approaches in reducing transfer times.

Furthermore, the study examines strategies for efficient data processing across multi-cloud environments. It acknowledges that big data processing requires distributed and parallel computing capabilities that span cloud boundaries, and it investigates containerization and orchestration technologies, serverless computing models, and interoperability standards that facilitate seamless data processing workflows.

Security and data governance are paramount concerns in multi-cloud environments. The paper explores methods for ensuring data security, access control, and compliance with regulatory frameworks. It considers encryption techniques, identity and access management, and auditing mechanisms as essential components of a robust multi-cloud data security strategy.

The research also evaluates cost optimization strategies, recognizing that the dynamic nature of multi-cloud pricing models can impact the overall cost of data transfer and processing. It examines approaches for workload placement, resource allocation, and predictive cost modeling to minimize operational expenses while maximizing performance.

Moreover, this study provides insights into real-world case studies and best practices adopted by organizations that have successfully navigated the challenges of multi-cloud big data management, and it presents a comparative analysis of various multi-cloud management platforms and tools available in the market.

Keywords: multi-cloud environments, big data workloads, data transfer optimization, data processing strategies

Procedia PDF Downloads 61
2943 Flow Field Optimization for Proton Exchange Membrane Fuel Cells

Authors: Xiao-Dong Wang, Wei-Mon Yan

Abstract:

The flow field design of the bipolar plates affects the performance of a proton exchange membrane (PEM) fuel cell. This work adopted a combined optimization procedure, including a simplified conjugate-gradient method and a completely three-dimensional, two-phase, non-isothermal fuel cell model, to search for an optimal flow field design for a single serpentine fuel cell of size 9×9 mm with five channels. For the direct solution, the two-fluid method was adopted to incorporate heat effects using energy equations for the entire cell. The model assumes that the system is steady, the inlet reactants are ideal gases, the flow is laminar, and the porous layers such as the diffusion layer, catalyst layer, and PEM are isotropic. The model includes continuity, momentum, and species equations for gaseous species, liquid water transport equations in the channels, gas diffusion layers, and catalyst layers, a water transport equation in the membrane, and electron and proton transport equations. The Butler-Volmer equation was used to describe the electrochemical reactions in the catalyst layers. The cell output power density Pcell is maximized subject to an optimal set of channel heights, H1-H5, and channel widths, W2-W5. The base case, with all channel heights and widths set at 1 mm, yields Pcell = 7260 W/m². The optimal design displays a tapered characteristic for channels 1, 3, and 4, and a diverging characteristic in height for channels 2 and 5, producing Pcell = 8894 W/m², an increase of about 22.5%. The reduced heights of channels 2-4 significantly increase sub-rib convection, effectively removing liquid water and enhancing oxygen transport in the gas diffusion layer, while the final diverging channel minimizes the leakage of fuel to the outlet via sub-rib convection from channel 4 to channel 5. A near-optimal design that is easily manufactured without a large loss in cell performance was also tested: a straight final channel of 0.1 mm height led to a 7.37% power loss, while a design with all channel widths at 1 mm and the optimal channel heights obtained above yields only a 1.68% loss of current density. The presence of a final, diverging channel has a greater impact on cell performance than fine adjustment of channel width under the simulation conditions studied here.

Keywords: optimization, flow field design, simplified conjugate-gradient method, serpentine flow field, sub-rib convection

Procedia PDF Downloads 294
2942 A Mixed-Integer Nonlinear Program to Optimally Pace and Fuel Ultramarathons

Authors: Kristopher A. Pruitt, Justin M. Hill

Abstract:

The purpose of this research is to determine the pacing and nutrition strategies that minimize completion time and carbohydrate intake for athletes competing in ultramarathon races. The model formulation consists of a two-phase optimization. The first-phase mixed-integer nonlinear program (MINLP) determines the minimum completion time subject to the altitude, terrain, and distance of the race, as well as the mass and cardiovascular fitness of the athlete. The second-phase MINLP determines the minimum total carbohydrate intake required for the athlete to achieve the completion time prescribed by the first phase, subject to the flow of carbohydrates through the stomach, liver, and muscles. Consequently, the second-phase model provides the optimal pacing and nutrition strategies for a particular athlete for each kilometer of a particular race. Validation of the model results over a wide range of athlete parameters against completion times for real competitive events shows strong agreement. Additionally, the kilometer-by-kilometer pacing and nutrition strategies the model prescribes for a particular athlete suggest that unconventional approaches could result in lower completion times. Thus, the MINLP provides prescriptive guidance that athletes can leverage when developing pacing and nutrition strategies before competing in ultramarathon races. Given the highly variable topographical characteristics common to many ultramarathon courses and the potential inexperience of many athletes with such courses, the model provides valuable insight to competitors who might otherwise fail to complete the event due to exhaustion or carbohydrate depletion.

Keywords: nutrition, optimization, pacing, ultramarathons

Procedia PDF Downloads 183
2941 High Aspect Ratio Micropillar Array Based Microfluidic Viscometer

Authors: Ahmet Erten, Adil Mustafa, Ayşenur Eser, Özlem Yalçın

Abstract:

We present a new viscometer based on a microfluidic chip with elastic, high-aspect-ratio micropillar arrays. The displacement of the pillar tips in the flow direction can be used to analyze the viscosity of a liquid. In our work, computational fluid dynamics (CFD) is used to analyze the pillar displacement of various micropillar array configurations in the flow direction at different viscosities. Following CFD optimization, micro-CNC-based rapid prototyping is used to fabricate molds for the microfluidic chips, which are fabricated out of polydimethylsiloxane (PDMS) using soft lithography with molds machined out of aluminum. Tip displacements of the micropillar array (300 µm in diameter and 1400 µm in height) in the flow direction are recorded using a microscope-mounted camera, and the displacements are analyzed using image processing with an algorithm written in MATLAB. Experiments are performed with water-glycerol solutions mixed at four different ratios to attain viscosities of 1 cP, 5 cP, 10 cP, and 15 cP at room temperature. The prepared solutions are injected into the microfluidic chips using a syringe pump at flow rates from 10-100 mL/hr, and the displacement versus flow rate is plotted for the different viscosities. A displacement of around 1.5 µm was observed for the 15 cP solution at 60 mL/hr, while only a 1 µm displacement was observed for the 10 cP solution. The presented viscometer design is still being optimized for better sensitivity and accuracy. Our microfluidic viscometer platform has potential for tailor-made microfluidic chips that enable real-time observation and control of viscosity changes in biological or chemical reactions.

Keywords: Computational Fluid Dynamics (CFD), high aspect ratio, micropillar array, viscometer

Procedia PDF Downloads 242
2940 Coupling of Microfluidic Droplet Systems with ESI-MS Detection for Reaction Optimization

Authors: Julia R. Beulig, Stefan Ohla, Detlev Belder

Abstract:

In contrast to off-line analytical methods, lab-on-a-chip technology delivers direct information about the observed reaction; microfluidic devices therefore make an important scientific contribution, e.g., in the field of synthetic chemistry, where the rapid generation of analytical data can be applied to the optimization of chemical reactions. These microfluidic devices enable fast changes of reaction conditions as well as a resource-saving mode of operation. In the presented work, we focus on the investigation of multiphase regimes, more specifically on biphasic microfluidic droplet systems, in which every single droplet is a reaction container with customized conditions. The biggest challenge is the rapid qualitative and quantitative readout of information, as most detection techniques for droplet systems are non-specific, time-consuming, or too slow. An exception is electrospray ionization mass spectrometry (ESI-MS). Combining a reaction screening platform with a rapid and specific detection method is an important step in droplet-based microfluidics. In this work, we present a novel approach for synthesis optimization on the nanoliter scale with direct ESI-MS detection, showing the development of a droplet-based microfluidic device that enables the modification of different parameters while simultaneously monitoring their effect on the reaction within a single run. A polydimethylsiloxane (PDMS) microfluidic chip with different functionalities was developed by common soft- and photolithographic techniques. As the interface for MS detection, we use a steel capillary for ESI and improve spray stability with a Teflon siphon tubing inserted underneath the steel capillary. By optimizing the flow rates, it is possible to screen the parameters of various reactions; this is exemplarily shown for a domino Knoevenagel hetero-Diels-Alder reaction. Different starting materials, catalyst concentrations, and solvent compositions are investigated. Due to the high repetition rate of droplet production, each set of reaction conditions is examined hundreds of times. As a result of the investigation, we obtain suitable reagents, the ideal water-methanol ratio of the solvent, and the most effective catalyst concentration. The developed system can help determine important information about the optimal parameters of a reaction within a short time. With this novel tool, we take an important step in combining droplet-based microfluidics with organic reaction screening.

Keywords: droplet, mass spectrometry, microfluidics, organic reaction, screening

Procedia PDF Downloads 296
2939 Economic Analysis of Post-Harvest Losses in Plantain (and Banana): A Case Study of South Western Nigeria

Authors: O. R. Adeniyi, A. Ayandiji

Abstract:

Losses are common in most perishable produce because the fruit ripens rapidly, and most plantain products can only be stored for a few days, limiting their utilization. Plantain (and banana) is highly perishable at the ambient temperatures prevalent in the tropics. The specific objective of this study is to identify the socioeconomic characteristics of banana/plantain dealers and determine the perceived effect of the losses incurred in the process of marketing banana/plantain. The study was carried out in Ondo and Lagos states of south-western Nigeria. Purposive sampling was used to collect information from 'Kolawole plantain depot', the point of purchase in Ondo State, and 'Alamutu plantain market' in Mushin, the point of sale in Lagos State. A preliminary study was conducted with primary data collected through well-structured questionnaires administered to 60 respondents, of which 55 fully completed ones were analysed. Budgeting, gross margin analysis, and multiple linear regression were used for the analyses. Most merchants were found to be in the middle age class (30-50 years); the majority were female and had completed secondary school education, with eighty percent having more than 5 years of experience in banana/plantain marketing. The highest losses were incurred during transportation, and these losses constitute about 5.62 percent of the potential total revenue. On average, the loss in gross margin is about ₦6,000.00 per merchant, and the impact of these losses is reflected in the merchants' continuously reducing income. The age of the respondents played a major role in determining the level of care in handling the fruits, with the middle age class tending to handle them more carefully. In conclusion, the merchants need adequate and sustainable transportation and storage facilities as a matter of utmost urgency. Government should encourage producers (farmers) with motivating incentives and ensure a conducive environment for dealers by providing adequate storage facilities and ready markets, locally and possibly for export.

Keywords: post-harvest, losses, plantain, banana, simple regression

Procedia PDF Downloads 310
2938 Use of Galileo Advanced Features in Maritime Domain

Authors: Olivier Chaigneau, Damianos Oikonomidis, Marie-Cecile Delmas

Abstract:

GAMBAS (Galileo Advanced features for the Maritime domain: Breakthrough Applications for Safety and security) is a project funded by the European Union Agency for the Space Programme (EUSPA) that aims to identify the search-and-rescue and ship-security-alert-system needs of maritime users (including operators and fishing stakeholders) and to develop operational concepts to answer these needs. The general objective of the GAMBAS project is to support the deployment of Galileo exclusive features in the maritime domain in order to improve safety and security at sea, detection of illegal activities and the associated surveillance means, and resilience to natural and human-induced emergency situations, and to develop, integrate, demonstrate, standardize, and disseminate these new associated capabilities. The project aims to demonstrate: improvement of SAR (Search and Rescue) and SSAS (Ship Security Alert System) detection and response to maritime distress through the integration of new features into the SSAS beacon, in terms of cost optimization, user-friendliness, integration of Galileo and OSNMA (Open Service Navigation Message Authentication) reception for improved authenticated localization performance and reliability, and at-sea triggering capabilities; optimization of the responsiveness of RCCs (Rescue Coordination Centres) to distress situations affecting vessels; and the adaptation of the MCCs (Mission Control Centres) and MEOLUT (Medium Earth Orbit Local User Terminal) to the data distribution of SSAS alerts.

Keywords: Galileo new advanced features, maritime, safety, security

Procedia PDF Downloads 90
2937 An Integrated Approach for Optimal Selection of Machining Parameters in Laser Micro-Machining Process

Authors: A. Gopala Krishna, M. Lakshmi Chaitanya, V. Kalyana Manohar

Abstract:

In the present analysis, laser micro-machining (LMM) of silicon carbide particle (SiCp) reinforced aluminum 7075 metal matrix composite (Al7075/SiCp MMC) was studied. During machining, because of the intense heat generated, a layer forms on the workpiece surface; this recast layer is detrimental to the surface quality of the component and needs to be as small as possible for precision applications. Therefore, the height of the recast layer and the depth of the groove, which are conflicting in nature, were considered the significant manufacturing criteria determining the quality of a machining process in LMM of the Al7075/10%SiCp composite. The present work formulates the depth of groove and the height of the recast layer in relation to the machining parameters using response surface methodology (RSM), and the formulated mathematical models were then used for optimization. Since the effects of the machining parameters on the depth of groove and the height of the recast layer were contradictory, the problem was formulated as a multi-objective optimization problem. An evolutionary non-dominated sorting genetic algorithm (NSGA-II) was employed to optimize the models established by RSM and was adapted to obtain the Pareto-optimal set of solutions, providing a detailed basis for selecting the optimal solution. Finally, experiments were conducted to confirm the results obtained from RSM and NSGA-II.

Keywords: laser micro machining (LMM), depth of groove, height of recast layer, response surface methodology (RSM), non-dominated sorting genetic algorithm

Procedia PDF Downloads 342
2936 Prioritizing Quality Dimensions in ‘Servitised’ Business through AHP

Authors: Mohita Gangwar Sharma

Abstract:

Different factors are compelling manufacturers to move towards the phenomenon of servitization, i.e., firms going beyond supporting customers in operating the equipment. The challenges faced in this transition by manufacturing firms moving from being a product provider to a product-service provider are multipronged. Product-service systems (PSS) lie between the pure-product and pure-service ends of the continuum. Through this study, we wish to understand the dimensions of 'PSS quality'. We draw upon the quality literature for both products and services and, through an expert survey for a specific transportation sector using the analytic hierarchy process (AHP), derive a conceptual model that can be used as a comprehensive measurement tool for PSS offerings.
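
A hedged sketch of the AHP computation at the heart of such a survey (the matrix values are invented): derive priority weights for quality dimensions from a pairwise-comparison matrix via its principal eigenvector, and check consistency with Saaty's consistency ratio.

```python
import numpy as np

# Hypothetical pairwise comparisons of three PSS-quality dimensions on Saaty's
# 1-9 scale (A[i, j] = how much more important dimension i is than dimension j).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                   # principal eigenvalue lambda_max
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                      # priority vector, sums to 1

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)          # consistency index
ri = 0.58                                     # Saaty's random index for n = 3
print("weights:", weights, "CR:", ci / ri)    # CR < 0.1 is conventionally acceptable
```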

Keywords: servitisation, quality, product-service system, AHP

Procedia PDF Downloads 303