Search results for: Gaussian process
15351 Business Process Mashup
Authors: Fethia Zenak, Salima Benbernou, Linda Zaoui
Abstract:
Recently, many companies have based their process development on building from scratch to achieve their business goals. Process development is not trivial, and the main objective of enterprises managing processes is to decrease software development time. Several concepts have been proposed in the field of business-process-based reuse development, known as BP Mashup. This concept consists of reusing existing business processes that have been modeled in order to respond to a particular goal. To meet user process requirements, our contribution is to mix parts of processes, as 'process fragments' components, to build a new process (i.e., a process mashup). The main idea of our paper is to offer a graphical framework tool for both creating and running process mashups, allowing users to perform a mixture of fragments through a simple interface with a set of graphical mixture operators based on a proposed formal model. A process mashup and its mixture behavior are described within a new specification of a high-level language for process mashup (BPML).
Keywords: business process, mashup, fragments, bp mashup
Procedia PDF Downloads 636
15350 A New 3D Shape Descriptor Based on Multi-Resolution and Multi-Block CS-LBP
Authors: Nihad Karim Chowdhury, Mohammad Sanaullah Chowdhury, Muhammed Jamshed Alam Patwary, Rubel Biswas
Abstract:
In content-based 3D shape retrieval systems, achieving high search performance has become an important research problem. A challenging aspect of this problem is to find an effective shape descriptor that can discriminate similar shapes adequately. To address this problem, we propose a new shape descriptor for 3D shape models by combining multi-resolution with the multi-block center-symmetric local binary pattern (CS-LBP) operator. Given an arbitrary 3D shape, we first apply pose normalization and generate a set of multi-viewed 2D rendered images. Second, we apply a Gaussian multi-resolution filter to generate several levels of images from each 2D rendered image. Then, overlapped sub-images are computed for each image level of a multi-resolution image. Our unique multi-block CS-LBP comes next: it allows the center to be composed of m-by-n rectangular pixels instead of a single pixel. This process is repeated for all the 2D rendered images, derived from both ‘depth-buffer’ and ‘silhouette’ rendering. Finally, we concatenate all the feature vectors into a one-dimensional histogram as our proposed 3D shape descriptor. Through several experiments on a benchmark dataset, we demonstrate that our proposed 3D shape descriptor outperforms previous methods.
Keywords: 3D shape retrieval, 3D shape descriptor, CS-LBP, overlapped sub-images
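A minimal sketch of the basic (single-block) CS-LBP operator in Python may make the descriptor concrete; the multi-block variant above would replace the single-pixel neighbours with averages over m-by-n pixel blocks. The array layout and threshold value are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def cs_lbp(image, threshold=0.01):
    """Center-symmetric LBP: compare the four center-symmetric neighbour
    pairs of each interior pixel, yielding a 4-bit code (16 bins)."""
    img = image.astype(np.float64)
    # eight neighbours of the interior pixels, clockwise from top-left
    n = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:], img[1:-1, 2:],
         img[2:, 2:], img[2:, 1:-1], img[2:, :-2], img[1:-1, :-2]]
    code = np.zeros(img[1:-1, 1:-1].shape, dtype=np.uint8)
    for i in range(4):  # center-symmetric pairs (0,4), (1,5), (2,6), (3,7)
        code |= ((n[i] - n[i + 4]) > threshold).astype(np.uint8) << i
    return code

def cs_lbp_histogram(codes):
    """Normalised 16-bin histogram, the per-region feature vector."""
    h, _ = np.histogram(codes, bins=16, range=(0, 16))
    return h / h.sum()
```

The per-level, per-view histograms would then be concatenated, as described above, into the final descriptor.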
Procedia PDF Downloads 448
15349 Application of Failure Mode and Effects Analysis (FMEA) on the Virtual Process Hazard Analysis of Acetone Production Process
Authors: Princes Ann E. Prieto, Denise F. Alpuerto, John Rafael C. Unlayao, Neil Concibido, Monet Concepcion Maguyon-Detras
Abstract:
Failure Mode and Effects Analysis (FMEA) has been used in the virtual Process Hazard Analysis (PHA) of the acetone production process via the dehydrogenation of isopropyl alcohol, for which very limited process risk assessment has been published. In this study, the potential failure modes, effects, and possible causes of selected major equipment in the process were identified. During the virtual FMEA mock sessions, the risks in the process were evaluated, and recommendations to reduce and/or mitigate the process risks were formulated. The risk was estimated using the calculated risk priority number (RPN) and was classified into four (4) levels according to the effects on acetone production. Results of this study were also used to rank the criticality of equipment in the process based on the calculated criticality rating (CR). Bow-tie diagrams were also created for the critical hazard scenarios identified in the study.
Keywords: chemical process safety, failure mode and effects analysis (FMEA), process hazard analysis (PHA), process safety management (PSM)
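For reference, the RPN in a conventional FMEA is the product of the severity, occurrence, and detection ratings; the sketch below uses the common 1-10 scales with hypothetical ratings, since the study's actual scales and values are not given in the abstract.

```python
def rpn(severity, occurrence, detection):
    """FMEA risk priority number: product of the three 1-10 ratings."""
    return severity * occurrence * detection

# hypothetical ratings for one failure mode of a major equipment item
print(rpn(severity=9, occurrence=4, detection=3))  # -> 108
```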
Procedia PDF Downloads 138
15348 A Holistic Workflow Modeling Method for Business Process Redesign
Authors: Heejung Lee
Abstract:
In a highly competitive environment, it becomes more important to shorten the whole business process while delivering or even enhancing the business value to the customers and suppliers. Although workflow management systems receive much attention for their capacity to practically support business process enactment, effective workflow modeling methods remain challenging, and the high degree of process complexity makes it more difficult to achieve short lead times. This paper presents a holistic workflow structuring method that can reduce process complexity using activity needs and formal concept analysis, which eventually enhances key performance measures such as quality, delivery, and cost in the business process.
Keywords: workflow management, re-engineering, formal concept analysis, business process
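For readers unfamiliar with formal concept analysis, the brute-force sketch below enumerates the formal concepts of a small binary context; treating activities as objects and needs as attributes is an illustrative assumption about how the paper's activity-needs data would be encoded.

```python
from itertools import combinations

def formal_concepts(objects, attrs, incidence):
    """Enumerate all formal concepts (extent, intent) of a binary context
    by closing every attribute subset. Exponential in |attrs|; fine for
    small contexts."""
    def common_objects(A):
        return frozenset(o for o in objects if all((o, a) in incidence for a in A))
    def common_attrs(O):
        return frozenset(a for a in attrs if all((o, a) in incidence for o in O))
    found = set()
    for r in range(len(attrs) + 1):
        for A in combinations(attrs, r):
            extent = common_objects(A)
            found.add((extent, common_attrs(extent)))
    return found

# toy context: which activity serves which need (hypothetical names)
activities = ["check order", "pack goods", "invoice"]
needs = ["customer data", "stock data"]
incidence = {("check order", "customer data"), ("check order", "stock data"),
             ("pack goods", "stock data"), ("invoice", "customer data")}
for extent, intent in formal_concepts(activities, needs, incidence):
    print(sorted(extent), "<->", sorted(intent))
```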
Procedia PDF Downloads 411
15347 Use of SUDOKU Design to Assess the Implications of the Block Size and Testing Order on Efficiency and Precision of Dulce De Leche Preference Estimation
Authors: Jéssica Ferreira Rodrigues, Júlio Silvio De Sousa Bueno Filho, Vanessa Rios De Souza, Ana Carla Marques Pinheiro
Abstract:
This study aimed to evaluate the implications of the block size and testing order on the efficiency and precision of preference estimation for Dulce de leche samples. Efficiency was defined as the inverse of the average variance of pairwise comparisons among treatments. Precision was defined as the inverse of the variance of treatment mean (or effect) estimates. The experiment was originally designed to test 16 treatments as a series of 8 Sudoku 16x16 designs, 4 randomized independently and 4 others in the reverse order, to yield balance in testing order. Linear mixed models were fitted to the whole experiment with 112 testers and all their grades, as well as to partially balanced subgroups, namely: a) the experiment with the four initial EU; b) the experiment with EU 5 to 8; c) the experiment with EU 9 to 12; and d) the experiment with EU 13 to 16. Responses were recorded on a nine-point hedonic scale, and a mixed linear model was assumed with random tester and treatment effects and a fixed testing-order effect. Analysis with a cumulative random-effects probit link model was very similar, with essentially no different conclusions, so for simplicity we present the results under the Gaussian assumption. The R-CRAN library lme4 and its function lmer (Fit Linear Mixed-Effects Models) were used for the mixed models, and the libraries Bayesthresh (default Gaussian threshold function) and ordinal, with the function clmm (Cumulative Link Mixed Model), were used to check the Bayesian analysis of threshold models and cumulative link probit models. It was noted that the number of samples tested in the same session can influence the acceptance level, underestimating the acceptance. However, providing a large number of samples can help to improve sample discrimination.
Keywords: acceptance, block size, mixed linear model, testing order
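The abstract's model (random tester effect, fixed testing-order effect) can be sketched outside R as well; the Python version below with statsmodels is a simplified stand-in on synthetic data. The column names, effect sizes, and the treatment-as-fixed simplification are assumptions, not the study's actual code or data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# synthetic stand-in for the sensory data: 112 testers x 16 samples,
# scored on a nine-point hedonic scale (hypothetical effect sizes)
rng = np.random.default_rng(3)
rows = []
for tester in range(112):
    tester_effect = rng.normal(0.0, 0.5)
    for order, treatment in enumerate(rng.permutation(16)):
        score = 6 + tester_effect - 0.05 * order + rng.normal(0.0, 1.0)
        rows.append((tester, order, treatment, int(np.clip(round(score), 1, 9))))
df = pd.DataFrame(rows, columns=["tester", "order", "treatment", "score"])

# fixed testing-order effect, random tester effect (treatment fixed here;
# the paper also treats treatment as random, which needs a richer tool)
fit = smf.mixedlm("score ~ order + C(treatment)", df, groups=df["tester"]).fit()
print(fit.params.filter(like="order"))
```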
Procedia PDF Downloads 322
15346 A Case Study of Conceptual Framework for Process Performance
Authors: Ljubica Milanović Glavan, Vesna Bosilj Vukšić, Dalia Suša
Abstract:
In order to gain a competitive advantage, many companies are focusing on the reorganization of their business processes and implementing process-based management. In this context, assessing process performance is essential because it enables individuals and groups to assess where they stand in comparison to their competitors. In this paper, it is argued that process performance measurement is a necessity for a modern process-oriented company and should be supported by a holistic process performance measurement system. It seems very unlikely that a universal set of performance indicators can be applied successfully to all business processes. Thus, performance indicators must be process-specific and have to be derived from both the strategic enterprise-wide goals and the process goals. Based on an extensive literature review and interviews conducted in a Croatian company, a conceptual framework for a process performance measurement system was developed. The main objective of such a system is to help process managers by providing comprehensive and timely information on the performance of business processes. This information can be used to communicate goals and the current performance of a business process directly to the process team, to improve resource allocation and process output regarding quantity and quality, to give early warning signals, to diagnose the weaknesses of a business process, to decide whether corrective actions are needed, and to assess the impact of actions taken.
Keywords: Croatia, key performance indicators, performance measurement, process performance
Procedia PDF Downloads 676
15345 Performance Comparison of Non-Binary RA and QC-LDPC Codes
Abstract:
Repeat–Accumulate (RA) codes are a subclass of LDPC codes with fast encoder structures. In this paper, we consider a non-binary extension of binary LDPC codes over GF(q) and construct a non-binary RA code and a non-binary QC-LDPC code over GF(2^4): the non-binary RA codes are constructed with a linear encoding method, and the non-binary QC-LDPC codes with algebraic construction methods. The BER performance of the RA and QC-LDPC codes over GF(q) is compared under BP decoding by simulation over the additive white Gaussian noise (AWGN) channel.
Keywords: non-binary RA codes, QC-LDPC codes, performance comparison, BP algorithm
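A BER-versus-SNR comparison of this kind rests on a Monte-Carlo loop over an AWGN channel; the sketch below shows that harness for uncoded BPSK, the baseline that a coded simulation would wrap its GF(2^4) encoder and BP decoder around. It is an illustrative scaffold, not the paper's simulator.

```python
import numpy as np

def bpsk_awgn_ber(ebn0_db, n_bits=1_000_000, seed=0):
    """Monte-Carlo BER of uncoded BPSK over AWGN: the channel model and
    error-counting loop shared by coded (RA / QC-LDPC) simulations."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, n_bits)
    symbols = 1.0 - 2.0 * bits                   # 0 -> +1, 1 -> -1
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    sigma = np.sqrt(1.0 / (2.0 * ebn0))          # unit symbol energy, rate 1
    received = symbols + sigma * rng.standard_normal(n_bits)
    return np.mean((received < 0).astype(int) != bits)

for snr_db in range(0, 9, 2):
    print(f"{snr_db} dB -> BER {bpsk_awgn_ber(snr_db):.2e}")
```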
Procedia PDF Downloads 377
15344 Fixed Points of Contractive-Like Operators by a Faster Iterative Process
Authors: Safeer Hussain Khan
Abstract:
In this paper, we prove a strong convergence result using a recently introduced iterative process with contractive-like operators. This improves and generalizes corresponding results in the literature in two ways: the iterative process is faster, and the operators are more general. In the end, we indicate that the results can also be proved with the iterative process with error terms.
Keywords: contractive-like operator, iterative process, fixed point, strong convergence
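The abstract does not specify the iterative process itself; as a generic illustration of the family of schemes involved, the sketch below implements the classical Mann iteration for a contractive-like map on the real line. The paper's faster process and its error-term variant would replace the update rule.

```python
import math

def mann_iteration(T, x0, alpha=lambda n: 0.5, tol=1e-10, max_iter=10_000):
    """Mann iteration x_{n+1} = (1 - a_n) x_n + a_n T(x_n); a constant
    step a_n = 1/2 is used here for simplicity, though the literature
    typically takes sequences with sum a_n = infinity."""
    x = x0
    for n in range(max_iter):
        a = alpha(n)
        x_next = (1.0 - a) * x + a * T(x)
        if abs(x_next - x) < tol:
            return x_next, n + 1
        x = x_next
    return x, max_iter

# example: T(x) = cos(x) has a unique fixed point near 0.739
print(mann_iteration(math.cos, x0=1.0))
```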
Procedia PDF Downloads 434
15343 Decision Making Communication in the Process of Technologies Commercialization: Archival Analysis of the Process Content
Authors: Vaida Zemlickiene
Abstract:
Scientists around the world and practitioners are working to identify the factors that influence the results of technology commercialization and to propose an ideal model for the technology commercialization process. In other words, all stakeholders of technology commercialization seek to find a formula or set of rules to succeed in commercializing technologies in order to avoid unproductive investments. In this article, the process of technology commercialization is understood as the process of transforming inventions into marketable products, services, and processes, or the path from the idea of using an invention to a product, spanning technology readiness levels (TRL) 1 to 9. There are many publications in the management literature aimed at managing the commercialization process. However, there is an apparent lack of research on communication in decision-making in the process of technology commercialization. Analysis of past work and of the last decade's global research led to the unambiguous conclusion that the methodological framework is not mature enough to be of practical use in business. The process of technology commercialization and the decisions made in the process should therefore be explored in depth. An archival analysis is performed to find insights into decision-making communication in the process of technology commercialization, to find out the content of the technology commercialization process (decision-making stages and participants), to analyze the internal factors of technology commercialization, to perform their critical analysis, and to analyze the concept of successful/unsuccessful technology commercialization.
Keywords: the process of technology commercialization, communication in decision-making process, the content of technology commercialization process, successful/unsuccessful technology commercialization
Procedia PDF Downloads 153
15342 Multiscale Modelization of Multilayered Bi-Dimensional Soils
Authors: I. Hosni, L. Bennaceur Farah, N. Saber, R. Bennaceur
Abstract:
Soil moisture content is a key variable in many environmental sciences. Even though it represents a small proportion of the liquid freshwater on Earth, it modulates interactions between the land surface and the atmosphere, thereby influencing climate and weather. Accurate modeling of the above processes depends on the ability to provide a proper spatial characterization of soil moisture. The measurement of soil moisture content allows assessment of soil water resources in the fields of hydrology and agronomy. The second parameter in interaction with the radar signal is the geometric structure of the soil. Most traditional electromagnetic models consider natural surfaces as single-scale, zero-mean, stationary Gaussian random processes. Roughness behavior is characterized by statistical parameters such as the root mean square (RMS) height and the correlation length. The main problem is that the agreement between experimental measurements and theoretical values is usually poor due to the large variability of the correlation function; as a consequence, backscattering models have often failed to predict backscattering correctly. In this study, surfaces are considered as band-limited fractal random processes corresponding to a superposition of a finite number of one-dimensional Gaussian processes, each one having its own spatial scale. Multiscale roughness is characterized by two parameters: the first is proportional to the RMS height, and the other is related to the fractal dimension. Soil moisture is related to the complex dielectric constant. This multiscale description has been adapted to two-dimensional profiles using the bi-dimensional wavelet transform and the Mallat algorithm to describe natural surfaces more correctly. We characterize the soil surfaces and sub-surfaces by a three-layer geo-electrical model. The upper layer is described by its dielectric constant, thickness, a multiscale bi-dimensional surface roughness model using the wavelet transform and the Mallat algorithm, and volume scattering parameters. The lower layer is divided into three fictive layers separated by assumed plane interfaces; these three layers are modeled by an effective medium characterized by an apparent effective dielectric constant taking into account the presence of air pockets in the soil. We have adopted the 2D multiscale three-layer small perturbation model (SPM), including, firstly, air pockets in the soil sub-structure, and then a vegetation canopy in the soil surface structure, to simulate the radar backscattering. A sensitivity analysis of the backscattering coefficient's dependence on multiscale roughness and soil moisture has been performed. Later, we proposed to change the dielectric constant of the multilayer medium so that it takes into account the different moisture values of each layer in the soil. A sensitivity analysis of the backscattering coefficient, including the air pockets in the volume structure, with respect to the multiscale roughness parameters and the apparent dielectric constant was carried out. Finally, we proposed to study the behavior of the radar backscattering coefficient for a soil having a vegetation layer in its surface structure.
Keywords: multiscale, bidimensional, wavelets, backscattering, multilayer, SPM, air pockets
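To make the band-limited fractal description concrete, the sketch below builds a 1D multiscale profile as a superposition of Gaussian-correlated components whose amplitudes decay geometrically across scales. The scale list, decay rule, and parameter values are illustrative assumptions; the paper's actual construction is 2D via the wavelet transform and the Mallat algorithm.

```python
import numpy as np

def multiscale_profile(n=1024, dx=0.01, scales=(0.05, 0.1, 0.2, 0.4),
                       rms=0.01, decay=0.7, seed=1):
    """Band-limited fractal roughness: superpose one Gaussian-correlated
    component per spatial scale, with power-law amplitude decay (the decay
    ratio stands in for the fractal-dimension parameter)."""
    rng = np.random.default_rng(seed)
    x = np.arange(n) * dx
    k = np.fft.rfftfreq(n, dx)
    z = np.zeros(n)
    for i, corr_len in enumerate(scales):
        white = rng.standard_normal(n)
        gauss = np.exp(-(np.pi * k * corr_len) ** 2)   # Gaussian spectral filter
        comp = np.fft.irfft(np.fft.rfft(white) * gauss, n)
        comp /= comp.std()
        z += decay ** i * comp                          # amplitude decay across scales
    return x, z * (rms / z.std())

x, z = multiscale_profile()
print(z.std())  # equals the requested RMS height
```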
Procedia PDF Downloads 125
15341 Optimization of the Transfer Molding Process by Implementation of Online Monitoring Techniques for Electronic Packages
Authors: Burcu Kaya, Jan-Martin Kaiser, Karl-Friedrich Becker, Tanja Braun, Klaus-Dieter Lang
Abstract:
The quality of molded packages is strongly influenced by the process parameters of transfer molding. To achieve better package quality and a stable transfer molding process, it is necessary to understand the influence of the process parameters on package quality. This work aims to comprehend the relationship between the process parameters and to identify the optimum process parameters for the transfer molding process in order to achieve fewer voids and less wire sweep. To achieve this, a DoE is executed for process optimization, and a regression analysis is carried out. A systematic approach is presented to generate models that enable an estimation of the number of voids and of wire sweep. Validation experiments are conducted to verify the model, and the results are presented.
Keywords: dielectric analysis, electronic packages, epoxy molding compounds, transfer molding process
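The regression step of such a DoE study can be sketched in a few lines; the factor names, levels, and effect sizes below are hypothetical stand-ins, not the study's molding parameters.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# synthetic stand-in for DoE results (hypothetical factors and response)
rng = np.random.default_rng(7)
n_runs = 27
df = pd.DataFrame({
    "pressure": rng.choice([60.0, 80.0, 100.0], n_runs),
    "mold_temp": rng.choice([165.0, 175.0, 185.0], n_runs),
    "transfer_time": rng.choice([10.0, 20.0, 30.0], n_runs),
})
df["voids"] = (0.5 * df["pressure"] - 0.2 * df["mold_temp"]
               + 0.8 * df["transfer_time"] + rng.normal(0.0, 5.0, n_runs))

# regression model estimating void count from the molding parameters
model = smf.ols("voids ~ pressure + mold_temp + transfer_time", data=df).fit()
print(model.params)
```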
Procedia PDF Downloads 382
15340 Optimal Performance of Plastic Extrusion Process Using Fuzzy Goal Programming
Authors: Abbas Al-Refaie
Abstract:
This study optimized the performance of the plastic extrusion process for drip irrigation pipes using fuzzy goal programming. Two responses were of main interest: roll thickness and hardness. Four main process factors were studied, and the L18 array was used for the experimental design. Individuals-moving range control charts were used to assess the stability of the process, while the process capability index was used to assess process performance. Confirmation experiments were conducted at the combination of optimal factor settings obtained by fuzzy goal programming. The results revealed that process capability was improved significantly, from -1.129 to 0.8148 for roll thickness and from 0.0965 to 0.714 for hardness. Such improvement results in considerable savings in production and quality costs.
Keywords: fuzzy goal programming, extrusion process, process capability, irrigation plastic pipes
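The capability figures quoted above are values of the standard index Cpk: the distance from the process mean to the nearest specification limit in units of three standard deviations. The sketch below computes it with hypothetical specification limits and measurements, since the study's actual limits are not given in the abstract.

```python
def cpk(mean, std, lsl, usl):
    """Process capability index: min of the upper and lower one-sided
    capabilities; negative when the mean lies outside the limits."""
    return min((usl - mean) / (3 * std), (mean - lsl) / (3 * std))

# hypothetical roll-thickness spec (mm) before and after optimization
print(cpk(mean=1.08, std=0.020, lsl=0.95, usl=1.10))  # poor: mean near USL
print(cpk(mean=1.02, std=0.015, lsl=0.95, usl=1.10))  # improved
```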
Procedia PDF Downloads 268
15339 Embedding Knowledge Management in Business Process
Authors: Paul Ihuoma Oluikpe
Abstract:
The purpose of this paper is to explore and highlight the process of creating value for strategy management by embedding knowledge management in the business process. Knowledge management can be seen from a three-dimensional perspective of content, connections, and competencies. These dimensions can be embedded in the knowledge processes (create, capture, share, and apply) and operationalized within a business process to effectively create a scenario where knowledge is focused on enabling a process, and the process in turn generates outcomes. The application of knowledge management to the business processes of organizations is rare and underreported. Few studies have explored this paradigm, although research has tended to reinforce the notion that competitive advantage sits within the internal aspects of the firm. Given this notion, it is surprising that knowledge management research and practice have not focused sufficiently on the business process, which is the basic unit of organizational decision implementation. This research serves to generate understanding of applying KM to business processes using a large multinational in Sub-Saharan Africa.
Keywords: knowledge management, business process, strategy, multinational
Procedia PDF Downloads 693
15338 Distribution of Maximum Loss of Fractional Brownian Motion with Drift
Authors: Ceren Vardar Acar, Mine Caglar
Abstract:
In finance, the price of a volatile asset can be modeled using fractional Brownian motion (fBm) with Hurst parameter H > 1/2. The Black-Scholes model for the values of returns of an asset using fBm is given as Y_t = Y_0 exp((r + μ)t + σB_t^H), 0 ≤ t ≤ T, where Y_0 is the initial value, r is the constant interest rate, μ is the constant drift, and σ is the constant diffusion coefficient of the fBm, which is denoted by B_t^H, where t ≥ 0. The Black-Scholes model can be constructed with Markov processes such as Brownian motion. The advantage of modeling with fBm over Markov processes is its capability of exposing the dependence between returns: real-life data for a volatile asset display the long-range dependence property, and for this reason fBm is a more realistic model compared to Markov processes. Investors are interested in any kind of information on risk in order to manage it or hedge it, and the maximum possible loss is one way to measure the highest possible risk; therefore, it is an important variable for investors. In our study, we give theoretical bounds on the distribution of the maximum possible loss of fBm. We provide both asymptotic and strong estimates for the tail probability of the maximum loss of standard fBm and of fBm with drift and diffusion coefficients. From the investment point of view, these results explain how large values of possible loss behave and give their bounds.
Keywords: maximum drawdown, maximum loss, fractional Brownian motion, large deviation, Gaussian process
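For intuition, the maximum loss can also be estimated empirically from simulated fBm paths; the sketch below draws an exact path via Cholesky factorisation of the fBm covariance and computes the realised maximum loss (drawdown). This is a numerical illustration, not the paper's analytical bounds.

```python
import numpy as np

def fbm_path(n, T=1.0, H=0.7, seed=0):
    """Exact fBm sample path via Cholesky factorisation of the covariance
    cov(B_s, B_t) = 0.5 (s^{2H} + t^{2H} - |t - s|^{2H})."""
    rng = np.random.default_rng(seed)
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
    L = np.linalg.cholesky(cov)
    return np.concatenate([[0.0], L @ rng.standard_normal(n)])

def max_loss(path):
    """Maximum loss (drawdown): largest drop from a running maximum."""
    return np.max(np.maximum.accumulate(path) - path)

B = fbm_path(500, H=0.7)
print(max_loss(B))
```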
Procedia PDF Downloads 483
15337 Early Diagnosis of Myocardial Ischemia Based on Support Vector Machine and Gaussian Mixture Model by Using Features of ECG Recordings
Authors: Merve Begum Terzi, Orhan Arikan, Adnan Abaci, Mustafa Candemir
Abstract:
Acute myocardial infarction is a major cause of death in the world; therefore, its fast and reliable diagnosis is a major clinical need. ECG is the most important diagnostic methodology used to make decisions about the management of cardiovascular diseases. In patients with acute myocardial ischemia, temporary chest pain together with changes in the ST segment and T wave of the ECG occurs shortly before the start of myocardial infarction. In this study, a technique that detects changes in the ST/T sections of the ECG is developed for the early diagnosis of acute myocardial ischemia. For this purpose, a database of real ECG recordings was constituted, containing records from 75 patients presenting symptoms of chest pain who underwent elective percutaneous coronary intervention (PCI). 12-lead ECGs of the patients were recorded before and during the PCI procedure. Two ECG epochs are analyzed for each patient: the pre-inflation ECG, acquired before any catheter insertion, and the occlusion ECG, acquired during balloon inflation. Using the pre-inflation and occlusion recordings, ECG features that are critical in the detection of acute myocardial ischemia are identified, and the most discriminative features are extracted. A classification technique based on the support vector machine (SVM) approach, operating with linear and radial basis function (RBF) kernels, is developed to detect ischemic events using ST-T derived joint features from the non-ischemic and ischemic states of the patients. The dataset is randomly divided into training and testing sets, and the training set is used to optimize the SVM hyperparameters by grid search and 10-fold cross-validation. SVMs are designed specifically for each patient by tuning the kernel parameters in order to obtain the optimal classification performance. Applying the developed classification technique to real ECG recordings shows that the proposed technique provides highly reliable detection of the anomalies in ECG signals. Furthermore, to develop a detection technique that can be used in the absence of an ECG recording obtained during the healthy stage, the detection of acute myocardial ischemia based only on ECG recordings obtained during ischemia is also investigated. For this purpose, a Gaussian mixture model (GMM) is used to represent the joint pdf of the most discriminating ECG features of myocardial ischemia, and a Neyman–Pearson type of approach is developed to detect outliers that would correspond to acute myocardial ischemia. The Neyman–Pearson decision strategy is applied by computing the average log-likelihood values of ECG segments and comparing them with a range of different threshold values. For different discrimination threshold values and numbers of ECG segments, the probability of detection and probability of false alarm are computed, and the corresponding ROC curves are obtained. The results indicate that an increasing number of ECG segments provides higher performance for the GMM-based classification. Moreover, a comparison between the performances of the SVM- and GMM-based classification showed that the SVM provides higher classification performance over the ECG recordings of a considerable number of patients.
Keywords: ECG classification, Gaussian mixture model, Neyman–Pearson approach, support vector machine
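The two modelling stages can be sketched with scikit-learn on synthetic stand-in data; the feature values, hyperparameter grid, component count, and false-alarm quantile below are illustrative assumptions, not the study's settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.svm import SVC
from sklearn.mixture import GaussianMixture

# synthetic stand-in for ST/T-derived features (0 = pre-inflation, 1 = occlusion)
X, y = make_classification(n_samples=400, n_features=10, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# SVM tuned by grid search with 10-fold cross-validation, as described above
grid = GridSearchCV(SVC(), {"kernel": ["linear", "rbf"],
                            "C": [0.1, 1, 10, 100],
                            "gamma": ["scale", 0.01, 0.1]}, cv=10)
grid.fit(X_tr, y_tr)
print("SVM accuracy:", grid.score(X_te, y_te))

# GMM of the ischemic-state features; Neyman-Pearson style thresholding of
# log-likelihoods flags segments unlike the modelled state as outliers
gmm = GaussianMixture(n_components=2, random_state=0).fit(X_tr[y_tr == 1])
threshold = np.quantile(gmm.score_samples(X_tr[y_tr == 1]), 0.05)
outliers = gmm.score_samples(X_te) < threshold
print("flagged segments:", outliers.sum())
```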
Procedia PDF Downloads 162
15336 Design of an Automated Deep Learning Recurrent Neural Networks System Integrated with IoT for Anomaly Detection in Residential Electric Vehicle Charging in Smart Cities
Authors: Wanchalerm Patanacharoenwong, Panaya Sudta, Prachya Bumrungkun
Abstract:
The paper focuses on the development of a system that combines Internet of Things (IoT) technologies and deep learning algorithms for anomaly detection in residential electric vehicle (EV) charging in smart cities. With the increasing number of EVs, ensuring efficient and reliable charging systems has become crucial. The aim of this research is to develop an integrated IoT and deep learning system for detecting anomalies in residential EV charging and enhancing EV load profiling and event detection in smart cities. IoT devices equipped with infrared cameras collect thermal images and household EV charging profiles from the database of the Thailand utility and transmit this data to a cloud database for comprehensive analysis. The methodology includes advanced deep learning techniques, namely recurrent neural network (RNN) and long short-term memory (LSTM) algorithms, together with feature-based Gaussian mixture models for EV load profiling and event detection; this combination aids in identifying unique power consumption patterns among EV owners. The research findings demonstrate the effectiveness of the developed system in detecting anomalies and critical profiles in EV charging behavior. The system provides timely alarms to users regarding potential issues, categorizes the severity of detected problems based on a health index for each charging device, and outperforms existing models in event detection accuracy. This research contributes to the field by showcasing the potential of integrating IoT and deep learning techniques in managing residential EV charging in smart cities, ensuring operational safety and efficiency while promoting sustainable energy management.
Keywords: cloud computing framework, recurrent neural networks, long short-term memory, IoT, EV charging, smart grids
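A minimal sketch of the LSTM part of such a pipeline, on synthetic charging-window data, is shown below; the window length, feature set, labels, and network sizes are illustrative assumptions, not the system described above.

```python
import numpy as np
import tensorflow as tf

# synthetic stand-in: 1000 windows of 96 time steps (e.g. 15-min readings
# over one day) of charging power; labels flag anomalous windows
rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, (1000, 96, 1)).astype("float32")
y = (X.max(axis=(1, 2)) > 2.8).astype("float32")   # toy anomaly rule

model = tf.keras.Sequential([
    tf.keras.Input(shape=(96, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # anomaly probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=64, validation_split=0.2, verbose=0)
print(model.evaluate(X, y, verbose=0))
```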
Procedia PDF Downloads 68
15335 Optimization of Electrocoagulation Process Using Duelist Algorithm
Authors: Totok R. Biyanto, Arif T. Mardianto, M. Farid R. R., Luthfi Machmudi, Kandi Mulakasti
Abstract:
The main objective of this research is to optimize the electrocoagulation process design as a post-treatment for biologically treated vinasse effluent. A first-principles model with three independent variables that affect the energy consumption of the electrocoagulation process, i.e., current density, electrode distance, and treatment time, is chosen for optimization. The process conditions were a vinasse pH of 6.5, electrical conductivity of 28.5 mS/cm, and temperature of 52 °C. Aluminum was chosen as the electrode material for the electrocoagulation process. The Duelist algorithm was used as the optimization technique due to its capability to reach a global optimum. The optimization results show that the optimal process can be reached at a current density of 2.9976 A/m2, an electrode distance of 1.5 cm, and an electrolysis time of 119 min. The optimized energy consumption during the process is 34.02 Wh.
Keywords: optimization, vinasse effluent, electrocoagulation, energy consumption
Procedia PDF Downloads 470
15334 Block Mining: Block Chain Enabled Process Mining Database
Authors: James Newman
Abstract:
Process mining is an emerging technology that serializes enterprise data into time series. It has been used by many companies and has been the subject of a variety of research papers. However, the majority of current efforts have looked at how best to create process mining data from standard relational databases. This paper is a first pass at outlining a database custom-built for a minimum viable product of process mining. We present Block Miner, a blockchain protocol to store process mining data across a distributed network, and we demonstrate the feasibility of storing process mining data on the blockchain. We present a proof of concept and show how the intersection of these two technologies helps to solve a variety of issues, including but not limited to ransomware attacks, tax documentation, and conflict resolution.
Keywords: blockchain, process mining, memory optimization, protocol
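The storage idea can be illustrated in a few lines of Python: event records (case id, activity, timestamp) grouped into blocks that are chained by their hashes. This is a generic hash-chain sketch with made-up field names, not the Block Miner protocol itself.

```python
import hashlib
import json
import time

def make_block(events, prev_hash):
    """Minimal block holding process-mining events, chained by SHA-256
    over the block's serialized contents and the previous block's hash."""
    block = {"timestamp": time.time(), "events": events, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

genesis = make_block([], "0" * 64)
b1 = make_block([{"case": "PO-1", "activity": "approve", "ts": "2023-01-05"}],
                genesis["hash"])
print(b1["prev_hash"] == genesis["hash"])  # chain intact
```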
Procedia PDF Downloads 104
15333 Data-Mining Approach to Analyzing Industrial Process Information for Real-Time Monitoring
Authors: Seung-Lock Seo
Abstract:
This work presents a data-mining empirical monitoring scheme for industrial processes with partially unbalanced data. Measurement data from good operations are relatively easy to gather, but for unusual special events or faults it is generally difficult to collect process information, and some noisy data from industrial processes are almost impossible to analyze. Noise filtering techniques can then be used to enhance process monitoring performance on a real-time basis. In addition, pre-processing of raw process data helps to eliminate unwanted variation in industrial process data. In this work, the performance of various monitoring schemes was tested and demonstrated on discrete batch process data. The results showed that the monitoring performance was improved significantly in terms of the monitoring success rate for the given process faults.
Keywords: data mining, process data, monitoring, safety, industrial processes
Procedia PDF Downloads 401
15332 Bias in the Estimation of Covariance Matrices and Optimality Criteria
Authors: Juan M. Rodriguez-Diaz
Abstract:
The precision of parameter estimators in the Gaussian linear model is traditionally accounted for by the variance-covariance matrix of the asymptotic distribution. However, this measure can underestimate the true variance, especially for small samples. Traditionally, optimal design theory pays attention to this variance through its relationship with the model's information matrix. For this reason it seems convenient, at least in some cases, to adapt the optimality criteria in order to get the best designs for the actual variance structure; otherwise the loss in efficiency of the designs obtained with the traditional approach may be very important.
Keywords: correlated observations, information matrix, optimality criteria, variance-covariance matrix
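For reference, the classical relationship that ties estimator precision to the information matrix in this setting is, for a known covariance structure V and a full-rank design matrix X:

```latex
% Gaussian linear model with correlated observations:
%   y = X\beta + \varepsilon, \qquad \operatorname{Cov}(\varepsilon) = \sigma^{2} V
\hat{\beta} = (X^{\top} V^{-1} X)^{-1} X^{\top} V^{-1} y,
\qquad
\operatorname{Cov}(\hat{\beta}) = \sigma^{2} (X^{\top} V^{-1} X)^{-1} = \sigma^{2} M^{-1}.
```

D-optimality maximizes det M and A-optimality minimizes tr M^{-1}; when the parameters of V must themselves be estimated, the true covariance of the estimator exceeds sigma^2 M^{-1} in small samples, which is plausibly the underestimation the abstract refers to.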
Procedia PDF Downloads 444
15331 Multivariate Statistical Process Monitoring of Base Metal Flotation Plant Using Dissimilarity Scale-Based Singular Spectrum Analysis
Authors: Syamala Krishnannair
Abstract:
A multivariate statistical process monitoring methodology using dissimilarity scale-based singular spectrum analysis (SSA) is proposed for the detection and diagnosis of process faults in a base metal flotation plant. Process faults are detected based on the multi-level decomposition of process signals by SSA, using the dissimilarity structure of the process data, and the subsequent monitoring of the multiscale signals using a unified monitoring index that combines T² with SPE. Contribution plots are used to identify the root causes of the process faults. The overall results indicated that the proposed technique outperformed conventional multivariate techniques in the detection and diagnosis of process faults in the flotation plant.
Keywords: fault detection, fault diagnosis, process monitoring, dissimilarity scale
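The two statistics the unified index combines are standard in multivariate monitoring; the sketch below computes Hotelling's T² and SPE (Q) from a plain PCA model of normal operating data, as a simplified stand-in for the paper's SSA-based multiscale decomposition.

```python
import numpy as np

def monitoring_stats(X_train, X_test, n_pc=3):
    """Hotelling T^2 and SPE statistics from a PCA model fitted on
    normal operating data."""
    mu, sd = X_train.mean(0), X_train.std(0)
    Z = (X_train - mu) / sd
    _, s, Vt = np.linalg.svd(Z, full_matrices=False)
    P = Vt[:n_pc].T                         # retained loadings
    lam = (s[:n_pc] ** 2) / (len(Z) - 1)    # retained eigenvalues
    Zt = (X_test - mu) / sd
    T = Zt @ P                              # scores
    t2 = np.sum(T ** 2 / lam, axis=1)       # distance inside the model plane
    spe = np.sum((Zt - T @ P.T) ** 2, axis=1)  # residual off the model plane
    return t2, spe

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
X_ok = rng.standard_normal((500, 6)) @ A            # correlated "normal" data
X_fault = rng.standard_normal((50, 6)) @ A + 3.0    # shifted fault data
t2, spe = monitoring_stats(X_ok, X_fault)
print(t2.mean(), spe.mean())
```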
Procedia PDF Downloads 209
15330 Bridging the Gap between Different Interfaces for Business Process Modeling
Authors: Katalina Grigorova, Kaloyan Mironov
Abstract:
The paper focuses on the benefits of business process modeling. Although this discipline has been developing for many years, there is still a need to create new opportunities to meet ever-increasing users' needs. Because one of these needs is related to the conversion of business process models from one standard to another, the authors have developed a converter between the BPMN and EPC standards, using workflow patterns as an intermediate tool. Nowadays there are many systems for business process modeling, and the variety of output formats is almost as great as that of the systems themselves. This diversity additionally hampers the conversion of the models. The presented study is aimed at discussing problems due to differences in the output formats of various modeling environments.
Keywords: business process modeling, business process modeling standards, workflow patterns, converting models
Procedia PDF Downloads 588
15329 Laboratory Investigation of Alkali-Surfactant-Alternate Gas (ASAG) Injection – a Novel EOR Process for a Light Oil Sandstone Reservoir
Authors: Vidit Mohan, Ashwin P. Ramesh, Anirudh Toshniwal
Abstract:
Alkali-Surfactant-Alternate-Gas (ASAG) injection, a novel EOR process, has the potential to improve displacement efficiency over Surfactant-Alternate-Gas (SAG) injection by addressing the problem of surfactant adsorption by clay minerals in the rock matrix. A detailed laboratory investigation of the ASAG injection process was carried out with encouraging results. To enhance recovery over the WAG injection process, SAG injection was first investigated at laboratory scale, but it yielded only marginal incremental displacement efficiency over the WAG process. On investigation, it was found that clay minerals in the rock matrix adsorbed the surfactants and were detrimental to the SAG process. Hence, ASAG injection was conceptualized, using alkali as a clay stabilizer. The ASAG injection experiment with a surfactant concentration of 5000 ppm and an alkali concentration of 0.5 weight% yields an incremental displacement efficiency of 5.42% over the WAG process. ASAG injection is a new process and has the potential to enhance the efficiency of the WAG/SAG injection process.
Keywords: alkali surfactant alternate gas (ASAG), surfactant alternate gas (SAG), laboratory investigation, EOR process
Procedia PDF Downloads 479
15328 BER Estimate of WCDMA Systems with MATLAB Simulation Model
Authors: Suyeb Ahmed Khan, Mahmood Mian
Abstract:
Simulation plays an important role during all phases of the design and engineering of communication systems, from the early stages of conceptual design through the various stages of implementation, testing, and fielding of the system. In the present paper, a simulation model has been constructed for the WCDMA system in order to evaluate its performance. This model describes multi-user effects and the calculation of the BER (bit error rate) in 3G mobile systems using Simulink in MATLAB 7.1. A Gaussian approximation is used to model the multi-user effect on system performance. BER has been analyzed by comparing the transmitted data with the received data.
Keywords: WCDMA, simulations, BER, MATLAB
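The Gaussian approximation referred to above treats multiple-access interference as additional Gaussian noise; a common textbook form for asynchronous DS-CDMA with random spreading sequences is sketched below. The processing gain of 256 is a WCDMA-typical value chosen for illustration, and the paper's Simulink model may differ in detail.

```python
from math import erfc, sqrt

def q(x):
    """Gaussian tail function Q(x)."""
    return 0.5 * erfc(x / sqrt(2.0))

def cdma_ber_gaussian_approx(k_users, proc_gain, ebn0_db):
    """Standard Gaussian approximation for asynchronous DS-CDMA BER:
    SINR combines thermal noise with (K-1)/(3N) multi-user interference."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    sinr = 1.0 / ((k_users - 1) / (3.0 * proc_gain) + 1.0 / (2.0 * ebn0))
    return q(sqrt(sinr))

for k in (1, 10, 20, 40):
    print(k, "users:", cdma_ber_gaussian_approx(k, proc_gain=256, ebn0_db=8))
```

With a single user the expression reduces to the familiar single-user BPSK result Q(sqrt(2 Eb/N0)), which is a quick sanity check on the formula.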
Procedia PDF Downloads 593
15327 An Evaluation on the Methodology of Manufacturing High Performance Organophilic Clay at the Most Efficient and Cost Effective Process
Authors: Siti Nur Izati Azmi, Zatil Afifah Omar, Kathi Swaran, Navin Kumar
Abstract:
Organophilic clays, also known as organoclays, are used as viscosifiers in oil-based drilling fluids. Most often, organophilic clays are produced from modified sodium- and calcium-based bentonite. Many studies and data show that organophilic clay based on hectorite provides the best yield and good fluid-loss properties in an oil-based drilling fluid, at a higher cost. In terms of the manufacturing process, the two common methods of manufacturing organophilic clays are a wet process and a dry process. The wet process is known to produce a better-performing product at a higher cost, while the dry process shortens the production time. Hence, the purpose of this study is to evaluate various formulations of an organophilic clay and their performance versus cost, as well as to determine the most efficient and cost-effective method of manufacturing organophilic clays.
Keywords: organophilic clay, viscosifier, wet process, dry process
Procedia PDF Downloads 228
15326 A Fourier Method for Risk Quantification and Allocation of Credit Portfolios
Authors: Xiaoyu Shen, Fang Fang, Chujun Qiu
Abstract:
Herewith we present a Fourier method for credit risk quantification and allocation in the factor-copula model framework. The key insight is that, compared to directly computing the cumulative distribution function of the portfolio loss via Monte Carlo simulation, it is in fact more efficient to calculate the transform of the distribution function in the Fourier domain instead; inverting back to the real domain can be done in just one step and semi-analytically, thanks to the popular COS method (with some adjustments). We also show that the Euler risk allocation problem can be solved in the same way, since it can be transformed into the problem of evaluating a conditional cumulative distribution function. Once the conditional or unconditional cumulative distribution function is known, one can easily calculate various risk metrics. The proposed method not only fills a niche in the literature, to the best of our knowledge, of accurate numerical methods for risk allocation, but may also serve as a much faster alternative to Monte Carlo simulation for risk quantification in general. It can cope with various factor-copula model choices, which we demonstrate via examples of a two-factor Gaussian copula and a two-factor Gaussian-t hybrid copula. The fast error convergence is proved mathematically and then verified by numerical experiments, in which Value-at-Risk, Expected Shortfall, and conditional Expected Shortfall are taken as examples of commonly used risk metrics. The calculation speed and accuracy are tested to be significantly superior to Monte Carlo simulation for real-sized portfolios. The computational complexity is, by design, primarily driven by the number of factors instead of the number of obligors, as in the case of Monte Carlo simulation. The limitation of this method lies in the 'curse of dimension' that is intrinsic to multi-dimensional numerical integration, which, however, can be relaxed with the help of dimension-reduction techniques and/or parallel computing, as we will demonstrate in a separate paper. The potential applications of this method have a wide range: from credit derivatives pricing to economic capital calculation of the banking book, default risk charge and incremental risk charge computation of the trading book, and even other risk types than credit risk.
Keywords: credit portfolio, risk allocation, factor copula model, the COS method, Fourier method
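To illustrate the inversion step, the sketch below recovers a density from its characteristic function with the plain COS expansion and checks it against the standard normal. The truncation range and term count are illustrative, and the paper's adjustments for portfolio loss distributions are not reproduced here.

```python
import numpy as np

def cos_density(phi, x, a, b, N=128):
    """Plain COS method: approximate a density on [a, b] from its
    characteristic function phi via a truncated cosine expansion."""
    k = np.arange(N)
    u = k * np.pi / (b - a)
    F = (2.0 / (b - a)) * np.real(phi(u) * np.exp(-1j * u * a))
    F[0] *= 0.5                                   # first term gets half weight
    return np.sum(F[:, None] * np.cos(np.outer(u, x - a)), axis=0)

phi_normal = lambda u: np.exp(-0.5 * u ** 2)      # standard normal char. func.
x = np.linspace(-3.0, 3.0, 7)
approx = cos_density(phi_normal, x, a=-10.0, b=10.0)
exact = np.exp(-0.5 * x ** 2) / np.sqrt(2.0 * np.pi)
print(np.max(np.abs(approx - exact)))             # near machine precision
```

The exponential error convergence visible here for smooth densities is the property the abstract's "fast error convergence" claim rests on.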
Procedia PDF Downloads 168
15325 Quantification of Dispersion Effects in Arterial Spin Labelling Perfusion MRI
Authors: Rutej R. Mehta, Michael A. Chappell
Abstract:
Introduction: Arterial spin labelling (ASL) is an increasingly popular perfusion MRI technique, in which arterial blood water is magnetically labelled in the neck before flowing into the brain, providing a non-invasive measure of cerebral blood flow (CBF). The accuracy of ASL CBF measurements, however, is hampered by dispersion effects: the distortion of the ASL labelled bolus during its transit through the vasculature. In spite of this, the current recommended implementation of ASL – the white paper (Alsop et al., MRM, 73.1 (2015): 102-116) – does not account for dispersion, which leads to the introduction of errors in CBF. Given that the transport time from the labelling region to the tissue – the arterial transit time (ATT) – depends on the region of the brain and the condition of the patient, it is likely that these errors will also vary with the ATT. In this study, various dispersion models are assessed in comparison with the white paper (WP) formula for CBF quantification, enabling the errors introduced by the WP to be quantified. Additionally, this study examines the relationship between the errors associated with the WP and the ATT, and how this is influenced by dispersion. Methods: Data were simulated using the standard model for pseudo-continuous ASL, along with various dispersion models, and then quantified using the formula in the WP. The ATT was varied from 0.5 s to 1.3 s, and the errors associated with noise artefacts were computed in order to define the concept of significant error. The instantaneous slope of the error was also computed as an indicator of the sensitivity of the error to fluctuations in ATT. Finally, a regression analysis was performed to obtain the mean error against ATT. Results: An error of 20.9% was found to be comparable to that introduced by typical measurement noise. The WP formula was shown to introduce errors exceeding 20.9% for ATTs beyond 1.25 s even when dispersion effects were ignored. Using a Gaussian dispersion model, a mean error of 16% was introduced by using the WP, and a dispersion threshold of σ=0.6 was determined, beyond which the error was found to increase considerably with ATT. The mean error ranged from 44.5% to 73.5% when other physiologically plausible dispersion models were implemented, and the instantaneous slope varied from 35 to 75 as dispersion levels were varied. Conclusion: It has been shown that the WP quantification formula holds only within an ATT window of 0.5 to 1.25 s, and that this window gets narrower as dispersion occurs. Provided that the dispersion levels fall below the threshold evaluated in this study, the WP can measure CBF with reasonable accuracy if dispersion is correctly modelled by the Gaussian model; substantial errors were, however, observed with other common dispersion models at dispersion levels similar to those reported in the literature.
Keywords: arterial spin labelling, dispersion, MRI, perfusion
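Dispersion of the labelled bolus is commonly modelled by convolving the idealised boxcar input with a kernel; the sketch below uses a Gaussian kernel of width sigma, matching the Gaussian dispersion model above. The time grid, label duration, and parameter values are illustrative, not the study's simulation settings.

```python
import numpy as np

def dispersed_aif(t, att=1.0, tau=1.8, sigma=0.3):
    """Boxcar ASL bolus arriving at `att` with label duration `tau`,
    dispersed by convolution with a Gaussian kernel of width `sigma`
    (one of several physiologically plausible dispersion kernels)."""
    dt = t[1] - t[0]
    bolus = ((t >= att) & (t < att + tau)).astype(float)
    kernel_t = np.arange(-4 * sigma, 4 * sigma + dt, dt)
    kernel = np.exp(-0.5 * (kernel_t / sigma) ** 2)
    kernel /= kernel.sum()                      # preserve total label
    return np.convolve(bolus, kernel, mode="same")

t = np.arange(0.0, 6.0, 0.05)
for sigma in (0.05, 0.3, 0.6):                  # 0.6 is the threshold above
    aif = dispersed_aif(t, sigma=sigma)
    print(sigma, aif.max())                     # peak flattens as sigma grows
```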
Procedia PDF Downloads 372
15324 Metal-Oxide-Semiconductor-Only Process Corner Monitoring Circuit
Authors: Davit Mirzoyan, Ararat Khachatryan
Abstract:
A process corner monitoring circuit (PCMC) is presented in this work. The circuit generates a signal whose logical value depends on the process corner only. The signal can be used in both digital and analog circuits for the testing and compensation of process variations (PV). The presented circuit uses only metal-oxide-semiconductor (MOS) transistors, which allows increased detection accuracy and decreased power consumption and area. Due to its simplicity, the presented circuit can be easily modified to monitor parametric variations of only n-type or p-type MOS (NMOS and PMOS, respectively) transistors, resistors, as well as their combinations. Post-layout simulation results prove the correct functionality of the proposed circuit, i.e., the ability to monitor the process corner (equivalently, die-to-die variations) even in the presence of within-die variations.
Keywords: detection, monitoring, process corner, process variation
Procedia PDF Downloads 525
15323 Nonlinear Modelling of Sloshing Waves and Solitary Waves in Shallow Basins
Authors: Mohammad R. Jalali, Mohammad M. Jalali
Abstract:
The earliest theories of sloshing waves and solitary waves, based on potential theory idealisations and irrotational flow, have been extended to be applicable to more realistic domains. To this end, computational fluid dynamics (CFD) methods are widely used. Three-dimensional CFD methods, such as Navier-Stokes solvers with volume-of-fluid treatment of the free surface and Navier-Stokes solvers with mappings of the free surface, inherently impose a high computational expense; therefore, considerable effort has gone into developing depth-averaged approaches. Examples of such approaches include the Green–Naghdi (GN) equations. In a Cartesian system, the GN velocity profile depends on the horizontal directions, the x-direction and the y-direction. The effect of the vertical direction (z-direction) is also taken into consideration by applying a weighting function in the approximation. GN theory considers the effect of vertical acceleration and the consequent non-hydrostatic pressure; moreover, in GN theory, the flow is rotational. The present study illustrates the application of the GN equations to the propagation of sloshing waves and solitary waves. For this purpose, a GN equations solver is verified on the benchmark tests of Gaussian hump sloshing and solitary wave propagation in shallow basins. Analysis of the free-surface sloshing of even harmonic components of an initial Gaussian hump demonstrates that the GN model gives predictions in satisfactory agreement with the linear analytical solutions. Discrepancies between the GN predictions and the linear analytical solutions arise from wave nonlinearities, which stem from the wave amplitude itself and from wave-wave interactions. Numerically predicted solitary wave propagation indicates that the GN model produces simulations in good agreement with the analytical solution of the linearised wave theory. Comparison between the GN model's numerical prediction and the result from perturbation analysis confirms that the nonlinear interaction between a solitary wave and a solid wall is satisfactorily modelled. Moreover, solitary wave propagation at an angle to the x-axis and the interaction of solitary waves with each other are simulated to validate the developed model.
Keywords: Green–Naghdi equations, nonlinearity, numerical prediction, sloshing waves, solitary waves
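For context, the classical solitary-wave solution of the GN (Serre) equations over a flat bed of still-water depth h, commonly used in validation tests of this kind, is quoted below from the standard literature rather than from the paper itself:

```latex
\eta(x,t) = a \,\mathrm{sech}^{2}\!\left[\sqrt{\frac{3a}{4h^{2}(h+a)}}\,(x - ct)\right],
\qquad
c = \sqrt{g\,(h + a)},
```

where a is the wave amplitude and c the wave celerity.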
Procedia PDF Downloads 287
15322 Comprehensive Assessment of Energy Efficiency within the Production Process
Authors: S. Kreitlein, N. Eder, J. Franke
Abstract:
The importance of energy efficiency within the production process increases steadily. Unfortunately, no tools for a comprehensive assessment of energy efficiency within the production process exist so far. The Institute for Factory Automation and Production Systems of the Friedrich-Alexander-University Erlangen-Nuremberg has therefore developed two methods with the goal of achieving transparency and a quantitative assessment of energy efficiency: the Energy Efficiency Value (EEV) and the Energetic Process Efficiency (EPE). This paper describes the basics and state of the art as well as the developed approaches.
Keywords: energy efficiency, energy efficiency value, energetic process efficiency, production
Procedia PDF Downloads 733