Search results for: batch test automation
3039 Note on the Necessity of the Patch Test
Authors: Rado Flajs, Miran Saje
Abstract:
We present a simple nonconforming approximation of the linear two-point boundary value problem that violates the patch test requirements. Nevertheless, the solutions obtained from this type of approximation converge to the exact solution.
Keywords: Generalized patch test, Irons' patch test, nonconforming finite element, convergence.
3038 Facial Emotion Recognition with Convolutional Neural Network Based Architecture
Authors: Koray U. Erbas
Abstract:
Neural networks are appealing for many applications since they are able to learn complex non-linear relationships between input and output data. As the number of neurons and layers in a neural network increases, it becomes possible to represent more complex relationships with automatically extracted features. Nowadays, Deep Neural Networks (DNNs) are widely used in computer vision problems such as classification, object detection, segmentation and image editing. In this work, the facial emotion recognition task is performed by a proposed Convolutional Neural Network (CNN)-based DNN architecture using the FER2013 dataset. Moreover, the effects of different hyperparameters (activation function, kernel size, initializer, batch size and network size) are investigated, and ablation study results for the pooling layer, dropout and batch normalization are presented.
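As a rough sketch of a CNN-based architecture of the kind the abstract evaluates, here is a minimal PyTorch example; the depth, channel counts, kernel size and dropout rate are illustrative assumptions, not the authors' architecture. FER2013 images are 48x48 grayscale with 7 emotion classes.

```python
# Minimal CNN sketch for 48x48 grayscale FER2013 images (7 emotion classes).
# Layer sizes, kernel sizes and dropout rate are illustrative assumptions.
import torch
import torch.nn as nn

class FerCnn(nn.Module):
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),   # 48x48 -> 48x48
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 24x24
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 12x12
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5),
            nn.Linear(64 * 12 * 12, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

logits = FerCnn()(torch.randn(8, 1, 48, 48))  # batch of 8 dummy images
```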
Keywords: Convolutional Neural Network, Deep Learning, Deep Learning Based FER, Facial Emotion Recognition.
3037 Regulatory Effects of Carbon Sources on Tabtoxin Production (A β-lactam Phytotoxin of Pseudomonas syringae pv. tabaci)
Authors: N. Messaadia, D. Harzallah
Abstract:
The effects of diverse carbon substrates on tabtoxin production by an isolated pathogenic Pseudomonas syringae pv. tabaci, the causal agent of wildfire disease of tobacco, were investigated and are discussed in relation to the growth of the bacterium. The isolated organism was grown in batch culture on Woolley's medium (28°C, 200 rpm, for 5 days). Growth was measured by the optical density (OD) at 620 nm, and tabtoxin production was quantified by an Escherichia coli (K-12) bioassay technique. Both growth and tabtoxin production were influenced by the substrates used (sugars, amino acids, organic acids), each as a sole carbon source or as a supplement to the same amino acids. The most significant quantities of tabtoxin were obtained in the presence of certain amino acids used as the sole carbon source and/or as a supplement.
Keywords: Amino acid supplement, carbon substrates, batch culture, Pseudomonas syringae pv. tabaci.
3036 Real-time Performance Study of EPA Periodic Data Transmission
Authors: Liu Ning, Zhong Chongquan, Teng Hongfei
Abstract:
EPA (Ethernet for Plant Automation) resolves the nondeterminism of standard Ethernet and accomplishes real-time communication by means of a micro-segment topology and a deterministic scheduling mechanism. This paper studies the real-time performance of EPA periodic data transmission from theoretical and experimental perspectives. By analyzing information transmission characteristics and the EPA deterministic scheduling mechanism, five indicators that can be used to specify the real-time performance of EPA periodic data transmission are presented and investigated: delivery time, time synchronization accuracy, data-sending time offset accuracy, utilization percentage of the configured timeslice, and non-RTE bandwidth. On this basis, the test principles and test methods for these indicators are studied, and some formulas for the real-time performance of an EPA system are derived. Furthermore, an experimental platform is developed to test the indicators of EPA periodic data transmission in a micro-segment. Based on the analysis and the experiments, methods to improve the real-time performance of EPA periodic data transmission are proposed, including optimizing the network structure, studying a self-adaptive timeslice adjustment method, and providing data-sending time offset accuracy for configuration.
Keywords: EPA system, Industrial Ethernet, Periodic data, Real-time performance.
3035 Neural Network-Based Control Strategies Applied to a Fed-Batch Crystallization Process
Authors: P. Georgieva, S. Feyo de Azevedo
Abstract:
This paper is focused on issues of process modeling and two model-based control strategies for a fed-batch sugar crystallization process, applying the concept of artificial neural networks (ANNs). The control objective is to force the operation to follow an optimal supersaturation trajectory. This is achieved by manipulating the feed flow rate of sugar liquor/syrup, considered as the control input. The control task is rather challenging due to the strong nonlinearity of the process dynamics and variations in the crystallization kinetics. Two control alternatives are considered: model predictive control (MPC) and feedback linearizing control (FLC). Adequate ANN process models are first built as part of the controller structures. The MPC algorithm outperforms the FLC approach with respect to satisfactory reference tracking and smooth control action. However, MPC is computationally much more involved, since it requires an online numerical optimization, while for the FLC an analytical control solution was determined.
Keywords: Artificial neural networks, nonlinear model control, process identification, crystallization process.
3034 Deep Reinforcement Learning Approach for Trading Automation in the Stock Market
Authors: Taylan Kabbani, Ekrem Duman
Abstract:
Deep Reinforcement Learning (DRL) algorithms can scale to previously intractable problems. The automation of profit generation in the stock market is possible using DRL, by combining the financial asset price "prediction" step and the portfolio "allocation" step in one unified process, to produce fully autonomous systems capable of interacting with their environment to make optimal decisions through trial and error. This work presents a DRL model to generate profitable trades in the stock market, effectively overcoming the limitations of supervised learning approaches. We formulate the trading problem as a Partially Observed Markov Decision Process (POMDP) model, considering the constraints imposed by the stock market, such as liquidity and transaction costs. We then solve the formulated POMDP problem using the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, achieving a 2.68 Sharpe ratio on the test dataset. From the point of view of stock market forecasting and intelligent decision-making mechanisms, this paper demonstrates the superiority of DRL over other types of machine learning in financial markets and shows its credibility and advantages for strategic decision-making.
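The Sharpe ratio quoted above is a standard risk-adjusted performance measure computed from the policy's return series. A minimal sketch, assuming daily returns, 252 trading days per year and a zero risk-free rate (all illustrative assumptions):

```python
import numpy as np

def annualized_sharpe(returns: np.ndarray, periods_per_year: int = 252) -> float:
    """Annualized Sharpe ratio of a return series, assuming a zero risk-free rate."""
    excess = returns  # subtract the per-period risk-free rate here if non-zero
    return float(np.sqrt(periods_per_year) * excess.mean() / excess.std(ddof=1))

daily_returns = np.random.normal(0.001, 0.01, size=252)  # dummy data
print(annualized_sharpe(daily_returns))
```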
Keywords: Autonomous agent, deep reinforcement learning, MDP, sentiment analysis, stock market, technical indicators, twin delayed deep deterministic policy gradient.
3033 Process Optimization and Automation of Information Technology Services in a Heterogenic Digital Environment
Authors: Tasneem Halawani, Yamen Khateeb
Abstract:
With customers' ever-increasing expectations for fast service provisioning for all their business needs, information technology (IT) organizations, as business partners, have to cope with this demanding environment and deliver their services in the most effective and efficient way. The purpose of this paper is to identify optimization and automation opportunities for the top requested IT services in a heterogenic digital environment and a widely spread customer base. In collaboration with systems, processes, and subject matter experts (SMEs), the processes in scope were approached by analyzing four years of related historical data, identifying and surveying stakeholders, modeling the as-is processes, and studying systems integration/automation capabilities. This effort resulted in identifying several pain areas, including standardization, unnecessary customer and IT involvement, manual steps, systems integration, and performance measurement. These pain areas were addressed by standardizing the top five requested IT services, eliminating/automating 43 steps, and utilizing a single platform for end-to-end process execution. In conclusion, the optimization of IT service request processes in a heterogenic digital environment and a widely spread customer base is challenging, yet achievable without compromising service quality or customers' added value. Further studies can focus on measuring the value of the eliminated/automated process steps to quantify the enhancement impact. Moreover, a similar approach can be utilized to optimize other IT service requests, with a focus on business criticality.
Keywords: Automation, customer value, heterogenic, integration, IT services, optimization, processes.
3032 A Robust Al-Hawalees Gaming Automation using Minimax and BPNN Decision
Authors: Ahmad Sharieh, R Bremananth
Abstract:
Artificial-intelligence-based gaming is an interesting topic in state-of-the-art technology. This paper presents an automation of a traditional Omani game called Al-Hawalees. Its related issues are resolved and implemented using an artificial intelligence approach. An AI approach called the minimax procedure is incorporated to make diverse moves in online gaming. As the number of moves increases, the time complexity increases proportionally. In order to tackle the time and space complexities, we have employed a back-propagation neural network (BPNN), trained offline, to make decisions about the resources required to fulfill the automation of the game. We have utilized Levenberg-Marquardt training in order to obtain rapid responses during gaming. A set of optimal moves is determined by the online back-propagation training fashioned with alpha-beta pruning. The results and analyses reveal that the proposed scheme can be easily incorporated in an online scenario with one player against the system.
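A minimal, game-agnostic sketch of the minimax procedure with alpha-beta pruning mentioned above (not the authors' Al-Hawalees implementation; the game-specific move generator and evaluation function are supplied by the caller):

```python
import math

def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
    """Generic minimax with alpha-beta pruning over an abstract game tree.

    `children(state)` yields successor states; `evaluate(state)` scores leaves.
    """
    succ = list(children(state))
    if depth == 0 or not succ:
        return evaluate(state)
    if maximizing:
        value = -math.inf
        for child in succ:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: the opponent will avoid this branch
        return value
    value = math.inf
    for child in succ:
        value = min(value, alphabeta(child, depth - 1, alpha, beta, True, children, evaluate))
        beta = min(beta, value)
        if alpha >= beta:
            break  # alpha cutoff
    return value

# Toy two-ply tree: from 'A' the maximizer chooses the better leaf, 'B' or 'C'.
tree = {'A': ['B', 'C'], 'B': [], 'C': []}
scores = {'B': 3, 'C': 5}
best = alphabeta('A', 2, -math.inf, math.inf, True,
                 lambda s: tree[s], lambda s: scores.get(s, 0))
print(best)  # 5
```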
Keywords: Artificial neural network, back propagation, gaming, Levenberg-Marquardt, minimax procedure.
3031 The Different Ways to Describe Regular Languages by Using Finite Automata and the Changing Algorithm Implementation
Authors: Abdulmajid Mukhtar Afat
Abstract:
This paper aims at introducing finite automata theory and the different ways to describe regular languages, and at creating a program that implements the subset construction algorithm to convert a nondeterministic finite automaton (NFA) to a deterministic finite automaton (DFA). The program is written in the C++ programming language. It reads FA 5-tuples from a text file and then classifies the automaton as either a DFA or an NFA. For a DFA, the program reads the string w and decides whether it is acceptable or not. If accepted, the program saves the tracking path and points it out. On the other hand, when the automaton is an NFA, the program converts the automaton to a DFA so that it is easy to track and can decide whether w exists in the regular language or not.
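A minimal sketch of the subset construction the abstract describes, shown here in Python for brevity rather than the authors' C++; the transition-table representation is an assumption:

```python
from collections import deque

def subset_construction(nfa_delta, start, accepts, alphabet):
    """Convert an NFA (without epsilon moves) to a DFA.

    nfa_delta: dict mapping (state, symbol) -> set of next states.
    Returns (dfa_delta, dfa_start, dfa_accepts), where DFA states are
    frozensets of NFA states.
    """
    dfa_start = frozenset([start])
    dfa_delta, dfa_accepts = {}, set()
    queue, seen = deque([dfa_start]), {dfa_start}
    while queue:
        current = queue.popleft()
        if current & accepts:
            dfa_accepts.add(current)  # contains an accepting NFA state
        for symbol in alphabet:
            nxt = frozenset(s for q in current
                            for s in nfa_delta.get((q, symbol), ()))
            dfa_delta[(current, symbol)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return dfa_delta, dfa_start, dfa_accepts

# Toy NFA over {0,1} accepting strings that end in "01".
delta = {('q0', '0'): {'q0', 'q1'}, ('q0', '1'): {'q0'}, ('q1', '1'): {'q2'}}
dfa, s0, acc = subset_construction(delta, 'q0', {'q2'}, '01')
```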
Keywords: Finite automata, subset construction, DFA, NFA.
3030 A Modified Run Length Coding Technique for Test Data Compression Based on Multi-Level Selective Huffman Coding
Authors: C. Kalamani, K. Paramasivam
Abstract:
Test data compression is an efficient method for reducing the test application cost. The problem of reducing test data has been addressed by researchers in three different ways: test data compression, Built-In Self-Test (BIST) and test set compaction. The latter two methods are capable of enhancing fault coverage at the cost of hardware overhead. The drawback of the conventional methods is that they can reduce test storage and test power, but when the test data contain redundant lengths of runs, no additional compression is applied. This paper presents a modified Run Length Coding (RLC) technique combined with a Multi-Level Selective Huffman Coding (MLSHC) technique to reduce test data volume, test pattern delivery time and power dissipation in scan test applications; where a redundant length of runs is encountered, the preceding run symbol is replaced with a tiny codeword. Experimental results show that the presented method not only improves test data compression but also reduces the overall test data volume compared to recent schemes. Experiments on the six largest ISCAS'89 benchmarks show that our method outperforms most known techniques.
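As a minimal sketch of the run-length stage underlying the technique above (plain run-length extraction of a scan bit-stream; the selective Huffman stage and the authors' modification are omitted):

```python
def run_lengths(bits: str):
    """Split a test bit-stream into (symbol, run_length) pairs."""
    runs, i = [], 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1
        runs.append((bits[i], j - i))
        i = j
    return runs

# Runs of identical bits dominate scan test data, which is what makes
# run-length coding (plus a Huffman code over the run symbols) effective.
print(run_lengths("0000001111000000001"))  # [('0', 6), ('1', 4), ('0', 8), ('1', 1)]
```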
Keywords: Modified run length coding, multilevel selective Huffman coding, built-in self-test, modified selective Huffman coding, automatic test equipment.
3029 Intelligent Face-Up CMP System Integrated with On-Line Optical Measurements
Authors: Sheng-Ming Huang, Nan-Chyuan Tsai, Chih-Che Lin, Chun-Chi Lin
Abstract:
An innovative design for an intelligent Chemical Mechanical Polishing (CMP) system is proposed and verified by experiments in this report. On-line measurement and real-time feedback are integrated to eliminate the shortcomings of traditional approaches, e.g., the batch-to-batch discrepancy in required polishing time, the overconsumption of chemical slurry, and non-uniformity across the wafer. The major advantage of the proposed method is that the finish of the local surface roughness is consistent, whether the inner-ring region or the outer-ring region is concerned. Secondly, it is able to eliminate the edge effect: conventionally, the interfacially induced stress near the wafer edge is generally much higher than that near the wafer center. Finally, by using the proposed intelligent chemical mechanical polishing strategy, the cost of the entire machining cycle can be much reduced while the quality of the finished goods is upgraded.
Keywords: Chemical Mechanical Polishing, Active Magnetic Actuator, On-Line Measurement.
3028 Test Data Compression Using a Hybrid of Bitmask Dictionary and 2n Pattern Runlength Coding Methods
Authors: C. Kalamani, K. Paramasivam
Abstract:
In VLSI, testing plays an important role. The major problems in testing are test data volume and test power. An important way to reduce test data volume and test time is test data compression. The proposed technique combines the bitmask dictionary and 2n pattern run-length coding methods and provides a substantial improvement in compression efficiency without introducing any additional decompression penalty. The method has been implemented using MATLAB and an HDL language to reduce test data volume and memory requirements. It was applied to various benchmark test sets, and the results were compared with those of other existing methods. The proposed technique can achieve a compression ratio of up to 86%.
Keywords: Bitmask dictionary, 2n pattern run length code, system-on-chip (SOC), test data compression.
3027 Equilibrium, Kinetics and Thermodynamic Studies for Adsorption of Hg (II) on Palm Shell Powder
Authors: Shilpi Kushwaha, Suparna Sodaye, P. Padmaja
Abstract:
Palm shell obtained from the coastal part of southern India was studied as an adsorbent for the removal of Hg (II) ions. Batch adsorption experiments were carried out as a function of pH, concentration of Hg (II) ions, time, temperature and adsorbent dose. Maximum removal was seen in the range pH 4.0-7.0. The palm shell powder used as the adsorbent was characterized for its surface area, SEM, PXRD, FTIR, ion exchange capacity, moisture content, bulk density, soluble content in water and acid, and pH. The experimental results were analyzed using the Langmuir I, II, III and IV and Freundlich adsorption isotherms. The batch sorption kinetics were studied for the first-order reversible reaction, the pseudo-first-order and pseudo-second-order reactions, and intra-particle diffusion. The biomass was successfully used for the removal of Hg (II) from synthetic and industrial effluents, and the technique appears industrially applicable and viable.
Keywords: Biosorbent, mercury removal, Borassus flabellifer, isotherms, kinetics, palm shell.
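A minimal sketch of fitting the Langmuir and Freundlich isotherms named in the abstract above to batch equilibrium data (the data points and initial guesses are illustrative assumptions, not the study's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    """Langmuir isotherm: qe = qmax*KL*Ce / (1 + KL*Ce)."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

def freundlich(Ce, KF, n):
    """Freundlich isotherm: qe = KF * Ce**(1/n)."""
    return KF * Ce ** (1.0 / n)

# Illustrative equilibrium data: Ce (mg/L) vs. qe (mg/g).
Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
qe = np.array([8.1, 14.3, 19.8, 24.2, 26.9])

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[30.0, 0.1])
(KF, n), _ = curve_fit(freundlich, Ce, qe, p0=[5.0, 2.0])
print(f"Langmuir: qmax={qmax:.1f} mg/g, KL={KL:.3f} L/mg; "
      f"Freundlich: KF={KF:.2f}, n={n:.2f}")
```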
3026 Effect of COD Loading Rate on Hydrogen Production from Alcohol Wastewater
Authors: Patcharee Intanoo, Jittipan Chavadej, Sumaeth Chavadej
Abstract:
The objective of this study was to investigate hydrogen production from alcohol wastewater by an anaerobic sequencing batch reactor (ASBR) under thermophilic operation. The ASBR unit used in this study had a liquid holding volume of 4 L and was operated at 6 cycles per day. The seed sludge, taken from an upflow anaerobic sludge blanket unit treating the same wastewater, was boiled at 95 °C for 15 min before being fed to the ASBR unit. The ASBR system was operated at different COD loading rates at a thermophilic temperature (55 °C) and a controlled pH of 5.5. When the system was operated under optimum conditions (providing maximum hydrogen production performance) at a feed COD of 60,000 mg/l and a COD loading rate of 68 kg/m3·d, the produced gas contained 43% H2. Moreover, the hydrogen yield and the specific hydrogen production rate (SHPR) were 130 ml H2/g COD removed and 2100 ml H2/l·d, respectively.
Keywords: Biohydrogen, Alcohol wastewater, Anaerobic sequencing batch reactor (ASBR), Thermophilic operation.
3025 Increasing The Speed of Convergence of an Artificial Neural Network based ARMA Coefficients Determination Technique
Authors: Abiodun M. Aibinu, Momoh J. E. Salami, Amir A. Shafie, Athaur Rahman Najeeb
Abstract:
In this paper, novel techniques for increasing the accuracy and speed of convergence of a Feed-forward Back-propagation Artificial Neural Network (FFBPNN) with a polynomial activation function, as reported in the literature, are presented. These techniques were subsequently used to determine the coefficients of Autoregressive Moving Average (ARMA) and Autoregressive (AR) systems. The results obtained by introducing sequential and batch methods of weight initialization, a batch method of weight and coefficient update, and adaptive momentum and learning rate techniques are more accurate and show a significant reduction in convergence time when compared to the traditional back-propagation algorithm, thereby making the FFBPNN an appropriate technique for online ARMA coefficient determination.
Keywords: Adaptive learning rate, adaptive momentum, autoregressive, modeling, neural network.
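A minimal sketch of a weight update with momentum and an adaptive learning rate in the spirit of the techniques above; the "bold driver" adaptation rule used here is a generic stand-in, an assumption rather than the authors' scheme:

```python
import numpy as np

def train_adaptive(grad_fn, w, lr=0.1, momentum=0.9, steps=100):
    """Gradient descent with momentum and a 'bold driver' adaptive learning
    rate: grow lr while the loss falls, halve it (and damp the velocity)
    when the loss rises. grad_fn(w) must return (loss, gradient)."""
    velocity = np.zeros_like(w)
    prev_loss, _ = grad_fn(w)
    for _ in range(steps):
        loss, grad = grad_fn(w)
        if loss <= prev_loss:
            lr = min(lr * 1.05, 1.0)
        else:
            lr *= 0.5
            velocity *= 0.5
        velocity = momentum * velocity - lr * grad
        w = w + velocity
        prev_loss = loss
    return w

# Toy quadratic problem: minimize ||w - target||^2.
target = np.array([1.0, -2.0])
final = train_adaptive(lambda w: (float(np.sum((w - target) ** 2)),
                                  2.0 * (w - target)),
                       np.zeros(2))
print(final)  # converges towards [1, -2]
```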
3024 Assertion-Driven Test Repair Based on Priority Criteria
Authors: Ruilian Zhao, Shukai Zhang, Yan Wang, Weiwei Wang
Abstract:
Repairing broken test cases is an expensive and challenging task in evolving software systems. Although an automated repair technique with intent preservation has been proposed, it does not take into account the association between test repairs and assertions, leading to a large number of irrelevant candidates and decreasing the repair capability. This paper proposes an assertion-driven test repair approach. Furthermore, an intent-oriented priority criterion is introduced to guide repair candidate generation, making the repairs closer to the intent of the test. In more detail, repair targets are determined through post-dominance relations between assertions and the methods that directly cause compilation errors. Then, test repairs are generated from the targets in a bottom-up way, guided by the intent-oriented priority criteria. Finally, the generated repair candidates are prioritized to match the original test intent. The approach is implemented and evaluated on a benchmark of 4 open-source programs and 91 broken test cases. The results show that the approach can fix 89% (81/91) of the broken test cases, which is more effective than the existing intent-preserving test repair approach, and that our intent-oriented priority criteria work well.
Keywords: Test repair, test intent, software test, test case evolution.
3023 Robust Batch Process Scheduling in Pharmaceutical Industries: A Case Study
Authors: Tommaso Adamo, Gianpaolo Ghiani, Antonio D. Grieco, Emanuela Guerriero
Abstract:
Batch production plants give rise to a wide range of scheduling problems. In pharmaceutical industries, a batch process is usually described by a recipe, consisting of an ordering of tasks to produce the desired product. In this research work we focus on pharmaceutical production processes requiring the culture of a microorganism population (i.e., bacteria, yeasts or antibiotics). Several sources of uncertainty may influence the yield of the culture processes, including (i) low performance and quality of the cultured microorganism population or (ii) microbial contamination. For these reasons, robustness is a valuable property for the considered application context. In particular, a robust schedule will not collapse immediately when a culture of microorganisms has to be thrown away due to microbial contamination. Indeed, a robust schedule should change locally and in small proportions, and the overall performance measure (i.e., makespan, lateness) should change little, if at all. In this research work we formulated a constraint programming optimization (COP) model for the robust planning of antibiotics production. We developed a discrete-time model with a multi-criteria objective, ordering the different criteria and performing a lexicographic optimization. A feasible solution of the proposed COP model is a schedule of a given set of tasks onto the available resources. The schedule has to satisfy task precedence constraints, resource capacity constraints and time constraints. In particular, the time constraints model task due dates and resource availability time windows. To improve schedule robustness, we modeled the concept of (a, b) super-solutions, where (a, b) are input parameters of the COP model. An (a, b) super-solution is one in which, if a variables (i.e., the completion times of a culture tasks) lose their values (i.e., the cultures are contaminated), the solution can be repaired by assigning new values to these variables (i.e., the completion times of backup culture tasks) and to at most b other variables (i.e., delaying the completion of at most b other tasks). The efficiency and applicability of the proposed model is demonstrated by solving instances taken from a real-life pharmaceutical company. Computational results showed that the determined super-solutions are near-optimal.
Keywords: Constraint programming, super-solutions, robust scheduling, batch process, pharmaceutical industries.
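A minimal constraint-programming scheduling sketch in the spirit of the model described in the abstract above, using Google OR-Tools CP-SAT (the solver choice, task durations and single shared resource are assumptions; the (a, b) super-solution repair machinery is omitted):

```python
from ortools.sat.python import cp_model

durations = [3, 2, 4]          # illustrative culture/processing task durations
horizon = sum(durations)
model = cp_model.CpModel()

starts, ends, intervals = [], [], []
for i, d in enumerate(durations):
    s = model.NewIntVar(0, horizon, f"start_{i}")
    e = model.NewIntVar(0, horizon, f"end_{i}")
    intervals.append(model.NewIntervalVar(s, d, e, f"task_{i}"))
    starts.append(s)
    ends.append(e)

model.Add(starts[1] >= ends[0])   # precedence: task 1 after task 0 (recipe order)
model.AddNoOverlap(intervals)     # one shared resource, e.g. a single fermenter

makespan = model.NewIntVar(0, horizon, "makespan")
model.AddMaxEquality(makespan, ends)
model.Minimize(makespan)

solver = cp_model.CpSolver()
if solver.Solve(model) == cp_model.OPTIMAL:
    print([solver.Value(s) for s in starts], solver.Value(makespan))
```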
3022 Improving the Effectiveness of Software Testing through Test Case Reduction
Authors: R. P. Mahapatra, Jitendra Singh
Abstract:
This paper proposes a new technique for improving the efficiency of software testing, based on a conventional attempt to reduce the test cases that have to be tested for any given software. The approach utilizes the advantage of regression testing, where fewer test cases lessen the time consumption of the testing as a whole. The technique also offers a means to perform test case generation automatically. Compared to one of the techniques in the literature, where the tester has no option but to perform the test case generation manually, the proposed technique provides a better option. As for test case reduction, the technique uses simple algebraic conditions to assign fixed values to variables (maximum, minimum and constant variables). By doing this, the variable values are limited within a definite range, resulting in a smaller number of possible test cases to process. The technique can also be used in program loops and arrays.
Keywords: Software testing, test case generation, test case reduction.
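A minimal sketch of the reduction idea described in the abstract above: each variable is fixed to representative values (minimum, maximum and a constant), so the suite enumerates a small cross-product instead of the full input space (the variable names, ranges and constant are illustrative assumptions):

```python
from itertools import product

# Illustrative input variables with their valid ranges.
ranges = {"x": (0, 100), "y": (-10, 10)}
CONSTANT = 1  # a representative fixed value used alongside min and max

def reduced_test_cases(ranges):
    """Each variable contributes only {min, max, constant}, so the suite has
    3**n cases instead of the full cross-product of all valid values."""
    candidates = [(lo, hi, CONSTANT) for lo, hi in ranges.values()]
    return [dict(zip(ranges, combo)) for combo in product(*candidates)]

suite = reduced_test_cases(ranges)
print(len(suite), suite[0])  # 9 cases, e.g. {'x': 0, 'y': -10}
```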
3021 A Promising Approach to Supporting Knowledge-Intensive Business Processes: Business Case Management
Authors: Zeljko Panian
Abstract:
Through the course of this paper we define Business Case Management and its characteristics, and highlight its link to knowledge workers. Business Case Management combines knowledge and process effectively, supports the ad hoc and unpredictable nature of cases, and coordinates a range of other technologies to appropriately support knowledge-intensive processes. We emphasize the growing importance of knowledge workers and the currently poor support for knowledge work automation. We also discuss the challenges in supporting this kind of knowledge work and propose a novel approach to overcoming these challenges.
Keywords: Knowledge management, knowledge workers, business process management, business case management, automation.
3020 Reducing Test Vectors Count Using Fault Based Optimization Schemes in VLSI Testing
Authors: Vinod Kumar Khera, R. K. Sharma, A. K. Gupta
Abstract:
Power dissipation increases exponentially during test mode as compared to the normal operation of a circuit. In extreme cases, test power is more than twice the power consumed during the normal operation mode. The test vector generation scheme is a key component in deciding the power hungriness of a circuit during testing. The test vector count and the consequent leakage current are functions of the test vector generation scheme. A fault-based test vector count optimization is presented in this work. It helps in reducing the test vector count and the leakage current. In the presented scheme, test vectors are reduced by extracting essential child vectors. The scheme has been tested experimentally using stuck-at fault models, and the results confirm the reduction in test vector count.
Keywords: Low power VLSI testing, independent fault, essential faults, test vector reduction.
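A minimal sketch of fault-based test vector reduction cast as a greedy covering problem (a generic formulation for illustration; the authors' essential-child-vector extraction is not reproduced here):

```python
def reduce_vectors(coverage):
    """Greedy selection of test vectors until all detectable faults are covered.

    coverage: dict mapping vector -> set of stuck-at faults it detects.
    Returns a (not necessarily minimal) reduced vector list.
    """
    remaining = set().union(*coverage.values())
    selected = []
    while remaining:
        best = max(coverage, key=lambda v: len(coverage[v] & remaining))
        if not coverage[best] & remaining:
            break  # remaining faults are undetectable by any vector
        selected.append(best)
        remaining -= coverage[best]
    return selected

# Illustrative fault dictionary: 4 vectors, 5 stuck-at faults.
cov = {"1010": {"f1", "f2"}, "0110": {"f2", "f3", "f4"},
       "1111": {"f5"}, "0001": {"f1"}}
print(reduce_vectors(cov))  # e.g. ['0110', '1010', '1111']
```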
3019 Adsorption of Phenol, 3-Nitrophenol and Dyes from Aqueous Solutions onto an Activated Carbon Column under Semi-Batch and Continuous Operation
Authors: I. Moraitopoulos, Z. Ioannou, J. Simitzis
Abstract:
The present study examines the adsorption of phenol, 3-nitrophenol and dyes (methylene blue, alizarin yellow) from aqueous solutions onto a commercial activated carbon. Two different operations, semi-batch and continuous with reflux, were applied. The commercial activated carbon exhibits high adsorption capacities for phenol, 3-nitrophenol and the dyes (methylene blue and alizarin yellow) from their aqueous solutions. The adsorption of all adsorbates after 1 h is higher under the continuous operation with reflux than under the semi-batch operation. The adsorption of phenol is higher than that of 3-nitrophenol for both operations. Similarly, the adsorption of alizarin yellow is higher than that of methylene blue for both operations. The regenerated commercial activated carbon regains its adsorption ability owing to the removal of the adsorbate from its pores during regeneration.
Keywords: Activated carbon, adsorption, phenols, dyes.
3018 Application of Feed Forward Neural Networks in Modeling and Control of a Fed-Batch Crystallization Process
Authors: Petia Georgieva, Sebastião Feyo de Azevedo
Abstract:
This paper is focused on issues of nonlinear dynamic process modeling and model-based predictive control of a fed-batch sugar crystallization process, applying the concept of artificial neural networks as computational tools. The control objective is to force the operation to follow an optimal supersaturation trajectory. This is achieved by manipulating the feed flow rate of sugar liquor/syrup, considered as the control input. A feed forward neural network (FFNN) model of the process is first built as part of the controller structure to predict the process response over a specified (prediction) horizon. The predictions are supplied to an optimization procedure that determines the values of the control action over a specified (control) horizon, minimizing a predefined performance index. The control task is rather challenging due to the strong nonlinearity of the process dynamics and variations in the crystallization kinetics. However, the simulation results demonstrated smooth behavior of the control actions and satisfactory reference tracking.
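A minimal sketch of the receding-horizon loop the abstract describes, with a toy one-step model standing in for the trained FFNN (the model, horizon, bounds and cost weights are illustrative assumptions, not the authors' values):

```python
import numpy as np
from scipy.optimize import minimize

def ffnn_model(state, u):
    """Placeholder one-step predictor standing in for the trained FFNN."""
    return 0.9 * state + 0.1 * u

def mpc_step(state, reference, horizon=5):
    """Choose the feed-rate sequence minimizing tracking error over the horizon,
    then apply only its first element (receding horizon)."""
    def cost(u_seq):
        x, c = state, 0.0
        for u in u_seq:
            x = ffnn_model(x, u)
            c += (x - reference) ** 2 + 0.01 * u ** 2  # tracking + control effort
        return c
    res = minimize(cost, np.zeros(horizon), bounds=[(0.0, 2.0)] * horizon)
    return res.x[0]  # first control move only

x, ref = 0.0, 1.0
for _ in range(20):
    u = mpc_step(x, ref)
    x = ffnn_model(x, u)  # in reality, the plant; here the model itself
print(round(x, 3))        # approaches the supersaturation reference
```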
Keywords: Feed forward neural network, process modelling, model predictive control, crystallization process.
3017 Performance Evaluation of an Aboveground LNG Storage Tank Cover using Nondestructive and Destructive Tests
Authors: Sungnam Hong, Sun-Kyu Park, Jieun Jeong, Jinwoong Choi
Abstract:
In this study, a new procedure for inspecting damage to LNG storage tanks is proposed, using structural diagnostic techniques: nondestructive inspection techniques such as macrography, the hammer sounding test, the Schmidt hammer test and the ultrasonic pulse velocity test, and destructive inspection techniques such as the compressive strength test, the chloride penetration test and the carbonation test. From the analysis of all the test results, it was concluded that the LNG storage tank cover was in good condition. The results were also compared with the Korean concrete standard specifications and design values. In addition, the remaining life of the LNG storage tank was estimated by using existing models. Based on the results, an LNG storage tank cover performance evaluation procedure is suggested.
Keywords: Destructive test, LNG storage tank, Nondestructive test, Performance evaluation procedure, Remaining life.
3016 Using ε Value in Describe Regular Languages by Using Finite Automata, Operation on Languages and the Changing Algorithm Implementation
Authors: Abdulmajid Mukhtar Afat
Abstract:
This paper aims at introducing nondeterministic finite automata with ε value, which are used to perform some operations on languages. A program is created to implement the algorithm that converts a nondeterministic finite automaton with ε value (ε-NFA) to a deterministic finite automaton (DFA). The program is written in the C++ programming language. The program inputs are FA 5-tuples from a text file, which it classifies as either a DFA/NFA or an ε-NFA. For a DFA, the program gets the string w and decides whether it is accepted or rejected. The tracking path for an accepted string is saved by the program. In the case of an NFA or ε-NFA automaton, the program converts the automaton to a DFA to enable tracking and to decide whether the string w exists in the regular language or not.
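A minimal sketch of the ε-closure (Eclose) computation on which the ε-NFA-to-DFA conversion relies, complementing the subset construction sketched earlier; the transition representation is an assumption:

```python
def eclose(state, eps_delta):
    """ε-closure: all states reachable from `state` via ε-moves alone.

    eps_delta: dict mapping state -> set of states reachable by one ε-move.
    """
    closure, stack = {state}, [state]
    while stack:
        for nxt in eps_delta.get(stack.pop(), ()):
            if nxt not in closure:
                closure.add(nxt)
                stack.append(nxt)
    return closure

# Toy ε-NFA fragment: q0 --ε--> q1 --ε--> q2
print(eclose('q0', {'q0': {'q1'}, 'q1': {'q2'}}))  # {'q0', 'q1', 'q2'}
```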
Keywords: Finite automata, DFA, NFA, ε-NFA, Eclose, operations on languages.
3015 A Study of Fatty Acid Production in the Batch Reactor via the Carbohydrate Fermentation by C. butyricum
Authors: H. Azan T., R. W. Lovitt, Nur K. T., N. Azwa. M. B.
Abstract:
Carbohydrates can be used as substrates that are consumed by C. butyricum and converted into useful chemicals such as acetic and butyric acid. The influence of the concentration and type of carbohydrate on cell growth, carbohydrate consumption, productivity and carbon balance has been explored. A batch reactor was selected in this study to avoid contamination, owing to its simpler operating system. Glucose was chosen as the first type of carbohydrate to be tested. Six concentrations, from 0 to 28 g/L, were studied. Eventually, 15 g/L was shown to be the best glucose concentration in terms of growth rate (2.63 h-1) and carbon balance (99.76% recovery). A comparison of carbohydrate types was also conducted: 15 g/L of xylose (a monosaccharide) and of starch (a complex carbohydrate) were tested. In terms of growth rate and productivity, glucose was the best carbohydrate. The results of this study showed that glucose and xylose produced more than 80% acetic acid and less than 20% butyric acid, while 63.1% acetic acid and 36.9% butyric acid were produced from starch.
Keywords: C. butyricum, glucose, starch, xylose, carbohydrate.
3014 Adaptive Kernel Principal Analysis for Online Feature Extraction
Authors: Mingtao Ding, Zheng Tian, Haixia Xu
Abstract:
The batch nature of standard kernel principal component analysis (KPCA) methods limits them in numerous applications, especially for dynamic or large-scale data. In this paper, an efficient adaptive approach is presented for the online extraction of kernel principal components (KPC). The contribution of this paper may be divided into two parts. First, the kernel covariance matrix is correctly updated to adapt to the changing characteristics of the data. Second, the KPC are recursively formulated to overcome the batch nature of standard KPCA. This formulation is derived from the recursive eigen-decomposition of the kernel covariance matrix and indicates the KPC variation caused by the new data. The proposed method not only alleviates the sub-optimality of the KPCA method for non-stationary data, but also maintains a constant update speed and memory usage as the data size increases. Experiments on simulated data and real applications demonstrate that our approach yields improvements in terms of both computational speed and approximation accuracy.
Keywords: Adaptive method, kernel principal component analysis, online extraction, recursive algorithm.
3013 Kinetic Studies on Microbial Production of Tannase Using Redgram Husk
Authors: S. K. Mohan, T. Viruthagiri, C. Arunkumar
Abstract:
Tannase (tannin acyl hydrolase, E.C. 3.1.1.20) is an important hydrolytic enzyme with innumerable applications and industrial potential. In the present study, a kinetic model has been developed for the batch fermentation used for the production of tannase by A. flavus MTCC 3783. A maximum tannase activity of 143.30 U/ml was obtained at 96 hours under optimum operating conditions of 35°C, an initial pH of 5.5 and an inducer (tannic acid) concentration of 3% (w/v), for a fermentation period of 120 hours. The biomass concentration reached a maximum of 6.62 g/l at 96 hours, and there was no further increase in biomass concentration until the end of the fermentation. Various unstructured kinetic models were analyzed to simulate the experimental values of microbial growth, tannase activity and substrate concentration. The logistic model for microbial growth, the Luedeking-Piret model for tannase production and the substrate utilization kinetic model were capable of predicting the fermentation profile with high coefficient of determination (R2) values of 0.980, 0.942 and 0.983, respectively. The results indicated that the unstructured models were able to describe the fermentation kinetics effectively.
Keywords: Aspergillus flavus, batch fermentation, kinetic model, tannase, unstructured models.
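A minimal sketch of fitting the logistic growth model named in the abstract above, with the Luedeking-Piret law noted for the product stage (the data points and initial guesses are illustrative assumptions, not the study's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, X0, Xmax, mu):
    """Closed-form solution of the logistic growth law dX/dt = mu*X*(1 - X/Xmax)."""
    return X0 * Xmax * np.exp(mu * t) / (Xmax - X0 + X0 * np.exp(mu * t))

# Illustrative biomass data X (g/L) over time t (h), shaped like the abstract's
# profile (plateau near 6.6 g/L at 96 h); not the authors' measurements.
t = np.array([0.0, 24.0, 48.0, 72.0, 96.0, 120.0])
X = np.array([0.30, 1.10, 3.20, 5.60, 6.60, 6.62])

(X0, Xmax, mu), _ = curve_fit(logistic, t, X, p0=[0.3, 7.0, 0.05])
print(f"mu ~= {mu:.3f} 1/h, Xmax ~= {Xmax:.2f} g/L")

# Product formation would then follow the Luedeking-Piret law
#   dP/dt = alpha*dX/dt + beta*X
# with alpha (growth-associated) and beta (non-growth-associated) fitted
# against the measured tannase activity.
```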
3012 Performance Analysis of a Discrete-time GeoX/G/1 Queue with Single Working Vacation
Authors: Shan Gao, Zaiming Liu
Abstract:
This paper treats a discrete-time batch arrival queue with a single working vacation. The main purpose of this paper is to present a performance analysis of this system by using the supplementary variable technique. For this purpose, we first analyze the Markov chain underlying the queueing system and obtain its ergodicity condition. Next, we present the stationary distributions of the system length as well as some performance measures at random epochs by using the supplementary variable method. Thirdly, still based on the supplementary variable method, we give the probability generating function (PGF) of the number of customers at the beginning of a busy period and a stochastic decomposition formula for the PGF of the stationary system length at departure epochs. Additionally, we investigate the relation between our discrete-time system and its continuous-time counterpart. Finally, some numerical examples show the influence of the parameters on some crucial performance characteristics of the system.
Keywords: Discrete-time queue, batch arrival, working vacation, supplementary variable technique, stochastic decomposition.
3011 A Study on the Accelerated Life Cycle Test Method of the Motor for Home Appliances by Using Acceleration Factor
Authors: Youn-Sung Kim, Mi-Sung Kim, Jae-Kun Lee
Abstract:
This paper deals with an accelerated life cycle test method for motors in home appliances that demand high reliability. The life cycle of parts in home appliances should also be 10 years, because the life cycle of home appliances such as washing machines, refrigerators and TVs is at least 10 years. In the case of the washing machine, the life cycle test method for the motor has advanced to a 3000-cycle test (1 cycle = 2 hours). However, the 3000-cycle test incurs losses in time and cost. The objectives of this study are to reduce the life cycle test time and the number of test samples, which can be realized by using an acceleration factor for the test time and a reduction factor for the number of samples.
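A minimal sketch of how an acceleration factor shortens a life test. The Arrhenius temperature-stress model used here is a common choice but an assumption, as the abstract does not specify its stress model, and the activation energy and temperatures are illustrative:

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_af(ea_ev, t_use_c, t_stress_c):
    """Arrhenius acceleration factor AF = exp(Ea/k * (1/T_use - 1/T_stress))."""
    t_use, t_stress = t_use_c + 273.15, t_stress_c + 273.15
    return math.exp(ea_ev / K_BOLTZMANN_EV * (1.0 / t_use - 1.0 / t_stress))

af = arrhenius_af(ea_ev=0.7, t_use_c=40.0, t_stress_c=85.0)
required_cycles = 3000 / af  # accelerated cycles equivalent to the 3000-cycle test
print(f"AF = {af:.1f}, accelerated test = {required_cycles:.0f} cycles")
```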
Keywords: Accelerated life cycle test, motor reliability test, motor for washing machine, BLDC motor.
3010 A Model for Test Case Selection in the Software-Development Life Cycle
Authors: Adtha Lawanna
Abstract:
Software maintenance is one of the essential processes of the Software-Development Life Cycle. The main philosophies of retaining software concern the correction of errors, the revision of code, the prevention of future errors, and improvements in performance and capacity. While adjustments are being made, the software has to be retested to increase the level of assurance that it will behave according to the requirements. In this situation, test cases must be selected to exercise both the revised modules and the whole software. Approaches to this problem include regression test selection techniques such as retest-all selection, random/ad hoc selection and safe regression test selection. In particular, the traditional techniques rely on a mapping between the test cases in a test suite and the lines of code they execute. However, the lines of code are not the only requirement that can affect the size of a test suite; the number of functions and of faulty versions matter as well. Therefore, a model for test case selection is developed to cover those three requirements with an integrated technique, which can produce a smaller set of test cases when compared with the traditional regression selection techniques.
Keywords: Software maintenance, regression test selection, test case.