Search results for: cost function
2980 Thermodynamic Modeling of the High Temperature Shift Converter Reactor Using Minimization of Gibbs Free Energy
Authors: H. Zare Aliabadi
Abstract:
The equilibrium chemical reactions taking place in the converter reactor of the Khorasan Petrochemical Ammonia plant were studied using the minimization of Gibbs free energy method. The Gibbs free energy function was minimized with the Davidon–Fletcher–Powell (DFP) optimization procedure, using penalty terms in a well-defined objective function. In the DFP procedure with the corresponding penalty terms, the Hessian matrices for the composition of constituents in the converter reactor can be excluded; this can be considered the main advantage of the DFP optimization procedure. The effect of temperature and pressure on the equilibrium composition of the constituents was also investigated. The results obtained in this work were compared with data collected from the converter reactor of the Khorasan Petrochemical Ammonia plant, and it was concluded that they are in good agreement with the industrial data. Notably, the algorithm developed in this work, in spite of its simplicity, has the advantage of short computation and convergence times.
Keywords: Gibbs free energy, converter reactors, Chemical equilibrium
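As an illustration of the penalty-term approach described in this abstract, the sketch below minimizes a penalized Gibbs free energy function with a quasi-Newton routine. The water-gas shift species, dimensionless chemical potentials and feed composition are hypothetical placeholders, and SciPy's BFGS update stands in for the DFP update; this is not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical dimensionless chemical potentials (mu/RT) for CO, H2O, CO2, H2
# at a fixed temperature; real values depend on T and P.
mu_rt = np.array([-45.0, -95.0, -150.0, -10.0])

# Element balance: each column is a species, each row an element (C, O, H).
A = np.array([[1, 0, 1, 0],    # carbon
              [1, 1, 2, 0],    # oxygen
              [0, 2, 0, 2]])   # hydrogen
b = A @ np.array([1.0, 1.0, 0.0, 0.0])   # feed: 1 mol CO + 1 mol H2O

def penalized_gibbs(n, rho=1e4):
    """Total Gibbs energy plus quadratic penalties on the element balances."""
    n = np.clip(n, 1e-10, None)               # keep mole numbers positive
    g = np.sum(n * (mu_rt + np.log(n / n.sum())))
    return g + rho * np.sum((A @ n - b) ** 2)

res = minimize(penalized_gibbs, x0=np.full(4, 0.5), method="BFGS")
print("equilibrium mole numbers:", np.round(res.x, 4))
```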
2979 Fixture Layout Optimization Using Element Strain Energy and Genetic Algorithm
Authors: Zeshan Ahmad, Matteo Zoppi, Rezia Molfino
Abstract:
The stiffness of the workpiece is very important for reducing errors in the manufacturing process. High workpiece stiffness can be achieved by optimal positioning of the fixture elements in the fixture. Previous research used the minimization of the sum of the nodal deflections normal to the surface as the objective function, and deflection in the other directions was neglected. The 3-2-1 fixturing principle is not valid for metal sheets due to their flexible nature. We propose a new fixture layout optimization method, N-3-2-1, for metal sheets that uses the strain energy of the finite elements. The method combines a genetic algorithm with finite element analysis, and its objective function is to minimize the sum of the strain energy of all elements. By using element strain energy, deformations in all directions are taken into account. Strain energy and stiffness are inversely proportional to each other, so the lower the strain energy, the higher the stiffness. Two different case studies are presented and solved for both objective functions, element strain energy and nodal deflection, and the results are compared to verify the proposed method.
Keywords: Fixture layout, optimization, fixturing element, genetic algorithm.
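A minimal sketch of the optimization loop outlined in the abstract above is given below, assuming a genetic algorithm whose chromosome encodes candidate clamp positions along a sheet; the finite element evaluation is replaced by a placeholder surrogate for total element strain energy, so the numbers are purely illustrative and this is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
N_CLAMPS, POP, GENS = 4, 30, 50          # clamps per layout, population size, generations

def total_strain_energy(positions):
    """Placeholder for an FE evaluation: returns a surrogate 'strain energy'.
    In the real method this would run an FE model of the sheet with clamps
    at `positions` (normalized 0..1 along the sheet) and sum element energies."""
    gaps = np.diff(np.sort(np.concatenate(([0.0], positions, [1.0]))))
    return np.sum(gaps ** 3)             # long unsupported spans are penalized

def evolve():
    pop = rng.random((POP, N_CLAMPS))                      # random initial layouts
    for _ in range(GENS):
        fitness = np.array([total_strain_energy(p) for p in pop])
        order = np.argsort(fitness)                        # lower energy = fitter
        parents = pop[order[:POP // 2]]
        children = parents.copy()                          # single-point crossover
        cut = rng.integers(1, N_CLAMPS, size=len(children))
        for i, c in enumerate(cut):
            children[i, c:] = parents[(i + 1) % len(parents), c:]
        children += rng.normal(0, 0.05, children.shape)    # Gaussian mutation
        pop = np.clip(np.vstack([parents, children]), 0.0, 1.0)
    best = min(pop, key=total_strain_energy)
    return np.sort(best), total_strain_energy(best)

print(evolve())
```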
2978 Assessing Efficiency Trends in the Indian Sugar Industry
Authors: S. P. Singh
Abstract:
This paper measures the technical and scale efficiencies of 40 Indian sugar companies for the period from 2004-05 to 2013-14. The efficiencies are estimated through input-oriented DEA models using one output variable, value of output (VOP), and five input variables: capital cost (CA), employee cost (EMP), raw material (RW), energy & fuel (E&F) and other manufacturing expenses (OME). The sugar companies are classified into integrated and non-integrated categories to determine which achieves a higher level of efficiency. Sources of inefficiency in the industry are identified by decomposing the overall technical efficiency (TE) into pure technical efficiency (PTE) and scale efficiency (SE). The paper also estimates input-reduction targets for relatively inefficient companies and suggests measures to improve their efficiency level. The findings reveal that the TE does not show any clear trend; rather, it fluctuates across years, largely due to the erratic and cyclical pattern of sugar production. Further, technical inefficiency in the industry seems to be driven more by managerial inefficiency than by scale inefficiency, which implies that TE can be improved through better conversion of inputs into output.
Keywords: Sugar industry, companies, technical efficiency, data envelopment analysis, targets.
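The input-oriented DEA models mentioned above reduce, for each company, to a small linear program. The sketch below solves the standard input-oriented CCR envelopment form with SciPy on hypothetical data for the five inputs and one output named in the abstract; the BCC model needed for the PTE/SE decomposition is not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: rows = companies (DMUs), columns = the five inputs
# (capital, employee cost, raw material, energy & fuel, other expenses).
X = np.array([[5.0, 3.0, 8.0, 2.0, 1.0],
              [6.0, 2.5, 7.0, 2.2, 1.5],
              [4.0, 4.0, 9.0, 1.8, 0.9],
              [7.0, 3.5, 6.5, 2.5, 1.2]])
y = np.array([100.0, 110.0, 90.0, 105.0])   # single output: value of output

def ccr_input_efficiency(o):
    """Input-oriented CCR efficiency of DMU `o` (envelopment form)."""
    n = len(y)
    c = np.r_[1.0, np.zeros(n)]                     # minimize theta
    A_in = np.hstack([-X[o][:, None], X.T])         # sum_j lam_j*x_ij <= theta*x_io
    b_in = np.zeros(X.shape[1])
    A_out = np.r_[0.0, -y][None, :]                 # sum_j lam_j*y_j >= y_o
    b_out = np.array([-y[o]])
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[b_in, b_out],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

for o in range(len(y)):
    print(f"company {o}: TE = {ccr_input_efficiency(o):.3f}")
```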
2977 Experiment and Simulation of Laser Effect on Thermal Field of Porcine Liver
Authors: K. Ting, K. T. Chen, Y. L. Su, C. J. Chang
Abstract:
In medical therapy, lasers have been widely used in cosmetic, tumor and other treatments. During laser irradiation, thermal damage may be caused by excessive laser exposure. Thus, the establishment of a complete thermal analysis model provides clinically helpful reference data for physicians. In this study, porcine liver, used in place of human tissue, was subjected to laser irradiation to obtain experimental data on the surface thermal field and the thermal damage region under different conditions of power, laser irradiation time, and distance between the laser and the porcine liver. In the experiments, the surface temperature distribution of the porcine liver was measured with an infrared thermal imager. In the simulation, the Pennes bio-heat transfer equation was solved with the software SYSWELD, which is applied in welding processes. A double-ellipsoid function is considered for the first time as the laser source term in the prediction of the surface thermal field and internal tissue damage. The simulation results are compared with the experimental data to validate the mathematical model established herein.
Keywords: Laser, infrared thermal imager, bio-heat transfer, double ellipsoid function.
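For reference, a standard textbook form of the Pennes bio-heat transfer equation cited above can be written as follows, with a generic volumetric laser source term; the double-ellipsoid parameterization applied in SYSWELD is not reproduced here.

```latex
\rho c \frac{\partial T}{\partial t}
  = \nabla \cdot \left( k \nabla T \right)
  + \rho_b c_b \omega_b \left( T_a - T \right)
  + Q_m + Q_{\mathrm{laser}}
```

Here rho, c and k are the tissue density, specific heat and thermal conductivity; rho_b, c_b and omega_b the blood density, specific heat and perfusion rate; T_a the arterial blood temperature; Q_m the metabolic heat source; and Q_laser the absorbed laser power density.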
2976 Electricity Power Planning: the Role of Wind Energy
Authors: Paula Ferreira, Madalena Araújo, M.E.J. O’Kelly
Abstract:
Combining energy efficiency with renewable energy sources constitutes a key strategy for a sustainable future. The wind power sector stands out as a fundamental element in achieving the European renewable energy objectives, and Portugal is no exception to the increasing use of wind energy for electricity generation. This work proposes an optimization model for long-range electricity power planning in a system similar to the Portuguese one, where the expected impacts of increasing installed wind power on the operating performance of thermal power plants are taken into account. The main results indicate that the increasing penetration of wind power in the electricity system will have significant effects on the operation of combined cycle gas power plants and on the theoretically expected cost reductions and environmental gains. This research demonstrates the need to address the impact that energy sources with variable output may have, not only on short-term operational planning, but especially on medium- to long-range planning activities, in order to meet the strategic objectives for the energy sector.
Keywords: Wind power, electricity planning model, cost, emissions.
2975 2D-Modeling with Lego Mindstorms
Authors: Miroslav Popelka, Jakub Nožička
Abstract:
This work is based on the possibility of using Lego Mindstorms robotics systems to reduce costs. Lego Mindstorms consists of a wide variety of hardware components necessary to simulate, program and test robotics systems in practice. The algorithm, which maps the space using the ultrasonic sensor, was programmed in the development environment supplied with the kit. Matlab was then used to render the values measured by the ultrasonic sensor. The algorithm created for this paper uses theoretical knowledge from the area of signal processing. The data processed by the algorithm are collected by the ultrasonic sensor, which scans the 2D space in front of it. The ultrasonic sensor is placed on a moving arm of the robot, which provides the horizontal movement of the sensor; vertical movement is provided by the wheel drive. The robot follows a map in order to position the measured data correctly. Based on these findings, Lego Mindstorms can be considered a low-cost and capable kit for real-time modelling.
Keywords: LEGO Mindstorms, ultrasonic sensor, Real-time modeling, 2D object, low-cost robotics systems, sensors, Matlab, EV3 Home Edition Software.
2974 An Improved Learning Algorithm based on the Conjugate Gradient Method for Back Propagation Neural Networks
Authors: N. M. Nawi, M. R. Ransing, R. S. Ransing
Abstract:
The conjugate gradient optimization algorithm, usually used for nonlinear least squares, is presented and combined with the modified back propagation algorithm, yielding a new fast training multilayer perceptron (MLP) algorithm (CGFR/AG). The approach presented in the paper consists of three steps: (1) modifying the standard back propagation algorithm by introducing a gain variation term in the activation function, (2) calculating the gradient descent on the error with respect to the weights and gain values, and (3) determining the new search direction by exploiting the information calculated by gradient descent in step (2) as well as the previous search direction. The proposed method improves the training efficiency of the back propagation algorithm by adaptively modifying the initial search direction. The performance of the proposed method is demonstrated by comparison with the conjugate gradient algorithm from the neural network toolbox on the chosen benchmark. The results show that the number of iterations required by the proposed method to converge is less than 20% of what is required by the standard conjugate gradient and neural network toolbox algorithms.
Keywords: Back-propagation, activation function, conjugate gradient, search direction, gain variation.
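A minimal sketch of the conjugate-gradient search-direction update (Fletcher-Reeves form) on a toy quadratic error surface is shown below; the gain-variation term and the full CGFR/AG training algorithm of the paper are not reproduced.

```python
import numpy as np

# Toy quadratic error surface E(w) = 0.5 * w^T A w - b^T w, standing in for
# a network's error as a function of its weights.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, 2.0])
grad = lambda w: A @ w - b

w = np.zeros(2)
g = grad(w)
d = -g                                   # initial search direction = steepest descent
for it in range(20):
    alpha = -(g @ d) / (d @ A @ d)       # exact line search along d (quadratic case)
    w = w + alpha * d
    g_new = grad(w)
    if np.linalg.norm(g_new) < 1e-10:
        break
    beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves coefficient
    d = -g_new + beta * d                # new direction mixes gradient and old direction
    g = g_new

print("minimizer:", w, "check:", np.linalg.solve(A, b))
```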
2973 Modelling of Electron States in Quantum-Wire Systems - Influence of Stochastic Effects on the Confining Potential
Authors: Mikhail Vladimirovich Deryabin, Morten Willatzen
Abstract:
In this work, we theoretically address the influence of red and white Gaussian noise on the electronic energies and eigenstates of cylindrically shaped quantum dots. The stochastic effect can be imagined as resulting from crystal-growth statistical fluctuations in the quantum-dot material composition. In particular, we obtain analytical expressions for the eigenvalue shifts and electronic envelope functions in the k·p formalism due to stochastic variations in the confining band-edge potential. It is shown that white noise in the band-edge potential leaves electronic properties almost unaffected, while red noise may lead to changes in state energies and envelope-function amplitudes of several percent. In the latter case, the ensemble-averaged envelope function decays as a function of distance. It is also shown that, in a stochastic system, constant ensemble-averaged envelope functions are the only bounded solutions for the infinite quantum-wire problem, and the energy spectrum is completely discrete. In other words, the infinite stochastic quantum wire behaves, ensemble-averaged, as an atom.
Keywords: cylindrical quantum dots, electronic eigen energies, red and white Gaussian noise, ensemble averaging effects.
2972 An Improved K-Means Algorithm for Gene Expression Data Clustering
Authors: Billel Kenidra, Mohamed Benmohammed
Abstract:
Data mining techniques used in the field of clustering are a subject of active research and assist in biological pattern recognition and in the extraction of new knowledge from raw data. Clustering means the act of partitioning an unlabeled dataset into groups of similar objects. Each group, called a cluster, consists of objects that are similar among themselves and dissimilar to objects of other groups. Several clustering methods are based on partitional clustering: this category attempts to directly decompose the dataset into a set of disjoint clusters, leading to an integer number of clusters that optimizes a given criterion function. The criterion function may emphasize a local or a global structure of the data, and its optimization is an iterative relocation procedure. The K-Means algorithm is one of the most widely used partitional clustering techniques. Since K-Means is extremely sensitive to the initial choice of centers, and a poor choice may lead to a local optimum that is quite inferior to the global optimum, we propose a strategy for initializing the K-Means centers. The improved K-Means algorithm is compared with the original K-Means, and the results show that its efficiency is significantly improved.
Keywords: Microarray data mining, biological pattern recognition, partitional clustering, k-means algorithm, centroid initialization.
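The abstract does not spell out the proposed initialization strategy, so the sketch below only illustrates the general idea of replacing a single random seeding with a distance-based (farthest-point) seeding and comparing the resulting K-Means inertia; the data are synthetic and the seeding rule is an assumption, not the authors' method.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Hypothetical expression matrix: 200 genes x 10 conditions, two groups.
X = np.vstack([rng.normal(loc, 1.0, size=(100, 10)) for loc in (0.0, 3.0)])

def distance_based_seeds(X, k):
    """Pick k initial centers spread out over the data: the first is random,
    each next one is the point farthest from the centers chosen so far."""
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    return np.array(centers)

seeds = distance_based_seeds(X, k=2)
km = KMeans(n_clusters=2, init=seeds, n_init=1, random_state=0).fit(X)
print("inertia with distance-based seeding:", round(km.inertia_, 2))
print("inertia with a single random init: ",
      round(KMeans(n_clusters=2, init="random", n_init=1, random_state=0).fit(X).inertia_, 2))
```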
2971 Integrating Big Island Layout with Pull System for Production Optimization
Authors: M. H. M. Rusli, A. Jaffar, M. T. Ali, S. Muhamud @ Kayat
Abstract:
Lean manufacturing is a production philosophy made popular by Toyota Motor Corporation (TMC). It is globally known as the Toyota Production System (TPS) and has the ultimate aim of reducing cost by thoroughly eliminating wastes, or muda. TPS embraces Just-in-Time (JIT) manufacturing, achieving cost reduction through lead time reduction. JIT manufacturing can be achieved by implementing a pull system in production. Furthermore, TPS aims to improve productivity and create continuous flow in production by arranging machines and processes in cellular configurations, known as Cellular Manufacturing Systems (CMS). This paper studies the integration of CMS with the pull system to establish a Big Island-Pull system production for High Mix Low Volume (HMLV) products in an automotive component industry. The paper uses the built-in JIT system steps adapted from TMC to create the pull system production and also to create a shojinka line which, according to takt time, has the flexibility to adapt to demand changes simply by adding or removing manpower. This leads to optimization in production.
Keywords: Big Island layout, Lean manufacturing, Material and Information Flow Chart, Pull system production, TPS.
2970 Multilevel Activation Functions For True Color Image Segmentation Using a Self Supervised Parallel Self Organizing Neural Network (PSONN) Architecture: A Comparative Study
Authors: Siddhartha Bhattacharyya, Paramartha Dutta, Ujjwal Maulik, Prashanta Kumar Nandi
Abstract:
The paper describes a self-supervised parallel self-organizing neural network (PSONN) architecture for true color image segmentation. The proposed architecture is a parallel extension of the standard single self-organizing neural network (SONN) architecture and comprises an input (source) layer of image information, three single self-organizing neural network architectures for segmentation of the different primary color components in a color image scene, and one final output (sink) layer for fusion of the segmented color component images. Responses to the different shades of the color components are induced in each of the three single network architectures (meant for component-level processing) by applying a multilevel version of the characteristic activation function, which maps the input color information into different shades of the color components, thereby yielding a processed component color image segmented on the basis of the different shades of component colors. The number of target classes in the segmented image corresponds to the number of levels in the multilevel activation function. Since the multilevel version of the activation function exhibits several subnormal responses to the input color image scene information, the system errors of the three component network architectures are computed from a subnormal linear index of fuzziness of the component color image scenes at the individual level. Several multilevel activation functions are employed for segmentation of the input color image scene using the proposed network architecture. Results of applying the multilevel activation functions to the PSONN architecture are reported for three real-life true color images, and the results are substantiated empirically with the correlation coefficients between the segmented images and the original images.
Keywords: Colour image segmentation, fuzzy set theory, multi-level activation functions, parallel self-organizing neural network.
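A minimal sketch of one common multilevel sigmoid-type activation, of the kind referred to above, is given below; the number of levels, slope and thresholds are illustrative assumptions rather than the parameterization used by the authors.

```python
import numpy as np

def multilevel_sigmoid(x, levels=4, slope=10.0):
    """Map inputs in [0, 1] onto `levels` graded output shades.
    Each level contributes a shifted sigmoid; the sum forms a staircase-like
    response, so the output settles near one of `levels` values."""
    thresholds = (np.arange(levels - 1) + 0.5) / levels     # e.g. 0.125, 0.375, 0.625
    steps = [1.0 / (1.0 + np.exp(-slope * (x - t))) for t in thresholds]
    return np.sum(steps, axis=0) / (levels - 1)             # normalized to [0, 1]

x = np.linspace(0, 1, 9)
print(np.round(multilevel_sigmoid(x), 2))   # graded, near-discrete responses
```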
2969 Designing Mobile Application to Motivate Young People to Visit Cultural Heritage Sites
Authors: Yuko Hiramatsu, Fumihiro Sato, Atsushi Ito, Hiroyuki Hatano, Mie Sato, Yu Watanabe, Akira Sasaki
Abstract:
This paper presents a mobile phone application developed for sightseeing in Nikko, one of the cultural World Heritage sites in Japan, using BLE (Bluetooth Low Energy) beacons. Based on our pre-research, we decided to design the application for young people who walk around the area actively but know little about the tradition and culture of Nikko. One solution would be to construct many information boards; however, it is difficult to install new guide plates at cultural World Heritage sites. The smartphone is a good way to deliver such information to visitors. The application was designed using a combination of the smartphone and beacons set in the area, so that when a tourist passes near a beacon, the application displays information about the area, including a map, historical or cultural information about the temples and shrines, and nearby local shops, as well as a bus timetable. It is useful for foreigners, too. In addition, we developed quizzes relating to the culture and tradition of Nikko to provide information based on the Zeigarnik effect, a psychological effect. According to the results of our trials, tourists positively evaluated the basic information, and young people who used the quiz function were able to learn the historical and cultural points. This application helped young visitors at Nikko to understand the cultural elements of the site. In addition, the application has a function to send notifications, designed to provide information about the local community such as shops, local transportation companies and the information office. The application also aims to encourage people living in the area, and such cooperation from local people will keep the application vivid and help young visitors feel that the cultural heritage site is still alive today. This is a gateway for young people to learn about a traditional place and understand the importance of preserving such areas.
Keywords: BLE beacon, smartphone application, Zeigarnik effect, world heritage site, school trip.
2968 Low Cost Real-Time Communication Braille Hand-Glove for Visually Impaired Using Slot Sensors and Vibration Motors
Authors: Mukul Bandodkar, Virat Chourasia
Abstract:
Visually impaired people find it extremely difficult to acquire the basic and vital information necessary for their living, and are therefore at a very high risk of being socially excluded as a result of poor access to information. In recent years, several attempts have been made to improve communication methods for visually impaired people based on tactile sensation, such as finger Braille, manual alphabets, the print-on-palm method, and several other electronic devices. However, such methods suffer from problems such as lack of privacy and lack of compatibility with computer environments. This paper describes a low-cost Braille hand glove for blind people using slot sensors and vibration motors, with the help of which they can read and write emails and text messages and read e-books. The glove allows the person to type characters based on different Braille combinations using six slot sensors, while vibrations at six different positions of the glove matching the Braille code allow them to read characters.
Keywords: Braille, Braille Hand-Glove, Slot sensors, Vibration motors.
2967 Spatial and Temporal Variability of Fog Over the Indo-Gangetic Plains, India
Authors: Sanjay Kumar Srivastava, Anu Rani Sharma, Kamna Sachdeva
Abstract:
The aim of this paper is to analyze the characteristics of winter fog in terms of its trend and spatial-temporal variability over the Indo-Gangetic Plains (IGP). The study reveals that during the last four and a half decades (1971-2015), an alarming increasing trend in fog frequency has been observed during the winter months of December and January over the study area. The frequency of fog has increased by 118.4% during these peak winter months. It has also been observed that, on average, the central part of the IGP has 66.29% fog days, followed by the western IGP with 41.94% fog days. Further, Empirical Orthogonal Function (EOF) decomposition and Mann-Kendall variation analysis are used to analyze the spatial and temporal patterns of winter fog. The findings have significant implications for further research on fog over the IGP and for formulating robust strategies to adapt to fog variability and mitigate its effects. The decision by the Delhi Government to implement the odd-even scheme restricting the use of private vehicles, in order to reduce pollution and improve air quality, may result in a further increase in the alarming fog trend over Delhi and the surrounding regions of the IGP.
Keywords: Fog, climatology, spatial variability, temporal variability, empirical orthogonal function, visibility, Mann-Kendall test, variation point.
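The Mann-Kendall analysis mentioned above can be illustrated with the following sketch, which computes the S statistic and a two-sided p-value for a hypothetical series of winter fog-day counts; ties are ignored in the variance formula, and the data are not the study's.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Mann-Kendall trend test: returns the S statistic and two-sided p-value."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0        # no tie correction, for brevity
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return s, 2 * (1 - norm.cdf(abs(z)))

# Hypothetical December-January fog-day counts for a run of winters.
fog_days = [12, 15, 14, 18, 20, 19, 23, 25, 24, 28]
S, p = mann_kendall(fog_days)
print(f"S = {S}, p = {p:.4f}")   # small p with S > 0 suggests an increasing trend
```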
2966 Secure Block-Based Video Authentication with Localization and Self-Recovery
Authors: Ammar M. Hassan, Ayoub Al-Hamadi, Yassin M. Y. Hasan, Mohamed A. A. Wahab, Bernd Michaelis
Abstract:
Because of the great advances in multimedia technology, digital multimedia is vulnerable to malicious manipulation. In this paper, a public-key self-recovery block-based video authentication technique is proposed which can not only precisely localize detected alterations but also recover the missing data with high reliability. In the proposed block-based technique, multiple description coding (MDC) is used to generate two codes (two descriptions) for each block. Although one block code (one description) is enough to rebuild the altered block, the altered block is rebuilt with better quality from the two block descriptions; using MDC thus increases the reliability of data recovery. A block signature is computed using a cryptographic hash function, and a doubly linked chain is utilized to embed the block signature copies and the block descriptions into the LSBs of distant blocks and the block itself. The doubly linked chain scheme gives the proposed technique the capability to thwart vector quantization attacks. In the proposed technique, anyone can check the authenticity of a given video using the public key. The experimental results show that the proposed technique is reliable for detecting, localizing and recovering the alterations.
Keywords: Authentication, hash function, multiple description coding, public key encryption, watermarking.
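A minimal sketch of the block-signature step described above is given below: per-block hashes are computed over the most significant bits of a frame so that later LSB embedding would not invalidate them. The block size, frame data and choice of SHA-256 are illustrative; the MDC descriptions, the doubly linked chain embedding and the public-key signing of the paper are not reproduced.

```python
import hashlib
import numpy as np

rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in for a video frame
BLOCK = 16

def block_signatures(frame, block=BLOCK):
    """Return a dict mapping block coordinates to a hash of the block content.
    Only the most significant bits are hashed, so LSB embedding of signatures
    and descriptions would not change the hashes."""
    sigs = {}
    msb = frame & 0xFE                       # drop the LSB plane before hashing
    for r in range(0, frame.shape[0], block):
        for c in range(0, frame.shape[1], block):
            data = msb[r:r + block, c:c + block].tobytes()
            sigs[(r, c)] = hashlib.sha256(data).hexdigest()
    return sigs

original = block_signatures(frame)
tampered = frame.copy()
tampered[20, 20] ^= 0x80                      # flip a high bit in one block
altered = [k for k, v in block_signatures(tampered).items() if original[k] != v]
print("altered blocks:", altered)             # localizes the tampered 16x16 block
```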
2965 Computational Method for Annotation of Protein Sequence According to Gene Ontology Terms
Authors: Razib M. Othman, Safaai Deris, Rosli M. Illias
Abstract:
Annotation of a protein sequence is pivotal for the understanding of its function. The accuracy of manual annotation provided by curators is still questionable, as it has lesser evidence strength, and it remains a hard and time-consuming task. A number of computational methods, including tools, have been developed to tackle this challenging task. However, they require high-cost hardware, are difficult for bioscientists to set up, or depend on time-intensive and blind sequence similarity searches such as the Basic Local Alignment Search Tool. This paper introduces a new method of assigning highly correlated Gene Ontology terms of annotated protein sequences to partially annotated or newly discovered protein sequences. The method is fully based on Gene Ontology data and annotations. Two problems were identified in realizing this method. The first relates to splitting the single monolithic Gene Ontology RDF/XML file into a set of smaller files that are easier to access and process, so that these files can be enriched with protein sequences and Inferred from Electronic Annotation evidence associations. The second involves searching for a set of semantically similar Gene Ontology terms to a given query. The details of the macro and micro problems involved and their solutions, including the objective of this study, are described. This paper also describes protein sequence annotation and the Gene Ontology. The methodology of this study and the Gene Ontology-based protein sequence annotation tool, namely extended UTMGO, are presented. Furthermore, its basic version, a Gene Ontology browser based on semantic similarity search, is also introduced.
Keywords: automatic clustering, bioinformatics tool, gene ontology, protein sequence annotation, semantic similarity search
2964 Materialized View Effect on Query Performance
Authors: Yusuf Ziya Ayık, Ferhat Kahveci
Abstract:
Currently, database management systems provide various tools, such as backup and maintenance, and also provide statistical information such as resource usage and security. In terms of query performance, this paper covers query optimization, views, indexed tables, pre-computed materialized views, and query performance analysis, in which query plan alternatives can be created and the least costly one selected to optimize a query. Indexes and views can be created for related table columns. The literature review of this study showed that, over time, despite the growing capabilities of database management systems, only database administrators are aware of the need to deal with archival and transactional data types differently. These data may be constantly changing data used in everyday life, or data from completed questionnaires whose input has been finalized. For both types of data the database uses the same capabilities, but, as shown in the findings section, instead of repeating heavy calculations that return the same results for the same query over survey data, the results of a materialized view can be used in a much simpler way. In this study, this performance difference was observed quantitatively by considering the cost of the query.
Keywords: Materialized view, pre-computation, query cost, query performance.
2963 Predicting the Impact of the Defect on the Overall Environment in Function Based Systems
Authors: Parvinder S. Sandhu, Urvashi Malhotra, E. Ardil
Abstract:
A lot of work has been done on predicting the fault proneness of software systems. However, the severity of the faults is more important than the number of faults in the developed system, as the major faults matter most for a developer and need immediate attention. In this paper, we try to predict the level of impact of the existing faults in software systems. Neuro-fuzzy based predictor models are applied to NASA's public domain defect dataset, coded in the C programming language. Correlation-based Feature Selection (CFS) evaluates the worth of a subset of attributes by considering the individual predictive ability of each feature along with the degree of redundancy between them; hence, CFS is used for selecting the metrics that are most highly correlated with the level of fault severity. The results are compared with the prediction results of the Logistic Models (LMT) technique that was earlier quoted as the best in [17]. The results are recorded in terms of Accuracy, Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE), and they show that the neuro-fuzzy based model provides relatively better prediction accuracy compared to the other models and hence can be used for modeling the level of impact of faults in function-based systems.
Keywords: Software Metrics, Fuzzy, Neuro-Fuzzy, Software Faults, Accuracy, MAE, RMSE.
2962 OXADM Asymmetrical Optical Device: Extending the Application to FTTH System
Authors: Mohammad Syuhaimi Ab-Rahman, Mohd. Saiful Dzulkefly Zan, Mohd Taufiq Mohd Yusof
Abstract:
With the drastic growth of optical communication technology, a lossless, low-crosstalk and multifunction optical switch is highly desirable for large-scale photonic networks. To realize such a switch, we have introduced a new optical switch architecture that embeds many functions in a single device. The asymmetrical architecture of the OXADM consists of three parts: selective port, add/drop operation, and path routing. The selective port permits only the wavelength of interest to pass through and acts as a filter, while the add and drop functions are implemented in the second part of the OXADM architecture. The signals can then be re-routed to any output port and/or undergo an accumulation function, which multiplexes all signals onto a single path that exits at any output port of interest; this is done by the path routing operation. The unique features offered by the OXADM have extended its application to Fiber-to-the-Home (FTTH) technology, where the OXADM is used as a wavelength management element in the Optical Line Terminal (OLT). Each port is specifically assigned operating wavelengths, with dynamic routing management to ensure that no traffic congestion occurs in the OLT.
Keywords: OXADM, asymmetrical architecture, optical switch, OLT, FTTH.
2961 Optimal Maintenance Policy for a Partially Observable Two-Unit System
Authors: Leila Jafari, Viliam Makis, Akram Khaleghei G.B.
Abstract:
In this paper, we present a maintenance model of a two-unit series system with economic dependence. Unit #1, which is considered to be more expensive and more important, is subject to condition monitoring (CM) at equidistant, discrete time epochs, while unit #2, which is not subject to CM, has a general lifetime distribution. The multivariate observation vectors obtained through condition monitoring carry partial information about the hidden state of unit #1, which can be in a healthy or a warning state while operating. Only the failure state is assumed to be observable for both units. The objective is to find an optimal opportunistic maintenance policy minimizing the long-run expected average cost per unit time. The problem is formulated and solved in the partially observable semi-Markov decision process framework. An effective computational algorithm for finding the optimal policy and the minimum average cost is developed and illustrated by a numerical example.
Keywords: Condition-Based Maintenance, Semi-Markov Decision Process, Multivariate Bayesian Control Chart, Partially Observable System, Two-unit System.
2960 Prediction of Slump in Concrete using Artificial Neural Networks
Authors: V. Agrawal, A. Sharma
Abstract:
High Strength Concrete (HSC) is defined as concrete that meets a special combination of performance and uniformity requirements that cannot be achieved routinely using conventional constituents and normal mixing, placing, and curing procedures. It is a highly complex material, which makes modeling its behavior a very difficult task. This paper aims to show the possible applicability of Neural Networks (NN) for predicting the slump of High Strength Concrete (HSC). Neural network models are constructed, trained and tested using the available test data of 349 different HSC mix designs gathered from a particular Ready Mix Concrete (RMC) batching plant. The most versatile neural network model is selected to predict the slump in concrete. The data used in the neural network models are arranged in a format of eight input parameters: cement, fly ash, sand, coarse aggregate (10 mm), coarse aggregate (20 mm), water, super-plasticizer, and water/binder ratio. Furthermore, to test the accuracy of the slump prediction, the final selected model is further tested on data from 40 different HSC mix designs taken from another batching plant. The results are compared on the basis of the error (performance) function.
Keywords: Artificial Neural Networks, Concrete, prediction of slump, slump in concrete.
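A minimal sketch of a neural network slump predictor with the eight input parameters listed above is shown below; the mix-design data and the slump relation are synthetic stand-ins, and the single-hidden-layer model is an assumption rather than the authors' selected network.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(7)
# Hypothetical mix designs: cement, fly ash, sand, CA 10 mm, CA 20 mm,
# water, super-plasticizer, water/binder ratio (the paper's 8 inputs).
X = rng.uniform([300, 0, 600, 300, 600, 140, 2, 0.25],
                [500, 150, 800, 500, 800, 180, 8, 0.40], size=(349, 8))
# A made-up slump relation, standing in for the plant's measured slump (mm).
y = 50 + 0.6 * X[:, 5] - 100 * X[:, 7] + 5 * X[:, 6] + rng.normal(0, 5, 349)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                   random_state=0))
model.fit(X[:300], y[:300])                       # train on most of the mixes
pred = model.predict(X[300:])                     # hold-out mixes
print("MAE on held-out mixes:", round(mean_absolute_error(y[300:], pred), 2), "mm")
```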
2959 A New Class χ2 (M, A, Δ) of the Double Difference Sequences of Fuzzy Numbers
Authors: N. Subramanian, U. K. Misra
Abstract:
The aim of this paper is to introduce and study a new concept of strong double χ2 (M, A, Δ) sequences of fuzzy numbers; some properties of the resulting sequence spaces of fuzzy numbers are also examined.
Keywords: Modulus function, fuzzy number, metric space.
2958 Optimal Planning of Waste-to-Energy through Mixed Integer Linear Programming
Authors: S. T. Tan, H. Hashim, W. S. Ho, C. T. Lee
Abstract:
Rapid economic development and population growth in Malaysia have accelerated the generation of solid waste. This issue puts pressure on Malaysia to manage municipal solid waste (MSW) effectively, given the increasing cost of landfill. This paper discusses the optimal planning of waste-to-energy (WTE) using a combinatorial simulation and optimization model through a mixed integer linear programming (MILP) approach. The proposed multi-period model is tested in Iskandar Malaysia (IM) as a case study for a period of 12 years (2011-2025) to illustrate the economic potential and trade-offs involved. In this paper, three scenarios are used to demonstrate the applicability of the model: (1) an incineration scenario, (2) a landfill scenario, and (3) an optimal scenario. The model revealed that the minimum cost of electricity generation from 9,995,855 tonnes of MSW is estimated at USD 387 million, with a total electricity generation of 50 MW/yr in the optimal scenario.
Keywords: Mixed Integer Linear Programming (MILP), optimization, solid waste management (SWM), waste-to-energy (WTE).
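A toy single-period version of a WTE planning MILP, sketched with SciPy, is given below: binary build decisions and continuous waste allocations minimize cost subject to demand and capacity constraints. The facilities, costs and capacities are hypothetical, and the multi-period Iskandar Malaysia model is not reproduced.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Hypothetical single-period data: two options (incinerator, landfill-gas plant).
waste = 100.0                       # kilotonnes of MSW to treat
capacity = np.array([80.0, 120.0])  # kt capacity if the facility is built
fixed = np.array([50.0, 30.0])      # annualized build cost (million USD)
var = np.array([0.20, 0.35])        # treatment cost per kt (million USD)

# Decision vector x = [t_incin, t_landfill, y_incin, y_landfill]:
# t_i = tonnage sent to option i (continuous), y_i = build decision (binary).
c = np.concatenate([var, fixed])                         # minimize total cost
A_demand = np.array([[1, 1, 0, 0]])                      # all waste must be treated
A_capacity = np.array([[1, 0, -capacity[0], 0],          # t_i <= capacity_i * y_i
                       [0, 1, 0, -capacity[1]]])
constraints = [LinearConstraint(A_demand, waste, waste),
               LinearConstraint(A_capacity, -np.inf, 0)]
res = milp(c,
           constraints=constraints,
           integrality=np.array([0, 0, 1, 1]),           # last two variables binary
           bounds=Bounds(0, [np.inf, np.inf, 1, 1]))
print("tonnage / build decisions:", np.round(res.x, 2), "cost:", round(res.fun, 2))
```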
2957 Influence of Radio Frequency Identification Technology in Logistic, Inventory Control and Supply Chain Optimization
Authors: H. Amoozad-khalili, R. Tavakkoli-Moghaddam, N. Shahab-Dehkordi
Abstract:
The main aim of Supply Chain Management (SCM) is to produce, distribute and deliver goods and equipment to the right location, at the right time and in the right amount to satisfy customers, with minimum waste of time and cost. Implementing techniques that reduce project time and cost and improve productivity and performance is therefore very important. Emerging technologies such as Radio Frequency Identification (RFID) are now making it possible to automate supply chains in real time and to make them more efficient than the simple supply chains of the past in tracing and monitoring goods and products and in capturing data on the movement of goods and other events. This paper considers the concepts, components and characteristics of RFID technology, with a focus on warehouse and inventory management. Additionally, the use of RFID to improve information management in the supply chain is discussed. Finally, installation issues and the results of this technology with respect to warehouse and inventory management and business development are presented.
Keywords: Logistics, Supply Chain Management, RFID Technology, Inventory Control.
2956 Transmission Line Congestion Management Using Hybrid Fish-Bee Algorithm with Unified Power Flow Controller
Authors: P. Valsalal, S. Thangalakshmi
Abstract:
There is a widespread changeover in the electrical power industry from the old-style monopolistic structure towards a horizontally distributed competitive structure to meet the demand of rising consumption. When the transmission lines of the deregulated system are unable to accommodate the entire service needs, the lines become overloaded or congested. The mediator between customers and power producers, the Independent System Operator (ISO), is responsible for relieving congestion without violating transmission line limits. Among the existing approaches for congestion management, the most frequently used are generation rescheduling and load curtailment. There is a limit to rescheduling the generators, and further load may not be supplied with the prevailing resources unless more private power producers are added to the system, which considerably raises the cost. Hence, congestion is relieved by appropriate Flexible AC Transmission Systems (FACTS) devices, which boost the existing transfer capacity of transmission lines. The FACTS device chosen here is the Unified Power Flow Controller (UPFC); its correct placement is vital, and it should be positioned in the most congested line. Hence, the weak line is identified using the power flow performance index with a new objective function and the proposed hybrid Fish-Bee algorithm. Further, locating the UPFC in the appropriate line reduces branch loading and minimizes voltage deviation. The power transfer capacity of the lines is determined with and without the UPFC in the identified congested line of the IEEE 30-bus system, and the simulated results are compared with prevailing algorithms. It is observed that the transfer capacity of the existing lines is increased with the presented algorithm, thus alleviating the congestion.
Keywords: Available line transfer capability, congestion management, FACTS device, hybrid fish-bee algorithm, ISO, UPFC.
2955 Effect of Multiple Taxation on Investments in Small and Medium Enterprises in Enugu State, Nigeria
Authors: Ebere U. Okolo, Eunice C. Okpalaojiego, Chimaobi V. Okolo
Abstract:
Some investors prefer to keep their money in the bank rather than invest in Small and Medium Enterprises (SMEs) due to the high cost of running a small or medium scale enterprise in Enugu State. This cost primarily concerns multiple taxation, enormous tax burdens, levies and charges. This study examines the effect of multiple taxation on investments in SMEs. The study used a survey design with an SME population of 80, and a questionnaire was used to collect data. Simple percentages/frequencies were used to analyze the data, and the research hypotheses were tested with ANOVA. It was found that multiple taxation has a negative effect on SME investment. Furthermore, the relationship between SME investment and the ability to pay tax is significant. The researcher recommends that government should develop a tax policy that considers the enhancement of SMEs’ capital allowance when imposing taxes. Government should also consider a tax policy that encourages investment in SMEs by consolidating all taxes into one payment that is later distributed to the various government purses, rather than imposing many closely related but different taxes at the same time.
Keywords: Investments, multiple taxation, small and medium enterprises.
2954 Tape-Shaped Multiscale Fiducial Marker: A Design Prototype for Indoor Localization
Authors: Marcell S. A. Martins, Benedito S. R. Neto, Gerson L. Serejo, Carlos G. R. Santos
Abstract:
Indoor positioning systems use sensors such as Bluetooth, ZigBee, and Wi-Fi, as well as cameras for image capture, which can be fixed or mobile. These computer vision-based positioning approaches are low-cost to implement, mainly when a mobile camera is used. The present study aims to design a fiducial marker for a low-cost indoor localization system. The marker is tape-shaped to allow continuous reading, employing two detection algorithms, one for greater distances and another for smaller distances. Therefore, the location service is always operational, even with variations in capture distance. A minimal localization and reading algorithm was implemented for the proposed marker design, aiming to validate it. The accuracy tests consider readings with the capture distance varying between 0.5 and 10 meters, comparing the proposed marker with others. The tests showed that the proposed marker has a broader capture range than ArUco and QR Code markers of the same size, thereby reducing visual pollution and maximizing tracking, since the environment can be covered entirely.
Keywords: Multiscale recognition, indoor localization, tape-shaped marker, Fiducial Marker.
2953 Application of Stochastic Models to Annual Extreme Streamflow Data
Authors: Karim Hamidi Machekposhti, Hossein Sedghi
Abstract:
This study was designed to find the best stochastic model (using time series analysis) for the annual extreme streamflow (peak and maximum streamflow) of the Karkheh River in Iran. The Auto-Regressive Integrated Moving Average (ARIMA) model was used to simulate these series and forecast them into the future. For the analysis, annual extreme streamflow data of the Jelogir Majin station (above the Karkheh dam reservoir) for the years 1958-2005 were used. A visual inspection of the time plot shows a slight increasing trend; therefore, the series is not stationary. The non-stationarity observed in the Auto-Correlation Function (ACF) and Partial Auto-Correlation Function (PACF) plots of the annual extreme streamflow was removed using first-order differencing (d = 1) for the development of the ARIMA model. Interestingly, the ARIMA(4,1,1) model developed was found to be most suitable for simulating annual extreme streamflow for the Karkheh River. The model was found to be appropriate for forecasting ten years of annual extreme streamflow and for assisting decision makers in establishing priorities for water demand. The Statistical Analysis System (SAS) and Statistical Package for the Social Sciences (SPSS) codes were used to determine the best model for this series.
Keywords: Stochastic models, ARIMA, extreme streamflow, Karkheh River.
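The ARIMA workflow described above can be sketched as follows with statsmodels, fitting an ARIMA(4,1,1) model to a synthetic annual-maximum series and producing a ten-year forecast; the streamflow values are placeholders, not the Jelogir Majin record.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
# Hypothetical annual peak streamflow (m^3/s) for 1958-2005, with a mild trend,
# standing in for the station record.
years = pd.date_range("1958", periods=48, freq="YS")
flow = pd.Series(800 + 3 * np.arange(48) + rng.normal(0, 120, 48), index=years)

model = ARIMA(flow, order=(4, 1, 1))     # the order reported in the abstract
fit = model.fit()
print(fit.params)                        # AR/MA coefficients and innovation variance
print(fit.forecast(steps=10))            # ten-year forecast of annual extremes
```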
2952 Pallet Tracking and Cost Optimization of the Flow of Goods in Logistics Operations by Serial Shipping Container Code
Authors: Dominika Crnjac Milic, Martina Martinovic, Vladimir Simovic
Abstract:
The case study method in this paper shows the implementation of Information Technology (IT) and the Serial Shipping Container Code (SSCC) in a Croatian company that deals with logistics operations and provides logistics services in the cold chain segment. This company is aware of the sensitivity of the goods entrusted to it by the users of the service, as well as of the importance of speed and accuracy in providing logistics services. To that end, it has implemented and used the latest IT to ensure the highest standard of logistics services for its customers. Looking for efficiency and optimization of supply chain management, while maintaining a high level of quality of the products that are sold, today's users of outsourced logistics services are open to the implementation of new IT products that ultimately deliver savings. By analysing the positive results and the difficulties that arise when using this technology, we aim to provide an insight into the potential of this approach for the logistics service provider.
Keywords: Logistics operations, serial shipping container code, SSCC, information technology, cost optimization.
2951 An Empirical Study on Switching Activation Functions in Shallow and Deep Neural Networks
Authors: Apoorva Vinod, Archana Mathur, Snehanshu Saha
Abstract:
Though there exists a plethora of Activation Functions (AFs) used in single and multiple hidden layer Neural Networks (NN), their behavior has always raised curiosity, whether they are used singly or in combination. The popular AFs, Sigmoid, ReLU, and Tanh, have performed prominently well for shallow and deep architectures. Most of the time, AFs are used singly in multi-layered NN, and, to the best of our knowledge, their performance has never been studied and analyzed deeply when used in combination. In this manuscript, we experiment on multi-layered NN architectures (both shallow and deep: a convolutional NN and VGG16) and investigate how well the network responds when two different AFs (Sigmoid-Tanh, Tanh-ReLU, ReLU-Sigmoid) are used alternately, against a traditional single-AF combination (Sigmoid-Sigmoid, Tanh-Tanh, ReLU-ReLU). Our results show that, when using two different AFs, the network achieves better accuracy, substantially lower loss, and faster convergence on 4 computer vision (CV) and 15 non-CV (NCV) datasets. When using different AFs, not only was the accuracy greater by 6-7%, but convergence was also accomplished twice as fast. We present a case study investigating the probability of networks suffering vanishing and exploding gradients when using two different AFs. Additionally, we theoretically show that a composition of two or more AFs satisfies the Universal Approximation Theorem (UAT).
Keywords: Activation Function, Universal Approximation function, Neural Networks, convergence.
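A minimal forward-pass sketch of the idea above, alternating two different activation functions across hidden layers, is given below; the layer sizes and weights are arbitrary, and training, the CNN/VGG16 experiments and the UAT result are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, weights, activations):
    """Forward pass through a small MLP, applying a (possibly different)
    activation after each hidden layer."""
    funcs = {"tanh": np.tanh,
             "relu": lambda z: np.maximum(z, 0.0),
             "sigmoid": lambda z: 1.0 / (1.0 + np.exp(-z))}
    h = x
    for (W, b), act in zip(weights[:-1], activations):
        h = funcs[act](h @ W + b)
    W_out, b_out = weights[-1]
    return h @ W_out + b_out                      # linear output layer

# Two hidden layers of 16 units on a 10-feature input, one scalar output.
sizes = [10, 16, 16, 1]
weights = [(rng.normal(0, 0.3, (m, n)), np.zeros(n))
           for m, n in zip(sizes[:-1], sizes[1:])]
x = rng.normal(size=(5, 10))                      # a small batch of inputs

same = forward(x, weights, activations=["tanh", "tanh"])         # traditional single AF
alternating = forward(x, weights, activations=["tanh", "relu"])  # switched AFs
print(same.ravel(), alternating.ravel())
```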