Search results for: classifiers accuracy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1838

98 Simulation and Workspace Analysis of a Tripod Parallel Manipulator

Authors: A. Arockia Selvakumar, R. Sivaramakrishnan, Srinivasa Karthik.T.V, Valluri Siva Ramakrishna, B.Vinodh.

Abstract:

Industrial robots play a vital role in automation; however, little effort has been made to apply robots to machining work such as grinding, cutting, milling, drilling, and polishing. Parallel manipulators offer high stiffness, rigidity, and accuracy that conventional serial robot manipulators cannot provide. The aim of this paper is to model a 3 DOF Parallel Manipulator (3 DOF PM) and analyze its workspace. The 3 DOF PM was modeled and simulated using ADAMS. The concept is based on the transformation of motion from a screw joint to a spherical joint through a connecting link; the Parallel Manipulator (PM) is modeled with screw joints to achieve very accurate positioning. A workspace analysis was carried out to determine the work volume of the 3 DOF PM. The positions of the spherical joints connected to the moving platform and of the circumferential points of the moving platform were considered in finding the workspace. After the simulation, the positions of the joints of the moving platform were recorded against simulation time, and these points were given as input to MATLAB to obtain the work envelope; AUTOCAD was then used to determine the work volume. The obtained values were compared with an analytical approach based on the Pappus-Guldinus theorem. The analysis considered two parameters: the link length and the radius of the moving platform. The results show that the work volume is directly proportional to the radius of the moving platform at a constant link length, and likewise directly proportional to the link length at a constant platform radius.
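
As a rough illustration of the analytical step, the sketch below (Python, with purely hypothetical profile coordinates) estimates the volume swept by revolving a planar work-envelope profile about the manipulator axis using the Pappus-Guldinus theorem, V = 2*pi*r_c*A, where A is the profile area and r_c the radius of its centroid.

```python
import numpy as np

def pappus_guldinus_volume(r, z):
    """Estimate the volume swept by revolving a closed planar profile
    (r, z) about the z-axis, via Pappus-Guldinus: V = 2*pi*r_c*A, with A
    the profile area and r_c the radius of its centroid."""
    r, z = np.asarray(r, float), np.asarray(z, float)
    # Signed shoelace area of the (r, z) polygon
    cross = r * np.roll(z, -1) - np.roll(r, -1) * z
    a_signed = 0.5 * cross.sum()
    # Centroid radius (standard polygon centroid formula)
    r_c = np.sum((r + np.roll(r, -1)) * cross) / (6.0 * a_signed)
    return 2.0 * np.pi * r_c * abs(a_signed)

# A rectangular profile 0.1 m x 0.1 m spanning radii 0.2-0.3 m (hypothetical)
print(pappus_guldinus_volume([0.2, 0.3, 0.3, 0.2], [0.0, 0.0, 0.1, 0.1]))
# ~0.0157 m^3 (= 2*pi*0.25*0.01)
```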

Keywords: Three Degrees of Freedom Parallel Manipulator (3 DOF PM), ADAMS, Work volume, MATLAB, AUTOCAD, Pappus-Guldinus Theorem.

97 CFD Study of Subcooled Boiling Flow at Elevated Pressure Using a Mechanistic Wall Heat Partitioning Model

Authors: Machimontorn Promtong, Sherman C. P. Cheung, Guan H. Yeoh, Sara Vahaji, Jiyuan Tu

Abstract:

The wide range of industrial applications involving boiling flows motivates the establishment of fundamental knowledge of boiling flow phenomena, and a number of experimental and numerical studies have been performed to elucidate the underlying physics of this flow. In this paper, improved wall boiling models, implemented in ANSYS CFX 14.5, are introduced to study subcooled boiling flow at elevated pressure. At the heated wall boundary, a fractal model, a force-balance approach, and a mechanistic frequency model are used to predict the nucleation site density, bubble departure diameter, and bubble departure frequency, respectively. The wall heat flux partitioning closures were modified to account for bubbles sliding along the wall before lift-off, which usually occurs in flow boiling. The simulation was performed with the two-fluid model, and the standard k-ω SST model was selected for turbulence modelling. Existing experimental data at around 5 bar were chosen to evaluate the accuracy of the presented mechanistic approach. The void fraction and Interfacial Area Concentration (IAC) are in good agreement with the experimental data; however, the bubble velocity and Sauter Mean Diameter (SMD) are over-predicted. This over-prediction may be caused by considering only dispersed, spherical bubbles in the simulations. In future work, important physical mechanisms of bubbles, such as merging and shrinking during sliding on the heated wall, will be incorporated into this mechanistic model to extend its capability to a wider range of flows.
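
For orientation, the sketch below (Python) shows a baseline RPI-style wall heat flux partition, q_wall = q_conv + q_quench + q_evap, of the kind such mechanistic closures feed into. The closure quantities are plain inputs here, and the waiting-time assumption t_wait = 0.8/f is one common choice from the literature, not the paper's calibrated model.

```python
import numpy as np

def rpi_wall_partition(h_c, t_w, t_l, d_bw, f, n_a,
                       rho_g, h_fg, rho_l, cp_l, k_l):
    """Minimal sketch of an RPI-style wall heat flux partition. The
    bubble departure diameter d_bw [m], departure frequency f [1/s] and
    nucleation site density n_a [1/m^2] would come from the fractal,
    force-balance and mechanistic frequency closures named above; here
    they are illustrative inputs."""
    # Fraction of wall area influenced by nucleating bubbles (capped at 1)
    a_bub = min(1.0, n_a * np.pi * d_bw**2)
    # Evaporation: latent heat removed by bubbles leaving the wall
    q_evap = (np.pi / 6.0) * d_bw**3 * rho_g * h_fg * f * n_a
    # Quenching: transient conduction while liquid rewets the wall;
    # waiting time t_wait = 0.8/f is a common closure assumption
    t_wait = 0.8 / f
    q_quench = (a_bub * 2.0 * f
                * np.sqrt(t_wait * rho_l * cp_l * k_l / np.pi) * (t_w - t_l))
    # Single-phase convection on the remaining wall area
    q_conv = (1.0 - a_bub) * h_c * (t_w - t_l)
    return q_conv + q_quench + q_evap
```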

Keywords: CFD, mechanistic model, subcooled boiling flow, two-fluid model.

96 Non-Invasive Data Extraction from Machine Display Units Using Video Analytics

Authors: Ravneet Kaur, Joydeep Acharya, Sudhanshu Gaur

Abstract:

Artificial Intelligence (AI) has the potential to transform manufacturing by improving shop floor processes such as production, maintenance, and quality. However, industrial datasets are notoriously difficult to extract in a real-time, streaming fashion, which negates potential AI benefits. A prime example is specialized industrial controllers operated by custom software, which complicates connecting them to an Information Technology (IT) based data acquisition network. Security concerns may also limit direct physical access to these controllers for data acquisition. To connect the Operational Technology (OT) data stored in these controllers to an AI application in a secure, reliable, and available way, we propose a novel Industrial IoT (IIoT) solution in this paper. In this solution, we demonstrate how video cameras can be installed on a factory shop floor to continuously capture images of the controller HMIs. We propose image pre-processing to segment the HMI into regions of streaming data and regions of fixed meta-data. We then evaluate the performance of multiple Optical Character Recognition (OCR) technologies, such as Tesseract and Google Vision, in recognizing the streaming data, and test them on typical factory HMIs under realistic lighting conditions. Finally, we use the meta-data to match the OCR output with the temporal, domain-dependent context of the data to improve the accuracy of the output. Our IIoT solution enables reliable and efficient data extraction, which will improve the performance of subsequent AI applications.
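
The sketch below (Python, using OpenCV and the pytesseract binding) illustrates the streaming-data recognition step for one pre-segmented HMI field. The ROI coordinates, digit whitelist, and the numeric-range sanity check stand in for the paper's meta-data-driven context matching and are assumptions, not the authors' exact pipeline.

```python
import cv2
import pytesseract

def read_hmi_field(frame, roi, whitelist="0123456789."):
    """Crop a pre-segmented HMI region of interest, binarize it, and
    OCR it with Tesseract (one of the engines the paper evaluates)."""
    x, y, w, h = roi
    field = frame[y:y + h, x:x + w]
    gray = cv2.cvtColor(field, cv2.COLOR_BGR2GRAY)
    # Otsu thresholding copes with varying shop-floor lighting
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    config = f"--psm 7 -c tessedit_char_whitelist={whitelist}"
    return pytesseract.image_to_string(binary, config=config).strip()

def validate(text, lo=0.0, hi=3000.0):
    """Domain-dependent sanity check against fixed meta-data, e.g. a
    spindle-speed field known to lie in [0, 3000] rpm (assumed range)."""
    try:
        value = float(text)
    except ValueError:
        return None
    return value if lo <= value <= hi else None
```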

Keywords: Human machine interface, industrial internet of things, internet of things, optical character recognition, video analytics.

95 E-Learning Recommender System Based on Collaborative Filtering and Ontology

Authors: John Tarus, Zhendong Niu, Bakhti Khadidja

Abstract:

In recent years, e-learning recommender systems have attracted great attention as a solution to the problem of information overload in e-learning environments and as a means of providing relevant recommendations to online learners. E-learning recommenders continue to play an increasing educational role in helping learners find appropriate learning materials to support the achievement of their learning goals. Although general recommender systems have recorded significant success in solving the problem of information overload in e-commerce domains and providing accurate recommendations, e-learning recommender systems still face issues arising from differences in learner characteristics such as learning style, skill level, and study level. Conventional recommendation techniques such as collaborative filtering and content-based filtering deal with only two types of entities, namely users and items, together with their ratings; they do not take learner characteristics into account in the recommendation process. Therefore, conventional recommendation techniques cannot make accurate and personalized recommendations in e-learning environments. In this paper, we propose a recommendation technique combining collaborative filtering and ontology to recommend personalized learning materials to online learners. The ontology is used to incorporate the learner characteristics into the recommendation process alongside the ratings, while collaborative filtering predicts ratings and generates recommendations. Furthermore, ontological knowledge is used by the recommender system at the initial stages, in the absence of ratings, to alleviate the cold-start problem. Evaluation results show that the proposed technique outperforms collaborative filtering on its own in terms of personalization and recommendation accuracy.
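
A minimal sketch of the blending idea follows (Python). It folds an ontology-derived learner-profile similarity into rating-based Pearson similarity and falls back on the profile term when no co-ratings exist (the cold-start case). The profile encoding and the weight alpha are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def hybrid_similarity(ratings_u, ratings_v, profile_u, profile_v, alpha=0.5):
    """Blend rating similarity with ontology-based learner-profile
    similarity (learning style, skill level, study level)."""
    mask = ~np.isnan(ratings_u) & ~np.isnan(ratings_v)   # co-rated items
    if mask.sum() >= 2:
        ru, rv = ratings_u[mask], ratings_v[mask]
        denom = np.std(ru) * np.std(rv)
        pearson = 0.0 if denom == 0 else np.corrcoef(ru, rv)[0, 1]
    else:
        pearson = 0.0   # cold start: rely on the ontology term alone
    # Profile similarity, e.g. fraction of matching ontology attributes
    onto = np.mean(np.array(profile_u) == np.array(profile_v))
    return alpha * pearson + (1 - alpha) * onto
```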

Keywords: Collaborative filtering, e-learning, ontology, recommender system.

94 Oscillation Effect of the Multi-stage Learning for the Layered Neural Networks and Its Analysis

Authors: Isao Taguchi, Yasuo Sugai

Abstract:

This paper proposes an efficient learning method for layered neural networks based on the selection of training data and on the input characteristics of an output-layer unit. Compared with more recent neural networks, such as pulse neural networks and quantum neuro-computation, the multilayer network is widely used due to its simple structure; however, when the learning objects are complicated, problems such as unsuccessful learning or excessively long learning times remain unsolved. Focusing on the input data during the learning stage, we undertook an experiment to identify the data that produce large errors and interfere with the learning process. Our method divides the learning process into several stages. In general, for complicated problems, the input characteristics to an output-layer unit oscillate during the learning process. The multi-stage learning method proposed by the authors for function approximation problems classifies the learning data in a phased manner according to their learnability prior to training the multilayer neural network, and its validity is demonstrated. Specifically, this paper verifies by computer experiments that both learning accuracy and learning time are improved when the BP method is used as the learning rule within the multi-stage learning method. In learning, oscillatory phenomena of the learning curve play an important role in learning performance, and the authors discuss the mechanisms by which these oscillations occur. Furthermore, by observing behaviour during learning, the authors discuss the reasons why the errors of some data remain large even after learning.
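
The sketch below (Python, scikit-learn) illustrates the phased idea: rank samples by how easily an initial network fits them, then train in stages that admit progressively harder data. The ranking rule and stage schedule are assumptions for illustration; the paper's learnability-based selection may differ.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def multi_stage_fit(X, y, n_stages=3, epochs_per_stage=200):
    """Staged BP training for function approximation: feed the network
    the easiest-to-learn samples first, then widen the training set."""
    net = MLPRegressor(hidden_layer_sizes=(20,), solver="sgd",
                       learning_rate_init=0.05)
    net.partial_fit(X, y)                            # initialize the model
    order = np.argsort(np.abs(net.predict(X) - y))   # easy -> hard
    for stage in range(1, n_stages + 1):
        take = order[: int(len(X) * stage / n_stages)]
        for _ in range(epochs_per_stage):
            net.partial_fit(X[take], y[take])
    return net
```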

Keywords: data selection, function approximation problem, multi-stage learning, neural network, voluntary oscillation.

93 Evaluating the Validity of Computational Fluid Dynamics Model of Dispersion in a Complex Urban Geometry Using Two Sets of Experimental Measurements

Authors: Mohammad R. Kavian Nezhad, Carlos F. Lange, Brian A. Fleck

Abstract:

This research presents a validation study of a computational fluid dynamics (CFD) model developed to simulate the scalar dispersion emitted from rooftop sources around the buildings at the University of Alberta North Campus. The ANSYS CFX code was used to perform the numerical simulation of the wind regime and pollutant dispersion by solving the 3D steady Reynolds-averaged Navier-Stokes (RANS) equations on a building-scale high-resolution grid. The validation study was performed in two steps. First, the CFD model performance in 24 cases (eight wind directions and three wind speeds) was evaluated by comparing the predicted flow fields with the available data from a previous measurement campaign at the North Campus, using the standard deviation method (SDM). The numerical model showed maximum average percent errors of approximately 53% and 37% for wind incident from the North and Northwest, respectively, while good agreement with the measurements was observed for the other six directions, with an average error of less than 30%. In the second step, the reliability of the implemented turbulence model, numerical algorithm, modeling techniques, and grid generation scheme was further evaluated using the Mock Urban Setting Test (MUST) dispersion dataset. Different statistical measures, including the fractional bias (FB), the mean geometric bias (MG), and the normalized mean square error (NMSE), were used to assess the accuracy of the predicted dispersion field. Our CFD results are in very good agreement with the field measurements.
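
For reference, the sketch below (Python) computes the three named validation measures in their usual air-quality definitions (e.g., Chang and Hanna); a perfect model gives FB = 0, MG = 1, NMSE = 0.

```python
import numpy as np

def dispersion_metrics(c_obs, c_pred):
    """Fractional bias, geometric mean bias and normalized mean square
    error between observed and predicted concentrations (both > 0)."""
    c_obs, c_pred = np.asarray(c_obs, float), np.asarray(c_pred, float)
    fb = 2.0 * (c_obs.mean() - c_pred.mean()) / (c_obs.mean() + c_pred.mean())
    mg = np.exp(np.mean(np.log(c_obs)) - np.mean(np.log(c_pred)))
    nmse = np.mean((c_obs - c_pred) ** 2) / (c_obs.mean() * c_pred.mean())
    return fb, mg, nmse
```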

Keywords: CFD, plume dispersion, complex urban geometry, validation study, wind flow.

92 Speaker Identification by Joint Statistical Characterization in the Log Gabor Wavelet Domain

Authors: Suman Senapati, Goutam Saha

Abstract:

Real-world Speaker Identification (SI) applications differ from ideal or laboratory conditions: perturbations lead to a mismatch between the training and testing environments and degrade performance drastically. Many strategies have been adopted to cope with acoustical degradation; the wavelet-based Bayesian marginal model is one of them. However, Bayesian marginal models cannot capture the inter-scale statistical dependencies between different wavelet scales, and simple nonlinear estimators for wavelet-based denoising assume that the wavelet coefficients in different scales are independent, whereas in reality the coefficients exhibit significant inter-scale dependency. This paper models this inter-scale dependency with a Circularly Symmetric Probability Density Function (CS-PDF) related to the family of Spherically Invariant Random Processes (SIRPs) in the Log Gabor Wavelet (LGW) domain, and the corresponding joint shrinkage estimator is derived via Maximum a Posteriori (MAP) estimation. A framework based on these is proposed to denoise speech signals for automatic speaker identification. The robustness of the proposed framework is tested for text-independent speaker identification on 100 speakers of the POLYCOST and 100 speakers of the YOHO speech databases in three different noise environments. Experimental results show that the proposed estimator yields a higher improvement in identification accuracy than other estimators on a popular Gaussian Mixture Model (GMM) based speaker model with Mel-Frequency Cepstral Coefficient (MFCC) features.
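
As background on the LGW domain, the sketch below (Python) builds the standard 1-D log-Gabor transfer function, G(f) = exp(-(ln(f/f0))^2 / (2 ln(sigma/f0)^2)); the centre frequency and bandwidth ratio are illustrative values, and the paper's shrinkage estimator itself is not reproduced here.

```python
import numpy as np

def log_gabor_filter(n, f0=0.1, sigma_ratio=0.65):
    """Frequency response of a 1-D log-Gabor filter for a length-n
    signal; f0 is the centre frequency in cycles/sample."""
    f = np.fft.rfftfreq(n)
    g = np.zeros_like(f)
    nz = f > 0                      # log-Gabor is zero at DC by design
    g[nz] = np.exp(-(np.log(f[nz] / f0) ** 2)
                   / (2.0 * np.log(sigma_ratio) ** 2))
    return g

# Filtering a speech frame in the frequency domain:
# coeffs = np.fft.irfft(np.fft.rfft(frame) * log_gabor_filter(len(frame)))
```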

Keywords: Speaker Identification, Log Gabor Wavelet, Bayesian Bivariate Estimator, Circularly Symmetric Probability Density Function, SIRP.

91 Computational Method for Annotation of Protein Sequence According to Gene Ontology Terms

Authors: Razib M. Othman, Safaai Deris, Rosli M. Illias

Abstract:

Annotation of a protein sequence is pivotal for understanding its function. Manual annotation by curators is a hard and time-consuming task, and its accuracy is still questionable owing to weaker evidence strength. A number of computational methods and tools have been developed to tackle this challenging task; however, they require high-cost hardware, are difficult for bioscientists to set up, or depend on time-intensive, blind sequence-similarity searches such as the Basic Local Alignment Search Tool. This paper introduces a new method of assigning highly correlated Gene Ontology terms of annotated protein sequences to partially annotated or newly discovered protein sequences. The method is based entirely on Gene Ontology data and annotations. Two problems had to be solved to achieve this. The first relates to splitting the single monolithic Gene Ontology RDF/XML file into a set of smaller files that are easier to access and process, so that these files can be enriched with protein sequences and Inferred from Electronic Annotation evidence associations. The second involves searching for a set of semantically similar Gene Ontology terms for a given query. The details of the macro- and micro-level problems involved and their solutions, together with the objectives of this study, are described. This paper also describes protein sequence annotation and the Gene Ontology. The methodology of this study and the Gene Ontology based protein sequence annotation tool, named extended UTMGO, are presented. Furthermore, its basic version, a Gene Ontology browser based on semantic similarity search, is also introduced.
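
To make the second problem concrete, the sketch below (Python) computes one common graph-based similarity between two GO terms: the Jaccard overlap of their ancestor sets in the ontology graph. The extended UTMGO tool's actual measure may differ; this only illustrates what "semantically similar terms" search means.

```python
def go_ancestors(term, parents):
    """All ancestors of a GO term given a child -> parents mapping."""
    seen, stack = set(), [term]
    while stack:
        for p in parents.get(stack.pop(), ()):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def term_similarity(t1, t2, parents):
    """Jaccard similarity of the ancestor sets of two GO terms."""
    a1 = go_ancestors(t1, parents) | {t1}
    a2 = go_ancestors(t2, parents) | {t2}
    return len(a1 & a2) / len(a1 | a2)

# Toy fragment of the is-a hierarchy (illustrative identifiers):
parents = {"GO:0006915": ["GO:0012501"], "GO:0012501": ["GO:0008219"]}
print(term_similarity("GO:0006915", "GO:0012501", parents))
```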

Keywords: Automatic clustering, bioinformatics tool, gene ontology, protein sequence annotation, semantic similarity search.

90 Competitors’ Influence Analysis of a Retailer by Using Customer Value and Huff’s Gravity Model

Authors: Yepeng Cheng, Yasuhiko Morimoto

Abstract:

Customer relationship analysis is vital for retail stores, especially for supermarkets. Point of sale (POS) systems make it possible to record the daily purchasing behaviors of customers in an identification point of sale (ID-POS) database, which can be used to analyze the customer behaviors of a supermarket. The customer value is an indicator based on the ID-POS database for detecting the customer loyalty of a store. In general, there are many supermarkets in a city, and nearby competitor supermarkets significantly affect the customer value of a supermarket's customers; however, it is impossible to obtain detailed ID-POS databases of competitor supermarkets. This study first focused on the customer value and the distance between a customer's home and the supermarkets in a city, and then constructed models based on logistic regression analysis to analyze the correlations between distance and purchasing behaviors using only the POS database of one supermarket chain. During the modeling process, three primary problems arose: the incomparability of customer values, multicollinearity among the customer value and distance data, and the number of valid partial regression coefficients. The improved customer value, Huff's gravity model, and the inverse attractiveness frequency are used to solve these problems. This paper presents three types of models based on these three methods for loyal customer classification and competitors' influence analysis. In numerical experiments, all types of models are useful for loyal customer classification, and the model that incorporates all three methods is the best for evaluating the influence of nearby supermarkets on customers' purchasing at a supermarket chain, from the viewpoint of valid partial regression coefficients and accuracy.
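
For readers unfamiliar with Huff's gravity model, the sketch below (Python, with assumed store data) evaluates its standard form: the probability that a customer patronizes store j is P_j = (A_j / d_j^lambda) / sum_k (A_k / d_k^lambda), where A is store attractiveness (e.g., floor area) and d the home-to-store distance; lambda = 2 is an illustrative distance-decay exponent.

```python
import numpy as np

def huff_probabilities(attractiveness, distances, lam=2.0):
    """Huff's gravity model patronage probabilities for one customer."""
    a = np.asarray(attractiveness, float)
    d = np.asarray(distances, float)
    u = a / d**lam                    # utility of each store
    return u / u.sum()

# One customer, three supermarkets (assumed data): own chain 2000 m^2 at
# 0.8 km, competitors 3500 m^2 at 1.5 km and 1200 m^2 at 0.5 km
print(huff_probabilities([2000, 3500, 1200], [0.8, 1.5, 0.5]))
```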

Keywords: Customer value, Huff's Gravity Model, POS, retailer.

89 A Two-Phase Flow Interface Tracking Algorithm Using a Fully Coupled Pressure-Based Finite Volume Method

Authors: Shidvash Vakilipour, Scott Ormiston, Masoud Mohammadi, Rouzbeh Riazi, Kimia Amiri, Sahar Barati

Abstract:

Two-phase and multi-phase flows are common flow types in fluid mechanics engineering. Among the basic and applied problems of these flow types, two-phase parallel flow is one in which two immiscible fluids flow in the vicinity of each other; the fluid properties (e.g., density, viscosity, and temperature) differ on the two sides of the interface. The most challenging part of the numerical simulation of two-phase flow is determining the location of the interface accurately. In the present work, a coupled interface tracking algorithm is developed based on the Arbitrary Lagrangian-Eulerian (ALE) approach using a cell-centered, pressure-based, coupled solver. To validate this algorithm, an analytical solution for fully developed two-phase flow in the presence of gravity is derived, and the results of the numerical simulation of this flow are compared with the analytical solution under various flow conditions. The simulations show good accuracy of the algorithm despite the use of a relatively coarse, uniform grid. Temporal variations of the interface profile toward the steady-state solution show that a greater difference between the fluid properties (especially the dynamic viscosity) results in larger traveling waves. Gravity-effect studies also show that favorable gravity reduces the thickness of the heavier fluid, whereas adverse gravity increases it with respect to the zero-gravity condition; however, the magnitude of the variation under favorable gravity is much larger than under adverse gravity.

Keywords: Coupled solver, gravitational force, interface tracking, Reynolds number to Froude number, two-phase flow.

88 Accuracy of Peak Demand Estimates for Office Buildings Using eQUEST

Authors: Mahdiyeh Zafaranchi, Ethan S. Cantor, William T. Riddell, Jess W. Everett

Abstract:

The New Jersey Department of Military and Veterans Affairs (NJ DMAVA) operates over 50 facilities throughout the state of New Jersey, US. NJ DMAVA is under a mandate to move toward decarbonization, which will eventually include eliminating the use of natural gas and other fossil fuels for heating. At the same time, the organization requires increased resiliency to electric grid disruption. These competing goals necessitate adopting on-site renewables such as photovoltaic and geothermal power, as well as implementing power control strategies through microgrids. Planning for these changes requires a detailed understanding of current and future electricity use on yearly, monthly, and shorter time scales, as well as a breakdown of consumption by heating, ventilation, and air conditioning (HVAC) equipment. This paper discusses case studies of two buildings that were simulated using the QUick Energy Simulation Tool (eQUEST). Both buildings use electricity from the grid and photovoltaics; one building also uses natural gas. While electricity use data are available in hourly intervals and natural gas data in monthly intervals, the simulations were developed using monthly and yearly totals; this approach was chosen to reflect the information available for most NJ DMAVA facilities. Once completed, the simulation results were compared to metrics recommended by several organizations for validating energy use simulations. In addition to the yearly and monthly totals, the simulated peak demands were compared to the actual monthly peak demand values, and the simulations produced monthly peak demand values within 30% of the measured values. These benchmarks will help assess future energy planning efforts for NJ DMAVA.
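
As an example of such validation metrics, the sketch below (Python) computes the normalized mean bias error and CV(RMSE) commonly recommended for calibrating building energy models (e.g., ASHRAE Guideline 14); whether these are the exact metrics the paper used is not stated in the abstract.

```python
import numpy as np

def calibration_metrics(measured, simulated):
    """NMBE and CV(RMSE) in percent, with the usual (n - 1) normalization.
    Typical monthly-data acceptance limits are about +/-5% NMBE and
    15% CV(RMSE)."""
    m = np.asarray(measured, float)
    s = np.asarray(simulated, float)
    nmbe = 100.0 * (m - s).sum() / ((len(m) - 1) * m.mean())
    cvrmse = 100.0 * np.sqrt(((m - s) ** 2).sum() / (len(m) - 1)) / m.mean()
    return nmbe, cvrmse
```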

Keywords: Building Energy Modeling, eQUEST, peak demand, smart meters.

87 Structural Behavior of Precast Foamed Concrete Sandwich Panel Subjected to Vertical In-Plane Shear Loading

Authors: Y. H. Mugahed Amran, Raizal S. M. Rashid, Farzad Hejazi, Nor Azizi Safiee, A. A. Abang Ali

Abstract:

Experimental and analytical studies were carried out to examine the structural behavior of precast foamed concrete sandwich panels (PFCSP) under vertical in-plane shear load. Six full-scale PFCSP specimens were fabricated with varying heights to study an important parameter, the slenderness ratio (H/t). The production technique of the PFCSP and the test setup procedure are described. The experimental results were analysed in terms of in-plane shear strength capacity, load-deflection profile, load-strain relationship, slenderness ratio, shear cracking patterns, and mode of failure. A finite element analysis (FEA) was implemented, and theoretical ultimate in-plane shear strengths were calculated using the adopted ACI 318 equation for reinforced concrete walls in order to predict the in-plane shear strength of the PFCSP. A decrease in slenderness ratio from 24 to 14 produced increases of 26.51% and 21.91% in the ultimate in-plane shear strength capacity in the experiments and the FEA models, respectively. The experimental results, FEA model data, and theoretical values were compared and showed close agreement with a high degree of accuracy. Therefore, on the basis of the results obtained, the PFCSP wall has potential as an alternative to the conventional load-bearing wall system.
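
The sketch below (Python) evaluates an ACI 318-type nominal in-plane shear strength for a reinforced wall in SI units, Vn = Acv (alpha_c lambda sqrt(f'c) + rho_t fyt), with alpha_c = 0.25 for hw/lw <= 1.5 and 0.17 for hw/lw >= 2.0 (interpolated between). Applying this normal-weight-concrete equation to foamed concrete panels, as the paper does, is an adopted approximation, and the exact code edition used is an assumption here.

```python
import math

def aci_wall_shear_strength(acv_mm2, fc_mpa, rho_t, fyt_mpa, hw_lw, lam=1.0):
    """Nominal in-plane shear strength of a wall, returned in kN."""
    if hw_lw <= 1.5:
        alpha_c = 0.25
    elif hw_lw >= 2.0:
        alpha_c = 0.17
    else:  # linear interpolation between the two limits
        alpha_c = 0.25 + (0.17 - 0.25) * (hw_lw - 1.5) / 0.5
    vn_newton = acv_mm2 * (alpha_c * lam * math.sqrt(fc_mpa)
                           + rho_t * fyt_mpa)
    return vn_newton / 1000.0
```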

Keywords: Deflection profiles, foamed concrete, load-strain relationships, precast foamed concrete sandwich panel, slenderness ratio, vertical in-plane shear strength capacity.

86 Remaining Useful Life Estimation of Bearings Based on Nonlinear Dimensional Reduction Combined with Timing Signals

Authors: Zhongmin Wang, Wudong Fan, Hengshan Zhang, Yimin Zhou

Abstract:

In data-driven prognostic methods, the accuracy of the estimated remaining useful life of bearings mainly depends on the performance of the health indicators, which are usually fused from statistical features extracted from vibration signals. However, existing health indicators have two drawbacks: (1) the statistical features in different ranges contribute differently to the health indicator, so expert knowledge is required to extract the features; and (2) when convolutional neural networks are used to extract time-frequency features from the signals, the time-series nature of the signals is not considered. To overcome these drawbacks, this study proposes a method combining a convolutional neural network with a gated recurrent unit to extract time-frequency image features; the extracted features are used to construct a health indicator and predict the remaining useful life of bearings. First, the original signals are converted into time-frequency images using the continuous wavelet transform, forming the original feature sets. Second, the convolutional and pooling layers of the convolutional neural network select the most sensitive features of the time-frequency images from the original feature sets. Finally, these selected features are fed into the gated recurrent unit to construct the health indicator. The results show that the proposed method outperforms related studies that used the same bearing dataset, provided by PRONOSTIA.
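
A minimal PyTorch sketch of this pipeline follows: convolution and pooling layers compress each CWT image, and a GRU over the image sequence emits a health indicator. All layer sizes are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CnnGruHealthIndicator(nn.Module):
    """CNN feature extractor over CWT images + GRU over time."""
    def __init__(self, hidden=32):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),   # -> 16*4*4
        )
        self.gru = nn.GRU(16 * 4 * 4, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # health indicator in [0, 1]

    def forward(self, x):                  # x: (batch, time, 1, H, W)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).view(b, t, -1)
        out, _ = self.gru(feats)
        return torch.sigmoid(self.head(out[:, -1]))
```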

Keywords: Continuous wavelet transform, convolutional neural network, gated recurrent unit, health indicators, remaining useful life.

85 Machine Learning Techniques for COVID-19 Detection: A Comparative Analysis

Authors: Abeer Aljohani

Abstract:

The spread of COVID-19 has been one of the most extreme pandemics across the globe. The disease, caused by a contagious coronavirus, continuously mutates into numerous variants; the B.1.1.529 variant, labeled Omicron, was detected in South Africa. The huge spread of COVID-19 has affected many lives and placed exceptional pressure on healthcare systems worldwide, and everyday life and the global economy have been at stake. The large number of COVID-19 cases has placed a huge burden on hospitals and health workers. To reduce this burden, this paper predicts COVID-19 disease based on the symptoms and medical history of the patient. As machine learning (ML) is a widely accepted area that gives promising results in healthcare, this research presents an architecture for COVID-19 detection using ML techniques integrated with feature dimensionality reduction. The paper uses a standard University of California Irvine (UCI) dataset comprising the symptoms of 5434 patients and compares several supervised ML techniques on the presented architecture. The architecture uses a 10-fold cross-validation process for generalization and the Principal Component Analysis (PCA) technique for feature reduction. Standard measures are used to evaluate the proposed architecture, including F1-score, precision, accuracy, recall, the Receiver Operating Characteristic (ROC), and the Area Under the Curve (AUC). The results show that decision tree, random forest, and neural network models outperform all the other state-of-the-art ML techniques tested; this result can be used to identify COVID-19 infection cases effectively.
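
The evaluated architecture maps naturally onto a scikit-learn pipeline; the sketch below shows one of the compared classifiers behind PCA with 10-fold cross-validation. The 95% retained-variance setting and the classifier hyperparameters are assumptions, and X, y stand for the symptom matrix and labels, which are not bundled here.

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

model = Pipeline([
    ("scale", StandardScaler()),
    ("pca", PCA(n_components=0.95)),   # keep 95% variance (assumed setting)
    ("clf", RandomForestClassifier(n_estimators=200, random_state=0)),
])
# scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
# print(scores.mean())
```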

Keywords: Supervised machine learning, COVID-19 prediction, healthcare analytics, Random Forest, Neural Network.

84 Dispersion Rate of Spilled Oil in Water Column under Non-Breaking Water Waves

Authors: Hanifeh Imanian, Morteza Kolahdoozan

Abstract:

The purpose of this study is to present a mathematical expression for calculating the dispersion rate of spilled oil in the water column under non-breaking waves. A multiphase numerical model, whose hydraulic calculations have previously been validated, was applied so that the waves and the oil phase were computed concurrently. More than 200 scenarios of oil spilling in wavy waters were simulated using the multiphase numerical model, and the outcomes were collected in a database. The recorded results were investigated to identify the major parameters affecting vertical oil dispersion, and six parameters were identified as the main independent factors. Furthermore, statistical tests were conducted to identify relationships between the dependent variable (the dispersed oil mass in the water column) and the independent variables (the water wave specifications, comprising wave height, length, and period, and the spilled oil characteristics, including density, viscosity, and spilled oil mass). Finally, a mathematical-statistical relationship is proposed to predict the amount of dispersed oil in marine waters. To verify the proposed expression, a laboratory case available in the literature was selected; the oil mass rate penetrating the water body computed by the suggested regression showed good agreement with the experimental data. The validated mathematical-statistical expression is a useful tool for predicting oil dispersion in oil spill events in marine areas.

Keywords: Dispersion, marine environment, mathematical-statistical relationship, oil spill.

83 Appraisal of Trace Elements in Scalp Hair of School Children in Kandal Province, Cambodia

Authors: A. Yavar, S. Sarmani, K. S. Khoo

Abstract:

The analysis of trace elements in human hair provides crucial insights into an individual's nutritional status and environmental exposure. This research aimed to examine the levels of toxic and essential elements in the scalp hair of school children aged 12-17 from three villages (Anglong Romiot (AR), Svay Romiot (SR), and Kampong Kong (KK)) in Cambodia's Kandal province, a region where residents are especially vulnerable to toxic elements, notably arsenic (As), due to their dietary habits, lifestyle, and environmental conditions. The scalp hair samples were analyzed using the k0-Instrumental Neutron Activation Analysis method (k0-INAA), with a six-hour irradiation in the Malaysian Nuclear Agency (MNA) research reactor followed by measurement with a High Purity Germanium (HPGe) detector to identify the gamma peaks of the radionuclides. The analysis identified 31 elements in the hair samples from the study area, including As, Au, Br, Ca, Ce, Co, Dy, Eu-152m, Hg-197, Hg-203, Ho, Ir, K, La, Lu, Mn, Na, Pa, Pt-195m, Pt-197, Sb, Sc-46, Sc-47, Sm, Sn-117m, W-181, W-187, Yb-169, Yb-175, Zn, and Zn-69m. The accuracy of the method was verified by analyzing ERM-DB001 human hair as a Certified Reference Material (CRM), with results consistent with the certified values. Given the prevalent arsenic pollution in the research area, the study also examined the relationship between the concentration of As and the other elements using Pearson's correlation test. The outcomes offer a comprehensive resource for future investigations of toxic and essential elements in the region. The main body of the paper provides a more extensive discussion of the implications of the arsenic pollution and the observed correlations, to enhance understanding and inform future research directions.

Keywords: Human scalp hair, toxic and essential elements, Kandal Province, Cambodia, k0-Instrumental Neutron Activation Method.

82 Probe-Assisted Axillary Lymph Node Biopsy Compared with Axillary Dissection in Breast Cancer: A Retrospective Study from the West of Iran

Authors: Morteza Alizadeh Foroutan, Hassan Moayeri, Keivan Sabooni, Motahareh Rouhi Ardeshiri

Abstract:

Breast cancer (BC) incidence is increasing annually in various parts of the world, and sentinel lymph node biopsy (SLNB) has become the new standard of care as a staging procedure. In the present study, the gamma probe technique was used for SLNB as a safe method with greater accuracy and fewer complications. The study sought to compare the results of two surgical techniques, namely axillary lymph node dissection (ALND) and SLNB, including the epidemiological results and clinicopathological features of BC patients from the western provinces of Iran. In total, 420 women with BC who were referred to the breast clinic in Sanandaj, Kurdistan province, during 2017-2021 were identified; of these, 318 patients underwent breast surgery, and 277 participated in the current study. Patients were divided into those undergoing ALND and those undergoing SLNB. The criteria for complete dissection or axillary biopsy using the gamma probe were based on the results of clinical examinations and the presence of palpable lymph nodes. Postoperative complications occurred in 58 (18.9%) cases, comprising 15 (25.9%) and 43 (74.1%) patients in the SLNB and ALND groups, respectively (P = 0.74). Seroma (60.3%) was the most frequently reported complication in both groups. Most patients had tumors in the upper-outer quadrant of the left breast. The mean tumor dimension in the SLNB and ALND groups was 2.1 ± 1.3 cm and 3.2 ± 1.8 cm, respectively (P = 0.003). The benefits of breast-conserving surgery (BCS) with the SLNB technique are clearly undeniable, and it can be considered a method with fewer complications and a better prognosis. Accordingly, SLNB and BCS are favorable methods that can be performed along with the gamma probe technique, which is safe and accurate.

Keywords: Breast cancer, Sentinel lymph node biopsy, Axillary lymph node dissection, Gamma probe.

81 Comparison of Different Hydrograph Routing Techniques in XPSTORM Modelling Software: A Case Study

Authors: Fatema Akram, Mohammad Golam Rasul, Mohammad Masud Kamal Khan, Md. Sharif Imam Ibne Amir

Abstract:

A variety of routing techniques are available to develop surface runoff hydrographs from rainfall. The selection of the runoff routing method is vital, as it is directly related to the type of watershed and the required degree of accuracy. Different modelling software packages are available to explore the rainfall-runoff process in urban areas. XPSTORM, a link-node based, integrated stormwater modelling package, was used in this study to develop the surface runoff hydrograph for a golf course area located in Rockhampton, Central Queensland, Australia. Four commonly used methods, namely SWMM runoff, Kinematic wave, Laurenson, and Time-Area, were employed to generate the runoff hydrograph for the design storm of the study area. In the runoff mode of XPSTORM, the rainfall, infiltration, evaporation, and depression storage for the subcatchments were simulated, and the runoff from each subcatchment to its collection node was calculated. The simulation results are presented, discussed, and compared. The total surface runoff generated by the SWMM runoff, Kinematic wave, and Time-Area methods is reasonably close, which indicates that any of these methods can be used to develop the runoff hydrograph of the study area. The Laurenson method produces comparatively less total surface runoff but the highest runoff peak of all the methods, which may be suitable for hilly regions. Although the Laurenson hydrograph technique is a widely accepted surface runoff routing technique in Queensland, Australia, extensive investigation with detailed topographic and hydrologic data is recommended in order to assess its suitability for the case study area.
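
To illustrate one of the four techniques, the sketch below (Python, with assumed data) implements Time-Area routing in its textbook form: the runoff hydrograph is the convolution of the rainfall-excess hyetograph with the catchment's time-area histogram.

```python
import numpy as np

def time_area_hydrograph(excess_mm, area_km2_per_step):
    """Time-Area routing: convolve rainfall excess per time step (mm)
    with the contributing area per travel-time step (km^2). Returns
    runoff volume per time step in m^3 (1 mm over 1 km^2 = 1000 m^3);
    divide by the step length for a flow rate."""
    return np.convolve(excess_mm, area_km2_per_step) * 1e3

# e.g. 3 steps of excess over a 4-step time-area histogram (assumed data)
print(time_area_hydrograph([2.0, 5.0, 1.0], [0.1, 0.3, 0.4, 0.2]))
```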

Keywords: ARI, design storm, IFD, rainfall temporal pattern, routing techniques, surface runoff, XPSTORM.

80 Artificial Neural Network Modeling of a Closed Loop Pulsating Heat Pipe

Authors: Vipul M. Patel, Hemantkumar B. Mehta

Abstract:

Technological innovations in the electronics world demand novel heat transfer devices that are compact, simple in design, inexpensive, and effective. The Closed Loop Pulsating Heat Pipe (CLPHP) is a passive phase-change heat transfer device with the potential to transfer heat quickly and efficiently from source to sink. The thermal performance of a CLPHP is governed by various parameters such as the number of U-turns, orientation, input heat, working fluid, and filling ratio. The present paper is an attempt to predict the thermal performance of a CLPHP using an Artificial Neural Network (ANN). The filling ratio and heat input are taken as input parameters, while the thermal resistance is set as the target parameter. The types of neural networks considered in the present paper are radial basis, generalized regression, linear layer, cascade forward back propagation, feed forward back propagation, feed forward distributed time delay, layer recurrent, and Elman back propagation. Linear, logistic sigmoid, tangent sigmoid, and radial basis Gaussian functions are used as transfer functions. Prediction accuracy is measured against experimental data reported in the open literature in terms of the Mean Absolute Relative Deviation (MARD). The predictions of a generalized regression ANN model with a spread constant of 4.8 are found to agree with the experimental data, with a MARD within ±1.81%.
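
For reference, the accuracy measure has a simple closed form; the sketch below (Python, with illustrative thermal resistance values) computes MARD = (100/N) * sum(|R_pred - R_exp| / R_exp).

```python
import numpy as np

def mard(predicted, experimental):
    """Mean Absolute Relative Deviation in percent."""
    p = np.asarray(predicted, float)
    e = np.asarray(experimental, float)
    return 100.0 * np.mean(np.abs(p - e) / e)

# e.g. thermal resistances in K/W (illustrative values)
print(mard([0.52, 0.60, 0.47], [0.50, 0.62, 0.46]))
```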

Keywords: ANN models, CLPHP, filling ratio, generalized regression, spread constant.

79 Experimental Studies of Sigma Thin-Walled Beams Strengthen by CFRP Tapes

Authors: Katarzyna Rzeszut, Ilona Szewczak

Abstract:

A review of selected methods of strengthening steel structures with carbon fiber reinforced polymer (CFRP) tapes and an analysis of the influence of composite materials on steel thin-walled elements are presented in this paper. The study also addresses the problem of applying fast and effective strengthening methods to steel structures made of thin-walled profiles. It is worth noting that strengthening thin-walled structures is very complex, due to the inability to make welded joints in this type of element and the limited possibilities for applying mechanical fasteners. Moreover, structures made of thin-walled cross-sections are highly sensitive to imperfections and prone to interactive buckling, which may substantially reduce the critical load capacity. Because of the lack of commonly used and recognized modern methods for strengthening thin-walled steel structures, the authors performed experimental studies of thin-walled sigma profiles strengthened with CFRP tapes. The paper presents the experimental stand and the preliminary results of laboratory tests analysing the effectiveness of strengthening steel beams made of thin-walled sigma profiles with CFRP tapes. The study includes six beams made of cold-rolled sigma profiles with a height of 140 mm, a wall thickness of 2.5 mm, and a length of 3 m, subjected to a uniformly distributed load. Four beams were strengthened with the carbon fiber tape Sika CarboDur S, while the other two were tested without strengthening to obtain reference results. Based on the obtained results, the effectiveness of the applied composite materials for strengthening thin-walled structures was evaluated.

Keywords: CFRP tapes, sigma profiles, steel thin-walled structures, strengthening.

78 A Modelling Study of the Photochemical and Particulate Pollution Characteristics above a Typical Southeast Mediterranean Urban Area

Authors: Kiriaki-Maria Fameli, Vasiliki D. Assimakopoulos, Vasiliki Kotroni

Abstract:

The Greater Athens Area (GAA) faces photochemical and particulate pollution episodes resulting from the combined effects of local pollutant emissions, regional pollution transport, synoptic circulation, and topographic characteristics. The area has undergone significant changes since the Athens 2004 Olympic Games because of large-scale infrastructure works that led to a shift of population to areas previously characterized as rural, an increase in the traffic fleet, and the operation of highways. However, few recent modelling studies have been performed, owing to the lack of an accurate, updated emission inventory. The photochemical modelling system MM5/CAMx was applied to study the photochemical and particulate pollution characteristics above the GAA for two distinct ten-day periods in the summers of 2006 and 2010, during which air pollution episodes occurred. A new, updated emission inventory based on official data was used. Comparison of the modelled results with measurements revealed the importance and accuracy of the new Athens emission inventory relative to previous modelling studies. The model managed to reproduce the local meteorological conditions and the daily ozone and particulate fluctuations at different locations across the GAA. Higher ozone levels were found in suburban and rural areas, as well as over the sea to the south of the basin. Concerning PM10, high concentrations were computed at the city centre and the southeastern suburbs, in agreement with the measured data. Source apportionment analysis showed that different sources contribute to the ozone levels, with local sources (traffic, port activities) affecting its formation.

Keywords: Photochemical modelling, urban pollution, greater Athens area, MM5/CAMx.

77 Integration of Big Data to Predict Transportation for Smart Cities

Authors: Sun-Young Jang, Sung-Ah Kim, Dongyoun Shin

Abstract:

Intelligent transportation systems are essential for building smarter cities, and machine-learning-based transportation prediction is a highly promising approach because it makes invisible aspects of the network visible. In this context, this research aims to build a prototype model that predicts transportation network behaviour using big data and machine learning technology; among urban transportation systems, it focuses on the bus system. The research problem is that existing headway models cannot respond to dynamic transportation conditions, so bus delays often occur. To overcome this problem, a prediction model is presented that finds patterns of bus delay using machine learning on the following data sets: traffic, weather, and bus status. This research presents a flexible headway model to predict bus delay and analyzes the results. The prototype model is built on real-time bus data gathered through public data portals and real-time Application Program Interfaces (APIs) provided by the government; these data are the fundamental resources for organizing interval pattern models of bus operations together with traffic environment factors (road speeds, station conditions, weather, and real-time bus operating information). The prototype model was designed with a machine learning tool (RapidMiner Studio) and tested for bus delay prediction. This research presents experiments to increase the prediction accuracy of bus headways by analyzing urban big data. Big data analysis is important for predicting the future and finding correlations by processing huge amounts of data; therefore, based on the analysis method, this research represents an effective use of machine learning and urban big data to understand urban dynamics.
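
A minimal sketch of the delay-prediction step follows (Python, scikit-learn), using the feature families named above. The column names and the regressor choice are assumptions; the paper built its model in RapidMiner Studio rather than in code.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

def train_delay_model(df: pd.DataFrame):
    """Fit a regressor that maps traffic/weather/bus-status features
    (hypothetical column names) to bus delay in seconds."""
    features = ["road_speed", "station_load", "rainfall", "temperature",
                "scheduled_headway", "prev_bus_delay"]
    X_train, X_test, y_train, y_test = train_test_split(
        df[features], df["delay_s"], test_size=0.2, random_state=0)
    model = GradientBoostingRegressor().fit(X_train, y_train)
    print("R^2 on held-out trips:", model.score(X_test, y_test))
    return model
```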

Keywords: Big data, bus headway prediction, machine learning, public transportation.

76 A Damage Level Assessment Model for Extra High Voltage Transmission Towers

Authors: Huan-Chieh Chiu, Hung-Shuo Wu, Chien-Hao Wang, Yu-Cheng Yang, Ching-Ya Tseng, Joe-Air Jiang

Abstract:

Power failure resulting from tower collapse due to violent seismic events can bring enormous and inestimable losses. The Chi-Chi earthquake, for example, struck Taiwan severely on September 21, 1999, and caused huge damage to the power system; nearly 10% of extra high voltage (EHV) transmission towers were damaged in the earthquake. The seismic hazards of EHV transmission towers should therefore be monitored and evaluated. The ultimate goal of this study is to establish a damage level assessment model for EHV transmission towers. Earthquake data provided by the Taiwan Central Weather Bureau serve as a reference and lay the foundation for the subsequent earthquake simulations and analyses. Once an earthquake occurs, parameters related to the damage level at each point of an EHV tower are simulated and analyzed using the data from monitoring stations. Through the Fourier transform, the seismic wave is analyzed and decomposed into its frequency components, and the data are presented as a response spectrum. With this method, the seismic frequency that damages EHV towers the most is clearly identified. An estimation model is built to determine the damage level caused by a future seismic event. Finally, instead of relying on visual inspection, the proposed model can provide a power company with damage information for a transmission tower; the manpower required for visual inspection can be reduced, and the accuracy of the damage level estimation can be substantially improved. Such a model is greatly useful for structural health and construction monitoring because of its advantages for long-term evaluation of structural characteristics and long-term damage detection.
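
The spectral step above reduces, at its simplest, to finding the frequency carrying the most energy in a ground-motion record; the sketch below (Python, with a synthetic record) illustrates that piece only, not the full response-spectrum or damage-estimation model.

```python
import numpy as np

def dominant_frequency(accel, fs):
    """FFT a ground-acceleration record and return the frequency with
    the largest spectral magnitude; fs is the sampling rate in Hz."""
    spectrum = np.abs(np.fft.rfft(accel))
    freqs = np.fft.rfftfreq(len(accel), d=1.0 / fs)
    return freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin

# e.g. a synthetic 2 Hz-dominated record sampled at 100 Hz
t = np.arange(0, 20, 0.01)
accel = np.sin(2 * np.pi * 2.0 * t) + 0.3 * np.random.randn(t.size)
print(dominant_frequency(accel, fs=100.0))      # ~2.0 Hz
```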

Keywords: Smart grid, EHV transmission tower, response spectrum, damage level monitoring.

75 Improving Fake News Detection Using K-means and Support Vector Machine Approaches

Authors: Kasra Majbouri Yazdi, Adel Majbouri Yazdi, Saeid Khodayi, Jingyu Hou, Wanlei Zhou, Saeed Saedy

Abstract:

Fake news and false information are big challenges for all types of media, especially social media. There is a lot of false information, fake likes, views, and duplicated accounts, as big social networks such as Facebook and Twitter have admitted. Much of the information appearing on social media is doubtful and in some cases misleading, and it needs to be detected as soon as possible to avoid a negative impact on society. The dimensions of fake news datasets are growing rapidly, so to detect false information with less computation time and complexity, the dimensionality needs to be reduced. One of the best techniques for reducing data size is feature selection, whose aim is to choose a feature subset from the original set that improves classification performance. In this paper, a feature selection method is proposed that integrates K-means clustering and Support Vector Machine (SVM) approaches and works in four steps. First, the similarities between all features are calculated. Then, the features are divided into several clusters. Next, the final feature set is selected from all the clusters, and finally, fake news is classified based on the final feature subset using the SVM method. The proposed method was evaluated by comparing its performance with other state-of-the-art methods on several benchmark datasets, and the outcome showed better classification of false information. The detection performance improved in two respects: the detection runtime decreased, and the classification accuracy increased because of the elimination of redundant features and the reduction of the dataset dimensionality.
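
The four steps map onto a short sketch (Python, scikit-learn): cluster the features (k-means on the transposed feature matrix), keep one representative per cluster, and classify with an SVM. Picking the feature closest to its centroid as the representative is an assumption, not necessarily the paper's selection rule.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def cluster_select_features(X, n_clusters=20):
    """Cluster columns of X by similarity and return the index of one
    representative feature per cluster (nearest to the centroid)."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X.T)
    selected = []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(X.T[members] - km.cluster_centers_[c], axis=1)
        selected.append(members[np.argmin(dists)])
    return np.array(selected)

# cols = cluster_select_features(X_train)
# clf = SVC(kernel="rbf").fit(X_train[:, cols], y_train)
# accuracy = clf.score(X_test[:, cols], y_test)
```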

Keywords: Fake news detection, feature selection, support vector machine, K-means clustering, machine learning, social media.

74 Phosphine Mortality Estimation for Simulation of Controlling Pest of Stored Grain: Lesser Grain Borer (Rhyzopertha dominica)

Authors: Mingren Shi, Michael Renton

Abstract:

There is a worldwide need to develop sustainable management strategies to control pest infestation and the development of phosphine (PH3) resistance in the lesser grain borer (Rhyzopertha dominica). Computer simulation models can provide a relatively fast, safe, and inexpensive way to weigh the merits of various management options. However, the usefulness of simulation models relies on the accurate estimation of important model parameters, such as mortality. Both concentration and time of exposure are important in determining mortality in response to a toxic agent. Recent research indicated the existence of two resistance phenotypes of R. dominica in Australia, weak and strong, and revealed that the presence of resistance alleles at two loci confers strong resistance, motivating the construction of a two-locus model of resistance. Experimental data sets on purified pest strains, each corresponding to a single genotype of our two-locus model, were also available, so it became possible to include the mortalities of the different genotypes explicitly in the model. In this paper we describe how we used two generalized linear models (GLMs), the probit and logistic models, to fit the available experimental data sets, employing a direct algebraic approach, the generalized inverse matrix technique, rather than the traditional maximum likelihood estimation, to estimate the model parameters. The results show that both the probit and logistic models fit the data sets well, but the former is much better in terms of smaller least-squares (numerical) errors. Meanwhile, the generalized inverse matrix technique achieved accuracy similar to that of maximum likelihood estimation while being less time-consuming and less computationally demanding.
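
For comparison, the sketch below (Python, statsmodels) fits the same kind of dose-response GLM by maximum likelihood, with a probit or logit link on a log(concentration x time) predictor; the dose values are illustrative, and the paper's generalized-inverse least-squares fit is an algebraic alternative to this step, not shown here.

```python
import numpy as np
import statsmodels.api as sm

def fit_mortality_model(log_ct, killed, total, link="probit"):
    """Binomial GLM for insect mortality vs. log(C x t) exposure."""
    links = {"probit": sm.families.links.Probit(),
             "logit": sm.families.links.Logit()}
    X = sm.add_constant(np.asarray(log_ct, float))
    y = np.column_stack([killed, np.asarray(total) - np.asarray(killed)])
    model = sm.GLM(y, X, family=sm.families.Binomial(link=links[link]))
    return model.fit()

# Hypothetical bioassay: 20 insects per dose, rising kill with exposure
res = fit_mortality_model(np.log([5, 10, 20, 40]), [3, 9, 17, 20], [20] * 4)
print(res.params)   # intercept and slope of the probit line
```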

Keywords: Mortality estimation, probit models, logistic model, generalized inverse matrix approach, pest control simulation.

73 Geostatistical Analysis and Mapping of Groundlevel Ozone in a Medium Sized Urban Area

Authors: F. J. Moral García, P. Valiente González, F. López Rodríguez

Abstract:

Ground-level tropospheric ozone is one of the air pollutants of greatest concern. It is produced mainly by photochemical processes involving nitrogen oxides and volatile organic compounds in the lower parts of the atmosphere. Ozone levels become particularly high in regions close to large emissions of ozone precursors and during summer, when stagnant meteorological conditions with high insolation and high temperatures are common. This work presents results of a study of urban ozone distribution patterns in the city of Badajoz, the largest and most industrialized city in the Extremadura region (southwest Spain). Fourteen sampling campaigns, at least one per month, were carried out with an automatic portable analyzer to measure ambient air ozone concentrations, during periods selected for conditions favourable to ozone production. The measured ozone data were then analyzed using geostatistical techniques to evaluate the ozone distribution over the city. First, the exploratory analysis revealed that the data were normally distributed, a desirable property for the subsequent stages of the geostatistical study. Second, in the structural analysis, theoretical spherical models provided the best fit for all monthly experimental variograms; their parameters (sill, range, and nugget) revealed that the maximum distance of spatial dependence is between 302 and 790 m and that the variable, air ozone concentration, is unevenly distributed over short distances. Finally, predictive ozone maps were derived for all points of the study area using geostatistical algorithms (kriging), and high prediction accuracy was obtained in all cases, as cross-validation showed. Useful information for hazard assessment was also provided by probability maps based on kriging interpolation and the kriging standard deviation.
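
The fitted theoretical model has a simple closed form; the sketch below (Python, with illustrative parameter values inside the reported range band) evaluates the spherical variogram gamma(h) = nugget + (sill - nugget)(1.5 h/a - 0.5 (h/a)^3) for h <= a, and gamma = sill beyond the range a.

```python
import numpy as np

def spherical_variogram(h, nugget, sill, rng):
    """Theoretical spherical variogram model."""
    h = np.asarray(h, float)
    ratio = np.clip(h / rng, 0.0, 1.0)
    gamma = nugget + (sill - nugget) * (1.5 * ratio - 0.5 * ratio**3)
    return np.where(h == 0.0, 0.0, gamma)   # gamma(0) = 0 by definition

# e.g. a range of 500 m (within the reported 302-790 m band)
print(spherical_variogram([0, 100, 500, 900], nugget=5.0, sill=60.0,
                          rng=500.0))
```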

Keywords: Kriging, map, tropospheric ozone, variogram.

72 Precipitation Intensity: Duration Based Threshold Analysis for Initiation of Landslides in Upper Alaknanda Valley

Authors: Soumiya Bhattacharjee, P. K. Champati Ray, Shovan L. Chattoraj, Mrinmoy Dhara

Abstract:

The entire Himalayan range is globally known for rainfall-induced landslides. The prime focus of the study is to determine a rainfall-based threshold for the initiation of landslides that can serve as an important component of an early warning system for alerting stakeholders. This research deals with the temporal dimension of slope failures due to extreme rainfall events along National Highway-58 from Karanprayag to Badrinath in the Garhwal Himalaya, India. Post-processed 3-hourly rainfall intensity data and the corresponding durations, derived from the daily rainfall data of the Tropical Rainfall Measuring Mission (TRMM), were used as the primary source of rainfall data. Landslide event records from the Border Roads Organisation (BRO) and ancillary landslide inventory data for 2013 and 2014 were used to determine an Intensity-Duration (ID) rainfall threshold. The derived governing threshold equation, I = 4.738D^(-0.025), was validated against the landslides of August and September 2014 with an accuracy of 70% and is proposed for predicting landslide initiation in the study region. From the obtained results and validation, it can be inferred that this equation can be used to anticipate the initiation of landslides in the study area as part of an early warning system. The results can improve significantly with ground-based rainfall estimates and a better database of landslide records. Thus, the study demonstrates a very low-cost method of obtaining first-hand information on the possibility of an impending landslide in a region, thereby providing alerts and better preparedness for landslide disaster mitigation.
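
Evaluating the derived threshold is a one-line check; the sketch below (Python) flags rainfall plotting on or above the ID curve as capable of triggering landslides. The units (mm/h and hours) follow the usual convention for ID thresholds and are an assumption here.

```python
def landslide_alert(intensity_mm_h, duration_h):
    """Compare a rainfall event against the threshold I = 4.738*D^(-0.025)."""
    threshold = 4.738 * duration_h ** (-0.025)
    return intensity_mm_h >= threshold

# e.g. 5 mm/h sustained for 12 h exceeds the ~4.45 mm/h threshold
print(landslide_alert(5.0, 12.0))   # True
```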

Keywords: Landslide, intensity-duration, rainfall threshold, Tropical Rainfall Measuring Mission, slope, inventory, early warning system.

71 High Securing Cover-File of Hidden Data Using Statistical Technique and AES Encryption Algorithm

Authors: A. A. Zaidan, Anas Majeed, B. B. Zaidan

Abstract:

Nowadays, the rapid development of multimedia and the internet allows for the wide distribution of digital media data. It has become much easier to edit, modify, and duplicate digital information; digital documents are also easy to copy and distribute, and therefore face many threats. This is a big security and privacy issue: with the large flood of information and the development of digital formats, it has become necessary to find appropriate protection, given the significance, accuracy, and sensitivity of the information. Protection systems can be classified more specifically as information hiding, information encryption, or a combination of hiding and encryption that increases information security. The strength of the science of information hiding lies in the non-existence of standard algorithms for hiding secret messages and in the randomness of hiding methods, such as combining several media (covers) with different methods to pass a secret message; in addition, there are no formal methods for discovering hidden data. For these reasons, the task of this research is difficult. In this paper, a new information hiding system is presented. The proposed system aims to hide information (a data file) in any executable file (EXE) and to detect the hidden file; an implementation of a steganography system that embeds information in executable (EXE) files is investigated. The system tries to solve the problem of the size of the cover file and to make the result undetectable by anti-virus software. The system includes two main functions: first, hiding the information in a Portable Executable (EXE) file, through four processes (specifying the cover file, specifying the information file, encrypting the information, and hiding the information); and second, extracting the hidden information, through three processes (specifying the stego file, extracting the information, and decrypting the information). The system achieves its main goals, such as making the size of the cover file independent of the size of the hidden information, and producing a result file that does not conflict with anti-virus software.
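
A minimal sketch of the encrypt-then-hide idea follows (Python, using the cryptography library's AES-based Fernet scheme and simple end-of-file appending behind a marker). This only illustrates the two main functions; the paper's system embeds data within the PE structure so that the cover still runs and evades detection, which this toy appending does not attempt. MARKER and the file names are assumptions.

```python
from cryptography.fernet import Fernet

MARKER = b"::HIDDEN::"   # hypothetical separator between PE image and payload

def hide(cover_exe, secret_file, key):
    """Encrypt the secret file and append it to the cover executable."""
    payload = Fernet(key).encrypt(open(secret_file, "rb").read())
    with open(cover_exe, "ab") as f:
        f.write(MARKER + payload)

def extract(stego_exe, key):
    """Locate the payload after the marker and decrypt it."""
    data = open(stego_exe, "rb").read()
    payload = data.split(MARKER, 1)[1]
    return Fernet(key).decrypt(payload)

# key = Fernet.generate_key()
# hide("app.exe", "secret.txt", key); print(extract("app.exe", key))
```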

Keywords: Cryptography, Steganography, Portable Executable File.

70 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing

Authors: Yehjune Heo

Abstract:

As biometric systems become widely deployed, identification systems can easily be attacked with various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using Convolutional Neural Networks (CNNs), studied as a function of the loss function and optimizer. The CNNs used in this paper are AlexNet, VGGNet, and ResNet. By using various loss functions, including Cross-Entropy, Center Loss, Cosine Proximity, and Hinge Loss, and various optimizers, including Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we obtained significant performance changes. We find that choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. Using a subset of the LivDet 2017 database, we validate our approach by comparing generalization power; the same subset is used across all training and testing for each model, so performance on unseen data can be compared across all the models. The best CNN (AlexNet), with the appropriate loss function and optimizer, gains more than 3% in performance over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also reports the models' parameter counts and mean average error rates, together with their accuracy, to find the model that consumes the least memory and computation time for training and testing. Although AlexNet has lower complexity than the other CNN models, it proves to be very efficient. For practical anti-spoofing systems, the deployed version should use a small amount of memory and run very fast with high anti-spoofing performance. For our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, were applied to our final model.
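
The comparison protocol amounts to a grid over loss/optimizer pairs with a fixed backbone and data split; the sketch below (Python, Keras) shows that protocol only. The tiny stand-in CNN, the loss subset, and the x_train/y_train/x_val/y_val arrays are assumptions, not the paper's models or data.

```python
import itertools
from tensorflow import keras

def build_backbone():
    # Small stand-in CNN; the paper uses AlexNet/VGGNet/ResNet backbones.
    return keras.Sequential([
        keras.layers.Conv2D(16, 3, activation="relu",
                            input_shape=(64, 64, 1)),
        keras.layers.MaxPooling2D(),
        keras.layers.Flatten(),
        keras.layers.Dense(1, activation="sigmoid"),
    ])

losses = ["binary_crossentropy", "cosine_similarity", "hinge"]
optimizers = ["adam", "sgd", "rmsprop", "adadelta", "adagrad", "nadam"]

results = {}
for loss, opt in itertools.product(losses, optimizers):
    model = build_backbone()
    model.compile(optimizer=opt, loss=loss, metrics=["accuracy"])
    history = model.fit(x_train, y_train, epochs=10,          # assumed data
                        validation_data=(x_val, y_val), verbose=0)
    results[(loss, opt)] = max(history.history["val_accuracy"])

best = max(results, key=results.get)
print("best loss/optimizer pair:", best, results[best])
```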

Keywords: Anti-spoofing, CNN, fingerprint recognition, loss function, optimizer.

69 Turbine Follower Control Strategy Design Based on Developed FFPP Model

Authors: Ali Ghaffari, Mansour Nikkhah Bahrami, Hesam Parsa

Abstract:

In this paper a comprehensive model of a fossil-fuelled power plant (FFPP) is developed in order to evaluate the performance of a newly designed turbine follower controller. Considering the drawbacks of previous works, an overall model is developed to minimize the error between each subsystem model output and the experimental data obtained at the actual power plant. The developed model is organized into two main subsystems, the boiler and the turbine, and different modeling approaches are developed according to the characteristics of each FFPP subsystem. For the economizer, evaporator, superheater, and reheater, first-order models are determined based on the principles of mass and energy conservation, and simulations verify the accuracy of the developed models. Due to the nonlinear characteristics of the attemperator, a new model based on a genetic-fuzzy system utilizing the Pittsburgh approach is developed, showing promising performance compared with models derived by other methods such as ANFIS; the optimization constraints are handled using penalty functions. The effect of increasing the number of rules and membership functions on the performance of the proposed model is also studied and evaluated. The turbine model is developed based on the equation of adiabatic expansion. The parameters of all evaluated models are tuned by means of evolutionary algorithms. Based on the developed model, a fuzzy PI controller is designed and successfully implemented in the turbine follower control strategy of the plant. In this control strategy, instead of keeping the controller parameters constant, they are adjusted online according to the error and the error rate. It is shown that the response of the system improves significantly and that fuel consumption decreases considerably.
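
The gain-adjustment idea admits a very small sketch (Python): instead of fixed PI gains, scale them online from the error and error rate. The crisp linear scaling below stands in for the paper's fuzzy rule base and is an assumption, not the genetic-fuzzy design itself.

```python
def adaptive_pi_step(error, d_error, integ, kp0=1.0, ki0=0.1, dt=1.0):
    """One control step with gains adjusted from error and error rate."""
    # Larger error / error rate -> proportionally stronger gains (bounded)
    kp = kp0 * min(2.0, 1.0 + 0.5 * abs(error) + 0.2 * abs(d_error))
    ki = ki0 * min(2.0, 1.0 + 0.3 * abs(error))
    integ += error * dt                 # integral of the error
    u = kp * error + ki * integ         # control action
    return u, integ

# u, I = adaptive_pi_step(error=0.4, d_error=-0.1, integ=0.0)
```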

Keywords: Attemperator, Evolutionary algorithms, Fossil fuelled power plant (FFPP), Fuzzy set theory, Gain scheduling
