Search results for: Gaussian Mixture Model (GMM)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7939


5869 Improving the Performance of Deep Learning in Facial Emotion Recognition with Image Sharpening

Authors: Ksheeraj Sai Vepuri, Nada Attar

Abstract:

As humans, we use words together with visual and facial cues to communicate effectively. Classifying facial emotion using computer vision methodologies has been an active research area in the field. In this paper, we propose a simple method for facial expression recognition that enhances accuracy. We tested our method on the FER-2013 dataset, which contains static images. Instead of using histogram equalization to preprocess the dataset, we used an unsharp mask to emphasize texture and details and to sharpen the edges. We also used ImageDataGenerator from the Keras library for data augmentation. We then used a Convolutional Neural Network (CNN) model to classify the images into 7 different facial expressions, yielding an accuracy of 69.46% on the test set. Our results show that image preprocessing such as this sharpening technique can improve the performance of a CNN model, even when the model is relatively simple.
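
As a rough illustration only (not the authors' code), the sketch below shows how an unsharp-mask preprocessing step can be combined with Keras ImageDataGenerator augmentation and a small CNN for 48x48 grayscale FER-2013 images; the architecture, kernel sizes and sharpening amount are assumptions.

    import numpy as np
    import cv2
    from tensorflow.keras import layers, models
    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    def sharpen(x):
        # x arrives as an (H, W, 1) float array from the generator; apply an unsharp mask:
        # original + (original - Gaussian-blurred copy), i.e. boost edges and fine texture.
        img = x[..., 0]
        blurred = cv2.GaussianBlur(img, (0, 0), sigmaX=1.0)
        return cv2.addWeighted(img, 2.0, blurred, -1.0, 0)[..., np.newaxis]

    # Augmentation plus sharpening for 48x48 grayscale FER-2013 images (7 expression classes).
    datagen = ImageDataGenerator(rescale=1.0 / 255,
                                 rotation_range=10,
                                 horizontal_flip=True,
                                 preprocessing_function=sharpen)

    model = models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=(48, 48, 1)),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(7, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    # Hypothetical directory layout; training would then be:
    # model.fit(datagen.flow_from_directory("fer2013/train", target_size=(48, 48),
    #           color_mode="grayscale", class_mode="categorical"), epochs=30)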

Keywords: Facial expression recognition, image pre-processing, deep learning, CNN.

5868 Customer Value Creation by CRM System in Electronic Device Companies

Authors: Hideki Kobayashi, Hiroshi Osada

Abstract:

The service industry accounts for about 70% of Japan's GDP, and the importance of service innovation is widely recognized. Within the information industry, one of these service sectors, system usage and support services are becoming increasingly important. However, CRM systems often fail to achieve their original purpose because they are not used sufficiently. An effective service method is therefore needed to promote system use. Building a service model and clarifying the success factors are necessary to improve the operation service of CRM systems. In this research, a model of the operation service for CRM systems is developed.

Keywords: Information system, Operation service, Service innovation, Solution

5867 Stress Analysis of Water Wall Tubes of a Coal-fired Boiler during Soot Blowing Operation

Authors: Pratch Kittipongpattana, Thongchai Fongsamootr

Abstract:

This research aimed to study the influence of the soot blowing operation and geometrical variables on the stress characteristics of water wall tubes located in soot blowing areas, which have caused the boilers of the Mae Moh power plant to lose generation hours. The research method is divided into two parts: (a) measuring the strain on water wall tubes using 3-element rosette strain gages during full-capacity plant operation and during soot blowing operations; (b) creating a finite element model to calculate the stresses on the tubes and validating the model against experimental data from steady-state plant operation. The geometrical variables in the model were then changed to study their effect on the tube stresses. The results revealed that the stress was not affected by the soot blowing process and that the finite element model deviated from the experiment by only 1.24%. The geometrical variables did influence the stress, and the optimum tube design found in this research reduced the average stress by 31.28% compared with the present design.

Keywords: Boiler water wall tube, Finite element, Stress analysis, Strain gage rosette.

5866 A Comparative Study of Malware Detection Techniques Using Machine Learning Methods

Authors: Cristina Vatamanu, Doina Cosovan, Dragoş Gavriluţ, Henri Luchian

Abstract:

In the past few years, the amount of malicious software has increased exponentially and, therefore, machine learning algorithms have become instrumental in identifying clean and malware files through (semi-)automated classification. When working with very large datasets, the major challenge is to reach both a very high malware detection rate and a very low false positive rate. Another challenge is to minimize the time needed for the machine learning algorithm to do so. This paper presents a comparative study of different machine learning techniques such as linear classifiers, ensembles, decision trees and various hybrids thereof. The training dataset consists of approximately 2 million clean files and 200,000 infected files, which is a realistic quantitative mixture. The paper investigates the above-mentioned methods with respect to both their performance (detection rate and false positive rate) and their practicability.
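
A minimal sketch of how such a comparison can be set up with scikit-learn is given below; the synthetic feature vectors, the particular classifiers and their settings are placeholders and do not reproduce the paper's one-side-class perceptron or hybrid methods.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import Perceptron
    from sklearn.metrics import confusion_matrix
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Synthetic stand-in for clean/malware feature vectors (label 1 = malware, minority class).
    X, y = make_classification(n_samples=20000, n_features=50, weights=[0.9, 0.1], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

    for name, clf in [("perceptron", Perceptron(max_iter=1000)),
                      ("decision tree", DecisionTreeClassifier(max_depth=12)),
                      ("random forest (ensemble)", RandomForestClassifier(n_estimators=100))]:
        clf.fit(X_tr, y_tr)
        tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
        detection_rate = tp / (tp + fn)        # fraction of malware correctly flagged
        false_positive_rate = fp / (fp + tn)   # fraction of clean files wrongly flagged
        print(f"{name}: DR={detection_rate:.3f}  FPR={false_positive_rate:.4f}")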

Keywords: Detection Rate, False Positives, Perceptron, One Side Class, Ensembles, Decision Tree, Hybrid methods, Feature Selection.

5865 Airplane Stability during Climb/Descend Phase Using a Flight Dynamics Simulation

Authors: Niloufar Ghoreishi, Ali Nekouzadeh

Abstract:

The stability of the flight during maneuvering and in response to probable perturbations is one of the most essential features of an aircraft that should be analyzed and designed for. In this study, we derived the non-linear governing equations of aircraft dynamics during the climb/descend phase and simulated a model aircraft. The corresponding force and moment dimensionless coefficients of the model and their variations with elevator angle and other relevant aerodynamic parameters were measured experimentally. The short-period mode and phugoid mode response were simulated by solving the governing equations numerically and then compared with the desired stability parameters for the particular level, category, and class of the aircraft model. To meet the target stability, a controller was designed and used. This resulted in significant improvement in the stability parameters of the flight.

Keywords: Flight stability, phugoid mode, short period mode, climb phase, damping coefficient.

5864 An Enhanced Situational Awareness of AUV's Mission by Multirate Neural Control

Authors: Igor Astrov, Mikhail Pikkov

Abstract:

This paper focuses on a critical component of situational awareness (SA): the neural control of the depth flight of an autonomous underwater vehicle (AUV). Constant-depth flight is a challenging but important task for AUVs seeking a high level of autonomy under adverse conditions. Within the SA strategy, we propose multirate neural control of an AUV trajectory using a neural network model reference controller for a nontrivial mid-small size AUV "r2D4" stochastic model. This control system was demonstrated and evaluated by simulating diving maneuvers in the Simulink software package. The simulation results show that the chosen AUV model is stable in the presence of high noise, and suggest that fast SA with economical use of battery energy can be maintained by similar AUV systems during underwater search-and-rescue missions.

Keywords: Autonomous underwater vehicles, multirate systems, neurocontrollers, situational awareness.

5863 Churn Prediction for Telecommunication Industry Using Artificial Neural Networks

Authors: Ulas Vural, M. Ergun Okay, E. Mesut Yildiz

Abstract:

Telecommunication service providers demand accurate and precise prediction of customer churn probabilities to increase the effectiveness of their customer relation services. The large amount of customer data owned by the service providers is suitable for analysis by machine learning methods. In this study, expenditure data of customers are analyzed using an artificial neural network (ANN). The ANN model is applied to the data of customers with different billing durations. The proposed model successfully predicts the churn probabilities with 83% accuracy using only three months of expenditure data, and the prediction accuracy increases up to 89% when nine months of data are used. The experiments also show that the accuracy of the ANN model increases on an extended feature set that includes information on changes in the bill amounts.
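
A hedged sketch of the basic setup is shown below, using a scikit-learn multilayer perceptron on synthetic expenditure-style features; the real study's features, network architecture and billing-duration handling are not reproduced.

    from sklearn.datasets import make_classification
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Synthetic stand-in: one feature per billing month (e.g., nine months of expenditure).
    X, y = make_classification(n_samples=5000, n_features=9, n_informative=6, random_state=1)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

    # Scale inputs, then train a small feed-forward ANN to predict churn (1) vs. stay (0).
    ann = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=1))
    ann.fit(X_tr, y_tr)
    print("churn accuracy:", accuracy_score(y_te, ann.predict(X_te)))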

Keywords: Customer relationship management, churn prediction, telecom industry, deep learning, Artificial Neural Networks, ANN.

5862 Studies on Properties of Knowledge Dependency and Reduction Algorithm in Tolerance Rough Set Model

Authors: Chen Wu, Lijuan Wang

Abstract:

The relations between tolerance classes, indispensable attributes and knowledge dependency in the rough set model with a tolerance relation are explored. After giving definitions and concepts of knowledge dependency and knowledge dependency degree for incomplete information systems in the tolerance rough set model, distinguishing whether or not the decision attribute contains missing attribute values, it is proved that complete knowledge dependency preserves reflexivity, transitivity, the augmentation law, the decomposition law and the merge law. Knowledge dependency degrees (as opposed to complete knowledge dependency degrees) satisfy only some of these laws under the transitivity, augmentation and decomposition operations. An algorithm for attribute reduction in an incomplete decision table is designed, and its correctness is checked by an example.
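
For readers unfamiliar with tolerance classes in incomplete information systems, the toy sketch below computes them under the common convention that two objects are tolerant when every attribute value agrees or is missing on either side; the table and this specific tolerance definition are illustrative assumptions, not taken from the paper.

    # Objects described by attributes; None marks a missing value (often written "*").
    table = {
        "x1": {"a1": 1, "a2": None, "a3": 0},
        "x2": {"a1": 1, "a2": 2,    "a3": 0},
        "x3": {"a1": 0, "a2": 2,    "a3": None},
        "x4": {"a1": 1, "a2": 2,    "a3": 1},
    }
    attrs = ["a1", "a2", "a3"]

    def tolerant(u, v, attrs):
        # u and v are tolerant if, on every attribute, the values agree or one is missing.
        return all(u[a] is None or v[a] is None or u[a] == v[a] for a in attrs)

    def tolerance_class(x, attrs):
        # The tolerance class of x: all objects tolerant with x under the chosen attributes.
        return {y for y in table if tolerant(table[x], table[y], attrs)}

    for x in table:
        print(x, sorted(tolerance_class(x, attrs)))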

Keywords: Incomplete information system, rough set, tolerance relation, knowledge dependence, attribute reduction.

5861 Predictive Clustering Hybrid Regression (pCHR) Approach and Its Application to Sucrose-Based Biohydrogen Production

Authors: Nikhil, Ari Visa, Chin-Chao Chen, Chiu-Yue Lin, Jaakko A. Puhakka, Olli Yli-Harja

Abstract:

A predictive clustering hybrid regression (pCHR) approach was developed and evaluated using a dataset from an H2-producing sucrose-based bioreactor operated for 15 months. The aim was to model and predict the H2-production rate using the available information about the envirome and metabolome of the bioprocess. Self-organizing maps (SOM) and the Sammon map were used to visualize the dataset and to identify the main metabolic patterns and clusters in the bioprocess data. Three metabolic clusters were detected: acetate coupled with other metabolites, butyrate only, and transition phases. The developed pCHR model combines principles of k-means clustering, kNN classification and regression techniques. The model performed well in modeling and predicting the H2-production rate, with mean square error values of 0.0014 and 0.0032, respectively.
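
A minimal sketch of the pCHR idea, combining k-means clustering, per-cluster regression and kNN routing on synthetic data, is given below; the features, cluster count and regressors are assumptions rather than the authors' configuration.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LinearRegression
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 5))                               # envirome/metabolome features (synthetic)
    y = 0.5 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.05, size=300)  # H2-rate proxy

    # 1) cluster the metabolic patterns, 2) fit one regressor per cluster,
    # 3) route new samples to a cluster with kNN and apply that cluster's regressor.
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
    regressors = {c: LinearRegression().fit(X[km.labels_ == c], y[km.labels_ == c])
                  for c in range(3)}
    router = KNeighborsClassifier(n_neighbors=5).fit(X, km.labels_)

    def predict(x_new):
        c = router.predict(x_new.reshape(1, -1))[0]
        return regressors[c].predict(x_new.reshape(1, -1))[0]

    print(predict(rng.normal(size=5)))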

Keywords: Biohydrogen, bioprocess modeling, clusteringhybrid regression.

5860 Molar Excess Volumes and Excess Isentropic Compressibilities of Ternary Mixtures Containing 2-Pyrrolidinone

Authors: Jaibir S. Yadav, Dimple, Vinod K. Sharma

Abstract:

Molar excess volumes, V^E_ijk, and speeds of sound, u_ijk, of 2-pyrrolidinone (i) + benzene or toluene (j) + ethanol (k) ternary mixtures have been measured as a function of composition at 308.15 K. The observed speed of sound data have been utilized to determine the excess isentropic compressibilities, (κ_S^E)_ijk, of the ternary (i + j + k) mixtures. The molar excess volume, V^E_ijk, and excess isentropic compressibility, (κ_S^E)_ijk, data have been fitted to the Redlich-Kister equation to calculate the ternary adjustable parameters and standard deviations. The Moelwyn-Huggins concept (Huggins in Polymer 12: 389-399, 1971) of connectivity between the surfaces of the constituents of binary mixtures has been extended to ternary mixtures (using the concept of a connectivity parameter of third degree of the molecules, ³ξ, which in turn depends on their topology) to obtain an expression that describes the measured V^E_ijk and (κ_S^E)_ijk data well.
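
For reference, one commonly used form of the Redlich-Kister representation fitted to such ternary data is sketched below in LaTeX; the number of binary coefficients and the exact ternary correction term are assumptions and may differ from the authors' fit (X stands for either V^E or κ_S^E):

    X^{E}_{ij} = x_i\,x_j \sum_{n=0}^{2} X^{(n)}_{ij}\,(x_i - x_j)^{n},
    \qquad
    X^{E}_{ijk} = \sum_{i<j} X^{E}_{ij} + x_i\,x_j\,x_k\,\bigl(B_0 + B_1 x_i + B_2 x_j\bigr),

where the X^{(n)}_{ij} are binary parameters and B_0, B_1, B_2 are the ternary adjustable parameters obtained by least squares.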

Keywords: Connectivity parameter of third degree, ³ξ, excess isentropic compressibilities, (κ_S^E)_ijk, interaction energy parameter, χ, molar excess volumes, V^E_ijk, speeds of sound, u_ijk.

5859 Comparison of Methods of Estimation for Use in Goodness of Fit Tests for Binary Multilevel Models

Authors: I. V. Pinto, M. R. Sooriyarachchi

Abstract:

Data arising in our environment frequently have a hierarchical or nested structure. Multilevel modelling is a modern approach to handling this kind of data. When multilevel modelling is combined with a binary response, the estimation methods become complex, and the usual techniques are derived from the quasi-likelihood method. The estimation methods compared in this study are marginal quasi-likelihood of order 1 and order 2 (MQL1, MQL2) and penalized quasi-likelihood of order 1 and order 2 (PQL1, PQL2). A statistical model is of no use if it does not reflect the given dataset; therefore, checking the adequacy of the fitted model through a goodness-of-fit (GOF) test is an essential stage in any modelling procedure. However, prior to usage, it is equally important to confirm that the GOF test performs well and is suitable for the given model. This study assesses the suitability of the GOF test developed for binary response multilevel models with respect to the method used in model estimation. An extensive set of simulations was conducted using MLwiN (v 2.19) with varying numbers of clusters, cluster sizes and intra-cluster correlations. The test maintained the desirable Type-I error for models estimated using PQL2, and it failed for almost all the combinations of MQL. The power of the test was adequate for most of the combinations under all estimation methods except MQL1. Moreover, models were fitted to a real-life dataset using the four methods, and the performance of the test was compared for each model.
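
As a hedged illustration of the simulation setup (the MLwiN fitting and the MQL/PQL estimators themselves are not reproduced), the sketch below generates two-level binary data with a random intercept and reports the implied intra-cluster correlation; all parameter values are placeholders.

    import numpy as np

    rng = np.random.default_rng(42)
    n_clusters, cluster_size = 50, 20
    sigma_u = 0.5                      # random-intercept SD controls the intra-cluster correlation

    # Two-level binary data: logit(p_ij) = beta0 + beta1 * x_ij + u_j, with u_j ~ N(0, sigma_u^2).
    u = rng.normal(0.0, sigma_u, size=n_clusters)
    x = rng.normal(size=(n_clusters, cluster_size))
    eta = -0.5 + 0.8 * x + u[:, None]
    p = 1.0 / (1.0 + np.exp(-eta))
    y = rng.binomial(1, p)

    # Latent-threshold intra-cluster correlation for a logistic random-intercept model.
    icc = sigma_u**2 / (sigma_u**2 + np.pi**2 / 3)
    print("simulated ICC:", round(icc, 3), " data shape:", y.shape)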

Keywords: Goodness-of-fit test, marginal quasi-likelihood, multilevel modelling, type-I error, penalized quasi-likelihood, power, quasi-likelihood.

5858 A Statistical Approach for Predicting and Optimizing Depth of Cut in AWJ Machining for 6063-T6 Al Alloy

Authors: Farhad Kolahan, A. Hamid Khajavi

Abstract:

In this paper, a set of experimental data has been used to assess the influence of abrasive water jet (AWJ) process parameters in cutting 6063-T6 aluminum alloy. The process variables considered here include nozzle diameter, jet traverse rate, jet pressure and abrasive flow rate. The effects of these input parameters are studied on the depth of cut (h), one of the most important characteristics of AWJ machining. The Taguchi method and regression modeling are used to establish the relationships between input and output parameters. The adequacy of the model is evaluated using the analysis of variance (ANOVA) technique. In the next stage, the proposed model is embedded into a Simulated Annealing (SA) algorithm to optimize the AWJ process parameters. The objective is to determine a suitable set of process parameters that can produce a desired depth of cut within the ranges of the process parameters. Computational results prove the effectiveness of the proposed model and optimization procedure.
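
A minimal sketch of embedding a fitted regression model into a simulated annealing search is given below; the regression coefficients, parameter bounds, target depth and cooling schedule are placeholders, not the paper's fitted values.

    import numpy as np

    # Hypothetical fitted regression for depth of cut h (mm) as a function of
    # nozzle diameter d (mm), traverse rate v (mm/min), pressure p (MPa), abrasive flow m (g/s).
    def depth_of_cut(x):
        d, v, p, m = x
        return 0.8 + 2.1 * d - 0.004 * v + 0.03 * p + 0.5 * m

    target = 8.0                                     # desired depth of cut (mm)
    bounds = np.array([[0.8, 1.2], [100, 500], [100, 350], [2, 8]])

    def cost(x):
        return (depth_of_cut(x) - target) ** 2

    rng = np.random.default_rng(0)
    x = bounds.mean(axis=1)                          # start in the middle of the parameter ranges
    best, best_cost, T = x.copy(), cost(x), 1.0
    for step in range(5000):
        cand = np.clip(x + rng.normal(scale=0.02 * (bounds[:, 1] - bounds[:, 0])),
                       bounds[:, 0], bounds[:, 1])
        dE = cost(cand) - cost(x)
        if dE < 0 or rng.random() < np.exp(-dE / T):  # Metropolis acceptance rule
            x = cand
            if cost(x) < best_cost:
                best, best_cost = x.copy(), cost(x)
        T *= 0.999                                    # geometric cooling schedule
    print("best parameters:", np.round(best, 3), " predicted depth:", round(depth_of_cut(best), 3))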

Keywords: AWJ machining, Mathematical modeling, Simulated Annealing, Optimization

5857 Image Ranking to Assist Object Labeling for Training Detection Models

Authors: Tonislav Ivanov, Oleksii Nedashkivskyi, Denis Babeshko, Vadim Pinskiy, Matthew Putman

Abstract:

Training a machine learning model for object detection that generalizes well is known to benefit from a training dataset with diverse examples. However, training datasets usually contain many repeats of common examples of a class and lack rarely seen examples. This is due to the process commonly used during human annotation where a person would proceed sequentially through a list of images labeling a sufficiently high total number of examples. Instead, the method presented involves an active process where, after the initial labeling of several images is completed, the next subset of images for labeling is selected by an algorithm. This process of algorithmic image selection and manual labeling continues in an iterative fashion. The algorithm used for the image selection is a deep learning algorithm, based on the U-shaped architecture, which quantifies the presence of unseen data in each image in order to find images that contain the most novel examples. Moreover, the location of the unseen data in each image is highlighted, aiding the labeler in spotting these examples. Experiments performed using semiconductor wafer data show that labeling a subset of the data, curated by this algorithm, resulted in a model with a better performance than a model produced from sequentially labeling the same amount of data. Also, similar performance is achieved compared to a model trained on exhaustive labeling of the whole dataset. Overall, the proposed approach results in a dataset that has a diverse set of examples per class as well as more balanced classes, which proves beneficial when training a deep learning model.
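
The iterative select-and-label loop can be sketched as below; the novelty score is a stub standing in for the paper's U-shaped network, and the file names and batch size are hypothetical.

    import numpy as np

    def novelty_score(image):
        # Placeholder: in the paper this is a U-shaped network that quantifies how much
        # unseen content an image contains; here it is a random stub for illustration.
        return float(np.random.rand())

    def select_for_labeling(unlabeled, batch_size=10):
        # Rank unlabeled images by novelty and return the most novel batch for annotation.
        scored = sorted(unlabeled, key=novelty_score, reverse=True)
        return scored[:batch_size]

    # Iterative loop: label a seed set, then alternate algorithmic selection and manual labeling.
    unlabeled = [f"wafer_{i:04d}.png" for i in range(200)]
    labeled = []
    for round_ in range(3):
        batch = select_for_labeling(unlabeled)
        labeled.extend(batch)                           # a human would annotate this batch
        unlabeled = [im for im in unlabeled if im not in batch]
        # ...retrain/refresh the novelty model on `labeled` here...
    print(len(labeled), "images labeled after 3 rounds")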

Keywords: Computer vision, deep learning, object detection, semiconductor.

5856 Production of As Isotopes in the Interaction of natGe with 14-30 MeV Protons

Authors: Yong H. Chung, Eun J. Han, Seil Lee, Sun Y. Park, Eun H. Yoon, Eun J. Cho, Jang H. Lee, Young J. Chu, Jang H. Ha, Jongseo Chai, Yu S. Kim, Min Y. Lee, Hyeyoung Lee

Abstract:

Cross sections of As radionuclides in the interaction of natGe with 14-30 MeV protons have been deduced by off-line γ-ray spectroscopy to find optimal reaction channels leading to radiotracers for positron emission tomography. The experimental results were compared with the previous results and those estimated by the compound nucleus reaction model.

Keywords: Compound nucleus reaction model, off-line γ-ray spectroscopy, radionuclide.

5855 Optimization of PEM Fuel Cell Biphasic Model

Authors: Boubekeur Dokkar, Nasreddine Chennouf, Noureddine Settou, Belkhir Negrou, Abdesslam Benmhidi

Abstract:

The optimal operation of a proton exchange membrane fuel cell (PEMFC) requires good water management, where water is present in two forms: vapor and liquid. Moreover, for fuel cells to reach a higher output, the integration of some accessories which need electrical power is required. In order to analyze fuel cell operation and the transport phenomena of the different species, a biphasic mathematical model is presented as a set of governing equations. The numerical solution of these conservation equations is computed with a Matlab program. A multi-criteria optimization with weighting between two opposing objectives is used to determine the compromise solutions between maximum output and minimal stack size. The obtained results are in good agreement with available literature data.

Keywords: Biphasic model, PEM fuel cell, optimization, simulation, species transport.

5854 Mathematical Modeling of Storm Surge in Three Dimensional Primitive Equations

Authors: Worachat Wannawong, Usa W. Humphries, Prungchan Wongwises, Suphat Vongvisessomjai

Abstract:

Mathematical modeling of storm surge in sea and coastal regions such as the South China Sea (SCS) and the Gulf of Thailand (GoT) is important for studying typhoon characteristics. A storm surge causes inundation along the lateral boundary of coastal zones, particularly in the GoT and some parts of the SCS. Model simulations using the three-dimensional primitive equations with a high-resolution model are important for protecting local property and human life from typhoon surges. In the present study, mathematical modeling is used to simulate the typhoon-induced surges in three case studies of Typhoon Linda 1997. The results of the model simulations at the tide gauge stations describe the characteristics of the storm surges in the coastal zones.

Keywords: lateral boundary, mathematical modeling, numerical simulations, three dimensional primitive equations, storm surge.

5853 Equilibrium and Rate Based Simulation of MTBE Reactive Distillation Column

Authors: Debashish Panda, Kannan A.

Abstract:

Equilibrium and rate-based models have been applied in the simulation of methyl tertiary-butyl ether (MTBE) synthesis through reactive distillation. Temperature and composition profiles were compared for both models, and it was found that the profile trends, though qualitatively similar, are significantly different quantitatively. In the rate-based method (RBM), multicomponent mass transfer coefficients have been incorporated to describe interphase mass transfer. The MTBE mole fraction in the bottom stream is found to be 0.9914 in the equilibrium model (EQM) and only 0.9904 for the RBM when the same column configuration is preserved. Individual tray efficiencies were incorporated in the EQM and simulations were carried out. Dynamic simulations have also been carried out for the two column configurations and compared.

Keywords: Aspen Plus, equilibrium stage model, methyl tertiary-butyl ether, rate based model.

5852 Stochastic Model Predictive Control for Linear Discrete-Time Systems with Random Dither Quantization

Authors: Tomoaki Hashimoto

Abstract:

Recently, feedback control systems using random dither quantizers have been proposed for linear discrete-time systems. However, the constraints imposed on state and control variables have not yet been taken into account for the design of feedback control systems with random dither quantization. Model predictive control is a kind of optimal feedback control in which control performance over a finite future is optimized with a performance index that has a moving initial and terminal time. An important advantage of model predictive control is its ability to handle constraints imposed on state and control variables. Based on the model predictive control approach, the objective of this paper is to present a control method that satisfies probabilistic state constraints for linear discrete-time feedback control systems with random dither quantization. In other words, this paper provides a method for solving the optimal control problems subject to probabilistic state constraints for linear discrete-time feedback control systems with random dither quantization.
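
A hedged sketch of a non-subtractive random dither quantizer inside a simple discrete-time feedback loop is shown below; the system matrices and the fixed gain are placeholders, and the actual model predictive controller with probabilistic state constraints is not implemented here.

    import numpy as np

    rng = np.random.default_rng(0)
    delta = 0.1                         # quantization step

    def dithered_quantize(u):
        # Random dither: add uniform noise on (-delta/2, delta/2) before rounding, which helps
        # decorrelate the quantization error from the input signal u.
        d = rng.uniform(-delta / 2, delta / 2)
        return delta * np.round((u + d) / delta)

    # Closed-loop simulation of x[k+1] = A x[k] + B q(u[k]) with a fixed feedback gain.
    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.005], [0.1]])
    K = np.array([[-0.8, -1.2]])        # placeholder gain; an MPC law would replace this
    x = np.array([[1.0], [0.0]])
    for k in range(100):
        u = float(K @ x)
        x = A @ x + B * dithered_quantize(u)
    print("final state:", x.ravel())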

Keywords: Optimal control, stochastic systems, discrete-time systems, probabilistic constraints, random dither quantization.

5851 Nongovernmental Organisations’ Sustainable Strategic Planning and Its Impact on Donors’ Loyalty

Authors: Farah Mahmoud Attallah, Sara El-Deeb

Abstract:

The non-profit sector has been growing strongly alongside the rise of sustainable development in developed and developing countries. Most economies place high expectations on this sector, believing that nongovernmental organizations (NGOs) are one of the main sources of relief during crises worldwide. However, with the rising number of NGOs comes their inability to sustain their performance and fundraising. Additionally, donors, who are considered the key partners of these organizations, have become knowledgeable about the sector, which has made them more demanding and has put pressure on the organizations to demonstrate a valuable return to the economy in exchange for donations. This research aims to study the impact of a sustainable strategic planning model on raising loyal donors; the proposed model of this research presents several independent variables and determines their impact on donors' intention to become loyal.

Keywords: Non-profit sector, non-governmental organizations, strategic planning, sustainable business model.

5850 Ion Thruster Grid Lifetime Assessment Based on Its Structural Failure

Authors: Juan Li, Jiawen Qiu, Yuchuan Chu, Tianping Zhang, Wei Meng, Yanhui Jia, Xiaohui Liu

Abstract:

This article develops a numerical 3D model of sputter erosion depth in an ion thruster optic system using the IFE-PIC (Immersed Finite Element Particle-in-Cell) and Monte Carlo methods, and calculates the sputter erosion rate of the downstream surface of the accelerator grid; the results are compared with LIPS-200 life test data. The results of the numerical model are in reasonable agreement with the measured data. Finally, we predicted the lifetime of the 20 cm diameter ion thruster from the erosion data obtained with the model. The result demonstrates that, under normal operating conditions, the erosion rate of the grooves worn into the downstream surface of the accelerator grid is 34.6 μm/1000 h, which means the conservative lifetime until structural failure occurs on the accelerator grid is 11500 hours.

Keywords: Ion thruster, accelerator grid, sputter erosion, lifetime assessment.

5849 Lung Cancer Detection and Multi Level Classification Using Discrete Wavelet Transform Approach

Authors: V. Veeraprathap, G. S. Harish, G. Narendra Kumar

Abstract:

Uncontrolled growth of abnormal cells in the lung in the form of a tumor can be either benign (non-cancerous) or malignant (cancerous). Patients with Lung Cancer (LC) have an average life expectancy of five years; early diagnosis, detection and prediction reduce the need for risky invasive surgery among the treatment options and increase the survival rate. Computed Tomography (CT), Positron Emission Tomography (PET), and Magnetic Resonance Imaging (MRI) are commonly used for earlier detection of cancer. A Gaussian filter together with a median filter is used for smoothing and noise removal, and Histogram Equalization (HE) is used for image enhancement, giving the best results. The lung cavities are extracted, the background other than the two lung cavities is completely removed, and the right and left lungs are segmented separately. Region property measurements (area, perimeter, diameter, centroid and eccentricity) are obtained for the segmented tumor image, while texture is characterized by Gray-Level Co-occurrence Matrix (GLCM) functions; the extracted features define the Region of Interest (ROI) given as input to the classifiers. Two levels of classification are employed: K-Nearest Neighbor (KNN) is used to determine whether the patient's condition is normal or abnormal, while an Artificial Neural Network (ANN) is used to identify the cancer stage. The Discrete Wavelet Transform (DWT) algorithm is used for the main feature extraction, leading to the best efficiency. The developed technique shows encouraging results for real-time information and online detection in future research.
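
As an illustrative sketch only, the code below extracts a few DWT and GLCM texture features from image slices and feeds them to a KNN classifier for the first (normal/abnormal) level; the data are random stand-ins, the feature set is an assumption, and the ANN staging and segmentation steps are omitted (skimage ≥ 0.19 naming is assumed for graycomatrix/graycoprops).

    import numpy as np
    import pywt
    from skimage.feature import graycomatrix, graycoprops
    from sklearn.neighbors import KNeighborsClassifier

    def slice_features(img):
        # img: 2-D uint8 slice (or segmented ROI). DWT approximation statistics + GLCM texture.
        cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), "haar")
        glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                            symmetric=True, normed=True)
        return np.array([cA.mean(), cA.std(),
                         graycoprops(glcm, "contrast")[0, 0],
                         graycoprops(glcm, "homogeneity")[0, 0],
                         graycoprops(glcm, "energy")[0, 0]])

    # Level-1 classification (normal vs. abnormal) with KNN on such features.
    rng = np.random.default_rng(0)
    images = rng.integers(0, 256, size=(40, 64, 64), dtype=np.uint8)  # stand-in slices
    labels = rng.integers(0, 2, size=40)                              # 0 = normal, 1 = abnormal
    X = np.stack([slice_features(im) for im in images])
    knn = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
    print(knn.predict(X[:5]))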

Keywords: Artificial Neural Networks (ANN), Discrete Wavelet Transform (DWT), Gray-Level Co-occurrence Matrix (GLCM), K-Nearest Neighbor (KNN), Region of Interest (ROI).

5848 Enhanced Coagulation of Disinfection By-Products Precursors in Porsuk Water Resource, Eskisehir

Authors: Zehra Yigit, Hatice Inan, Guven Seydioglu, Vedat Uyak

Abstract:

Natural organic matter (NOM) is a heterogeneous mixture of organic compounds that enters water bodies from animal and plant remains and from domestic and industrial wastes. Research has shown that NOM is a likely precursor material for disinfection by-products (DBPs). Chlorine is very commonly used for disinfection; when NOM and chlorine react, trihalomethanes (THMs) and haloacetic acids (HAAs), which are carcinogenic to humans, are produced. The aim of this study is to investigate NOM removal by enhanced coagulation from the drinking water source of Eskisehir, which is supplied from the Porsuk Dam. Recently, the Porsuk Dam water has become highly polluted and the NOM concentration is therefore increasing. Enhanced coagulation studies were evaluated by measurement of Dissolved Organic Carbon (DOC), UV absorbance at 254 nm (UV254), and different trihalomethane formation potential (THMFP) tests. The results of jar test experiments showed that NOM can be removed from water with an efficiency of about 40-50% by enhanced coagulation. The optimum coagulant type and coagulant dosages were determined using FeCl3 and alum.

Keywords: Chlorination, Disinfection by-products, DOC, Enhanced Coagulation, NOM, Porsuk, UV254.

5847 A Conservative Multi-block Algorithm for Two-dimensional Numerical Model

Authors: Yaoxin Zhang, Yafei Jia, Sam S.Y. Wang

Abstract:

A multi-block algorithm and its implementation in two-dimensional finite element numerical model CCHE2D are presented. In addition to a conventional Lagrangian Interpolation Method (LIM), a novel interpolation method, called Consistent Interpolation Method (CIM), is proposed for more accurate information transfer across the interfaces. The consistent interpolation solves the governing equations over the auxiliary elements constructed around the interpolation nodes using the same numerical scheme used for the internal computational nodes. With the CIM, the momentum conservation can be maintained as well as the mass conservation. An imbalance correction scheme is used to enforce the conservation laws (mass and momentum) across the interfaces. Comparisons of the LIM and the CIM are made using several flow simulation examples. It is shown that the proposed CIM is physically more accurate and produces satisfactory results efficiently.

Keywords: Multi-block algorithm, conservation, interpolation, numerical model, flow simulation.

5846 Numerical Simulation of the Dynamic Behavior of a LaNi5 Water Pumping System

Authors: Miled Amel, Ben Maad Hatem, Askri Faouzi, Ben Nasrallah Sassi

Abstract:

A metal hydride water pumping system uses hydrogen as the working fluid to pump water at low head and high discharge. The principal operation of this pump is based on the desorption of hydrogen at high pressure and its absorption at low pressure by a metal hydride. This work is devoted to studying the dynamic behavior of a metal hydride pump (MHP) using an unsteady model and LaNi5 as the hydriding alloy. This study shows that with the MHP it is possible to pump 340 l/kg-cycle of water in 15,000 s using 1 kg of LaNi5 at a desorption temperature of 360 K, a pumping head equal to 5 m and a desorption gear ratio equal to 33. This study also reveals that the error given by the steady model, using LaNi5, is about 2%. A dimensional mathematical model and the governing equations of the pump were presented to predict the coupled heat and mass transfer within the MHP. Then, a numerical simulation is carried out to present the time evolution of the specific water discharge and to test the effect of different parameters (desorption temperature, absorption temperature, desorption gear ratio) on the performance of the water pumping system (specific water discharge, pumping efficiency and pumping time). In addition, a comparison between results obtained with the steady and unsteady models is performed for different hydride masses. Finally, a geometric configuration of the reactor is simulated to optimize the pumping time.

Keywords: Dynamic behavior, unsteady model, LaNi5, performance of the water pumping system.

5845 High-Speed Particle Image Velocimetry of the Flow around a Moving Train Model with Boundary Layer Control Elements

Authors: Alexander Buhr, Klaus Ehrenfried

Abstract:

Trackside-induced airflow velocities, also known as slipstream velocities, are an important criterion for the design of high-speed trains. The maximum permitted values are given by the Technical Specifications for Interoperability (TSI) and have to be checked in the approval process. For train manufacturers it is of great interest to know in advance how new train geometries would perform in TSI tests. The Reynolds number in moving model experiments is lower than at full scale; in particular, the limited model length leads to a thinner boundary layer at the rear end. The hypothesis is that the boundary layer rolls up into characteristic flow structures in the train wake, in which the maximum flow velocities can be observed. The idea is to enlarge the boundary layer using roughness elements at the train model head so that the ratio between the boundary layer thickness and the car width at the rear end is comparable to a full-scale train. This may lead to similar flow structures in the wake and better prediction accuracy for TSI tests. In this case, the design of the roughness elements is limited by the moving model rig. Small rectangular roughness shapes are used to get a sufficient effect on the boundary layer, while the elements are robust enough to withstand the high accelerating and decelerating forces during the test runs. For this investigation, High-Speed Particle Image Velocimetry (HS-PIV) measurements on an ICE3 train model have been realized in the moving model rig of the DLR in Göttingen, the so-called tunnel simulation facility Göttingen (TSG). The flow velocities within the boundary layer are analysed in a plane parallel to the ground. The height of the plane corresponds to a test position in the EN standard (TSI). Three different shapes of roughness elements are tested. The boundary layer thickness and displacement thickness as well as the momentum thickness and the form factor are calculated along the train model. Conditional sampling is used to analyse the size and dynamics of the flow structures at the time of maximum velocity in the train wake behind the train. As expected, larger roughness elements increase the boundary layer thickness and lead to larger flow velocities in the boundary layer and in the wake flow structures. The boundary layer thickness, displacement thickness and momentum thickness are increased by using larger roughness elements, especially when applied at heights close to the measuring plane. The roughness elements also cause high fluctuations in the form factors of the boundary layer. Behind the roughness elements, the form factors rapidly approach constant values. This indicates that the boundary layer, while growing slowly along the second half of the train model, has reached a state of equilibrium.
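
The integral boundary-layer quantities mentioned above follow standard definitions; the sketch below evaluates them from a velocity profile by numerical integration, using a synthetic 1/7-power-law profile as a stand-in for the PIV data.

    import numpy as np

    def boundary_layer_parameters(y, u, u_inf):
        """Integral boundary-layer quantities from a measured velocity profile u(y)."""
        ratio = u / u_inf
        delta_star = np.trapz(1.0 - ratio, y)            # displacement thickness
        theta = np.trapz(ratio * (1.0 - ratio), y)       # momentum thickness
        H = delta_star / theta                           # shape (form) factor
        delta99 = y[np.argmax(ratio >= 0.99)]            # 99% boundary-layer thickness
        return delta99, delta_star, theta, H

    # Synthetic 1/7-power-law profile as a stand-in for the PIV-measured profile.
    y = np.linspace(0.0, 0.05, 200)                      # wall-normal distance (m)
    u = 60.0 * np.clip(y / 0.04, 0, 1) ** (1.0 / 7.0)    # velocity (m/s), delta ~ 0.04 m
    print(boundary_layer_parameters(y, u, 60.0))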

Keywords: Boundary layer, high-speed PIV, ICE3, moving train model, roughness elements.

5844 A Robust and Efficient Segmentation Method Applied for Cardiac Left Ventricle with Abnormal Shapes

Authors: Peifei Zhu, Zisheng Li, Yasuki Kakishita, Mayumi Suzuki, Tomoaki Chono

Abstract:

Segmentation of left ventricle (LV) from cardiac ultrasound images provides a quantitative functional analysis of the heart to diagnose disease. Active Shape Model (ASM) is widely used for LV segmentation, but it suffers from the drawback that initialization of the shape model is not sufficiently close to the target, especially when dealing with abnormal shapes in disease. In this work, a two-step framework is improved to achieve a fast and efficient LV segmentation. First, a robust and efficient detection based on Hough forest localizes cardiac feature points. Such feature points are used to predict the initial fitting of the LV shape model. Second, ASM is applied to further fit the LV shape model to the cardiac ultrasound image. With the robust initialization, ASM is able to achieve more accurate segmentation. The performance of the proposed method is evaluated on a dataset of 810 cardiac ultrasound images that are mostly abnormal shapes. This proposed method is compared with several combinations of ASM and existing initialization methods. Our experiment results demonstrate that accuracy of the proposed method for feature point detection for initialization was 40% higher than the existing methods. Moreover, the proposed method significantly reduces the number of necessary ASM fitting loops and thus speeds up the whole segmentation process. Therefore, the proposed method is able to achieve more accurate and efficient segmentation results and is applicable to unusual shapes of heart with cardiac diseases, such as left atrial enlargement.

Keywords: Hough forest, active shape model, segmentation, cardiac left ventricle.

5843 Optimal Model Order Selection for Transient Error Autoregressive Moving Average (TERA) MRI Reconstruction Method

Authors: Abiodun M. Aibinu, Athaur Rahman Najeeb, Momoh J. E. Salami, Amir A. Shafie

Abstract:

An alternative approach to the use of the Discrete Fourier Transform (DFT) for Magnetic Resonance Imaging (MRI) reconstruction is the use of a parametric modeling technique. This method is suitable for problems in which the image can be modeled by explicit known source functions with a few adjustable parameters. Despite the success reported in the use of the modeling technique as an alternative MRI reconstruction technique, two important problems constitute challenges to the applicability of this method: estimation of the model order and determination of the model coefficients. In this paper, five suggested methods of evaluating the model order have been assessed: the Final Prediction Error (FPE), the Akaike Information Criterion (AIC), the Residual Variance (RV), the Minimum Description Length (MDL) and the Hannan and Quinn (HNQ) criterion. These criteria were evaluated on MRI data sets using the Transient Error Reconstruction Algorithm (TERA). The result for each criterion is compared to the result obtained with a fixed-order technique, and three measures of similarity were evaluated. The results obtained show that the use of MDL gives the highest measure of similarity to that obtained with a fixed-order technique.
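
A hedged sketch of order selection for a simple AR model is given below; FPE, AIC and MDL are computed from the least-squares residual variance in one of their common forms, on a synthetic signal rather than MRI k-space data, and the RV/HNQ criteria and the TERA reconstruction itself are not reproduced.

    import numpy as np

    def ar_residual_variance(x, p):
        # Least-squares fit of an AR(p) model x[n] = sum_k a_k x[n-k] + e[n]; return residual variance.
        N = len(x)
        X = np.column_stack([x[p - k - 1:N - k - 1] for k in range(p)])
        y = x[p:]
        a, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ a
        return np.mean(resid ** 2)

    rng = np.random.default_rng(0)
    # Synthetic stable AR(3) signal as a stand-in for a data row to be modeled.
    x = np.zeros(512)
    for n in range(3, 512):
        x[n] = 1.3 * x[n - 1] - 0.6 * x[n - 2] + 0.1 * x[n - 3] + rng.normal(scale=0.1)

    N = len(x)
    for p in range(1, 11):
        s2 = ar_residual_variance(x, p)
        fpe = s2 * (N + p) / (N - p)          # Final Prediction Error (one common form)
        aic = N * np.log(s2) + 2 * p          # Akaike Information Criterion
        mdl = N * np.log(s2) + p * np.log(N)  # Minimum Description Length
        print(f"p={p:2d}  FPE={fpe:.5f}  AIC={aic:8.1f}  MDL={mdl:8.1f}")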

Keywords: Autoregressive Moving Average (ARMA), Magnetic Resonance Imaging (MRI), Parametric modeling, Transient Error.

5842 In situ Real-Time Multivariate Analysis of Methanolysis Monitoring of Sunflower Oil Using FTIR

Authors: Pascal Mwenge, Tumisang Seodigeng

Abstract:

The combination of world population growth and the third industrial revolution has led to a high demand for fuels. On the other hand, the decrease of global fossil fuel deposits and the environmental air pollution caused by these fuels have compounded the challenges the world faces due to its need for energy. Therefore, new forms of environmentally friendly and renewable fuels such as biodiesel are needed. The primary analytical techniques for monitoring methanolysis yield have been chromatography and spectroscopy; these methods have been proven reliable but are demanding, costly and do not provide real-time monitoring. In this work, the in situ monitoring of biodiesel production from sunflower oil using FTIR (Fourier Transform Infrared) spectroscopy has been studied; the study was performed using an EasyMax Mettler Toledo reactor equipped with a DiComp (diamond) probe. The quantitative monitoring of methanolysis was performed by building a quantitative model with multivariate calibration using the iC Quant module from the iC IR 7.0 software. 15 samples of known concentrations, taken in duplicate for model calibration and cross-validation, were used for the modelling, and the data were pre-processed using mean centering and variance scaling, spectrum math square root and solvent subtraction. These pre-processing methods improved the performance indexes from 7.98 to 0.0096, 11.2 to 3.41, 6.32 to 2.72, and 0.9416 to 0.9999 for RMSEC, RMSECV, RMSEP and R2Cum, respectively. The R2 values of 1 (training), 0.9918 (test) and 0.9946 (cross-validation) indicated the fitness of the model. The model was tested against a univariate model; small discrepancies were observed at low concentrations due to unmodelled intermediates, but the two were quite close at concentrations above 18%. The software eliminated the complexity of the Partial Least Squares (PLS) chemometrics. It was concluded that the model obtained could be used to monitor the methanolysis of sunflower oil at industrial and lab scale.
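
For illustration, the sketch below builds a small PLS calibration with scikit-learn and reports RMSEC/RMSECV on synthetic mean-centred spectra; the spectra, component count and reference values are placeholders, and the iC Quant pre-processing chain is not reproduced.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.metrics import mean_squared_error, r2_score
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(0)
    # Synthetic stand-in for FTIR spectra (rows) with known conversion y as reference values.
    n_samples, n_wavenumbers = 30, 400
    y = rng.uniform(0, 100, n_samples)                      # % conversion (reference values)
    peaks = np.exp(-0.5 * ((np.arange(n_wavenumbers) - 150) / 10) ** 2)
    X = np.outer(y, peaks) + rng.normal(scale=0.5, size=(n_samples, n_wavenumbers))
    X -= X.mean(axis=0)                                      # mean centering, as in the abstract

    pls = PLSRegression(n_components=3)
    y_cv = cross_val_predict(pls, X, y, cv=5).ravel()
    rmsecv = np.sqrt(mean_squared_error(y, y_cv))
    pls.fit(X, y)
    rmsec = np.sqrt(mean_squared_error(y, pls.predict(X).ravel()))
    print(f"RMSEC={rmsec:.3f}  RMSECV={rmsecv:.3f}  R2(CV)={r2_score(y, y_cv):.4f}")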

Keywords: Biodiesel, calibration, chemometrics, FTIR, methanolysis, multivariate analysis, transesterification.

5841 Effect of Na2O Content on Durability of Geopolymer Mortars in Sulphuric Acid

Authors: Suresh Thokchom, Partha Ghosh, Somnath Ghosh

Abstract:

This paper presents the findings of an experimental investigation into the effect of alkali content in geopolymer mortar specimens exposed to sulphuric acid. Geopolymer mortar specimens were manufactured from Class F fly ash by activation with a mixture of sodium hydroxide and sodium silicate solution containing 5% to 8% Na2O. The durability of the specimens was assessed by immersing them in 10% sulphuric acid solution and periodically monitoring surface deterioration and depth of dealkalization, changes in weight, and residual compressive strength over a period of 24 weeks. Microstructural changes in the specimens were studied with scanning electron microscopy (SEM) and EDAX. The alkali content in the activator solution significantly affects the durability of fly ash based geopolymer mortars in sulphuric acid. Specimens manufactured with higher alkali content performed better than those manufactured with lower alkali content. After 24 weeks in sulphuric acid, the specimens with 8% alkali still retained a residual strength as high as 55%.

Keywords: Alkali content, acid attack, compressive strength, geopolymer

5840 Urban Growth Prediction in Athens, Greece, Using Artificial Neural Networks

Authors: D. Triantakonstantis, D. Stathakis

Abstract:

Urban areas have expanded throughout the globe. Monitoring and modelling urban growth have become a necessity for sustainable urban planning and decision making. Urban prediction models are important tools for analyzing the causes and consequences of urban land use dynamics. The objective of this research paper is to analyze and model the urban change that occurred from 1990 to 2000 using CORINE land cover maps. The model was developed using drivers of urban change (such as road distance, slope, etc.) under an Artificial Neural Network modelling approach. Validation was achieved using a prediction map for 2006, which was compared with the real Urban Atlas map of 2006. The accuracy produced a Kappa index of agreement of 0.639 and a Cramer's V value of 0.648. These encouraging results indicate the importance of the developed urban growth prediction model, which, using a set of commonly available biophysical drivers, could serve as a management tool for the assessment of urban change.
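
A minimal sketch of an ANN-based change model evaluated with a Kappa index is shown below; the synthetic drivers, network size and class balance are assumptions, not the CORINE/Urban Atlas data used in the paper.

    import numpy as np
    from sklearn.metrics import cohen_kappa_score
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    # Synthetic drivers per pixel: e.g. distance to roads, slope, distance to existing urban, elevation.
    X = rng.normal(size=(5000, 4))
    p = 1 / (1 + np.exp(-(-1.0 - 0.8 * X[:, 0] - 0.5 * X[:, 1] - 1.2 * X[:, 2])))
    y = rng.binomial(1, p)                    # 1 = pixel converts to urban, 0 = stays non-urban

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    ann = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=0).fit(X_tr, y_tr)
    pred = ann.predict(X_te)
    print("Kappa index of agreement:", round(cohen_kappa_score(y_te, pred), 3))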

Keywords: Artificial Neural Networks, CORINE, Urban Atlas, Urban Growth Prediction.
