Search results for: heuristics methods.
3795 Tests for Gaussianity of a Stationary Time Series
Authors: Adnan Al-Smadi
Abstract:
One of the primary uses of higher-order statistics in signal processing has been the detection and estimation of non-Gaussian signals in Gaussian noise of unknown covariance. This is motivated by the ability of higher-order statistics to suppress additive Gaussian noise. In this paper, several methods to test a given process for non-Gaussianity are presented. These methods include the histogram plot, the kurtosis test, and hypothesis testing based on the cumulants and bispectrum of the available sequence. The hypothesis test is performed by constructing a statistic that tests whether the bispectrum of the given signal is non-zero. A zero bispectrum is not a proof of Gaussianity, so other tests such as the kurtosis test should also be employed. Examples are given to demonstrate the performance of the presented methods.
Keywords: Non-Gaussian, bispectrum, kurtosis, hypothesis testing, histogram.
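As a quick illustration of the kurtosis test mentioned above, the excess (Fisher) kurtosis of a sequence can be compared against zero, its value for a Gaussian process. The sketch below uses synthetic signals rather than any data from the paper:

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
gaussian = rng.normal(size=10_000)      # Gaussian reference
laplacian = rng.laplace(size=10_000)    # heavy-tailed, non-Gaussian

# Fisher (excess) kurtosis: ~0 for a Gaussian process, clearly non-zero otherwise.
for name, x in [("gaussian", gaussian), ("laplacian", laplacian)]:
    print(f"{name}: excess kurtosis = {kurtosis(x, fisher=True):.3f}")
```

A strongly non-zero excess kurtosis indicates non-Gaussianity, while a value near zero is, like a zero bispectrum, not by itself a proof of Gaussianity.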
3794 The Influence of the Commons Structure Modification on the Active Power Losses Allocation
Authors: O. Pop, C. Barbulescu, M. Nemes, St. Kilyeni
Abstract:
Tracing methods determine the contribution that power system sources make to supplying the loads. These methods can be used to assess transmission prices, but also to recover the fixed transmission cost. This paper presents the influence that modifying the commons structure has on the specific transfer price and on active power losses. The authors propose a power loss allocation method based on Kirschen's method. The system operator must rely on a few basic allocation principles, and the only information required is the power flows on the system branches and the modifications applied to the power system buses. The method is illustrated on the 25-bus test system developed within the Electrical Power Engineering Department in Timisoara, Romania.
Keywords: Power systems, P-U bus, P-Q bus, loss allocation, traceability methods.
3793 An Efficient 3D Animation Data Reduction Using Frame Removal
Authors: Jinsuk Yang, Choongjae Joo, Kyoungsu Oh
Abstract:
Existing methods that store and reproduce the animation data of every frame, as in vertex animation, cannot be used in mobile device environments because they consume large amounts of memory. Methods for reducing 3D animation data have therefore been studied extensively, and we propose a new one as follows. First, we find and remove the frames in which motion changes are small and store only the animation data of the remaining frames (those with large motion changes). When the animation is played, the removed frames are reconstructed by interpolating the remaining ones. Our key contribution is to compute the accelerations of the joints in each frame and the standard deviations of those accelerations from the joint locations of the 3D model, in order to find and delete the frames in which motion changes are small. The method can reduce data sizes by approximately 50% or more while preserving quality close to that of the original animations, so it is expected to be useful in mobile devices and other memory-limited environments.
Keywords: Data Reduction, Interpolation, Vertex Animation, 3D Animation.
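A minimal NumPy sketch of the frame-removal idea summarized in the abstract above; the selection rule (per-frame mean joint-acceleration magnitude above a fraction of its standard deviation), the threshold factor, and the synthetic motion data are assumptions for illustration, not the authors' exact criterion:

```python
import numpy as np

def reduce_animation(joints, k=0.5):
    """Drop low-motion frames and rebuild them by interpolation.

    joints : (F, J, 3) array of joint positions per frame.
    k      : threshold factor on the acceleration statistics (assumed rule).
    """
    F, J, _ = joints.shape
    # Second difference of position approximates per-frame joint acceleration.
    acc = np.zeros((F, J, 3))
    acc[1:-1] = joints[2:] - 2.0 * joints[1:-1] + joints[:-2]
    motion = np.linalg.norm(acc, axis=2).mean(axis=1)   # one scalar per frame

    keep = motion > k * motion.std()                    # frames with large motion changes
    keep[0] = keep[-1] = True                           # endpoints are always stored

    kept_idx = np.flatnonzero(keep)
    stored = joints[kept_idx]                           # this is all that would be saved

    # Playback: reconstruct every frame by linear interpolation of the stored ones.
    t = np.arange(F)
    rebuilt = np.empty_like(joints)
    for j in range(J):
        for c in range(3):
            rebuilt[:, j, c] = np.interp(t, kept_idx, stored[:, j, c])
    return kept_idx, rebuilt

# Toy usage with random smooth motion (not real animation data).
rng = np.random.default_rng(1)
frames = np.cumsum(rng.normal(scale=0.01, size=(120, 20, 3)), axis=0)
kept, rebuilt = reduce_animation(frames)
print(f"kept {kept.size}/{frames.shape[0]} frames, "
      f"max reconstruction error {np.abs(rebuilt - frames).max():.4f}")
```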
3792 Teaching Computer Programming to Diverse Students: A Comparative, Mixed-Methods, Classroom Research Study
Authors: Almudena Konrad, Tomás Galguera
Abstract:
Lack of motivation and interest is a serious obstacle to students' learning of computing skills, and a knowledge base on effective pedagogy and curricula for teaching computer programming is needed. This paper presents results from research evaluating a six-year project designed to teach complex concepts in computer programming collaboratively, while supporting students in continuing to develop their computational thinking and related coding skills individually. Using a quasi-experimental, mixed-methods design, the pedagogical approaches and methods were assessed in two contrasting groups of students with different socioeconomic status, gender, and age composition. Analyses of quantitative data from Likert-scale surveys and an evaluation rubric, combined with qualitative data from reflective writing exercises and semi-structured interviews, yielded convincing evidence of the project's success at both teaching and inspiring students.
Keywords: Computational thinking, computing education, computer programming curriculum, logic, teaching methods.
3791 Relative Suitability Evaluation of Two Methods of Particle-Size Analysis for Selected Soils of Sudan Savanna of Nigeria
Authors: B. A. Lawal, B. R. Singh, G. A. Babaji, P. A. Tsado
Abstract:
Two widely used methods based on the sedimentation principle (the Bouyoucos hydrometer and the International pipette) for particle-size analysis were comparatively evaluated on soils collected from various locations in the Sudan savanna of Nigeria, particularly from Sokoto and Zamfara States. The hydrometer method under-estimated the silt and over-estimated the clay content. In addition, the hydrometer reading proved difficult, and the hydrometer tended to submerge when floated for the clay reading in suspensions of very sandy soils (900 g kg⁻¹ sand). The results from the two methods were further validated by placing the data on the USDA soil textural triangle to determine their textural class names; 91.67% of the experimental soils retained the same textural class name irrespective of the method. Thus, the Bouyoucos hydrometer method may conveniently find a place in routine work in view of its simplicity, rapidity, and strong correlation with the pipette method.
Keywords: Hydrometer and pipette methods, particle-size analysis, sedimentation.
3790 A User Friendly Tool for Performance Evaluation of Different Reference Evapotranspiration Methods
Authors: Vijay Shankar
Abstract:
Evapotranspiration (ET) is a major component of the hydrologic cycle, and its accurate estimation is essential for hydrological studies. In the past, various estimation methods have been developed for different climatological data, and the accuracy of these methods varies with climatic conditions. Reference crop evapotranspiration (ET0) is a key variable in procedures established for estimating the evapotranspiration rates of agricultural crops; values of ET0 are used with crop coefficients for many aspects of irrigation and water resources planning and management. Numerous methods are used for estimating ET0. According to the internationally accepted procedures outlined in the United Nations Food and Agriculture Organization's Irrigation and Drainage Paper No. 56 (FAO-56), the Penman-Monteith equation is recommended for computing ET0 from ground-based climatological observations. In the present study, seven methods have been selected for performance evaluation. User-friendly software has been developed in the Visual Basic programming language, which allows a graphical environment to be built with little coding. For a given data availability, the software estimates reference evapotranspiration for any area and period for which data are available. The accuracy of the software has been checked against the examples given in FAO-56. The developed software is a user-friendly tool for estimating ET0 under different data availabilities and climatic conditions.
Keywords: Crop coefficient, Crop evapotranspiration, Field moisture, Irrigation Scheduling.
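For reference, the FAO-56 Penman-Monteith form that the abstract above takes as the recommended benchmark can be written as a short function. The sample inputs below are arbitrary illustrative values, not data from the study or from FAO-56:

```python
def et0_penman_monteith(Rn, G, T, u2, es, ea, delta, gamma):
    """FAO-56 Penman-Monteith reference evapotranspiration (mm/day).

    Rn, G  : net radiation and soil heat flux [MJ m-2 day-1]
    T      : mean daily air temperature at 2 m [deg C]
    u2     : wind speed at 2 m [m/s]
    es, ea : saturation and actual vapour pressure [kPa]
    delta  : slope of the saturation vapour pressure curve [kPa/deg C]
    gamma  : psychrometric constant [kPa/deg C]
    """
    num = 0.408 * delta * (Rn - G) + gamma * (900.0 / (T + 273.0)) * u2 * (es - ea)
    den = delta + gamma * (1.0 + 0.34 * u2)
    return num / den

# Illustrative values only (roughly a warm summer day at sea level).
print(round(et0_penman_monteith(Rn=13.3, G=0.0, T=25.0, u2=2.0,
                                es=3.17, ea=2.0, delta=0.189, gamma=0.0674), 2), "mm/day")
```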
3789 Effects of Different Drying Methods on the Properties of Viscose Single Jersey Fabrics
Authors: M. Kucukali Ozturk, Y. Beceren, B. Nergis
Abstract:
The study discussed in this paper was conducted to investigate the effects of different drying methods (line drying and tumble drying) on viscose single jersey fabrics knitted with ring yarn.
Keywords: Color change, dimensional properties, drying method, fabric tightness, physical properties.
3788 Numerical Study of Iterative Methods for the Solution of the Dirichlet-Neumann Map for Linear Elliptic PDEs on Regular Polygon Domains
Authors: A. G. Sifalakis, E. P. Papadopoulou, Y. G. Saridakis
Abstract:
A generalized Dirichlet-to-Neumann map is one of the main ingredients of a recently introduced method for analyzing linear elliptic PDEs, through which it became possible to couple the known and unknown components of the solution on the boundary of the domain without solving in its interior. For its numerical solution, a well-conditioned, quadratically convergent sine-collocation method was developed, which yields a linear system of equations whose coefficient matrix has point-diagonal diagonal blocks. This structural property, among others, motivated the use of iterative methods for its solution. In this work we present a conclusive numerical study of the behavior of classical (Jacobi and Gauss-Seidel) and Krylov subspace (GMRES and Bi-CGSTAB) iterative methods when applied to the solution of the Dirichlet-to-Neumann map associated with Laplace's equation on regular polygons with the same boundary conditions on all edges.
Keywords: Elliptic PDEs, Dirichlet to Neumann Map, Global Relation, Collocation, Iterative Methods, Jacobi, Gauss-Seidel, GMRES, Bi-CGSTAB.
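The classical stationary iterations named in the abstract above can be illustrated on a generic diagonally dominant linear system; the random test matrix below is only a stand-in, not the sine-collocation system from the paper:

```python
import numpy as np

def jacobi(A, b, x0, iters=50):
    D = np.diag(A)
    R = A - np.diagflat(D)
    x = x0.copy()
    for _ in range(iters):
        x = (b - R @ x) / D          # simultaneous update of all unknowns
    return x

def gauss_seidel(A, b, x0, iters=50):
    n = len(b)
    x = x0.copy()
    for _ in range(iters):
        for i in range(n):           # sweep, using already-updated entries
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
    return x

# Diagonally dominant test system (assumed, not the collocation matrix).
rng = np.random.default_rng(0)
n = 50
A = rng.normal(size=(n, n)) + n * np.eye(n)
b = rng.normal(size=n)
x0 = np.zeros(n)
for name, solver in [("Jacobi", jacobi), ("Gauss-Seidel", gauss_seidel)]:
    x = solver(A, b, x0)
    print(f"{name:12s} residual = {np.linalg.norm(A @ x - b):.2e}")
```

For the Krylov subspace methods, SciPy provides ready-made implementations in scipy.sparse.linalg.gmres and scipy.sparse.linalg.bicgstab.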
3787 Data Mining Classification Methods Applied in Drug Design
Authors: Mária Stachová, Lukáš Sobíšek
Abstract:
Data mining incorporates a group of statistical methods used to analyze a set of information, or a data set. It operates with models and algorithms, which are powerful tools with great potential. They can help people understand the patterns in a certain chunk of information, so data mining tools have a wide area of application. For example, in theoretical chemistry, data mining tools can be used to predict molecule properties or improve computer-assisted drug design. Classification analysis is one of the major data mining methodologies. The aim of this contribution is to create a classification model able to deal with a huge data set with high accuracy. For this purpose, logistic regression, Bayesian logistic regression and random forest models were built using the R software, and a Bayesian logistic regression model was also created in the Latent GOLD software. These classification methods belong to the supervised learning methods. It was necessary to reduce the dimension of the data matrix before constructing the models, and thus factor analysis (FA) was used. The models were applied to predict the biological activity of molecules, potential new drug candidates.
Keywords: data mining, classification, drug design, QSAR
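A hedged scikit-learn sketch of the workflow described above (factor-analysis dimension reduction followed by logistic regression and a random forest). The synthetic descriptor matrix and all parameter choices are placeholders; the original models were built in R and Latent GOLD:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import FactorAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for a molecular descriptor matrix with a binary activity label.
X, y = make_classification(n_samples=2000, n_features=200, n_informative=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "FA + logistic regression": make_pipeline(FactorAnalysis(n_components=30),
                                              LogisticRegression(max_iter=1000)),
    "FA + random forest": make_pipeline(FactorAnalysis(n_components=30),
                                        RandomForestClassifier(n_estimators=200, random_state=0)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: test accuracy = {model.score(X_te, y_te):.3f}")
```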
3786 Development of a Multi-Factorial Instrument for Accident Analysis Based on Systemic Methods
Authors: C. V. Pietreanu, S. E. Zaharia, C. Dinu
Abstract:
The present research is built on three major pillars. It begins with some considerations on accident investigation methods, pointing out both the defining aspects of and the differences between linear and non-linear analysis. Traditional linear accident analysis describes accidents as a sequence of events, while the latest systemic models outline interdependencies between different factors and describe how processes evolve relative to a specific (normal) situation. Since linear and non-linear accident analysis methods have specific limitations, the second pillar aims to uncover the drawbacks of systemic models, which becomes a starting point for developing new directions for identifying risks or data closer to the cause of incidents and accidents. Since communication is a critical issue in human-factor interaction and has been shown to underlie the problems caused by breakdowns in different communication procedures, the third pillar elaborates a new error-modeling instrument suitable for risk assessment and accident analysis.
Keywords: Accident analysis, multi-factorial error modeling, risk, systemic methods.
3785 Application of Scanning Electron Microscopy and X-Ray Evaluation of the Main Digestion Methods for Determination of Macroelements in Plant Tissue
Authors: Krasimir I. Ivanov, Penka S. Zapryanova, Stefan V. Krustev, Violina R. Angelova
Abstract:
Three commonly used digestion methods (dry ashing, acid digestion, and microwave digestion) in different variants were compared for the digestion of tobacco leaves. Three main macroelements (K, Ca and Mg) were analysed using an AAS spectrometer (Spectra AA 220, Varian, Australia). The accuracy and precision of the measurements were evaluated using the Polish reference material CTR-VTL-2 (Virginia tobacco leaves). To elucidate the problems with elemental recovery, X-ray and SEM-EDS analyses of all residues after digestion were performed. The X-ray investigation showed the formation of KClO4 when HClO4 was used as part of the acid mixture, and the use of HF in the Ca and Mg determination led to the formation of CaF2 and MgF2. The results were confirmed by energy-dispersive X-ray microanalysis. The SPSS program for Windows was used for statistical data processing.
Keywords: Digestion methods, determination of macroelements, plant tissue.
3784 Simulation of a Multi-Component Transport Model for the Chemical Reaction of a CVD-Process
Abstract:
In this paper we present discretization and decomposition methods for a multi-component transport model of a chemical vapor deposition (CVD) process. CVD processes are used to manufacture deposition layers or bulk materials; in our transport model we simulate the deposition of thin layers. The microscopic model is based on the heavy particles, which are derived by approximately solving a linearized multi-component Boltzmann equation. For the drift process of the particles we propose diffusion-reaction equations, as well as equations for the effects of heat conduction. We concentrate on solving the diffusion-reaction equation with analytical and numerical methods. For the chemical processes, modelled with reaction equations, we propose decomposition methods and decouple the multi-component model into simpler systems of differential equations. In the numerical experiments we present the computational results of the proposed models.
Keywords: Chemical reactions, chemical vapor deposition, convection-diffusion-reaction equations, decomposition methods, multi-component transport.
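As a hedged illustration of decoupling by operator splitting, the toy problem below treats a single 1D diffusion-reaction equation with Lie splitting; it is a generic example, not the authors' multi-component CVD transport model:

```python
import numpy as np

# Toy 1D diffusion-reaction equation u_t = D u_xx - k u, solved by Lie splitting:
# each time step treats diffusion and reaction as separate, simpler sub-problems.
D, k = 1.0e-2, 0.5
nx, L, dt, steps = 101, 1.0, 1.0e-3, 2000
dx = L / (nx - 1)
x = np.linspace(0.0, L, nx)
u = np.exp(-200.0 * (x - 0.5) ** 2)          # initial concentration bump

assert D * dt / dx**2 <= 0.5, "explicit diffusion step would be unstable"
for _ in range(steps):
    # 1) diffusion sub-step (explicit finite differences, boundaries held fixed)
    u[1:-1] += D * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    # 2) reaction sub-step (exact solution of u_t = -k u over one step dt)
    u *= np.exp(-k * dt)

print(f"remaining mass after the splitting run: {u.sum() * dx:.4f}")
```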
3783 Improved Power Spectrum Estimation for RR-Interval Time Series
Authors: B. S. Saini, Dilbag Singh, Moin Uddin, Vinod Kumar
Abstract:
The RR-interval series is non-stationary and unevenly spaced in time. Estimating its power spectral density (PSD) using traditional techniques such as the FFT requires resampling at uniform intervals, and researchers have used different interpolation techniques as resampling methods. All these resampling methods introduce a low-pass filtering effect in the power spectrum. The Lomb transform is a means of obtaining PSD estimates directly from the irregularly sampled RR-interval series, thus avoiding resampling. In this work, the superiority of the Lomb transform over the FFT-based approach, after applying linear and cubic-spline interpolation as resampling methods, has been established in terms of reproduction of the exact frequency locations as well as the relative magnitudes of each spectral component.
Keywords: HRV, Lomb Transform, Resampling, RR-intervals.
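A small SciPy sketch contrasting the two routes discussed above on a synthetic unevenly sampled series (not real RR data); the frequency grid and the resampling rate are arbitrary choices:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import lombscargle, periodogram

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 100.0, 400))            # uneven sample times (s)
y = np.sin(2 * np.pi * 0.1 * t) + 0.3 * rng.normal(size=t.size)
y -= y.mean()

# Route 1: Lomb periodogram directly on the uneven samples (angular frequencies).
freqs = np.linspace(0.01, 0.5, 500)                  # Hz
pxx_lomb = lombscargle(t, y, 2 * np.pi * freqs)

# Route 2: cubic-spline resampling on a uniform grid, then an ordinary periodogram.
fs = 4.0
tu = np.arange(t[0], t[-1], 1.0 / fs)
yu = CubicSpline(t, y)(tu)
f_fft, pxx_fft = periodogram(yu, fs=fs)

print("Lomb peak at %.3f Hz" % freqs[np.argmax(pxx_lomb)])
print("FFT  peak at %.3f Hz" % f_fft[np.argmax(pxx_fft)])
```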
3782 Emerging Methods as a Tool for Obtaining Subconscious Feedback in E-commerce and Marketplace
Authors: J. Berčík, A. Mravcová, A. Rusková, P. Jurčišin, R. Virágh
Abstract:
The online world is changing every day, and with this comes the emergence and development of new business models. One of them is the sale of several types of products in one place. This type of selling, in the form of online marketplaces, has developed positively in recent years and represents an alternative to brick-and-mortar shopping centers. The main philosophy is to buy several products under one roof. Examples of popular e-commerce marketplaces are Amazon, eBay and Allegro, and their share of total e-commerce turnover is expected to double in the coming years. The paper highlights possibilities for testing web applications and online marketplaces using emerging methods such as a stationary eye camera (eye tracking) and facial analysis (FaceReading).
Keywords: Emerging methods, consumer neuroscience, e-commerce, marketplace, user experience, user interface.
3781 Feature Selection Methods for an Improved SVM Classifier
Authors: Daniel Morariu, Lucian N. Vintan, Volker Tresp
Abstract:
Text categorization is the problem of classifying text documents into a set of predefined classes. After a preprocessing step, the documents are typically represented as large sparse vectors. When training classifiers on large collections of documents, both the time and memory restrictions can be quite prohibitive. This justifies the application of feature selection methods to reduce the dimensionality of the document-representation vector. In this paper, three feature selection methods are evaluated: Random Selection, Information Gain (IG) and Support Vector Machine feature selection (called SVM_FS). We show that the best results were obtained with the SVM_FS method for a relatively small dimension of the feature vector. We also present a novel method to better correlate the SVM kernel's parameters (polynomial or Gaussian kernel).
Keywords: Feature Selection, Learning with Kernels, Support Vector Machine, Classification.
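A hedged scikit-learn sketch of information-gain-style feature selection in front of a linear SVM, using mutual information as the ranking score; the synthetic data and the chosen feature counts are placeholders, and the paper's SVM_FS method is not reproduced here:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Synthetic stand-in for a high-dimensional document-representation matrix.
X, y = make_classification(n_samples=1500, n_features=1000, n_informative=40,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for k in (50, 200, 1000):   # k=1000 keeps every feature, i.e. no reduction
    clf = make_pipeline(SelectKBest(mutual_info_classif, k=k), LinearSVC())
    clf.fit(X_tr, y_tr)
    print(f"k={k:4d} features -> accuracy {clf.score(X_te, y_te):.3f}")
```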
3780 Computable Function Representations Using Effective Chebyshev Polynomial
Authors: Mohammed A. Abutheraa, David Lester
Abstract:
We show that Chebyshev Polynomials are a practical representation of computable functions on the computable reals. The paper presents error estimates for common operations and demonstrates that Chebyshev Polynomial methods would be more efficient than Taylor Series methods for the evaluation of transcendental functions.
Keywords: Approximation Theory, Chebyshev Polynomial, Computable Functions, Computable Real Arithmetic, Integration, Numerical Analysis.
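A brief NumPy illustration of approximating a transcendental function by a Chebyshev expansion; the function, degree and interval are arbitrary examples, not those analysed in the paper:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Interpolate exp(x) on [-1, 1] at Chebyshev nodes (an arbitrary example function).
deg = 15
k = np.arange(deg + 1)
nodes = np.cos((2 * k + 1) * np.pi / (2 * (deg + 1)))
coeffs = C.chebfit(nodes, np.exp(nodes), deg)

# Evaluate the expansion and check the worst-case error on a fine grid.
xs = np.linspace(-1.0, 1.0, 2001)
err = np.max(np.abs(C.chebval(xs, coeffs) - np.exp(xs)))
print(f"degree {deg} Chebyshev approximation of exp(x): max error {err:.2e}")
```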
3779 Adhesion Strength Evaluation Methods in Thermally Sprayed Coatings
Authors: M.Jalali Azizpour, H.Mohammadi majd, Milad Jalali, H.Fasihi
Abstract:
The techniques for estimating the adhesive and cohesive strength of high velocity oxy-fuel (HVOF) thermal spray coatings are discussed and compared. The development trend and the latest investigations are reviewed, with a focus on the benefits and limitations of these methods for different processes and materials.
Keywords: Adhesion, Bonding strength, Cohesion, HVOF Thermal spray
3778 Comparison of Experimental Relationships to Determine Flow Discharge in Meandering Compound Channels Using M5 Decision Tree Model
Authors: Mehdi Kheradmand, Mehdi Azhdary Moghaddam, Abdolreza Zahiri, Khalil Ghorbani
Abstract:
This research compares the results of the major methods for determining flow discharge using experimental relationships with the results of the M5 decision tree model, for meandering compound sections in several laboratory channels. It was found that the M5 decision tree model achieved greater accuracy in the statistical parameters than the aforementioned methods. This suggests that the M5 decision tree model considerably improves the accuracy of the calculated flow discharge in meandering compound channels.
Keywords: Stage-discharge relationship, M5 decision tree model, compound section, meandering compound channel.
3777 Convex Restrictions for Outage Constrained MU-MISO Downlink under Imperfect Channel State Information
Authors: A. Preetha Priyadharshini, S. B. M. Priya
Abstract:
In this paper, we consider the MU-MISO downlink scenario under imperfect channel state information (CSI). The main issue under imperfect CSI is to keep each user's rate-outage probability below a given threshold level, and such rate-outage constraints present significant analytical challenges. Many probabilistic methods are used to solve the transmit optimization problem under imperfect CSI. Here, convex restriction methods based on a decomposition-based large deviation inequality and on a Bernstein-type inequality are used to solve the optimization problem under imperfect CSI. These methods are used to achieve improved output quality with lower complexity, and they provide a safe, tractable approximation of the original rate-outage constraints. Based on these implementations, performance has been evaluated in terms of feasible rate and average transmission power. The simulation results show that both methods offer significantly improved outage quality and lower computational complexity.
Keywords: Imperfect channel state information, outage probability, multiuser multi-input single-output.
3776 Calibration of Syringe Pumps Using Interferometry and Optical Methods
Authors: E. Batista, R. Mendes, A. Furtado, M. C. Ferreira, I. Godinho, J. A. Sousa, M. Alvares, R. Martins
Abstract:
Syringe pumps are commonly used for drug delivery in hospitals and clinical environments. These instruments are critical in neonatology and oncology, where any variation in the flow rate and drug dosing quantity can lead to severe incidents and even the death of the patient. Therefore, it is very important to determine the accuracy and precision of these devices using suitable calibration methods. The Volume Laboratory of the Portuguese Institute for Quality (LVC/IPQ) uses two different methods to calibrate syringe pumps from 16 nL/min up to 20 mL/min. The interferometric method uses an interferometer to monitor the distance travelled by the pusher block of the syringe pump in order to determine the flow rate. Knowing the internal diameter of the syringe with very high precision, the travelled distance, and the time needed to cover that distance, it is possible to calculate the flow rate of the fluid inside the syringe and its uncertainty. As an alternative to the gravimetric and interferometric methods, a methodology based on optical technology was also developed to measure flow rates; it mainly relies on measuring the increase in the volume of a drop over time. The objective of this work is to compare the results of the calibration of two syringe pumps using the different methodologies described above. The obtained results were consistent for the three methods used, and the uncertainty values were very similar for all three, being higher for the optical drop method due to setup limitations.
Keywords: Calibration, interferometry, syringe pump, optical method, uncertainty.
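The flow-rate calculation behind the interferometric method described above reduces to the syringe cross-section times the measured displacement divided by the elapsed time; the numbers below are illustrative only, not values from the LVC/IPQ calibrations:

```python
import math

def syringe_flow_rate(diameter_mm, distance_mm, time_s):
    """Flow rate in mL/min from syringe bore, pusher displacement and elapsed time."""
    area_mm2 = math.pi * (diameter_mm / 2.0) ** 2
    volume_ml = area_mm2 * distance_mm / 1000.0      # 1 mL = 1000 mm^3
    return volume_ml * 60.0 / time_s

# Illustrative example: 10 mm bore, 0.5 mm of travel in 60 s.
print(f"{syringe_flow_rate(10.0, 0.5, 60.0):.4f} mL/min")
```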
3775 Generation of Sets of Synthetic Classifiers for the Evaluation of Abstract-Level Combination Methods
Authors: N. Greco, S. Impedovo, R.Modugno, G. Pirlo
Abstract:
This paper presents a new technique for generating sets of synthetic classifiers to evaluate abstract-level combination methods. The sets differ in terms of both the recognition rates of the individual classifiers and their degree of similarity. For this purpose, each abstract-level classifier is considered as a random variable producing one class label as the output for an input pattern. From the initial set of classifiers, new, slightly different sets are generated by applying specific operators defined for this purpose. Finally, the sets of synthetic classifiers have been used to estimate the performance of combination methods for abstract-level classifiers. The experimental results demonstrate the effectiveness of the proposed approach.
Keywords: Abstract-level Classifier, Dempster-Shafer Rule, Multi-expert Systems, Similarity Index, System Evaluation
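A toy sketch of modelling an abstract-level classifier as a random variable with a prescribed recognition rate, and of evaluating a simple combination method (majority vote) on such synthetic classifiers; the vote rule and all rates are illustrative choices, not the operators defined in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_patterns = 10, 5000
truth = rng.integers(0, n_classes, n_patterns)

def synthetic_classifier(truth, rate):
    """Output the true label with probability `rate`, otherwise a random wrong label."""
    out = truth.copy()
    wrong = rng.random(truth.size) >= rate
    out[wrong] = (truth[wrong] + rng.integers(1, n_classes, wrong.sum())) % n_classes
    return out

rates = [0.70, 0.75, 0.80]
outputs = np.stack([synthetic_classifier(truth, r) for r in rates])

# Majority vote as a simple abstract-level combination method.
votes = np.apply_along_axis(np.bincount, 0, outputs, minlength=n_classes)
combined = votes.argmax(axis=0)
print("individual rates:", [f"{(o == truth).mean():.3f}" for o in outputs])
print("majority-vote rate:", f"{(combined == truth).mean():.3f}")
```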
3774 High-Efficiency Comparator for Low-Power Application
Authors: M. Yousefi, N. Nasirzadeh
Abstract:
In this paper, a dynamic comparator structure employing two methods for power-consumption reduction, with applications in low-power, high-speed analog-to-digital converters, is presented. The proposed comparator has low consumption thanks to the power reduction methods and offers the ability for offset adjustment. The comparator consumes 14.3 μW at 100 MHz, which is equal to 11.8 fJ. The comparator has been designed and simulated in 180 nm CMOS, and the layout occupies 210 μm².
Keywords: Comparator, low power, efficiency.
3773 Vibration Induced Fatigue Assessment in Vehicle Development Process
Authors: Fatih Kagnici
Abstract:
Improvements in CAE methods play an important role in shortening vehicle product development time, since design validation and durability improvements can be carried out without producing hardware prototypes. In recent years, several different methods have been developed to investigate fatigue damage of the vehicle; the intended goal of these methods is the prediction of fatigue damage in a short time at reduced cost. This study develops a new fatigue damage prediction method for the automotive sector using the power spectral densities of accelerations. It also confirms that weak regions of the vehicle can easily be detected with the developed method, whose results were compared with those of a conventional method.
Keywords: Fatigue damage, Power spectrum density, Vibration induced fatigue, Vehicle development
3772 3D Face Recognition Using Modified PCA Methods
Authors: Omid Gervei, Ahmad Ayatollahi, Navid Gervei
Abstract:
In this paper we present an approach for 3D face recognition based on extracting the principal components of range images by utilizing modified PCA methods, namely 2DPCA and bidirectional 2DPCA, also known as (2D)²PCA. A preprocessing stage was implemented to smooth the images using median and Gaussian filtering. In the normalization stage we locate the nose tip and place it at the center of the image, then crop each image to a standard size of 100×100. In the face recognition stage we extract the principal components of each image using both 2DPCA and (2D)²PCA. Finally, we use the Euclidean distance to measure the minimum distance between a given test image and the training images in the database, and we compare the results of the two methods. The best result achieved in experiments on a public face database is a face recognition rate of 83.3 percent for a random facial expression.
Keywords: 3D face recognition, 2DPCA, (2D)²PCA, Range image.
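A condensed NumPy sketch of 2DPCA feature extraction and nearest-neighbour matching as described above (the bidirectional (2D)²PCA variant would add a row-direction projection); the random 100×100 "range images" are placeholders for real data:

```python
import numpy as np

def train_2dpca(images, d):
    """images: (N, H, W) array. Returns the top-d projection axes X of shape (W, d)."""
    mean = images.mean(axis=0)
    centered = images - mean
    # Image covariance matrix G = (1/N) * sum_i (A_i - Abar)^T (A_i - Abar), shape (W, W).
    G = np.einsum('nhw,nhv->wv', centered, centered) / images.shape[0]
    eigvals, eigvecs = np.linalg.eigh(G)
    return eigvecs[:, np.argsort(eigvals)[::-1][:d]]   # columns = leading eigenvectors

def features(images, X):
    return images @ X                                   # Y_i = A_i X, shape (H, d)

# Placeholder data: 50 training "range images" and one noisy test image of size 100x100.
rng = np.random.default_rng(0)
train = rng.normal(size=(50, 100, 100))
test = train[7] + 0.05 * rng.normal(size=(100, 100))    # noisy copy of sample 7

X = train_2dpca(train, d=10)
train_feats, test_feat = features(train, X), features(test, X)
dists = np.linalg.norm(train_feats - test_feat, axis=(1, 2))
print("nearest training sample:", int(np.argmin(dists)))   # expected: 7
```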
3771 The Coupling of Photocatalytic Oxidation Processes with Activated Carbon Technologies and the Comparison of the Treatment Methods for Organic Removal from Surface Water
Authors: N. Areerachakul
Abstract:
The surface water used in this study was collected from the lower part of the Chao Phraya River at the Nonthaburi bridge and used throughout the experiment. TOC (also known as DOC) in the range of 2.5 to 5.6 mg/L was investigated. The conventional treatment methods achieved a TOC removal of 65% using FeCl3 and 78% using PAC (powdered activated carbon). The advanced oxidation process alone showed only 35% removal of TOC, while coupling advanced oxidation with a small amount of PAC (0.05 g/L) increased the efficiency by up to 55%. Combining BAC with the advanced oxidation process and a small amount of PAC demonstrated the highest efficiency, up to 95% TOC removal, with lower sludge production compared with the other methods.
Keywords: Advanced oxidation process, TOC, PAC
3770 On Dialogue Systems Based on Deep Learning
Authors: Yifan Fan, Xudong Luo, Pingping Lin
Abstract:
Nowadays, dialogue systems are increasingly becoming the way for humans to access many computer systems, allowing people to interact with computers in natural language. A dialogue system consists of three parts: understanding what humans say in natural language, managing the dialogue, and generating responses in natural language. In this paper, we survey deep learning based methods for dialogue management, response generation and dialogue evaluation. Specifically, these methods are based on neural networks, long short-term memory networks, deep reinforcement learning, pre-training and generative adversarial networks. We compare these methods and point out further research directions.
Keywords: Dialogue management, response generation, reinforcement learning, deep learning, evaluation.
3769 Locating Critical Failure Surface in Rock Slope Stability with Hybrid Model Based on Artificial Immune System and Cellular Learning Automata (CLA-AIS)
Authors: Ramin Javadzadeh, Emad Javadzadeh
Abstract:
Locating the critical slip surface with the minimum factor of safety for a rock slope is a difficult problem. In recent years, some modern global optimization methods have been developed and applied successfully to various types of problems, but very few of them have been applied to rock mechanics problems. In this paper, the use of a hybrid model based on an artificial immune system and cellular learning automata is proposed. The results show that the algorithm is an effective and efficient optimization method with a high level of confidence.
Keywords: CLA-AIS, failure surface, optimization methods, rock slope.
3768 Blow up in Polynomial Differential Equations
Authors: Rudolf Csikja, Janos Toth
Abstract:
Methods to detect and localize time singularities of polynomial and quasi-polynomial ordinary differential equations are systematically presented and developed. They are applied to examples taken from different fields of application and are also compared with better-known methods such as those based on the existence of linear first integrals or Lyapunov functions.
Keywords: blow up, finite escape time, polynomial ODE, singularity, Lotka–Volterra equation, Painleve analysis, Ψ-series, global existence
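A standard textbook example of the finite escape time referred to above (not taken from the paper itself) is the scalar polynomial ODE

```latex
\frac{\mathrm{d}y}{\mathrm{d}t} = y^{2}, \qquad y(0) = y_{0} > 0
\quad\Longrightarrow\quad
y(t) = \frac{y_{0}}{1 - y_{0}\, t},
```

so the solution blows up at the finite time t* = 1/y0 even though the right-hand side is a smooth polynomial.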
3767 Retrieval of Relevant Visual Data in Selected Machine Vision Tasks: Examples of Hardware-based and Software-based Solutions
Authors: Andrzej Śluzek
Abstract:
To illustrate the diversity of methods used to extract relevant visual data (where the concept of relevance can be defined differently for different applications), the paper discusses three groups of such methods. They have been selected from a range of alternatives to highlight how hardware and software tools can be used in a complementary way to achieve various functionalities for different specifications of "relevant data". First, the principles of gated imaging are presented (where relevance is determined by range). The second methodology is intended for intelligent intrusion detection, while the last one is used for content-based image matching and retrieval. All methods have been developed within projects supervised by the author.
Keywords: Relevant visual data, gated imaging, intrusion detection, image matching.
3766 Differentiation of Heart Rate Time Series from Electroencephalogram and Noise
Authors: V. I. Thajudin Ahamed, P. Dhanasekaran, Paul Joseph K.
Abstract:
Analysis of heart rate variability (HRV) has become a popular non-invasive tool for assessing the activities of the autonomic nervous system. Most of the methods were borrowed from techniques used for time series analysis; currently used methods are time-domain, frequency-domain, geometrical and fractal methods. A new technique, which searches for pattern repeatability in a time series, is proposed for quantifying heart rate (HR) time series. This set of indices, termed the pattern repeatability measure and the pattern repeatability ratio, is able to distinguish HR data clearly from noise and the electroencephalogram (EEG). The results of analysis using these measures give an insight into the fundamental difference between the composition of HR time series and that of EEG and noise.
Keywords: Approximate entropy, heart rate variability, noise, pattern repeatability, and sample entropy.
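Since the proposed indices are related to the approximate and sample entropy listed in the keywords, a compact sample entropy sketch is included for orientation; it follows the standard SampEn(m, r) definition and is not the authors' pattern repeatability measure:

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Standard SampEn(m, r) with tolerance r = r_factor * std(x)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def count_matches(length):
        # Both template sets use len(x) - m templates so the counts are comparable.
        templates = np.array([x[i:i + length] for i in range(len(x) - m)])
        # Chebyshev distance between every pair of templates (self-matches excluded).
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        n = len(templates)
        return (np.sum(d <= r) - n) / 2          # number of matched pairs

    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(0)
noise = rng.normal(size=1000)
regular = np.sin(0.2 * np.arange(1000)) + 0.05 * rng.normal(size=1000)
print(f"SampEn(noise)   = {sample_entropy(noise):.2f}")     # high: little repeatability
print(f"SampEn(regular) = {sample_entropy(regular):.2f}")   # low: repeating patterns
```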