Search results for: multivariate methods.
3825 Forecasting the Influences of Information and Communication Technology on the Structural Changes of Japanese Industrial Sectors: A Study Using Statistical Analysis
Authors: Ubaidillah Zuhdi, Shunsuke Mori, Kazuhisa Kamegai
Abstract:
The purpose of this study is to forecast the influence of information and communication technology (ICT) on structural changes in the Japanese economy. Input-output (IO) and statistical approaches are used as the analysis instruments; more specifically, the study employs Leontief IO coefficients and a constrained multivariate regression (CMR) model. The initial and forecast periods are 2005 and 2015, respectively, and ICT is represented by ICT capital stocks. The study conducts the analysis at two levels, macro and micro. The macro-level results show that the dynamics of the Japanese economy in the forecast period, relative to the initial period, are modest. The micro-level analysis focuses on the (1) commerce, (2) business services and office supplies, and (3) personal services sectors and examines their specific IO coefficients. The results indicate that ICT strongly influences the changes in these coefficients from the initial to the forecast period.
Keywords: Forecast, ICT, Structural changes, Japanese economies.
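As background for the IO side of the approach, the sketch below computes standard Leontief technical coefficients and the Leontief inverse from a toy three-sector transaction table; the numbers are illustrative placeholders, not Japanese IO data, and the CMR forecasting step is not shown.

```python
import numpy as np

Z = np.array([[10., 20.,  5.],     # inter-industry transactions (rows sell to columns)
              [15., 10., 25.],
              [20., 30., 10.]])
x = np.array([100., 150., 120.])   # total output of each sector

A = Z / x                          # Leontief technical coefficients a_ij = z_ij / x_j
L = np.linalg.inv(np.eye(3) - A)   # Leontief inverse (I - A)^(-1)

final_demand = x - Z.sum(axis=1)   # demand consistent with this toy table
print("output needed to meet final demand:", L @ final_demand)  # recovers x
```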
3824 Seven step Adams Type Block Method With Continuous Coefficient For Periodic Ordinary Differential Equation
Authors: Olusheye Akinfenwa
Abstract:
We consider the development of an eighth-order Adams-type method with the A-stability property, discussed by expressing it as a one-step method in higher dimension. This makes it suitable for solving a variety of initial-value problems. The main method and additional methods are obtained from the same continuous scheme derived via interpolation and collocation procedures. The methods are then applied in block form as simultaneous numerical integrators over non-overlapping intervals. Numerical results obtained using the proposed block form reveal that it is highly competitive with existing methods in the literature.
Keywords: Block Adams-type method; periodic ordinary differential equation; stability.
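To illustrate the Adams family of multistep formulas the abstract builds on, here is a minimal sketch of the classical two-step Adams-Bashforth integrator applied to a periodic test problem; it is not the eighth-order continuous block method derived in the paper.

```python
import numpy as np

def adams_bashforth2(f, y0, t0, t_end, h):
    """Integrate y' = f(t, y) with the classical 2-step Adams-Bashforth formula."""
    n = int(round((t_end - t0) / h))
    t = t0
    y = np.asarray(y0, dtype=float)
    f_prev = f(t, y)
    y = y + h * f_prev                              # one explicit Euler step to bootstrap
    t += h
    for _ in range(n - 1):
        f_curr = f(t, y)
        y = y + h * (1.5 * f_curr - 0.5 * f_prev)   # AB2 update
        f_prev = f_curr
        t += h
    return y

# periodic test problem: y'' = -y written as a first-order system, y(0) = 1, y'(0) = 0
f = lambda t, y: np.array([y[1], -y[0]])
y_end = adams_bashforth2(f, [1.0, 0.0], 0.0, 2 * np.pi, 2 * np.pi / 2000)
print("y(2*pi) ~", y_end[0], "(exact value 1.0)")
```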
3823 The Use of Methods and Techniques of Drama Education with Kindergarten Teachers
Authors: Vladimira Hornackova, Jana Kottasova, Zuzana Vanova, Anna Jungrova
Abstract:
The present study deals with drama education in preschool education. It reports a qualitative comparative survey aimed at determining how methods and techniques of drama education are used in preschool education by teachers with university degrees and by those with secondary school training. The research uses content analysis and an unstandardized questionnaire for preschool teachers, and the obtained data are processed with descriptive methods and correlations. The results allow a comparison of the aspects of drama applied in preschool education. The research offers impulses for improving education in kindergartens and inspiration for university study programs of drama education in the professional training of preschool teachers.
Keywords: Drama education, preschool education, preschool teacher, research.
3822 A Survey of Sentiment Analysis Based on Deep Learning
Authors: Pingping Lin, Xudong Luo, Yifan Fan
Abstract:
Sentiment analysis is a very active research topic. Every day, Facebook, Twitter, Weibo, other social media, and major e-commerce websites generate a massive amount of comments, which can be used to analyse people's opinions or emotions. The existing methods for sentiment analysis are based mainly on sentiment dictionaries, machine learning, and deep learning. The first two kinds of methods rely heavily on sentiment dictionaries or large amounts of labelled data; the third overcomes both problems, so this paper focuses on it. Specifically, we survey various sentiment analysis methods based on convolutional neural networks, recurrent neural networks, long short-term memory, deep neural networks, deep belief networks, and memory networks. We compare their features, advantages, and disadvantages, and we point out the main problems of these methods, which may be worthy of careful study in the future. Finally, we also examine the application of deep learning in multimodal sentiment analysis and aspect-level sentiment analysis.
Keywords: Natural language processing, sentiment analysis, document analysis, multimodal sentiment analysis, deep learning.
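As a concrete example of the deep-learning approaches the survey covers, the following is a minimal sketch of an LSTM-based sentiment classifier in Keras; the vocabulary size, architecture, and random placeholder data are assumptions, not a model from the surveyed literature.

```python
import numpy as np
import tensorflow as tf

vocab_size, max_len = 10_000, 200                   # assumed vocabulary and sequence length

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 64),      # word embeddings
    tf.keras.layers.LSTM(64),                       # long short-term memory encoder
    tf.keras.layers.Dense(1, activation="sigmoid")  # positive/negative score
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# x: integer-encoded comments, y: 0/1 sentiment labels (random placeholders)
x = np.random.randint(0, vocab_size, size=(32, max_len))
y = np.random.randint(0, 2, size=(32,))
model.fit(x, y, epochs=1, verbose=0)
print(model.predict(x[:2], verbose=0))              # predicted sentiment scores
```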
3821 Behavior of Composite Timber-Concrete Beam with CFRP Reinforcement
Authors: O. Vlcek
Abstract:
The paper deals with current issues in research on advanced methods to increase the reliability of traditional timber structural elements. It analyses the strengthening of bent timber beams, such as ceiling beams in old (historical) buildings, with an additional concrete slab in combination with externally bonded fiber-reinforced polymer. The study evaluates the deflection of a selected group of timber beams with a concrete slab and additional CFRP reinforcement using different calculation methods and observes the differences in their results. An elastic (EN 1995) calculation method and an evaluation with FEM analysis software were used.
Keywords: Timber-concrete composite, strengthening, fibre-reinforced polymer, theoretical analysis.
3820 FCNN-MR: A Parallel Instance Selection Method Based on Fast Condensed Nearest Neighbor Rule
Authors: Lu Si, Jie Yu, Shasha Li, Jun Ma, Lei Luo, Qingbo Wu, Yongqi Ma, Zhengji Liu
Abstract:
The instance selection (IS) technique is used to reduce data size and thereby improve the performance of data mining methods. Recently, to process very large data sets, several proposed methods divide the training set into disjoint subsets and apply IS algorithms independently to each subset. In this paper, we analyze the limitations of these methods and give our viewpoint on how to divide and conquer in the IS procedure. Then, based on the fast condensed nearest neighbor (FCNN) rule, we propose an instance selection method for large data sets within the MapReduce framework. Besides ensuring prediction accuracy and reduction rate, it has two desirable properties: first, it reduces the workload in the aggregation node; second, and most important, it produces the same result as the sequential version, which other parallel methods cannot achieve. We evaluate the performance of FCNN-MR on one small data set and two large data sets. The experimental results show that it is effective and practical.
Keywords: Instance selection, data reduction, MapReduce, kNN.
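For orientation, the sketch below implements Hart's classic condensed nearest neighbor rule, the consistency idea that the FCNN rule accelerates; it is a sequential toy on random data, not the authors' FCNN-MR MapReduce code.

```python
import numpy as np

def condense(X, y):
    """Return indices of a condensed subset that 1-NN-classifies X consistently."""
    keep = [0]                                   # seed with an arbitrary instance
    changed = True
    while changed:
        changed = False
        for i in range(len(X)):
            # 1-NN prediction using only the kept prototypes
            d = np.linalg.norm(X[keep] - X[i], axis=1)
            if y[keep][np.argmin(d)] != y[i]:    # misclassified -> absorb it
                keep.append(i)
                changed = True
    return np.array(keep)

X = np.random.rand(200, 2)
y = (X[:, 0] > 0.5).astype(int)
subset = condense(X, y)
print(f"kept {len(subset)} of {len(X)} instances")
```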
3819 Methods for Data Selection in Medical Databases: The Binary Logistic Regression - Relations with the Calculated Risks
Authors: Cristina G. Dascalu, Elena Mihaela Carausu, Daniela Manuc
Abstract:
Medical studies often require different methods for parameter selection as a second processing step, after the database has been designed and filled with information. One common task is the selection of fields that act as risk factors using well-known methods, in order to find the most relevant risk factors and to establish a possible hierarchy between them. Different methods are available for this purpose, one of the best known being binary logistic regression. We present the mathematical principles of this method and a practical example of using it to analyse the influence of 10 different psychiatric diagnoses on 4 different types of offences (in a database of 289 psychiatric patients involved in different types of offences). Finally, we make some observations about the relation between the risk factor hierarchy established through binary logistic regression and the individual risks, as well as the results of the Chi-squared test. We show that the hierarchy built using binary logistic regression does not agree with the direct order of risk factors, even though it would be natural to assume that this hypothesis always holds.
Keywords: Databases, risk factors, binary logistic regression, hierarchy.
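A minimal sketch of the core technique, binary logistic regression used to rank candidate risk factors by odds ratio, is given below; the binary predictors and outcomes are synthetic placeholders, not the psychiatric database described in the abstract.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(289, 3)).astype(float)  # three binary diagnostic indicators
logits = -1.0 + 1.2 * X[:, 0] + 0.4 * X[:, 1]         # assumed true effects
y = rng.binomial(1, 1 / (1 + np.exp(-logits)))        # offence committed: yes/no

model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
odds_ratios = np.exp(model.params[1:])                # e^beta, one per risk factor
print(model.summary())
print("odds ratios:", odds_ratios)                    # larger ratio -> higher rank
```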
3818 Comparison between the Conventional Methods and PSO Based MPPT Algorithm for Photovoltaic Systems
Authors: Ramdan B. A. Koad, Ahmed F. Zobaa
Abstract:
Since the output characteristics of a photovoltaic (PV) system depend on the ambient temperature, solar radiation, and load impedance, its maximum power point (MPP) is not constant. Under each condition, the PV module has a point at which it can produce its MPP. Therefore, a maximum power point tracking (MPPT) method is needed to keep the PV panel operating at its MPP. This paper presents a comparative study of the conventional MPPT methods used in PV systems, Perturb and Observe (P&O) and Incremental Conductance (IncCond), and a Particle Swarm Optimization (PSO) algorithm for MPPT of a PV system. To evaluate the study, the proposed PSO MPPT is implemented on a DC-DC Cuk converter and compared with the P&O and IncCond methods in terms of tracking speed, accuracy, and performance using MATLAB/Simulink. The simulation results show that the proposed algorithm is simple and superior to the P&O and IncCond methods.
Keywords: Incremental Conductance (IncCond) method, Perturb and Observe (P&O) method, photovoltaic (PV) systems, Particle Swarm Optimization (PSO) algorithm.
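To make the conventional baseline concrete, here is a minimal sketch of the Perturb and Observe (P&O) tracking logic on a toy PV power curve; the curve, step size, and starting point are assumptions, and the sketch is not the authors' Simulink implementation or their PSO tracker.

```python
def perturb_and_observe(v, p, v_prev, p_prev, step=0.5):
    """Return the next operating voltage from the current and previous (V, P) samples."""
    if p >= p_prev:                        # power went up: keep perturbing the same way
        return v + (step if v >= v_prev else -step)
    else:                                  # power went down: reverse the perturbation
        return v + (-step if v >= v_prev else step)

pv_power = lambda v: v * (5.0 - 0.01 * v * v)   # toy PV curve, maximum near 12.9 V (assumed)

v_prev, v = 10.0, 10.5
p_prev = pv_power(v_prev)
for _ in range(60):
    p = pv_power(v)
    v_next = perturb_and_observe(v, p, v_prev, p_prev)
    v_prev, p_prev, v = v, p, v_next
print(f"tracked operating point: {v:.1f} V, power {pv_power(v):.1f} W")
```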
3817 Comparative Analysis of Classical and Parallel Inpainting Algorithms Based on Affine Combinations of Projections on Convex Sets
Authors: Irina Maria Artinescu, Costin Radu Boldea, Eduard-Ionut Matei
Abstract:
The paper is a comparative study of two classical variants of parallel projection methods for solving the convex feasibility problem and their equivalents that involve variable weights in the construction of the solutions. We use a graphical representation of these methods for inpainting a convex area of an image in order to investigate their effectiveness in image reconstruction applications. We also present a numerical analysis of the convergence of these four algorithms in terms of the average number of steps and execution time, in a classical CPU and, alternatively, in a parallel GPU implementation.
Keywords: Convex feasibility problem, convergence analysis, inpainting, parallel projection methods.
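The sketch below runs a parallel (averaged) projection iteration on a toy convex feasibility problem with two sets and fixed equal weights; it only illustrates the class of methods compared in the paper, not their image-inpainting setup or variable-weight variants.

```python
import numpy as np

def proj_ball(x, center, radius):
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

def proj_halfspace(x, a, b):          # projection onto {x : a.x <= b}
    viol = a @ x - b
    return x if viol <= 0 else x - viol * a / (a @ a)

x = np.array([4.0, 4.0])
w = np.array([0.5, 0.5])              # fixed weights; the paper also studies variable ones
for _ in range(100):
    p1 = proj_ball(x, np.zeros(2), 2.0)
    p2 = proj_halfspace(x, np.array([1.0, 1.0]), 1.0)
    x = w[0] * p1 + w[1] * p2         # convex combination of the parallel projections
print("approximate feasible point:", x)
```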
3816 Prediction of Research Topics Using Ensemble of Best Predictors from Similar Dataset
Authors: Indra Budi, Rizal Fathoni Aji, Agus Widodo
Abstract:
Prediction of future research topics using time series analysis, either statistical or machine learning, has been conducted previously by several researchers. Several methods have been proposed to combine the forecasting results into a single forecast, using a fixed combination of the individual forecasts to obtain the final result. In this paper, a quite different approach is employed to select the forecasting methods: each point to be forecast is calculated using the methods that performed best on a similar validation dataset. The dataset used in the experiment is a set of time series derived from research reports in Garuda, an online site belonging to the Ministry of Education in Indonesia, over the past 20 years. The experimental results demonstrate that the proposed method may perform better than the fixed combination of predictors. In addition, based on the prediction results, we can forecast emerging research topics for the next few years.
Keywords: Combination, emerging topics, ensemble, forecasting, machine learning, prediction, research topics, similarity measure, time series.
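The following is a minimal sketch of the selection idea: for each forecast, pick the predictor that did best on a held-out portion of a similar validation series. The three toy forecasters, the similarity notion, and the random series are assumptions, not the authors' exact scheme or the Garuda data.

```python
import numpy as np

def naive(history, h):   return np.repeat(history[-1], h)
def drift(history, h):   return history[-1] + (history[-1] - history[0]) / (len(history) - 1) * np.arange(1, h + 1)
def mean_fc(history, h): return np.repeat(history.mean(), h)

forecasters = [naive, drift, mean_fc]

def best_on_validation(val_series, h):
    """Index of the forecaster with lowest MAE on a held-out tail of the validation series."""
    train, test = val_series[:-h], val_series[-h:]
    errors = [np.mean(np.abs(f(train, h) - test)) for f in forecasters]
    return int(np.argmin(errors))

rng = np.random.default_rng(1)
target = np.cumsum(rng.normal(0.5, 1.0, size=20))    # series to forecast
similar = np.cumsum(rng.normal(0.5, 1.0, size=20))   # "similar" validation series
h = 3
best = best_on_validation(similar, h)
print("chosen forecaster:", forecasters[best].__name__)
print("forecast:", forecasters[best](target, h))
```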
3815 Biometric Methods and Implementation of Algorithms
Authors: Parvinder S. Sandhu, Iqbaldeep Kaur, Amit Verma, Samriti Jindal, Shailendra Singh
Abstract:
Biometric measures of one kind or another have been used to identify people since ancient times, with handwritten signatures, facial features, and fingerprints being the traditional methods. Of late, systems have been built that automate the task of recognition using these methods and newer ones, such as hand geometry, voiceprints, and iris patterns. These systems have different strengths and weaknesses. This work has two sections. In the first, we present an analytical and comparative study of common biometric techniques; the performance of each has been reviewed and tabulated. The second section covers the actual implementation of the techniques under consideration, carried out in MATLAB, which helps portray the corresponding results and effects effectively.
Keywords: MATLAB, recognition, facial vectors, functions.
3814 Flow Discharge Determination in Meandering Compound Channels Using Experimental Methods
Authors: Mehdi Kheradmand, Mehdi Azhdary Moghaddam, Abdolreza Zahiri, Mohadeseh Kheradmand
Abstract:
Determining the flow discharge in meandering channels with a compound cross section is problematic because of the complex hydraulic structure of the flow in the meander belt, which can be attributed to the different and ever-changing geometric shapes of the meander. This paper studies the accuracy of several one-dimensional experimental methods in determining the flow discharge. To this end, laboratory data from four meandering compound channels have been used, and the accuracy of three important methods for determining the flow discharge has been checked in these channels.
Keywords: Flow discharge determination, meandering compound channel, compound section, meandering rivers.
3813 Hippocampus Segmentation using a Local Prior Model on its Boundary
Authors: Dimitrios Zarpalas, Anastasios Zafeiropoulos, Petros Daras, Nicos Maglaveras
Abstract:
Segmentation techniques based on Active Contour Models have strongly benefited from the use of prior information during their evolution. Shape prior information is captured from a training set and introduced into the optimization procedure to restrict the evolution to allowable shapes. In this way, the evolution converges onto regions even with weak boundaries. Although significant effort has been devoted to different ways of capturing and analyzing prior information, very little thought has been given to the way of combining image information with prior information. This paper focuses on a more natural way of incorporating the prior information in the level set framework. As a proof of concept, the method is applied to hippocampus segmentation in T1-MR images. Hippocampus segmentation is a very challenging task, due to the multivariate surrounding region and the missing boundary with the neighboring amygdala, whose intensities are identical. The proposed method mimics the way a human would segment the structure and thus shows enhancements in segmentation accuracy.
Keywords: Medical imaging and processing, brain MRI segmentation, hippocampus segmentation, hippocampus-amygdala missing boundary, weak boundary segmentation, region-based segmentation, prior information, local weighting scheme in level sets, spatial distribution of labels, gradient distribution on boundary.
3812 Masquerade and “What Comes Behind Six Is More Than Seven”: Thoughts on Art History and Visual Culture Research Methods
Authors: Osa D Egonwa
Abstract:
In the 21st century, the disciplinary boundaries of past centuries that we often create through mainstream art historical classification, techniques, and sources may have been eroded by visual culture, which seems to provide a more inclusive umbrella for the new ways artists go about the creative process and its resultant commodities. Over the past four decades, artists in Africa have turned to new materials, techniques, and themes, which has affected how we research these artists and their art. Frontline artists such as El Anatsui, Yinka Shonibare, and Erasmus Onyishi are demonstrating that any material is suitable for artistic expression. Most of the time, these materials come with their own techniques/effects and visual syntax: a combination of materials compounds techniques, formal aesthetic indexes, halo effects, and iconography. This challenges the categories we lean on to view, think, and talk about them, and it renders our mainstream art historical research methods inadequate, suggesting new discursive concepts, terms, and theories. This paper proposes an Africanist eclectic method derived from the dual framework of Masquerade Theory and What Comes Behind Six is More Than Seven. It shares thoughts and research on art historical methods, terminological re-alignments in classification/source data, presentational format, and interpretation arising from the emergent trends in our subject. The outcome provides useful tools to mediate new thoughts and experiences in recent African art and visual culture.
Keywords: Art historical methods, classifications, concepts, re-alignment.
3811 Fault Detection via Stability Analysis for the Hybrid Control Unit of HEVs
Authors: Kyogun Chang, Yoon Bok Lee
Abstract:
Fault detection determines fault existence and detection time. This paper discusses a two-layered fault detection approach to enhance reliability and safety. The two layers consist of fault detection at the component-level controllers and at the system-level controllers. Component-level controllers detect faults using limit checking, model-based detection, and data-driven detection, while system-level controllers perform detection by stability analysis, which can detect unknown changes. System-level controllers compare the detection results obtained via stability analysis with the fault signals from the lower-level controllers. This paper addresses fault detection methods via stability and suggests fault detection criteria for nonlinear systems. The fault detection method is applied to the hybrid control unit of a military hybrid electric vehicle so that the hybrid control unit can detect faults of the traction motor.
Keywords: Two-layered fault detection, stability analysis, fault-tolerant control.
3810 A Comparison of Shunt Active Power Filter Control Methods under Non-Sinusoidal and Unbalanced Voltage Conditions
Authors: H. Abaali, M. T. Lamchich, M. Raoufi
Abstract:
A variety of reference current identification methods for the shunt active power filter (SAPF), namely the instantaneous active and reactive power method, the instantaneous active and reactive current method, and the synchronous detection method, are evaluated and compared under ideal, non-sinusoidal, and unbalanced voltage conditions. The SAPF performances for the investigated identification methods are tested with a nonlinear load. The simulation results, obtained from a complete structure using the Matlab Power System Blockset Toolbox, are presented and discussed.
Keywords: Shunt active power filter, Current perturbation, Non sinusoidal and unbalanced voltage conditions.
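As an illustration of the first identification method mentioned, the sketch below computes the instantaneous active and reactive powers of the p-q theory from a Clarke-transformed three-phase signal; the waveforms are synthetic, one common sign convention for q is assumed, and the step that extracts the compensation reference currents is omitted.

```python
import numpy as np

def clarke(a, b, c):
    """Power-invariant Clarke transform of a three-phase quantity."""
    alpha = np.sqrt(2 / 3) * (a - 0.5 * b - 0.5 * c)
    beta = np.sqrt(2 / 3) * (np.sqrt(3) / 2) * (b - c)
    return alpha, beta

t = np.linspace(0, 0.04, 1000)                    # two 50 Hz cycles
w = 2 * np.pi * 50
va, vb, vc = (np.sin(w * t + ph) for ph in (0, -2 * np.pi / 3, 2 * np.pi / 3))
# distorted, lagging load currents (assumed nonlinear load)
ia, ib, ic = (np.sin(w * t + ph - 0.5) + 0.2 * np.sin(5 * (w * t + ph))
              for ph in (0, -2 * np.pi / 3, 2 * np.pi / 3))

v_alpha, v_beta = clarke(va, vb, vc)
i_alpha, i_beta = clarke(ia, ib, ic)
p = v_alpha * i_alpha + v_beta * i_beta           # instantaneous real power
q = v_beta * i_alpha - v_alpha * i_beta           # instantaneous imaginary power
print("mean p:", p.mean(), " mean q:", q.mean())
```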
3809 Exploring the Need to Study the Efficacy of VR Training Compared to Traditional Cybersecurity Training
Authors: Shaila Rana, Wasim Alhamdani
Abstract:
Effective cybersecurity training is of the utmost importance, given the plethora of attacks that continue to increase in complexity and ubiquity, yet VR cybersecurity training remains a starkly understudied discipline. Studies that evaluate the effectiveness of VR cybersecurity training against traditional methods are required: an engaging and interactive platform can support knowledge retention of the training material, and an effective form of cybersecurity training is needed to support a culture of cybersecurity awareness. Measurements of effectiveness have varied across existing studies, with surveys and observations being the two most utilized forms of evaluation. Further research is therefore needed to evaluate the effectiveness of VR cybersecurity training and, in particular, whether it is more effective than traditional methods. This paper proposes a methodology to compare the two cybersecurity training methods and their effectiveness. The proposed framework includes developing both VR and traditional cybersecurity training methods and delivering them to at least 100 users. A quiz along with a survey will be administered and statistically analyzed to determine whether there is a difference in knowledge retention and user satisfaction. The aim of this paper is to bring attention to the need to study VR cybersecurity training and its effectiveness compared to traditional training methods. This paper hopes to contribute to the cybersecurity training field by providing an effective way to train users for security awareness. If VR training is deemed more effective, this could create a new direction for cybersecurity training practices.
Keywords: Virtual reality cybersecurity training, VR cybersecurity training, traditional cybersecurity training, evaluating efficacy.
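A minimal sketch of the kind of statistical comparison the proposed framework calls for is shown below: quiz scores from a VR-trained group and a traditionally trained group compared with an independent-samples t-test. The scores are synthetic placeholders, and the specific test is an assumption, since the abstract does not name one.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
vr_scores = rng.normal(82, 8, size=100)            # assumed: 100 users per condition
traditional_scores = rng.normal(78, 8, size=100)

t_stat, p_value = stats.ttest_ind(vr_scores, traditional_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("difference in knowledge retention is statistically significant")
else:
    print("no significant difference detected")
```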
3808 Bioinformatics Profiling of Missense Mutations
Authors: I. Nassiri, B. Goliaei, M. Tavassoli
Abstract:
The ability to distinguish missense nucleotide substitutions that contribute to harmful effects from those that do not is a difficult problem usually addressed through functional in vivo analyses. In this study, instead of current biochemical methods, the effects of missense mutations on protein structure and function were assessed by means of computational methods and information from databases. To this end, the effects of new missense mutations in exon 5 of the PTEN gene on protein structure and function were examined. The gene coding for PTEN has been identified and localized to chromosome region 10q23.3 as a tumor suppressor gene. The application of these methods showed that the c.319G>A and c.341T>G missense mutations, which were recognized in patients with breast cancer and Cowden disease, could be pathogenic. This approach could be used for the analysis of missense mutations in other genes.
Keywords: Bioinformatics, missense mutations, PTEN tumor suppressor gene.
3807 Tests for Gaussianity of a Stationary Time Series
Authors: Adnan Al-Smadi
Abstract:
One of the primary uses of higher order statistics in signal processing has been the detection and estimation of non-Gaussian signals in Gaussian noise of unknown covariance. This is motivated by the ability of higher order statistics to suppress additive Gaussian noise. In this paper, several methods to test a given process for non-Gaussianity are presented. These methods include the histogram plot, the kurtosis test, and hypothesis testing using cumulants and the bispectrum of the available sequence. The hypothesis testing is performed by constructing a statistic to test whether the bispectrum of the given signal is non-zero. A zero bispectrum is not a proof of Gaussianity, so other tests, such as the kurtosis test, should also be employed. Examples are given to demonstrate the performance of the presented methods.
Keywords: Non-Gaussian, bispectrum, kurtosis, hypothesis testing, histogram.
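As a small illustration of the kurtosis-based check described above, the sketch below compares a Gaussian and a heavy-tailed (Laplace) sequence; the signals are synthetic, and the bispectrum-based test is not shown.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
gaussian = rng.normal(size=5000)
non_gaussian = rng.laplace(size=5000)             # heavy-tailed, excess kurtosis > 0

for name, x in [("gaussian", gaussian), ("laplace", non_gaussian)]:
    k = stats.kurtosis(x)                         # excess kurtosis, 0 for a Gaussian
    stat, p = stats.kurtosistest(x)               # H0: kurtosis of a normal distribution
    print(f"{name}: excess kurtosis = {k:.2f}, p-value = {p:.3g}")
```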
3806 The Influence of the Commons Structure Modification on the Active Power Losses Allocation
Authors: O. Pop, C. Barbulescu, M. Nemes, St. Kilyeni
Abstract:
Tracing methods determine the contribution that power system sources make to supplying the loads. These methods can be used to assess transmission prices, but also to recover the transmission fixed cost. This paper presents the influence that modifying the commons structure has on the specific transfer price and on active power losses. The authors propose a power loss allocation method based on Kirschen's method. The system operator must follow a few basic principles of allocation, and the only necessary information is the power flows on the system branches and the modifications applied to the power system buses. To illustrate the method, the 25-bus test system developed within the Electrical Power Engineering Department in Timisoara, Romania, is used.
Keywords: Power systems, P-U bus, P-Q bus, loss allocation, traceability methods.
3805 An Efficient 3D Animation Data Reduction Using Frame Removal
Authors: Jinsuk Yang, Choongjae Joo, Kyoungsu Oh
Abstract:
Existing methods in which the animation data of all frames are stored and reproduced, as with vertex animation, cannot be used in mobile device environments because they use large amounts of memory. 3D animation data reduction methods aimed at solving this problem have therefore been studied extensively, and we propose a new method as follows. First, we find and remove the frames in which motion changes are small and store only the animation data of the remaining frames (those involving large motion changes). When the animation is played, the removed frame areas are reconstructed by interpolating the remaining frames. Our key contribution is to calculate the accelerations of the joints in individual frames and the standard deviations of these accelerations, using the joint locations of the relevant 3D model, in order to find and delete frames in which motion changes are small. Our method can reduce data sizes by approximately 50% or more while providing quality not much lower than that of the original animations. Therefore, it is expected to be useful in mobile device environments and other environments in which memory sizes are limited.
Keywords: Data Reduction, Interpolation, Vertex Animation, 3D Animation.
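The sketch below follows the idea described in the abstract: estimate joint accelerations by finite differences, score each frame by the spread of those accelerations across joints, and keep only the frames above a threshold, leaving the rest to be rebuilt by interpolation. The toy motion data, the 50th-percentile threshold, and the scoring details are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
positions = np.cumsum(rng.normal(0.0, 0.01, size=(120, 20, 3)), axis=0)  # (frames, joints, xyz)

velocity = np.diff(positions, axis=0)             # frame-to-frame joint velocities
acceleration = np.diff(velocity, axis=0)          # frame-to-frame joint accelerations
motion_score = np.linalg.norm(acceleration, axis=2).std(axis=1)  # spread across joints

threshold = np.percentile(motion_score, 50)       # assumed cut-off: drop the calmer half
keep = np.where(motion_score > threshold)[0] + 2  # +2 accounts for the two diffs
keep = np.unique(np.concatenate(([0], keep, [len(positions) - 1])))  # always keep endpoints

print(f"kept {len(keep)} of {len(positions)} frames")
# playback would rebuild the removed frames by interpolating between kept ones
```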
3804 Teaching Computer Programming to Diverse Students: A Comparative, Mixed-Methods, Classroom Research Study
Authors: Almudena Konrad, Tomás Galguera
Abstract:
Lack of motivation and interest is a serious obstacle to students' learning of computing skills. A knowledge base on effective pedagogy and curricula for teaching computer programming is therefore needed. This paper presents results from research evaluating a six-year project designed to teach complex concepts in computer programming collaboratively, while supporting students in continuing to develop their computational thinking and related coding skills individually. Utilizing a quasi-experimental, mixed-methods design, we assessed the pedagogical approaches and methods in two contrasting groups of students with different socioeconomic status, gender, and age composition. Analyses of quantitative data from Likert-scale surveys and an evaluation rubric, combined with qualitative data from reflective writing exercises and semi-structured interviews, yielded convincing evidence of the project's success at both teaching and inspiring students.
Keywords: Computational thinking, computing education, computer programming curriculum, logic, teaching methods.
3803 Relative Suitability Evaluation of Two Methods of Particle-Size Analysis for Selected Soils of Sudan Savanna of Nigeria
Authors: B. A. Lawal, B. R. Singh, G. A. Babaji, P. A. Tsado
Abstract:
Two widely used methods based on the sedimentation principle (Bouyoucos hydrometer and International pipette) for particle-size analysis were comparatively evaluated on soils collected from various locations in the Sudan savanna of Nigeria, particularly from Sokoto and Zamfara States. The hydrometer method underestimated the silt content and overestimated the clay content. In addition, the hydrometer reading proved difficult, and the hydrometer tended to submerge when floated for the clay reading in suspensions of very sandy soils (900 g kg-1 sand). Furthermore, the results from the two methods were validated by subjecting the data to the USDA soil textural triangle to determine their textural class names; 91.67% of the experimental soils retained the same textural class name irrespective of the method. Thus, the Bouyoucos hydrometer method may conveniently find a place in routine work in view of its simplicity, rapidity, and strong correlation with the pipette method.
Keywords: Hydrometer and pipette methods, particle-size analysis, sedimentation.
3802 A User Friendly Tool for Performance Evaluation of Different Reference Evapotranspiration Methods
Authors: Vijay Shankar
Abstract:
Evapotranspiration (ET) is a major component of the hydrologic cycle, and its accurate estimation is essential for hydrological studies. In the past, various estimation methods have been developed for different climatological data, and the accuracy of these methods varies with climatic conditions. Reference crop evapotranspiration (ET0) is a key variable in procedures established for estimating the evapotranspiration rates of agricultural crops; values of ET0 are used with crop coefficients for many aspects of irrigation and water resources planning and management. Numerous methods are used for estimating ET0. As per the internationally accepted procedures outlined in the United Nations Food and Agriculture Organization's Irrigation and Drainage Paper No. 56 (FAO-56), the Penman-Monteith equation is recommended for computing ET0 from ground-based climatological observations. In the present study, seven methods have been selected for performance evaluation. User-friendly software has been developed in the Visual Basic programming language, which makes it possible to create a graphical environment with little coding. For the given data availability, the developed software estimates reference evapotranspiration for any area and period for which data are available. The accuracy of the software has been checked against the examples given in FAO-56. The developed software is a user-friendly tool for estimating ET0 under different data availability and climatic conditions.
Keywords: Crop coefficient, Crop evapotranspiration, Field moisture, Irrigation Scheduling.
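For reference, a minimal sketch of the FAO-56 Penman-Monteith computation of daily ET0 that the abstract recommends is given below; the inputs are assumed to be pre-computed daily values, and the sample numbers are illustrative rather than taken from FAO-56 or the developed software.

```python
def et0_penman_monteith(delta, gamma, rn, g, t_mean, u2, es, ea):
    """FAO-56 Penman-Monteith reference evapotranspiration, mm/day.
    delta, gamma: kPa/degC; rn, g: MJ m-2 day-1; t_mean: degC; u2: m/s; es, ea: kPa."""
    num = 0.408 * delta * (rn - g) + gamma * (900.0 / (t_mean + 273.0)) * u2 * (es - ea)
    den = delta + gamma * (1.0 + 0.34 * u2)
    return num / den

# illustrative daily values (placeholders, not a FAO-56 worked example)
print(round(et0_penman_monteith(0.122, 0.066, 13.3, 0.0, 16.9, 2.1, 1.997, 1.409), 2), "mm/day")
```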
3801 Effects of Different Drying Methods on the Properties of Viscose Single Jersey Fabrics
Authors: M. Kucukali Ozturk, Y. Beceren, B. Nergis
Abstract:
The study discussed in this paper was conducted to investigate the effects of different drying methods (line drying and tumble drying) on viscose single jersey fabrics knitted with ring yarn.
Keywords: Color change, dimensional properties, drying method, fabric tightness, physical properties.
3800 Numerical Study of Iterative Methods for the Solution of the Dirichlet-Neumann Map for Linear Elliptic PDEs on Regular Polygon Domains
Authors: A. G. Sifalakis, E. P. Papadopoulou, Y. G. Saridakis
Abstract:
A generalized Dirichlet to Neumann map is one of the main aspects characterizing a recently introduced method for analyzing linear elliptic PDEs, through which it became possible to couple known and unknown components of the solution on the boundary of the domain without solving in its interior. For its numerical solution, a well-conditioned, quadratically convergent sine-collocation method was developed, which yielded a linear system of equations with the diagonal blocks of its associated coefficient matrix being point diagonal. This structural property, among others, motivated interest in the employment of iterative methods for its solution. In this work we present a conclusive numerical study of the behavior of classical (Jacobi and Gauss-Seidel) and Krylov subspace (GMRES and Bi-CGSTAB) iterative methods when applied to the solution of the Dirichlet to Neumann map associated with Laplace's equation on regular polygons with the same boundary conditions on all edges.
Keywords: Elliptic PDEs, Dirichlet to Neumann map, global relation, collocation, iterative methods, Jacobi, Gauss-Seidel, GMRES, Bi-CGSTAB.
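For context, the sketch below runs one classical method (Jacobi) and the two Krylov subspace methods named in the abstract (GMRES and Bi-CGSTAB, via SciPy) on a small diagonally dominant system; this toy system merely stands in for the collocation linear system studied in the paper.

```python
import numpy as np
from scipy.sparse.linalg import gmres, bicgstab

rng = np.random.default_rng(0)
n = 50
A = rng.random((n, n)) + n * np.eye(n)     # diagonally dominant, so Jacobi converges
b = rng.random(n)

# Jacobi iteration: x_{k+1} = D^{-1} (b - (A - D) x_k)
D = np.diag(A)
x = np.zeros(n)
for k in range(500):
    x_new = (b - (A @ x - D * x)) / D
    if np.linalg.norm(x_new - x) < 1e-10:
        break
    x = x_new
print("Jacobi: iterations =", k + 1, " residual =", np.linalg.norm(A @ x - b))

for name, solver in (("GMRES", gmres), ("Bi-CGSTAB", bicgstab)):
    xk, info = solver(A, b)
    print(f"{name}: residual = {np.linalg.norm(A @ xk - b):.2e} (info = {info})")
```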
3799 Data Mining Classification Methods Applied in Drug Design
Authors: Mária Stachová, Lukáš Sobíšek
Abstract:
Data mining incorporates a group of statistical methods used to analyze a set of information, or a data set. It operates with models and algorithms, which are powerful tools with great potential: they can help people understand the patterns in a given chunk of information, so data mining tools have a wide range of applications. For example, in theoretical chemistry, data mining tools can be used to predict molecular properties or improve computer-assisted drug design. Classification analysis is one of the major data mining methodologies. The aim of this contribution is to create a classification model able to deal with a huge data set with high accuracy. For this purpose, logistic regression, Bayesian logistic regression, and random forest models were built using the R software, and a Bayesian logistic regression model was created in the Latent GOLD software as well. These classification methods belong to the supervised learning methods. It was necessary to reduce the dimension of the data matrix before constructing the models, and thus factor analysis (FA) was used. The models were applied to predict the biological activity of molecules, potential new drug candidates.
Keywords: Data mining, classification, drug design, QSAR.
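As a small illustration of one of the classifiers mentioned, the sketch below trains a random forest on synthetic "descriptor" data; it is written in Python with scikit-learn rather than the R/Latent GOLD tools the contribution actually used, and the features, labels, and split are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                     # 20 reduced descriptors (e.g. FA scores)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)  # active / inactive

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```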
3798 Development of a Multi-Factorial Instrument for Accident Analysis Based on Systemic Methods
Authors: C. V. Pietreanu, S. E. Zaharia, C. Dinu
Abstract:
The present research is built on three major pillars. It commences with some considerations on accident investigation methods, pointing out both the defining aspects of and the differences between linear and non-linear analysis: the traditional linear focus describes accidents as a sequence of events, while the latest systemic models outline interdependencies between different factors and define the evolution of processes relative to a specific (normal) situation. Since linear and non-linear accident analysis methods have specific limitations, the second point of interest is to identify the drawbacks of systemic models, which becomes a starting point for developing new ways to identify risks or data closer to the causes of incidents and accidents. Finally, since communication is a critical issue in human-factor interaction and breakdowns in communication procedures have been shown to underlie many of these problems, the third pillar elaborates a new error-modeling instrument suitable for risk assessment and accident analysis.
Keywords: Accident analysis, multi-factorial error modeling, risk, systemic methods.
3797 Microscopic Emission and Fuel Consumption Modeling for Light-duty Vehicles Using Portable Emission Measurement System Data
Authors: Wei Lei, Hui Chen, Lin Lu
Abstract:
Microscopic emission and fuel consumption models have been widely recognized as an effective way to quantify real traffic emissions and energy consumption when applied together with microscopic traffic simulation models. This paper presents a framework for developing microscopic emission (HC, CO, NOx, and CO2) and fuel consumption (MEF) models for light-duty vehicles. The variable of composite acceleration is introduced into the MEF model in order to capture the effects of historical accelerations, interacting with the current speed, on emission and fuel consumption. The MEF model is calibrated by a multivariate least-squares method for two types of light-duty vehicle using on-board data collected in Beijing, China, by a Portable Emission Measurement System (PEMS). The instantaneous validation results show that the MEF model performs better, with a lower Mean Absolute Percentage Error (MAPE), than the other two models. Moreover, the aggregate validation results show that the MEF model produces reasonable estimates compared to actual measurements, with prediction errors within 12%, 10%, 19%, and 9% for HC, CO, NOx emissions and fuel consumption, respectively.
Keywords: Emission, fuel consumption, light-duty vehicle, microscopic, modeling.
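A minimal sketch of the calibration step, multivariate least squares on speed, acceleration, and a simple "composite" acceleration built from recent history, is shown below; the regressors, coefficients, and data are synthetic placeholders, not the authors' PEMS dataset or exact model form.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
speed = rng.uniform(0, 25, n)                         # m/s
accel = rng.normal(0, 1, n)                           # m/s^2
composite_accel = np.convolve(accel, np.ones(5) / 5, mode="same")  # short-history average

X = np.column_stack([np.ones(n), speed, accel, composite_accel, speed * composite_accel])
true_beta = np.array([1.5, 0.05, 0.2, 0.2, 0.02])     # assumed "true" coefficients
fuel = X @ true_beta + rng.normal(0, 0.1, n)          # synthetic fuel-rate measurements

beta_hat, *_ = np.linalg.lstsq(X, fuel, rcond=None)   # multivariate least-squares fit
rmse = np.sqrt(np.mean((X @ beta_hat - fuel) ** 2))
print("estimated coefficients:", np.round(beta_hat, 3))
print(f"in-sample RMSE: {rmse:.3f}")
```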
3796 Application of Scanning Electron Microscopy and X-Ray Evaluation of the Main Digestion Methods for Determination of Macroelements in Plant Tissue
Authors: Krasimir I. Ivanov, Penka S. Zapryanova, Stefan V. Krustev, Violina R. Angelova
Abstract:
Three commonly used digestion methods (dry ashing, acid digestion, and microwave digestion), in different variants, were compared for the digestion of tobacco leaves. Three main macroelements (K, Ca, and Mg) were analysed using an AAS spectrometer Spectra AA 220 (Varian, Australia). The accuracy and precision of the measurements were evaluated using the Polish reference material CTR-VTL-2 (Virginia tobacco leaves). To elucidate the problems with elemental recovery, X-ray and SEM-EDS analyses of all residues after digestion were performed. The X-ray investigation showed the formation of KClO4 when HClO4 was used as part of the acid mixture, while the use of HF in the Ca and Mg determination led to the formation of CaF2 and MgF2. The results were confirmed by energy-dispersive X-ray microanalysis. The SPSS program for Windows was used for statistical data processing.
Keywords: Digestion methods, determination of macroelements, plant tissue.