Search results for: model errors
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 17458

17308 Corpus Linguistics as a Tool for Translation Studies Analysis: A Bilingual Parallel Corpus of Students’ Translations

Authors: Juan-Pedro Rica-Peromingo

Abstract:

Nowadays, corpus linguistics has become a key research methodology for Translation Studies, which broadens the scope of cross-linguistic studies. In the study presented here, the approach focuses on learners with little or no experience in order to study, at an early stage, general mistakes and errors and the correct or incorrect use of translation strategies, and to improve the translational competence of the students. Led by Sylviane Granger and Marie-Aude Lefer of the Centre for English Corpus Linguistics of the University of Louvain, the MUST corpus (MUltilingual Student Translation Corpus) is an international project which brings together partners from European and worldwide universities and connects Learner Corpus Research (LCR) and Translation Studies (TS). It aims to build a corpus of translations carried out by students, including both direct (L2 > L1) and indirect (L1 > L2) translations, from a great variety of text types, genres, and registers in a wide variety of languages: audiovisual translations (including dubbing and subtitling for the hearing and for the deaf populations), scientific, humanistic, literary, economic and legal translation texts. This paper focuses on the work carried out by the Spanish team from the Complutense University (UCMA), which is part of the MUST project, and it describes the specific features of the corpus built by its members. All the texts used by UCMA are either direct or indirect translations between English and Spanish. Students’ profiles comprise translation trainees, foreign language students with a major in English, engineers studying EFL and MA students, all of them with different English levels (from B1 to C1); for some of the students, this is their first experience with translation. The MUST corpus is searchable via Hypal4MUST, a web-based interface developed by Adam Obrusnik from Masaryk University (Czech Republic), which includes a translation-oriented annotation system (TAS). A distinctive feature of the interface is that it allows source texts and target texts to be aligned, so that both language structures can be observed and compared in detail and the translation strategies used by students can be studied. The initial data obtained point to the kinds of difficulties encountered by the students and reveal the most frequent strategies implemented by the learners according to their level of English, their translation experience and the text genres. We have also found common errors in the graduate and postgraduate university students’ translations: transfer errors, lexical errors, grammatical errors, text-specific translation errors, and culture-related errors have been identified. Analyzing all these parameters will provide more material for better solutions to improve the quality of teaching and of the translations produced by the students.

Keywords: corpus studies, students’ corpus, the MUST corpus, translation studies

Procedia PDF Downloads 146
17307 A Method to Determine Cutting Force Coefficients in Turning Using Mechanistic Approach

Authors: T. C. Bera, A. Bansal, D. Nema

Abstract:

During a turning operation, the cutting force plays a significant role in the metal cutting process, affecting tool-workpiece deflection, vibration and, eventually, part quality. The present research work aims to develop a mechanistic cutting force model and to study the mechanistic constants used in the force model in the case of turning operations. The proposed model can be used for the reliable and accurate estimation of the cutting forces by establishing the relationship of the various force components (cutting force and feed force) with the uncut chip thickness. The accurate estimation of the cutting force is required to improve thin-walled part accuracy by controlling the tool-workpiece deflection induced surface errors and tool-workpiece vibration.
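
As an illustration of the mechanistic approach described above, the sketch below fits cutting and feed force coefficients by least squares from force measurements at several uncut chip thicknesses. The linear force form, the constant width of cut, and all numerical values are illustrative assumptions, not data or coefficients from the paper.

```python
import numpy as np

# Mechanistic form (illustrative): F = Kc * b * h + Ke * b
# Fc, Ff : measured cutting and feed forces [N]
# b      : width of cut [mm], h : uncut chip thickness [mm]
b = 2.0                                        # assumed constant width of cut
h = np.array([0.05, 0.10, 0.15, 0.20])         # uncut chip thickness levels (example)
Fc = np.array([210.0, 395.0, 590.0, 770.0])    # hypothetical measured cutting forces
Ff = np.array([120.0, 180.0, 250.0, 310.0])    # hypothetical measured feed forces

def identify_coefficients(F, b, h):
    """Least-squares fit of F = Kc*b*h + Ke*b -> returns (Kc, Ke)."""
    A = np.column_stack([b * h, np.full_like(h, b)])
    (Kc, Ke), *_ = np.linalg.lstsq(A, F, rcond=None)
    return Kc, Ke

Ktc, Kte = identify_coefficients(Fc, b, h)   # tangential (cutting) coefficients
Kfc, Kfe = identify_coefficients(Ff, b, h)   # feed coefficients
print(f"Ktc = {Ktc:.1f} N/mm^2, Kte = {Kte:.1f} N/mm")
print(f"Kfc = {Kfc:.1f} N/mm^2, Kfe = {Kfe:.1f} N/mm")
```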

Keywords: turning, cutting forces, cutting constants, uncut chip thickness

Procedia PDF Downloads 521
17306 A Methodology of Using Fuzzy Logics and Data Analytics to Estimate the Life Cycle Indicators of Solar Photovoltaics

Authors: Thor Alexis Sazon, Alexander Guzman-Urbina, Yasuhiro Fukushima

Abstract:

This study outlines a method for developing a surrogate life cycle model based on fuzzy logic using three fuzzy inference methods: (1) the conventional Fuzzy Inference System (FIS), (2) the hybrid system of Data Analytics and Fuzzy Inference (DAFIS), which uses data clustering for defining the membership functions, and (3) the Adaptive-Neuro Fuzzy Inference System (ANFIS), a combination of fuzzy inference and artificial neural network. These methods were demonstrated with a case study where the Global Warming Potential (GWP) and the Levelized Cost of Energy (LCOE) of solar photovoltaics (PV) were estimated using Solar Irradiation, Module Efficiency, and Performance Ratio as inputs. The effects of using different fuzzy inference types, either Sugeno- or Mamdani-type, and of changing the number of input membership functions on the error between the calibration data and the model-generated outputs were also illustrated. The solution spaces of the three methods were consequently examined with a sensitivity analysis. ANFIS exhibited the lowest error, while DAFIS gave slightly lower errors compared to FIS. Increasing the number of input membership functions helped with error reduction in some cases but, at times, resulted in the opposite. Sugeno-type models gave errors that are slightly lower than those of the Mamdani-type. While ANFIS is superior in terms of error minimization, it could generate solutions that are questionable, e.g. negative GWP values for the solar PV system when the inputs were all at the upper end of their ranges. This shows that the applicability of ANFIS models highly depends on the range of cases on which they were calibrated. FIS and DAFIS generated more intuitive trends in the sensitivity runs. DAFIS demonstrated an optimal design point wherein increasing the input values does not improve the GWP and LCOE any further. In the absence of data that could be used for calibration, conventional FIS presents a knowledge-based model that could be used for prediction. In the PV case study, conventional FIS generated errors that are just slightly higher than those of DAFIS. The inherent complexity of a life cycle study often hinders its widespread use in the industry and policy-making sectors. While the methodology does not guarantee a more accurate result compared to those generated by the Life Cycle Methodology, it does provide a relatively simpler way of generating knowledge- and data-based estimates that could be used during the initial design of a system.
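
To make the Sugeno-type inference concrete, the following minimal sketch hand-rolls a zero-order Sugeno system with triangular membership functions for the three inputs. The normalized input ranges, rules, and consequent constants are illustrative assumptions, not the calibrated values of the study.

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function with peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def memberships(x):
    """Three overlapping fuzzy sets on a normalized 0..1 input (assumed ranges)."""
    return {"low": trimf(x, -0.5, 0.0, 0.5),
            "mid": trimf(x, 0.0, 0.5, 1.0),
            "high": trimf(x, 0.5, 1.0, 1.5)}

# Zero-order Sugeno rules: (irradiation, efficiency, ratio) levels -> constant GWP consequent
rules = [
    (("low", "low", "low"), 80.0),    # hypothetical consequents [g CO2-eq/kWh]
    (("mid", "mid", "mid"), 50.0),
    (("high", "high", "high"), 30.0),
]

def sugeno_gwp(irradiation, efficiency, performance_ratio):
    mu_irr, mu_eff, mu_pr = memberships(irradiation), memberships(efficiency), memberships(performance_ratio)
    weights, outputs = [], []
    for (a, b, c), z in rules:
        weights.append(min(mu_irr[a], mu_eff[b], mu_pr[c]))   # AND = min t-norm
        outputs.append(z)
    weights = np.array(weights)
    return float(np.dot(weights, outputs) / (weights.sum() + 1e-12))  # weighted-average defuzzification

print(sugeno_gwp(0.6, 0.5, 0.7))
```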

Keywords: solar photovoltaic, fuzzy logic, inference system, artificial neural networks

Procedia PDF Downloads 164
17305 Mathematical Competence as It Is Defined through Learners' Errors in Arithmetic and Algebra

Authors: Michael Lousis

Abstract:

Mathematical competence is the great aim of every mathematical teaching and learning endeavour. It can be defined as an idealised conceptualisation of the quality of cognition and the ability to implement in practice the mathematical subject matter included in the curriculum, and it is displayed only through the performance of doing mathematics. The present study gives a clear definition of mathematical competence in the domains of Arithmetic and Algebra that stems from the explanation of learners’ errors in these domains. The learners, whose errors are explained, were Greek and English participants of a large, international, longitudinal, comparative research program entitled the Kassel Project. The participants’ errors emerged as results of their work on the mathematical questions and problems of the tests presented to them. The tests were constructed so that only the outcomes of the participants’ work were captured, and not the course of thinking that led to these outcomes. The intention was for the tests to provide directly comparable results and simultaneously to avoid any probable bias. Such bias could stem from obtaining results by involving many markers from different countries and cultures, with many different belief systems concerning the assessment of learners’ course of thinking. In this way the validity of the research was protected. This necessitated the implementation of specific research methods and theoretical perspectives in order for the participants’ erroneous ways of thinking to be disclosed. These were Methodological Pragmatism, Symbolic Interactionism, Philosophy of Mind and the ideas of Computationalism, which were used for deciding and establishing the grounds of the adequacy and legitimacy of the kinds of knowledge obtained through the explanations given by the error analysis. The employment of this methodology and of these theoretical perspectives resulted in the definition of the learners’ mathematical competence, which is the thesis of the present study. Thus, learners’ mathematical competence depends upon three key elements that should be developed in their minds: appropriate representations, appropriate meaning, and appropriately developed schemata. This definition then determined the development of appropriate teaching practices and interventions conducive to the achievement and, finally, the attainment of mathematical competence.

Keywords: representations, meaning, appropriate developed schemata, computationalism, error analysis, explanations for the probable causes of the errors, Kassel Project, mathematical competence

Procedia PDF Downloads 268
17304 Two-Phase Sampling for Estimating a Finite Population Total in Presence of Missing Values

Authors: Daniel Fundi Murithi

Abstract:

Missing data is a real bane in many surveys. To overcome the problems caused by missing data, partial deletion and single imputation methods, among others, have been proposed. However, problems such as discarding usable data and inaccuracy in reproducing known population parameters and standard errors are associated with them. For regression and stochastic imputation, it is assumed that there is a variable with complete cases to be used as a predictor in estimating missing values in the other variable, and that the relationship between the two variables is linear, which might not be realistic in practice. In this project, we estimate the population total in the presence of missing values in two-phase sampling. Instead of regression or stochastic models, a nonparametric model-based regression is used in imputing missing values. An empirical study showed that nonparametric model-based regression imputation is better at reproducing the variance of the population total estimate obtained when there were no missing values, compared to mean, median, regression, and stochastic imputation methods. Although regression and stochastic imputation were better than nonparametric model-based imputation in reproducing the population total estimates obtained when there were no missing values for one of the sample sizes considered, nonparametric model-based imputation may be used when the relationship between the outcome and predictor variables is not linear.
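
The abstract does not specify the exact nonparametric estimator, so the sketch below uses Nadaraya-Watson kernel regression as one possible model-based imputation, followed by a simple expansion estimate of the population total. The data, bandwidth, and population size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative second-phase sample: x observed for all units, y missing for some
n, N = 200, 5000                              # sample size and (assumed) population size
x = rng.uniform(0, 10, n)
y = 3.0 * np.sqrt(x) + rng.normal(0, 0.5, n)  # nonlinear relationship between x and y
missing = rng.random(n) < 0.25
y_obs = np.where(missing, np.nan, y)

def nw_impute(x, y, bandwidth=0.8):
    """Nadaraya-Watson kernel regression imputation of missing y values."""
    obs = ~np.isnan(y)
    y_filled = y.copy()
    for i in np.where(~obs)[0]:
        w = np.exp(-0.5 * ((x[obs] - x[i]) / bandwidth) ** 2)   # Gaussian kernel weights
        y_filled[i] = np.sum(w * y[obs]) / np.sum(w)
    return y_filled

y_imp = nw_impute(x, y_obs)
total_hat = N * y_imp.mean()    # simple expansion estimator of the population total
print(f"Estimated population total: {total_hat:.1f}")
```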

Keywords: finite population total, missing data, model-based imputation, two-phase sampling

Procedia PDF Downloads 128
17303 Forecasting Residential Water Consumption in Hamilton, New Zealand

Authors: Farnaz Farhangi

Abstract:

Many people in New Zealand believe that access to water is inexhaustible, a belief that comes from a history of virtually unrestricted access to it. For a region like Hamilton, one of New Zealand’s fastest growing cities, it is crucial for policy makers to know about future water consumption and the implementation of rules and regulations such as universal water metering. Hamilton residents use water freely and they do not have any idea about how much water they use. Hence, one of the objectives of this research is forecasting water consumption using different methods. Residential water consumption time series exhibit seasonal and trend variations. Seasonality is the pattern caused by repeating events such as weather conditions in summer and winter, public holidays, etc. The problem with this seasonal fluctuation is that it dominates other time series components and makes it difficult to determine other variations (such as the effect of educational campaigns, regulation, etc.) in the time series. Apart from seasonality, a stochastic trend is also combined with seasonality and affects the forecasting results in different ways. According to the forecasting literature, preprocessing (de-trending and de-seasonalization) is essential for better forecasting results, while some other researchers argue that seasonally non-adjusted data should be used. Hence, I answer the question: is preprocessing essential? A wide range of forecasting methods exists, with different pros and cons. In this research, I apply double seasonal ARIMA and an Artificial Neural Network (ANN), considering diverse elements such as seasonality and calendar effects (public and school holidays), and combine their results to find the best predicted values. My hypothesis is examined by comparing the accuracy and robustness of the combined method (hybrid model) and the individual methods. In order to use ARIMA, the data should be stationary. ANN also has successful forecasting applications for seasonal and trend time series. Using a hybrid model is a way to improve the accuracy of the methods. Because water demand is dominated by different seasonalities, I combine different methods in order to find their sensitivity to weather conditions, calendar effects or other seasonal patterns. The advantage of this combination is the reduction of errors by averaging the forecasts of the individual models. It is also useful when we are not sure about the accuracy of each forecasting model, and it can ease the problem of model selection. Using daily residential water consumption data from January 2000 to July 2015 in Hamilton, I indicate how predictions by different methods vary. ANN gives more accurate forecasting results than the other methods, and preprocessing is essential when we use seasonal time series. Using the hybrid model reduces average forecasting errors and increases performance.
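
A minimal sketch of the hybrid idea, combining a seasonal ARIMA forecast with an ANN forecast by simple averaging, is given below. The model orders, network size, and synthetic series are illustrative assumptions standing in for the Hamilton consumption data.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Synthetic daily series with weekly seasonality (stand-in for the real data)
t = np.arange(730)
series = 100 + 0.02 * t + 10 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 2, t.size)
train, test = series[:-14], series[-14:]
horizon = test.size

# (1) Seasonal ARIMA forecast (orders are illustrative, not the paper's)
sarima = SARIMAX(train, order=(1, 1, 1), seasonal_order=(1, 1, 1, 7)).fit(disp=False)
f_arima = np.asarray(sarima.forecast(steps=horizon))

# (2) ANN forecast from lagged windows, applied recursively one step at a time
lag = 14
X = np.array([train[i:i + lag] for i in range(train.size - lag)])
y = train[lag:]
ann = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0).fit(X, y)
window, f_ann = list(train[-lag:]), []
for _ in range(horizon):
    pred = ann.predict(np.array(window[-lag:]).reshape(1, -1))[0]
    f_ann.append(pred)
    window.append(pred)
f_ann = np.array(f_ann)

# (3) Hybrid model: simple average of the two individual forecasts
f_hybrid = (f_arima + f_ann) / 2.0
mae = lambda f: np.mean(np.abs(f - test))
print(f"MAE  ARIMA={mae(f_arima):.2f}  ANN={mae(f_ann):.2f}  hybrid={mae(f_hybrid):.2f}")
```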

Keywords: artificial neural network (ANN), double seasonal ARIMA, forecasting, hybrid model

Procedia PDF Downloads 336
17302 Identification of Architectural Design Error Risk Factors in Construction Projects Using IDEF0 Technique

Authors: Sahar Tabarroki, Ahad Nazari

Abstract:

The design process is one of the key project processes in the construction industry. Although architects have the responsibility to produce complete, accurate, and coordinated documents, architectural design is accompanied by many errors. A design error occurs when the constraints and requirements of the design are not satisfied. Errors are potentially costly and time-consuming to correct if not caught early during the design phase, and they become expensive to correct either in the construction documents or in the construction phase. The aim of this research is to identify the risk factors of architectural design errors. First, a literature review of the design process was conducted, and then a questionnaire was designed to identify the risks and risk factors. The questionnaire items were based on the “similar service description of study and supervision of architectural works” published by the “Vice Presidency of Strategic Planning & Supervision of I.R. Iran” as the basis of architects’ tasks. Second, the top 10 risks of architectural activities were identified. To determine the positions of possible causes of risks with respect to architectural activities, these activities were located in a design process modeled with the IDEF0 technique. The research was carried out by choosing a case study, checking the design drawings, interviewing its architect and client, and providing a checklist in order to identify concrete examples of architectural design errors. The results revealed that activities such as “defining the current and future requirements of the project”, “studies and space planning,” and “time and cost estimation of the suggested solution” have a higher error risk than others. Moreover, the most important causes include “unclear goals of a client”, “time pressure from a client”, and “lack of knowledge of architects about the requirements of end-users”. In detecting errors in the case study, the lack of standards and design criteria, and the lack of coordination among them, were a barrier; nevertheless, “lack of coordination between architectural design and electrical and mechanical facilities”, “violation of the standard dimensions and sizes in space design”, and “design omissions” were identified as the most important design errors.

Keywords: architectural design, design error, risk management, risk factor

Procedia PDF Downloads 130
17301 A Soft Error Rates (SER) Evaluation Method of Combinational Logic Circuit Based on Linear Energy Transfers

Authors: Man Li, Wanting Zhou, Lei Li

Abstract:

Communication stability is the primary concern of communication satellites. Communication satellites are easily affected by particle radiation, which generates single event effects (SEE) and leads to soft errors (SE) in the combinational logic circuit. The existing research on the soft error rates (SER) of combinational logic circuits is mostly based on the assumption that the logic gates being bombarded have the same pulse width. However, in the actual radiation environment, the pulse widths of the logic gates being bombarded differ due to different linear energy transfers (LET). In order to improve the accuracy of the SER evaluation model, this paper proposes a soft error rate evaluation method based on LET. The authors analyze the influence of LET on the pulse width of combinational logic and establish a pulse width model based on LET. Based on this model, the soft error rate of the ISCAS'85 test circuit is calculated. The effectiveness of the model is proved by comparing it with previous experiments.

Keywords: communication satellite, pulse width, soft error rates, LET

Procedia PDF Downloads 170
17300 BIM Model and Virtual Prototyping in Construction Management

Authors: Samar Alkindy

Abstract:

Purpose: The BIM model has been used to support the planning of different construction projects in the industry by showing the different stages of the construction process. The model has been instrumental in identifying some of the common errors in the construction process through the spatial arrangement. The continuous use of the BIM model in the construction industry has resulted in various radical changes, such as virtual prototyping. Construction virtual prototyping is a highly advanced technology that incorporates a BIM model with realistic graphical simulations and facilitates the simulation of the project before a product is built in the factory. The paper presents virtual prototyping in the construction industry by examining its application, challenges and benefits to a construction project. Methodology approach: Case studies were conducted in four major construction projects, which incorporate virtual construction prototyping in several stages of the construction project. Furthermore, interviews were conducted with the project manager, the engineer and the planning manager. Findings: Data collected through the methodological approach show a positive response to virtual construction prototyping in construction, especially concerning communication and visualization. Furthermore, the use of virtual prototyping has increased collaboration and efficiency between construction experts handling a project. During the planning stage, virtual prototyping has increased accuracy, reduced planning time, and reduced the amount of rework during the implementation stage. Although virtual prototyping is a new concept in the construction industry, the findings outline that the approach will benefit the management of construction projects.

Keywords: construction operations, construction planning, process simulation, virtual prototyping

Procedia PDF Downloads 230
17299 Implementing Fault Tolerance with Proxy Signature on the Improvement of RSA System

Authors: H. El-Kamchouchi, Heba Gaber, Fatma Ahmed, Dalia H. El-Kamchouchi

Abstract:

Fault tolerance and data security are two important issues in modern communication systems. During the transmission of data between the sender and receiver, errors may occur frequently. Therefore, the sender must re-transmit the data to the receiver in order to correct these errors, which makes the system very feeble. To improve the scalability of the scheme, we present a proxy signature scheme with fault tolerance over an efficient and secure authenticated key agreement protocol based on the improved RSA system. Authenticated key agreement protocols have an important role in building a secure communications network between the two parties.
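
The following sketch shows only the underlying idea of detecting transmission errors by signature verification and re-transmitting on failure, using textbook RSA with toy parameters. It is not the paper's improved RSA system, proxy delegation, or key agreement protocol.

```python
import hashlib

# Minimal textbook RSA signing/verification used to detect transmission errors.
# Tiny demo parameters only -- NOT the paper's improved RSA or proxy scheme.
p, q = 61, 53
n = p * q
phi = (p - 1) * (q - 1)
e = 17
d = pow(e, -1, phi)                 # modular inverse (Python 3.8+)

def h(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(msg: bytes) -> int:
    return pow(h(msg), d, n)

def verify(msg: bytes, sig: int) -> bool:
    return pow(sig, e, n) == h(msg)

def transmit(msg: bytes, sig: int, corrupt: bool = False):
    """Channel model: optionally flips one bit to emulate a transmission error."""
    if corrupt:
        msg = bytes([msg[0] ^ 0x01]) + msg[1:]
    return msg, sig

message = b"session key material"
signature = sign(message)
received, rsig = transmit(message, signature, corrupt=True)
if not verify(received, rsig):                      # fault detected -> re-transmit
    received, rsig = transmit(message, signature, corrupt=False)
print("accepted:", verify(received, rsig))
```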

Keywords: fault tolerance, improved RSA, key agreement, proxy signature

Procedia PDF Downloads 424
17298 Estimation of Sediment Transport into a Reservoir Dam

Authors: Kiyoumars Roushangar, Saeid Sadaghian

Abstract:

Although accurate sediment load prediction is very important in the planning, design, operation and maintenance of water resources structures, the transport mechanism is complex, and deterministic transport models, based on simplifying assumptions, often lead to large prediction errors. In this research, firstly, two intelligent ANN methods, Radial Basis and General Regression Neural Networks, are adopted to model the total sediment load transported into the Madani Dam reservoir (north of Iran) using the measured data, and then the applicability of the sediment transport methods developed by Engelund and Hansen, Ackers and White, Yang, and Toffaleti for predicting sediment load discharge is evaluated. Based on a comparison of the results, it is found that the GRNN model gives better estimates than the sediment rating curve and the mentioned classic methods.
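
A GRNN is essentially Nadaraya-Watson kernel regression over the training patterns, which the minimal sketch below implements. The smoothing parameter and the synthetic discharge-sediment data are illustrative assumptions, not the Madani Dam measurements.

```python
import numpy as np

class GRNN:
    """General Regression Neural Network = Nadaraya-Watson kernel regression."""
    def __init__(self, sigma=0.5):
        self.sigma = sigma

    def fit(self, X, y):
        self.X, self.y = np.asarray(X, float), np.asarray(y, float)
        return self

    def predict(self, Xq):
        Xq = np.atleast_2d(np.asarray(Xq, float))
        d2 = ((Xq[:, None, :] - self.X[None, :, :]) ** 2).sum(-1)   # squared distances
        w = np.exp(-d2 / (2 * self.sigma ** 2))                      # pattern-layer activations
        return (w @ self.y) / w.sum(axis=1)                          # summation/output layers

# Illustrative use: flow discharge -> sediment load (synthetic stand-in data)
rng = np.random.default_rng(2)
Q = rng.uniform(5, 100, 60).reshape(-1, 1)                          # discharge
S = 0.08 * Q[:, 0] ** 1.6 * np.exp(rng.normal(0, 0.2, 60))          # sediment load
model = GRNN(sigma=8.0).fit(Q, S)
print(model.predict([[40.0], [80.0]]))
```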

Keywords: sediment transport, dam reservoir, RBF, GRNN, prediction

Procedia PDF Downloads 495
17297 Target and Biomarker Identification Platform to Design New Drugs against Aging and Age-Related Diseases

Authors: Peter Fedichev

Abstract:

We studied fundamental aspects of aging to develop a mathematical model of the gene regulatory network. We show that aging manifests itself as an inherent instability of the gene network, leading to exponential accumulation of regulatory errors with age. To validate our approach, we studied age-dependent omics data, such as transcriptomes and metabolomes, of different model organisms and humans. We built a computational platform based on our model to identify targets and biomarkers of aging in order to design new drugs against aging and age-related diseases. As biomarkers of aging, we choose the rate of aging and the biological age, since they completely determine the state of the organism. Since the rate of aging changes rapidly in response to external stress, this kind of biomarker can be useful as a tool for quantitative efficacy assessment of drugs, their combinations, dose optimization, chronic toxicity estimates, personalized therapy selection, clinical endpoint achievement (within clinical research), and death risk assessments. Based on our model, we propose a method of target identification for further interventions against aging and age-related diseases. Being a biotech company, we offer a complete pipeline to develop an anti-aging drug candidate.

Keywords: aging, longevity, biomarkers, senescence

Procedia PDF Downloads 272
17296 Orthogonal Regression for Nonparametric Estimation of Errors-In-Variables Models

Authors: Anastasiia Yu. Timofeeva

Abstract:

Two new algorithms for nonparametric estimation of errors-in-variables models are proposed. The first algorithm is based on a penalized regression spline. The spline is represented as a piecewise-linear function, and for each linear portion an orthogonal regression is estimated. This algorithm is iterative. The second algorithm involves locally weighted regression estimation. When the independent variable is measured with error, such estimation is a complex nonlinear optimization problem. The simulation results have shown the advantage of the second algorithm under the assumption that the true values of the smoothing parameters are known. Nevertheless, the use of some indices of fit for smoothing parameter selection gives similar results but has an oversmoothing effect.
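
For a single linear portion, the orthogonal regression can be obtained from the smallest singular vector of the centered data, as in the sketch below; the errors-in-variables data are illustrative.

```python
import numpy as np

def orthogonal_regression(x, y):
    """Total least squares fit y = a + b*x minimizing orthogonal distances."""
    xm, ym = x.mean(), y.mean()
    M = np.column_stack([x - xm, y - ym])
    _, _, Vt = np.linalg.svd(M, full_matrices=False)
    nx, ny = Vt[-1]              # normal vector of the best-fit line
    b = -nx / ny                 # slope
    a = ym - b * xm              # intercept
    return a, b

# Errors-in-variables example: both x and y carry measurement noise
rng = np.random.default_rng(3)
x_true = np.linspace(0, 10, 100)
x = x_true + rng.normal(0, 0.3, 100)
y = 1.5 + 2.0 * x_true + rng.normal(0, 0.3, 100)

print("TLS  a, b =", orthogonal_regression(x, y))
# Ordinary least squares for comparison: its slope is attenuated by the noise in x
print("OLS slope =", np.polyfit(x, y, 1)[0])
```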

Keywords: grade point average, orthogonal regression, penalized regression spline, locally weighted regression

Procedia PDF Downloads 415
17295 Block Matching Based Stereo Correspondence for Depth Calculation

Authors: G. Balakrishnan

Abstract:

Stereo correspondence plays a major role in estimating the distance of an object from a stereo camera pair for various applications. In this paper, a stereo correspondence algorithm based on the block-matching technique is presented. Initially, an energy matrix is calculated for every disparity obtained using a modified Sum of Absolute Differences (SAD). Higher energy matrix errors are removed by using a threshold value in order to reduce mismatch errors. A smoothing filter is applied to eliminate unreliable disparity estimates across object boundaries. The purpose is to improve the reliability of the disparity map calculation. The experimental results show that the final depth map produces better results and can be used in all applications using stereo cameras.
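
A minimal sketch of SAD-based block matching is given below; the block size, disparity range, threshold handling and synthetic image pair are illustrative simplifications of the full algorithm (the energy matrix and smoothing filter steps are reduced to a single threshold check).

```python
import numpy as np

def sad_disparity(left, right, block=5, max_disp=16, threshold=None):
    """Per-pixel disparity by minimizing the Sum of Absolute Differences (SAD)."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            errors = []
            for d in range(max_disp):                        # candidate disparities
                cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                errors.append(np.abs(ref.astype(int) - cand.astype(int)).sum())
            best = int(np.argmin(errors))
            if threshold is None or errors[best] <= threshold:   # reject unreliable matches
                disp[y, x] = best
    return disp

# Tiny synthetic stereo pair: the right image is the left one shifted by 4 pixels
rng = np.random.default_rng(4)
left = rng.integers(0, 255, (40, 60), dtype=np.uint8)
right = np.roll(left, -4, axis=1)
d = sad_disparity(left, right, block=5, max_disp=8)
print("median disparity in valid region:", np.median(d[10:30, 20:50]))
```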

Keywords: stereo matching, filters, energy matrix, disparity

Procedia PDF Downloads 214
17294 Simulating the Dynamics of E-waste Production from Mobile Phone: Model Development and Case Study of Rwanda

Authors: Rutebuka Evariste, Zhang Lixiao

Abstract:

Mobile phone sales and stocks have shown exponential growth in the past years globally, and the number of mobile phones produced each year surpassed one billion in 2007. This soaring growth of the related e-waste deserves sufficient attention regionally and globally, as 40% of its total weight is metallic, of which 12 elements are identified as highly hazardous and 12 as less harmful. Different research and methods have been used to estimate the number of obsolete mobile phones, but none has developed a dynamic model or handled the discrepancies resulting from improper approaches and errors in the input data. The aim of this study was to develop a comprehensive dynamic system model for simulating the dynamics of e-waste production from mobile phones, regardless of the country or region, and to overcome the previous errors. The logistic model method combined with the STELLA program has been used to carry out this study. A simulation for Rwanda was then conducted and compared with other countries’ results for model testing and validation. Rwanda had about 1.5 million obsolete mobile phones, corresponding to 125 tons of waste, in 2014, with e-waste production peaking in 2017. It is expected to reach 4.17 million obsolete phones, with 351.97 tons, by 2020, along with an environmental impact intensity of 21 times that of 2005. Thus, it is concluded through model testing and validation that the present dynamic model is competent and able to deal with mobile phone e-waste production, as it has responded to the questions raised in previous studies from the Czech Republic, Iran, and China.
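
A minimal sketch of the logistic-growth bookkeeping is shown below: phones in use follow a logistic curve, and units added one service life earlier are counted as obsolete. All parameter values (carrying capacity, growth rate, lifespan, handset mass) are illustrative assumptions, not the calibrated Rwandan values.

```python
import numpy as np

def logistic(t, K, r, t0):
    """Logistic growth of the number of mobile phones in use (carrying capacity K)."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Illustrative parameters -- NOT the calibrated values for Rwanda in the paper
K, r, t0 = 9.0e6, 0.45, 2012     # carrying capacity [units], growth rate, inflection year
lifespan = 3                     # assumed average service life [years]
grams_per_unit = 85.0            # assumed average handset mass [g]

years = np.arange(2005, 2021)
in_use = logistic(years, K, r, t0)
new_units = np.diff(in_use, prepend=in_use[0])     # simplified: net additions per year

# Units added in year (y - lifespan) are assumed to become obsolete in year y
obsolete = np.zeros_like(new_units)
obsolete[lifespan:] = new_units[:-lifespan]
ewaste_tons = obsolete * grams_per_unit / 1e6

for y, o, w in zip(years, obsolete, ewaste_tons):
    print(f"{y}: obsolete ~ {o / 1e6:.2f} M units, e-waste ~ {w:.1f} t")
```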

Keywords: carrying capacity, dematerialization, logistic model, mobile phone, obsolescence, similarity, Stella, system dynamics

Procedia PDF Downloads 343
17293 Application of a Model-Free Artificial Neural Networks Approach for Structural Health Monitoring of the Old Lidingö Bridge

Authors: Ana Neves, John Leander, Ignacio Gonzalez, Raid Karoumi

Abstract:

Systematic monitoring and inspection are needed to assess the present state of a structure and predict its future condition. If an irregularity is noticed, repair actions may take place, and the adequate intervention will most probably reduce future maintenance costs, minimize downtime and increase safety by avoiding the failure of the structure as a whole or of one of its structural parts. For this to be possible, decisions must be made at the right time, which implies using systems that can detect abnormalities at an early stage. In this sense, Structural Health Monitoring (SHM) is seen as an effective tool for improving the safety and reliability of infrastructures. This paper explores the decision-making problem in SHM regarding the maintenance of civil engineering structures. The aim is to assess the present condition of a bridge based exclusively on measurements, using the method suggested in this paper, such that action is taken coherently with the information made available by the monitoring system. Artificial Neural Networks are trained, and their ability to predict structural behavior is evaluated in the light of a case study where acceleration measurements are acquired from a bridge located in Stockholm, Sweden. This relatively old bridge is still in operation despite experiencing obvious problems already reported in previous inspections. The prediction errors provide a measure of the accuracy of the algorithm and are subjected to further investigation, which comprises concepts like clustering analysis and statistical hypothesis testing. These make it possible to interpret the obtained prediction errors, draw conclusions about the state of the structure and thus support decision making regarding its maintenance.
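
A minimal sketch of the model-free idea, training an ANN to predict the next acceleration sample from a lag window and then testing whether the prediction-error distribution has shifted, is given below. The signals, network size, and use of a Welch t-test are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy import stats

rng = np.random.default_rng(5)

def windows(signal, lag=10):
    X = np.array([signal[i:i + lag] for i in range(signal.size - lag)])
    return X, signal[lag:]

# Synthetic acceleration records: a reference period and a later, slightly changed period
reference = np.sin(np.linspace(0, 60 * np.pi, 3000)) + 0.05 * rng.normal(size=3000)
later = np.sin(1.03 * np.linspace(0, 60 * np.pi, 3000)) + 0.05 * rng.normal(size=3000)

X_tr, y_tr = windows(reference)
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0).fit(X_tr, y_tr)

def prediction_errors(signal):
    X, y = windows(signal)
    return np.abs(model.predict(X) - y)

err_ref = prediction_errors(reference)
err_new = prediction_errors(later)

# Statistical hypothesis test: has the prediction-error distribution shifted?
t, p = stats.ttest_ind(err_ref, err_new, equal_var=False)
print(f"mean error ref={err_ref.mean():.4f}, new={err_new.mean():.4f}, p-value={p:.3g}")
```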

Keywords: artificial neural networks, clustering analysis, model-free damage detection, statistical hypothesis testing, structural health monitoring

Procedia PDF Downloads 207
17292 An Approach to Specify Software Requirements in Semantic Form

Authors: Deepa Vijay, Chellammal Surianarayanan, Gopinath Ganapathy

Abstract:

The requirements of a software project serve as a guideline for the entire project team, enabling the team to produce the right outcome. As requirements are key in deciding the success of the project, they should be specified in an unambiguous manner. The requirements should also be complete and consistent, and they should be interpreted by the entire software project team in the same way the customer interprets them. Specifying requirements in a textual manner is common in software development. This leads to poor understanding of the requirements, which results in more errors and degraded quality. Some literature focuses on semantic ways of specifying functional requirements that ensure the consistency and completeness of requirements. Alternatively, in this work, a method is proposed to map syntactic requirements to corresponding semantics in the form of ontologies. This improves the understanding of requirements, prevents errors and improves quality.

Keywords: functional requirement, ontology, requirements management, semantics

Procedia PDF Downloads 364
17291 The Effect of Institutions on Economic Growth: An Analysis Based on Bayesian Panel Data Estimation

Authors: Mohammad Anwar, Shah Waliullah

Abstract:

This study investigated panel data regression models. This paper used Bayesian and classical methods to study the impact of institutions on economic growth using data from 1990-2014, especially in developing countries. Under both the classical and Bayesian methodologies, two panel data models were estimated: common effects and fixed effects. For the Bayesian approach, prior information is used in this paper, and a normal-gamma prior is used for the panel data models. The analysis was done with the WinBUGS14 software. The estimated results of the study showed that panel data models are valid models in the Bayesian methodology. In the Bayesian approach, all independent variables had positive and significant effects on the dependent variable. Based on the standard errors of all models, the fixed effects model is the best model in the Bayesian estimation of panel data models. It was also shown that the fixed effects model has the lowest standard errors compared to the other models.

Keywords: Bayesian approach, common effect, fixed effect, random effect, Dynamic Random Effect Model

Procedia PDF Downloads 68
17290 Modeling the Saltatory Conduction in Myelinated Axons by Order Reduction

Authors: Ruxandra Barbulescu, Daniel Ioan, Gabriela Ciuprina

Abstract:

The saltatory conduction is the way the action potential is transmitted along a myelinated axon. The potential diffuses along the myelinated compartments and is regenerated in the Ranvier nodes due to the ion channels allowing the flow across the membrane. For an efficient simulation of populations of neurons, it is important to use reduced order models both for myelinated compartments and for Ranvier nodes and to have control over their accuracy and inner parameters. The paper presents a reduced order model of this neural system which allows an efficient simulation method for the saltatory conduction in myelinated axons. This model is obtained by concatenating reduced order linear models of 1D myelinated compartments and nonlinear 0D models of Ranvier nodes. The models for the myelinated compartments are selected from a series of spatially distributed models developed and hierarchized according to their modeling errors. The extracted model, described by a nonlinear PDE of hyperbolic type, is able to reproduce the saltatory conduction with acceptable accuracy and takes into account the finite propagation speed of the potential. Finally, this model is again reduced in order to make it suitable for inclusion in large-scale neural circuits.

Keywords: action potential, myelinated segments, nonlinear models, Ranvier nodes, reduced order models, saltatory conduction

Procedia PDF Downloads 158
17289 A Corpus Output Error Analysis of Chinese L2 Learners From America, Myanmar, and Singapore

Authors: Qiao-Yu Warren Cai

Abstract:

Due to the rise of big data, building corpora and using them to analyze Chinese L2 learners’ language output has become a trend. Various empirical research has been conducted using Chinese corpora built by different academic institutes. However, most of the research analyzed the data in the Chinese corpora using corpus-based qualitative content analysis with descriptive statistics. Descriptive statistics can be used to make summations about the subjects or samples that research has actually measured to describe the numerical data, but the collected data cannot be generalized to the population. Comte, a French positivist, has argued since the 19th century that human beings’ knowledge, whether the discipline is humanistic and social science or natural science, should be verified in a scientific way to construct a universal theory to explain the truth and human behaviors. Inferential statistics, which is able to make judgments about the probability of an observed difference between groups being dependable or caused by chance (Free Geography Notes, 2015) and to infer from the subjects or examples what the population might think or how it might behave, is just the right method to support Comte’s argument in the field of TCSOL. Inferential statistics is also a core of quantitative research, but little research has been conducted combining corpora with inferential statistics. Little research analyzes the differences in Chinese L2 learners’ corpus output errors using one-way ANOVA, so the findings of previous research are limited to inferring the population’s Chinese errors from the given samples’ Chinese corpora. To fill this knowledge gap in the professional development of Taiwanese TCSOL, the present study aims to utilize one-way ANOVA to analyze the corpus output errors of Chinese L2 learners from America, Myanmar, and Singapore. The results show that no significant difference exists in ‘shì (是) sentence’ and word order errors, but compared with Americans and Singaporeans, it is significantly easier for learners from Myanmar to produce ‘sentence blends.’ Based on the above results, the present study provides an instructional approach and contributes to further exploration of how Chinese L2 learners can have (and use) learning strategies to reduce errors.
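
A minimal sketch of the one-way ANOVA step is shown below, using scipy; the per-learner error counts are invented placeholders, not the study's corpus data.

```python
from scipy import stats

# Per-learner counts of 'sentence blend' errors (invented placeholder data,
# NOT the study's corpus counts) for the three L1 groups.
american  = [1, 0, 2, 1, 0, 1, 2, 0]
myanmar   = [3, 4, 2, 5, 3, 4, 3, 5]
singapore = [1, 1, 0, 2, 1, 0, 2, 1]

f_stat, p_value = stats.f_oneway(american, myanmar, singapore)
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value indicates at least one group mean differs; a post-hoc test
# (e.g., Tukey HSD) would then identify which groups drive the difference.
```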

Keywords: Chinese corpus, error analysis, one-way analysis of variance, Chinese L2 learners, Americans, Myanmar, Singaporeans

Procedia PDF Downloads 105
17288 Time Series Forecasting (TSF) Using Various Deep Learning Models

Authors: Jimeng Shi, Mahek Jain, Giri Narasimhan

Abstract:

Time Series Forecasting (TSF) is used to predict the target variables at a future time point based on the learning from previous time points. To keep the problem tractable, learning methods use data from a fixed-length window in the past as an explicit input. In this paper, we study how the performance of predictive models changes as a function of different look-back window sizes and different amounts of time to predict into the future. We also consider the performance of the recent attention-based Transformer models, which have had good success in the image processing and natural language processing domains. In all, we compare four different deep learning methods (RNN, LSTM, GRU, and Transformer) along with a baseline method. The dataset (hourly) we used is the Beijing Air Quality Dataset from the UCI website, which includes a multivariate time series of many factors measured on an hourly basis for a period of 5 years (2010-14). For each model, we also report on the relationship between the performance and the look-back window sizes and the number of predicted time points into the future. Our experiments suggest that Transformer models have the best performance, with the lowest Mean Absolute Errors (MAE = 14.599, 23.273) and Root Mean Square Errors (RMSE = 23.573, 38.131) for most of our single-step and multi-step predictions. The best size for the look-back window to predict 1 hour into the future appears to be one day, while 2 or 4 days perform best to predict 3 hours into the future.
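
A minimal sketch of the sliding-window construction and the MAE/RMSE metrics used to compare models is given below; the synthetic series and the persistence baseline are illustrative assumptions.

```python
import numpy as np

def make_windows(series, look_back, horizon):
    """Turn a 1-D series into (X, y) pairs: X = past look_back points, y = next horizon points."""
    X, y = [], []
    for i in range(len(series) - look_back - horizon + 1):
        X.append(series[i:i + look_back])
        y.append(series[i + look_back:i + look_back + horizon])
    return np.array(X), np.array(y)

def mae(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

# Hourly-like synthetic series; look-back of one day (24 h) predicting 1 h ahead
rng = np.random.default_rng(6)
series = 50 + 10 * np.sin(np.arange(2000) * 2 * np.pi / 24) + rng.normal(0, 2, 2000)
X, y = make_windows(series, look_back=24, horizon=1)

naive = X[:, -1:]      # persistence baseline: repeat the last observed value
print(f"baseline MAE = {mae(y, naive):.3f}, RMSE = {rmse(y, naive):.3f}")
```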

Keywords: air quality prediction, deep learning algorithms, time series forecasting, look-back window

Procedia PDF Downloads 152
17287 Hand-Held X-Ray Fluorescence Spectroscopy for Pre-Diagnostic Studies in Conservation, and Limitations

Authors: Irmak Gunes Yuceil

Abstract:

This paper outlines the interferences and analytical errors encountered in the qualification and quantification of archaeological and ethnographic artifacts by means of handheld x-ray fluorescence. These shortcomings were evaluated through case studies carried out on metallic artifacts related to various periods and cultures around Anatolia. An Innov-X Delta Standard 2000 handheld x-ray fluorescence spectrometer was used to collect data from 1361 artifacts, through 6789 measurements and 70 hours of tube usage, between 2013 and 2017. Spectrum processing was done with the Delta Advanced PC Software. The qualitative and quantitative results screened by the device were compared with the spectrum graphs, and major discrepancies associated with physical and analytical interferences are clarified in this paper.

Keywords: hand-held x-ray fluorescence spectroscopy, art and archaeology, interferences and analytical errors, pre-diagnosis in conservation

Procedia PDF Downloads 194
17286 Nurse-Reported Perceptions of Medication Safety in Private Hospitals in Gauteng Province

Authors: Madre Paarlber, Alwiena Blignaut

Abstract:

Background: Medication administration errors remain a global patient safety problem targeted by the WHO (World Health Organization), yet research on this matter is sparse within the South African context. Objective: The aim was to explore and describe nurses’ (medication administrators’) perceptions regarding medication administration safety-related culture, incidence, causes, and reporting in the Gauteng Province of South Africa, and to determine any relationships between perceived variables concerned with medication safety (safety culture, incidences, causes, reporting of incidences, and reasons for non-reporting). Method: A quantitative research design was used, through which self-administered online surveys were sent to 768 nurses (medication administrators), of whom 217 responded (n=217). The response rate was 28.26%. The survey instrument was synthesised from the Agency for Healthcare Research and Quality (AHRQ) Hospital Survey on Patient Safety Culture, the Registered Nurse Forecasting (RN4CAST) survey, a survey list prepared from a systematic review aimed at generating a comprehensive list of medication administration error causes, and the Medication Administration Error Reporting Survey from Wakefield. Exploratory and confirmatory factor analyses were used to determine the validity and reliability of the survey. Descriptive and inferential statistics were used to analyse the quantitative data. Relationships and correlations were identified between items, subscales and biographic data by using Spearman’s rank correlations, t-tests and ANOVAs (Analysis of Variance). Nurses reported on their perceptions of medication administration safety-related culture, incidence, causes, and reporting in the Gauteng Province. Results: Teamwork within units was deemed satisfactory, while punitive responses to errors were accentuated. Working in “crisis mode”, concerns regarding the recording of mistakes, and long working hours were disclosed as impacting patient safety. Overall medication safety was graded mostly positively. Work overload, high patient-nurse ratios, and inadequate staffing were implicated as error-inducing. Medication administration errors were reported regularly. Fear and the administrative response to errors led to non-reporting. The reasons for non-reporting of errors were related to the absence of a non-punitive safety culture. Conclusions: Medication administration safety improvement is contingent on fostering a non-punitive safety culture within units. Anonymous medication error reporting systems and auditing of nurses’ workload are recommended in the quest for improved medication safety within Gauteng Province private hospitals.

Keywords: incidence, medication administration errors, medication safety, reporting, safety culture

Procedia PDF Downloads 53
17285 Behavior of Cold Formed Steel in Trusses

Authors: Reinhard Hermawan Lasut, Henki Wibowo Ashadi

Abstract:

The use of materials in Indonesia’s construction sector requires engineers and practitioners to develop efficient construction technology; one of the materials used is cold-formed steel. Generally, cold-formed steel is used in the construction of the roof trusses found in houses or factories. Failures of roof truss structures are caused by errors in the calculation analysis, in the form of cross-sectional dimensions or frame configuration. In the roof truss structure, the vertical distance relative to the span length affects the edge of the frame, which carries the compressive load. If the span is too long, local buckling will occur, which causes problems for the frame strength. The model analysis uses various shapes of roof trusses, span lengths and angles, analysed with the structural stiffness matrix method. Truss models with the span shortened by one-fifth and by one-sixth are considered, and the truss model is also reviewed with increasing angles. It can be concluded that shortening the span in the compression area reduces deflection, while increasing the angle does not give good results, because the higher the roof, the heavier the load carried by the roof, so that the force is not channeled properly. The shape of the truss must be calculated correctly so that the truss is able to withstand the working load and there is no structural failure.

Keywords: cold-formed, trusses, deflection, stiffness matrix method

Procedia PDF Downloads 165
17284 A Numerical Investigation of Total Temperature Probes Measurement Performance

Authors: Erdem Meriç

Abstract:

Measuring the total temperature of an air flow accurately is a very important requirement in the development phases of many industrial products, including gas turbines and rockets. Thermocouples are very practical devices to measure temperature in such cases, but in high speed and high temperature flows, the temperature of the thermocouple junction may deviate considerably from the real flow total temperature due to the heat transfer mechanisms of convection, conduction, and radiation. To avoid errors in total temperature measurement, special probe designs which are experimentally characterized are used. In this study, a validation case, which is an experimental characterization of a specific class of total temperature probes, is selected from the literature to develop a numerical conjugate heat transfer analysis methodology to study the total temperature probe flow field and solid temperature distribution. The validated conjugate heat transfer methodology is used to investigate the flow structures inside and around the probe and the effects of probe design parameters, such as the ratio between inlet and outlet hole areas and the probe tip geometry, on measurement accuracy. Lastly, a thermal model is constructed to account for errors in total temperature measurement for a specific class of probes in different operating conditions. The outcomes of this work can guide experimentalists to design a very accurate total temperature probe and quantify the possible error for their specific case.
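
The measurement error can be framed through a recovery factor relating the indicated junction temperature to the static and total temperatures. The short sketch below evaluates this relation; the recovery factor value and flow conditions are illustrative assumptions, not those of the probes characterized in the study.

```python
gamma = 1.4          # ratio of specific heats for air

def total_temperature(T_static, mach):
    """Isentropic total temperature for a given Mach number."""
    return T_static * (1.0 + 0.5 * (gamma - 1.0) * mach ** 2)

def probe_temperature(T_static, mach, recovery_factor):
    """Temperature indicated by the junction for a given recovery factor."""
    return T_static * (1.0 + recovery_factor * 0.5 * (gamma - 1.0) * mach ** 2)

# Illustrative conditions (not the paper's test cases)
T_static = 288.15    # K
for mach in (0.3, 0.6, 0.9):
    T0 = total_temperature(T_static, mach)
    Tj = probe_temperature(T_static, mach, recovery_factor=0.97)  # assumed recovery factor
    print(f"M={mach}: T0={T0:.2f} K, indicated={Tj:.2f} K, error={T0 - Tj:.2f} K")
```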

Keywords: conjugate heat transfer, recovery factor, thermocouples, total temperature probes

Procedia PDF Downloads 133
17283 Attention States in the Sustained Attention to Response Task: Effects of Trial Duration, Mind-Wandering and Focus

Authors: Aisling Davies, Ciara Greene

Abstract:

Over the past decade, the phenomenon of mind-wandering in cognitive tasks has attracted widespread scientific attention. Research indicates that mind-wandering occurrences can be detected through behavioural responses in the Sustained Attention to Response Task (SART), and several studies have attributed a specific pattern of responding around an error in this task to an observable effect of a mind-wandering state. SART behavioural responses are also widely accepted as indices of sustained attention and of general attention lapses. However, evidence suggests that these same patterns of responding may be attributable to other factors associated with more focused states and that it may also be possible to distinguish the two states within the same task. To use behavioural responses in the SART to study mind-wandering, it is essential to establish both the SART parameters that would increase the likelihood of errors due to mind-wandering and exactly what type of responses are indicative of mind-wandering, neither of which has yet been determined. The aims of this study were to compare different versions of the SART to establish which task would induce the most mind-wandering episodes, and to determine whether mind-wandering related errors can be distinguished from errors during periods of focus by behavioural responses in the SART. To achieve these objectives, 25 participants completed four modified versions of the SART that differed from the classic paradigm in several ways so as to capture more instances of mind-wandering. The duration for which trials were presented was increased proportionately across the four versions of the task: Standard, Medium Slow, Slow, and Very Slow. Participants intermittently responded to thought probes assessing their level of focus and degree of mind-wandering throughout. Error rates, reaction times and variability in reaction times decreased in proportion to the decrease in trial presentation rate, and the proportion of mind-wandering related errors increased, until the Very Slow condition, where the extra change in duration no longer had an effect. Distinct reaction time patterns around an error, dependent on the level of focus (high/low) and the level of mind-wandering (high/low), were also observed, indicating four separate attention states occurring within the SART. This study establishes the optimal duration of trial presentation for inducing mind-wandering in the SART, provides evidence supporting the idea that different attention states can be observed within the SART, and highlights the importance of addressing other factors contributing to behavioural responses when studying mind-wandering during this task. A notable finding in relation to the standard SART was that, while more errors were observed in this version of the task, most of these errors occurred during periods of focus, raising significant questions about our current understanding of mind-wandering and associated failures of attention.

Keywords: attention, mind-wandering, trial duration rate, Sustained Attention to Response Task (SART)

Procedia PDF Downloads 182
17282 Derivation of Bathymetry from High-Resolution Satellite Images: Comparison of Empirical Methods through Geographical Error Analysis

Authors: Anusha P. Wijesundara, Dulap I. Rathnayake, Nihal D. Perera

Abstract:

Bathymetric information is of fundamental importance to coastal and marine planning and management, nautical navigation, and scientific studies of marine environments. Satellite-derived bathymetry data provide detailed information in areas where conventional sounding data are lacking and conventional surveys are inaccessible. Two empirical approaches, the log-linear bathymetric inversion model and the non-linear bathymetric inversion model, are applied for deriving bathymetry from high-resolution multispectral satellite imagery. This study compares these two approaches by means of geographical error analysis for the Kankesanturai site using WorldView-2 satellite imagery. The parameters of the non-linear inversion model were calibrated with the Levenberg-Marquardt method, and multiple linear regression was applied to calibrate the log-linear inversion model. In order to calibrate both models, Single Beam Echo Sounding (SBES) data in the study area were used as reference points. Residuals were calculated as the difference between the derived depth values and the validation echo sounder bathymetry data, and the geographical distribution of model residuals was mapped. The spatial autocorrelation was calculated, the performance of the bathymetric models was compared, and the results show the geographic errors for both models. A spatial error model was constructed from the initial bathymetry estimates and the estimates of autocorrelation. This spatial error model is used to generate more reliable estimates of bathymetry by quantifying the autocorrelation of the model error and incorporating this into an improved regression model. The log-linear model (R²=0.846) performs better than the non-linear model (R²=0.692). Finally, the spatial error models improved the bathymetric estimates derived from the linear and non-linear models up to R²=0.854 and R²=0.704, respectively. The Root Mean Square Error (RMSE) was calculated for all reference points in various depth ranges. The magnitude of the prediction error increases with depth for both the log-linear and the non-linear inversion models. The overall RMSE for the log-linear and the non-linear inversion models was ±1.532 m and ±2.089 m, respectively.
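
A minimal sketch of the two calibration routes, multiple linear regression for the log-linear model and Levenberg-Marquardt (scipy's default for unbounded curve_fit) for a non-linear band-ratio form, is shown below. The band values, depths, and the particular non-linear form are synthetic placeholders, not the WorldView-2 or SBES data.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(7)

# Synthetic placeholders for two band reflectances and echo-sounder reference depths
B1 = rng.uniform(0.02, 0.20, 300)        # e.g. blue band
B2 = rng.uniform(0.02, 0.20, 300)        # e.g. green band
depth = 2.0 - 8.0 * np.log(B1) + 5.0 * np.log(B2) + rng.normal(0, 0.5, 300)

# (1) Log-linear inversion: z = a0 + a1*ln(B1) + a2*ln(B2), multiple linear regression
A = np.column_stack([np.ones_like(B1), np.log(B1), np.log(B2)])
coef, *_ = np.linalg.lstsq(A, depth, rcond=None)
z_loglin = A @ coef

# (2) A non-linear band-ratio form (stand-in for the paper's model),
#     fitted with the Levenberg-Marquardt algorithm via curve_fit
def ratio_model(X, c0, c1, c2):
    b1, b2 = X
    r = np.log(1000 * b1) / np.log(1000 * b2)
    return c0 + c1 * r + c2 * r ** 2

popt, _ = curve_fit(ratio_model, (B1, B2), depth, p0=[0.0, 1.0, 0.0])
z_nonlin = ratio_model((B1, B2), *popt)

rmse = lambda z: np.sqrt(np.mean((z - depth) ** 2))
print(f"RMSE log-linear = {rmse(z_loglin):.3f} m, non-linear = {rmse(z_nonlin):.3f} m")
```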

Keywords: log-linear model, multi spectral, residuals, spatial error model

Procedia PDF Downloads 297
17281 A Secure Proxy Signature Scheme with Fault Tolerance Based on RSA System

Authors: H. El-Kamchouchi, Heba Gaber, Fatma Ahmed, Dalia H. El-Kamchouchi

Abstract:

Due to the rapid growth in modern communication systems, fault tolerance and data security are two important issues in a secure transaction. During the transmission of data between the sender and receiver, errors may occur frequently. Therefore, the sender must re-transmit the data to the receiver in order to correct these errors, which makes the system very feeble. To improve the scalability of the scheme, we present a secure proxy signature scheme with fault tolerance over an efficient and secure authenticated key agreement protocol based on RSA system. Authenticated key agreement protocols have an important role in building a secure communications network between the two parties.

Keywords: proxy signature, fault tolerance, RSA, key agreement protocol

Procedia PDF Downloads 285
17280 Stress Analysis of Water Wall Tubes of a Coal-fired Boiler during Soot Blowing Operation

Authors: Pratch Kittipongpattana, Thongchai Fongsamootr

Abstract:

This research aimed to study the influences of the soot blowing operation and of geometrical variables on the stress characteristics of water wall tubes located in the soot blowing areas that have caused the boilers of the Mae Moh power plant to lose generation hours. The research method is divided into 2 parts: (a) measuring the strain on water wall tubes by using 3-element rosette strain gages during full-capacity plant operation and during soot blowing operations, and (b) creating a finite element model in order to calculate stresses on the tubes and validating the model with experimental data from steady-state plant operation. Then, the geometrical variables in the model were changed to study the stresses on the tubes. The results revealed that the stress was not affected by the soot blowing process, and the finite element model gave results with 1.24% error relative to the experiment. The geometrical variables influenced the stress, and the optimum tube design in this research reduced the average stress by 31.28% compared with the present design.
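
The reduction of 3-element rectangular (45°) rosette readings to principal strains and stresses follows the standard textbook formulas, as sketched below; the strain values and material properties are illustrative, not measurements from the Mae Moh boiler.

```python
import math

def rosette_principal(e_a, e_b, e_c, E=200e9, nu=0.3):
    """Principal strains/stresses from a 45-degree rectangular rosette (plane stress)."""
    mean = (e_a + e_c) / 2.0
    radius = math.sqrt((e_a - e_b) ** 2 + (e_b - e_c) ** 2) / math.sqrt(2.0)
    e1, e2 = mean + radius, mean - radius                       # principal strains
    s1 = E / (1 - nu ** 2) * (e1 + nu * e2)                     # principal stresses [Pa]
    s2 = E / (1 - nu ** 2) * (e2 + nu * e1)
    theta = 0.5 * math.atan2(2 * e_b - e_a - e_c, e_a - e_c)    # angle of e1 from gage 'a'
    return e1, e2, s1, s2, math.degrees(theta)

# Illustrative microstrain readings (not measurements from the plant)
e_a, e_b, e_c = 450e-6, 300e-6, -120e-6
e1, e2, s1, s2, th = rosette_principal(e_a, e_b, e_c, E=200e9, nu=0.3)
print(f"e1={e1*1e6:.0f} ue, e2={e2*1e6:.0f} ue, "
      f"s1={s1/1e6:.1f} MPa, s2={s2/1e6:.1f} MPa, theta={th:.1f} deg")
```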

Keywords: boiler water wall tube, finite element, stress analysis, strain gage rosette

Procedia PDF Downloads 387
17279 Melanoma and Non-Melanoma, Skin Lesion Classification, Using a Deep Learning Model

Authors: Shaira L. Kee, Michael Aaron G. Sy, Myles Joshua T. Tan, Hezerul Abdul Karim, Nouar AlDahoul

Abstract:

Skin diseases are considered the fourth most common disease, with melanoma and non-melanoma skin cancer being the most common types of cancer in Caucasians. The alarming increase in skin cancer cases shows an urgent need for further research to improve diagnostic methods, as early diagnosis can significantly improve the 5-year survival rate. Machine learning algorithms for image pattern analysis in diagnosing skin lesions can dramatically increase the accuracy of detection and decrease possible human errors. Several studies have shown that the diagnostic performance of computer algorithms outperforms that of dermatologists. However, existing methods still need improvements to reduce diagnostic errors and generate efficient and accurate results. Our paper proposes an ensemble method to classify dermoscopic images into benign and malignant skin lesions. The experiments were conducted using the International Skin Imaging Collaboration (ISIC) image samples. The dataset contains 3,297 dermoscopic images with benign and malignant categories. The results show improvement in performance, with an accuracy of 88% and an F1 score of 87%, outperforming other existing models such as support vector machine (SVM), Residual Network (ResNet50), EfficientNetB0, EfficientNetB4, and VGG16.
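
A minimal sketch of the ensemble (soft-voting) step, averaging per-class probabilities from several classifiers and thresholding, is shown below; the probabilities and labels are placeholders, not outputs of the trained networks.

```python
import numpy as np

# Placeholder per-image malignancy probabilities from three individual classifiers
# (standing in for e.g. VGG16 / EfficientNet / ResNet outputs; not real model results)
p_model_a = np.array([0.91, 0.12, 0.55, 0.30])
p_model_b = np.array([0.85, 0.20, 0.62, 0.25])
p_model_c = np.array([0.88, 0.08, 0.48, 0.40])

# Soft-voting ensemble: average the predicted probabilities, then threshold
p_ensemble = np.mean([p_model_a, p_model_b, p_model_c], axis=0)
labels = (p_ensemble >= 0.5).astype(int)          # 1 = malignant, 0 = benign

y_true = np.array([1, 0, 1, 0])
tp = np.sum((labels == 1) & (y_true == 1))
fp = np.sum((labels == 1) & (y_true == 0))
fn = np.sum((labels == 0) & (y_true == 1))
precision = tp / (tp + fp) if tp + fp else 0.0
recall = tp / (tp + fn) if tp + fn else 0.0
f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
print(f"accuracy={np.mean(labels == y_true):.2f}, F1={f1:.2f}")
```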

Keywords: deep learning, VGG16, EfficientNet, CNN, ensemble, dermoscopic images, melanoma

Procedia PDF Downloads 80