Search results for: computational error
3733 On the Accuracy of Basic Modal Displacement Method Considering Various Earthquakes
Authors: Seyed Sadegh Naseralavi, Sadegh Balaghi, Ehsan Khojastehfar
Abstract:
Time history seismic analysis is considered the most accurate method to predict the seismic demand of structures. Its main deficiency, however, is the computational time required to obtain results. When it is applied in an optimization process, in which the structure must be analyzed thousands of times, reducing the computational time of seismic analysis makes the optimization algorithms far more practical. Approximate methods inevitably produce some error in comparison with exact time history analysis, but combination methods, namely the Complete Quadratic Combination (CQC) and the Square Root of the Sum of Squares (SRSS), drastically reduce the computational time by combining the peak responses of each mode. In the present research, the Basic Modal Displacement (BMD) method is introduced and applied to estimate the seismic demand of the main structure. The seismic demand of the sampled structure is estimated from the modal displacements of a basic structure (for which the modal displacements have already been calculated). Shear steel structures are selected as case studies. The error of the introduced method is calculated by comparing the estimated seismic demands with those of exact time history dynamic analysis. The efficiency of the proposed method is demonstrated by application of three types of earthquakes (classified by the time of peak ground acceleration).
Keywords: time history dynamic analysis, basic modal displacement, earthquake-induced demands, shear steel structures
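A minimal sketch of the peak-response combination rules mentioned above (SRSS, and CQC with the equal-damping correlation coefficient). The modal peak values, frequencies, and damping ratio below are invented for illustration and are not data from the paper.

```python
import numpy as np

def srss(peaks):
    """Square Root of the Sum of Squares combination of modal peak responses."""
    peaks = np.asarray(peaks, dtype=float)
    return np.sqrt(np.sum(peaks ** 2))

def cqc(peaks, omegas, zeta=0.05):
    """Complete Quadratic Combination with the equal-damping correlation coefficient."""
    peaks = np.asarray(peaks, dtype=float)
    omegas = np.asarray(omegas, dtype=float)
    r = omegas[None, :] / omegas[:, None]   # frequency ratios between modes i and j
    rho = (8 * zeta**2 * (1 + r) * r**1.5) / ((1 - r**2)**2 + 4 * zeta**2 * r * (1 + r)**2)
    return np.sqrt(peaks @ rho @ peaks)

# Hypothetical modal peak displacements (m) and natural frequencies (rad/s)
peaks = [0.12, 0.05, 0.02]
omegas = [6.0, 17.5, 28.0]
print("SRSS estimate:", srss(peaks))
print("CQC estimate :", cqc(peaks, omegas))
```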
Procedia PDF Downloads 355
3732 Research Activity in Computational Science Using High Performance Computing: Co-Authorship Network Analysis
Authors: Sul-Ah Ahn, Youngim Jung
Abstract:
The research activities of computational scientists using high-performance computing are analyzed using bibliometric approaches. This study aims at providing computational scientists using high-performance computing and relevant policy planners with useful bibliometric results for an assessment of research activities. In order to achieve this purpose, we carried out a co-authorship network analysis of journal articles to assess the research activities of computational scientists using high-performance computing as a case study. For this study, we used journal articles from Elsevier's Scopus database covering the period 2006-2015. We ranked authors in the field of computational science using high-performance computing by the number of papers published during the ten years from 2006. Finally, we drew the co-authorship network for the top 50 authors and their co-authors and described some features of the co-authorship network in relation to the author rank. Suggestions for further studies are discussed.
Keywords: co-authorship network analysis, computational science, high performance computing, research activity
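A minimal sketch of the kind of co-authorship network construction described above, using networkx; the paper list below is invented for illustration and stands in for the Scopus bibliographic records used in the study.

```python
import itertools
from collections import Counter
import networkx as nx

# Hypothetical author lists extracted from bibliographic records (one list per paper)
papers = [
    ["A. Kim", "B. Lee", "C. Park"],
    ["A. Kim", "D. Cho"],
    ["B. Lee", "C. Park"],
    ["A. Kim", "B. Lee"],
]

# Rank authors by number of papers (the study uses the top 50; here the top 2)
paper_counts = Counter(a for authors in papers for a in authors)
top_authors = [a for a, _ in paper_counts.most_common(2)]

# Build the co-authorship graph: an edge links every pair of co-authors,
# weighted by the number of joint papers
G = nx.Graph()
for authors in papers:
    for a, b in itertools.combinations(sorted(set(authors)), 2):
        w = G[a][b]["weight"] + 1 if G.has_edge(a, b) else 1
        G.add_edge(a, b, weight=w)

print("Top authors:", top_authors)
print("Degree of top authors:", {a: G.degree(a) for a in top_authors})
```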
Procedia PDF Downloads 323
3731 Malay ESL (English as a Second Language) Students' Difficulties in Using English Prepositions
Authors: Chek Kim Loi
Abstract:
The study undertakes an error analysis of prepositions employed in the written work of Form 4 Malay ESL (English as a Second Language) students in Malaysia. The error analysis is carried out using Richards’s (1974) framework of intralingual and interlingual errors and Bennett’s (1975) framework for identifying the prepositional concepts found in the sample. The study first identifies common prepositional errors in the written texts of 150 student participants. It then measures the relative intensities of these errors and investigates their possible causes. One significant finding is that, among the nine prepositional concepts examined, the participating students tended to make errors in the use of prepositions of time and place. The present study has pedagogical implications for teaching English prepositions to Malay ESL students.
Keywords: error, interlingual, intralingual, preposition
Procedia PDF Downloads 196
3730 Tests for Zero Inflation in Count Data with Measurement Error in Covariates
Authors: Man-Yu Wong, Siyu Zhou, Zhiqiang Cao
Abstract:
In quality of life research, health service utilization is an important determinant of medical resource expenditures on colorectal cancer (CRC) care. A better understanding of the increased utilization of health services is essential for optimizing the allocation of healthcare resources and thus enhancing service quality, especially in regions with high expenditure on CRC care such as Hong Kong. In assessing the association between health-related quality of life (HRQOL) and health service utilization in patients with colorectal neoplasm, count data models can be used, which account for overdispersion or extra zero counts. In our data, the HRQOL evaluation is a self-reported measure obtained from a questionnaire completed by the patients, so misreports and variations in the data are inevitable. In addition, there are more zero counts in the observed number of clinical consultations (observed frequency of zero counts = 206) than expected from a Poisson distribution with mean equal to 1.33 (expected frequency of zero counts = 156). This suggests that an excess of zero counts may exist. Therefore, we study tests for detecting zero-inflation in models with measurement error in covariates. Method: Under a classical measurement error model, the approximate likelihood function for the zero-inflated Poisson (ZIP) regression model can be obtained, and the Approximate Maximum Likelihood Estimate (AMLE) can then be derived accordingly; this estimator is consistent and asymptotically normally distributed. By calculating the score function and Fisher information based on the AMLE, a score test is proposed to detect the zero-inflation effect in the ZIP model with measurement error. The proposed test asymptotically follows a standard normal distribution under H0, and it is consistent with the test proposed for the zero-inflation effect when there is no measurement error. Results: Simulation results show that the empirical power of our proposed test is the highest among existing tests for zero-inflation in the ZIP model with measurement error. In the real data analysis, whether or not measurement error in the covariates is considered, the existing tests and our proposed test all imply that H0 should be rejected with a p-value less than 0.001, i.e., the zero-inflation effect is very significant and the ZIP model is superior to the Poisson model for analyzing these data. However, if measurement error in the covariates is not considered, only one covariate is significant; if it is considered, only another covariate is significant. Moreover, the direction of the coefficient estimates for these two covariates differs between the ZIP regression models with and without measurement error. Conclusion: In our study, compared to the Poisson model, the ZIP model should be chosen when assessing the association between condition-specific HRQOL and health service utilization in patients with colorectal neoplasm, and models taking measurement error into account will yield statistically more reliable and precise information.
Keywords: count data, measurement error, score test, zero inflation
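The zero-excess check quoted above (206 observed zeros versus about 156 expected under a Poisson with mean 1.33) can be reproduced in a few lines; the sample size below is back-calculated from those two figures and is therefore an assumption, and this simple diagnostic is not the paper's score test itself.

```python
import numpy as np

# Figures quoted in the abstract; n is back-calculated and therefore approximate
mean_consultations = 1.33
expected_zero_freq = 156
n = round(expected_zero_freq / np.exp(-mean_consultations))   # roughly 590 patients

observed_zeros = 206
expected_zeros = n * np.exp(-mean_consultations)   # P(Y = 0) = exp(-lambda) under Poisson

print(f"n = {n}, expected zeros = {expected_zeros:.0f}, observed zeros = {observed_zeros}")
# A large excess of observed zeros over the Poisson expectation motivates the
# zero-inflated Poisson (ZIP) model and the score test developed in the paper.
```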
Procedia PDF Downloads 289
3729 Forecast Based on an Empirical Probability Function with an Adjusted Error Using Propagation of Error
Authors: Oscar Javier Herrera, Manuel Angel Camacho
Abstract:
This paper addresses a cutting-edge method of business demand forecasting based on an empirical probability function, applicable when the historical behavior of the data is random. Additionally, it presents error determination based on the numerical technique of propagation of errors. The methodology began with a characterization and diagnosis of the demand-planning process as part of production management; new ways to predict demand through probability techniques and to calculate its error using numerical methods were then investigated, all based on the behavior of the data. The analysis was carried out for the specific business circumstances of a company in the communications sector located in Bogota, Colombia. In conclusion, this application made it possible to obtain the adequate stock of the products required by the company to provide its services, helping the company reduce its service time, increase the client satisfaction rate, reduce stock that has not been in rotation for a long time, code its inventory, and plan reorder points for the replenishment of stock.
Keywords: demand forecasting, empirical distribution, propagation of error, Bogota
Procedia PDF Downloads 631
3728 Numerical Evolution Methods of Rational Form for Diffusion Equations
Authors: Said Algarni
Abstract:
The purpose of this study was to investigate selected numerical methods that demonstrate good performance in solving PDEs. We adapted an alternative method that involves rational polynomials. The Padé time stepping (PTS) method, which is highly stable for the purposes of the present application and is associated with lower computational cost, was applied. Furthermore, PTS was modified for our study, which focused on diffusion equations. Numerical runs were conducted to obtain the optimal local error control threshold.
Keywords: Padé time stepping, finite difference, reaction diffusion equation, PDEs
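The abstract does not spell out its modified PTS scheme, but the classical example of Padé time stepping for the diffusion equation u_t = u_xx is the (1,1) Padé approximant of the matrix exponential, i.e., the Crank-Nicolson scheme. The sketch below illustrates that baseline under assumed grid and time-step values, not the paper's adaptive error-control variant.

```python
import numpy as np

# 1D diffusion u_t = u_xx on [0, 1] with homogeneous Dirichlet boundaries
nx, dt, nsteps = 50, 1e-3, 200
x = np.linspace(0.0, 1.0, nx)
h = x[1] - x[0]

# Second-difference operator A on the interior nodes
A = (np.diag(-2.0 * np.ones(nx - 2)) +
     np.diag(np.ones(nx - 3), 1) +
     np.diag(np.ones(nx - 3), -1)) / h**2

# (1,1) Pade approximant of exp(dt*A): (I - dt/2 A)^{-1} (I + dt/2 A), i.e. Crank-Nicolson
I = np.eye(nx - 2)
step = np.linalg.solve(I - 0.5 * dt * A, I + 0.5 * dt * A)

u = np.sin(np.pi * x)[1:-1]          # initial condition (interior values)
for _ in range(nsteps):
    u = step @ u

exact = np.exp(-np.pi**2 * dt * nsteps) * np.sin(np.pi * x)[1:-1]
print("max error vs exact solution:", np.max(np.abs(u - exact)))
```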
Procedia PDF Downloads 300
3727 Assessment of Intern Students' Attitudes towards Medical Errors
Authors: Nilgün Katrancı, Pınar Göv
Abstract:
With the acceleration of quality and patient safety work in healthcare services in the 21st century, activities to reduce errors have gained importance. The prevention and reduction of unintended consequences related to healthcare services, and of errors made during the delivery of healthcare services, can be achieved by understanding the causes of the errors. Communication problems are the most frequently seen cause in such cases. Nurses, who communicate with patients more closely and for longer periods, play a more critical role in ensuring patient safety compared to other healthcare professionals. To reduce the risk of medical errors and increase the quality of care, it is important to raise nurses' awareness of patient safety during the training period. This descriptive study was conducted between February 2017 and May 2017 to assess intern students' attitudes towards and knowledge of patient safety and medical errors. The target population of the study consists of intern students at the Faculty of Nursing in Gaziantep University (N=180). The study did not apply any sample selection method, and the research group consisted of 90 female and 37 male senior students who were available and accepted to take part in the study (N=127). The study used a personal information form and a medical error attitude scale to collect data. The medical error attitude scale consists of 16 items and 3 sub-dimensions. The most frequently seen medical error in the clinics the interns worked at was 'failure to comply with asepsis rules', with a rate of 67.7%. The most frequent reason for not disclosing an error was 'noticing and correcting the error before it affected the patient', with a rate of 70.9%. The most frequently expressed implications of disclosing a serious error for the intern students participating in the study were 'harming patient trust' (78%) and 'possibility of overreaction by the patient' (62.2%). According to the results of the study, the awareness of the students about the importance of medical errors and error reporting was found to be high (3.48 ± 0.49). Consequently, it is important to assess and improve the attitudes of nurses and other healthcare professionals towards medical errors in order to determine the causes of medical errors and prevent them.
Keywords: healthcare service, intern student, medical error, patient safety
Procedia PDF Downloads 203
3726 Dynamic Compensation for Environmental Temperature Variation in the Coolant Refrigeration Cycle as a Means of Increasing Machine-Tool Precision
Authors: Robbie C. Murchison, Ibrahim Küçükdemiral, Andrew Cowell
Abstract:
Thermal effects are the largest source of dimensional error in precision machining, and a major proportion is caused by ambient temperature variation. The use of coolant is a primary means of mitigating these effects, but there has been limited work on coolant temperature control. This research critically explored whether CNC-machine coolant refrigeration systems adapted to actively compensate for ambient temperature variation could increase machining accuracy. Accuracy data were collected from operators’ checklists for a CNC 5-axis mill and statistically reduced to bias and precision metrics for observations of one day over a sample period of 27 days. Temperature data were collected using three USB dataloggers in ambient air, the chiller inflow, and the chiller outflow. The accuracy and temperature data were analysed using Pearson correlation, then the thermodynamics of the system were described using system identification with MATLAB. It was found that 75% of thermal error is reflected in the hot coolant temperature but that this is negligibly dependent on ambient temperature. The effect of the coolant refrigeration process on hot coolant outflow temperature was also found to be negligible. Therefore, the evidence indicated that it would not be beneficial to adapt coolant chillers to compensate for ambient temperature variation. However, it is concluded that hot coolant outflow temperature is a robust and accessible source of thermal error data which could be used for prevention strategy evaluation or as the basis of other thermal error strategies.
Keywords: CNC manufacturing, machine-tool, precision machining, thermal error
Procedia PDF Downloads 89
3725 Modelling Volatility of Cryptocurrencies: Evidence from GARCH Family of Models with Skewed Error Innovation Distributions
Authors: Timothy Kayode Samson, Adedoyin Isola Lawal
Abstract:
The past five years have shown a sharp increase in public interest in the crypto market, with its market capitalization growing from $100 billion in June 2017 to $2158.42 billion on April 5, 2022. Despite the extreme volatility of cryptocurrencies, the use of skewed error innovation distributions in modelling the volatility behaviour of these digital currencies has not been given much research attention. Hence, this study models the volatility of the 5 largest cryptocurrencies by market capitalization (Bitcoin, Ethereum, Tether, Binance coin, and USD Coin) using four variants of GARCH models (GJR-GARCH, sGARCH, EGARCH, and APARCH) estimated using three skewed error innovation distributions (skewed normal, skewed Student-t, and skewed generalized error innovation distributions). Daily closing prices of these currencies were obtained from the Yahoo Finance website. Findings reveal that Binance coin reported higher mean returns compared to the other digital currencies, while the skewness indicates that Binance coin, Tether, and USD coin increased more than they decreased in value within the period of study. For both Bitcoin and Ethereum, negative skewness was obtained, meaning that within the period of study the returns of these currencies decreased more than they increased in value. Returns from these cryptocurrencies were found to be stationary but not normally distributed, with evidence of the ARCH effect. The skewness parameters in all of the best forecasting models were significant (p<.05), justifying the use of skewed error innovation distributions, which have fatter tails than the normal, Student-t, and generalized error innovation distributions. For Binance coin, EGARCH-sstd outperformed the other volatility models, while for Bitcoin, Ethereum, Tether, and USD coin, the best forecasting models were EGARCH-sstd, APARCH-sstd, EGARCH-sged, and GJR-GARCH-sstd, respectively. This suggests the superiority of the skewed Student-t distribution and the skewed generalized error distribution over the skewed normal distribution.
Keywords: skewed generalized error distribution, skewed normal distribution, skewed Student-t distribution, APARCH, EGARCH, sGARCH, GJR-GARCH
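A minimal sketch of fitting one of the models named above (EGARCH with skewed Student-t innovations) using the Python arch package, assuming a reasonably recent version that supports the EGARCH volatility process and the "skewt" distribution. The returns are simulated stand-ins, not the Yahoo Finance series used in the study; the sGARCH/GJR/APARCH variants with skewed normal and skewed GED errors would follow the same pattern.

```python
import numpy as np
from arch import arch_model

# Synthetic daily percentage returns standing in for a cryptocurrency series
rng = np.random.default_rng(0)
returns = rng.standard_t(df=4, size=1500) * 2.0

# EGARCH(1,1) with skewed Student-t error innovations (the "EGARCH-sstd" of the abstract)
model = arch_model(returns, vol="EGARCH", p=1, o=1, q=1, dist="skewt")
result = model.fit(disp="off")

print(result.summary())
# Information criteria (result.aic, result.bic) can be compared across GARCH
# variants and innovation distributions to pick the best forecasting model.
```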
Procedia PDF Downloads 121
3724 On Constructing a Cubically Convergent Numerical Method for Multiple Roots
Authors: Young Hee Geum
Abstract:
We propose the numerical method defined by x_{n+1} = x_n − λ f(x_n − μh(x_n)) / f′(x_n), n ∈ ℕ, and determine the control parameters λ and μ so that the iteration converges cubically. In addition, we derive the asymptotic error constant. Applying the proposed scheme to various test functions, the numerical results show good agreement with the theory analyzed in this paper and are verified using Mathematica with its high-precision computability.
Keywords: asymptotic error constant, iterative method, multiple root, root-finding
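The paper's own scheme depends on a choice of h(x) and tuned parameters λ, μ that the abstract does not give, so the sketch below instead uses the classical modified Newton iteration for a root of known multiplicity m and estimates the computational order of convergence numerically. It illustrates the kind of verification reported, not the proposed method itself.

```python
import numpy as np

def modified_newton(f, fp, x0, m, tol=1e-14, itmax=50):
    """Classical modified Newton x_{n+1} = x_n - m f(x_n)/f'(x_n) for a root of multiplicity m."""
    xs = [x0]
    for _ in range(itmax):
        x = xs[-1]
        x_new = x - m * f(x) / fp(x)
        xs.append(x_new)
        if abs(x_new - x) < tol:
            break
    return xs

# Test function with a double root at x = 1
f = lambda x: (x - 1.0) ** 2 * np.exp(x)
fp = lambda x: (x - 1.0) * (x + 1.0) * np.exp(x)
xs = modified_newton(f, fp, x0=2.0, m=2)

# Computational order of convergence from successive errors e_n = |x_n - 1|
e = [abs(x - 1.0) for x in xs if abs(x - 1.0) > 0]
for n in range(2, len(e)):
    coc = np.log(e[n] / e[n - 1]) / np.log(e[n - 1] / e[n - 2])
    print(f"iteration {n}: error = {e[n]:.3e}, estimated order = {coc:.2f}")
```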
Procedia PDF Downloads 221
3723 SSRUIC Students’ Attitude and Preference toward Error Corrections
Authors: Papitchaya Papangkorn
Abstract:
Matching the expectations of teachers and learners is significant for successful language learning. Moreover, teachers should discover what their learners think and feel about what and how they want to learn. Therefore, this study investigates the preferences of International College, Suan Sunandha Rajabhat University (SSRUIC) students toward error correction in order to help SSRUIC teachers match their expectations with those of their learners, which is important for successful language learning. The study examined the learners' attitudes and preferences toward error correction through 50 first-year SSRUIC students, both male (25) and female (25), in Bangkok, Thailand. The data were collected from a questionnaire and interviews investigating the necessity and frequency, timing, type of errors, method of corrective feedback, and person who gives the error correction, in order to answer the overall research question and sub-questions. The findings indicate five suggestions regarding the overall research question. Firstly, errors should be treated, and always be treated. Secondly, treating errors after the student finishes speaking is the most appropriate timing. Thirdly, 'errors that may cause problems in the listener's understanding' and 'frequent spoken errors' should be treated. Fourthly, repetition and explicit feedback were the most popular types of feedback among males, whereas metalinguistic feedback was the most favoured type amongst females. Finally, teachers were the most preferred person to deliver corrective feedback to the learners. Although the results of the study are difficult to generalize to the larger population of Thai EFL learners because of the small sample, the findings provide useful information that may contribute to the understanding of SSRUIC learners' preferences toward error correction, and they might reduce the gap between what teachers employ and what students expect when receiving corrective feedback. The reduction of this gap may be useful for the learning process and could enhance the efforts of both teachers and learners in a Thai context.
Keywords: attitude, corrective feedback, error, preference
Procedia PDF Downloads 359
3722 Alternative Computational Arrangements on g-Group (g > 2) Profile Analysis
Authors: Emmanuel U. Ohaegbulem, Felix N. Nwobi
Abstract:
Alternative and simple computational arrangements for carrying out multivariate profile analysis when more than two groups (populations) are involved are presented. These arrangements have been demonstrated not only to yield equivalent results for the test statistics (the Wilks lambdas), but also to require less computational effort than other arrangements so far presented in the literature, in addition to being quite simple and easy to apply.
Keywords: coincident profiles, g-group profile analysis, level profiles, parallel profiles, repeated measures MANOVA
Procedia PDF Downloads 449
3721 Channel Estimation for LTE Downlink
Authors: Rashi Jain
Abstract:
LTE systems employ Orthogonal Frequency Division Multiplexing (OFDM) as the multiple access technology for the downlink channels. For enhanced performance, accurate channel estimation is required. Various algorithms such as Least Squares (LS), Minimum Mean Square Error (MMSE) and Recursive Least Squares (RLS) can be employed for the purpose. This paper proposes a channel estimation algorithm based on the Kalman filter for the LTE downlink system. Using the frequency-domain pilots, the initial channel response is obtained using the LS criterion. A Kalman filter is then employed to track the channel variations in the time domain. To suppress the noise within a symbol, threshold processing is employed. The paper draws a comparison between the LS, MMSE, RLS and Kalman filter approaches for channel estimation. The parameters for evaluation are Bit Error Rate (BER), Mean Square Error (MSE) and run-time.
Keywords: LTE, channel estimation, OFDM, RLS, Kalman filter, threshold
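A minimal single-tap sketch of the estimator structure described above: an LS estimate at each pilot followed by a scalar Kalman filter that tracks the time variation, assuming a first-order Gauss-Markov channel model. The LTE pilot grid, the MMSE/RLS baselines, and the thresholding step are omitted, and all parameter values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sym, a = 200, 0.98                     # number of pilot symbols, AR(1) fading coefficient
q, r = 1 - a**2, 0.05                    # process and measurement noise variances

# Simulate a time-varying (Gauss-Markov) channel tap and QPSK pilots
h = np.zeros(n_sym, dtype=complex)
for k in range(1, n_sym):
    h[k] = a * h[k-1] + np.sqrt(q/2) * (rng.standard_normal() + 1j*rng.standard_normal())
pilots = (rng.choice([1, -1], n_sym) + 1j * rng.choice([1, -1], n_sym)) / np.sqrt(2)
y = h * pilots + np.sqrt(r/2) * (rng.standard_normal(n_sym) + 1j*rng.standard_normal(n_sym))

# LS estimate at each pilot, then Kalman tracking in the time domain
h_ls = y / pilots
h_kf = np.zeros(n_sym, dtype=complex)
h_est, P = 0.0 + 0.0j, 1.0
for k in range(n_sym):
    h_pred, P_pred = a * h_est, a**2 * P + q      # prediction
    K = P_pred / (P_pred + r)                     # Kalman gain (LS output as measurement)
    h_est = h_pred + K * (h_ls[k] - h_pred)       # update
    P = (1 - K) * P_pred
    h_kf[k] = h_est

print("LS MSE    :", np.mean(np.abs(h_ls - h)**2))
print("Kalman MSE:", np.mean(np.abs(h_kf - h)**2))
```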
Procedia PDF Downloads 358
3720 A Novel Approach to Design of EDDR Architecture for High Speed Motion Estimation Testing Applications
Authors: T. Gangadhararao, K. Krishna Kishore
Abstract:
Motion estimation (ME) plays a critical role in a video coder, so testing such a module is of priority concern. Focusing on the testing of ME in a video coding system, this work presents an error detection and data recovery (EDDR) design, based on the residue-and-quotient (RQ) code, to embed into ME for video coding testing applications. An error in the processing elements (PEs), i.e., the key components of an ME, can be detected and recovered effectively by using the proposed EDDR design. The proposed EDDR design for ME testing can detect errors and recover data with an acceptable area overhead and timing penalty.
Keywords: area overhead, data recovery, error detection, motion estimation, reliability, residue-and-quotient (RQ) code
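A toy illustration of the residue-and-quotient (RQ) coding idea behind such an EDDR design: an operand is encoded as a quotient and residue with respect to a modulus, a mismatch between the recomputed and stored RQ values flags an error, and the data are recovered from the code. The modulus and values below are assumptions, and the hardware-level PE checking of the paper is not modeled.

```python
M = 64  # modulus; in an actual design this is chosen to suit the PE word length

def rq_encode(x: int) -> tuple[int, int]:
    """Encode operand x as its quotient and residue with respect to M."""
    return x // M, x % M

def rq_check_and_recover(x_maybe_corrupted: int, q: int, r: int) -> tuple[bool, int]:
    """Detect an error by re-deriving the RQ code and recover the data from (q, r)."""
    error = rq_encode(x_maybe_corrupted) != (q, r)
    recovered = q * M + r if error else x_maybe_corrupted
    return error, recovered

# Example: protect a value processed by a PE
x = 1234
q, r = rq_encode(x)

corrupted = x ^ 0b100000          # a single bit flip during processing
err, value = rq_check_and_recover(corrupted, q, r)
print(f"error detected: {err}, recovered value: {value}")   # True, 1234
```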
Procedia PDF Downloads 432
3719 Uncertainty Quantification of Corrosion Anomaly Length of Oil and Gas Steel Pipelines Based on Inline Inspection and Field Data
Authors: Tammeen Siraj, Wenxing Zhou, Terry Huang, Mohammad Al-Amin
Abstract:
The high-resolution inline inspection (ILI) tool is used extensively in the pipeline industry to identify, locate, and measure metal-loss corrosion anomalies on buried oil and gas steel pipelines. Corrosion anomalies may occur singly (i.e., individual anomalies) or as clusters (i.e., a colony of corrosion anomalies). Although ILI technology has advanced immensely, there are measurement errors associated with the sizes of corrosion anomalies reported by ILI tools due to limitations of the tools and associated sizing algorithms, and the detection threshold of the tools (i.e., the minimum detectable feature dimension). Quantifying the measurement error in the ILI data is crucial for corrosion management and for developing maintenance strategies that satisfy the safety and economic constraints. Studies on the measurement error associated with the length of corrosion anomalies (in the longitudinal direction of the pipeline) have been scarcely reported in the literature and will be investigated in the present study. Limitations in the ILI tool and clustering process can sometimes cause clustering error, which is defined as the error introduced during the clustering process by including or excluding a single anomaly or group of anomalies in or from a cluster. Clustering error has been found to be one of the biggest contributory factors to the relatively high uncertainties associated with ILI-reported anomaly length. As such, this study focuses on developing a consistent and comprehensive framework to quantify the measurement errors in the ILI-reported anomaly length by comparing the ILI data and corresponding field measurements for individual and clustered corrosion anomalies. The analysis carried out in this study is based on the ILI and field measurement data for a set of anomalies collected from two segments of a buried natural gas pipeline currently in service in Alberta, Canada. Data analyses showed that the measurement error associated with the ILI-reported length of the anomalies without clustering error, denoted as Type I anomalies, is markedly less than that for anomalies with clustering error, denoted as Type II anomalies. A methodology employing data mining techniques is further proposed to classify the Type I and Type II anomalies based on the ILI-reported corrosion anomaly information.
Keywords: clustered corrosion anomaly, corrosion anomaly assessment, corrosion anomaly length, individual corrosion anomaly, metal-loss corrosion, oil and gas steel pipeline
Procedia PDF Downloads 310
3718 An Error Analysis of English Communication of Suan Sunandha Rajabhat University Students
Authors: Chantima Wangsomchok
Abstract:
The main purposes of this study are (1) to test the students' communicative competence within six main functions: greeting, parting, thanking, offering, requesting and suggesting, (2) to employ error analysis of the students' communicative competence within those functions, and (3) to compare the characteristics of the errors found from the investigation. The subjects of the study are 328 first-year undergraduates taking the Foundation English course in the first semester of the 2008 academic year at Suan Sunandha Rajabhat University. The study found that while the subjects showed high communicative competence in the use of three functions (greeting, thanking, and offering), they showed poor communicative competence in suggesting, requesting and parting. In addition, grammatical errors were most frequently found in the parting function, whereas errors were less frequently found in the thanking and requesting functions. In contrast, the students tended to show high pragmatic failure in the use of the greeting and suggesting functions.
Keywords: error analysis, functions of English language, communicative competence, cognitive science
Procedia PDF Downloads 431
3717 Exploring Bidirectional Encoder Representations from the Transformers’ Capabilities to Detect English Preposition Errors
Authors: Dylan Elliott, Katya Pertsova
Abstract:
Preposition errors are some of the most common errors made by L2 speakers. In addition, improving error correction and detection methods remains an open issue in the realm of Natural Language Processing (NLP). This research investigates whether the Bidirectional Encoder Representations from Transformers model (BERT) has the potential to correct preposition errors accurately enough to be useful in error correction software. It finds that BERT performs strongly when the scope of its error correction is limited to preposition choice. The researchers used an open-source BERT model and over three hundred thousand edited sentences from Wikipedia, tagged for part of speech, in which only a preposition edit had occurred. To test BERT's ability to detect errors, a technique known as multi-level masking was used to generate suggestions based on sentence context for every prepositional environment in the test data. These suggestions were compared with the original errors in the data and their known corrections to evaluate BERT's performance. The suggestions were further analyzed to determine whether BERT more often agreed with the judgements of the Wikipedia editors. Both the untrained and the fine-tuned models were compared. Fine-tuning led to a greater rate of error detection, which significantly improved recall but lowered precision due to an increase in false positives, or falsely flagged errors. However, in most cases these false positives were not errors in preposition usage but merely cases where more than one preposition was possible. Furthermore, when BERT correctly identified an error, the model largely agreed with the Wikipedia editors, suggesting that BERT's ability to detect misused prepositions is better than previously believed. To evaluate to what extent BERT's false positives were grammatical suggestions, we plan to conduct a further crowd-sourcing study to test the grammaticality of BERT's suggested sentence corrections against native speakers' judgments.
Keywords: BERT, grammatical error correction, preposition error detection, prepositions
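A minimal sketch of the masking idea described above, using the Hugging Face transformers fill-mask pipeline with an off-the-shelf bert-base-uncased model: the prepositional slot is replaced by the [MASK] token and BERT's top suggestions are compared against the preposition actually used. The paper's multi-level masking, fine-tuning, and Wikipedia edit corpus are not reproduced here, and the example sentence and preposition list are assumptions.

```python
from transformers import pipeline

# Off-the-shelf masked language model; the paper additionally fine-tunes on Wikipedia edits
unmasker = pipeline("fill-mask", model="bert-base-uncased")

PREPOSITIONS = {"in", "on", "at", "by", "for", "to", "of", "with", "from", "about"}

def check_preposition(sentence_with_mask: str, used: str, top_k: int = 10):
    """Flag the used preposition if it is absent from BERT's top suggestions for the slot."""
    suggestions = [s["token_str"].strip() for s in unmasker(sentence_with_mask, top_k=top_k)]
    prep_suggestions = [s for s in suggestions if s in PREPOSITIONS]
    flagged = used not in prep_suggestions
    return flagged, prep_suggestions

# Learner sentence with a likely preposition error ("married with her")
flagged, suggested = check_preposition("He is married [MASK] her.", used="with")
print("flagged:", flagged, "| suggested prepositions:", suggested)
```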
Procedia PDF Downloads 148
3716 Integrating and Evaluating Computational Thinking in an Undergraduate Marine Science Course
Authors: Dana Christensen
Abstract:
Undergraduate students, particularly in the environmental sciences, have difficulty displaying quantitative skills in their laboratory courses. Students spend time sampling in the field, often using new methods, and are expected to make sense of the data they collect. Computational thinking may be used to navigate these new experiences. We developed a curriculum for the marine science department at a small liberal arts college in the Northeastern United States based on previous computational thinking frameworks. This curriculum incorporates marine science data sets with specific objectives and topics selected by the faculty at the College. The curriculum was distributed to all students enrolled in introductory marine science classes as a mandatory module. Two pre-tests and post-tests will be used to quantitatively assess student progress on both content-based and computational principles. Student artifacts are being collected with each lesson to be coded for content-specific and computational-specific items in a qualitative assessment. There is an overall gap in marine science education research, especially for curricula that focus on computational thinking and its associated quantitative assessment. The curriculum itself, the assessments, and our results may be modified and applied to other environmental science courses due to the nature of the inquiry-based laboratory components that use quantitative skills to understand nature.
Keywords: marine science, computational thinking, curriculum assessment, quantitative skills
Procedia PDF Downloads 59
3715 Knowledge Required for Avoiding Lexical Errors at Machine Translation
Authors: Yukiko Sasaki Alam
Abstract:
This research aims at finding the causes that lead to wrong lexical selections in machine translation (MT), rather than categorizing lexical errors, which has been the main practice in error analysis. By manually examining and analyzing lexical errors output by an MT system, it suggests what knowledge would help the system reduce lexical errors.
Keywords: machine translation, error analysis, lexical errors, evaluation
Procedia PDF Downloads 338
3714 Pattern of Refractive Error, Knowledge, Attitude and Practice about Eye Health among the Primary School Children in Bangladesh
Authors: Husain Rajib, K. S. Kishor, D. G. Jewel
Abstract:
Background: Uncorrected refractive error is a common cause of preventable visual impairment in the pediatric age group and can lead to blindness, but early detection of visual impairment can reduce the problem, with positive effects on education and greater involvement in social activities. Glasses are the cheapest and commonest form of correction of refractive errors. To achieve this, the patient must exhibit good compliance with spectacle wear. Patients' attitudes towards and perceptions of glasses and eye health could affect compliance. Material and method: A prospective community-based cross-sectional study was designed to evaluate the knowledge, attitudes and practices regarding refractive errors and eye health amongst primary school children. Results: Among 140 respondents, 72 were male and 68 were female. We found that 50 children were myopic (26 male, 24 female), 27 were hyperopic (14 male, 13 female), and 63 were astigmatic (32 male, 31 female). The level of knowledge and attitude was satisfactory. The students, teachers and parents were cooperative, which facilitated cycloplegic refraction. Practice was not satisfactory due to social stigma and an information gap. Conclusion: Knowledge of refractive error and acceptance of glasses are essential for the correction of uncorrected refractive error. Public awareness programs such as vision screening programs, eye camps, and teacher training programs are beneficial for prescribing spectacles and encouraging their wear.
Keywords: refractive error, stigma, knowledge, attitude, practice
Procedia PDF Downloads 266
3713 Lexical Bundles in the Alexiad of Anna Comnena: Computational and Discourse Analysis Approach
Authors: Georgios Alexandropoulos
Abstract:
The purpose of this study is to examine the historical text of the Alexiad by Anna Comnena using computational tools for the extraction of lexical bundles containing the name of her father, Alexius Comnenus. For this reason, we apply corpus linguistics techniques for the automatic extraction of lexical bundles and, through them, draw conclusions about how these lexical bundles serve the support she provides to her father.
Keywords: lexical bundles, computational literature, critical discourse analysis, Alexiad
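A minimal sketch of the bundle-extraction step described above: contiguous n-grams are generated from a tokenized text and those containing a target name are counted. The Greek text of the Alexiad and the specific bundle length and frequency thresholds used in the study are replaced here by a tiny English stand-in and an assumed bundle length.

```python
from collections import Counter
from nltk import ngrams, word_tokenize   # assumes the NLTK 'punkt' tokenizer data is installed

# Tiny stand-in corpus; the study works on the (Greek) text of the Alexiad
text = ("the emperor Alexius marched against the enemy and the emperor Alexius "
        "ordered the army to cross the river while the emperor Alexius prayed")
target = "alexius"
bundle_length = 4   # assumed bundle length; 3- and 4-grams are typical in bundle studies

tokens = word_tokenize(text.lower())
bundles = Counter(g for g in ngrams(tokens, bundle_length) if target in g)

for bundle, freq in bundles.most_common(5):
    print(" ".join(bundle), "->", freq)
```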
Procedia PDF Downloads 625
3712 Ant Lion Optimization in a Fuzzy System for Benchmark Control Problem
Authors: Leticia Cervantes, Edith Garcia, Oscar Castillo
Abstract:
Today there are several control problems in which the main objective is to obtain the best control so as to decrease the error in the application. Many techniques can be used to address these problems, such as neural networks, PID control, fuzzy logic, optimization techniques and many more. In this work, a fuzzy system and an optimization technique are used to control the case of study: Ant Lion Optimization (ALO) is used to optimize a fuzzy system that controls the velocity of a simple treadmill. The main objective is to achieve control of the velocity in this problem using ALO. First, a simple fuzzy system with two inputs (error and error change) and one output (desired speed) was used to control the velocity of the treadmill; then, to decrease the error, ALO was applied to optimize the fuzzy system. With the optimization in place, the simulation was performed, and the results show that with the optimized fuzzy system the velocity control was better than with the conventional fuzzy system. This paper describes some basic concepts that help to understand the idea of the work and the methodology of the investigation (control problem, fuzzy system design, optimization); the results are then presented for the optimized fuzzy system. A comparison between the simple fuzzy system and the optimized fuzzy system is presented, showing that the optimization improved the control with good results. The major finding of the study is that ALO is a good alternative for improving control because it helped to decrease the error in the control application, regardless of the control technique being optimized. As a final statement, it is important to mention that the selected methodology was sound because the control of the treadmill was improved using the optimization technique.
Keywords: ant lion optimization, control problem, fuzzy control, fuzzy system
Procedia PDF Downloads 403
3711 Movie Genre Preference Prediction Using Machine Learning for Customer-Based Information
Authors: Haifeng Wang, Haili Zhang
Abstract:
Most movie recommendation systems have been developed to help customers find items of interest. This work introduces a predictive model usable by small and medium-sized enterprises (SMEs) that need a data-based and analytical approach to stock the right movies for local audiences and retain more customers. We used classification models to extract features from thousands of customers' demographic, behavioral and social information in order to predict their movie genre preference. In the implementation, a Gaussian kernel support vector machine (SVM) classification model and a logistic regression model were established to extract features from the sample data, and their in-sample test errors were compared. A comparison of out-of-sample error was also made under different Vapnik-Chervonenkis (VC) dimensions of the learning algorithm to detect and prevent overfitting. The Gaussian kernel SVM prediction model correctly predicts movie genre preferences in 85% of positive cases. The accuracy of the algorithm increased to 93% with a smaller VC dimension and less overfitting. These findings advance our understanding of how to use a machine learning approach to predict customers' preferences with a small data set and to design prediction tools for these enterprises.
Keywords: computational social science, movie preference, machine learning, SVM
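A minimal sketch of the model comparison described above, using scikit-learn: a Gaussian (RBF) kernel SVM and a logistic regression classifier are fit to customer features and compared on held-out data. The features and labels below are synthetic stand-ins for the demographic, behavioral, and social attributes used in the study.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic customer records: features stand in for demographic/behavioral/social data,
# the label for a preferred movie genre (binary here for simplicity)
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale", C=1.0))
logreg = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

for name, model in [("Gaussian-kernel SVM", svm), ("Logistic regression", logreg)]:
    model.fit(X_tr, y_tr)
    print(f"{name}: in-sample accuracy = {model.score(X_tr, y_tr):.3f}, "
          f"out-of-sample accuracy = {model.score(X_te, y_te):.3f}")
```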
Procedia PDF Downloads 260
3710 Random Walks and Option Pricing for European and American Options
Authors: Guillaume Leduc
Abstract:
In this paper, we describe a broad setting in which the error of the random walk approximation can be quantified and controlled, and in which convergence occurs at a speed of n⁻¹ for European and American options. We describe how knowledge of the error allows for arbitrarily fast acceleration of the convergence.
Keywords: random walk approximation, European and American options, rate of convergence, option pricing
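A minimal sketch of the random-walk (binomial) approximation discussed above, using the Cox-Ross-Rubinstein tree for a European and an American put. The parameter values are arbitrary, and the paper's error-quantification and acceleration machinery is not shown.

```python
import numpy as np

def crr_put(S0, K, r, sigma, T, n, american=False):
    """Price a put with an n-step Cox-Ross-Rubinstein binomial (random walk) tree."""
    dt = T / n
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    p = (np.exp(r * dt) - d) / (u - d)        # risk-neutral up probability
    disc = np.exp(-r * dt)

    S = S0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)   # terminal prices
    V = np.maximum(K - S, 0.0)                                      # payoff at maturity
    for step in range(n, 0, -1):
        S = S[:step] * d                       # asset prices one step earlier
        V = disc * (p * V[:step] + (1 - p) * V[1:step + 1])
        if american:
            V = np.maximum(V, K - S)           # early-exercise check
    return V[0]

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
for n in (50, 100, 200, 400):                  # error typically shrinks roughly like 1/n
    print(n, crr_put(S0, K, r, sigma, T, n), crr_put(S0, K, r, sigma, T, n, american=True))
```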
Procedia PDF Downloads 464
3709 The Verification Study of Computational Fluid Dynamics Model of the Aircraft Piston Engine
Authors: Lukasz Grabowski, Konrad Pietrykowski, Michal Bialy
Abstract:
This paper presents the results of research carried out to verify the combustion model of the Asz62-IR aircraft piston engine. This engine was modernized, and a new type of ignition system was developed. Due to the high cost of experiments on a nine-cylinder 1,000 hp aircraft engine, a simulation technique should be applied. Therefore, computational fluid dynamics (CFD) to simulate the combustion process is a reasonable solution. Accordingly, tests for varied ignition advance angles were carried out and the optimal value to be tested on a real engine was specified. The CFD model was created with the AVL Fire software. The engine in this research had two spark plugs for each cylinder, and the ignition advance angles had to be set up separately for each spark plug. The results of the simulation were verified by comparing the pressure in the cylinder. The courses of the indicated pressure of the engine mounted on a test stand were compared. The real course of pressure was measured with an optical sensor mounted in a specially drilled hole between the valves: the OPTRAND pressure sensor, which was designed especially for engine combustion research. The indicated pressure was measured in cylinder no. 3. The engine was running at take-off power and was loaded by a propeller at a special test bench. The verification of the CFD simulation results was based on the results of the test bench studies. The course of the simulated pressure obtained is within the measurement error of the optical sensor. This error is 1% and reflects the hysteresis and nonlinearity of the sensor. The real indicated pressure measured in the cylinder and the pressure taken from the simulation were compared, and it can be claimed that the verification of the CFD simulations based on the pressure was a success. The next step was to investigate the impact of changing the ignition advance timing of spark plugs 1 and 2 on the combustion process. Offsetting the ignition timing between spark plugs 1 and 2 results in longer and uneven burning of the mixture. The optimal point in terms of indicated power occurs when ignition is simultaneous for both spark plugs, but sufficiently separated ignition timings ensure that ignition will occur at all engine speeds and loads. This should be confirmed by a bench experiment on the engine. However, this simulation research enabled us to determine the optimal ignition advance angle to be implemented in the ignition control system. This knowledge allows us to set up the ignition point with two spark plugs to achieve as much power as possible.
Keywords: CFD model, combustion, engine, simulation
Procedia PDF Downloads 362
3708 Lookup Table Reduction and Its Error Analysis of Hall Sensor-Based Rotation Angle Measurement
Authors: Young-San Shin, Seongsoo Lee
Abstract:
The Hall sensor is widely used to measure rotation angle. When the Hall voltage is measured for linear displacement, it is converted to angular displacement using the arctangent function, which requires a large lookup table. In this paper, a lookup table reduction technique is presented for angle measurement. When the input of the lookup table is small, within a certain threshold, the change of the outputs with respect to the change of the inputs is relatively small. Thus, several inputs can share the same output, which significantly reduces the lookup table size. An error analysis was also performed, and the threshold was determined so as to keep the error below 1°. When the Hall voltage has 11-bit resolution, the lookup table size is reduced from 1,024 samples to 279 samples.
Keywords: hall sensor, angle measurement, lookup table, arctangent
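A small sketch of the table-compression idea described above: a full arctangent lookup table is built, and consecutive entries whose outputs stay within the 1° error budget are merged so that they share a single stored output. The greedy grouping rule and input coding below are generic assumptions, so the resulting table size differs from the 1,024 → 279 figure reported in the abstract.

```python
import numpy as np

# Full lookup table: 1,024-entry arctangent table (ratio input in [0, 1) -> angle in degrees)
n_in = 1024
x = np.arange(n_in) / n_in
full_lut = np.degrees(np.arctan(x))

# Greedy merge: consecutive entries share one stored output as long as the
# resulting error stays within the budget (here 1 degree, as in the abstract)
max_err_deg = 1.0
groups, start = [], 0
for i in range(1, n_in + 1):
    if i == n_in or full_lut[i] - full_lut[start] > 2 * max_err_deg:
        groups.append((start, i, 0.5 * (full_lut[start] + full_lut[i - 1])))  # shared output
        start = i

worst = max(np.max(np.abs(full_lut[s:e] - out)) for s, e, out in groups)
print(f"full LUT: {n_in} entries, reduced LUT: {len(groups)} entries, "
      f"worst-case error: {worst:.3f} deg")
```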
Procedia PDF Downloads 337
3707 The Effectiveness of Orthogonal Frequency Division Multiplexing as Modulation Technique
Authors: Mohamed O. Babana
Abstract:
In a wireless channel, multipath is the propagation phenomenon in which the transmitted signal arrives at the receiver via many paths; the signals on these paths arrive with different time delays, and the result is random signal fading due to intersymbol interference (ISI). This paper deals with orthogonal frequency division multiplexing (OFDM) technology and how it is used to overcome intersymbol interference due to multipath. It also investigates the effect of the additive white Gaussian noise (AWGN) channel on OFDM using multi-level phase shift keying (PSK) modulation; computer simulation is applied to calculate the bit error rate (BER) under the AWGN channel. A comparative study of the bit error rate performance of OFDM is carried out to identify the best multi-level PSK modulation.
Keywords: intersymbol interference (ISI), bit error rate (BER), modulation, multiplexing, simulation
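A minimal baseband sketch of the kind of simulation described above: QPSK symbols are carried on OFDM subcarriers with a cyclic prefix, passed through an AWGN channel, and the bit error rate is measured. The OFDM parameters below are illustrative assumptions; higher-order PSK and multipath channels would be added for the full comparison.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sc, cp_len, n_sym = 64, 16, 2000        # subcarriers, cyclic prefix length, OFDM symbols
ebn0_db = 6.0                             # Eb/N0 in dB (QPSK: 2 bits per subcarrier symbol)

# QPSK mapping (Gray): bit pairs -> unit-energy complex symbols
bits = rng.integers(0, 2, size=(n_sym, n_sc, 2))
sym = ((1 - 2 * bits[..., 0]) + 1j * (1 - 2 * bits[..., 1])) / np.sqrt(2)

# OFDM modulation: IFFT per symbol, then prepend the cyclic prefix
tx = np.fft.ifft(sym, axis=1) * np.sqrt(n_sc)          # unit average power per sample
tx_cp = np.concatenate([tx[:, -cp_len:], tx], axis=1)

# AWGN channel: noise variance from Eb/N0 (Es = 1 and Es/N0 = 2*Eb/N0 for QPSK)
ebn0 = 10 ** (ebn0_db / 10)
n0 = 1.0 / (2 * ebn0)
noise = np.sqrt(n0 / 2) * (rng.standard_normal(tx_cp.shape) + 1j * rng.standard_normal(tx_cp.shape))
rx_cp = tx_cp + noise

# OFDM demodulation: strip the cyclic prefix, FFT, hard-decision QPSK demapping
rx = np.fft.fft(rx_cp[:, cp_len:], axis=1) / np.sqrt(n_sc)
bits_hat = np.stack([(rx.real < 0).astype(int), (rx.imag < 0).astype(int)], axis=-1)

ber = np.mean(bits_hat != bits)
print(f"Eb/N0 = {ebn0_db} dB, simulated BER = {ber:.4f}")
```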
Procedia PDF Downloads 425
3706 Automatic Speech Recognition Systems Performance Evaluation Using Word Error Rate Method
Authors: João Rato, Nuno Costa
Abstract:
Human verbal communication is a two-way process that requires mutual understanding between the parties involved. This kind of communication, also called dialogue, can be performed not only between human agents but also between human agents and machines. The interaction between man and machine by means of natural language plays an important role in improving the communication between the two. Aiming at assessing the performance of several speech recognition systems, this document presents the results of tests carried out according to the Word Error Rate evaluation method. It also gives a set of information related to man-machine communication systems. From this work, conclusions were drawn regarding the speech recognition systems, among them their poor performance in interpreting speech in noisy environments.
Keywords: automatic speech recognition, man-machine conversation, speech recognition, spoken dialogue systems, word error rate
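A minimal implementation of the Word Error Rate metric used above: the Levenshtein distance between the reference and hypothesis word sequences, normalized by the reference length, counting substitutions, deletions, and insertions. The example sentences are illustrative, not test data from the document.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# Example: one substitution and one deletion in a five-word reference -> WER = 0.4
print(word_error_rate("turn on the kitchen light", "turn of the light"))
```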
Procedia PDF Downloads 322
3705 Statistical Tools for SFRA Diagnosis in Power Transformers
Authors: Rahul Srivastava, Priti Pundir, Y. R. Sood, Rajnish Shrivastava
Abstract:
For the interpretation of transformer sweep frequency response analysis (SFRA) signatures, different statistical techniques serve as effective tools for either phase-to-phase comparison or sister-unit comparison. In this paper, along with a discussion of SFRA, several statistical techniques such as the cross correlation coefficient (CCF), root square error (RSQ), comparative standard deviation (CSD), absolute difference, mean square error (MSE) and min-max ratio (MM) are presented through several case studies. These methods require the sample data size and the spot frequencies of the SFRA signatures being compared. The techniques used are based on power signal processing tools that can simplify the results, and limits can be created for the severity of the fault occurring in the transformer due to short-circuit forces or ageing. The advantages of using statistical techniques for analyzing SFRA results are indicated through several case studies, and the results obtained determine the state of the transformer.
Keywords: absolute difference (DABS), cross correlation coefficient (CCF), mean square error (MSE), min-max ratio (MM-ratio), root square error (RSQ), standard deviation (CSD), sweep frequency response analysis (SFRA)
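A small numpy sketch of several of the comparison indicators listed above (CCF, MSE, absolute difference, and one common definition of the min-max ratio) applied to two SFRA traces sampled at the same spot frequencies. The traces are synthetic, and the acceptance limits used in practice would come from standards or fleet experience rather than from this code.

```python
import numpy as np

def sfra_indicators(trace_a, trace_b):
    """Common statistical indicators for comparing two SFRA signatures."""
    a, b = np.asarray(trace_a, float), np.asarray(trace_b, float)
    ccf = np.corrcoef(a, b)[0, 1]                 # cross correlation coefficient
    mse = np.mean((a - b) ** 2)                   # mean square error
    dabs = np.sum(np.abs(a - b))                  # absolute difference
    mm = np.sum(np.minimum(a, b)) / np.sum(np.maximum(a, b))   # min-max ratio
    return {"CCF": ccf, "MSE": mse, "DABS": dabs, "MM": mm}

# Synthetic frequency response magnitudes (linear scale) for two phases of a transformer
freqs = np.logspace(1, 6, 500)                    # 10 Hz .. 1 MHz spot frequencies
phase_a = 1.0 / np.sqrt(1.0 + (freqs / 1e4) ** 2)
phase_b = phase_a * (1 + np.random.default_rng(0).normal(0, 0.02, freqs.size))

print(sfra_indicators(phase_a, phase_b))
```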
Procedia PDF Downloads 697
3704 A Comparative Study of Various Control Methods for Rendezvous of a Satellite Couple
Authors: Hasan Basaran, Emre Unal
Abstract:
Formation flying of satellites is a mission that involves relative position keeping of the different satellites in the constellation. In this study, different control algorithms are compared with one another in terms of ΔV (velocity increment) and tracking error. Various control methods, covering continuous and impulsive approaches, are implemented and tested for satellites flying in low Earth orbit. Feedback linearization, sliding mode control, and model predictive control are designed and compared with an impulsive feedback law based on mean orbital elements. The feedback linearization and sliding mode control approaches have identical mathematical models that include second-order Earth oblateness effects. The model predictive control, on the other hand, does not include any perturbations and assumes a circular chief orbit. The comparison is made with four different initial errors and assessed in terms of velocity increment, root mean square error, maximum steady-state error, and settling time. It was observed that the impulsive law consumed the least ΔV but produced the highest maximum steady-state error. The continuous control laws, however, consumed higher velocity increments and produced smaller tracking errors. Finally, an inversely proportional relationship between tracking error and velocity increment was established.
Keywords: chief-deputy satellites, feedback linearization, follower-leader satellites, formation flight, fuel consumption, model predictive control, rendezvous, sliding mode
Procedia PDF Downloads 105