Search results for: alignment error
2228 Learning Outcomes Alignment across Engineering Core Courses
Authors: A. Bouabid, B. Bielenberg, S. Ainane, N. Pasha
Abstract:
In this paper, a team of faculty members at the Petroleum Institute in Abu Dhabi, UAE, representing six different courses across General Engineering (ENGR), Communication (COMM), and Design (STPS), worked together to establish a clear developmental progression of learning outcomes and performance indicators for targeted knowledge, areas of competency, and skills for the first three semesters of the Bachelor of Science in Engineering curriculum. The sequences of courses studied in this project were ENGR/COMM, COMM/STPS, and ENGR/STPS. For each course's nine areas of knowledge, competency, and skills, the research team reviewed the existing learning outcomes and related performance indicators, focusing on identifying linkages across disciplines as well as within the courses of a discipline. The team reviewed existing performance indicators for developmental progression from semester to semester for related courses in the same discipline (vertical alignment) and for courses in different disciplines within the same semester (horizontal alignment). This work has led to recommendations for modifying the initial indicators where incoherence was identified, and for new indicators based on best practices (identified through literature searches) where gaps were identified. It also led to recommendations for modifying the level of emphasis within each course to ensure developmental progression. The exercise produced a revised Sequence Performance Indicator Mapping for the knowledge, skills, and competencies across the six core courses.
Keywords: curriculum alignment, horizontal and vertical progression, performance indicators, skill level
Procedia PDF Downloads 217
2227 Charging-Vacuum Helium Mass Spectrometer Leak Detection Technology in the Application of Space Products Leak Testing and Error Control
Authors: Jijun Shi, Lichen Sun, Jianchao Zhao, Lizhi Sun, Enjun Liu, Chongwu Guo
Abstract:
Because of the consistency of pressure direction, shorter cycle times, and high sensitivity, charging-vacuum helium mass spectrometer leak testing is the most popular leak testing technology for the seal testing of spacecraft parts, especially small and medium-sized ones. Usually, an auxiliary pump is used, and the minimum detectable leak rate can reach 5E-9 Pa·m³/s, or better on certain occasions. Relative error is the more important quantity when evaluating the results. The choice of reference leak, the helium background level, and the record format all affect the measured leak rate. Within the linear range of the leak testing system, using a reference leak with a larger leak rate reduces the relative error by about 10%, and the relative error drops markedly when the helium background is kept low, a decimal record format is used, and more stable data are recorded.
Keywords: leak testing, spacecraft parts, relative error, error control
Procedia PDF Downloads 451
2226 Robust ANOVA: An Illustrative Study in Horticultural Crop Research
Authors: Dinesh Inamadar, R. Venugopalan, K. Padmini
Abstract:
An attempt has been made in the present communication to elucidate the efficacy of robust ANOVA methods for analyzing horticultural field experimental data in the presence of outliers. The results support the use of robust ANOVA methods, as there was a substantial reduction in the error mean square, and hence in the probability of committing a Type I error, compared to the regular approach.
Keywords: outliers, robust ANOVA, horticulture, Cook's distance, Type I error
Procedia PDF Downloads 387
2225 A Survey of 2nd Year Students' Frequent Writing Error and the Effects of Participatory Error Correction Process
Authors: Chaiwat Tantarangsee
Abstract:
The purposes of this study are 1) to study the effects of a participatory error correction process and 2) to find out the students' satisfaction with such a process. This is a quasi-experimental study with a single group, in which data were collected 5 times, before and after 4 experimental rounds of the participatory error correction process, which included providing coded indirect corrective feedback on the students' texts together with error treatment activities. The sample comprises 28 second-year English major students from the Faculty of Humanities and Social Sciences, Suan Sunandha Rajabhat University. The tool for the experimental study is the lesson plan of the course Reading and Writing English for Academic Purposes II, and the tools for data collection are 5 writing tests of short texts and a questionnaire. Based on formative evaluation of the students' writing ability before and after each of the 4 experiments, the findings show higher student scores, with a statistically significant difference at the 0.05 level. In terms of effect size, the mean of the students' scores before and after the 4 experiments gives d equal to 1.0046, 1.1374, 1.297, and 1.0065, respectively. It can be concluded that the participatory error correction process enables all of the students to learn equally well and improves their ability to write short texts. Finally, the students' overall satisfaction with the participatory error correction process is high (Mean = 4.32, S.D. = 0.92).
Keywords: coded indirect corrective feedback, participatory error correction process, error treatment, humanities and social sciences
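For reference, the effect sizes reported above are consistent with Cohen's d for a pre/post design; one common form (an assumption here, since the abstract does not state which variant was used) is

d = \frac{\bar{x}_{\mathrm{post}} - \bar{x}_{\mathrm{pre}}}{s_{\mathrm{pooled}}}, \qquad s_{\mathrm{pooled}} = \sqrt{\frac{s_{\mathrm{pre}}^2 + s_{\mathrm{post}}^2}{2}}.

By the usual convention, d above 0.8 counts as a large effect, which matches the reported range of 1.0046 to 1.297.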
Procedia PDF Downloads 519
2224 Aliasing Free and Additive Error in Spectra for Alpha Stable Signals
Authors: R. Sabre
Abstract:
This work focuses on the symmetric alpha stable process with continuous time, frequently used to model signals with indefinitely growing variance and often observed with an unknown additive error. The objective of this paper is to estimate this error from discrete observations of the signal. To that end, we propose a method based on smoothing the observations with the Jackson polynomial kernel, taking into account the width of the interval where the spectral density is non-zero. This technique avoids the aliasing phenomenon encountered when the estimation is made from discrete observations of a continuous-time process. We study the convergence rate of the estimator and show that the rate improves when the spectral density is zero at the origin. Thus, we set up an estimator of the additive error that can be subtracted to approach the original signal without error.
Keywords: spectral density, stable processes, aliasing, nonparametric
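For orientation, in the classical finite-variance setting the kernel-smoothed spectral estimator that this line of work builds on takes the form

I_N(\lambda) = \frac{1}{2\pi N} \left| \sum_{j=0}^{N-1} X(t_j)\, e^{-i\lambda t_j} \right|^2, \qquad \hat{f}_N(\lambda) = \int K_N(\lambda - \mu)\, I_N(\mu)\, d\mu,

where K_N is the smoothing kernel (here the Jackson polynomial kernel). The paper's stable-process version adapts the normalization to the stability index alpha; those details are not reproduced here.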
Procedia PDF Downloads 126
2223 A Novel Way to Create Qudit Quantum Error Correction Codes
Authors: Arun Moorthy
Abstract:
Quantum computing promises algorithmic speedups for a number of tasks; however, as in classical computing, effective error-correcting codes are needed. Current quantum computers require costly equipment to control each particle, so having fewer particles to control is ideal. Although traditional quantum computers are built from qubits (2-level systems), qudits (systems with more than 2 levels) are appealing since they provide an equivalent computational space using fewer particles, meaning fewer particles need to be controlled. Qudit quantum error-correction codes exist for systems of various levels; however, these codes sometimes carry overly specific constraints. When building a qudit system, it is important for researchers to have access to many codes to satisfy their requirements. This project addresses two methods to increase the number of quantum error-correcting codes available to researchers. The first method generates new codes for a given set of parameters. The second generates new error-correction codes by using existing codes as a starting point to produce codes for another level (e.g., a code for a 5-level system derived from a 2-level-system code). The project therefore builds a website that researchers can use to generate new error-correction codes or derive codes from existing ones.
Keywords: qudit, error correction, quantum, qubit
Procedia PDF Downloads 155
2222 Error Analysis of Students’ Freewriting: A Study of Adult English Learners’ Errors
Authors: Louella Nicole Gamao
Abstract:
Writing in English is a complex skill and process for foreign language learners, and the errors they commit are an inevitable part of their writing. This study explores and analyzes the freewriting of English-as-a-Foreign-Language (EFL) learners at a university in Taiwan by identifying the categories of mistakes that often appear in their freewriting activity and analyzing the learners' awareness of each error. Hopefully, this study will yield further information about students' errors in English writing, contributing to a better understanding of the benefits of freewriting as a powerful tool for future use in English writing courses in EFL classes. The study adopted the framework of error analysis proposed by Dulay, Burt, and Krashen (1982), which consists of compiling the data, identifying errors, classifying error types, calculating the frequency of each error, and interpreting errors. Survey questionnaires regarding students' awareness of errors were also analyzed and discussed. Using quantitative and qualitative approaches, this study provides a detailed description of the errors found in the students' freewriting output, explores the similarities and differences between the students' errors in academic writing and in freewriting, and analyzes the students' perception of their errors.
Keywords: error, EFL, freewriting, Taiwan, English
Procedia PDF Downloads 103
2221 Delaunay Triangulations Efficiency for Conduction-Convection Problems
Authors: Bashar Albaalbaki, Roger E. Khayat
Abstract:
This work is a comparative study of the effect of Delaunay triangulation algorithms on discretization error for conduction-convection conservation problems. A structured triangulation and many unstructured Delaunay triangulations, using three popular algorithms for node placement strategies, are compared. The numerical method employed is the vertex-centered finite volume method. It is found that when the computational domain can be meshed using a structured triangulation, the discretization error is lower for structured triangulations than for unstructured ones only at low Peclet number values, i.e., when conduction is dominant. However, as the Peclet number is increased and convection becomes more significant, the unstructured triangulations reduce the discretization error. Also, no statistical correlation between triangulation angle extremums and the discretization error is found using 200 samples of randomly generated Delaunay and non-Delaunay triangulations. Thus, the angle extremums cannot serve as an indicator of the discretization error on their own and need to be combined with other triangulation quality measures, which is the subject of further studies.
Keywords: conduction-convection problems, Delaunay triangulation, discretization error, finite volume method
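As a concrete illustration of the angle extremums discussed above, the sketch below (assumptions: SciPy's Delaunay implementation and random points standing in for the study's actual meshes) builds an unstructured Delaunay triangulation and reports its minimum and maximum triangle angles:

```python
# A minimal sketch: build a Delaunay triangulation and report its
# minimum and maximum triangle angles (the "angle extremums" above).
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
pts = rng.random((200, 2))          # stand-in for a real mesh's nodes
tri = Delaunay(pts)

def triangle_angles(p):
    """Interior angles (radians) of a triangle given its 3 vertices."""
    angles = []
    for i in range(3):
        a, b, c = p[i], p[(i + 1) % 3], p[(i + 2) % 3]
        u, v = b - a, c - a
        cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        angles.append(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return angles

all_angles = [ang for s in tri.simplices for ang in triangle_angles(pts[s])]
print(f"min angle: {np.degrees(min(all_angles)):.2f} deg, "
      f"max angle: {np.degrees(max(all_angles)):.2f} deg")
```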
Procedia PDF Downloads 98
2220 Performance of Total Vector Error of an Estimated Phasor within Local Area Networks
Authors: Ahmed Abdolkhalig, Rastko Zivanovic
Abstract:
This paper evaluates the Total Vector Error of an estimated phasor, as defined in the IEEE C37.118 standard, under different medium-access schemes in Local Area Networks (LANs). Three different LAN models (CSMA/CD, CSMA/AMP, and Switched Ethernet) are evaluated. The Total Vector Error of the estimated phasor is evaluated for the effect of the number of nodes under the standardized network bandwidth values defined in the IEC 61850-9-2 communication standard (i.e., 0.1, 1, and 10 Gbps).
Keywords: phasor, local area network, total vector error, IEEE C37.118, IEC 61850
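For reference, IEEE C37.118 defines the Total Vector Error at sample n from the real and imaginary parts of the estimated phasor (\hat{X}_r, \hat{X}_i) and the reference phasor (X_r, X_i):

\mathrm{TVE}(n) = \sqrt{ \frac{ \left( \hat{X}_r(n) - X_r(n) \right)^2 + \left( \hat{X}_i(n) - X_i(n) \right)^2 }{ X_r(n)^2 + X_i(n)^2 } }

The standard's steady-state compliance limit for this quantity is 1%.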
Procedia PDF Downloads 307
2219 A Comparative Analysis of ARIMA and Threshold Autoregressive Models on Exchange Rate
Authors: Diteboho Xaba, Kolentino Mpeta, Tlotliso Qejoe
Abstract:
This paper assesses the in-sample forecasting of South African exchange rates, comparing a linear ARIMA model and a SETAR model. The study uses monthly adjusted data on South African exchange rates with 420 observations. The Akaike information criterion (AIC) and the Schwarz information criterion (SIC) are used for model selection. Mean absolute error (MAE), root mean squared error (RMSE), and mean absolute percentage error (MAPE) are the error metrics used to evaluate the forecasting capability of the models. The Diebold-Mariano (DM) test is employed to check forecast accuracy and distinguish the forecasting performance of the two models (ARIMA and SETAR). The results indicate that both models perform well when modelling and forecasting the exchange rates, but SETAR seems to outperform ARIMA.
Keywords: ARIMA, error metrics, model selection, SETAR
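The three error metrics named above have standard definitions; a minimal sketch (with hypothetical arrays standing in for the actual exchange-rate forecasts):

```python
# Standard definitions of the three forecast-error metrics named above.
import numpy as np

def mae(y, yhat):   # mean absolute error
    return np.mean(np.abs(y - yhat))

def rmse(y, yhat):  # root mean squared error
    return np.sqrt(np.mean((y - yhat) ** 2))

def mape(y, yhat):  # mean absolute percentage error (y must be nonzero)
    return 100.0 * np.mean(np.abs((y - yhat) / y))

y    = np.array([14.2, 14.5, 14.9, 15.1])  # hypothetical actual rates
yhat = np.array([14.0, 14.6, 14.7, 15.3])  # hypothetical forecasts
print(mae(y, yhat), rmse(y, yhat), mape(y, yhat))
```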
Procedia PDF Downloads 239
2218 GPU-Accelerated Triangle Mesh Simplification Using Parallel Vertex Removal
Authors: Thomas Odaker, Dieter Kranzlmueller, Jens Volkert
Abstract:
We present an approach to triangle mesh simplification designed to be executed on the GPU. We use a quadric error metric to calculate an error value for each vertex of the mesh and order all vertices by this value. This step is followed by the parallel removal of a number of vertices with the lowest calculated error values. To allow the parallel removal of multiple vertices, we use a set of per-vertex boundaries that prevent mesh foldovers even when simplification operations are performed on neighbouring vertices. We execute multiple iterations of calculating the vertex errors, ordering the error values, and removing vertices until either a desired number of vertices remains in the mesh or a minimum error value is reached. This parallel approach speeds up the simplification process while maintaining mesh topology and avoiding foldovers at every step of the simplification.
Keywords: computer graphics, half edge collapse, mesh simplification, precomputed simplification, topology preserving
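The per-vertex quadric error metric referenced above is usually built Garland-Heckbert style: each incident triangle's plane p = (a, b, c, d), with unit normal and ax + by + cz + d = 0, contributes a 4x4 quadric pp^T, and a vertex's error is v^T Q v for its homogeneous position v. A minimal CPU sketch of that construction (the paper's GPU kernels and per-vertex boundaries are not reproduced):

```python
# Minimal sketch of the quadric error metric (Garland-Heckbert style):
# accumulate plane quadrics per vertex, then evaluate v^T Q v.
import numpy as np

def plane_quadric(v0, v1, v2):
    n = np.cross(v1 - v0, v2 - v0)
    n = n / np.linalg.norm(n)                 # unit normal (a, b, c)
    d = -np.dot(n, v0)
    p = np.append(n, d)                       # plane as (a, b, c, d)
    return np.outer(p, p)                     # 4x4 quadric K_p = p p^T

def vertex_errors(vertices, triangles):
    Q = np.zeros((len(vertices), 4, 4))
    for tri in triangles:
        K = plane_quadric(*vertices[tri])
        for vi in tri:
            Q[vi] += K                        # sum quadrics of incident faces
    hom = np.hstack([vertices, np.ones((len(vertices), 1))])
    # error(v) = v^T Q v; vertices with the lowest error are removed first
    return np.einsum('ni,nij,nj->n', hom, Q, hom)

# On an undisturbed mesh every vertex lies on its incident planes,
# so the errors are ~0; errors grow as geometry is perturbed/collapsed.
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
tris  = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
print(vertex_errors(verts, tris))
```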
Procedia PDF Downloads 362
2217 The Gap between Curriculum, Pedagogy, and National Standards of Vietnamese English Language Teacher Education
Authors: Thi Phuong Lan Nguyen
Abstract:
Vietnamese English Language Teacher Education (ELTE) has been changing substantially in response to the requirements of a rapidly evolving socio-economic context. The Vietnamese government assigns the Ministry of Education and Training (MOET) the primary task of making policy changes to prepare for ELTE development amid globalization and socialization. Many educational policies have been made to develop ELTE; however, they do not seem to address the new global or social demands. The issue is that significant disparities remain between national policy and institutional implementation. This study investigates the alignment between ELTE institutional curricula, pedagogies, and MOET standards. The study used a mixed-methods design with data from policy documents, a survey, and 33 interviews conducted with lecturers and administrators from eleven Vietnamese ELTE institutions. The data were analyzed to understand the gap between policy and practice. The initial findings are (i) a low alignment between curricula and language proficiency standards and (ii) a moderate alignment between curricula and future-career skills standards. Many pedagogical challenges were found. To address these gaps, it is necessary for curricula to be designed around the standards, and professional development is vital for improving teaching quality. The study offers multiple perspectives on a complex issue. It is meaningful not only to educational governance but also to teaching practitioners, English language researchers, and English language learners. Its significance lies in its relevance to English teaching careers across all parts of Vietnam, yet it remains relevant to ELTE in other countries teaching English as a foreign language.
Keywords: alignment, curriculum, educational policy, English language teaching, pedagogy, standards
Procedia PDF Downloads 162
2216 Operator Optimization Based on Hardware Architecture Alignment Requirements
Authors: Qingqing Gai, Junxing Shen, Yu Luo
Abstract:
Due to hardware architecture characteristics, some operators tend to achieve better performance when the input/output tensor dimensions are aligned to a certain minimum granularity; convolution and deconvolution, commonly used in deep learning, are typical examples. If the requirement is not met, the general strategy is to pad with zeros to satisfy it, potentially leading to under-utilization of the hardware resources. Therefore, for convolutions and deconvolutions whose input and output channels do not meet the minimum granularity alignment, we propose to transfer W-dimensional data to the C-dimension for computation (W2C) so that the C-dimension meets the hardware requirements. This scheme also reduces the number of computations along the W-dimension. Although the scheme substantially increases computation, the operator's speed can improve significantly. It achieves remarkable speedups on multiple hardware accelerators, including Nvidia Tensor Cores, Qualcomm digital signal processors (DSPs), and Huawei neural processing units (NPUs). All that is needed is to modify the network structure and rearrange the operator weights offline, without retraining. At the same time, for some operators, such as ReduceMax, we observe that transferring C-dimensional data to the W-dimension (C2W) and replacing the ReduceMax with a MaxPool can accomplish acceleration under certain circumstances.
Keywords: convolution, deconvolution, W2C, C2W, alignment, hardware accelerator
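A minimal sketch of the W2C data movement described above (assumptions: NCHW layout and a W extent divisible by the transfer factor k; the accompanying weight rearrangement for the convolution itself is hardware-specific and omitted):

```python
# W2C sketch: fold a factor k of the W dimension into C so that the
# channel count C*k meets the hardware's alignment granularity.
import numpy as np

def w2c(x, k):
    n, c, h, w = x.shape
    assert w % k == 0, "W must be divisible by the transfer factor"
    # (N, C, H, W) -> (N, C, H, W/k, k) -> (N, C, k, H, W/k) -> (N, C*k, H, W/k)
    return x.reshape(n, c, h, w // k, k).transpose(0, 1, 4, 2, 3) \
            .reshape(n, c * k, h, w // k)

x = np.arange(2 * 3 * 4 * 8).reshape(2, 3, 4, 8).astype(np.float32)
y = w2c(x, k=4)        # channels: 3 -> 12, width: 8 -> 2
print(x.shape, '->', y.shape)
```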
Procedia PDF Downloads 101
2215 Imp_hist-Si: Improved Hybrid Image Segmentation Technique for Satellite Imagery to Decrease the Segmentation Error Rate
Authors: Neetu Manocha
Abstract:
Image segmentation is a technique in which an image is partitioned into distinct regions whose pixels share similar features and belong to the same objects. Various segmentation strategies have been proposed in recent years by prominent researchers, but thorough analysis shows that the older methods generally do not decrease the segmentation error rate. The authors therefore developed the technique HIST-SI to decrease segmentation error rates; in this technique, cluster-based and threshold-based segmentation are merged. To improve on HIST-SI, the authors then added filtering and linking steps, yielding a technique named Imp_HIST-SI that further decreases segmentation error rates. The goal of this research is to find a new technique that decreases segmentation error rates and produces much better results than HIST-SI. For testing the proposed technique, a dataset from Bhuvan, a national geoportal developed and hosted by ISRO (Indian Space Research Organisation), is used. Experiments are conducted using the Scikit-image and OpenCV tools in Python, and performance is evaluated and compared against various existing image segmentation techniques on several metrics, i.e., Mean Squared Error (MSE) and Peak Signal-to-Noise Ratio (PSNR).
Keywords: satellite image, image segmentation, edge detection, error rate, MSE, PSNR, HIST-SI, linking, filtering, Imp_HIST-SI
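The two evaluation metrics named above have standard forms; a minimal sketch (hypothetical 8-bit images standing in for the Bhuvan data):

```python
# Standard MSE and PSNR between a reference image and a processed result.
import numpy as np

def mse(a, b):
    return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def psnr(a, b, peak=255.0):          # peak = max pixel value (8-bit here)
    m = mse(a, b)
    return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)

ref = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
out = ref.copy(); out[::8, ::8] ^= 0xFF   # hypothetical segmentation errors
print(f"MSE={mse(ref, out):.2f}, PSNR={psnr(ref, out):.2f} dB")
```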
Procedia PDF Downloads 135
2214 Lowering Error Floors by Concatenation of Low-Density Parity-Check and Array Code
Authors: Cinna Soltanpur, Mohammad Ghamari, Behzad Momahed Heravi, Fatemeh Zare
Abstract:
Low-density parity-check (LDPC) codes have been shown to deliver capacity-approaching performance; however, problematic graphical structures (e.g., trapping sets) in the Tanner graphs of some LDPC codes can cause high error floors in bit-error-ratio (BER) performance under the conventional sum-product algorithm (SPA). This paper presents a serial concatenation scheme to avoid the trapping sets and lower the error floors of an LDPC code. The outer code in the proposed concatenation is the LDPC code, and the inner code is a high-rate array code. The approach applies an iterative hybrid process between BCJR decoding for the array code and the SPA for the LDPC code, together with bit-pinning and bit-flipping techniques. The Margulis code of size (2640, 1320) has been used for the simulation, and it is shown that the proposed concatenation and decoding scheme can considerably improve the error-floor performance with minimal rate loss.
Keywords: concatenated coding, low-density parity-check codes, array code, error floors
Procedia PDF Downloads 350
2213 Preliminary Roadway Alignment Design: A Spatial-Data Optimization Approach
Authors: Yassir Abdelrazig, Ren Moses
Abstract:
Roadway planning and design is a complex process involving five key phases before a project is completed: planning, project development, final design, right-of-way, and construction. The planning phase for a new roadway transportation project is critical, as it greatly affects all later phases of the project. A location study is usually performed during the preliminary planning phase of a new roadway project. The objective of the location study is to develop alignment alternatives that are cost-efficient with respect to land acquisition and construction costs. This paper describes a methodology for developing optimal preliminary roadway alignments from spatial data. Four optimization criteria are taken into consideration: roadway length, land cost, land slope, and environmental impacts. The basic concept of the methodology is to convert the proposed project area into a grid, which represents the search space for an optimal alignment. The optimization criteria are represented in each of the grid's cells. A spatial-data optimization technique is used to find the optimal alignment in the search space based on the four criteria. Two case studies for new roadway projects in Duval County in the State of Florida are presented to illustrate the methodology. The optimized output alignments are compared to the alignments proposed by the Florida Department of Transportation (FDOT), based on right-of-way costs. For both case studies, the right-of-way costs of the developed optimal alignments were found to be significantly lower than those of the FDOT alignments.
Keywords: geometric design, optimization, planning, roadway planning, roadway design
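A minimal sketch of the grid-search idea (assumptions: 4-neighbour moves and a single per-cell cost that already blends the four criteria with hypothetical weights; the abstract does not specify the actual spatial-data optimization technique, so a plain shortest-path search stands in here):

```python
# Dijkstra over a cost grid: each cell carries a combined cost
# (length + land cost + slope penalty + environmental penalty).
import heapq
import numpy as np

def cheapest_alignment(cost, start, goal):
    rows, cols = cost.shape
    dist, prev = {start: cost[start]}, {}
    pq = [(cost[start], start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            break
        r, c = cell
        for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr, nc]
                if nd < dist.get((nr, nc), np.inf):
                    dist[(nr, nc)], prev[(nr, nc)] = nd, cell
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, cell = [goal], goal            # walk back from goal to start
    while cell != start:
        cell = prev[cell]
        path.append(cell)
    return path[::-1], dist[goal]

grid = np.random.default_rng(1).uniform(1, 10, (20, 30))  # blended cell costs
path, total = cheapest_alignment(grid, (0, 0), (19, 29))
print(len(path), round(total, 2))
```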
Procedia PDF Downloads 334
2212 Adaptive Motion Compensated Spatial Temporal Filter of Colonoscopy Video
Authors: Nidhal Azawi
Abstract:
The colonoscopy procedure is widely used around the world to detect abnormalities, and early diagnosis can help heal many patients. Because of the unavoidable artifacts in colon images, doctors cannot inspect the colon surface precisely. The purpose of this work is to improve the visual quality of colonoscopy videos, providing better information for physicians by removing some artifacts. This work completes a series consisting of three previously published papers. In this paper, optic flow is used for motion compensation, and consecutive images are then aligned/registered so that their information can be integrated into a new image that reveals more than the original one. Colon images were classified into informative and non-informative images using a deep neural network, and two different strategies were then used to treat the two classes. Informative images were treated using Lucas-Kanade (LK) with an adaptive temporal mean/median filter, whereas non-informative images were treated using Lucas-Kanade with a derivative of Gaussian (LKDOG) and adaptive temporal median images. A comparison showed that this work achieves better results than the state-of-the-art strategies on the same degraded colon image data set, which consists of 1000 images. The proposed algorithm reduced the alignment error by about a factor of 0.3, with a 100% successful image-alignment ratio. In conclusion, the algorithm achieves better results than the state-of-the-art approaches in enhancing informative images, as shown in the results section; it also succeeds in converting non-informative images (those with very few or no details because of blurriness, defocus, or specular highlights dominating a significant portion of the image) into informative ones.
Keywords: optic flow, colonoscopy, artifacts, spatial temporal filter
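A minimal sketch of the Lucas-Kanade motion-compensation step described above, using OpenCV's pyramidal LK tracker (synthetic frames and feature parameters are stand-ins; the informative/non-informative classification and the temporal filtering stages are omitted):

```python
# Sparse Lucas-Kanade optical flow between two consecutive frames,
# then a global warp to align/register the previous frame to the current one.
import cv2
import numpy as np

prev = np.zeros((240, 320), np.uint8)          # synthetic stand-in frames
cv2.rectangle(prev, (60, 60), (120, 120), 255, -1)
cv2.rectangle(prev, (180, 140), (240, 200), 255, -1)
curr = np.roll(prev, (4, 3), axis=(0, 1))      # simulated camera motion

pts = cv2.goodFeaturesToTrack(prev, maxCorners=50,
                              qualityLevel=0.01, minDistance=7)
nxt, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, pts, None,
                                            winSize=(21, 21), maxLevel=3)

good_old = pts[status.flatten() == 1].reshape(-1, 2)
good_new = nxt[status.flatten() == 1].reshape(-1, 2)

# Estimate a global transform from the tracked points and warp the
# previous frame onto the current one (a simple alignment stand-in).
M, _ = cv2.estimateAffinePartial2D(good_old, good_new)
aligned = cv2.warpAffine(prev, M, (curr.shape[1], curr.shape[0]))
print('mean residual:', np.mean(np.abs(aligned.astype(int) - curr.astype(int))))
```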
Procedia PDF Downloads 110
2211 Merging and Comparing Ontologies Generically
Authors: Xiuzhan Guo, Arthur Berrill, Ajinkya Kulkarni, Kostya Belezko, Min Luo
Abstract:
Ontology operations, e.g., aligning and merging, have been studied and implemented extensively in different settings, such as categorical operations, relation algebras, and typed graph grammars, with different concerns. However, the aligning and merging operations in these settings share some generic properties, e.g., idempotence, commutativity, associativity, and representativity, labeled (I), (C), (A), and (R), respectively, which are defined on an ontology merging system (D, ~, M), where D is a non-empty set of the ontologies concerned, ~ is a binary relation on D modeling ontology aligning, and M is a partial binary operation on D modeling ontology merging. Given an ontology repository, a finite set O ⊆ D, its merging closure Ô is the smallest set of ontologies that contains the repository and is closed with respect to merging. If (I), (C), (A), and (R) are satisfied, then both D and Ô are naturally partially ordered by merging, and Ô is finite and can be computed, compared, and sorted efficiently, including sorting, selecting, and querying specific elements, e.g., maximal and minimal ontologies. We also show that the ontology merging system given by ontology V-alignment pairs and pushouts satisfies the properties (I), (C), (A), and (R), so that the merging system is partially ordered and the merging closure of a given repository with respect to pushouts can be computed efficiently.
Keywords: ontology aligning, ontology merging, merging system, poset, merging closure, ontology V-alignment pair, ontology homomorphism, ontology V-alignment pair homomorphism, pushout
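A minimal sketch of computing the merging closure Ô by fixed-point iteration (the alignment relation ~ and the partial merge M are placeholders; any concrete ontology representation satisfying (I), (C), (A), and (R) could be plugged in, which is what keeps the closure finite):

```python
# Fixed-point computation of the merging closure of a repository:
# repeatedly merge alignable pairs until no new ontology appears.
def merging_closure(repo, alignable, merge):
    """repo: finite set of ontologies; alignable(a, b): the ~ relation;
    merge(a, b): the partial operation M (defined when alignable)."""
    closure = set(repo)
    changed = True
    while changed:
        changed = False
        for a in list(closure):
            for b in list(closure):
                if alignable(a, b):
                    m = merge(a, b)
                    if m not in closure:
                        closure.add(m)
                        changed = True
    return closure

# Toy instance: "ontologies" as frozensets, aligning = always, merging = union.
# Union satisfies (I), (C), (A), and (R), so the closure is finite.
repo = {frozenset({'a'}), frozenset({'b'}), frozenset({'c'})}
print(sorted(map(sorted, merging_closure(repo, lambda a, b: True,
                                         lambda a, b: a | b))))
```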
Procedia PDF Downloads 889
2210 A Method for Improving the Embedded Runge Kutta Fehlberg 4(5)
Authors: Sunyoung Bu, Wonkyu Chung, Philsu Kim
Abstract:
In this paper, we introduce a method for improving the embedded Runge-Kutta-Fehlberg 4(5) method. At each integration step, the proposed method comprises two equations, one for the solution and one for the error. The solution and error are obtained by solving an initial value problem whose solution carries the information about the error at each integration step. The constructed algorithm controls both the error and the time step size simultaneously and performs well in computational cost compared to the original method. To assess its effectiveness, the EULR problem is solved numerically.
Keywords: embedded Runge-Kutta-Fehlberg method, initial value problem, EULR problem, integration step
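The embedded 4(5) idea, using the gap between the fourth- and fifth-order solutions as the local error estimate that drives the step size, is what library integrators implement; a minimal sketch with SciPy's RK45 on a hypothetical stand-in problem (the paper's modified method itself is not reproduced here):

```python
# Embedded Runge-Kutta error control in practice: the solver compares
# its 4th- and 5th-order solutions and adapts the step size so the
# local error stays within rtol/atol.
import numpy as np
from scipy.integrate import solve_ivp

def f(t, y):                      # hypothetical test problem y' = -2ty
    return -2.0 * t * y

sol = solve_ivp(f, (0.0, 2.0), [1.0], method='RK45',
                rtol=1e-8, atol=1e-10)
exact = np.exp(-sol.t ** 2)       # closed-form solution for comparison
print('steps taken:', sol.t.size - 1)
print('max abs error:', np.max(np.abs(sol.y[0] - exact)))
```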
Procedia PDF Downloads 459
2209 An Improved Model of Estimation Global Solar Irradiation from in situ Data: Case of Oran Algeria Region
Authors: Houcine Naim, Abdelatif Hassini, Noureddine Benabadji, Alex Van Den Bossche
Abstract:
In this paper, two models for estimating the monthly average daily global radiation on a horizontal surface are applied to the site of Oran (35.38°N, 0.37°W). We present a comparison between them: the first is a regression equation of the Angstrom type, and the second is developed by the present authors with some suggested modifications, using as input parameters astronomical quantities (latitude, longitude, and altitude) and a meteorological quantity (relative humidity). The comparisons are made using the mean bias error (MBE), root mean square error (RMSE), mean percentage error (MPE), and mean absolute bias error (MABE). This comparison shows that the second model is closer to the experimental values than the Angstrom model.
Keywords: meteorology, global radiation, Angstrom model, Oran
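The "regression equation of the Angstrom type" referred to above is, in its classical Angstrom-Prescott form,

\frac{H}{H_0} = a + b\,\frac{S}{S_0},

where H is the monthly average daily global radiation on a horizontal surface, H_0 the corresponding extraterrestrial radiation, S the measured sunshine duration, S_0 the maximum possible sunshine duration, and a, b regression coefficients fitted to the site. (The abstract does not quote the fitted coefficients, so none are given here.)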
Procedia PDF Downloads 227
2208 A Generalized Weighted Loss for Support Vector Classification and Multilayer Perceptron
Authors: Filippo Portera
Abstract:
Standard algorithms usually employ a loss in which each error is simply the absolute difference between the true value and the prediction, in the case of a regression task. Here, we present several error-weighting schemes that generalize this consolidated routine. We study both a binary classification model for Support Vector Classification and a regression network for a Multilayer Perceptron. The results prove that the error is never worse than with the standard procedure, and several times it is better.
Keywords: loss, binary classification, MLP, weights, regression
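A minimal sketch of the error-weighting idea for the regression case (the particular scheme here, weights growing with the target magnitude, is a hypothetical example; the paper's schemes are not reproduced):

```python
# Weighted squared loss: each residual is scaled by a per-sample weight
# instead of contributing the plain (y - yhat)^2.
import numpy as np

def weighted_mse(y, yhat, w):
    w = w / np.sum(w)                       # normalize weights
    return np.sum(w * (y - yhat) ** 2)

y    = np.array([0.5, 1.0, 4.0, 8.0])
yhat = np.array([0.4, 1.2, 3.5, 9.0])
w    = np.abs(y) + 1.0                      # hypothetical scheme: weight by |y|
print(weighted_mse(y, yhat, w), np.mean((y - yhat) ** 2))  # vs. standard MSE
```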
Procedia PDF Downloads 89
2207 Composite Forecasts Accuracy for Automobile Sales in Thailand
Authors: Watchareeporn Chaimongkol
Abstract:
In this paper, we compare statistical measures of accuracy for a composite forecasting model estimating automobile customer demand in Thailand. A modified simple exponential smoothing and autoregressive integrated moving average (ARIMA) forecasting model is built to estimate customer demand for passenger cars, rather than relying on historical sales data alone. Our model takes into account special characteristics of the Thai automobile market, such as sales promotion, advertising and publicity, petrol price, and interest rates for loans. We evaluate our forecasting model by comparing forecasts with actual data using six accuracy measures: mean absolute percentage error (MAPE), geometric mean absolute error (GMAE), symmetric mean absolute percentage error (sMAPE), mean absolute scaled error (MASE), median relative absolute error (MdRAE), and geometric mean relative absolute error (GMRAE).
Keywords: composite forecasting, simple exponential smoothing model, autoregressive integrated moving average model selection, accuracy measurements
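Two of the less common measures above, sMAPE and MASE, in their usual forms (a sketch with hypothetical sales figures; the naive one-step forecast supplies MASE's scaling denominator):

```python
# sMAPE and MASE, two of the six accuracy measures listed above.
import numpy as np

def smape(y, yhat):
    return 100.0 * np.mean(2.0 * np.abs(yhat - y) / (np.abs(y) + np.abs(yhat)))

def mase(y, yhat, y_train):
    scale = np.mean(np.abs(np.diff(y_train)))   # in-sample naive-forecast MAE
    return np.mean(np.abs(y - yhat)) / scale

y_train = np.array([980., 1010., 995., 1040., 1060.])  # hypothetical history
y       = np.array([1080., 1100.])                     # actual sales
yhat    = np.array([1065., 1120.])                     # model forecasts
print(smape(y, yhat), mase(y, yhat, y_train))
```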
Procedia PDF Downloads 355
2206 A Coevolutionary Framework of Business-IT Alignment through the Lens of Enterprise Architecture
Authors: Mengmeng Zhang, Honghui Chen, Kalle Lyytinen
Abstract:
The major challenges to sustainable business-IT alignment (BITA) in a company are rooted in its volatile external competitive environment, increasingly complex internal relationships, and subversive IT roles. Failure to adequately address BITA results in wasted organizational resources, lost competitive advantage, and inadequate returns on investment. Coevolution is well suited to describing the dynamic relationship between business and IT and has received some attention in recent years. Multiple mechanisms for achieving business-IT coevolution (BITC) have been identified (e.g., sharing domain knowledge, modular design). However, lacking a complete management process, BITC remains hard to put into practice. This study emphasizes what the BITC management process looks like and how to execute this coevolution step by step. A practical coevolutionary framework that combines the enterprise architecture (EA) method with misalignment analysis is proposed in this paper. It contains steps of EA design, misalignment detection, misalignment correction, and EA management/misalignment prevention. The misalignment correction step is discussed at particular length. The study also evaluates the proposed framework by comparing the characteristics, principles, and approaches of coevolution in the literature.
Keywords: business-IT alignment, business-IT coevolution, enterprise architecture, misalignment analysis, misalignment correction
Procedia PDF Downloads 148
2205 Discrete Estimation of Spectral Density for Alpha Stable Signals Observed with an Additive Error
Authors: R. Sabre, W. Horrigue, J. C. Simon
Abstract:
This paper addresses two difficulties encountered in practice when observing a continuous-time process. The first is that we cannot observe the process continuously over a time interval; we only take discrete observations. The second is that the process is frequently observed with a constant additive error. It is important to give an estimator of the spectral density of such a process that takes into account the additive observation error and the choice of the discrete observation times. In this work, we propose an estimator based on spectral smoothing of the periodogram by the polynomial Jackson kernel, reducing the additive error. To resolve the aliasing phenomenon, the estimator is constructed from observations taken at well-chosen times so as to restrict the estimator to the domain where the spectral density is non-zero. We show that the proposed estimator is asymptotically unbiased and consistent. We thus obtain an estimate that resolves the two difficulties concerning the choice of observation instants of a continuous-time process and observations affected by a constant error.
Keywords: spectral density, stable processes, aliasing, periodogram
Procedia PDF Downloads 134
2204 Pin Count Aware Volumetric Error Detection in Arbitrary Microfluidic Bio-Chip
Authors: Kunal Das, Priya Sengupta, Abhishek K. Singh
Abstract:
Pin assignment, scheduling, routing, and error detection for arbitrary biochemical protocols on a digital microfluidic biochip are reported in this paper. The research concentrates on pin assignment for routing 2 or 3 droplets in an arbitrary biochemical protocol, and on scheduling and routing in an m × n biochip. Volumetric error arises from droplet splitting in the biochip. Volumetric error detection is also addressed using a biochip AND logic gate, known as a microfluidic AND, or mAND, gate. The pin assignment algorithm for an m × n biochip requires m + n − 1 pins. The basic principle of the algorithm is that the same pin is never placed in the same row, the same column, or in diagonal or adjacent cells; repeated pins are placed far enough apart that interference is reduced. A case study is also reported in this paper.
Keywords: digital microfluidic biochip, cross-contamination, pin assignment, microfluidic AND gate
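A minimal checker for the stated constraint (a sketch only; the paper's actual assignment procedure is not reproduced, and the eight-cell neighbourhood used here is one reading of the diagonal/adjacent condition):

```python
# Verify a candidate pin assignment for an m x n biochip against the
# constraints stated above: no pin repeated in a row, a column, or any
# diagonally/orthogonally adjacent cell, with at most m+n-1 distinct pins.
def valid_assignment(grid):
    m, n = len(grid), len(grid[0])
    for i in range(m):
        for j in range(n):
            p = grid[i][j]
            if any(grid[i][k] == p for k in range(n) if k != j):
                return False                       # repeat in row
            if any(grid[k][j] == p for k in range(m) if k != i):
                return False                       # repeat in column
            for di in (-1, 0, 1):                  # 8-neighbourhood
                for dj in (-1, 0, 1):
                    if (di or dj) and 0 <= i+di < m and 0 <= j+dj < n \
                            and grid[i+di][j+dj] == p:
                        return False
    pins = {p for row in grid for p in row}
    return len(pins) <= m + n - 1

grid = [[0, 1, 2],
        [2, 3, 0]]     # a valid 2x3 assignment using m+n-1 = 4 pins
print(valid_assignment(grid))
```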
Procedia PDF Downloads 270
2203 A Game of Information in Defense/Attack Strategies: Case of Poisson Attacks
Authors: Asma Ben Yaghlane, Mohamed Naceur Azaiez
Abstract:
In this paper, we briefly introduce the concept of Poisson attacks in the context of defense/attack strategies, where attacks are assumed to be continuous. We suggest a game model in which the attacker combines two criteria, a sufficient confidence level of a successful attack and a reasonably small estimation error, in deciding whether to launch an attack. Here, the estimation error arises from assessing system failure upon attack using aggregate data at the system level; the corresponding error is referred to as aggregation error. The defender, on the other hand, attempts to deter the attack by making one or both criteria inapplicable, building his or her strategy by both strengthening the targeted system and increasing the size of the error. We formulate the defender's problem using appropriate optimization models. The attacker opts for Bayesian updating to assess the impact of the improvements made by the defender, then evaluates the feasibility of the attack before deciding whether or not to launch it. We provide illustrations to better explain the process.
Keywords: attacker, defender, game theory, information
Procedia PDF Downloads 463
2202 Correction of Frequent English Writing Errors by Using Coded Indirect Corrective Feedback and Error Treatment
Authors: Chaiwat Tantarangsee
Abstract:
The purposes of this study are: 1) to study the frequent English writing errors of students registered in the course Reading and Writing English for Academic Purposes II, and 2) to find out the results of correcting writing errors using coded indirect corrective feedback and writing error treatments. The sample comprises 28 second-year English major students from the Faculty of Education, Suan Sunandha Rajabhat University. The tool for the experimental study is the lesson plan of the course Reading and Writing English for Academic Purposes II, and the tool for data collection is 4 writing tests of short texts. The findings disclose that the frequent English writing errors found in this course comprise 7 types of grammatical errors, namely fragment sentences, subject-verb agreement, wrong form of verb tense, singular or plural noun endings, run-on sentences, wrong form of verb pattern, and lack of parallel structure. Moreover, the results of correcting writing errors with coded indirect corrective feedback and error treatment reveal an overall reduction in the frequent English writing errors and an increase in the students' achievement in writing short texts, significant at the .05 level.
Keywords: coded indirect corrective feedback, error correction, error treatment, frequent English writing errors
Procedia PDF Downloads 233
2201 Implementing Digital Control System in Robotics
Authors: Safiullah Abdullahi
Abstract:
This paper describes the design of a digital control system that controls the speed and direction of a robot. The robot is expected to follow a thick black line with the highest possible speed and the lowest error around the line. The control system corrects the angle error between the robot's frame axis and the line. The error arises from the difference in speed between the robot's two driving wheels, which are driven by two separate DC motors; the speed difference is due to unmodeled friction present in the wheels with a different magnitude in each. The control scheme is as follows: a number of photo sensors mounted at the front of the robot report their position relative to the black line to the digital controller. The controller then evaluates the position error and generates the required duty cycle for the corresponding wheel motor to drive it faster or slower.
Keywords: digital control, robot, controller, control system
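A minimal sketch of the control law described above (hypothetical names and gains; proportional control only, whereas the paper's digital controller may include further terms): the sensor array yields a position-error estimate, and the controller converts it into differential duty cycles for the two wheel motors.

```python
# Proportional line-following control: sensor position error -> duty cycles.
def line_position_error(sensors):
    """Weighted centroid of the photo-sensor readings: 0 means the line
    is centered; negative/positive means it is off to the left/right."""
    weights = range(-(len(sensors) // 2), len(sensors) // 2 + 1)
    total = sum(sensors)
    return sum(w * s for w, s in zip(weights, sensors)) / total if total else 0.0

def wheel_duty_cycles(error, base=0.6, kp=0.15):
    """Steer toward the line by speeding one wheel and slowing the other."""
    left = max(0.0, min(1.0, base + kp * error))
    right = max(0.0, min(1.0, base - kp * error))
    return left, right

sensors = [0, 0, 1, 1, 0]   # hypothetical 5-sensor reading, line right of center
err = line_position_error(sensors)
print(err, wheel_duty_cycles(err))
```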
Procedia PDF Downloads 549
2200 Single Event Transient Tolerance Analysis in 8051 Microprocessor Using Scan Chain
Authors: Jun Sung Go, Jong Kang Park, Jong Tae Kim
Abstract:
As semiconductor manufacturing technology evolves, the single event transient problem becomes a more significant issue. Single event transients have a critical impact on both combinational and sequential logic circuits, so it is important to evaluate the soft error tolerance of circuits at the design stage. In this paper, we present a soft-error-detection simulation using a scan chain. The simulation model generates a single event transient randomly in the circuit and detects the soft error during execution of the test patterns. We verified this model by inserting a scan chain into an 8051 microprocessor using 65 nm CMOS technology. While the test patterns generated by the ATPG program pass through the scan chain, we insert a single event transient and count the number of soft errors per sub-module. The experiments show that the soft error rate per cell area of the SFR module is 277% larger than that of the other modules.
Keywords: scan chain, single event transient, soft error, 8051 processor
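A minimal Monte Carlo sketch of the injection idea (a toy gate-level stand-in with a hypothetical structure; the actual work injects transients into a scan-chain-instrumented 8051 at 65 nm): flip one randomly chosen internal net per pattern and count how often the captured outputs differ from the fault-free response.

```python
# Toy single-event-transient injection: flip one random internal net
# while a test pattern is applied, then compare captured outputs.
import random

def circuit(inputs, flip_net=None):
    """Tiny stand-in netlist: three internal nets feeding two outputs."""
    n = [inputs[0] & inputs[1], inputs[1] ^ inputs[2], inputs[0] | inputs[2]]
    if flip_net is not None:
        n[flip_net] ^= 1                     # the injected transient
    return (n[0] ^ n[1], n[1] & n[2])        # values captured at the outputs

random.seed(0)
patterns = [tuple(random.randint(0, 1) for _ in range(3)) for _ in range(1000)]
soft_errors = 0
for p in patterns:
    golden = circuit(p)
    if circuit(p, flip_net=random.randrange(3)) != golden:
        soft_errors += 1                     # the SET propagated to an output
print(f"soft errors observed: {soft_errors}/{len(patterns)}")
```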
Procedia PDF Downloads 343
2199 “To Err Is Human…” Revisiting Oral Error Correction in Class
Authors: David Steven Rosenstein
Abstract:
The widely accepted “Input Theory” of language acquisition proposes that language is basically acquired unconsciously through extensive exposure to all kinds of natural oral and written sources, especially those whose level is slightly above the learner's competence. As such, it implies that oral error correction by teachers in the classroom is unnecessary, a waste of time, and perhaps even counterproductive. And yet oral error correction by teachers in the classroom continues to be a very common phenomenon. While input theory advocates claim that such correction does not work, interrupts a student's train of thought, harms fluency, and may cause students embarrassment and fear, many teachers would disagree. They would claim that students know they make mistakes and want to be corrected in order to know they are improving, which encourages their desire to keep studying. Moreover, good teachers can create a positive atmosphere in which students are neither embarrassed nor fearful. Perhaps now is the time to revisit oral error correction in the classroom and consider the results of research carried out long ago by the present speaker. That research indicates that oral error correction may be beneficial in many cases.
Keywords: input theory, language acquisition, teachers' corrections, recurrent errors
Procedia PDF Downloads 27