Search results for: interpolated error shifting
2016 Mastering Digital Transformation with the Strategy Tandem Innovation Inside-Out/Outside-In: An Approach to Drive New Business Models, Services and Products in the Digital Age
Authors: S. N. Susenburger, D. Boecker
Abstract:
In the age of Volatility, Uncertainty, Complexity, and Ambiguity (VUCA), where digital transformation is challenging long-standing traditional hardware and manufacturing companies, innovation needs a different methodology, strategy, mindset, and culture. What used to be a mindset of scaling by quantity is now shifting to orchestrating ecosystems, platform business models, and service bundles. While large corporations are trying to mimic the nimbleness and versatile mindset of startups at the core of their digital strategies, they are facing one of the largest organizational and cultural changes in their history. This paper elaborates on how a manufacturing giant transformed its Corporate Information Technology (IT) to enable digital and Internet of Things (IoT) business while establishing the mindset and the approaches of the Innovation Inside-Out/Outside-In Strategy. It gives insights into the core elements of an innovation culture and the tactics and methodologies leveraged to support the cultural shift and transformation into an IoT company. The paper also outlines how the persona 'Connected Engineer' thrives in the digital innovation environment, and explores how tapping domain-focused ecosystems in vibrant innovative cities can be used as part of the strategy to facilitate partner co-innovation. Findings from several use cases, observations, and surveys led to conclusions about the strategy tandem of Innovation Inside-Out/Outside-In. The findings indicate that the phases and maturity level at which the Innovation Inside-Out/Outside-In Strategy is activated are crucial: cultural aspects of the business and the regional ecosystem need to be considered, as well as the cultural readiness of management and active contributors. The 'not invented here' syndrome is a barrier in large corporations that needs to be addressed and managed to successfully drive partnerships, embrace co-innovation, and shift the mindset away from physical products toward new business models, services, and IoT platforms. The paper elaborates on various methodologies and approaches tested in different countries and cultures, including the U.S., Brazil, Mexico, and Germany.
Keywords: innovation management, innovation culture, innovation methodologies, digital transformation
Procedia PDF Downloads 149
2015 Constructions of Linear and Robust Codes Based on Wavelet Decompositions
Authors: Alla Levina, Sergey Taranov
Abstract:
The classical approach to providing noise immunity and integrity for information processed in computing devices and communication channels is to use linear codes. Linear codes have fast and efficient encoding and decoding algorithms, but they concentrate their detection and correction abilities on certain error configurations. Robust codes, by contrast, can protect against any configuration of errors at a predetermined probability. This is accomplished by using perfect nonlinear and almost perfect nonlinear functions to calculate the code redundancy. The paper presents an error-correcting coding scheme using the biorthogonal wavelet transform. Wavelet transforms are applied in various fields of science; some of their applications are cleaning a signal from noise, data compression, and spectral analysis of signal components. The article suggests methods for constructing linear codes based on wavelet decomposition. For the developed constructions, we build generator and check matrices that contain the scaling function coefficients of the wavelet. Based on the linear wavelet codes, we develop robust codes that provide uniform protection against all errors. We propose two constructions of robust codes: the first is based on the multiplicative inverse in a finite field; in the second, the redundancy part is the cube of the information part. This paper also investigates the characteristics of the proposed robust and linear codes.
Keywords: robust code, linear code, wavelet decomposition, scaling function, error masking probability
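As a hedged illustration of the second robust construction mentioned above (redundancy equal to the cube of the information part), here is a minimal sketch over GF(2^8); the choice of field, the field polynomial 0x11B, and the message value are assumptions, not taken from the paper:

```python
# Hedged sketch of the second robust-code construction above: the redundancy
# symbol is the cube of the information symbol in a finite field. The choice
# of GF(2^8), the field polynomial 0x11B, and the message value are assumptions.

def gf_mul(a: int, b: int, poly: int = 0x11B) -> int:
    """Carry-less multiplication in GF(2^8), reduced modulo `poly`."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def encode(x: int) -> tuple:
    """Codeword = (information part x, redundancy part x^3)."""
    return x, gf_mul(x, gf_mul(x, x))

def is_valid(word) -> bool:
    """An error is masked only if the corrupted pair still satisfies r = x^3."""
    x, r = word
    return gf_mul(x, gf_mul(x, x)) == r

x, r = encode(0x57)
assert is_valid((x, r))
print(f"info=0x{x:02X} redundancy=0x{r:02X}; corrupted still valid: {is_valid((x ^ 1, r))}")
```

Because the check r = x^3 is nonlinear, no fixed error pattern is masked for all information values, which is what gives the uniform protection described above.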
Procedia PDF Downloads 491
2014 Accurate Positioning Method of Indoor Plastering Robot Based on Line Laser
Authors: Guanqiao Wang, Hongyang Yu
Abstract:
There is a lot of repetitive work in the traditional construction industry, and replacing these manual tasks with robots can significantly improve production efficiency. Therefore, robots appear more and more frequently in the construction industry. Navigation and positioning are essential tasks for construction robots, and the requirements for positioning accuracy are very high. Traditional indoor robots mainly use radio-frequency or vision methods for positioning. Compared with ordinary robots, an indoor plastering robot needs to be positioned closer to the wall, so the required construction positioning accuracy is higher; the traditional navigation and positioning methods have a large error, so without the exact position the wall cannot be plastered, or the plastering error is large. A new positioning method is proposed that is assisted by line lasers and uses image-processing-based positioning to refine the traditional positioning result. In actual work, filtering, edge detection, Hough transform, and other operations are performed on the images captured by the camera. Each time the position of the laser line is found, it is compared with the standard value, and the robot is moved or rotated to complete the positioning. The experimental results show that the actual positioning error is reduced to less than 0.5 mm by this accurate positioning method.
Keywords: indoor plastering robot, navigation, precise positioning, line laser, image processing
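A minimal sketch of the laser-line detection pipeline described above (filtering, edge detection, Hough transform), written with OpenCV; all parameter values, the calibration constants, and the image source are assumptions, not the authors' settings:

```python
# Hedged sketch of the laser-line localization pipeline from the abstract:
# filter, edge detection, Hough transform, then comparison with a standard
# value. All parameters and calibration constants below are assumptions.
import cv2
import numpy as np

def locate_laser_line(frame):
    """Return the mean horizontal pixel position of the detected laser line."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)          # filtering step
    edges = cv2.Canny(blurred, 50, 150)                  # edge detection step
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=10)
    if lines is None:
        return None
    xs = [x for x1, y1, x2, y2 in lines[:, 0] for x in (x1, x2)]
    return float(np.mean(xs))

STANDARD_X = 320.0      # assumed calibrated line position [pixels]
PIXELS_PER_MM = 10.0    # assumed image scale

frame = cv2.imread("laser_view.png")                     # hypothetical capture
if frame is not None:
    x = locate_laser_line(frame)
    if x is not None:
        # The robot would be moved by this correction and the image re-checked.
        print(f"lateral correction: {(STANDARD_X - x) / PIXELS_PER_MM:.2f} mm")
```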
Procedia PDF Downloads 148
2013 Signal Restoration Using Neural Network Based Equalizer for Nonlinear Channels
Authors: Z. Zerdoumi, D. Benatia, D. Chicouche
Abstract:
This paper investigates the application of artificial neural networks to the problem of nonlinear channel equalization. The difficulties caused by channel distortions such as intersymbol interference (ISI) and nonlinearity can be overcome by nonlinear equalizers employing neural networks. It has been shown that multilayer perceptron based equalizers significantly outperform linear equalizers. We present a multilayer perceptron based equalizer with decision feedback (MLP-DFE) trained with the back-propagation algorithm. The capacity of the MLP-DFE to deal with nonlinear channels is evaluated. The simulation results show that the MLP-based DFE significantly improves the restored signal quality, the steady-state mean square error (MSE), and the minimum bit error rate (BER) compared with its conventional counterpart.
Keywords: artificial neural network, signal restoration, nonlinear channel equalization, equalization
Procedia PDF Downloads 498
2012 Comparison of Various Classification Techniques Using WEKA for Colon Cancer Detection
Authors: Beema Akbar, Varun P. Gopi, V. Suresh Babu
Abstract:
Colon cancer causes the deaths of about half a million people every year. The common method of its detection is histopathological tissue analysis, which imposes fatigue and a heavy workload on the pathologist. A novel method is proposed that combines structural and statistical pattern recognition for the detection of colon cancer. This paper presents a comparison among different classifiers, namely Multilayer Perceptron (MLP), Sequential Minimal Optimization (SMO), Bayesian Logistic Regression (BLR), and K-star, using classification accuracy and error rate based on the percentage split method. The results show that the best algorithm in WEKA is the MLP classifier, with an accuracy of 83.333% and a kappa statistic of 0.625. The MLP classifier, which has a lower error rate, is preferred for its more powerful classification capability.
Keywords: colon cancer, histopathological image, structural and statistical pattern recognition, multilayer perceptron
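A hedged sketch of the percentage-split evaluation reporting accuracy and the kappa statistic, using scikit-learn stand-ins for two of the WEKA classifiers; the synthetic data and the 66/34 split ratio are assumptions:

```python
# Hedged sketch of a percentage-split evaluation reporting accuracy and the
# kappa statistic, with scikit-learn stand-ins for two of the WEKA classifiers.
# The synthetic data and the 66/34 split ratio are assumptions.
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC  # WEKA's SMO trains a support vector machine

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.66, random_state=0)

for name, clf in [("MLP", MLPClassifier(max_iter=1000, random_state=0)),
                  ("SMO (SVM)", SVC(kernel="linear"))]:
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: accuracy={accuracy_score(y_te, pred):.3f}, "
          f"kappa={cohen_kappa_score(y_te, pred):.3f}")
```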
Procedia PDF Downloads 575
2011 Comparing Machine Learning Estimation of Fuel Consumption of Heavy-Duty Vehicles
Authors: Victor Bodell, Lukas Ekstrom, Somayeh Aghanavesi
Abstract:
Fuel consumption (FC) is one of the key factors in determining the expenses of operating a heavy-duty vehicle. A customer may therefore request an estimate of the FC of a desired vehicle. The modular design of heavy-duty vehicles allows their construction by specifying the building blocks, such as the gearbox, engine, and chassis type. If the combination of building blocks is unprecedented, it is unfeasible to measure the FC, since this would first require the construction of the vehicle. This paper proposes a machine learning approach to predict FC. The study uses vehicle specifications and operational environmental conditions, such as road slopes and driver profiles, for around 40,000 vehicles. All vehicles have diesel engines and a mileage of more than 20,000 km. The data are used to investigate the accuracy of the machine learning algorithms linear regression (LR), k-nearest neighbor (KNN), and artificial neural networks (ANN) in predicting fuel consumption for heavy-duty vehicles. The performance of the algorithms is evaluated by reporting the prediction error on both simulated data and operational measurements, and is compared using nested cross-validation and statistical hypothesis testing. The statistical evaluation procedure finds that ANNs have the lowest prediction error compared to LR and KNN in estimating fuel consumption on both simulated and operational data. The models have a mean relative prediction error of 0.3% on simulated data and 4.2% on operational data.
Keywords: artificial neural networks, fuel consumption, Friedman test, machine learning, statistical hypothesis testing
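A hedged sketch of the nested cross-validation comparison of LR, KNN, and ANN described above; the synthetic data and hyperparameter grids are assumptions, since the paper's vehicle features are not public:

```python
# Hedged sketch of nested cross-validation comparing LR, KNN and ANN
# regressors. Synthetic data and all hyperparameter grids are assumptions.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=500, n_features=10, noise=5.0, random_state=0)
inner = KFold(5, shuffle=True, random_state=1)   # inner loop: hyperparameter tuning
outer = KFold(5, shuffle=True, random_state=2)   # outer loop: unbiased error estimate

models = {
    "LR": (LinearRegression(), {}),
    "KNN": (KNeighborsRegressor(), {"n_neighbors": [3, 5, 9]}),
    "ANN": (MLPRegressor(max_iter=2000, random_state=0),
            {"hidden_layer_sizes": [(16,), (32,)]}),
}
for name, (est, grid) in models.items():
    tuned = GridSearchCV(est, grid, cv=inner,
                         scoring="neg_mean_absolute_percentage_error")
    scores = cross_val_score(tuned, X, y, cv=outer,
                             scoring="neg_mean_absolute_percentage_error")
    print(f"{name}: mean relative prediction error = {-scores.mean():.3%}")
```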
Procedia PDF Downloads 180
2010 MMSE-Based Beamforming for Chip Interleaved CDMA in Aeronautical Mobile Radio Channel
Authors: Sherif K. El Dyasti, Esam A. Hagras, Adel E. El-Hennawy
Abstract:
This paper addresses the performance of antenna-array beamforming in a Chip-Interleaved Code Division Multiple Access (CI-CDMA) system based on the Minimum Mean Square Error (MMSE) detector in the aeronautical mobile radio channel. Multipath fading, Doppler shifts caused by the speed of the aircraft, and Multiple Access Interference (MAI) are the most important factors that degrade the performance of aeronautical systems. In this paper, we propose CI-CDMA with an antenna array to combat this fading and improve the bit error rate (BER) performance. We further evaluate the performance of the proposed system in the four standard scenarios of the aeronautical mobile radio channel.
Keywords: aeronautical channel, CI-CDMA, beamforming, communication, information
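For reference, a standard textbook form of the MMSE detection weights used in such receivers (generic notation, not necessarily the paper's exact formulation):

```latex
\hat{\mathbf{s}} = \mathbf{W}_{\mathrm{MMSE}}\,\mathbf{r}, \qquad
\mathbf{W}_{\mathrm{MMSE}} = \left(\mathbf{H}^{H}\mathbf{H} + \sigma^{2}\mathbf{I}\right)^{-1}\mathbf{H}^{H},
```

where H is the effective channel matrix, sigma^2 the noise variance, r the received vector, and the weights balance interference suppression against noise enhancement.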
Procedia PDF Downloads 419
2009 Schedule a New Production Plan by Heuristic Methods
Authors: Hanife Merve Öztürk, Sıdıka Dalgan
Abstract:
In this project, a capacity analysis study was carried out at the TAT A. Ş. Maret Plant. The production capacities of the products that generate 80% of the sales amount were determined. The obtained data were entered into the LEKIN scheduling program, and production schedules were generated using heuristic methods. Besides the heuristic methods, a disjunctive programming formulation was adapted as a mathematical model to flexible job shop problems by adding a new constraint to find the optimal schedule.
Keywords: scheduling, flexible job shop problem, shifting bottleneck heuristic, mathematical modelling
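For reference, the disjunctive core of such formulations: any two operations i and j competing for the same machine must satisfy exactly one of the two orderings (generic job shop form; the paper's added flexible-job-shop constraint is not reproduced here):

```latex
s_i + p_i \le s_j \quad \lor \quad s_j + p_j \le s_i,
```

where s denotes operation start times and p processing times.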
Procedia PDF Downloads 401
2008 Machine Learning Models for the Prediction of Heating and Cooling Loads of a Residential Building
Authors: Aaditya U. Jhamb
Abstract:
Due to the current energy crisis that many countries are battling, energy-efficient buildings are the subject of extensive research in the modern technological era because of growing worries about energy consumption and its effects on the environment. The paper explores eight factors that help determine the energy efficiency of a building: relative compactness, surface area, wall area, roof area, overall height, orientation, glazing area, and glazing area distribution, with Tsanas and Xifara providing the dataset. The dataset employed 768 different residential building models to predict heating and cooling loads with a low mean squared error. By optimizing these characteristics, machine learning algorithms can assess and properly forecast a building's heating and cooling loads, lowering energy usage while increasing people's quality of life. The paper studied the magnitude of the correlation between these input factors and the two output variables using various statistical methods of analysis, after determining which input variable was most closely associated with the output loads. The most conclusive model was the Decision Tree Regressor, with a mean squared error of 0.258, whilst the least conclusive was the Isotonic Regressor, with a mean squared error of 21.68. The paper also investigated the KNN Regressor and Linear Regression, which had mean squared errors of 3.349 and 18.141, respectively. In conclusion, the models, given the eight input variables, were able to predict the heating and cooling loads of a residential building accurately and precisely.
Keywords: energy efficient buildings, heating load, cooling load, machine learning models
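A hedged sketch of the four regressors compared above on the Tsanas and Xifara energy-efficiency data; the file name, column names, and the choice of a single feature for the univariate isotonic model are assumptions:

```python
# Hedged sketch of the four regressors compared above. The CSV file, column
# names, and fitting the univariate isotonic model on a single feature are
# assumptions (scikit-learn's IsotonicRegression accepts one input only).
import pandas as pd
from sklearn.isotonic import IsotonicRegression
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor

df = pd.read_csv("energy_efficiency.csv")        # hypothetical local copy
X, y = df.iloc[:, :8], df["heating_load"]        # 8 inputs, one output load
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = [("Decision Tree", DecisionTreeRegressor(random_state=0), slice(None)),
          ("KNN", KNeighborsRegressor(), slice(None)),
          ("Linear", LinearRegression(), slice(None)),
          ("Isotonic", IsotonicRegression(out_of_bounds="clip"), 0)]
for name, model, cols in models:
    pred = model.fit(X_tr.iloc[:, cols], y_tr).predict(X_te.iloc[:, cols])
    print(f"{name}: MSE = {mean_squared_error(y_te, pred):.3f}")
```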
Procedia PDF Downloads 97
2007 Proportional and Integral Controller-Based Direct Current Servo Motor Speed Characterization
Authors: Adel Salem Bahakeem, Ahmad Jamal, Mir Md. Maruf Morshed, Elwaleed Awad Khidir
Abstract:
Direct Current (DC) servo motors, or simply DC motors, play an important role in many industrial applications, such as the manufacturing of plastics, precise positioning of equipment, and the operation of computer-controlled systems where the speed of feed control, maintaining the position, and ensuring a constantly desired output are critical. These parameters can be controlled with the help of control systems such as the Proportional Integral Derivative (PID) controller. The aim of the current work is to investigate the effects of Proportional (P) and Integral (I) controllers on the steady-state and transient response of a DC motor. The controller gains are varied to observe their effects on the error, damping, and stability of the steady and transient motor response. The investigation is conducted experimentally on a servo trainer CE 110 using the analog PI controller CE 120 and theoretically using Simulink in MATLAB. Both the experimental and theoretical work involve varying the integral controller gain to obtain the response to a steady-state input; varying, individually, the proportional and integral controller gains to obtain the response to a step input function at a certain frequency; and theoretically obtaining the proportional and integral controller gains for desired values of damping ratio and response frequency. Results reveal that a proportional controller helps reduce the steady-state and transient error between the input signal and output response and makes the system more stable; it also speeds up the response of the system. On the other hand, the integral controller eliminates the error but tends to make the system unstable, with induced oscillations and a slow response while eliminating the error. From the current work, it is desired to achieve a stable response of the servo motor in terms of its angular velocity, subjected to steady-state and transient input signals, by utilizing the strengths of both P and I controllers.
Keywords: DC servo motor, proportional controller, integral controller, controller gain optimization, Simulink
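A minimal simulation sketch of the PI action described above on a first-order DC motor speed model; the motor constants and controller gains are assumptions, not values from the CE 110/CE 120 experiments:

```python
# Hedged simulation sketch of P and I action on a DC servo motor speed loop.
# The first-order motor model (gain K, time constant tau) and the controller
# gains are assumptions, not values from the CE 110 / CE 120 experiments.
K, tau = 2.0, 0.5        # assumed motor gain and time constant [s]
Kp, Ki = 1.5, 3.0        # assumed proportional and integral gains
dt, T = 1e-3, 3.0        # integration step and horizon [s]
setpoint = 100.0         # step speed command [rad/s]

omega, integral = 0.0, 0.0
for _ in range(int(T / dt)):
    error = setpoint - omega
    integral += error * dt
    u = Kp * error + Ki * integral       # PI control law
    # First-order motor dynamics: tau * domega/dt = -omega + K * u
    omega += dt * (-omega + K * u) / tau

# With Ki > 0 the integral action drives the steady-state error to zero,
# matching the behavior described in the abstract.
print(f"final speed = {omega:.2f} rad/s, remaining error = {setpoint - omega:.4f}")
```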
Procedia PDF Downloads 110
2006 A Survey of Types and Causes of Medication Errors and Related Factors in Clinical Nurses
Authors: Kouorsh Zarea, Fatemeh Hassani, Samira Beiranvand, Akram Mohamadi
Abstract:
Background and Objectives: Medication errors in hospitals are a major cause of the errors that disrupt the health care system. The aim of this study was to assess nurses' medication errors and related factors. Materials and methods: This was a descriptive study of 225 nurses in various hospitals, selected through multistage random sampling. Data were collected by three researcher-made tools: demographic, medication error, and related factors questionnaires. Data were analyzed by descriptive statistics, chi-square, Kruskal-Wallis, and one-way analysis of variance. Results: The most common types of medication errors were giving drugs to patients later or earlier than scheduled (55.6%), giving multiple oral medications together regardless of their interactions (36%), and giving postoperative analgesics without a prescription (34.2%). In addition, factors such as the shortage of nurses relative to the number of patients (57.3%), high workload (51.1%), and fatigue caused by extra work (40.4%) were the most important factors affecting the incidence of medication errors. The fear of legal issues (40%) was the most important factor behind the underreporting of medication errors. Conclusions: Based on the results, effective management and promotion can motivate nurses. Therefore, increasing scientific and clinical expertise in the field of nursing medication orders is recommended to prevent medication errors in various states of nursing intervention. Employing experienced staff in areas with a high risk of medication errors and supervising less experienced staff through competent personnel are also suggested.
Keywords: medication error, nurse, clinical care, drug errors
Procedia PDF Downloads 267
2005 Variable Tree Structure QR Decomposition-M Algorithm (QRD-M) in Multiple Input Multiple Output-Orthogonal Frequency Division Multiplexing (MIMO-OFDM) Systems
Authors: Jae-Hyun Ro, Jong-Kwang Kim, Chang-Hee Kang, Hyoung-Kyu Song
Abstract:
In multiple input multiple output-orthogonal frequency division multiplexing (MIMO-OFDM) systems, the QR decomposition-M algorithm (QRD-M) has suboptimal error performance. However, the QRD-M still has high complexity due to the many calculations at each layer of the tree structure. To reduce the complexity of the QRD-M, the proposed QRD-M modifies the existing tree structure by eliminating unnecessary candidates at almost all layers. The elimination discards the candidates whose accumulated squared Euclidean distances are larger than a calculated threshold. The simulation results show that the proposed QRD-M achieves the same bit error rate (BER) performance as the conventional QRD-M with lower complexity.
Keywords: complexity, MIMO-OFDM, QRD-M, squared Euclidean distance
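A hedged numpy sketch of QRD-M detection with the threshold-based candidate pruning described above; the QPSK constellation, M value, and threshold rule are assumptions:

```python
# Hedged numpy sketch of QRD-M detection with the threshold pruning proposed
# above. The QPSK alphabet, M value, and the fixed threshold are assumptions.
import numpy as np

def qrdm_detect(H, y, symbols, M=4, thresh=None):
    """Breadth-first QRD-M search; candidates whose accumulated squared
    Euclidean distance exceeds `thresh` are discarded (the modification)."""
    n = H.shape[1]
    Q, R = np.linalg.qr(H)
    z = Q.conj().T @ y
    survivors = [((), 0.0)]                 # (symbols in detection order, metric)
    for layer in range(n - 1, -1, -1):      # detect from the last layer upward
        cands = []
        for path, metric in survivors:
            known = np.array(path[::-1], dtype=complex)   # s_{layer+1}..s_{n-1}
            for s in symbols:
                interf = R[layer, layer + 1:] @ known if path else 0.0
                m = metric + abs(z[layer] - R[layer, layer] * s - interf) ** 2
                if thresh is None or m <= thresh:
                    cands.append((path + (s,), m))
        cands.sort(key=lambda c: c[1])
        survivors = cands[:M]               # keep the best M candidates
    best = min(survivors, key=lambda c: c[1])[0]
    return np.array(best[::-1])             # reorder to s_0 .. s_{n-1}

rng = np.random.default_rng(0)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
s = rng.choice(qpsk, 4)
y = H @ s + 0.05 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
print("detected correctly:", np.allclose(qrdm_detect(H, y, qpsk), s))
```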
Procedia PDF Downloads 335
2004 Mathematical Modeling for Diabetes Prediction: A Neuro-Fuzzy Approach
Authors: Vijay Kr. Yadav, Nilam Rathi
Abstract:
Accurate prediction of the glucose level in diabetes mellitus is required to avoid affecting the functioning of major organs of the human body. This study describes the fundamental assumptions and two different methodologies for blood glucose prediction. The first is based on the back-propagation algorithm of the Artificial Neural Network (ANN), and the second is based on a neuro-fuzzy technique, the Fuzzy Inference System (FIS). Errors of the proposed methods are further discussed through statistical measures such as the mean square error (MSE) and normalised mean absolute error (NMAE). The main objective of the present study is to develop a mathematical model for blood glucose prediction 12 hours in advance, using a data set of three patients over 60 days. Comparative studies of the accuracy against other existing models are also made with the same data set.
Keywords: back-propagation, diabetes mellitus, fuzzy inference system, neuro-fuzzy
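The two error measures named above, as a short sketch; normalizing the MAE by the range of the observed values is an assumption, since several NMAE normalizations exist:

```python
# MSE and NMAE as used above. Normalizing the MAE by the range of the
# observed values is an assumption; other normalizations (e.g., by the
# mean) also appear in the literature.
import numpy as np

def mse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean((y_true - y_pred) ** 2))

def nmae(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(np.abs(y_true - y_pred)) / (y_true.max() - y_true.min()))

measured = [110.0, 145.0, 180.0, 160.0]    # hypothetical glucose readings [mg/dL]
predicted = [115.0, 150.0, 170.0, 158.0]
print(f"MSE = {mse(measured, predicted):.2f}, NMAE = {nmae(measured, predicted):.4f}")
```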
Procedia PDF Downloads 259
2003 Career Guidance System Using Machine Learning
Authors: Mane Darbinyan, Lusine Hayrapetyan, Elen Matevosyan
Abstract:
Artificial Intelligence in Education (AIED) has been created to help students get ready for the workforce, and over the past 25 years it has grown significantly, offering a variety of technologies to support academic, institutional, and administrative services. However, this is still challenging, especially considering the labor market's rapid change. While choosing a career, people face various obstacles because they do not take their own preferences into consideration, which might lead to many other problems, such as shifting jobs, work stress, occupational infirmity, reduced productivity, and manual error. Besides preferences, people should properly evaluate their technical and non-technical skills, as well as their personalities. Professional counseling has become a difficult undertaking for counselors due to the wide range of career choices brought on by changing technological trends. It is necessary to close this gap by utilizing technology that makes sophisticated predictions about a person's career goals based on their personality. Hence, there is a need to create an automated model that would help in decision-making based on user inputs. Improving career guidance can be achieved by embedding machine learning into the career consulting ecosystem. There are various career guidance systems that work based on the same logic, such as the classification of applicants, matching applications with appropriate departments or jobs, making predictions, and providing suitable recommendations. Methodologies like KNN, neural networks, K-means clustering, D-Tree, and many other advanced algorithms are applied to the collected data to predict suitable careers. Besides helping users with their career choice, these systems provide numerous opportunities that are very useful when making this hard decision. They help candidates recognize where they specifically lack sufficient skills so that they can improve those skills. They are also capable of offering an e-learning platform, taking into account the user's lack of knowledge. Furthermore, users can be provided with details on a particular job, such as the abilities required to excel in that industry.
Keywords: career guidance system, machine learning, career prediction, predictive decision, data mining, technical and non-technical skills
Procedia PDF Downloads 81
2002 A Comparative Study on a Tilt-Integral-Derivative Controller with Proportional-Integral-Derivative Controller for a Pacemaker
Authors: Aysan Esgandanian, Sabalan Daneshvar
Abstract:
This study compares the proportional-integral-derivative (PID) controller and the tilt-integral-derivative (TID) controller for cardiac pacemaker systems, which can automatically control the heart rate to accurately track a desired preset profile. The controller offers good adaptation of the heart to the physiological needs of the patient. The parameters of both controllers are tuned by the particle swarm optimization (PSO) algorithm, which uses the integral of time-squared error as the fitness function to be minimized. Simulations are performed on a developed model of the human cardiovascular system, and the results demonstrate that the TID controller produces superior control performance compared with the PID controller. All simulations were performed in MATLAB.
Keywords: integral of time square error, pacemaker systems, proportional-integral-derivative controller, PSO algorithm, tilt-integral-derivative controller
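A short sketch of the PSO fitness named above, the integral of time-squared error (ITSE), approximated on a sampled closed-loop response; the decaying error profile is hypothetical:

```python
# Sketch of the PSO fitness named above: the integral of time-multiplied
# squared error (ITSE), approximated by the trapezoidal rule on a sampled
# response. The heart-rate tracking-error profile is hypothetical.
import numpy as np

def itse(t, error):
    """ITSE = integral of t * e(t)^2 dt."""
    t, error = np.asarray(t, dtype=float), np.asarray(error, dtype=float)
    f = t * error ** 2
    return float(np.sum((f[:-1] + f[1:]) * np.diff(t) / 2.0))

t = np.linspace(0.0, 10.0, 1001)
error = 20.0 * np.exp(-1.5 * t)       # assumed heart-rate tracking error [bpm]
print(f"ITSE fitness = {itse(t, error):.3f}")   # PSO minimizes this value
```

Weighting the squared error by time penalizes errors that persist late in the response, which favors controllers that settle quickly.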
Procedia PDF Downloads 463
2000 High-Resolution Spatiotemporal Retrievals of Aerosol Optical Depth from Geostationary Satellite Using SARA Algorithm
Authors: Muhammad Bilal, Zhongfeng Qiu
Abstract:
Aerosols, suspended particles in the atmosphere, play an important role in the Earth's energy budget, climate change, degradation of atmospheric visibility, urban air quality, and human health. To fully understand aerosol effects, retrieval of aerosol optical properties such as aerosol optical depth (AOD) at high spatiotemporal resolution is required. Therefore, in the present study, hourly AOD observations at 500 m resolution were retrieved from the geostationary ocean color imager (GOCI) using the simplified aerosol retrieval algorithm (SARA) over the urban area of Beijing for the year 2016. The SARA requires top-of-the-atmosphere (TOA) reflectance, solar and sensor geometry information, and surface reflectance observations to retrieve an accurate AOD. For validation of the GOCI-retrieved AOD, AOD measurements were obtained from the aerosol robotic network (AERONET) version 3 level 2.0 (cloud-screened and quality-assured) data. The errors and uncertainties were reported using the root mean square error (RMSE), relative percent mean error (RPME), and the expected error (EE = ±(0.05 + 0.15·AOD)). Results showed that the high spatiotemporal GOCI AOD observations were well correlated with the AERONET AOD measurements, with a correlation coefficient (R) of 0.92, an RMSE of 0.07, and an RPME of 5%, and 90% of the observations were within the EE. The results suggest that the SARA is robust and has the ability to retrieve high-resolution spatiotemporal AOD observations over urban areas using a geostationary satellite.
Keywords: AERONET, AOD, SARA, GOCI, Beijing
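A hedged sketch of the validation statistics reported above, including the fraction of retrievals falling within the expected error envelope EE = ±(0.05 + 0.15·AOD); the RPME definition (mean relative error in percent) is an assumption:

```python
# Hedged sketch of the validation statistics reported above. The RPME
# definition (mean relative error in percent) is an assumption.
import numpy as np

def validate_aod(aod_sat, aod_ref):
    sat, ref = np.asarray(aod_sat), np.asarray(aod_ref)
    r = np.corrcoef(sat, ref)[0, 1]
    rmse = float(np.sqrt(np.mean((sat - ref) ** 2)))
    rpme = float(100.0 * np.mean(np.abs(sat - ref) / ref))
    envelope = 0.05 + 0.15 * ref                 # EE = +/-(0.05 + 0.15 * AOD)
    within_ee = float(100.0 * np.mean(np.abs(sat - ref) <= envelope))
    return r, rmse, rpme, within_ee

# Hypothetical matched GOCI/AERONET pairs:
sat = np.array([0.42, 0.55, 0.31, 0.78])
ref = np.array([0.40, 0.60, 0.28, 0.75])
r, rmse, rpme, pct = validate_aod(sat, ref)
print(f"R={r:.2f}, RMSE={rmse:.3f}, RPME={rpme:.1f}%, within EE={pct:.0f}%")
```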
Procedia PDF Downloads 172
1999 Major Factors That Enhance Economic Growth in South Africa: A Re-Examination Using a Vector Error Correction Mechanism
Authors: Temitope L. A. Leshoro
Abstract:
This study explored several variables that enhance economic growth in South Africa, based on different growth theories, using the vector error correction model (VECM) technique. The impacts and contributions of each of these variables on GDP in South Africa were investigated. The motivation for this study was the weak economic growth that the country has been experiencing lately, as well as the continuous increase in the unemployment rate and the deteriorating health care system. Annual data spanning the period 1974 to 2013 were employed. The results showed that the major determinants of GDP are trade openness, government spending, and a health indicator, as these variables are not only economically but also statistically significant in explaining the changes in GDP in South Africa. Policy recommendations for enhancing economic growth are suggested based on the findings of this study.
Keywords: economic growth, GDP, investment, health indicator, VECM
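For reference, the standard form of the VECM used in such studies (generic notation; the lag order p is chosen empirically):

```latex
\Delta y_t = \alpha \beta' y_{t-1} + \sum_{i=1}^{p-1} \Gamma_i\, \Delta y_{t-i} + \varepsilon_t,
```

where y_t stacks GDP and its determinants, beta holds the cointegrating vectors, and alpha the speeds of adjustment back to the long-run relationship.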
Procedia PDF Downloads 276
1998 Fractional Euler Method and Finite Difference Formula Using Conformable Fractional Derivative
Authors: Ramzi B. Albadarneh
Abstract:
In this paper, we use the new definition of the fractional derivative, called the conformable fractional derivative, to derive finite difference formulas and their error terms, which are used to solve fractional differential equations and fractional partial differential equations, and also to derive the fractional Euler method and its error terms, which can be applied to solve fractional differential equations. To demonstrate the contribution of our work, some applications of the finite difference formulas and the Euler method are given.
Keywords: conformable fractional derivative, finite difference formula, fractional derivative
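For reference, the conformable fractional derivative of order alpha in (0, 1] is commonly defined as below; the Euler step shown is one natural discretization consistent with this definition, not necessarily the paper's exact formula:

```latex
% Conformable fractional derivative of order \alpha \in (0,1]:
T_\alpha(f)(t) = \lim_{\varepsilon \to 0}
  \frac{f\left(t + \varepsilon\, t^{1-\alpha}\right) - f(t)}{\varepsilon},
\qquad T_\alpha(f)(t) = t^{1-\alpha} f'(t) \ \text{for differentiable } f.

% One natural Euler step for T_\alpha y = f(t, y) on [t_n, t_{n+1}]:
y_{n+1} = y_n + \frac{t_{n+1}^{\alpha} - t_n^{\alpha}}{\alpha}\, f(t_n, y_n).
```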
Procedia PDF Downloads 439
1997 Corrective Feedback and Uptake Patterns in English Speaking Lessons at Hanoi Law University
Authors: Nhac Thanh Huong
Abstract:
New teaching methods have led to changes in the teacher's role in the English class, in which error correction is an integral part. Language errors and corrective feedback have been of interest to many researchers in foreign language teaching; however, the techniques and the effectiveness of teachers' feedback have been a question of much controversy. The present case study was carried out with a view to finding out the patterns of teachers' corrective feedback and their impact on students' uptake in English speaking lessons of legal English major students at Hanoi Law University. To achieve those aims, the study makes use of classroom observations as the main method of data collection to seek answers to the two following questions: 1. What patterns of corrective feedback occur in English speaking lessons for second-year legal English major students at Hanoi Law University? 2. To what extent does that corrective feedback lead to students' uptake? The study provided some important findings, among which was a close relationship between corrective feedback and uptake. In particular, recast was the most commonly used feedback type, yet it was the least effective in terms of students' uptake and repair, while the most successful feedback types, namely meta-linguistic feedback, clarification requests, and elicitation, which led to student-generated repair, were used at a much lower rate by teachers. Furthermore, the study revealed that different types of errors need different types of feedback, and that the use of feedback depends on the students' English proficiency level. In light of these findings, a number of pedagogical implications have been drawn in the hope of enhancing the effectiveness of teachers' corrective feedback on students' uptake in the foreign language acquisition process.
Keywords: corrective feedback, error, uptake, English speaking lesson
Procedia PDF Downloads 265
1996 Multi Response Optimization in Drilling Al6063/SiC/15% Metal Matrix Composite
Authors: Hari Singh, Abhishek Kamboj, Sudhir Kumar
Abstract:
This investigation proposes a grey-based Taguchi method to solve multi-response problems. The grey-based Taguchi method is based on Taguchi's design of experiments and adopts Grey Relational Analysis (GRA) to transform multi-response problems into single-response problems. In this investigation, an attempt has been made to optimize the drilling process parameters considering weighted output response characteristics using grey relational analysis. The output response characteristics considered are surface roughness, burr height, and hole diameter error under the experimental conditions of cutting speed, feed rate, step angle, and cutting environment. The drilling experiments were conducted using an L27 orthogonal array. A combination of orthogonal array, design of experiments, and grey relational analysis was used to ascertain the best possible drilling process parameters that give minimum surface roughness, burr height, and hole diameter error. The results reveal that the combination of the Taguchi design of experiments and grey relational analysis improves the surface quality of the drilled hole.
Keywords: metal matrix composite, drilling, optimization, step drill, surface roughness, burr height, hole diameter error
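A hedged sketch of the grey relational analysis step that merges the three responses above into one grade per run (all three are smaller-the-better); the response values, weights, and distinguishing coefficient are assumptions:

```python
# Hedged sketch of grey relational analysis for three smaller-the-better
# responses. Measured values, weights, and the distinguishing coefficient
# (zeta) are assumptions, not the paper's L27 data.
import numpy as np

def grey_relational_grades(responses, weights, zeta=0.5):
    """responses: (n_runs, n_responses), all smaller-the-better."""
    r = np.asarray(responses, dtype=float)
    # Normalize each smaller-the-better response to [0, 1] (1 = ideal).
    norm = (r.max(axis=0) - r) / (r.max(axis=0) - r.min(axis=0))
    delta = 1.0 - norm                         # deviation from the ideal sequence
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coeff @ np.asarray(weights, dtype=float)   # weighted grade per run

runs = [[1.8, 0.12, 0.05],    # hypothetical [roughness, burr height, dia. error]
        [2.4, 0.09, 0.07],
        [1.5, 0.15, 0.04]]
grades = grey_relational_grades(runs, weights=[1 / 3, 1 / 3, 1 / 3])
print("grades:", np.round(grades, 3), "-> best run:", int(np.argmax(grades)) + 1)
```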
Procedia PDF Downloads 319
1995 GIS Application in Surface Runoff Estimation for Upper Klang River Basin, Malaysia
Authors: Suzana Ramli, Wardah Tahir
Abstract:
Estimation of surface runoff depth is a vital part of any rainfall-runoff modeling; it leads to stream flow calculation and later predicts flood occurrences. GIS (Geographic Information System) is an advanced and apposite tool for simulating hydrological models due to its realistic handling of topography. The paper discusses the calculation of surface runoff depth for two selected events using GIS with the Curve Number method for the Upper Klang River basin. GIS enables map intersection between soil type and land use, which produces the curve number map. The results show good correlation between simulated and observed values, with R² greater than 0.7. Acceptable performance on statistical measures, namely mean error, absolute mean error, RMSE, and bias, is also reported in the paper.
Keywords: surface runoff, geographic information system, curve number method, environment
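A short sketch of the SCS Curve Number runoff-depth calculation underlying the GIS workflow above (metric form); the CN value, rainfall depth, and initial abstraction ratio are assumptions:

```python
# Hedged sketch of the SCS Curve Number runoff-depth calculation (metric form).
# The CN value, rainfall depth, and initial abstraction ratio are assumptions.
def scs_runoff_depth(p_mm: float, cn: float, ia_ratio: float = 0.2) -> float:
    """Runoff depth Q [mm] from event rainfall P [mm] and curve number CN."""
    s = 25400.0 / cn - 254.0          # potential maximum retention [mm]
    ia = ia_ratio * s                 # initial abstraction
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Example: an 80 mm storm on a cell whose soil/land-use intersection gave CN = 85.
print(f"runoff depth = {scs_runoff_depth(80.0, 85.0):.1f} mm")
```

In the GIS workflow, this calculation is applied cell by cell using the CN value produced by intersecting the soil-type and land-use maps.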
Procedia PDF Downloads 282
1994 Perceptual Image Coding by Exploiting Internal Generative Mechanism
Authors: Kuo-Cheng Liu
Abstract:
In perceptual image coding, the objective is to shape the coding distortion such that the amplitude of the distortion does not exceed the error visibility threshold, or to remove perceptually redundant signals from the image. While most research focuses on color image coding, perceptual quantizers developed for luminance signals are often directly applied to chrominance signals, making such color image compression methods inefficient. In this paper, the internal generative mechanism is integrated into the design of a color image compression method. A working model of the internal generative mechanism based on structure-based spatial masking is used to assess subjective distortion visibility thresholds that are more consistent with human visual perception. An estimation method for the structure-based distortion visibility thresholds of the color components is further presented, in a locally adaptive way, to design the quantization process in the wavelet color image compression scheme. Since the lowest subband coefficient matrix of an image in the wavelet domain preserves the local properties of the image in the spatial domain, the error visibility threshold inherent in each coefficient of the lowest subband for each color component is estimated using the proposed spatial error visibility threshold assessment. The threshold inherent in each coefficient of the other subbands for each color component is then estimated in a locally adaptive fashion based on the distortion energy allocation. Because the error visibility thresholds are estimated from the predicted and reconstructed signals of the color image, the coding scheme incorporating the locally adaptive perceptual color quantizer does not require side information. Experimental results show that the entropies of the three color components obtained with the proposed IGM-based color image compression scheme are lower than those obtained with the existing color image compression method at perceptually lossless visual quality.
Keywords: internal generative mechanism, structure-based spatial masking, visibility threshold, wavelet domain
Procedia PDF Downloads 248
1993 Iterative Method for Lung Tumor Localization in 4D CT
Authors: Sarah K. Hagi, Majdi Alnowaimi
Abstract:
In the last decade, there have been immense advancements in medical imaging modalities. These advancements make it possible to scan a whole lung volume in high-resolution images within a short time. Owing to this performance, physicians can clearly identify the complicated anatomical and pathological structures of the lung. Therefore, these advancements offer large opportunities for advancing all available types of lung cancer treatment and will increase the survival rate. However, lung cancer is still one of the major causes of death, accounting for around 19% of all cancer patients. Several factors may affect the survival rate. One of the serious effects is the breathing process, which can affect the accuracy of diagnosis and the lung tumor treatment plan. We have therefore developed a semi-automated algorithm to localize the 3D lung tumor positions across all respiratory data during respiratory motion. The algorithm can be divided into two stages. First, a lung tumor segmentation is performed for the first phase of the 4D computed tomography (CT) using an active contours method. Then, the tumor's 3D position is localized across all subsequent phases using a 12-degree-of-freedom affine transformation. Two data sets were used in this study: a simulated 4D CT generated with the extended cardiac-torso (XCAT) phantom, and clinical 4D CT data sets. The error calculation is presented as the root mean square error (RMSE); the average error across the data sets is 0.94 mm ± 0.36. Finally, an evaluation and quantitative comparison of the results with a state-of-the-art registration algorithm is introduced. The results obtained from the proposed localization algorithm show promise for localizing a lung tumor in 4D CT data.
Keywords: automated algorithm, computed tomography, lung tumor, tumor localization
Procedia PDF Downloads 605
1992 Hierarchical Operation Strategies for Grid Connected Building Microgrid with Energy Storage and Photovoltaic Source
Authors: Seon-Ho Yoon, Jin-Young Choi, Dong-Jun Won
Abstract:
This paper presents hierarchical operation strategies that minimize the operation error between the day-ahead operation plan and real-time operation. Operating power systems between centralized and decentralized approaches can be represented as a hierarchical control scheme, featuring primary, secondary, and tertiary control. Primary control is known as local control and features a fast response. Secondary control is referred to as the microgrid Energy Management System (EMS). Tertiary control is responsible for coordinating the operations of multiple microgrids. In this paper, we formulate a three-stage microgrid operation strategy that is similar to the hierarchical control scheme. The first stage sets the day-ahead scheduled output power of the Battery Energy Storage System (BESS), which is the only controllable source in the microgrid; it is optimized to minimize the cost of power exchanged with the main grid using the Particle Swarm Optimization (PSO) method. The second stage controls the active and reactive power of the BESS so that it follows the day-ahead schedule when a State of Charge (SOC) error occurs between real time and the scheduled plan. The third stage reschedules the system when the predicted error exceeds a limit value. The first stage can be compared with the secondary control in that it adjusts the active power. The second stage is comparable to the primary control in that it controls the error in a local manner. The third stage is compared with the secondary control in that it manages power balancing. The proposed strategies are applied to one of the buildings at the Electronics and Telecommunications Research Institute (ETRI). The building microgrid is composed of photovoltaic (PV) generation, a BESS, and load, and it is interconnected with the main grid. The main purpose is to minimize the operation cost and to operate according to the scheduled plan. Simulation results support the validity of the proposed strategies.
Keywords: Battery Energy Storage System (BESS), Energy Management System (EMS), Microgrid (MG), Particle Swarm Optimization (PSO)
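A hedged sketch of the first stage described above: plain PSO searching a day-ahead BESS schedule that minimizes the cost of power bought from the main grid. The tariff, net-load profile, BESS limits, and PSO coefficients are all assumptions:

```python
# Hedged sketch of the first stage: PSO searching a day-ahead BESS schedule
# that minimizes the cost of power bought from the main grid. Tariff, net-load
# profile, BESS limits, and PSO coefficients are all assumptions.
import numpy as np

rng = np.random.default_rng(0)
H = 24
price = 0.1 + 0.1 * (np.arange(H) >= 17)     # assumed tariff [$/kWh], evening peak
net_load = 50 + 20 * np.sin(np.arange(H) / H * 2 * np.pi)   # load minus PV [kW]
P_MAX, E_MAX = 30.0, 100.0                   # BESS power [kW] / energy [kWh] limits

def cost(p):                                 # p[h] > 0 means discharging at hour h
    soc = E_MAX / 2 + np.cumsum(-p)          # 1-hour steps, assumed initial SOC
    penalty = 1e3 if (soc.min() < 0 or soc.max() > E_MAX) else 0.0
    return float(np.sum(np.maximum(net_load - p, 0) * price)) + penalty

n, w, c1, c2 = 40, 0.7, 1.5, 1.5             # swarm size and PSO coefficients
x = rng.uniform(-P_MAX, P_MAX, (n, H))
v = np.zeros((n, H))
pbest, pbest_f = x.copy(), np.array([cost(xi) for xi in x])
for _ in range(200):
    gbest = pbest[np.argmin(pbest_f)]
    v = (w * v + c1 * rng.random((n, H)) * (pbest - x)
         + c2 * rng.random((n, H)) * (gbest - x))
    x = np.clip(x + v, -P_MAX, P_MAX)
    f = np.array([cost(xi) for xi in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
print(f"scheduled daily purchase cost: ${pbest_f.min():.2f}")
```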
Procedia PDF Downloads 249
1991 Application of Double Side Approach Method on Super Elliptical Winkler Plate
Authors: Hsiang-Wen Tang, Cheng-Ying Lo
Abstract:
In this study, the static behavior of a super elliptical Winkler plate is analyzed by applying the double side approach method. The lack of information about super elliptical Winkler plates is the motivation for this study, and we use the double side approach method to solve this problem because of its superior ability to efficiently treat problems with complex boundary shapes. The double side approach method has the advantages of high accuracy, an easy calculation procedure, and a low calculation load. Most important of all, it can give the error bound of the approximate solution. The numerical results not only show that the double side approach method works well on this problem but also provide knowledge of the static behavior of super elliptical Winkler plates in practical use.
Keywords: super elliptical Winkler plate, double side approach method, error bound, mechanics
Procedia PDF Downloads 356
1990 Human Errors in IT Services, HFACS Model in Root Cause Categorization
Authors: Kari Saarelainen, Marko Jantti
Abstract:
IT service trending of the root causes of service incidents and problems is an important part of proactive problem management and service improvement. Root causes related to human error are an important category in IT service management as well, although their proportion among root causes is smaller than in other industries. The research problem in this study is: how should the root causes of incidents related to human errors be categorized in an ITSM organization to effectively support service improvement? Categorization based on IT service management processes and on the Human Factors Analysis and Classification System (HFACS) taxonomy was studied in a case study. HFACS is widely used for human error root cause categorization across many industries. Combining these two categorization models in a two-dimensional matrix was found effective, yet impractical for daily work.
Keywords: IT service management, ITIL, incident, problem, HFACS, Swiss cheese model
Procedia PDF Downloads 490
1989 Mapping Poverty in the Philippines: Insights from Satellite Data and Spatial Econometrics
Authors: Htet Khaing Lin
Abstract:
This study explores the relationship between a diverse set of variables, encompassing both environmental and socio-economic factors, and poverty levels in the Philippines for the years 2012, 2015, and 2018. Employing Ordinary Least Squares (OLS), Spatial Lag Models (SLM), and Spatial Error Models (SEM), the study delves into the dynamics of key indicators, including daytime and nighttime land surface temperature, cropland surface, urban land surface, rainfall, population size, and normalized difference water, vegetation, and drought indices. The findings reveal consistent patterns and unexpected correlations, highlighting the need for nuanced policies that address the multifaceted challenges arising from the interplay of environmental and socio-economic factors.
Keywords: poverty analysis, OLS, spatial lag models, spatial error models, Philippines, Google Earth Engine, satellite data, environmental dynamics, socio-economic factors
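For reference, the two spatial specifications named above in their standard form (generic notation):

```latex
\text{SLM:}\quad y = \rho W y + X\beta + \varepsilon, \qquad
\text{SEM:}\quad y = X\beta + u,\ \ u = \lambda W u + \varepsilon,
```

where W is the spatial weights matrix and rho and lambda are the spatial dependence parameters; the SLM lets poverty in one area depend directly on poverty nearby, while the SEM confines spatial dependence to the error term.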
Procedia PDF Downloads 104
1988 Impact of Workers’ Remittances on Poverty in Pakistan: A Time Series Analysis by ARDL
Authors: Syed Aziz Rasool, Ayesha Zaman
Abstract:
Poverty is one of the most important problems for any developing nation. Workers' remittances and investment play a crucial role in the development of any country by reducing the poverty level; this research studies the relationship between workers' remittances and poverty alleviation in Pakistan, focusing on the significance of their effect on poverty reduction. The study uses time series data for the period 1972-2013. The Autoregressive Distributed Lag (ARDL) model and the Error Correction Model (ECM) have been used in order to find the long-run and short-run relationships, respectively, between workers' remittances and the poverty level. The inflow of remittances showed a significant negative impact on the poverty level. Moreover, the coefficient of the error correction model, which explains the adjustment towards convergence, is highly significant and negative. According to this research, policy makers should strongly focus on positive and effective policies to attract more remittances.
JEL code: J61
Procedia PDF Downloads 287
1987 A Comparative Analysis of the Performance of COSMO and WRF Models in Quantitative Rainfall Prediction
Authors: Isaac Mugume, Charles Basalirwa, Daniel Waiswa, Mary Nsabagwa, Triphonia Jacob Ngailo, Joachim Reuder, Schättler Ulrich, Musa Semujju
Abstract:
Numerical weather prediction (NWP) models are considered powerful tools for guiding quantitative rainfall prediction. A couple of NWP models exist and are used at many operational weather prediction centers. This study considers two of them, namely the Consortium for Small-scale Modeling (COSMO) model and the Weather Research and Forecasting (WRF) model. It compares the models' ability to predict rainfall over Uganda for the period 21st April 2013 to 10th May 2013 using the root mean square error (RMSE) and the mean error (ME). In comparing the performance of the models, this study assesses their ability to predict light rainfall events and extreme rainfall events. All the experiments used the default parameterization configurations and the same horizontal resolution (7 km). The results show that the COSMO model had a tendency to largely predict no rain, which explains its under-prediction. The COSMO model (RMSE: 14.16; ME: -5.91) presented a significantly (p = 0.014) higher magnitude of error compared to the WRF model (RMSE: 11.86; ME: -1.09). However, the COSMO model (RMSE: 3.85; ME: 1.39) performed significantly (p = 0.003) better than the WRF model (RMSE: 8.14; ME: 5.30) in simulating light rainfall events. All the models under-predicted extreme rainfall events, with the COSMO model (RMSE: 43.63; ME: -39.58) presenting significantly higher error magnitudes than the WRF model (RMSE: 35.14; ME: -26.95). This study recommends additional diagnosis of the models' treatment of deep convection over the tropics.
Keywords: comparative performance, the COSMO model, the WRF model, light rainfall events, extreme rainfall events
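The two verification scores used above, for reference (f forecast, o observation; a negative ME indicates under-prediction, consistent with the values reported):

```latex
\mathrm{ME} = \frac{1}{N}\sum_{i=1}^{N}\left(f_i - o_i\right), \qquad
\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(f_i - o_i\right)^{2}}.
```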
Procedia PDF Downloads 262