Search results for: error correction method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19830

19530 Image Compression Using Block Power Method for SVD Decomposition

Authors: El Asnaoui Khalid, Chawki Youness, Aksasse Brahim, Ouanan Mohammed

Abstract:

In recent decades, the important and fast growth in the development and demand of multimedia products has contributed to a shortage of bandwidth and of network storage memory. Consequently, the theory of data compression becomes more significant for reducing data redundancy in order to save on the transfer and storage of data. In this context, this paper addresses the problem of the lossless and the near-lossless compression of images. The proposed method is based on the Block SVD Power Method, which overcomes the disadvantages of Matlab's SVD function. The experimental results show that the proposed algorithm has better compression performance compared with the existing compression algorithms that use Matlab's SVD function. In addition, the proposed approach is simple and can provide different degrees of error resilience, which gives better image compression in a short execution time.
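
The core of the approach can be illustrated with a block (subspace) power iteration that approximates the leading singular triplets of the image matrix and then keeps only a rank-k reconstruction. The sketch below is a minimal, generic NumPy illustration under that assumption; it is not the authors' implementation, and the function names are hypothetical.

```python
import numpy as np

def block_power_svd(A, k, n_iter=50, seed=0):
    """Approximate the top-k singular triplets of A by block (subspace) power iteration."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    V = np.linalg.qr(rng.standard_normal((n, k)))[0]   # random orthonormal start
    for _ in range(n_iter):
        U, _ = np.linalg.qr(A @ V)                      # left subspace update
        V, _ = np.linalg.qr(A.T @ U)                    # right subspace update
    s = np.linalg.norm(A @ V, axis=0)                   # singular value estimates
    U = (A @ V) / s
    return U, s, V

def compress(image, k):
    """Rank-k reconstruction of a grayscale image (2-D float array)."""
    U, s, V = block_power_svd(image.astype(float), k)
    return U @ np.diag(s) @ V.T

# usage (hypothetical): img_k = compress(img, k=30)
# storage drops from m*n values to roughly k*(m+n+1) values
```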

Keywords: image compression, SVD, block SVD power method, lossless compression, near lossless

Procedia PDF Downloads 356
19529 The Relationships between Carbon Dioxide (CO2) Emissions, Energy Consumption and GDP per capita for Oman: Time Series Analysis, 1980–2010

Authors: Jinhoa Lee

Abstract:

The relationships between environmental quality, energy use and economic output have attracted growing attention over the past decades among researchers and policy makers. Focusing on the empirical aspects of the role of CO2 emissions and energy use in affecting economic output, this paper is an effort to fill the gap with a comprehensive case study at the country level using modern econometric techniques. To achieve this goal, this country-specific study examines the short-run and long-run relationships among energy consumption, carbon dioxide (CO2) emissions and gross domestic product (GDP) for Oman using time series analysis for the years 1980-2010. To investigate the relationships between the variables, this paper employs the Augmented Dickey-Fuller (ADF) test for stationarity, the Johansen maximum likelihood method for co-integration and a Vector Error Correction Model (VECM) for both short- and long-run causality among the research variables for the sample. All the variables in this study show very strong significant effects on GDP in the country in the long term. The long-run equilibrium in the VECM suggests positive long-run causality from CO2 emissions to GDP. Conversely, negative impacts of energy consumption on GDP are found to be significant in Oman during the period. In the short run, there exist negative unidirectional causalities among GDP, CO2 emissions and energy consumption, running from GDP to CO2 emissions and from energy consumption to CO2 emissions. Overall, the results support arguments that there are relationships among environmental quality, energy use and economic output in Oman over the period 1980-2010.
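
The estimation pipeline described here (unit-root tests, Johansen cointegration test, then a VECM) can be sketched with the statsmodels library. The snippet below is a minimal illustration only; the data file, column names and lag choices are hypothetical, not those of the study.

```python
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

# hypothetical annual data with columns gdp, co2, energy (e.g. in logs), 1980-2010
df = pd.read_csv("oman_1980_2010.csv", index_col="year")

# 1) ADF unit-root test on each series (H0: non-stationary)
for col in df.columns:
    stat, pvalue = adfuller(df[col])[:2]
    print(f"ADF {col}: stat={stat:.2f}, p={pvalue:.3f}")

# 2) Johansen cointegration test (trace statistic, one lagged difference)
jres = coint_johansen(df, det_order=0, k_ar_diff=1)
print("trace stats:", jres.lr1)
print("95% critical values:", jres.cvt[:, 1])

# 3) VECM with one cointegrating relation; the adjustment coefficients (alpha)
#    speak to long-run causality, the short-run coefficients to short-run causality
vecm = VECM(df, k_ar_diff=1, coint_rank=1, deterministic="co").fit()
print(vecm.summary())
```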

Keywords: CO2 emissions, energy consumption, GDP, Oman, time series analysis

Procedia PDF Downloads 433
19528 Advanced Model for Calculation of the Neutral Axis Shifting and the Wall Thickness Distribution in Rotary Draw Bending Processes

Authors: B. Engel, H. Hassan

Abstract:

Rotary draw bending is a method used in tube forming. In the tube bending process, the neutral axis moves towards the inner arc and the wall thickness distribution changes over the tube's cross section. Thinning takes place in the outer arc of the tube (extrados) due to the stretching of the material, whereas thickening occurs in the inner arc of the tube (intrados) due to the compression of the material. The calculations of the wall thickness distribution, neutral axis shifting, and strain distribution have not been accurate enough so far. The previous model (the geometrical model) describes the neutral axis shifting and wall thickness distribution. The geometry of the tube, the bending radius and the bending angle are considered in the geometrical model, while the influence of the material properties on the tube forming is ignored. The advanced model is a modification of the previous model that introduces the material properties through a correction factor. The correction factor is a purely empirically determined factor. The advanced model was compared with finite element simulation (FE simulation) using different bending factors (Bf = bending radius / diameter of the tube), wall thickness factors (Wf = diameter of the tube / wall thickness), and material properties (strain hardening exponent). The finite element model of rotary draw bending was set up in the PAM-TUBE program (version: 2012). Results from the advanced model agree well with the FE simulation and the experimental tests.

Keywords: rotary draw bending, material properties, neutral axis shifting, wall thickness distribution

Procedia PDF Downloads 371
19527 Cubic Trigonometric B-Spline Approach to Numerical Solution of Wave Equation

Authors: Shazalina Mat Zin, Ahmad Abd. Majid, Ahmad Izani Md. Ismail, Muhammad Abbas

Abstract:

The generalized wave equation models various problems in science and engineering. In this paper, a new three-time-level implicit approach based on cubic trigonometric B-splines for the approximate solution of the wave equation is developed. The usual finite difference approach is used to discretize the time derivative, while the cubic trigonometric B-spline is applied as an interpolating function in the space dimension. Von Neumann stability analysis is used to analyze the proposed method. Two problems are discussed to exhibit the feasibility and capability of the method. The absolute errors and maximum error are computed to assess the performance of the proposed method. The results were found to be in good agreement with known solutions and with existing schemes in the literature.

Keywords: collocation method, cubic trigonometric B-spline, finite difference, wave equation

Procedia PDF Downloads 502
19526 Adaptation of Hough Transform Algorithm for Text Document Skew Angle Detection

Authors: Kayode A. Olaniyi, Olabanji F. Omotoye, Adeola A. Ogunleye

Abstract:

Skew detection and correction form an important part of digital document analysis. This is because uncompensated skew can deteriorate document features and can complicate further document image processing steps. Efficient text document analysis and digitization can rarely be achieved when a document is skewed, even at a small angle. Once documents have been digitized through the scanning system and binarization has been achieved, document skew correction is required before further image analysis. Research effort has been put into this area, with algorithms developed to eliminate document skew. Skew angle correction algorithms can be compared based on performance criteria. The most important performance criteria are the accuracy of skew angle detection, the range of skew angles that can be detected, the speed of processing the image, the computational complexity and, consequently, the memory space used. The standard Hough Transform has successfully been implemented for text document skew angle estimation. However, the accuracy of the standard Hough Transform algorithm depends largely on how fine the angular step size is. Increasing the accuracy consequently consumes more time and memory space, especially where the number of pixels is considerably large. Whenever the Hough transform is used, there is always a trade-off between accuracy and speed, so a more efficient solution is needed that optimizes space as well as time. In this paper, an improved Hough transform (HT) technique that optimizes space as well as time to robustly detect document skew is presented. The modified Hough Transform algorithm resolves the contradiction between memory space, running time and accuracy. Our algorithm starts with a first angle estimate, accurate to zero decimal places, using the standard Hough Transform algorithm, which achieves minimal running time and space but lacks relative accuracy. Then, to increase accuracy, supposing the estimated angle found using the basic Hough algorithm is x degrees, we rerun the basic algorithm over a narrow range around x degrees with an accuracy of one decimal place. The same process is iterated until the desired level of accuracy is achieved. The skew estimation and correction algorithm for text images is implemented in MATLAB. The memory space estimation and processing time are also tabulated, with skew angles assumed to lie between 0° and 45°. The simulation results, demonstrated in MATLAB, show the high performance of our algorithm, with less computational time and memory space used in detecting document skew for a variety of documents with different levels of complexity.
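
The coarse-to-fine search described above can be sketched in a few lines: score each candidate angle by how sharply the foreground pixels concentrate onto Hough lines, then repeatedly narrow the angular range and refine the step size. The NumPy sketch below is a hedged illustration, not the authors' MATLAB code; the function names and the histogram-based scoring are assumptions.

```python
import numpy as np

def hough_score(ys, xs, theta_deg, n_bins=512):
    """Vote strength for one angle: how sharply the foreground pixels
    concentrate onto Hough lines rho = x*cos(theta) + y*sin(theta)."""
    t = np.deg2rad(theta_deg)
    rho = xs * np.cos(t) + ys * np.sin(t)
    hist, _ = np.histogram(rho, bins=n_bins)
    return np.sum(hist.astype(float) ** 2)        # peaked histogram -> large score

def detect_skew(binary_img, max_angle=45.0, decimals=2):
    """Coarse-to-fine skew estimate: 1-degree steps first, then refine the
    step size by a factor of 10 around the previous estimate."""
    ys, xs = np.nonzero(binary_img)
    angle, step = 0.0, 1.0
    lo, hi = -max_angle, max_angle
    for _ in range(decimals + 1):                  # passes at 1.0, 0.1, 0.01 degrees
        cand = np.arange(lo, hi + step, step)
        scores = [hough_score(ys, xs, a) for a in cand]
        angle = cand[int(np.argmax(scores))]
        lo, hi, step = angle - step, angle + step, step / 10.0
    return angle

# usage (hypothetical): skew = detect_skew(text_pixels); rotate the page by -skew
```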

Keywords: hough-transform, skew-detection, skew-angle, skew-correction, text-document

Procedia PDF Downloads 126
19525 The Relationships between Energy Consumption, Carbon Dioxide (CO2) Emissions, and GDP for Turkey: Time Series Analysis, 1980-2010

Authors: Jinhoa Lee

Abstract:

The relationships between environmental quality, energy use and economic output have attracted growing attention over the past decades among researchers and policy makers. Focusing on the empirical aspects of the role of carbon dioxide (CO2) emissions and energy use in affecting economic output, this paper is an effort to fill the gap with a comprehensive case study at the country level using modern econometric techniques. To achieve this goal, this country-specific study examines the short-run and long-run relationships among energy consumption (using disaggregated energy sources: crude oil, coal, natural gas, and electricity), CO2 emissions and gross domestic product (GDP) for Turkey using time series analysis for the years 1980-2010. To investigate the relationships between the variables, this paper employs the Augmented Dickey-Fuller (ADF) test for stationarity, Johansen's maximum likelihood method for cointegration and a Vector Error Correction Model (VECM) for both short- and long-run causality among the research variables for the sample. The long-run equilibrium in the VECM suggests no effects of the CO2 emissions and energy use on the GDP in Turkey. There exists a short-run bidirectional relationship between electricity and natural gas consumption, and there is also a negative unidirectional causality running from GDP to electricity use. Overall, the results partly support arguments that there are relationships between energy use and economic output; however, the effects may differ according to the source of energy, as in the case of Turkey for the period 1980-2010. There is no significant relationship between the CO2 emissions and the GDP, or between the CO2 emissions and energy use, in either the short term or the long term.

Keywords: CO2 emissions, energy consumption, GDP, Turkey, time series analysis

Procedia PDF Downloads 484
19524 Anomalies of Visual Perceptual Skills Amongst School Children in Foundation Phase in Olievenhoutbosch, Gauteng Province, South Africa

Authors: Maria Bonolo Mathevula

Abstract:

Background: Children are important members of communities, playing a major role in the future of any given country (Pera, Fails, Gelsomini, & Garzotto, 2018). Visual Perceptual Skills (VPSs) in children are an important health aspect of early childhood development through the Foundation Phase in school. Consequently, children should undergo visual screening before the commencement of schooling for early diagnosis of VPSs anomalies, because the primary role of VPSs is to equip children for academic performance in general. Aim: The aim of this study was to determine the anomalies of VPSs amongst school children in the Foundation Phase. The study's objectives were to determine the prevalence of VPSs anomalies amongst school children in the Foundation Phase; to determine the relationship between children's academic performance and VPSs anomalies; and to investigate the relationship between VPSs anomalies and refractive error. Methodology: This study used a mixed method, whereby triangulated qualitative (interviews) and quantitative (questionnaire and clinical data) data were used. It was, therefore, descriptive in nature. The study's target population was school children in the Foundation Phase. The study followed a purposive sampling method: school children in the Foundation Phase were purposively sampled to form part of this study, provided their parents had given signed consent. Data were collected using standardized interviews, a questionnaire, a clinical data card, and the TVPS standard data card. Results: Although the study is still ongoing, the preliminary outcomes based on data collected from one of the Foundation Phases suggest the following: while VPSs anomalies are not prevalent, they nevertheless have an indirect relationship with children's academic performance in the Foundation Phase. Notably, VPSs anomalies and refractive error are directly related, since the majority of children with refractive error, specifically compound hyperopic astigmatism, failed most subtests of the TVPS standard tests. Conclusion: Based on the study's preliminary findings, it is clear that optometrists still have a lot to do as far as research on VPSs is concerned. Furthermore, the researcher recommends that optometrists, as primary healthcare professionals, should also conduct school-readiness pre-assessments on children before the commencement of their grades in the Foundation Phase.

Keywords: foundation phase, visual perceptual skills, school children, refractive error

Procedia PDF Downloads 76
19523 Improved Pitch Detection Using Fourier Approximation Method

Authors: Balachandra Kumaraswamy, P. G. Poonacha

Abstract:

Automatic Music Information Retrieval has been one of the challenging topics of research for a few decades now, with several interesting approaches reported in the literature. In this paper, we develop a pitch extraction method based on a finite Fourier series approximation to the given window of samples, and estimate pitch as the fundamental period of this approximation. The method uses an analysis of the strength of the harmonics present in the signal to reduce octave as well as harmonic errors. The performance of our method is compared with three of the best-known methods for pitch extraction, namely the Yin, Windowed Special Normalization of the Auto-Correlation Function and Harmonic Product Spectrum methods. Our study with artificially created signals as well as music files shows that the Fourier Approximation method gives a much better estimate of pitch, with fewer octave and harmonic errors.
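
One common way to exploit harmonic strength when picking the fundamental is to score every candidate fundamental frequency by the summed spectral magnitude at its first few harmonics. The NumPy sketch below illustrates that general idea only; it is not the authors' Fourier-series formulation, and the parameter values are assumptions.

```python
import numpy as np

def estimate_pitch(frame, fs, fmin=60.0, fmax=1000.0, n_harmonics=5):
    """Score each candidate fundamental by summing the magnitudes of its first
    few harmonics in the frame's spectrum (reduces octave/harmonic errors)."""
    frame = frame * np.hanning(len(frame))
    spec = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    best_f0, best_score = 0.0, -np.inf
    for f0 in np.arange(fmin, fmax, 1.0):
        idx = [np.argmin(np.abs(freqs - k * f0)) for k in range(1, n_harmonics + 1)]
        score = np.sum(spec[idx])                 # total strength of the harmonic comb
        if score > best_score:
            best_f0, best_score = f0, score
    return best_f0

# usage (hypothetical): f0 = estimate_pitch(window, fs=44100)
```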

Keywords: pitch, fourier series, yin, normalization of the auto-correlation function, harmonic product, mean square error

Procedia PDF Downloads 385
19522 Physical Properties of Uranium Dinitride UN2 by Using Density Functional Theory (DFT and DFT+U)

Authors: T. Zergoug, S. E. H. Abaidia, A. Nedjar, M. Y. Mokeddem

Abstract:

The physical properties of uranium dinitride (UN2) were investigated in detail using first-principles calculations based on density functional theory. To treat the strong correlation effects caused by the 5f uranium valence electrons, an on-site Coulomb interaction correction via the Hubbard-like term U (DFT+U) was employed. The UN2 structural, mechanical and thermodynamic properties were calculated within DFT and, for various values of U, within the DFT+U approach. The Perdew-Burke-Ernzerhof (PBE.5.2) version of the generalized gradient approximation (GGA) is used to describe the exchange-correlation, with projector-augmented wave (PAW) pseudopotentials. A comparative study shows that some results, such as the structural parameters, are improved by using the Hubbard formalism with an appropriate U value. For some physical properties, such as the Young's modulus, the variation with the Hubbard U is strong, but for others, such as the density of states (DOS) or the bulk modulus, it is weakly noticeable. We also noticed that, from U = 7.5 eV upwards, the elastic results no longer conform to the cubic-cell elastic criteria, since the C44 values turn out to be negative.

Keywords: uranium dinitride, UN2, DFT+U, elastic properties

Procedia PDF Downloads 409
19521 Orbit Determination from Two Position Vectors Using Finite Difference Method

Authors: Akhilesh Kumar, Sathyanarayan G., Nirmala S.

Abstract:

An unusual approach is developed to determine the orbit of satellites/space objects. The determination of orbits is treated as a boundary value problem and has been solved using the finite difference method (FDM). Only the positions of the satellites/space objects are known at two end times, taken as boundary conditions. The finite difference technique has been used to calculate the orbit between the end times. In this approach, the governing equation is defined as the satellite's equation of motion with a perturbed acceleration. Using the finite difference method, the governing equations and boundary conditions are discretized. The resulting system of algebraic equations is solved using the Tri-Diagonal Matrix Algorithm (TDMA) until convergence is achieved. The methodology has been tested and evaluated using all GPS satellite orbits from the National Geospatial-Intelligence Agency (NGA) precise product for DOY 125, 2023. Towards this, twelve two-hour sets have been taken into consideration, and only the positions at the end times of each of the twelve sets are taken as boundary conditions. This algorithm is applied to all GPS satellites, and the results achieved using FDM are compared with the NGA precise orbits. The maximum RSS error for position is 0.48 m and for velocity is 0.43 mm/sec. The algorithm was also applied to the IRNSS satellites for DOY 220, 2023; the maximum RSS error for position is 0.49 m and for velocity is 0.28 mm/sec. Next, a simulation was performed for a highly elliptical orbit for DOY 63, 2023, over a duration of 6 hours. The RSS of the difference in position is 0.92 m and in velocity is 1.58 mm/sec for orbital speeds of more than 5 km/sec, whereas the RSS of the difference in position is 0.13 m and in velocity is 0.12 mm/sec for orbital speeds of less than 5 km/sec. The results show that the newly created method is reliable and accurate. Further applications of the developed methodology include missile and spacecraft targeting, orbit design (mission planning), space rendezvous and interception, space debris correlation, and navigation solutions.
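
The Tri-Diagonal Matrix Algorithm mentioned above (the Thomas algorithm) is standard; a minimal NumPy version is sketched below. It is a generic tridiagonal solver, not the authors' orbit-determination code, and the closing comment about how it fits the boundary-value discretization is an assumption based on the abstract.

```python
import numpy as np

def tdma(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b, super-diagonal c
    and right-hand side d (Thomas algorithm). a[0] and c[-1] are unused."""
    n = len(d)
    cp = np.zeros(n)
    dp = np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# In the boundary-value setting described above, each position component between
# the two known end-time positions would lead to such a tridiagonal system after a
# second-order finite-difference discretization, re-solved until convergence.
```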

Keywords: finite difference method, grid generation, NavIC system, orbit perturbation

Procedia PDF Downloads 49
19520 Estimation of Residual Stresses in Thick Walled Cylinder by Radial Basis Artificial Neural Network

Authors: Mohammad Heidari

Abstract:

In this paper, a method for estimating the residual stresses in autofrettaged tubes made of high-strength steel using artificial neural networks is presented. Many different thick-walled cylinders that were subjected to different conditions were studied. First, the residual stress is calculated by an analytical solution. Then, by varying the parameters that influence the residual stresses, such as the percentage of autofrettage, internal pressure, wall ratio of the cylinder, material properties of the cylinder, and the Bauschinger and hardening effect factors, a neural network is created. These parameters are the inputs of the network, and the output of the network is the residual stress. Numerical data were employed for training the network, and the capability of the model in predicting the residual stress has been verified. The output obtained from the neural network model is compared with the numerical results, and the relative error has been calculated. Based on this verification error, it is shown that the radial basis function neural network has an average error of 2.75% in predicting the residual stress of thick-walled cylinders. Further analysis of the residual stress of thick-walled cylinders under different input conditions has been carried out, and the comparison of the model with the numerical results shows good agreement, which also proves the feasibility and effectiveness of the adopted approach.
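
A radial basis function network of the kind described can be reduced to a small sketch: Gaussian basis functions centred on the training points, with the output weights obtained by linear least squares. The snippet below is a hedged, generic NumPy illustration; the input variables listed in the comment follow the abstract, but the class, its parameters and the training setup are assumptions, not the authors' implementation.

```python
import numpy as np

class RBFNet:
    """Minimal Gaussian radial-basis-function network: fixed centres taken from
    the training inputs, output weights fitted by linear least squares."""
    def __init__(self, centers, sigma):
        self.centers, self.sigma = centers, sigma

    def _design(self, X):
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        Phi = np.exp(-d2 / (2.0 * self.sigma ** 2))
        return np.hstack([Phi, np.ones((len(X), 1))])       # bias column

    def fit(self, X, y):
        self.w, *_ = np.linalg.lstsq(self._design(X), y, rcond=None)
        return self

    def predict(self, X):
        return self._design(X) @ self.w

# hypothetical inputs per sample: [autofrettage %, internal pressure, wall ratio,
# Bauschinger factor, hardening factor]; target: residual stress (analytical solution)
# net = RBFNet(centers=X_train, sigma=1.0).fit(X_train, y_train)
# rel_err = np.abs(net.predict(X_test) - y_test) / np.abs(y_test)
```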

Keywords: thick walled cylinder, residual stress, radial basis, artificial neural network

Procedia PDF Downloads 383
19519 Variable Tree Structure QR Decomposition-M Algorithm (QRD-M) in Multiple Input Multiple Output-Orthogonal Frequency Division Multiplexing (MIMO-OFDM) Systems

Authors: Jae-Hyun Ro, Jong-Kwang Kim, Chang-Hee Kang, Hyoung-Kyu Song

Abstract:

In multiple input multiple output-orthogonal frequency division multiplexing (MIMO-OFDM) systems, the QR decomposition-M algorithm (QRD-M) has suboptimal error performance. However, the QRD-M still has high complexity due to the many calculations at each layer of the tree structure. To reduce the complexity of the QRD-M, the proposed QRD-M modifies the existing tree structure by eliminating unnecessary candidates at almost all layers. The elimination method discards the candidates whose accumulated squared Euclidean distances are larger than a calculated threshold. The simulation results show that the proposed QRD-M has the same bit error rate (BER) performance as the conventional QRD-M, with lower complexity.
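
The pruning idea can be sketched for a small QPSK MIMO detector as follows: QR-decompose the channel, search the symbol tree from the last layer upwards, keep at most M survivors per layer, and additionally drop survivors whose accumulated squared Euclidean distance exceeds a threshold. This is a hedged illustration only; the constellation, the value of M and the threshold rule are assumptions, not the authors' exact scheme.

```python
import numpy as np

QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def qrdm_detect(H, y, M=4, threshold=None):
    """QRD-M detection with optional metric-based pruning of the survivor list."""
    Q, R = np.linalg.qr(H)
    z = Q.conj().T @ y
    n_tx = H.shape[1]
    survivors = [([], 0.0)]                              # (partial symbol vector, metric)
    for layer in range(n_tx - 1, -1, -1):
        expanded = []
        for syms, metric in survivors:
            for s in QPSK:
                cand = [s] + syms                        # symbols for layers layer..n_tx-1
                est = sum(R[layer, layer + j] * cand[j] for j in range(len(cand)))
                expanded.append((cand, metric + abs(z[layer] - est) ** 2))
        expanded.sort(key=lambda t: t[1])
        survivors = expanded[:M]                         # conventional QRD-M: M best
        if threshold is not None:                        # proposed pruning step
            survivors = [survivors[0]] + [c for c in survivors[1:] if c[1] <= threshold]
    return np.array(survivors[0][0])

# usage (hypothetical): x_hat = qrdm_detect(H, y, M=4, threshold=3.0 * noise_var)
```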

Keywords: complexity, MIMO-OFDM, QRD-M, squared Euclidean distance

Procedia PDF Downloads 305
19518 Studies on Affecting Factors of Wheel Slip and Odometry Error on Real-Time of Wheeled Mobile Robots: A Review

Authors: D. Vidhyaprakash, A. Elango

Abstract:

In real-time applications, wheeled mobile robots are increasingly used and operated in extreme and diverse conditions, traversing challenging surfaces such as pitted, uneven terrain, natural flat, smooth terrain, and wet and dry surfaces. In order to accomplish such tasks, it is critical that the motion control functions without wheel slip and odometry error during the navigation of the two-wheeled mobile robot (WMR). Wheel slip and odometry error are disrupting factors for overall WMR performance, causing deviation from the desired trajectory and navigation, longer travel times and higher-than-budgeted energy consumption. The wheeled mobile robot's ability to operate at peak performance on various work surfaces without wheel slippage and odometry error is directly connected to four main parameters: the range of payload distribution, speed, wheel diameter, and wheel width. This paper analyses the effects of those parameters on overall performance and is concerned with determining the ideal range of parameters for optimum performance.

Keywords: wheeled mobile robot, terrain, wheel slippage, odometry error, trajectory

Procedia PDF Downloads 248
19517 The Non-Existence of Perfect 2-Error Correcting Lee Codes of Word Length 7 over Z

Authors: Catarina Cruz, Ana Breda

Abstract:

Tiling problems have been capturing the attention of many mathematicians due to their real-life applications. In this study, we deal with tilings of Zⁿ by Lee spheres, where n is a positive integer, these tilings being related to error correcting codes for the transmission of information over a noisy channel. We focus our attention on the question ‘for what values of n and r does the n-dimensional Lee sphere of radius r tile Zⁿ?’. It seems that the n-dimensional Lee sphere of radius r does not tile Zⁿ for n ≥ 3 and r ≥ 2. Here, we prove that it is not possible to tile Z⁷ with Lee spheres of radius 2, presenting a proof based on a combinatorial method and faithful to the geometric idea of the problem. The non-existence of such tilings has been studied by several authors, with the most difficult cases considered to be those in which the radius of the Lee spheres is equal to 2. The relation between these tilings and error correcting codes is established by considering the center of a Lee sphere as a codeword and the other elements of the sphere as words which are decoded by the central codeword. When the Lee spheres of radius r centered at elements of a set M ⊂ Zⁿ tile Zⁿ, M is a perfect r-error correcting Lee code of word length n over Z, denoted by PL(n, r). Our strategy to prove the non-existence of PL(7, 2) codes is based on the assumption that such a code M exists. Without loss of generality, we suppose that O ∈ M, where O = (0, ..., 0). In this sense, and taking into account that we are dealing with Lee spheres of radius 2, O covers all words which are distant two or fewer units from it. By the definition of a PL(7, 2) code, each word which is distant three units from O must be covered by a unique codeword of M. These words have to be covered by codewords which are distant five units from O. We prove the non-existence of PL(7, 2) codes by showing that it is not possible to cover all the referred words without superposition of Lee spheres whose centers are distant five units from O, contradicting the definition of a PL(7, 2) code. We achieve this contradiction by combining the cardinalities of particular subsets of codewords which are distant five units from O. There exists an extensive literature on codes in the Lee metric. Here, we present a new approach to prove the non-existence of PL(7, 2) codes.

Keywords: Golomb-Welch conjecture, Lee metric, perfect Lee codes, tilings

Procedia PDF Downloads 130
19516 Malay ESL (English as a Second Language) Students' Difficulties in Using English Prepositions

Authors: Chek Kim Loi

Abstract:

The study attempts to undertake an error analysis of prepositions employed in the written work of Form 4 Malay ESL (English as a Second Language) students in Malaysia. The error analysis is undertaken using Richards’s (1974) framework of intralingual and interlingual errors and Bennett’s (1975) framework in identifying prepositional concepts found in the sample. The study first identifies common prepositional errors in the written texts of 150 student participants. It then measures the relative intensities of these errors and finds out the possible causes for the occurrences of these errors. In this study, one significant finding is that among the nine concepts of prepositions examined, the participant students tended to make errors in the use of prepositions of time and place. The present study has pedagogical implications in teaching English prepositions to Malay ESL students.

Keywords: error, interlingual, intralingual, preposition

Procedia PDF Downloads 168
19515 Application of NBR 14861:2011 for the Design of Prestressed Hollow Core Slabs Subjected to Shear

Authors: Alessandra Aparecida Vieira França, Adriana de Paula Lacerda Santos, Mauro Lacerda Santos Filho

Abstract:

The purpose of this research is to study the behavior of precast prestressed hollow core slabs subjected to shear. In order to achieve this goal, shear tests were performed using hollow core slabs 26.5 cm thick, with and without a concrete cover of 5 cm, with no cores filled, with two cores filled and with three cores filled with concrete. The tests were performed according to the procedures recommended by FIP (1992) and EN 1168:2005, following the method presented in Costa (2009). The ultimate shear strength obtained in the tests was compared with the theoretical shear resistance calculated in accordance with the codes used in Brazil, namely NBR 6118:2003 and NBR 14861:2011. When calculating the shear resistance through the equations presented in NBR 14861:2011, it was found that this provision is much more accurate for the calculation of the shear strength of hollow core slabs than NBR 6118. Due to the large difference between the calculated results, even for slabs without filled cores, the authors consulted the committee that drafted NBR 14861:2011 and found that there is an error in the text of the standard, because the suggested coefficient is actually double the required value. ABNT later issued an amendment to NBR 14861:2011 with the necessary corrections. During the tests for the present study, it was confirmed that the concrete filling the cores contributes to increasing the shear strength of hollow core slabs. However, in the case of slabs 26.5 cm thick, the number should be limited to a maximum of two filled cores, because most of the results for slabs with three filled cores were smaller. This confirms the recommendation of NBR 14861:2011, which is consistent with standard practice. After analyzing the cracking configuration and failure mechanisms of the hollow core slabs during the shear tests, strut-and-tie models were developed representing the forces acting on the slab at the moment of rupture. Through these models the authors were able to calculate the tensile stress acting on the concrete ties (ribs) and to scale the geometry of these ties. The conclusions of the research are that the experimental results show that the failure mechanism of the hollow core slabs can be predicted using the strut-and-tie procedure within a good range of accuracy; that the Brazilian standard needed correction to revise the duplicated correction factor σcp (in NBR 14861:2011); and that the number of cores (holes) to be filled with concrete should be limited when increasing the shear resistance of the slab. It is also suggested to increase the number of test results for slabs 26.5 cm thick, and for a larger range of slab thicknesses, in order to obtain results of shear tests with cores concreted after the release of the prestressing force. Another set of shear tests must be performed on slabs with filled cores and a concrete cover reinforced with welded steel mesh, for comparison with the theoretical values calculated by the new revision of the standard NBR 14861:2011.

Keywords: prestressed hollow core slabs, shear, strut, tie models

Procedia PDF Downloads 303
19514 Experimental Investigation on Residual Stresses in Welded Medium-Walled I-shaped Sections Fabricated from Q460GJ Structural Steel Plates

Authors: Qian Zhu, Shidong Nie, Bo Yang, Gang Xiong, Guoxin Dai

Abstract:

GJ steel is a new type of high-performance structural steel which has been increasingly adopted in practical engineering. Q460GJ structural steel has a nominal yield strength of 460 MPa, which does not decrease significantly with the increase of plate thickness, unlike normal structural steel. Thus, Q460GJ structural steel is normally used in medium-walled welded sections. However, research on the residual stress in GJ steel members is scarce, even though residual stress is one of the vital factors that can affect member and structural behavior. This article investigates the residual stresses in welded I-shaped sections fabricated from Q460GJ structural steel plates by experimental tests. A total of four full-scale welded medium-walled I-shaped sections were tested by the sectioning method. Both the circular curve correction method and the straightening measurement method were adopted in this study to obtain the final magnitude and distribution of the longitudinal residual stresses. In addition, this paper also explores the interaction between flanges and webs. Based on a statistical evaluation of the experimental data, a multilayer residual stress model is proposed.

Keywords: Q460GJ structural steel, residual stresses, sectioning method, welded medium-walled I-shaped sections

Procedia PDF Downloads 289
19513 Wasting Human and Computer Resources

Authors: Mária Csernoch, Piroska Biró

Abstract:

The legends about “user-friendly” and “easy-to-use” birotical tools (computer-related office tools) have been spreading and misleading end-users. This approach has led to an extremely high number of incorrect documents, causing serious financial losses in the creating, modifying, and retrieving processes. Our research proved that there are at least two sources of this underachievement: (1) The lack of a definition of correctly edited, formatted documents. Consequently, end-users do not know whether their methods and results are correct or not. They are not aware of their ignorance, and are so ignorant that their ignorance does not allow them to realize their lack of knowledge. (2) The end-users’ problem-solving methods. We have found that in non-traditional programming environments end-users apply, almost exclusively, surface-approach metacognitive methods to carry out their computer-related activities, which have proved less effective than deep-approach methods. Based on these findings we have developed deep-approach methods which are based on and adapted from traditional programming languages. In this study, we focus on the most popular type of birotical documents, the text-based documents. We have provided a definition of correctly edited text and, based on this definition, adapted the debugging method known from programming. According to the method, before real text editing is carried out, a thorough debugging of already existing texts and a categorization of errors are performed. With this method, in advance of real text editing, users learn the requirements of text-based documents and of correctly formatted text. The method has proved much more effective than the previously applied surface-approach methods. The advantages of the method are that real text handling requires much less human and computer resources than clicking aimlessly in the GUI (Graphical User Interface), and that data retrieval is much more effective than from error-prone documents.

Keywords: deep approach metacognitive methods, error-prone birotical documents, financial losses, human and computer resources

Procedia PDF Downloads 359
19512 Image Features Comparison-Based Position Estimation Method Using a Camera Sensor

Authors: Jinseon Song, Yongwan Park

Abstract:

In this paper, we propose a method that can estimate a user's position based on a database built from a single camera. Previous positioning approaches calculate distance from the arrival time of signals, as in GPS (Global Positioning System) and RF (Radio Frequency) systems. However, these previous methods have a weakness: they have a large error range due to signal interference. Our method addresses this by estimating position with a camera sensor. However, a single camera makes it difficult to obtain relative position data, and a stereo camera makes it difficult to provide real-time position data because of the large amount of image data. First of all, in this research we build an image database of the space in which the positioning service is to be provided, using a single camera. Next, we judge similarity through image matching between the database images and the image transmitted from the user. Finally, we determine the position of the user from the position of the most similar database image. To verify the proposed method, we experimented in real indoor and outdoor environments. The proposed method has a wide positioning range, and it can determine not only the position of the user but also the direction.

Keywords: positioning, distance, camera, features, SURF(Speed-Up Robust Features), database, estimation

Procedia PDF Downloads 320
19511 Three-Dimensional Positioning Method of Indoor Personnel Based on Millimeter Wave Radar Sensor

Authors: Chao Wang, Zuxue Xia, Wenhai Xia, Rui Wang, Jiayuan Hu, Rui Cheng

Abstract:

Aiming at the application of indoor personnel positioning under smog conditions, this paper proposes a 3D positioning method based on the IWR1443 millimeter wave radar sensor. The problem that millimeter-wave radar cannot effectively form contours in 3D point cloud imaging is solved. The results show that the method can effectively achieve indoor positioning and scene construction, and the maximum positioning error of the system is 0.130m.

Keywords: indoor positioning, millimeter wave radar, IWR1443 sensor, point cloud imaging

Procedia PDF Downloads 66
19510 Dynamic Compensation for Environmental Temperature Variation in the Coolant Refrigeration Cycle as a Means of Increasing Machine-Tool Precision

Authors: Robbie C. Murchison, Ibrahim Küçükdemiral, Andrew Cowell

Abstract:

Thermal effects are the largest source of dimensional error in precision machining, and a major proportion is caused by ambient temperature variation. The use of coolant is a primary means of mitigating these effects, but there has been limited work on coolant temperature control. This research critically explored whether CNC-machine coolant refrigeration systems adapted to actively compensate for ambient temperature variation could increase machining accuracy. Accuracy data were collected from operators’ checklists for a CNC 5-axis mill and statistically reduced to bias and precision metrics for observations of one day over a sample period of 27 days. Temperature data were collected using three USB dataloggers in ambient air, the chiller inflow, and the chiller outflow. The accuracy and temperature data were analysed using Pearson correlation, then the thermodynamics of the system were described using system identification with MATLAB. It was found that 75% of thermal error is reflected in the hot coolant temperature but that this is negligibly dependent on ambient temperature. The effect of the coolant refrigeration process on hot coolant outflow temperature was also found to be negligible. Therefore, the evidence indicated that it would not be beneficial to adapt coolant chillers to compensate for ambient temperature variation. However, it is concluded that hot coolant outflow temperature is a robust and accessible source of thermal error data which could be used for prevention strategy evaluation or as the basis of other thermal error strategies.

Keywords: CNC manufacturing, machine-tool, precision machining, thermal error

Procedia PDF Downloads 56
19509 Approximation of Geodesics on Meshes with Implementation in Rhinoceros Software

Authors: Marian Sagat, Mariana Remesikova

Abstract:

In civil engineering, there is the problem of how to industrially produce tensile membrane structures that are non-developable surfaces. Non-developable surfaces can only be developed with a certain error, and we want to minimize this error. To that end, the non-developable surfaces are cut into plates along geodesic curves. We propose a numerical algorithm for finding approximations of open geodesics on meshes and surfaces based on geodesic curvature flow. For practical reasons, it is important to automate the choice of the time step. We propose a method for automatic setting of the time step based on the diagonal dominance criterion for the matrix of the linear system obtained by discretization of our partial differential equation model. Practical experiments show the reliability of this method. Because the approximation of the model is made by a numerical method based on classical derivatives, it is necessary to overcome obstacles which occur for meshes with sharp corners. We solve this problem for a large family of meshes with sharp corners via special rotations which can be seen as partial unfoldings of the mesh. In practical applications, it is required that the approximation of the geodesic has its vertices only on the edges of the mesh. This problem is solved by a specially designed point-tracking algorithm. We also partially solve the problem of finding geodesics on meshes with holes. We implemented the whole algorithm in Rhinoceros (commercial 3D computer graphics and computer-aided design software). This was done using the C# language, as a C# assembly library for Grasshopper, which is a plugin for Rhinoceros.

Keywords: geodesic, geodesic curvature flow, mesh, Rhinoceros software

Procedia PDF Downloads 121
19508 Modelling Volatility of Cryptocurrencies: Evidence from GARCH Family of Models with Skewed Error Innovation Distributions

Authors: Timothy Kayode Samson, Adedoyin Isola Lawal

Abstract:

The past five years have shown a sharp increase in public interest in the crypto market, with its market capitalization growing from $100 billion in June 2017 to $2158.42 billion on April 5, 2022. Despite the extreme volatility of cryptocurrencies, the use of skewed error innovation distributions in modelling the volatility behaviour of these digital currencies has not been given much research attention. Hence, this study models the volatility of the 5 largest cryptocurrencies by market capitalization (Bitcoin, Ethereum, Tether, Binance coin, and USD Coin) using four variants of GARCH models (GJR-GARCH, sGARCH, EGARCH, and APARCH) estimated using three skewed error innovation distributions (skewed normal, skewed Student-t and skewed generalized error innovation distributions). Daily closing prices of these currencies were obtained from the Yahoo Finance website. The findings reveal that the Binance coin reported higher mean returns compared with the other digital currencies, while the skewness indicates that the Binance coin, Tether, and USD coin increased more than they decreased in value within the period of study. For both Bitcoin and Ethereum, negative skewness was obtained, meaning that within the period of study the returns of these currencies decreased more than they increased in value. Returns from these cryptocurrencies were found to be stationary but not normally distributed, with evidence of the ARCH effect. The skewness parameters in all best forecasting models were significant (p<.05), justifying the use of skewed error innovation distributions with a fatter tail than the normal, Student-t, and generalized error innovation distributions. For the Binance coin, EGARCH-sstd outperformed the other volatility models, while for Bitcoin, Ethereum, Tether, and USD coin, the best forecasting models were EGARCH-sstd, APARCH-sstd, EGARCH-sged, and GJR-GARCH-sstd, respectively. This suggests the superiority of the skewed Student-t distribution and the skewed generalized error distribution over the skewed normal distribution.
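
In Python, one of these specifications can be reproduced with the arch package, which supports EGARCH volatility with a skewed Student-t innovation distribution. The sketch below is only an illustration of that workflow; the file name, return construction and parameter orders are assumptions, not the study's exact setup.

```python
import numpy as np
import pandas as pd
from arch import arch_model

# hypothetical input: daily closing prices indexed by date
prices = pd.read_csv("btc_close.csv", index_col=0, parse_dates=True)["close"]
returns = 100 * np.log(prices).diff().dropna()        # log returns in percent

# EGARCH(1,1) with asymmetry term and a skewed Student-t error distribution
model = arch_model(returns, vol="EGARCH", p=1, o=1, q=1, dist="skewt")
res = model.fit(disp="off")
print(res.summary())

# one-step-ahead conditional volatility forecast
print(res.forecast(horizon=1).variance.iloc[-1] ** 0.5)
```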

Keywords: skewed generalized error distribution, skewed normal distribution, skewed student t- distribution, APARCH, EGARCH, sGARCH, GJR-GARCH

Procedia PDF Downloads 66
19507 In-Flight Aircraft Performance Model Enhancement Using Adaptive Lookup Tables

Authors: Georges Ghazi, Magali Gelhaye, Ruxandra Botez

Abstract:

Over the years, the Flight Management System (FMS) has experienced a continuous improvement of its many features, to the point of becoming the pilot's primary interface for flight planning operations on the airplane. With the assistance of the FMS, the concept of distance and time has been completely revolutionized, providing the crew members with the determination of the optimized route (or flight plan) from the departure airport to the arrival airport. To accomplish this function, the FMS needs an accurate Aircraft Performance Model (APM) of the aircraft. In general, the APMs that equip most modern FMSs are established before the entry into service of an individual aircraft and result from the combination of a set of ordinary differential equations and a set of performance databases. Unfortunately, an aircraft in service is constantly exposed to dynamic loads that degrade its flight characteristics. These degradations have two main origins: airframe deterioration (control surfaces rigging, seals missing or damaged, etc.) and engine performance degradation (fuel consumption increase for a given thrust). Thus, after several years of service, the performance databases and the APM associated with a specific aircraft are no longer representative enough of the actual aircraft performance. It is important to monitor the trend of the performance deterioration and correct the uncertainties of the aircraft model in order to improve the accuracy of the flight management system predictions. The basis of this research lies in the new ability to continuously update an Aircraft Performance Model (APM) during flight using an adaptive lookup table technique. This methodology was developed and applied to the well-known Cessna Citation X business aircraft. For the purpose of this study, a level D Research Aircraft Flight Simulator (RAFS) was used as a test aircraft. According to the Federal Aviation Administration, level D is the highest certification level for flight dynamics modeling. Basically, using data available in the Flight Crew Operating Manual (FCOM), a first APM describing the variation of the engine fan speed and aircraft fuel flow with respect to flight conditions was derived. This model was next improved using the proposed methodology. To do that, several cruise flights were performed using the RAFS. An algorithm was developed to frequently sample the aircraft sensor measurements during the flight and compare the model predictions with the actual measurements. Based on these comparisons, a correction was performed on the APM in order to minimize the error between the predicted data and the measured data. In this way, as the aircraft flies, the APM will be continuously enhanced, making the FMS more and more precise and the prediction of trajectories more realistic and more reliable. The results obtained are very encouraging. Indeed, using the tables initialized with the FCOM data, only a few iterations were needed to reduce the fuel flow prediction error from an average relative error of 12% to 0.3%. Similarly, the error of the FCOM prediction of the engine fan speed was reduced from a maximum deviation of 5.0% to 0.2% after only ten flights.
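
The adaptive-lookup-table idea can be illustrated as a grid of correction cells indexed by flight condition, where each in-flight sample nudges the local correction toward the observed prediction error. The sketch below is a hedged, generic illustration; the grid variables (altitude, Mach), the update gain and the function names are assumptions, not the authors' implementation.

```python
import numpy as np

class AdaptiveTable:
    """Correction table over (altitude, Mach) cells. Each in-flight sample nudges
    the local correction toward the observed prediction error (simple low-pass)."""
    def __init__(self, alt_grid, mach_grid, gain=0.2):
        self.alt_grid, self.mach_grid, self.gain = alt_grid, mach_grid, gain
        self.corr = np.zeros((len(alt_grid), len(mach_grid)))   # additive corrections

    def _cell(self, alt, mach):
        i = np.clip(np.searchsorted(self.alt_grid, alt) - 1, 0, len(self.alt_grid) - 1)
        j = np.clip(np.searchsorted(self.mach_grid, mach) - 1, 0, len(self.mach_grid) - 1)
        return i, j

    def update(self, alt, mach, measured, predicted):
        i, j = self._cell(alt, mach)
        error = measured - predicted
        self.corr[i, j] += self.gain * (error - self.corr[i, j])

    def corrected(self, alt, mach, predicted):
        i, j = self._cell(alt, mach)
        return predicted + self.corr[i, j]

# usage (hypothetical), for each cruise sample:
#   table.update(alt, mach, fuel_flow_measured, fuel_flow_fcom(alt, mach, weight))
#   fuel_flow_est = table.corrected(alt, mach, fuel_flow_fcom(alt, mach, weight))
```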

Keywords: aircraft performance, cruise, trajectory optimization, adaptive lookup tables, Cessna Citation X

Procedia PDF Downloads 224
19506 The Use of Image Processing Responses Tools Applied to Analysing Bouguer Gravity Anomaly Map (Tangier-Tetuan's Area-Morocco)

Authors: Saad Bakkali

Abstract:

Image processing is a powerful tool for the enhancement of edges in images used in the interpretation of geophysical potential field data. Aerial and terrestrial gravimetric surveys were carried out in the region of Tangier-Tetuan. From the observed and measured gravity data, a Bouguer gravity anomaly map was prepared. This paper reports the results and interpretations of the transformed Bouguer gravity anomaly maps of the Tangier-Tetuan area using image processing. Filtering analysis based on classical image processing was applied, using operators such as logarithmic enhancement and gamma correction. The paper also presents the results obtained from this image processing analysis of the edge enhancement of the Bouguer gravity anomaly map of the Tangier-Tetuan zone.
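
The two operators named above are simple point-wise transforms of a normalized anomaly grid. The NumPy sketch below is a hedged illustration of logarithmic enhancement and gamma correction only; the parameter values and the name of the grid variable are assumptions.

```python
import numpy as np

def normalize(grid):
    """Rescale a 2-D anomaly grid to [0, 1]."""
    g = grid.astype(float)
    return (g - g.min()) / (g.max() - g.min())

def gamma_correction(grid, gamma=0.5):
    """Gamma < 1 brightens low-amplitude anomalies, sharpening their edges."""
    return normalize(grid) ** gamma

def log_enhancement(grid, c=1.0):
    """Logarithmic stretch compresses large amplitudes and lifts subtle gradients."""
    g = normalize(grid)
    return np.log1p(c * g) / np.log1p(c)

# usage (hypothetical): enhanced = gamma_correction(bouguer_grid, gamma=0.4)
```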

Keywords: bouguer, tangier, filtering, gamma correction, logarithmic enhancement edges

Procedia PDF Downloads 398
19505 Velocity Logs Error Reduction for In-Service Calibration of Vessel Performance Indicators

Authors: Maria Tsompanoglou, Dimitris Armenis

Abstract:

Vessel behavior in different operational and weather conditions constitutes the main area of interest for the ship operator. Ship speed and fuel consumption are the most decisive parameters in this respect, as their correlation provides information about the economic and environmental efficiency of the vessel, becoming the basis of decision making in terms of maintenance and trading. In the analysis of the vessel operational profile for the evaluation of fuel consumption and the equivalent CO2 emissions footprint, the indications of Speed Through Water are widely used. The seasonal and regional variations in seawater characteristics, which are available nowadays, can provide the basis for an accurate estimation of the errors in Speed Through Water indications at any time. Accuracy in the speed value on a route basis can enable the operator to identify the ship's fuel and propulsion efficiency and proceed with improvements. This paper discusses case studies where the actual vessel speed was corrected by a post-processing algorithm. The effects of this correction on standard Key Performance Indicators, as well as operational findings not identified earlier, are also discussed.

Keywords: data analytics, MATLAB, vessel performance monitoring, speed through water

Procedia PDF Downloads 273
19504 Numerical Evolution Methods of Rational Form for Diffusion Equations

Authors: Said Algarni

Abstract:

The purpose of this study was to investigate selected numerical methods that demonstrate good performance in solving PDEs. We adapted an alternative method that involves rational polynomials. The Padé time stepping (PTS) method, which is highly stable for the purposes of the present application and is associated with lower computational costs, was applied. Furthermore, PTS was modified for our study, which focused on diffusion equations. Numerical runs were conducted to obtain the optimal local error control threshold.
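
As a point of reference, the (1,1) Padé approximant of the matrix exponential applied to the semi-discretized heat equation gives the familiar Crank-Nicolson step. The sketch below shows only that baseline rational time step under assumed Dirichlet boundary conditions; it is not the modified scheme of the paper.

```python
import numpy as np

def pade_heat_step(u, D, dx, dt):
    """One (1,1) Pade (Crank-Nicolson) step for u_t = D*u_xx with zero Dirichlet ends:
    (I - dt/2*A) u_new = (I + dt/2*A) u_old, where A = D/dx^2 * tridiag(1, -2, 1)."""
    n = len(u)
    A = (D / dx**2) * (np.diag(-2.0 * np.ones(n)) +
                       np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1))
    I = np.eye(n)
    return np.linalg.solve(I - 0.5 * dt * A, (I + 0.5 * dt * A) @ u)

# usage: march an initial profile forward; the local error against a reference
# solution is what an error-control threshold would monitor
x = np.linspace(0.0, 1.0, 101)[1:-1]            # interior nodes, u = 0 at both ends
u = np.sin(np.pi * x)
for _ in range(100):
    u = pade_heat_step(u, D=1.0, dx=x[1] - x[0], dt=1e-3)
```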

Keywords: Padé time stepping, finite difference, reaction diffusion equation, PDEs

Procedia PDF Downloads 274
19503 Development of Advanced Linear Calibration Technique for Air Flow Sensing by Using CTA-Based Hot Wire Anemometry

Authors: Ming-Jong Tsai, T. M. Wu, R. C. Chu

Abstract:

The purpose of this study is to develop an advanced linear calibration technique for air flow sensing using CTA-based hot-wire anemometry. The system contains a host PC with a human-machine interface, a wind tunnel, a wind speed controller, an automatic data acquisition module, and a nonlinear calibration model. To reduce the fitting error obtained when using a single fitting polynomial, this study proposes a Multiple Three-Order Polynomial Fitting Method (MPFM) for fitting the non-linear output of a CTA-based hot-wire anemometer. The CTA-based anemometer with built-in fitting parameters is installed in the wind tunnel, and the wind speed is controlled by the PC-based controller. The hot-wire anemometer's thermistor resistance change is converted into a voltage signal or temperature difference, and then sent to the PC through a DAQ card. After the measurements of the original signal are completed, the multiple-polynomial coefficients can be automatically calculated and then sent to the micro-processor in the hot-wire anemometer. Finally, the corrected hot-wire anemometer is verified for linearity, repeatability and error percentage, and the system outputs quality control reports.
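
The multiple three-order polynomial idea can be sketched as piecewise cubic fits over sub-ranges of the bridge voltage, selected at run time by the measured voltage. The NumPy sketch below is a generic illustration; the number of segments, the break-point choice and the variable names are assumptions, not the exact MPFM procedure.

```python
import numpy as np

def fit_segments(voltage, speed, n_segments=3):
    """Fit one cubic polynomial per voltage sub-range (the 'multiple three-order
    polynomial' idea) and return the break points and coefficient sets."""
    edges = np.quantile(voltage, np.linspace(0, 1, n_segments + 1))
    coeffs = []
    for k in range(n_segments):
        mask = (voltage >= edges[k]) & (voltage <= edges[k + 1])
        coeffs.append(np.polyfit(voltage[mask], speed[mask], deg=3))
    return edges, coeffs

def apply_calibration(v, edges, coeffs):
    """Evaluate the calibrated wind speed for a measured bridge voltage v."""
    k = int(np.clip(np.searchsorted(edges, v) - 1, 0, len(coeffs) - 1))
    return np.polyval(coeffs[k], v)

# usage (hypothetical): edges, coeffs = fit_segments(v_cal, u_ref)
#                       u = apply_calibration(v_meas, edges, coeffs)
```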

Keywords: flow rate sensing, hot wire, constant temperature anemometry (CTA), linear calibration, multiple three-order polynomial fitting method (MPFM), temperature compensation

Procedia PDF Downloads 385
19502 Channel Estimation for LTE Downlink

Authors: Rashi Jain

Abstract:

LTE systems employ Orthogonal Frequency Division Multiplexing (OFDM) as the multiple access technology for the downlink channels. For enhanced performance, accurate channel estimation is required. Various algorithms such as Least Squares (LS), Minimum Mean Square Error (MMSE) and Recursive Least Squares (RLS) can be employed for this purpose. This paper proposes a channel estimation algorithm based on the Kalman filter for the LTE downlink. Using the frequency-domain pilots, the initial channel response is obtained using the LS criterion. Then, a Kalman filter is employed to track the channel variations in the time domain. To suppress the noise within a symbol, threshold processing is employed. The paper draws a comparison between the LS, MMSE, RLS and Kalman filter approaches for channel estimation. The parameters for evaluation are Bit Error Rate (BER), Mean Square Error (MSE) and run time.
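
The estimator described above can be sketched as an LS estimate at the pilot subcarriers followed by a scalar random-walk Kalman filter tracking each subcarrier gain from symbol to symbol, with a magnitude threshold to suppress noise-only taps. The snippet below is a hedged, simplified illustration; the state model, noise variances and threshold rule are assumptions, not the paper's exact algorithm.

```python
import numpy as np

def ls_estimate(rx_pilots, tx_pilots):
    """Initial least-squares channel estimate at the pilot subcarriers."""
    return rx_pilots / tx_pilots

def kalman_track(h_ls_sequence, q=1e-3, r=1e-2):
    """Track each subcarrier's channel gain over time with a random-walk model:
    h[k] = h[k-1] + w (variance q), observation y = h + v (variance r)."""
    h_est = h_ls_sequence[0].copy()
    p = np.ones(h_est.shape)
    tracked = [h_est.copy()]
    for y in h_ls_sequence[1:]:
        p = p + q                       # predict
        k_gain = p / (p + r)            # update
        h_est = h_est + k_gain * (y - h_est)
        p = (1.0 - k_gain) * p
        tracked.append(h_est.copy())
    return np.array(tracked)

def threshold(h, level):
    """Suppress noise-only taps whose magnitude falls below a threshold."""
    return np.where(np.abs(h) > level, h, 0.0)

# usage (hypothetical):
# h_hat = kalman_track([ls_estimate(y_p, x_p) for y_p, x_p in frames])
```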

Keywords: LTE, channel estimation, OFDM, RLS, Kalman filter, threshold

Procedia PDF Downloads 330
19501 Using Real Truck Tours Feedback for Address Geocoding Correction

Authors: Dalicia Bouallouche, Jean-Baptiste Vioix, Stéphane Millot, Eric Busvelle

Abstract:

When researchers or logistics software developers deal with vehicle routing optimization, they mainly focus on minimizing the total travelled distance or the total time spent in the tours by the trucks, and on maximizing the number of visited customers. They assume that the upstream real data used to carry out the optimization of a transporter's tours, such as customers' real constraints, customers' addresses and their GPS coordinates, are free from errors. However, in real transporter situations, upstream data are often of bad quality because of address geocoding errors and the irrelevance of the addresses received from the EDI (Electronic Data Interchange). In fact, geocoders are not exempt from errors and can return inaccurate GPS coordinates. Also, even with a good geocoder, an inaccurate address can lead to a bad geocoding. For instance, when the geocoder has trouble geocoding an address, it returns the coordinates of the city centre. Another obvious geocoding issue is that the maps used by the geocoders are not regularly updated, so new buildings may not exist on the maps until the next update. Trying to optimize tours with inaccurate customer GPS coordinates, which are the most important and basic input data for solving a vehicle routing problem, is not really useful and will lead to bad and incoherent solution tours, because the locations of the customers used for the optimization are very different from their real positions. Our work is supported by a logistics software vendor, Tedies, and a transport company, Upsilon. We work with Upsilon's truck route data to carry out our experiments. These trucks are equipped with TOMTOM GPS units that continuously save their tour data (positions, speeds, tachograph information, etc.). We then retrieve these data to extract the real truck routes to work with. The aim of this work is to use the experience of the driver and the feedback of the real truck tours to validate the GPS coordinates of well-geocoded addresses and to bring a correction to badly geocoded addresses. Thereby, when a vehicle makes its tour, for each visited customer, the vehicle might have trouble finding the customer's address at most once; in other words, the vehicle would be wrong at most once for each customer's address. Our method significantly improves the quality of the geocoding: we are able to automatically correct an average of 70% of the GPS coordinates of a tour's addresses. The remaining GPS coordinates are corrected manually, with the system giving the user indications to help correct them. This study shows the importance of taking into account the feedback of the trucks to gradually correct address geocoding errors. Indeed, the accuracy of a customer's address and its GPS coordinates plays a major role in tour optimization. Unfortunately, address writing errors are very frequent. This feedback is naturally and usually taken into account by transporters (by asking drivers, calling customers, etc.) to learn about their tours and bring corrections to upcoming tours. Hence, we develop a method to do a large part of this automatically.
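
A minimal sketch of the correction loop: compare the geocoded coordinates of each visited customer with the position where the truck actually stopped, and replace the geocode when the two disagree by more than a distance threshold. This is an illustration under assumptions only (the threshold value, data structures and function names are hypothetical), not the authors' algorithm.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    R = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def correct_geocodes(customers, stops, threshold_m=150.0):
    """For each visited customer, compare the geocoded position with the truck's
    recorded stop and replace the geocode when the two disagree too much."""
    corrected = {}
    for cust_id, geo in customers.items():          # geo = (lat, lon) from the geocoder
        stop = stops.get(cust_id)                   # (lat, lon) where the truck stopped
        if stop is not None and haversine_m(*geo, *stop) > threshold_m:
            corrected[cust_id] = stop               # trust the tour feedback
        else:
            corrected[cust_id] = geo
    return corrected
```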

Keywords: driver experience feedback, geocoding correction, real truck tours

Procedia PDF Downloads 642