Search results for: machining error
1492 Estimation of PM10 Concentration Using Ground Measurements and Landsat 8 OLI Satellite Image
Authors: Salah Abdul Hameed Saleh, Ghada Hasan
Abstract:
The aim of this work is to produce an empirical model for the determination of particulate matter (PM10) concentration in the atmosphere using the visible bands of a Landsat 8 OLI satellite image over Kirkuk city, Iraq. The suggested algorithm is based on the aerosol optical reflectance model. The reflectance model is a function of the optical properties of the atmosphere, which can be related to aerosol concentrations. The PM10 concentration measurements were collected using a Particle Mass Profiler and Counter in a Single Handheld Unit (Aerocet 531) meter simultaneously with the Landsat 8 OLI satellite image acquisition date. The PM10 measurement locations were defined by a handheld global positioning system (GPS). The reflectance values obtained for the visible bands (coastal aerosol, blue, green and red bands) of the Landsat 8 OLI image were correlated with the in-situ measured PM10. The feasibility of the proposed algorithms was investigated based on the correlation coefficient (R) and root-mean-square error (RMSE) compared with the PM10 ground measurement data. The choice of the proposed multispectral model was based on the highest correlation coefficient (R) and the lowest root-mean-square error (RMSE) with respect to the PM10 ground data. The outcomes of this research showed that the visible bands of Landsat 8 OLI are capable of estimating PM10 concentration with an acceptable level of accuracy.
Keywords: air pollution, PM10 concentration, Landsat 8 OLI image, reflectance, multispectral algorithms, Kirkuk area
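As an illustration of the kind of multispectral algorithm described in this abstract, the sketch below fits a linear model relating PM10 to the reflectance of the four visible OLI bands and reports the correlation coefficient (R) and RMSE against ground measurements. The reflectance values, PM10 readings and resulting coefficients are hypothetical placeholders, not the authors' calibrated model.

```python
import numpy as np

# Hypothetical ground-truth PM10 (ug/m^3) and surface reflectance of the
# four visible OLI bands (coastal aerosol, blue, green, red) at the same sites.
pm10 = np.array([55.0, 72.0, 48.0, 90.0, 63.0, 81.0, 40.0, 97.0])
reflectance = np.array([
    [0.11, 0.10, 0.09, 0.08],
    [0.14, 0.13, 0.11, 0.10],
    [0.10, 0.09, 0.08, 0.07],
    [0.17, 0.16, 0.14, 0.12],
    [0.12, 0.11, 0.10, 0.09],
    [0.15, 0.14, 0.12, 0.11],
    [0.09, 0.08, 0.07, 0.06],
    [0.18, 0.17, 0.15, 0.13],
])

# Least-squares fit of PM10 = a0 + a1*B1 + a2*B2 + a3*B3 + a4*B4.
X = np.column_stack([np.ones(len(pm10)), reflectance])
coeffs, *_ = np.linalg.lstsq(X, pm10, rcond=None)
predicted = X @ coeffs

# Model-selection criteria named in the abstract: R and RMSE.
r = np.corrcoef(pm10, predicted)[0, 1]
rmse = np.sqrt(np.mean((pm10 - predicted) ** 2))
print("coefficients:", np.round(coeffs, 2))
print(f"R = {r:.3f}, RMSE = {rmse:.2f} ug/m^3")
```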
Procedia PDF Downloads 441
1491 An Evaluation of the Oxide Layers in Machining Swarfs to Improve Recycling
Authors: J. Uka, B. McKay, T. Minton, O. Adole, R. Lewis, S. J. Glanvill, L. Anguilano
Abstract:
Effective heat treatment conditions to obtain maximum aluminium swarf recycling are investigated in this work. Aluminium swarf briquettes underwent treatments at different temperatures and cooling times to investigate the improvements obtained in the recovery of aluminium metal. The main issue for the recovery of the metal from swarfs is to overcome the constraints due to the oxide layers, which are present in high concentration in the swarfs since they have a high surface area. Briquettes supplied by Renishaw were heat treated at 650, 700, 750, 800 and 850 ℃ for 1 hour and then cooled at 2.3, 3.5 and 5 ℃/min. The resulting material was analysed using SEM-EDX to observe the oxygen diffusion and aluminium coalescence at the boundary between adjacent swarfs. Preliminary results show that swarf needs to be heat treated at 850 ℃ and cooled slowly at 2.3 ℃/min to produce thin and discontinuous alumina layers between adjacent swarfs, consequently allowing aluminium coalescence. This has the potential to save energy and provide maximum financial profit in the preparation of swarf briquettes for recycling.
Keywords: reuse, recycle, aluminium, swarf, oxide layers
Procedia PDF Downloads 131
1490 Exploring Time-Series Phosphoproteomic Datasets in the Context of Network Models
Authors: Sandeep Kaur, Jenny Vuong, Marcel Julliard, Sean O'Donoghue
Abstract:
Time-series data are useful for modelling as they can enable model evaluation. However, when reconstructing models from phosphoproteomic data, non-exact methods are often utilised, as the knowledge regarding the network structure, such as which kinases and phosphatases lead to the observed phosphorylation state, is incomplete. Thus, such reactions are often hypothesised, which gives rise to uncertainty. Here, we propose a framework, implemented via a web-based tool (as an extension to Minardo), which, given time-series phosphoproteomic datasets, can generate κ models. The incompleteness and uncertainty in the generated model and reactions are clearly presented to the user visually. Furthermore, we demonstrate, via a toy EGF signalling model, the use of algorithmic verification to verify κ models. Manually formulated requirements were evaluated with regard to the model, leading to the highlighting of the nodes causing unsatisfiability (i.e., error-causing nodes). We aim to integrate such methods into our web-based tool and demonstrate how the identified erroneous nodes can be presented to the user visually. Thus, in this research we present a framework to enable a user to explore phosphoproteomic time-series data in the context of models. The user can visualise which reactions in the model are highly uncertain and which nodes cause incorrect simulation outputs. A tool such as this enables an end-user to determine the empirical analysis to perform to reduce uncertainty in the presented model, thus enabling a better understanding of the underlying system.
Keywords: κ-models, model verification, time-series phosphoproteomic datasets, uncertainty and error visualisation
Procedia PDF Downloads 252
1489 Process Optimisation for Internal Cylindrical Rough Turning of Nickel Alloy 625 Weld Overlay
Authors: Lydia Chan, Islam Shyha, Dale Dreyer, John Hamilton, Phil Hackney
Abstract:
Nickel-based superalloys are generally known to be difficult to cut due to their strength, low thermal conductivity, and high work hardening tendency. Superalloys such as alloy 625 are often used in the oil and gas industry as surfacing materials to provide wear and corrosion resistance to components. The material is typically applied onto a metallic substrate through weld overlay cladding, an arc welding technique. Cladded surfaces are always rugged and carry a tough skin; this creates further difficulties for the machining process. The present work utilised design of experiments to optimise internal cylindrical rough turning of weld overlay surfaces. An L27 orthogonal array was used to assess the effects of the four selected key process variables: cutting insert, depth of cut, feed rate, and cutting speed. The optimal cutting conditions were determined based on productivity and the level of tool wear.
Keywords: cylindrical turning, nickel superalloy, turning of overlay, weld overlay
Procedia PDF Downloads 372
1488 Optimization of Cutting Forces in Drilling of Polymer Composites via Taguchi Methodology
Authors: Eser Yarar, Fahri Vatansever, A. Tamer Erturk, Sedat Karabay
Abstract:
In this study, the drilling behavior of multi-layer orthotropic polyester composites reinforced with woven polyester fiber and PTFE particles was investigated. Conventional drilling methods have low cost and ease of use; therefore, drilling is one of the most preferred machining methods. The increasing range of use of composite materials in many areas has led to the investigation of the machinability performance of these materials. The drilling capability of the synthetic polymer composite material was investigated by measuring the cutting forces using different tool diameters, feed rates and high cutting speeds. Cutting forces were measured using a dynamometer in the experiments. In order to evaluate the results of the experiments, the Taguchi experimental design method was used. According to the results, the optimum cutting parameters were 0.1 mm/rev, 1070 rpm and a 2 mm diameter drill bit. Verification tests were performed for the optimum cutting parameters obtained according to the model. The verification experiments showed the success of the established model.
Keywords: cutting force, drilling, polymer composite, Taguchi
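A minimal sketch of the Taguchi analysis step described above: computing the smaller-the-better signal-to-noise ratio for measured cutting forces and picking the preferred level of each factor. The force values, factor names and L9 layout below are hypothetical, not the authors' measurements.

```python
import numpy as np

# Hypothetical L9 layout: (feed mm/rev, speed rpm, drill diameter mm) -> mean cutting force (N)
runs = [
    ((0.1, 710, 2), 38.0), ((0.1, 900, 4), 55.0), ((0.1, 1070, 6), 72.0),
    ((0.2, 710, 4), 66.0), ((0.2, 900, 6), 84.0), ((0.2, 1070, 2), 44.0),
    ((0.3, 710, 6), 95.0), ((0.3, 900, 2), 52.0), ((0.3, 1070, 4), 70.0),
]

def sn_smaller_is_better(y):
    """Taguchi smaller-the-better S/N ratio: -10*log10(mean(y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Average S/N per level of each factor; the level with the highest S/N is preferred.
factors = ["feed", "speed", "diameter"]
for i, name in enumerate(factors):
    levels = sorted({setting[i] for setting, _ in runs})
    means = {
        lvl: np.mean([sn_smaller_is_better([force])
                      for setting, force in runs if setting[i] == lvl])
        for lvl in levels
    }
    best = max(means, key=means.get)
    print(f"{name}: best level = {best}, mean S/N per level = {means}")
```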
Procedia PDF Downloads 161
1487 Surface Elevation Dynamics Assessment Using Digital Elevation Models, Light Detection and Ranging, GPS and Geospatial Information Science Analysis: Ecosystem Modelling Approach
Authors: Ali K. M. Al-Nasrawi, Uday A. Al-Hamdany, Sarah M. Hamylton, Brian G. Jones, Yasir M. Alyazichi
Abstract:
Surface elevation dynamics have always responded to disturbance regimes. Creating Digital Elevation Models (DEMs) to detect surface dynamics has led to the development of several methods, devices and data clouds. DEMs can provide accurate and quick results with cost efficiency, in comparison to traditional geomatics survey techniques. Nowadays, remote sensing datasets have become a primary source for creating DEMs, including LiDAR point clouds with GIS analytic tools. However, these data need to be tested for error detection and correction. This paper evaluates various DEMs from different data sources over time for Apple Orchard Island, a coastal site in southeastern Australia, in order to detect surface dynamics. Subsequently, 30 chosen locations were examined in the field to test the error of the DEMs' surface detection using high-resolution global positioning systems (GPS). Results show significant surface elevation changes on Apple Orchard Island. Accretion occurred on most of the island, while surface elevation loss due to erosion is limited to the northern and southern parts. Concurrently, the projected differential correction and validation method aimed to identify errors in the dataset. The resultant DEMs demonstrated a small error ratio (≤ 3%) from the gathered datasets when compared with the fieldwork survey using RTK-GPS. As modern modelling approaches need to become more effective and accurate, applying several tools to create different DEMs on a multi-temporal scale would allow easy predictions within limited time and cost frames, with more comprehensive coverage and greater accuracy. With a DEM technique applied in the eco-geomorphic context, such insights into ecosystem dynamic detection at a coastal intertidal system would be valuable for assessing the accuracy of the predicted eco-geomorphic risk for sustainable conservation management. Demonstrating this framework to evaluate the historical and current anthropogenic and environmental stressors on coastal surface elevation dynamism could be profitably applied worldwide.
Keywords: DEMs, eco-geomorphic-dynamic processes, geospatial information science, remote sensing, surface elevation changes
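The elevation-change detection and error check described above can be sketched as a simple DEM difference plus a checkpoint comparison. The grids and RTK-GPS checkpoints below are hypothetical stand-ins, not the Apple Orchard Island data.

```python
import numpy as np

# Hypothetical DEMs (metres) of the same grid from two survey epochs.
dem_t1 = np.array([[1.20, 1.25, 1.30, 1.28],
                   [1.10, 1.18, 1.26, 1.31],
                   [1.05, 1.12, 1.22, 1.30],
                   [0.98, 1.08, 1.18, 1.27]])
dem_t2 = dem_t1 + np.array([[ 0.04,  0.05,  0.06,  0.03],
                            [ 0.02,  0.04,  0.05,  0.04],
                            [-0.01,  0.02,  0.03,  0.05],
                            [-0.03, -0.02,  0.01,  0.04]])

change = dem_t2 - dem_t1  # positive = accretion, negative = erosion
print(f"mean elevation change: {change.mean():.3f} m")

# Validation against hypothetical RTK-GPS checkpoints: (row, col, surveyed elevation in m).
checkpoints = [(0, 1, 1.29), (2, 3, 1.36), (3, 0, 0.96)]
dem_vals = np.array([dem_t2[r, c] for r, c, _ in checkpoints])
gps_vals = np.array([z for _, _, z in checkpoints])

rmse = np.sqrt(np.mean((dem_vals - gps_vals) ** 2))
error_ratio = 100 * np.mean(np.abs(dem_vals - gps_vals) / gps_vals)
print(f"checkpoint RMSE = {rmse:.3f} m, mean error ratio = {error_ratio:.1f} %")
```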
Procedia PDF Downloads 266
1486 Generative Adversarial Network Based Fingerprint Anti-Spoofing Limitations
Authors: Yehjune Heo
Abstract:
Fingerprint anti-spoofing approaches have been actively developed and applied in real-world applications. One of the main problems of fingerprint anti-spoofing is that it is not robust to unseen samples, especially in real-world scenarios. A possible solution is to generate artificial but realistic fingerprint samples and use them for training in order to achieve good generalization. This paper contains experimental and comparative results with currently popular GAN-based methods and uses realistic synthesis of fingerprints in training in order to increase the performance. Among various GAN models, the most popular, StyleGAN, is used for the experiments. The CNN models were first trained with the dataset that did not contain generated fake images, and the accuracy along with the mean average error rate were recorded. Then, the generated fake images (fake images of live fingerprints and fake images of spoof fingerprints) were each combined with the original images (real images of live fingerprints and real images of spoof fingerprints), and various CNN models were trained. For each CNN model trained with the dataset of generated fake images, the best performance was recorded, noting the accuracy and the mean average error rate each time. We observe that current GAN-based approaches need significant improvements in anti-spoofing performance, although the overall quality of the synthesized fingerprints seems to be reasonable. We include an analysis of this performance degradation, especially with a small number of samples. In addition, we suggest several approaches towards improved generalization with a small number of samples, by focusing on what GAN-based approaches should and should not learn.
Keywords: anti-spoofing, CNN, fingerprint recognition, GAN
Procedia PDF Downloads 183
1485 Integrating Machine Learning and Rule-Based Decision Models for Enhanced B2B Sales Forecasting and Customer Prioritization
Authors: Wenqi Liu, Reginald Bailey
Abstract:
This study explores an advanced approach to enhancing B2B sales forecasting by integrating machine learning models with a rule-based decision framework. The methodology begins with the development of a machine learning classification model to predict conversion likelihood, aiming to improve accuracy over traditional methods like logistic regression. The classification model's effectiveness is measured using metrics such as accuracy, precision, recall, and F1 score, alongside a feature importance analysis to identify key predictors. Following this, a machine learning regression model is used to forecast sales value, with the objective of reducing mean absolute error (MAE) compared to linear regression techniques. The regression model's performance is assessed using MAE, root mean square error (RMSE), and R-squared metrics, emphasizing feature contribution to the prediction. To bridge the gap between predictive analytics and decision-making, a rule-based decision model is introduced that prioritizes customers based on predefined thresholds for conversion probability and predicted sales value. This approach significantly enhances customer prioritization and improves overall sales performance by increasing conversion rates and optimizing revenue generation. The findings suggest that this combined framework offers a practical, data-driven solution for sales teams, facilitating more strategic decision-making in B2B environments.
Keywords: sales forecasting, machine learning, rule-based decision model, customer prioritization, predictive analytics
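The rule-based layer described above can be illustrated with a small sketch that combines a predicted conversion probability and a predicted sales value into a priority tier. The thresholds, lead names and tier labels are hypothetical, not the study's calibrated values.

```python
from dataclasses import dataclass

@dataclass
class Lead:
    name: str
    p_conversion: float      # output of the classification model
    predicted_value: float   # output of the regression model (sales value)

def priority(lead: Lead, p_threshold: float = 0.6, value_threshold: float = 50_000) -> str:
    """Rule-based prioritization applied on top of the two ML model outputs."""
    if lead.p_conversion >= p_threshold and lead.predicted_value >= value_threshold:
        return "high"
    if lead.p_conversion >= p_threshold or lead.predicted_value >= value_threshold:
        return "medium"
    return "low"

leads = [
    Lead("Acme Corp", 0.82, 120_000),
    Lead("Globex", 0.45, 80_000),
    Lead("Initech", 0.30, 12_000),
]
# Rank by expected value (probability times predicted sales), then print the tier.
for lead in sorted(leads, key=lambda l: l.p_conversion * l.predicted_value, reverse=True):
    print(lead.name, priority(lead))
```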
Procedia PDF Downloads 14
1484 Students' Errors in Translating Algebra Word Problems to Mathematical Structure
Authors: Ledeza Jordan Babiano
Abstract:
Translating statements into mathematical notations is one of the processes in word problem-solving. However, based on the literature, students still have difficulties with this skill. The purpose of this study was to investigate the translation errors of students when they translate algebraic word problems into mathematical structures and to locate the errors through the lens of the Translation-Verification Model. This qualitative research study employed content analysis. During the data-gathering process, the students were asked to answer a six-item algebra word problem questionnaire, and their answers were analyzed by experts through blind coding using the Translation-Verification Model to determine their translation errors. After this, a focus group discussion was conducted, and the data gathered were analyzed through thematic analysis to determine the causes of the students' translation errors. It was found that the students' most prevalent translation error was the interpretation error, which was situated in the Attribute construct. The themes emerging during the FGD were: (1) the procedure of translation is strategically incorrect; (2) lack of comprehension; (3) difficulty with related algebra concepts; (4) lack of spatial skills; (5) unpreparedness for independent learning; and (6) developmentally inappropriate problem content. These themes boiled down to the major concept of independent learning preparedness in solving mathematical problems. This concept has subcomponents, which include contextual and conceptual factors in translation. Consequently, the results provide implications for instructors and professors in Mathematics to innovate their teaching pedagogies and strategies to address translation gaps among students.
Keywords: mathematical structure, algebra word problems, translation, errors
Procedia PDF Downloads 46
1483 Comparison of Methods of Estimation for Use in Goodness of Fit Tests for Binary Multilevel Models
Authors: I. V. Pinto, M. R. Sooriyarachchi
Abstract:
It can frequently be observed that data arising in our environment have a hierarchical or nested structure. Multilevel modelling is a modern approach to handle this kind of data. When multilevel modelling is combined with a binary response, the estimation methods become complex, and the usual techniques are derived from the quasi-likelihood method. The estimation methods compared in this study are marginal quasi-likelihood of orders 1 and 2 (MQL1, MQL2) and penalized quasi-likelihood of orders 1 and 2 (PQL1, PQL2). A statistical model is of no use if it does not reflect the given dataset. Therefore, checking the adequacy of the fitted model through a goodness-of-fit (GOF) test is an essential stage in any modelling procedure. However, prior to usage, it is equally important to confirm that the GOF test performs well and is suitable for the given model. This study assesses the suitability of the GOF test developed for binary response multilevel models with respect to the method used in model estimation. An extensive set of simulations was conducted using MLwiN (v 2.19) with varying numbers of clusters, cluster sizes and intra-cluster correlations. The test maintained the desired Type-I error for models estimated using PQL2 and failed for almost all combinations of MQL. The power of the test was adequate for most combinations in all estimation methods except MQL1. Moreover, models were fitted using the four methods to a real-life dataset, and the performance of the test was compared for each model.
Keywords: goodness-of-fit test, marginal quasi-likelihood, multilevel modelling, penalized quasi-likelihood, power, quasi-likelihood, type-I error
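The simulation logic used above to check Type-I error and power can be illustrated with a much simpler stand-in. The sketch below estimates rejection rates of a two-sample t-test under the null and under an alternative; it does not reproduce the binary multilevel GOF test or the MLwiN estimation, only the general Monte Carlo pattern of counting rejections, with arbitrary placeholder sample sizes and effect size.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, n_sim, n = 0.05, 2000, 40

def rejection_rate(effect):
    """Share of simulated datasets in which the test rejects at level alpha."""
    rejections = 0
    for _ in range(n_sim):
        x = rng.normal(0.0, 1.0, n)
        y = rng.normal(effect, 1.0, n)
        _, p = stats.ttest_ind(x, y)
        rejections += p < alpha
    return rejections / n_sim

print("Type-I error (data generated under H0):", rejection_rate(0.0))
print("Power (data generated under H1):       ", rejection_rate(0.5))
```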
Procedia PDF Downloads 142
1482 Feasibility Study of Measurement of Turning-Based Surfaces Using Perthometer, Optical Profiler and Confocal Sensor
Authors: Khavieya Anandhan, Soundarapandian Santhanakrishnan, Vijayaraghavan Laxmanan
Abstract:
In general, measurement of surfaces is carried out using traditional methods such as contact-type stylus instruments. This prevalent approach is challenged by non-contact instruments such as optical profilers, coordinate measuring machines, laser triangulation sensors, machine vision systems, etc. Recently, confocal sensors have begun to be used in the surface metrology field. Such a sensor is explored in this study to determine the surface roughness values of various turned surfaces. Turning is a crucial machining process used to manufacture features such as grooves, tapered domes, threads, tapers, etc. Turned surfaces with roughness values in the range of 0.4-12.5 µm were taken for analysis. Three instruments were used, namely a perthometer, an optical profiler, and a confocal sensor. Among these, the confocal sensor is the least explored, despite its good resolution of about 5 nm. Thus, such a high-precision sensor was used in this study to explore the possibility of measuring turned surfaces. Further, using these data, measurement uncertainty was also studied.
Keywords: confocal sensor, optical profiler, surface roughness, turned surfaces
Procedia PDF Downloads 132
1481 Multi-Point Dieless Forming Product Defect Reduction Using Reliability-Based Robust Process Optimization
Authors: Misganaw Abebe Baye, Ji-Woo Park, Beom-Soo Kang
Abstract:
The product quality of multi-point dieless forming (MDF) is identified to be dependent on the process parameters. Moreover, a certain variation of friction and material properties may have a substantially adverse influence on the final product quality. This study proposes how to compensate for MDF product defects by minimizing the sensitivity to noise parameter variations. This can be attained by a reliability-based robust optimization (RRO) technique that obtains the optimal process settings of the controllable parameters. Initially, two MDF finite element (FE) simulations of an AA3003-H14 saddle shape showed a substantial amount of dimpling, wrinkling, and shape error. FE analyses are consequently performed in the commercial software ABAQUS to obtain the correlation between the control process settings and noise variation with regard to the product defects. The best prediction models are chosen from a family of metamodels to replace the computationally expensive FE simulation. A genetic algorithm (GA) is applied to determine the optimal process settings of the control parameters. Monte Carlo Analysis (MCA) is executed to determine how the noise parameter variation affects the final product quality. Finally, the RRO FE simulation and the experimental result show that the amendment of the control parameters in the final forming process leads to a considerably better-quality product.
Keywords: dimpling, multi-point dieless forming, reliability-based robust optimization, shape error, variation, wrinkling
Procedia PDF Downloads 252
1480 Government Final Consumption Expenditure and Household Consumption Expenditure NPISHS in Nigeria
Authors: Usman A. Usman
Abstract:
Undeniably, unlike the Classical side, the Keynesian perspective of the aggregate demand side indeed has a significant position in the policy, growth, and welfare of Nigeria due to government involvement and the ineffective demand of a population living with poor per capita income. This study seeks to investigate the effect of government final consumption expenditure and financial deepening on households' and NPISHs' final consumption expenditure, using data on Nigeria from 1981 to 2019. The study employed the ADF stationarity test, the Johansen cointegration test, and a Vector Error Correction Model. The results revealed that the coefficient of government final consumption expenditure has a positive effect on household consumption expenditure in the long run. There is a long-run and short-run relationship between gross fixed capital formation and household consumption expenditure. The coefficients of cpsgdp (financial deepening) and gross fixed capital formation indicate a negative impact on household final consumption expenditure. The coefficient of money supply (lm2gdp), which is another proxy for financial deepening, and the coefficient of FDI have a positive effect on household final consumption expenditure in the long run. Therefore, this study concludes that gross fixed capital formation stimulates household consumption expenditure; a legal framework to support investment is a panacea for increasing household income and consumption and reducing poverty in Nigeria. Therefore, this should be a key central component of policy.
Keywords: government final consumption expenditure, household consumption expenditure, vector error correction model, cointegration
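The estimation sequence named in the abstract (ADF stationarity test, Johansen cointegration test, VECM) can be sketched with statsmodels as below. The series are simulated placeholders with arbitrary parameters, not the 1981-2019 Nigerian data, and the variable names are illustrative.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

rng = np.random.default_rng(0)
n = 39  # 1981-2019, annual observations

# Simulated stand-ins for household consumption, government consumption and GFCF.
trend = np.cumsum(rng.normal(0.5, 1.0, n))
data = pd.DataFrame({
    "household_cons": trend + rng.normal(0, 0.5, n),
    "gov_cons": 0.6 * trend + rng.normal(0, 0.5, n),
    "gfcf": 0.4 * trend + rng.normal(0, 0.5, n),
})

# 1) ADF unit-root test on each series in levels.
for col in data:
    stat, pvalue, *_ = adfuller(data[col])
    print(f"ADF {col}: stat={stat:.2f}, p={pvalue:.3f}")

# 2) Johansen cointegration test (constant term, 1 lag in differences).
johansen = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace statistics:", johansen.lr1, "95% critical values:", johansen.cvt[:, 1])

# 3) VECM with one cointegrating relation.
res = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="co").fit()
print(res.summary())
```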
Procedia PDF Downloads 50
1479 Approximation of Geodesics on Meshes with Implementation in Rhinoceros Software
Authors: Marian Sagat, Mariana Remesikova
Abstract:
In civil engineering, there is a problem of how to industrially produce tensile membrane structures that are non-developable surfaces. Non-developable surfaces can only be developed with a certain error, and we want to minimize this error. To that end, the non-developable surfaces are cut into plates along the geodesic curves. We propose a numerical algorithm for finding approximations of open geodesics on meshes and surfaces based on geodesic curvature flow. For practical reasons, it is important to automate the choice of the time step. We propose a method for automatic setting of the time step based on the diagonal dominance criterion for the matrix of the linear system obtained by discretization of our partial differential equation model. Practical experiments show the reliability of this method. Because the model is approximated by a numerical method based on classical derivatives, it is necessary to overcome obstacles that occur for meshes with sharp corners. We solve this problem for a big family of meshes with sharp corners via special rotations, which can be seen as a partial unfolding of the mesh. In practical applications, it is required that the approximation of the geodesic has its vertices only on the edges of the mesh. This problem is solved by a specially designed point-tracking algorithm. We also partially solve the problem of finding geodesics on meshes with holes. We implemented the whole algorithm in Rhinoceros (commercial 3D computer graphics and computer-aided design software). This is done using the C# language as a C# assembly library for Grasshopper, which is a plugin for Rhinoceros.
Keywords: geodesic, geodesic curvature flow, mesh, Rhinoceros software
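A minimal sketch of the automatic time-step rule described above: the system matrix of the discretised flow is assumed to depend on the time step, and the step is reduced until the matrix is strictly diagonally dominant. The matrix builder below is a toy stand-in for the real discretisation of the geodesic curvature flow, not the authors' formulation.

```python
import numpy as np

def is_diagonally_dominant(a: np.ndarray) -> bool:
    """Strict diagonal dominance: |a_ii| > sum over j != i of |a_ij| for every row."""
    diag = np.abs(np.diag(a))
    off = np.sum(np.abs(a), axis=1) - diag
    return bool(np.all(diag > off))

def system_matrix(tau: float) -> np.ndarray:
    """Hypothetical stand-in for the matrix produced by discretising the flow with step tau."""
    n = 5
    return np.eye(n) + tau * (np.ones((n, n)) - n * np.eye(n))  # toy dependence on tau

tau = 1.0
while not is_diagonally_dominant(system_matrix(tau)):
    tau *= 0.5  # shrink the time step until the dominance criterion holds
print("accepted time step:", tau)
```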
Procedia PDF Downloads 146
1478 Experimental Study and Neural Network Modeling in Prediction of Surface Roughness on Dry Turning Using Two Different Cutting Tool Nose Radii
Authors: Deba Kumar Sarma, Sanjib Kr. Rajbongshi
Abstract:
Surface finish is an important aspect of product quality in machining. First, experiments were carried out to investigate the effect of the cutting tool nose radius (considering 1 mm and 0.65 mm) on surface finish, together with the process parameters of cutting speed, feed and depth of cut. For all possible cutting conditions, a full factorial design with four parameters at two levels was considered. A commercial mild steel bar and high-speed steel (HSS) were used as the workpiece and cutting tool materials, respectively. In order to obtain the functional relationship between the process parameters and surface roughness, a neural network was used, which was found to be capable of predicting surface roughness with a reasonable degree of accuracy. It was observed that a tool nose radius of 1 mm provides a better surface finish in comparison to 0.65 mm. It was also observed that feed rate has a significant influence on surface finish.
Keywords: full factorial design, neural network, nose radius, surface finish
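The functional relationship mentioned above can be sketched with a small feed-forward network. The training rows below (cutting speed, feed, depth of cut, nose radius mapped to Ra) are hypothetical placeholders, not the experimental data, and the network size is an arbitrary choice.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Hypothetical 2-level factorial data: [speed m/min, feed mm/rev, depth of cut mm, nose radius mm] -> Ra (um)
X = np.array([
    [40, 0.05, 0.5, 0.65], [40, 0.05, 0.5, 1.0], [40, 0.05, 1.0, 0.65], [40, 0.05, 1.0, 1.0],
    [40, 0.10, 0.5, 0.65], [40, 0.10, 0.5, 1.0], [40, 0.10, 1.0, 0.65], [40, 0.10, 1.0, 1.0],
    [80, 0.05, 0.5, 0.65], [80, 0.05, 0.5, 1.0], [80, 0.05, 1.0, 0.65], [80, 0.05, 1.0, 1.0],
    [80, 0.10, 0.5, 0.65], [80, 0.10, 0.5, 1.0], [80, 0.10, 1.0, 0.65], [80, 0.10, 1.0, 1.0],
])
ra = np.array([3.1, 2.5, 3.3, 2.7, 4.6, 3.8, 4.9, 4.1, 2.8, 2.2, 3.0, 2.4, 4.2, 3.4, 4.5, 3.7])

# Scale inputs, then fit a small multilayer perceptron as the roughness model.
scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(scaler.transform(X), ra)

# Predict Ra for an unseen combination of cutting conditions.
new = scaler.transform([[60, 0.08, 0.75, 1.0]])
print("predicted Ra (um):", round(float(model.predict(new)[0]), 2))
```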
Procedia PDF Downloads 365
1477 Simulation of Optimal Runoff Hydrograph Using Ensemble of Radar Rainfall and Blending of Runoffs Model
Authors: Myungjin Lee, Daegun Han, Jongsung Kim, Soojun Kim, Hung Soo Kim
Abstract:
Recently, localized heavy rainfall and typhoons have occurred frequently due to climate change, and the resulting damage is increasing. Therefore, more accurate prediction of rainfall and runoff is needed. However, gauge rainfall has limited spatial accuracy. Radar rainfall is better than gauge rainfall for describing the spatial variability of rainfall, but it is mostly underestimated and involves uncertainty. Therefore, an ensemble of radar rainfall was simulated using the error structure and gauge rainfall to overcome the uncertainty. The simulated ensemble was used as the input data for the rainfall-runoff models to obtain an ensemble of runoff hydrographs. Previous studies have discussed the accuracy of rainfall-runoff models. Even if the same input data, such as rainfall, are used for runoff analysis in the same basin, the models can give different results because of the uncertainty involved in the models. Therefore, we used two models, the SSARR model, which is a lumped model, and the Vflo model, which is a distributed model, and tried to simulate the optimum runoff considering the uncertainty of each rainfall-runoff model. The study basin is located in the Han River basin, and we obtained one integrated optimum runoff hydrograph using blending methods such as Multi-Model Super Ensemble (MMSE), Simple Model Average (SMA), and Mean Square Error (MSE). From this study, we could confirm the accuracy of the rainfall and rainfall-runoff models using ensemble scenarios and various rainfall-runoff models, and this result can be used to study flood control measures under climate change. Acknowledgements: This work is supported by the Korea Agency for Infrastructure Technology Advancement (KAIA) grant funded by the Ministry of Land, Infrastructure and Transport (Grant 18AWMP-B083066-05).
Keywords: radar rainfall ensemble, rainfall-runoff models, blending method, optimum runoff hydrograph
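The blending step described above can be illustrated with a simple model average and an error-weighted combination of two simulated hydrographs. The discharge series below are hypothetical placeholders for the SSARR and Vflo outputs, and the inverse-MSE weighting is one plausible reading of the MSE-based blending, not necessarily the exact scheme used in the study.

```python
import numpy as np

observed = np.array([12.0, 30.0, 85.0, 140.0, 110.0, 60.0, 25.0])  # m^3/s
ssarr    = np.array([10.0, 34.0, 78.0, 150.0, 100.0, 66.0, 28.0])  # lumped model output
vflo     = np.array([14.0, 27.0, 90.0, 128.0, 118.0, 55.0, 22.0])  # distributed model output

def mse(sim, obs):
    return float(np.mean((sim - obs) ** 2))

# Simple Model Average (SMA): unweighted mean of the ensemble members.
sma = (ssarr + vflo) / 2.0

# MSE-based blending: weight each model by the inverse of its error on a calibration period.
w_ssarr, w_vflo = 1.0 / mse(ssarr, observed), 1.0 / mse(vflo, observed)
weights = np.array([w_ssarr, w_vflo]) / (w_ssarr + w_vflo)
blended = weights[0] * ssarr + weights[1] * vflo

for name, sim in [("SSARR", ssarr), ("Vflo", vflo), ("SMA", sma), ("MSE-weighted", blended)]:
    print(f"{name:12s} MSE = {mse(sim, observed):7.1f}")
```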
Procedia PDF Downloads 279
1476 Surface Roughness Modeling in Dry Face Milling of Annealed and Hardened AISI 52100 Steel
Authors: Mohieddine Benghersallah, Mohamed Zakaria Zahaf, Ali Medjber, Idriss Tibakh
Abstract:
The objective of this study is to analyse the effects of cutting parameters on surface roughness in dry face milling using statistical techniques. We studied the effect of the microstructure of AISI 52100 steel on machinability before and after hardening. The machining tests were carried out on a high-rigidity vertical milling machine with a 25 mm diameter face milling cutter equipped with micro-grain carbide inserts with a PVD (Ti,Al)N coating in grade GC1030. A Taguchi L9 experiment plan is adopted. Analysis of variance (ANOVA) was used to determine the effects of the cutting parameters (Vc, fz, ap) on the roughness (Ra) of the machined surface. Regression analysis to assess the machinability of the steel provided mathematical models of roughness and the combination of parameters that minimizes it. The recorded results show that feed per tooth has the most significant effect on the surface condition for both steel treatment conditions. The best roughness values were obtained for the hardened AISI 52100 steel.
Keywords: machinability, heat treatment, microstructure, surface roughness, Taguchi method
Procedia PDF Downloads 146
1475 An Improved Robust Algorithm Based on Cubature Kalman Filter for Single-Frequency Global Navigation Satellite System/Inertial Navigation Tightly Coupled System
Authors: Hao Wang, Shuguo Pan
Abstract:
The Global Navigation Satellite System (GNSS) signal received by a dynamic vehicle in a harsh environment is frequently interfered with and blocked, which generates gross errors affecting the positioning accuracy of GNSS/Inertial Navigation System (INS) integrated navigation. Therefore, this paper puts forward an improved robust Cubature Kalman Filter (CKF) algorithm for single-frequency GNSS/INS tightly coupled system ambiguity resolution. Firstly, the dynamic model and measurement model of a single-frequency GNSS/INS tightly coupled system were established, and the method for GNSS integer ambiguity resolution with INS aiding is studied. Then, we analyzed the influence of pseudo-range observations with gross errors on GNSS/INS integrated positioning accuracy. To reduce the influence of outliers, this paper improved the CKF algorithm and realized an intelligent selection of robust strategies by judging the ill-conditioned matrix. Finally, a field navigation test was performed to demonstrate the effectiveness of the proposed algorithm based on the double-differenced solution mode. The experiment proved that the improved robust algorithm can greatly weaken the influence of separate, continuous, and hybrid observation anomalies, enhancing the reliability and accuracy of GNSS/INS tightly coupled navigation solutions.
Keywords: GNSS/INS integrated navigation, ambiguity resolution, Cubature Kalman filter, robust algorithm
Procedia PDF Downloads 95
1474 Reasons for the Selection of Information-Processing Framework and the Philosophy of Mind as a General Account for an Error Analysis and Explanation on Mathematics
Authors: Michael Lousis
Abstract:
This research study is concerned with learners' errors in Arithmetic and Algebra. The data resulted from a broader international comparative research program called the Kassel Project. However, its conceptualisation differed from and contrasted with that of the main program, which was mostly based on socio-demographic data. The way in which the research study was conducted was not dependent on the researcher's discretion, but was dictated by the nature of the problem under investigation. This is because the phenomenon of learners' mathematical errors is due neither to the intentions of learners, nor to institutional processes, rules and norms, nor to the educators' intentions and goals, but rather to the way certain information is presented to learners and how their cognitive apparatus processes this information. Several approaches for the study of learners' errors have been developed since the beginning of the 20th century, encompassing different belief systems. These approaches were based on behaviourist theory, on the Piagetian-constructivist research framework, on the perspective that followed the philosophy of science, and on the information-processing paradigm. The researcher of the present study was forced to disclose the learners' course of thinking that led them to specific observable actions resulting in particular errors in specific problems, rather than analysing scripts with the students' thoughts presented in written form. This, in turn, entailed that the choice of methods would have to be appropriate and conducive to seeing and realising the learners' errors from the perspective of the participants in the investigation. This particular fact determined important decisions concerning the selection of an appropriate framework for analysing the mathematical errors and giving explanations. Thus, the belief systems concerning the behaviourist, Piagetian-constructivist, and philosophy of science perspectives were rejected, and the information-processing paradigm in conjunction with the philosophy of mind was adopted as a general account for the elaboration of the data. This paper explains why these decisions were appropriate and beneficial for conducting the present study and for the establishment of the ensuing thesis. Additionally, the reasons why the adoption of the information-processing paradigm in conjunction with the philosophy of mind gives sound and legitimate bases for the development of future studies concerning mathematical error analysis are explained.
Keywords: advantages-disadvantages of theoretical prospects, behavioral prospect, critical evaluation of theoretical prospects, error analysis, information-processing paradigm, opting for the appropriate approach, philosophy of science prospect, Piagetian-constructivist research frameworks, review of research in mathematical errors
Procedia PDF Downloads 189
1473 Government Final Consumption Expenditure, Financial Deepening and Household Consumption Expenditure NPISHs in Nigeria
Authors: Usman A. Usman
Abstract:
Undeniably, unlike the Classical side, the Keynesian perspective of the aggregate demand side indeed has a significant position in the policy, growth, and welfare of Nigeria due to government involvement and the ineffective demand of a population living with poor per capita income. This study seeks to investigate the effect of government final consumption expenditure and financial deepening on households' and NPISHs' final consumption expenditure, using data on Nigeria from 1981 to 2019. The study employed the ADF stationarity test, the Johansen cointegration test, and a Vector Error Correction Model. The results revealed that the coefficient of government final consumption expenditure has a positive effect on household consumption expenditure in the long run. There is a long-run and short-run relationship between gross fixed capital formation and household consumption expenditure. The coefficients of cpsgdp (financial deepening) and gross fixed capital formation indicate a negative impact on household final consumption expenditure. The coefficient of money supply (lm2gdp), which is another proxy for financial deepening, and the coefficient of FDI have a positive effect on household final consumption expenditure in the long run. Therefore, this study concludes that gross fixed capital formation stimulates household consumption expenditure; a legal framework to support investment is a panacea for increasing household income and consumption and reducing poverty in Nigeria. Therefore, this should be a key central component of policy.
Keywords: household, government expenditures, vector error correction model, Johansen test
Procedia PDF Downloads 59
1472 Construction of Large Scale UAVs Using Homebuilt Composite Techniques
Authors: Brian J. Kozak, Joshua D. Shipman, Peng Hao Wang, Blake Shipp
Abstract:
The unmanned aerial system (UAS) industry is growing at a rapid pace. This growth has increased the demand for low-cost, custom-made and high-strength unmanned aerial vehicles (UAVs). Most of this growth is in the class of 25 kg to 200 kg vehicles. Vehicles of this size are beyond the size and scope of the simple wood and fabric designs commonly found in hobbyist aircraft. These high-end vehicles require stronger materials to complete their mission. Traditional aircraft construction materials such as aluminum are difficult to use without machining or advanced computer-controlled tooling. However, by using general aviation composite aircraft homebuilding techniques and materials, a large-scale UAV can be constructed cheaply and easily. Furthermore, these techniques could be used to easily manufacture custom-made composite shapes and airfoils that would be cost-prohibitive when using metals. These homebuilt aircraft techniques are being demonstrated by the researchers in the construction of a 75 kg aircraft.
Keywords: composite aircraft, homebuilding, unmanned aerial system industry, UAS, unmanned aerial vehicles, UAV
Procedia PDF Downloads 135
1471 The Bayesian Premium Under Entropy Loss
Authors: Farouk Metiri, Halim Zeghdoudi, Mohamed Riad Remita
Abstract:
Credibility theory is an experience rating technique in actuarial science which can be seen as one of the quantitative tools that allow insurers to perform experience rating, that is, to adjust future premiums based on past experience. It is usually used in automobile insurance, workers' compensation premiums, and IBNR (incurred but not reported) claims, where credibility theory can be used to estimate the claim size. In this study, we focused on a popular tool in credibility theory, the Bayesian premium estimator, considering the Lindley distribution as the claim distribution. We derive this estimator under entropy loss, which is asymmetric, and squared error loss, which is symmetric, with informative and non-informative priors. In a purely Bayesian setting, the prior distribution represents the insurer's belief about the insured's risk level after collection of the insured's data at the end of the period. However, the explicit form of the Bayesian premium, in the case when the prior is not a member of the exponential family, can be quite difficult to obtain, as it involves a number of integrations which are not analytically solvable. The paper finds a solution to this problem by deriving this estimator using a numerical approximation (the Lindley approximation), which is one of the suitable approximation methods for solving such problems; it approximates the ratio of the integrals as a whole and produces a single numerical result. A simulation study using the Monte Carlo method is then performed to evaluate this estimator, and a mean squared error comparison is made of the Bayesian premium estimator under the above loss functions.
Keywords: bayesian estimator, credibility theory, entropy loss, monte carlo simulation
Procedia PDF Downloads 333
1470 Accurate Calculation of the Penetration Depth of a Bullet Using ANSYS
Authors: Eunsu Jang, Kang Park
Abstract:
In developing an armored ground combat vehicle (AGCV), it is a very important step to analyze the vulnerability (or the survivability) of the AGCV against an enemy attack. In the vulnerability analysis, penetration equations are usually used to get the penetration depth and check whether a bullet can penetrate the armor of the AGCV, which would cause damage to internal components or the crew. The penetration equations are derived from penetration experiments, which require a long time and great effort. However, they usually hold only for the specific material of the target and the specific type of bullet used in the experiments. Thus, penetration simulation using ANSYS can be another option to calculate penetration depth. However, it is very important to model the targets and select the input parameters in order to get an accurate penetration depth. This paper performed a sensitivity analysis of the ANSYS input parameters on the accuracy of the calculated penetration depth. Two conflicting objectives need to be achieved in adopting ANSYS in penetration analysis: maximizing the accuracy of calculation and minimizing the calculation time. To maximize the calculation accuracy, a sensitivity analysis of the input parameters for ANSYS was performed and the RMS error with respect to the experimental data was calculated. The input parameters, including mesh size, boundary conditions, material properties and target diameter, are tested and selected to minimize the error between the calculated simulation result and the experimental data from the papers on the penetration equations. To minimize the calculation time, the parameter values obtained from the accuracy analysis are adjusted to get optimized overall performance. As a result of the analysis, the following were found: 1) As the mesh size gradually decreases from 0.9 mm to 0.5 mm, both the penetration depth and calculation time increase. 2) As the diameter of the target decreases from 250 mm to 60 mm, both the penetration depth and calculation time decrease. 3) As the yield stress, which is one of the material properties of the target, decreases, the penetration depth increases. 4) The boundary condition with the fixed side surface of the target gives a greater penetration depth than that with the fixed side and rear surfaces. By using the above findings, the input parameters can be tuned to minimize the error between simulation and experiments. By using the simulation tool ANSYS with carefully tuned input parameters, penetration analysis can be done on a computer without actual experiments. Penetration experiment data are usually hard to obtain for security reasons, and only published papers provide them, for a limited range of target materials. The next step of this research is to generalize this approach to anticipate the penetration depth by interpolating the known penetration experiments. This result may not be accurate enough to replace the penetration experiments, but such simulations can be used in the early modelling and simulation stage of the AGCV design process.
Keywords: ANSYS, input parameters, penetration depth, sensitivity analysis
Procedia PDF Downloads 399
1469 Optimization of Surface Finish in Milling Operation Using Live Tooling via Taguchi Method
Authors: Harish Kumar Ponnappan, Joseph C. Chen
Abstract:
The main objective of this research is to optimize the surface roughness of a milling operation on AISI 1018 steel using live tooling on a HAAS ST-20 lathe. In this study, Taguchi analysis is used to optimize the milling process by investigating the effect of different machining parameters on surface roughness. The L9 orthogonal array is designed with four controllable factors at three levels each and one uncontrollable factor, resulting in 18 experimental runs. The optimal parameters determined from the Taguchi analysis were a feed rate of 76.2 mm/min, a spindle speed of 1150 rpm, a depth of cut of 0.762 mm, and 2-flute TiN-coated high-speed steel as the tool material. The process capability Cp and process capability index Cpk values were improved from 0.62 and -0.44 to 1.39 and 1.24, respectively. The average surface roughness value from the confirmation runs was 1.30 µm, decreasing the defect rate from 87.72% to 0.01%. The purpose of this study is to efficiently utilize the Taguchi design to optimize the surface roughness in a milling operation using live tooling.
Keywords: live tooling, surface roughness, taguchi analysis, CNC milling operation, CNC turning operation
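The process-capability figures quoted above follow the usual definitions Cp = (USL − LSL)/(6σ) and Cpk = min(USL − µ, µ − LSL)/(3σ). The sketch below computes them for a hypothetical set of confirmation-run roughness values and specification limits; neither the samples nor the limits are the study's actual numbers.

```python
import numpy as np

def process_capability(samples, lsl, usl):
    """Return (Cp, Cpk) for measured values against lower/upper specification limits."""
    samples = np.asarray(samples, dtype=float)
    mu, sigma = samples.mean(), samples.std(ddof=1)
    cp = (usl - lsl) / (6.0 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3.0 * sigma)
    return cp, cpk

# Hypothetical confirmation-run Ra values (um) and spec limits.
ra_runs = [1.28, 1.31, 1.27, 1.33, 1.30, 1.29, 1.32, 1.30]
cp, cpk = process_capability(ra_runs, lsl=1.0, usl=1.6)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```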
Procedia PDF Downloads 139
1468 Application of Grey Theory in the Forecast of Facility Maintenance Hours for Office Building Tenants and Public Areas
Authors: Yen Chia-Ju, Cheng Ding-Ruei
Abstract:
This study took a case office building as the subject and explored the responsive work-order repair requests for facilities and equipment in offices and public areas using grey theory, with the purpose of providing a reference for future office building owners, executive managers, property management companies, and mechanical and electrical companies in selecting and assessing a forecast model. The important conclusions of this study are summarized as follows. 1. Grey relational analysis ranks the importance of the repair numbers of six categories, namely power systems, building systems, water systems, air conditioning systems, fire systems and manpower dispatch, in that order. In terms of facilities maintenance importance, the order is power systems, building systems, water systems, air conditioning systems, manpower dispatch and fire systems. 2. The GM(1,N) and regression methods took maintenance hours as the dependent variable and repair number, leased area and number of tenants as independent variables, and conducted single-month forecasts based on 12 data points from January to December 2011. The mean absolute error and average accuracy of GM(1,N) from the verification results were 6.41% and 93.59%; the mean absolute error and average accuracy of the regression model were 4.66% and 95.34%, indicating that both have highly accurate forecast capability.
Keywords: grey theory, forecast model, Taipei 101, office buildings, property management, facilities, equipment
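The grey forecasting and the MAE/average-accuracy evaluation used above can be illustrated with the simpler univariate GM(1,1) relative of the GM(1,N) model used in the study. The monthly maintenance-hour series below is a hypothetical placeholder, and the code shows only the in-sample fit, not the study's single-month forecasts.

```python
import numpy as np

def gm11_fit(x0):
    """Fit a GM(1,1) grey model and return the in-sample reconstructed series."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                             # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])                  # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0))
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([[x0[0]], np.diff(x1_hat)])  # inverse AGO

# Hypothetical monthly facility-maintenance hours.
hours = np.array([118, 126, 131, 128, 140, 137, 149, 152, 147, 158, 163, 161], dtype=float)
fitted = gm11_fit(hours)

mae = np.mean(np.abs(fitted - hours))
accuracy = 100 * (1 - np.mean(np.abs(fitted - hours) / hours))
print(f"MAE = {mae:.2f} hours, average accuracy = {accuracy:.2f} %")
```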
Procedia PDF Downloads 443
1467 Microwave Dielectric Constant Measurements of Titanium Dioxide Using Five Mixture Equations
Authors: Jyh Sheen, Yong-Lin Wang
Abstract:
This research is dedicated to finding a different measurement procedure for the microwave dielectric properties of ceramic materials with high dielectric constants. For a composite of ceramic dispersed in a polymer matrix, the dielectric constants of the composites with different concentrations can be obtained by various mixture equations. Another development of the mixture rules is to calculate the permittivity of the ceramic from measurements on the composite. To do this, the analysis method and theoretical accuracy of six basic mixture laws derived from three basic particle shapes of ceramic fillers have been reported for ceramic dielectric constants of less than 40 at microwave frequencies. Similar research has been done for other well-known mixture rules. It has shown that both the physical curve matching with experimental results and low potential theory error are important for improving the calculation accuracy. Recently, a modified mixture equation for high dielectric constant ceramics at microwave frequencies has also been presented for strontium titanate (SrTiO3), which was selected from five well-known mixing rules and has shown good accuracy for high dielectric constant measurements. However, the accuracy of this modified equation for other high dielectric constant materials is still not clear. Therefore, the five well-known mixing rules are selected again to understand their application to other high dielectric constant ceramics. Another high dielectric constant ceramic, TiO2, with a dielectric constant of 100, was then chosen for this research. Their theoretical error equations are derived. In addition to the theoretical research, experimental measurements are always required. Titanium dioxide is an interesting ceramic for microwave applications. In this research, its powder is adopted as the filler material and polyethylene powder as the matrix material. The dielectric constants of the ceramic-polyethylene composites with various compositions were measured at 10 GHz. The theoretical curves of the five published mixture equations are shown together with the measured results to understand the curve matching condition of each rule. Finally, based on the experimental observation and theoretical analysis, one of the five rules was selected and modified into a new powder mixture equation. This modified rule has shown very good curve matching with the measurement data and low theoretical error. We can then calculate the dielectric constant of the pure filler medium (titanium dioxide) using those mixing equations from the measured dielectric constants of the composites. The accuracy of estimating the dielectric constant of the pure ceramic by the various mixture rules will be compared. This modified mixture rule has also shown good measurement accuracy for the dielectric constant of titanium dioxide ceramic. This study can be applied to the measurement of the microwave dielectric properties of other high dielectric constant ceramic materials in the future.
Keywords: microwave measurement, dielectric constant, mixture rules, composites
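The abstract does not name its five mixture laws, so the sketch below uses two widely known ones (the Lichtenecker logarithmic rule and the Maxwell Garnett rule) as stand-ins to show how an effective permittivity follows from the filler fraction, and how a filler permittivity can be back-calculated from a measured composite value. The permittivity values are rough placeholders, not the measured 10 GHz data.

```python
import numpy as np
from scipy.optimize import brentq

def lichtenecker(eps_f, eps_m, v_f):
    """Lichtenecker logarithmic rule: ln(eps_eff) = v_f*ln(eps_f) + (1-v_f)*ln(eps_m)."""
    return np.exp(v_f * np.log(eps_f) + (1 - v_f) * np.log(eps_m))

def maxwell_garnett(eps_f, eps_m, v_f):
    """Maxwell Garnett rule for spherical inclusions of eps_f in a matrix eps_m."""
    num = eps_f + 2 * eps_m + 2 * v_f * (eps_f - eps_m)
    den = eps_f + 2 * eps_m - v_f * (eps_f - eps_m)
    return eps_m * num / den

eps_matrix = 2.3      # polyethylene, approximate value
eps_filler = 100.0    # TiO2, order of magnitude cited in the abstract
for v in (0.1, 0.2, 0.3):
    print(v, round(lichtenecker(eps_filler, eps_matrix, v), 2),
          round(maxwell_garnett(eps_filler, eps_matrix, v), 2))

# Back-calculating the filler permittivity from a "measured" composite value
# (here simulated with the Lichtenecker rule at v_f = 0.2).
measured = lichtenecker(eps_filler, eps_matrix, 0.2)
estimate = brentq(lambda e: lichtenecker(e, eps_matrix, 0.2) - measured, 3.0, 1000.0)
print("estimated filler permittivity:", round(estimate, 1))
```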
Procedia PDF Downloads 365
1466 Attention States in the Sustained Attention to Response Task: Effects of Trial Duration, Mind-Wandering and Focus
Authors: Aisling Davies, Ciara Greene
Abstract:
Over the past decade, the phenomenon of mind-wandering in cognitive tasks has attracted widespread scientific attention. Research indicates that mind-wandering occurrences can be detected through behavioural responses in the Sustained Attention to Response Task (SART), and several studies have attributed a specific pattern of responding around an error in this task to an observable effect of a mind-wandering state. SART behavioural responses are also widely accepted as indices of sustained attention and of general attention lapses. However, evidence suggests that these same patterns of responding may be attributable to other factors associated with more focused states, and that it may also be possible to distinguish the two states within the same task. To use behavioural responses in the SART to study mind-wandering, it is essential to establish both the SART parameters that would increase the likelihood of errors due to mind-wandering and exactly what type of responses are indicative of mind-wandering, neither of which has yet been determined. The aims of this study were to compare different versions of the SART to establish which task would induce the most mind-wandering episodes, and to determine whether mind-wandering-related errors can be distinguished from errors during periods of focus by behavioural responses in the SART. To achieve these objectives, 25 participants completed four modified versions of the SART that differed from the classic paradigm in several ways so as to capture more instances of mind-wandering. The duration for which trials were presented was increased proportionately across the four versions of the task (Standard, Medium Slow, Slow, and Very Slow), and participants intermittently responded to thought probes assessing their level of focus and degree of mind-wandering throughout. Error rates, reaction times and variability in reaction times decreased in proportion to the decrease in trial presentation rate, and the proportion of mind-wandering-related errors increased, until the Very Slow condition, where the further decrease in rate no longer had an effect. Distinct reaction time patterns around an error, dependent on level of focus (high/low) and level of mind-wandering (high/low), were also observed, indicating four separate attention states occurring within the SART. This study establishes the optimal duration of trial presentation for inducing mind-wandering in the SART, provides evidence supporting the idea that different attention states can be observed within the SART, and highlights the importance of addressing other factors contributing to behavioural responses when studying mind-wandering during this task. A notable finding in relation to the standard SART was that while more errors were observed in this version of the task, most of these errors occurred during periods of focus, raising significant questions about our current understanding of mind-wandering and associated failures of attention.
Keywords: attention, mind-wandering, trial duration rate, Sustained Attention to Response Task (SART)
Procedia PDF Downloads 182
1465 Comparison of Efficient Production of Small Module Gears
Authors: Vaclav Musil, Robert Cep, Sarka Malotova, Jiri Hajnys, Frantisek Spalek
Abstract:
The new designs of satellite gears comprising a number of small gears pose high requirements on the precise production of small module gears. The objective of the experimental activity stated in this article was to compare the conventional rolling gear cutting technology with the modern wire electrical discharge machining (WEDM) technology for the production of a small module gear with m = 0.6 mm (thickness of 2.5 mm, material 30CrMoV9). The WEDM technology consists in copying the gearing profile from the rendered trajectory, which is then transferred to the path of a wire electrode. During the experiment, we focused on the comparison of these production methods. The main measured parameters, which significantly influence lifetime and noise, were chosen. The first parameter was the precision of the gearing profile with respect to the mathematical model. The second monitored parameter was the roughness and surface topology of the gear tooth side. The experiment demonstrated the high accuracy of the WEDM technology, but a lower quality of the machined surface.
Keywords: precision of gearing, small module gears, surface topology, WEDM technology
Procedia PDF Downloads 231
1464 Near Optimal Closed-Loop Guidance Gains Determination for Vector Guidance Law, from Impact Angle Errors and Miss Distance Considerations
Authors: Karthikeyan Kalirajan, Ashok Joshi
Abstract:
An optimization problem is set up to maximize the terminal kinetic energy of a maneuverable reentry vehicle (MaRV). The target location and the impact angle are given as constraints. The MaRV uses an explicit guidance law called Vector guidance. This law has two gains, which are taken as decision variables. The problem is to find the optimal values of these gains that result in minimum miss distance and impact angle error. Using a simple 3DOF non-rotating flat earth model and the Lockheed Martin HP-MARV as the reentry vehicle, the nature of the solutions of the optimization problem is studied. This is achieved by carrying out a parametric study over a range of closed-loop gain values, generating the corresponding impact angle error and miss distance values. The results show that there are well-defined lower and upper bounds on the gains that result in a near-optimal terminal guidance solution. It is found from this study that there exist common permissible regions (values of gains) where all constraints are met. Moreover, the permissible region lies between flat regions, and hence the optimization algorithm has to be chosen carefully. It is also found that only one of the gain values is independent and that the other, dependent gain value is related through a simple straight-line expression. Moreover, to reduce the computational burden of finding the optimal values of two gains, a guidance law called Diveline guidance, which uses a single gain, is discussed. The derivation of the Diveline guidance law from the Vector guidance law is discussed in this paper.
Keywords: MaRV guidance, reentry trajectory, trajectory optimization, guidance gain selection
Procedia PDF Downloads 424
1463 Statistical Analysis of Surface Roughness and Tool Life Using (RSM) in Face Milling
Authors: Mohieddine Benghersallah, Lakhdar Boulanouar, Salim Belhadi
Abstract:
Currently, a higher production rate with the required quality and low cost is the basic principle in the competitive manufacturing industry. This is mainly achieved by using high cutting speeds and feed rates. Elevated temperatures in the cutting zone under these conditions shorten tool life and adversely affect the dimensional accuracy and surface integrity of the component. Thus, it is necessary to find optimum cutting conditions (cutting speed, feed rate, machining environment, tool material and geometry) that can produce components in accordance with the project requirements while maintaining a relatively high production rate. Response surface methodology is a collection of mathematical and statistical techniques that are useful for modelling and analysis of problems in which a response of interest is influenced by several variables and the objective is to optimize this response. The work presented in this paper examines the effects of the cutting parameters (cutting speed, feed rate and depth of cut) on the surface roughness through a mathematical model developed using data gathered from a series of milling experiments.
Keywords: statistical analysis (RSM), bearing steel, coated inserts, tool life, surface roughness, end milling
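Response surface methodology typically fits a second-order polynomial in the cutting parameters to the measured response. The sketch below does this by least squares on hypothetical (Vc, f, ap, Ra) observations; the data and the resulting coefficients are illustrative placeholders, not the authors' fitted model.

```python
import numpy as np

# Hypothetical observations: cutting speed Vc (m/min), feed f (mm/rev), depth of cut ap (mm) -> Ra (um)
data = np.array([
    [150, 0.08, 0.25, 0.92], [150, 0.08, 0.50, 1.05], [150, 0.16, 0.25, 1.61],
    [150, 0.16, 0.50, 1.78], [220, 0.08, 0.25, 0.74], [220, 0.08, 0.50, 0.88],
    [220, 0.16, 0.25, 1.39], [220, 0.16, 0.50, 1.52], [185, 0.12, 0.38, 1.10],
    [185, 0.12, 0.38, 1.14], [185, 0.12, 0.25, 1.02], [185, 0.08, 0.38, 0.85],
])
vc, f, ap, ra = data.T

# Second-order response-surface model: Ra = b0 + linear + interaction + quadratic terms.
X = np.column_stack([np.ones_like(vc), vc, f, ap, vc * f, vc * ap, f * ap, vc**2, f**2, ap**2])
coeffs, *_ = np.linalg.lstsq(X, ra, rcond=None)

pred = X @ coeffs
ss_res = np.sum((ra - pred) ** 2)
ss_tot = np.sum((ra - ra.mean()) ** 2)
print("R^2 =", round(1 - ss_res / ss_tot, 3))
print("model coefficients:", np.round(coeffs, 4))
```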
Procedia PDF Downloads 429