Search results for: distance measurement error
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6231

5241 Continuous Wave Interference Effects on Global Positioning System Signal Quality

Authors: Fang Ye, Han Yu, Yibing Li

Abstract:

Radio interference is one of the major concerns in using the global positioning system (GPS) for civilian and military applications. Interference signals are produced not only by electronic systems of all kinds but also by illegal jammers. Among the different types of interference, continuous wave (CW) interference has a strongly adverse impact on the quality of the received signal. In this paper, we present a more detailed analysis of CW interference effects on GPS signal quality. Based on the C/A code spectrum lines, the influence of CW interference on the acquisition performance of GPS receivers is further analysed. This influence is supported by simulation results using a GPS software receiver. As the most important user parameter of GPS receivers, the bit error probability is also derived mathematically in the presence of CW interference, and the expression is consistent with Monte Carlo simulation results. The research on CW interference provides a theoretical basis and new ideas for monitoring the radio noise environment and improving the anti-jamming ability of GPS receivers.
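
As an illustration of the kind of Monte Carlo check described in the abstract, the sketch below simulates BPSK detection with an additive CW tone of random residual phase and compares the measured bit error rate with the phase-averaged Q-function expression. This is a generic baseband model with assumed Eb/N0 and interference-to-signal ratio, not the authors' receiver.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n_bits = 200_000
ebn0_db = 6.0        # assumed Eb/N0 in dB
isr_db = -3.0        # assumed interference-to-signal amplitude ratio in dB

bits = rng.integers(0, 2, n_bits)
s = 2.0 * bits - 1.0                         # BPSK symbols +/-1
ebn0 = 10 ** (ebn0_db / 10)
sigma = np.sqrt(1.0 / (2.0 * ebn0))          # noise std for unit-energy bits
a_i = 10 ** (isr_db / 20)                    # CW amplitude relative to the signal
theta = rng.uniform(0, 2 * np.pi, n_bits)    # random residual CW phase per bit

r = s + a_i * np.cos(theta) + sigma * rng.standard_normal(n_bits)
ber_mc = np.mean((r > 0) != (bits == 1))

# Phase-averaged analytic BER: Pb = E_theta[ Q(sqrt(2*Eb/N0) * (1 + a_i*cos(theta))) ]
th = np.linspace(0, 2 * np.pi, 2001)
ber_th = np.mean(norm.sf(np.sqrt(2 * ebn0) * (1 + a_i * np.cos(th))))
print(f"Monte Carlo BER: {ber_mc:.5f}   analytic BER: {ber_th:.5f}")
```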

Keywords: GPS, CW interference, acquisition performance, bit error probability, Monte Carlo

Procedia PDF Downloads 260
5240 Impact of Changes of the Conceptual Framework for Financial Reporting on the Indicators of the Financial Statement

Authors: Nadezhda Kvatashidze

Abstract:

The International Accounting Standards Board (IASB) has updated the conceptual framework for financial reporting. The main reason is to address accounting tasks arising from market development and business transactions of a new economic content. In addition, investors call for greater transparency of information and accountability for results so that they can make more accurate risk assessments and forecasts. All of this makes it necessary to develop the conceptual framework further so that users receive useful information. Market development and certain shortcomings of the conceptual framework revealed in practice require its reconsideration and new solutions. Some issues and concepts, such as disclosure and supply of information, its qualitative characteristics, assessment, and measurement uncertainty, had to be supplemented and refined. The criteria for recognition of certain elements of reporting (assets and liabilities) also had to be updated. All of this is set out in the updated edition of the conceptual framework for financial reporting, a comprehensive collection of concepts underlying the preparation of financial statements. The main objective of the revision is to improve financial reporting and to develop a clear package of concepts. This will support the IASB in setting a common "Approach & Reflection" for similar transactions on the basis of mutually accepted concepts. As a result, companies will be able to develop coherent accounting policies for transactions or events not covered by any standard, or where a standard allows a choice of accounting policy.

Keywords: conceptual framework, measurement basis, measurement uncertainty, neutrality, prudence, stewardship

Procedia PDF Downloads 126
5239 [Keynote Speech]: Feature Selection and Predictive Modeling of Housing Data Using Random Forest

Authors: Bharatendra Rai

Abstract:

Predictive data analysis and modeling involving machine learning techniques becomes challenging in the presence of too many explanatory variables or features. Too many features are known not only to slow algorithms down but also to decrease model prediction accuracy. This study involves a housing dataset with 79 quantitative and qualitative features that describe various aspects people consider while buying a new house. The Boruta algorithm, which supports feature selection using a wrapper approach built around random forest, is used in this study. This feature selection process leads to 49 confirmed features, which are then used for developing predictive random forest models. The study also explores five different data partitioning ratios; their impact on model accuracy is captured using the coefficient of determination (r-square) and root mean square error (RMSE).
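
A minimal sketch of this pipeline, assuming the open-source boruta package (BorutaPy) alongside scikit-learn; the data is a random stand-in for the 79-feature housing set, and the five splits mirror the partition-ratio experiment only by analogy.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error
from boruta import BorutaPy

X = np.random.rand(500, 79)             # stand-in for the 79 housing features
y = 3 * X[:, 0] + np.random.rand(500)   # synthetic target

rf = RandomForestRegressor(n_estimators=200, random_state=1)
selector = BorutaPy(rf, n_estimators='auto', random_state=1)
selector.fit(X, y)                      # wrapper feature selection
X_sel = X[:, selector.support_]         # keep only the confirmed features

# Try several train/test partitioning ratios; record r-square and RMSE.
for test_size in (0.1, 0.2, 0.3, 0.4, 0.5):
    X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=test_size,
                                              random_state=1)
    model = RandomForestRegressor(n_estimators=200, random_state=1).fit(X_tr, y_tr)
    pred = model.predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    print(f"test={test_size:.1f}  r2={r2_score(y_te, pred):.3f}  rmse={rmse:.3f}")
```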

Keywords: housing data, feature selection, random forest, Boruta algorithm, root mean square error

Procedia PDF Downloads 323
5238 A Minimum Spanning Tree-Based Method for Initializing the K-Means Clustering Algorithm

Authors: J. Yang, Y. Ma, X. Zhang, S. Li, Y. Zhang

Abstract:

The traditional k-means algorithm has been widely used as a simple and efficient clustering method. However, the algorithm often converges to local minima because it is sensitive to the initial cluster centers. In this paper, an algorithm for selecting initial cluster centers on the basis of the minimum spanning tree (MST) is presented. The vertices of the MST with the same degree are regarded as a whole, which is used to find the skeleton data points. Furthermore, a distance measure between the skeleton data points that considers both degree and Euclidean distance is presented. Finally, the MST-based initialization method for the k-means algorithm is presented, and its time complexity is analyzed. The presented algorithm is tested on five data sets from the UCI Machine Learning Repository. The experimental results illustrate its effectiveness compared to three existing initialization methods.
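
The abstract does not spell out the full selection rule, so the sketch below is only a loose illustration of the idea: build the MST, use vertex degrees to shortlist skeleton candidates, spread k centers apart by Euclidean distance, and pass them to k-means as the initialization.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import squareform, pdist
from sklearn.cluster import KMeans

def mst_init_centers(X, k):
    d = squareform(pdist(X))                 # Euclidean distance matrix
    mst = minimum_spanning_tree(d).toarray()
    adj = (mst + mst.T) > 0                  # undirected MST adjacency
    degree = adj.sum(axis=1)                 # vertex degrees in the MST
    # Treat higher-degree vertices as "skeleton" candidates, then greedily
    # spread k centers out by Euclidean distance.
    order = np.argsort(-degree)
    centers = [order[0]]
    for _ in range(k - 1):
        rest = [i for i in order if i not in centers]
        centers.append(max(rest, key=lambda i: min(d[i, c] for c in centers)))
    return X[centers]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, (50, 2)) for m in (0, 3, 6)])
init = mst_init_centers(X, 3)
km = KMeans(n_clusters=3, init=init, n_init=1).fit(X)
print(km.inertia_)
```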

Keywords: degree, initial cluster center, k-means, minimum spanning tree

Procedia PDF Downloads 411
5237 Selecting the Best Sub-Region Indexing the Images in the Case of Weak Segmentation Based on Local Color Histograms

Authors: Mawloud Mosbah, Bachir Boucheham

Abstract:

The color histogram is considered the oldest method used by CBIR systems for indexing images. Global histograms, however, do not include spatial information; this is why later techniques have attempted to overcome this limitation by involving a segmentation task as a preprocessing step. Weak segmentation is employed by local histograms, while other methods, such as the CCV (Color Coherence Vector), are based on strong segmentation. Indexing based on local histograms consists of splitting the image into N overlapping blocks or sub-regions and then computing the histogram of each block. Measuring the dissimilarity between two images is thereby reduced to computing the distances between the N local histograms of both images, resulting in N*N values; generally, the lowest value is used to rank images, meaning that the lowest value designates which sub-region indexes the images of the queried collection. In this paper, we examine the local histogram indexing method in order to compare its results against those of the global histogram. We also address another noteworthy issue when relying on local histograms, namely which value among the N*N values to trust when comparing images; in other words, on which of the sub-regions to base the indexing. Based on the results achieved here, relying on local histograms, which imposes extra overhead on the system by adding a segmentation preprocessing step, does not necessarily produce better results. In addition, we propose some ideas for selecting the local histogram used to encode the image, rather than relying on the local histogram with the lowest distance to the query histograms.
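
A small sketch of the local-histogram matching described above, using non-overlapping blocks for simplicity and random images as stand-ins:

```python
import numpy as np

def block_histograms(img, n_side=3, bins=8):
    h, w, _ = img.shape
    hists = []
    for i in range(n_side):
        for j in range(n_side):
            block = img[i*h//n_side:(i+1)*h//n_side, j*w//n_side:(j+1)*w//n_side]
            hist, _ = np.histogramdd(block.reshape(-1, 3),
                                     bins=(bins, bins, bins), range=((0, 256),) * 3)
            hists.append(hist.ravel() / hist.sum())   # normalized local histogram
    return np.array(hists)

def min_block_distance(img_a, img_b, n_side=3):
    ha, hb = block_histograms(img_a, n_side), block_histograms(img_b, n_side)
    # Euclidean distance between every pair of local histograms (N*N values)
    dists = np.linalg.norm(ha[:, None, :] - hb[None, :, :], axis=2)
    return dists.min()          # lowest value, used to rank images

a = np.random.randint(0, 256, (120, 120, 3), dtype=np.uint8)
b = np.random.randint(0, 256, (120, 120, 3), dtype=np.uint8)
print(min_block_distance(a, b))
```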

Keywords: CBIR, color global histogram, color local histogram, weak segmentation, Euclidean distance

Procedia PDF Downloads 360
5236 Qi Wireless Charging: A Scope of Magnetic Inductive Coupling

Authors: Sreenesh Shashidharan, Umesh Gaikwad

Abstract:

Qi (pronounced 'chee') is an interface standard for inductive electrical power transfer over distances of up to 4 cm (1.6 inches). The Qi system comprises a power transmission pad and a compatible receiver in a portable device; the device is placed on top of the pad and charges using the principle of electromagnetic induction. An alternating current is passed through the transmitter coil, generating a magnetic field. This, in turn, induces a voltage in the receiver coil, which can be used to power a mobile device or charge a battery. The efficiency of the power transfer depends on the coupling (k) between the inductors and their quality factor (Q). The coupling is determined by the distance between the inductors (z) and their relative size (D2/D), and further by the shape of the coils and the angle between them. If the receiver coil is at a certain distance from the transmitter coil, only a fraction of the magnetic flux generated by the transmitter coil penetrates the receiver coil and contributes to the power transmission. The more flux reaches the receiver, the better the coils are coupled.
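
For a feel for how quickly coupling decays with separation, the sketch below uses the textbook far-field mutual-inductance approximation for two coaxial single-turn loops; the radii and self-inductances are assumptions, and real Qi coils are multi-turn, so the numbers are only indicative.

```python
# Coupling between coaxial loops: M = mu0*pi*r1^2*r2^2 / (2*(r1^2 + z^2)^(3/2)),
# k = M / sqrt(L1*L2). All parameter values below are illustrative assumptions.
import numpy as np

mu0 = 4e-7 * np.pi
r1, r2 = 0.02, 0.02            # assumed coil radii in metres
L1 = L2 = 10e-6                # assumed coil self-inductances (10 uH each)

for z in (0.005, 0.01, 0.02, 0.04):   # separations up to the ~4 cm Qi limit
    M = mu0 * np.pi * r1**2 * r2**2 / (2 * (r1**2 + z**2) ** 1.5)
    k = M / np.sqrt(L1 * L2)
    print(f"z = {z*100:4.1f} cm  ->  k = {k:.4f}")
```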

Keywords: inductive electric power, electromagnetic induction, magnetic flux, coupling

Procedia PDF Downloads 732
5235 Phylogenetic Relationships of the Malaysian Primates Cercopithecine Based on COI Gene Sequences

Authors: B. M. Md-Zain, N. A. Rahman, M. A. B. Abdul-Latiff, W. M. R. Idris

Abstract:

We conducted molecular research to portray the phylogenetic relationships of Malaysian primates, particularly in the genus Macaca. We sequenced the cytochrome c oxidase subunit I (COI) gene of mitochondrial DNA from several individuals of M. fascicularis and M. arctoides. PCR amplifications were performed, and the COI DNA sequences were aligned using ClustalW. Phylogenetic trees were constructed using distance analyses employing the neighbor-joining (NJ) algorithm. We managed to sequence 700 bp of COI DNA. The tree topology showed that M. fascicularis did not cluster according to the phylogeographic divisions of Peninsular Malaysia: individuals from Negeri Sembilan merged into one clade with samples from Perak and Penang. In addition, the phylogenetic analyses indicated that M. arctoides was classified into the sinica group instead of the fascicularis group, supported by genetic distance data. The COI gene is an effective locus for clarifying the phylogenetic position of M. arctoides, but not for discriminating M. fascicularis populations in Peninsular Malaysia.
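
A minimal Biopython sketch of the distance/NJ workflow described above; the input filename and the 'identity' distance model are placeholders, not the authors' exact settings.

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("coi_aligned.fasta", "fasta")   # aligned COI sequences
calculator = DistanceCalculator("identity")              # simple p-distance model
dm = calculator.get_distance(alignment)                  # pairwise genetic distances
tree = DistanceTreeConstructor().nj(dm)                  # neighbor-joining tree

Phylo.draw_ascii(tree)
```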

Keywords: cercopithecine, long-tailed macaque, Macaca fascicularis, Macaca arctoides

Procedia PDF Downloads 357
5234 The Link between Money Market and Economic Growth in Nigeria: Vector Error Correction Model Approach

Authors: Uyi Kizito Ehigiamusoe

Abstract:

The paper examines the impact of the money market on economic growth in Nigeria using data for the period 1980-2012. Econometric techniques such as the Ordinary Least Squares method, Johansen's cointegration test and the Vector Error Correction Model were used to examine both the long-run and short-run relationships. Evidence from the study suggests that although a long-run relationship exists between the money market and economic growth, the present state of the Nigerian money market is significantly and negatively related to economic growth. The link between the money market and the real sector of the economy remains very weak. This implies that the market is not yet developed enough to produce the growth needed to propel the Nigerian economy, because of several challenges. It is therefore recommended that the government create appropriate macroeconomic policies and a legal framework, and sustain the present reforms, with a view to developing the market so as to promote productive activities, investment, and ultimately economic growth.
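
A brief statsmodels sketch of the Johansen test plus VECM steps named above, run on simulated stand-in series (the paper's 1980-2012 data is not reproduced here):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

rng = np.random.default_rng(0)
n = 33                                   # annual observations, 1980-2012
trend = np.cumsum(rng.normal(0.5, 1, n)) # shared stochastic trend
data = pd.DataFrame({
    "gdp_growth": trend + rng.normal(0, 0.5, n),
    "money_market": trend + rng.normal(0, 0.5, n),
})

# Johansen cointegration test (constant term, one lagged difference)
jres = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace statistics:", jres.lr1)
print("95% critical values:", jres.cvt[:, 1])

# Vector Error Correction Model with one cointegrating relation
vecm = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="co").fit()
print(vecm.summary())
```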

Keywords: economic growth, investments, money market, money market challenges, money market instruments

Procedia PDF Downloads 344
5233 An Analysis of Pick Travel Distances for Non-Traditional Unit Load Warehouses with Multiple P/D Points

Authors: Subir S. Rao

Abstract:

Existing warehouse models use non-traditional aisle designs with a central P/D point, which is mathematically simple but less practical. Many warehouses use multiple P/D points to avoid congestion for pickers, and different warehouses have different flow policies and infrastructure for using them. Standard models introduce one-sided multiple P/D points in a flying-V warehouse and minimize the pick distance for a one-way trip between an active P/D point and a pick location, assuming uniform flow rates. Simulations of such models generally use four fixed configurations of P/D points on two different sides of the warehouse. It can easily be shown that if the source and destination P/D points are both chosen randomly and uniformly, minimizing one-way travel is equivalent to minimizing two-way travel. Another line of work models the warehouse analytically for multiple one-sided P/D points while keeping the angles of the cross-aisles and picking aisles as decision variables; one objective there is to minimize the one-way pick travel distance from the P/D point to the pick location by finding the optimal position and angle of the cross-aisle and picking aisles for warehouses with different numbers of P/D points and variable flow rates. Most models of warehouses with multiple P/D points are one-way travel models; we extend these analytical models to minimize the two-way pick travel distance, where the destination P/D point is chosen optimally for the return route, which is not the same as minimizing one-way travel (see the sketch below). In most warehouse models the return P/D point is chosen randomly, but in our research it is chosen optimally. Such warehouses are common in practice, where the flow rates at the P/D points are flexible and depend entirely on the positions of the picks. A good warehouse management system can efficiently consolidate orders over multiple P/D points when the P/D function is flexible. In the latter arrangement, pickers and shrink-wrap processes are not assigned to particular P/D points, which makes the P/D points more flexible and interchangeable for picking and deposits. The number of P/D points considered in this research increases uniformly from a single central one to a maximum of one P/D point symmetrically below each aisle.
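
The sketch referenced above: a Monte Carlo comparison, under simple rectilinear-travel assumptions (not the paper's analytical model), of two-way travel with a randomly chosen versus an optimally chosen return P/D point.

```python
import numpy as np

rng = np.random.default_rng(0)
width, depth, n_pd = 100.0, 50.0, 5
pd_x = np.linspace(0, width, n_pd)          # P/D points along the front wall (y = 0)

picks = rng.uniform((0, 0), (width, depth), size=(100_000, 2))

def dist_to_pd(picks, x):                   # rectilinear distance to P/D at (x, 0)
    return np.abs(picks[:, 0] - x) + picks[:, 1]

d = np.stack([dist_to_pd(picks, x) for x in pd_x])   # shape (n_pd, n_picks)
idx = np.arange(len(picks))
start = d[rng.integers(0, n_pd, len(picks)), idx]    # random starting P/D

two_way_random = start + d[rng.integers(0, n_pd, len(picks)), idx]
two_way_optimal = start + d.min(axis=0)     # return to the closest P/D point

print(f"random return P/D : mean travel {two_way_random.mean():.1f}")
print(f"optimal return P/D: mean travel {two_way_optimal.mean():.1f}")
```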

Keywords: non-traditional warehouse, V cross-aisle, multiple P/D point, pick travel distance

Procedia PDF Downloads 40
5232 Modernization of the Economic Price Adjustment Software

Authors: Roger L. Goodwin

Abstract:

The US Consumer Price Indices (CPIs) measure hundreds of items in the US economy, and many social programs and government benefits are indexed to them. In the mid-to-late 1990s, a Congressional Advisory Committee conducted extensive research into changes to the CPI. One thing that can be said from that research is that, aside from the existence of alternative estimators for the CPI, any fundamental change to the CPI will affect many government programs. The purpose of this project is to modernize an existing process. This paper shows the development of a small, visual software product that documents the Economic Price Adjustment (EPA) for long-term contracts. The existing workbook does not provide the flexibility to calculate EPAs where the base month and the option month differ, nor does it provide automated error checking. The small, visual software product provides this additional flexibility and error checking. This paper also presents feedback on the project.
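
The core EPA computation behind that added flexibility is a simple index ratio; a generic illustration with hypothetical CPI figures, not the project's actual data:

```python
def adjusted_price(base_price, cpi_base_month, cpi_option_month):
    """Scale a contract price by the ratio of the option-month CPI
    to the base-month CPI (base and option months may differ)."""
    return base_price * (cpi_option_month / cpi_base_month)

# Hypothetical index values; result is about 1,034,634
print(adjusted_price(base_price=1_000_000,
                     cpi_base_month=251.2, cpi_option_month=259.9))
```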

Keywords: Consumer Price Index, Economic Price Adjustment, contracts, visualization tools, database, reports, forms, event procedures

Procedia PDF Downloads 318
5231 Soil Stress State under Tractive Tire and Compaction Model

Authors: Prathuang Usaborisut, Dithaporn Thungsotanon

Abstract:

Soil compaction induced by a tractor towing a trailer has become a major problem associated with sugarcane productivity. Soil beneath the tractor's tires is under not only compressive stress but also shearing stress. Therefore, to help understand these effects on soil, this research aimed to determine the stress state in soil and to predict soil compaction under a tractive tire. The octahedral stress ratios under the tires were higher than one, and much higher under higher draft forces; moreover, the ratio increased with the number of tire passages. A soil compaction model was developed using data acquired from triaxial tests and was then used to predict soil bulk density under a tractive tire. The maximum error was about 4% at 15 cm depth under the lower draft force and tended to increase with depth and draft force; at a depth of 30 cm under the higher draft force, the maximum error was about 16%.
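
For reference, the octahedral stress ratio mentioned above can be computed from the principal stresses; the sketch below uses illustrative values, not the measured field data.

```python
import numpy as np

def octahedral_stress_ratio(s1, s2, s3):
    sigma_oct = (s1 + s2 + s3) / 3.0                              # mean normal stress
    tau_oct = np.sqrt((s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2) / 3.0
    return tau_oct / sigma_oct

# Illustrative principal stresses in kPa; a ratio above one indicates
# strong shear relative to mean stress, as reported under tractive tires.
print(octahedral_stress_ratio(100.0, 15.0, 5.0))
```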

Keywords: draft force, soil compaction model, stress state, tractive tire

Procedia PDF Downloads 352
5230 Assessment of Planet Image for Land Cover Mapping Using Soft and Hard Classifiers

Authors: Lamyaa Gamal El-Deen Taha, Ashraf Sharawi

Abstract:

Planet imagery is a new data source from Planet Labs. This research is concerned with the assessment of a Planet image for land cover mapping. Two pixel-based classifiers and one subpixel-based classifier were compared. First, rectification of the Planet image was performed. Second, minimum distance, maximum likelihood and neural network classifications of the Planet image were compared. Third, the overall classification accuracy and the kappa coefficient were calculated. The results indicate that, for land cover mapping, neural network classification performs best, followed by the maximum likelihood classifier and then minimum distance classification.
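
A hedged scikit-learn sketch of such a comparison, with common stand-ins for the three classifiers (NearestCentroid for minimum distance, quadratic discriminant analysis for maximum likelihood, an MLP for the neural network) and synthetic pixels in place of the Planet image:

```python
import numpy as np
from sklearn.neighbors import NearestCentroid
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 4))                    # 4 spectral bands per pixel
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)     # two land cover classes
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

classifiers = {
    "minimum distance": NearestCentroid(),
    "maximum likelihood": QuadraticDiscriminantAnalysis(),
    "neural network (MLP)": MLPClassifier(hidden_layer_sizes=(16,), max_iter=500),
}
for name, clf in classifiers.items():
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(f"{name:22s} overall accuracy={accuracy_score(y_te, pred):.3f} "
          f"kappa={cohen_kappa_score(y_te, pred):.3f}")
```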

Keywords: planet image, land cover mapping, rectification, neural network classification, multilayer perceptron, soft classifiers, hard classifiers

Procedia PDF Downloads 187
5229 A Multilayer Perceptron Neural Network Model Optimized by Genetic Algorithm for Significant Wave Height Prediction

Authors: Luis C. Parra

Abstract:

Significant wave height prediction is an issue of great interest in the field of coastal activities because of the non-linear behavior of the wave height and the complexity of its prediction. This study presents a machine learning model to forecast the significant wave height recorded by the oceanographic wave-measuring buoys anchored at Mooloolaba, from the Queensland Government Data. Modeling was performed by a multilayer perceptron neural network optimized by a genetic algorithm (GA-MLP), with ReLU as the activation function of the MLP. The GA is in charge of optimizing the MLP hyperparameters (learning rate, hidden layers, neurons, and activation functions) and of wrapper feature selection for the window width size. Results are assessed using the Mean Square Error (MSE), Root Mean Square Error (RMSE), and Mean Absolute Error (MAE). The GA-MLP algorithm was run with a population size of thirty individuals over eight generations to optimize the 5-step-ahead prediction, obtaining 0.00104 MSE, 0.03222 RMSE, 0.02338 MAE, and 0.71163% MAPE. The analysis suggests that the GA-MLP model is effective in predicting significant wave height in a one-step forecast with distant time windows, presenting 0.00014 MSE, 0.01180 RMSE, 0.00912 MAE, and 0.52500% MAPE with a correlation factor of 0.99940. The GA-MLP algorithm was compared with an ARIMA forecasting model and performed better on all criteria, validating the potential of this algorithm.
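
A simplified sketch of GA-driven MLP hyperparameter search on a stand-in time series; the population size, generations, and two-gene search space (learning rate, hidden neurons) are reduced relative to the study's thirty individuals over eight generations.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
t = np.arange(800, dtype=float)
series = np.sin(2 * np.pi * t / 50) + 0.1 * rng.standard_normal(t.size)  # stand-in buoy data

window = 12                                   # lagged-window inputs, one-step target
X = sliding_window_view(series[:-1], window)
y = series[window:]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False, test_size=0.2)

def fitness(genome):                          # lower validation MSE = fitter
    lr, neurons = genome
    mlp = MLPRegressor(hidden_layer_sizes=(int(neurons),), activation="relu",
                       learning_rate_init=lr, max_iter=300, random_state=0)
    return mean_squared_error(y_te, mlp.fit(X_tr, y_tr).predict(X_te))

pop = [(10 ** rng.uniform(-4, -1), int(rng.integers(4, 64))) for _ in range(10)]
for gen in range(4):
    parents = sorted(pop, key=fitness)[:4]                           # selection
    children = [(a[0], b[1]) for a in parents for b in parents][:6]  # crossover
    children = [(lr * 10 ** rng.uniform(-0.2, 0.2),                  # mutation
                 max(4, int(n + rng.integers(-8, 9)))) for lr, n in children]
    pop = parents + children

best = min(pop, key=fitness)
print("best (learning rate, hidden neurons):", best, "MSE:", round(fitness(best), 5))
```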

Keywords: significant wave height, machine learning optimization, multilayer perceptron neural networks, evolutionary algorithms

Procedia PDF Downloads 107
5228 Advocating for Those with Limited Mobility

Authors: Dorothy I. Riddle

Abstract:

Limited mobility (an inability to walk more than 15 meters without sitting down to rest) restricts full community participation for 13 percent of Canadian adults (4.2 million persons), yet Canadian accessibility standards are silent on distance to be walked as an accessibility barrier to be addressed. Instead, they focus on ensuring access for wheeled mobility devices. The Accessible Canada Act mandates that Canada be barrier-free by 2040, which will necessitate eliminating distance to be walked as a barrier in federal programs and services. This paper details the results of a multi-year research project funded by Accessibility Standards Canada to document the lived experience of those struggling with limited mobility and to make recommendations on how to ensure accessibility for them. Over 2,600 Canadians from across Canada participated in an online survey and follow-up focus groups. The results underscored the importance of providing not only mobility supports in public facilities but also the information necessary for planning access to federal programs and services. As numerous participants indicated, if they were not sure how far they would have to walk, they simply stayed home and depended on friends and relatives for help with errands or appointments. This included failing to participate in civic activities, such as voting, for fear of having to walk too far and stand unsupported for too long. Types of information deemed critical included whether mobility aids were available; where seating to rest was located throughout the facility; what alternatives existed to standing while waiting for service and to walking to the service provider (rather than the provider coming to the customer); and diagrams of accessible parking and its relationship to elevators and services.

Keywords: accessibility standards, distance to be walked, limited mobility, mobility aids, service to customer

Procedia PDF Downloads 81
5227 The Cloud Systems Used in Education: Properties and Overview

Authors: Agah Tuğrul Korucu, Handan Atun

Abstract:

The diversity and usefulness of the information used in education have increased with the development of technology. Web technologies in particular have made enormous contributions to distance learning. Mobile systems, among the most widely used technologies in distance education, have made it much easier to access web technologies: unbound by space and time, individuals have the opportunity to access information on the web. In addition, storing educational information and resources, and accessing them, is crucial for both students and teachers, and web technologies provide this ease of access. Dynamic web technologies, introduced as new technologies that enable sharing and reuse of information, resources or applications via the Internet and that turn websites into expandable platforms, are commonly known as Web 2.0 technologies. Cloud systems are one of these dynamic web technologies, defined by NIST as a model that provides access to the demanded information independent of time and space in appropriate circumstances. One of the most important advantages of cloud systems is meeting users' requirements directly on the web, regardless of hardware and software and without dealing with installation. Hence, this study examines the use of cloud services in education and investigates the services provided by cloud computing. The survey method was used as the research method. The findings reveal that cloud systems are used for resource sharing, collaborative work, assignment submission and feedback, and project development in education, and that they offer significant advantages in facilitating teaching activities and the interaction between teacher, student and environment.

Keywords: cloud systems, cloud systems in education, online learning environment, integration of information technologies, e-learning, distance learning

Procedia PDF Downloads 349
5226 Optimal Concentration of Fluorescent Nanodiamonds in Aqueous Media for Bioimaging and Thermometry Applications

Authors: Francisco Pedroza-Montero, Jesús Naín Pedroza-Montero, Diego Soto-Puebla, Osiris Alvarez-Bajo, Beatriz Castaneda, Sofía Navarro-Espinoza, Martín Pedroza-Montero

Abstract:

Nanodiamonds have been widely studied for their physical properties, including chemical inertness, biocompatibility, optical transparency from the ultraviolet to the infrared region, high thermal conductivity, and mechanical strength. In this work, we studied how the fluorescence spectrum of nanodiamonds quenches with concentration in aqueous solutions, systematically ranging from 0.1 to 10 mg/mL. Our results demonstrate non-linear fluorescence quenching as the concentration increases for both NV zero-phonon lines; the 5 mg/mL concentration shows the maximum fluorescence emission. This behaviour is explained theoretically as an electronic recombination process that modulates the intensity of the NV centres. Finally, to gain more insight, the FRET methodology is used to describe the fluorescence efficiency in terms of the fluorophores' separation distance; the concentration level is thus simulated by distance, with a small distance between nanodiamonds corresponding to a highly concentrated system and a large distance to a dilute one. Although the 5 mg/mL concentration shows the maximum intensity, our main interest is the 0.5 mg/mL concentration, for which our studies demonstrate optimal human cell viability (99%). In this respect, this concentration has the feature of being as biocompatible as water, giving the possibility of internalizing the nanodiamonds in cells without harming the living medium. To this end, not only can we track nanodiamonds on the surface of or inside the cell with excellent precision thanks to their fluorescence intensity, but we can also perform thermometry tests, transforming a fluorescence contrast image into a temperature contrast image.
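
The FRET distance dependence invoked above follows the standard efficiency law E = 1/(1 + (r/R0)^6); a short sketch with an assumed Förster radius, not one measured for these nanodiamonds:

```python
R0 = 5.0   # assumed Förster radius in nm (illustrative value)

def fret_efficiency(r_nm):
    """Energy transfer efficiency vs. donor-acceptor separation."""
    return 1.0 / (1.0 + (r_nm / R0) ** 6)

for r in (2.0, 5.0, 8.0, 12.0):   # small r ~ high concentration, large r ~ dilute
    print(f"r = {r:4.1f} nm  ->  E = {fret_efficiency(r):.3f}")
```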

Keywords: nanodiamonds, fluorescence spectroscopy, concentration, bioimaging, thermometry

Procedia PDF Downloads 405
5225 Some Issues of Measurement of Impairment of Non-Financial Assets in the Public Sector

Authors: Mariam Vardiashvili

Abstract:

The economic significance of the asset impairment process is considerable. Impairment reflects the reduction of the future economic benefits or service potential embodied in an asset. The assets owned by public sector entities either bring economic benefits or are used to deliver free-of-charge services; consequently, they are classified as cash-generating and non-cash-generating assets. IPSAS 21 (Impairment of Non-Cash-Generating Assets) and IPSAS 26 (Impairment of Cash-Generating Assets) have been designed with this specificity in mind. When measuring impairment of assets, it is important to select the relevant methods. For measuring impaired non-cash-generating assets, IPSAS 21 recommends three methods: the depreciated replacement cost approach, the restoration cost approach, and the service units approach. Under IPSAS 26, the value in use of cash-generating assets is measured as the discounted value of the cash flows to be received in the future. The article classifies public sector assets as non-cash-generating or cash-generating and also deals with the factors that should be considered when evaluating impairment of assets. The essence of impairment of non-financial assets and the measurement methods are formulated according to IPSAS 21 and IPSAS 26. The main emphasis is put on the different methods of measuring the value in use of impaired cash-generating and non-cash-generating assets and on how to select among them. The traditional and the expected cash flow approaches for calculating the discounted value are reviewed (a small numerical sketch follows). The article also discusses the recognition of impairment loss and its reflection in financial reporting. The article concludes that, whatever the functional purpose of the impaired asset and whichever measurement method is used, the financial reporting should present realistic information regarding the value of the assets. In the theoretical development of the issue, the methods of scientific abstraction, analysis and synthesis were used, with a systemic approach; the research draws on international accounting standards and on theoretical research and publications of Georgian and foreign scientists.
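
The small numerical sketch promised above, contrasting the traditional and expected cash flow approaches to discounting, with illustrative figures only:

```python
def present_value(cash_flows, rate):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# Traditional approach: a single "most likely" stream, risk-adjusted rate
print(present_value([100, 100, 100], rate=0.08))

# Expected cash flow approach: probability-weighted scenarios, lower rate
scenarios = [([80, 80, 80], 0.3), ([100, 100, 100], 0.5), ([120, 120, 120], 0.2)]
expected = [sum(p * cfs[t] for cfs, p in scenarios) for t in range(3)]
print(present_value(expected, rate=0.04))
```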

Keywords: cash-generating assets, non-cash-generating assets, recoverable value, value in use

Procedia PDF Downloads 143
5224 Parametric Optimization of High-Performance Electric Vehicle E-Gear Drive for Radiated Noise Using 1-D System Simulation

Authors: Sanjai Sureshkumar, Sathish G. Kumar, P. V. V. Sathyanarayana

Abstract:

For an e-gear drivetrain, the transmission error and the resulting variation in mesh stiffness are among the main sources of excitation in a high-performance electric vehicle. These vibrations are transferred through the shaft to the bearings and then to the e-gear drive housing, eventually radiating noise. A parametric model is developed in 1-D system simulation by optimizing the micro and macro geometry, along with bearing properties and oil filtration, to achieve the least transmission error and a high contact ratio. Histogram analysis is performed to condense the actual road load data into a condensed duty cycle from which the bearing forces are found (see the sketch below). The structural vibration generated by these forces is simulated in a nonlinear solver to obtain the normal surface velocity of the housing, and the results are carried forward to acoustic software, wherein a virtual environment of the surroundings (the actual testing scenario) with accurate microphone positions is maintained to predict the sound pressure level of the radiated noise and the directivity plot of the e-gear drive. Order analysis is carried out to find the root cause of vibration and whine noise, and the broadband spectrum is checked to find rattle noise sources. With the available results, the design is optimized and the next simulation loop is performed to build the best e-gear drive from an NVH perspective. Structural analysis is also carried out to check the robustness of the e-gear drive.
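
The histogram step can be pictured as follows, with a synthetic load signal standing in for the recorded road load data:

```python
import numpy as np

rng = np.random.default_rng(0)
torque = np.abs(rng.normal(80, 30, 100_000))      # stand-in road load signal (Nm)

counts, edges = np.histogram(torque, bins=8)
centers = 0.5 * (edges[:-1] + edges[1:])          # representative load levels
time_share = counts / counts.sum()                # duty-cycle weighting

# Each (level, time share) pair is one condensed load case from which
# bearing forces can then be evaluated.
for c, w in zip(centers, time_share):
    print(f"torque ~{c:6.1f} Nm for {100 * w:5.1f} % of the cycle")
```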

Keywords: 1-D system simulation, contact ratio, e-Gear, mesh stiffness, micro and macro geometry, transmission error, radiated noise, NVH

Procedia PDF Downloads 149
5223 Extended Kalman Filter and Markov Chain Monte Carlo Method for Uncertainty Estimation: Application to X-Ray Fluorescence Machine Calibration and Metal Testing

Authors: S. Bouhouche, R. Drai, J. Bast

Abstract:

This paper is concerned with a method for evaluating the uncertainty of steel sample content measured using the X-ray fluorescence method. The considered method of analysis is a comparative technique based on X-ray fluorescence; the calibration step assumes an adequate chemical composition of the analyzed metallic sample. This work proposes a new combined approach using the Kalman filter and Markov Chain Monte Carlo (MCMC) for uncertainty estimation of steel content analysis. The Kalman filter algorithm is extended to identify the model of the chemical analysis process using the main factors affecting the analysis results; in this case, the estimated states are reduced to the model parameters. MCMC is a stochastic method that computes the statistical properties of the considered states, such as the probability distribution function (PDF), given the initial state and the target distribution, using a Monte Carlo simulation algorithm. The conventional approach is based on linear correlation; the uncertainty budget is established for the steel Mn (wt%), Cr (wt%), Ni (wt%) and Mo (wt%) content, respectively. A comparative study between the conventional procedure and the proposed method is given. This kind of approach is applicable to constructing an accurate computing procedure for uncertainty measurement.
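
A minimal sketch of the MCMC ingredient, shown here as Metropolis-Hastings sampling of a calibration-slope posterior for a linear intensity-to-content model; the model, priors and data are illustrative stand-ins, not the paper's XRF calibration.

```python
import numpy as np

rng = np.random.default_rng(0)
true_slope, noise = 0.05, 0.02
intensity = np.linspace(10, 100, 20)               # synthetic XRF intensities
content = true_slope * intensity + rng.normal(0, noise, intensity.size)

def log_post(slope):                       # flat prior + Gaussian likelihood
    resid = content - slope * intensity
    return -0.5 * np.sum(resid ** 2) / noise ** 2

samples, slope = [], 0.04                  # initial state of the chain
for _ in range(20_000):
    prop = slope + rng.normal(0, 1e-3)     # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(slope):
        slope = prop                       # accept the proposal
    samples.append(slope)

burned = np.array(samples[5_000:])         # discard burn-in
print(f"slope = {burned.mean():.5f} +/- {burned.std():.5f} (posterior mean, std)")
```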

Keywords: Kalman filter, Markov chain Monte Carlo, x-ray fluorescence calibration and testing, steel content measurement, uncertainty measurement

Procedia PDF Downloads 283
5222 A New and Simple Method of Plotting the Binocular Single Vision Field (BSVF) Using the Cervical Range of Motion (CROM) Device

Authors: Mihir Kothari, Heena Khan, Vivek Rathod

Abstract:

Assessment of the binocular single vision field (BSVF) is traditionally done using a Goldmann perimeter. The measurement of the BSVF is important for the management of incomitant strabismus, viz. orbital fractures, thyroid orbitopathy, oculomotor cranial nerve palsies, Duane syndrome, etc. In this paper, we describe a new technique for measuring the BSVF using a CROM device. The Goldmann perimeter is a bulky and expensive (Euro 5000.00 or more) instrument that is nearly obsolete in contemporary ophthalmology practice, whereas a CROM device can easily be made in a do-it-yourself (DIY) manner for a fraction of that price (only Euro 15.00). Moreover, the CROM device is useful for the accurate measurement of ocular torticollis, viz. in nystagmus, paralytic or incomitant squint, etc., and it is highly portable.

Keywords: binocular single vision, perimetry, cervical range of motion, visual field, binocular single vision field

Procedia PDF Downloads 66
5221 Cost Benefit Analysis: Evaluation among the Millimetre Wavebands and SHF Bands of Small Cell 5G Networks

Authors: Emanuel Teixeira, Anderson Ramos, Marisa Lourenço, Fernando J. Velez, Jon M. Peha

Abstract:

This article discusses cost-benefit aspects of the millimetre wavebands (mmWaves) and the Super High Frequency (SHF) band. The decay of the carrier-to-noise-plus-interference ratio with coverage distance is assessed by considering two different path loss models: the two-slope urban micro Line-of-Sight (UMi LoS) model for the SHF band and a modified Friis propagation model for frequencies above 24 GHz (a sketch of both appears below). The equivalent supported throughput is estimated at the 5.62, 28, 38, 60 and 73 GHz frequency bands, and the influence of the carrier-to-noise-plus-interference ratio in the radio and network optimization process is explored. Mostly owing to the attenuation behaviour of the two-slope propagation model in the SHF band, the supported throughput at this band is higher than at the millimetre wavebands only for the longest cell lengths. The cost-benefit analysis of these pico-cellular networks considered regular cellular topologies and unlicensed spectrum. For the shortest distances, an optimum of the revenue, in percentage terms, can be distinguished at cell lengths of R ≈ 10 m for the millimetre wavebands, while for longer distances an optimum can be observed at R ≈ 550 m for 5.62 GHz. For the 5.62 GHz band, the profit is slightly lower than for the millimetre wavebands at the shortest R, and it starts to increase for cell lengths approximately equal to the ratio between the break-point distance and the co-channel reuse factor, achieving a maximum at R approximately equal to 550 m.
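
The sketch of the two propagation models mentioned above; the break-point distance, path-loss exponents and reference distance are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

c = 3e8  # speed of light, m/s

def friis_like(d, f_hz, n=2.0, d0=1.0):
    """Free-space reference loss at d0 plus a distance term with exponent n."""
    pl0 = 20 * np.log10(4 * np.pi * d0 * f_hz / c)
    return pl0 + 10 * n * np.log10(d / d0)

def two_slope(d, f_hz, d_bp=160.0, n1=2.0, n2=4.0):
    """Exponent n1 up to the break-point distance d_bp, then n2 beyond it."""
    pl_bp = friis_like(d_bp, f_hz, n1)
    return np.where(d <= d_bp, friis_like(d, f_hz, n1),
                    pl_bp + 10 * n2 * np.log10(d / d_bp))

d = np.array([10.0, 100.0, 550.0])        # cell lengths of interest, metres
print("5.62 GHz two-slope [dB]:", two_slope(d, 5.62e9).round(1))
for f in (28e9, 38e9, 60e9, 73e9):
    print(f"{f/1e9:.0f} GHz Friis-like [dB]:", friis_like(d, f).round(1))
```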

Keywords: millimetre wavebands, SHF band, SINR, cost benefit analysis, 5G

Procedia PDF Downloads 141
5220 Verification of Satellite and Observation Measurements to Build Solar Energy Projects in North Africa

Authors: Samy A. Khalil, U. Ali Rahoma

Abstract:

For measurements of solar radiation, satellite data have routinely been utilized to estimate solar energy; however, the temporal coverage of satellite data has some limits. Reanalysis, also known as "retrospective analysis" of atmospheric parameters, is produced by fusing the output of NWP (Numerical Weather Prediction) models with observational data from a variety of sources, including ground, satellite, ship, and aircraft observations. The result is a comprehensive record of the parameters affecting weather and climate. The effectiveness of the reanalysis dataset (ERA-5) for North Africa was evaluated against high-quality surface measurements using statistical analysis, estimating the distribution of global solar radiation (GSR) over five chosen areas in North Africa over the ten years from 2011 to 2020. To investigate seasonal changes in dataset performance, a seasonal statistical analysis was conducted, which showed a considerable difference in errors throughout the year. Altering the temporal resolution of the data used for comparison alters the dataset's performance: monthly mean values show better apparent performance, but data accuracy is degraded. Solar resource assessment and power estimation are discussed using the ERA-5 solar radiation data. The average values of the mean bias error (MBE), root mean square error (RMSE) and mean absolute error (MAE) of the reanalysis solar radiation data vary from 0.079 to 0.222, 0.055 to 0.178, and 0.0145 to 0.198, respectively, over the study period, while the correlation coefficient (R²) varies from 0.93 to 99%. The objective of this research is to provide a reliable representation of solar radiation to aid the use of solar energy in all sectors.
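
The validation statistics used above are straightforward to compute; a small sketch with synthetic stand-ins for the ground and ERA-5 series:

```python
import numpy as np

rng = np.random.default_rng(0)
ground = rng.uniform(100, 1000, 365)               # measured GSR (stand-in values)
era5 = ground * rng.normal(1.0, 0.05, 365) + 10    # reanalysis with bias and noise

diff = era5 - ground
mbe = diff.mean()                                   # mean bias error
rmse = np.sqrt((diff ** 2).mean())                  # root mean square error
mae = np.abs(diff).mean()                           # mean absolute error
r2 = np.corrcoef(ground, era5)[0, 1] ** 2           # squared correlation coefficient

print(f"MBE={mbe:.2f}  RMSE={rmse:.2f}  MAE={mae:.2f}  R2={r2:.4f}")
```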

Keywords: solar energy, ERA-5 analysis data, global solar radiation, North Africa

Procedia PDF Downloads 98
5219 Forecasting Free Cash Flow of an Industrial Enterprise Using Fuzzy Set Tools

Authors: Elena Tkachenko, Elena Rogova, Daria Koval

Abstract:

The paper examines ways of forecasting cash flows in a dynamic external environment. The so-called new reality in the economy lowers the predictability of companies' performance indicators, owing to the lack of long-term steady trends in the external conditions of development and to fast changes in the markets. Traditional methods based on trend analysis lead to a very high approximation error. The macroeconomic situation of the last 10 years has been defined by the continuing consequences of one financial crisis and the emergence of another. In these conditions, forecasting instruments based on fuzzy sets show good results: fuzzy-set-based models lower the approximation error to an acceptable level and provide companies with reliable cash flow estimates that help them reach financial stability. In the paper, the applicability of a fuzzy logic model for cash flow forecasting was analyzed.
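
As a toy illustration of the fuzzy-set idea (not the paper's model), a forecast can be held as a triangular fuzzy number and defuzzified by its centroid:

```python
def centroid(tri):
    a, m, b = tri                    # triangular fuzzy number (a <= m <= b)
    return (a + m + b) / 3.0

# Pessimistic / most likely / optimistic free cash flow, illustrative figures
forecast = (120.0, 150.0, 200.0)
print(f"defuzzified forecast: {centroid(forecast):.1f}")
```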

Keywords: cash flow, industrial enterprise, forecasting, fuzzy sets

Procedia PDF Downloads 208
5218 Application of the Fuzzy Analytical Hierarchy Process in Evaluating Supply Chain Performance Measurement

Authors: Riyadh Jamegh, AllaEldin Kassam, Sawsan Sabih

Abstract:

In modern market conditions, organizations face a high-pressure environment characterized by globalization, high competition, and customer orientation, so it is crucial to know and control the weak and strong points of the supply chain in order to improve performance. Performance measurement is therefore an important tool of supply chain management, because it enables organizations to control, understand, and improve their efficiency. This paper aims to identify supply chain performance measurement (SCPM) by using the Fuzzy Analytical Hierarchy Process (FAHP). In our real application, the performance of organizations is estimated based on four parameters: a cost parameter indicator (CPI), an inventory turnover parameter indicator (INPI), a raw material parameter indicator (RMPI), and a safety stock level parameter indicator (SSPI); these indicators vary in their impact on performance depending on the policies and strategies of the organization. In this research, the FAHP technique is used to identify the importance of these parameters (see the sketch below); then a first fuzzy inference (FIR1) is applied to identify the performance indicator of each factor, depending on the factor's importance and its value. A second fuzzy inference (FIR2) is then applied to integrate the effect of these indicators and identify the SCPM, which represents the required output. The developed approach provides an effective tool for the evaluation of supply chain performance measurement.
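
The sketch referenced above: Buckley's geometric-mean FAHP weighting on a 4x4 triangular-fuzzy comparison matrix for the four indicators; the comparison values are illustrative assumptions, not the study's survey data.

```python
import numpy as np

# Each entry is a triangular fuzzy number (l, m, u); the diagonal is (1, 1, 1).
one = (1.0, 1.0, 1.0)
M = [[one,             (2, 3, 4),      (1, 2, 3),       (3, 4, 5)],
     [(1/4, 1/3, 1/2), one,            (1/3, 1/2, 1),   (1, 2, 3)],
     [(1/3, 1/2, 1),   (1, 2, 3),      one,             (2, 3, 4)],
     [(1/5, 1/4, 1/3), (1/3, 1/2, 1),  (1/4, 1/3, 1/2), one]]

n = len(M)
# Fuzzy geometric mean of each row (componentwise on l, m, u)
geo = [tuple(np.prod([M[i][j][k] for j in range(n)]) ** (1 / n) for k in range(3))
       for i in range(n)]
total = [sum(g[k] for g in geo) for k in range(3)]
# Fuzzy weights: divide each geometric mean by the fuzzy total (l/u, m/m, u/l)
fuzzy_w = [(g[0] / total[2], g[1] / total[1], g[2] / total[0]) for g in geo]
crisp = np.array([sum(w) / 3 for w in fuzzy_w])   # defuzzify by averaging
crisp /= crisp.sum()                              # normalized crisp weights

for name, w in zip(["CPI", "INPI", "RMPI", "SSPI"], crisp):
    print(f"{name}: {w:.3f}")
```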

Keywords: fuzzy performance measurements, supply chain, fuzzy logic, key performance indicator

Procedia PDF Downloads 142
5217 Mathematical Modeling of the Operating Process and a Method to Determine the Design Parameters in an Electromagnetic Hammer Using Solenoid Electromagnets

Authors: Song Hyok Choe

Abstract:

This study presents a method to determine optimum design parameters based on a mathematical model of the operating process in a manual electromagnetic hammer using solenoid electromagnets. The operating process of the electromagnetic hammer depends on the circuit scheme of the power controller. Mathematical modeling of the operating process was carried out by considering the energy transfer process in the forward and reverse windings and the electromagnetic force acting on the impact and brake pistons. Using the developed mathematical model, the initial design data of the manual electromagnetic hammer proposed in this paper were encoded and analyzed in Matlab. A measurement experiment was also carried out to check the accuracy of the developed mathematical model. The relative errors of the analytical results for the measured stroke distance of the impact piston, the peak forward stroke current and the peak reverse stroke current were −4.65%, 9.08% and 9.35%, respectively. It was thus shown that the mathematical model of the operating process of an electromagnetic hammer is reasonably accurate and can be used to determine the design parameters of the hammer. The design parameters that provide the required impact energy in the manual electromagnetic hammer were therefore determined using the developed model. The proposed method will be used for the further design and development of various types of percussion rock drills.

Keywords: solenoid electromagnet, electromagnetic hammer, stone processing, mathematical modeling

Procedia PDF Downloads 47
5216 Numerical and Experimental Investigation of the Aerodynamic Performances of Counter-Rotating Rotors

Authors: Ibrahim Beldjilali, Adel Ghenaiet

Abstract:

The contra-rotating axial machine is a promising solution for several applications where high pressures and efficiencies are needed. Such machines also allow a reduced speed of rotation and radial spacing and offer better flexibility of use. However, this requires a better understanding of their operation, including the influence of the second rotor on the overall aerodynamic performance. This work consists of both experimental and numerical studies to characterize a counter-rotating fan, in particular the effects of the blade stagger angle and the inter-distance between the rotors. The experimental study served to validate the computational fluid dynamics (CFD) model used in the simulations. The numerical study covered a wider range of parameters and allowed a deeper investigation of flow structure details, including the effects of blade stagger angle and inter-distance associated with the interaction between the rotors. As a result, there is a clear improvement in aerodynamic performance compared with a conventional machine.

Keywords: aerodynamic performance, axial fan, counter rotating rotors, CFD, experimental study

Procedia PDF Downloads 159
5215 Service-Oriented Enterprise Architecture (SoEA) Adoption and Maturity Measurement Model: A Systematic Review

Authors: Nur Azaliah Abu Bakar, Harihodin Selamat, Mohd Nazri Kama

Abstract:

This article provides a systematic review of existing research related to Service-oriented Enterprise Architecture (SoEA) adoption and maturity measurement models. The review's main goals are to support research, to facilitate other researchers' searches for relevant studies, and to propose areas for future study in this field. In addition, the article provides useful, research-based information on SoEA adoption issues and related maturity models. The review results suggest that motives, critical success factors (CSFs), implementation status and benefits are the most frequently studied areas, and that each of these areas would benefit from further exposure.

Keywords: systematic literature review, service-oriented architecture, adoption, maturity model

Procedia PDF Downloads 324
5214 Monthly Labor Force Surveys Portray Smooth Labor Markets and Bias Fixed Effects Estimation: Evidence from Israel's Transition from Quarterly to Monthly Surveys

Authors: Haggay Etkes

Abstract:

This study provides evidence of the impact of the monthly interviews conducted for the Israeli Labor Force Surveys (LFSs) on estimated flows between labor force (LF) statuses and on coefficients in fixed-effects estimations. The study uses the natural experiment of parallel interviews for the quarterly and monthly LFSs in Israel in 2011 to demonstrate that the labor force participation (LFP) rate of Jewish persons who participated in the monthly LFS increased between interviews, while in the quarterly LFS it decreased. Interestingly, the estimated impact on the LFP rate of self-reporting individuals is 2.6–3.5 percentage points, while the impact on the LFP rate of individuals whose data were reported by another member of their household (a proxy) is lower and statistically insignificant. The relative increase of the LFP rate in the monthly survey results from a lower rate of exit from the LF and a somewhat higher rate of entry into the LF, relative to these flows in the quarterly survey. These differing flows have a bearing on labor search models, as the monthly survey portrays a labor market with less friction and a "steady state" LFP rate that is 5.9 percentage points higher than in the quarterly survey. The study also demonstrates that monthly interviews affect a specific group (45–64 year-olds); thus, the coefficient of age as an explanatory variable in fixed-effects regressions on LFP is negative in the monthly survey and positive in the quarterly survey.

Keywords: measurement error, surveys, search, LFSs

Procedia PDF Downloads 270
5213 Real-Time Radar Tracking Based on Nonlinear Kalman Filter

Authors: Milca F. Coelho, K. Bousson, Kawser Ahmed

Abstract:

Accurately tracking an aerospace vehicle in a time-critical situation and in a highly nonlinear environment is one of the strongest interests within the aerospace community. Tracking is achieved by accurately estimating the state of a moving target, a set of variables that provides a complete description of the system at a given time. One of the main ingredients of good estimation performance is the use of efficient estimation algorithms. A well-known framework is Kalman filtering, designed for prediction and estimation problems. The success of the Kalman Filter (KF) in engineering applications is mostly due to the Extended Kalman Filter (EKF), which is based on local linearization. Despite its popularity, the EKF has several limitations. To address these limitations, and as a possible solution to tracking problems, this paper proposes the use of the Ensemble Kalman Filter (EnKF). Although the EnKF is used extensively in weather forecasting and is recognized for producing accurate and computationally efficient estimates for systems of very high dimension, it is almost unknown to the tracking community. The EnKF was initially proposed as an attempt to improve the error covariance calculation, which is difficult to implement in the classic Kalman Filter. In the EnKF, the prediction and analysis error covariances have ensemble representations; these ensembles have sizes that limit the number of degrees of freedom, so that the filter's error covariance calculations remain practical for modest ensemble sizes. In this paper, a realistic simulation of radar tracking was performed in which the EnKF was applied and compared with the Extended Kalman Filter. The results suggest that the EnKF is a promising tool for tracking applications, offering advantages in terms of performance.
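
A minimal EnKF sketch on a toy constant-velocity tracking problem, illustrating the ensemble-based covariance and the perturbed-observation update; it is not the paper's radar scenario.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, q, r = 1.0, 0.05, 2.0                      # time step, process/measurement noise
F = np.array([[1, dt], [0, 1]])                # constant-velocity dynamics
H = np.array([[1.0, 0.0]])                     # observe position only

truth = np.array([0.0, 1.0])
ens = rng.normal([0, 1], [5, 1], size=(100, 2))   # ensemble of 100 state samples

for _ in range(50):
    truth = F @ truth + rng.normal(0, np.sqrt(q), 2)
    z = H @ truth + rng.normal(0, np.sqrt(r))
    # Forecast step: propagate each ensemble member with process noise
    ens = ens @ F.T + rng.normal(0, np.sqrt(q), ens.shape)
    # Analysis step: the sample covariance replaces the EKF's linearized one
    P = np.cov(ens.T)
    S = H @ P @ H.T + r
    K = P @ H.T / S                            # Kalman gain from the ensemble
    z_pert = z + rng.normal(0, np.sqrt(r), len(ens))   # perturbed observations
    ens = ens + (z_pert[:, None] - ens @ H.T) * K.T

print("truth:", truth.round(2), " ensemble mean:", ens.mean(axis=0).round(2))
```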

Keywords: Kalman filter, nonlinear state estimation, optimal tracking, stochastic environment

Procedia PDF Downloads 147
5212 Error Analysis in the English Essay Writing of Thai Students with Different English Language Experiences

Authors: Sirirat Choophan Atthaphonphiphat

Abstract:

The objective of the study is to analyze errors in the English essay writing of Thai students (at Suratthani Rajabhat University) with different English language experiences. Sixteen subjects were divided into two groups according to their English language experience. The data were collected from English essays on the topic 'My daily life'. The findings show that 275 error tokens were found in 240 English sentences. The errors were categorized into four types, ordered by frequency count: grammatical errors, mechanical errors, lexical errors, and structural errors. The findings support all of the researcher's hypotheses: 1) students with little English language experience made more errors than those with extensive English language experience; 2) among the errors in the English essay writing of Suratthani Rajabhat University students, interlingual errors outnumber intralingual ones; and 3) systemic and structural differences between English (the target language) and Thai (the mother tongue) lead to errors in the students' English essay writing.

Keywords: applied linguistics, error analysis, interference, language transfer

Procedia PDF Downloads 622