Search results for: uncertainty and error visualisation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2859

2139 Evaluation of Ceres Wheat and Rice Model for Climatic Conditions in Haryana, India

Authors: Mamta Rana, K. K. Singh, Nisha Kumari

Abstract:

Simulation models, with their interacting soil-weather-plant-atmosphere systems, are important tools for assessing crops under changing climate conditions. The CERES-Wheat and CERES-Rice models (v. 4.6, DSSAT) were calibrated and evaluated for one of the major wheat- and rice-producing states of India, Haryana. The simulation runs were made under irrigated conditions and three N-P-K fertilizer application doses to estimate crop yield and other growth parameters along with the phenological development of the crop. The genetic coefficients were derived by iteratively adjusting the relevant coefficients that characterize the phenological processes of the wheat and rice crops until the best match was obtained between the simulated and observed anthesis, physiological maturity, and final grain yield. The model was validated by plotting the simulated LAI against remote-sensing-derived LAI. The LAI product from remote sensing provides the advantage of spatially explicit, timely, and accurate crop assessment. For validating the yield and yield components, the error percentage between the observed and simulated data was calculated. The analysis shows that the model can be used to simulate crop yield and yield components for wheat and rice cultivars under different management practices. During validation, the error percentage was less than 10%, indicating the utility of the calibrated model for climate risk assessment in the selected region.
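
A minimal sketch of the error-percentage check described above, with hypothetical observed and simulated yields (not the study's data):

```python
# Error-percentage validation sketch; yield values below are illustrative only.
observed_yield = {"wheat": 4200.0, "rice": 5600.0}   # kg/ha, hypothetical
simulated_yield = {"wheat": 4010.0, "rice": 5910.0}  # kg/ha, hypothetical

for crop in observed_yield:
    obs, sim = observed_yield[crop], simulated_yield[crop]
    error_pct = abs(obs - sim) / obs * 100.0
    status = "acceptable (<10%)" if error_pct < 10.0 else "re-calibrate"
    print(f"{crop}: error = {error_pct:.1f}% -> {status}")
```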

Keywords: simulation model, CERES-wheat and rice model, crop yield, genetic coefficient

Procedia PDF Downloads 305
2138 A Geo DataBase to Investigate the Maximum Distance Error in Quality of Life Studies

Authors: Paolino Di Felice

Abstract:

The background and significance of this study come from papers already published in the literature which measured the impact of public services (e.g., hospitals, schools, ...) on the satisfaction of citizens’ needs (one of the dimensions of QOL studies) by calculating the distance between the place where citizens live and the location of the services on the territory. Those studies assume that the citizens' dwelling coincides with the centroid of the polygon that expresses the boundary of the administrative district, within the city, to which they belong. Such an assumption “introduces a maximum measurement error equal to the greatest distance between the centroid and the border of the administrative district.”. The case study reported in this abstract investigates the implications of adopting such an approach at geographical scales greater than the urban one, namely at the three levels of nesting of the Italian administrative units: the (20) regions, the (110) provinces, and the 8,094 municipalities. To carry out this study, it had to be decided: a) how to store the huge amount of (spatial and descriptive) input data and b) how to process them. The latter aspect involves: b.1) the design of algorithms to investigate the geometry of the boundary of the Italian administrative units; b.2) their coding in a programming language; b.3) their execution and, eventually, b.4) archiving the results in permanent storage. The IT solution we implemented is centered around a (PostgreSQL/PostGIS) Geo DataBase structured in terms of three tables that fit the hierarchy of nesting of the Italian administrative units: municipality(id, name, provinceId, istatCode, regionId, geometry); province(id, name, regionId, geometry); region(id, name, geometry). The adoption of the DBMS technology allows us to implement steps "a)" and "b)" easily. In particular, step "b)" is simplified dramatically by calling spatial operators and spatial built-in User Defined Functions within SQL queries against the Geo DB. The major findings coming from our experiments can be summarized as follows. The approximation that, on average, results from assimilating the residence of the citizens to the centroid of the administrative unit of reference is of a few kilometers (4.9) at the municipality level, while it becomes conspicuous at the other two levels (28.9 and 36.1 km, respectively). Therefore, studies such as those mentioned above can be extended up to the municipal level without affecting the correctness of the interpretation of the results, but not further. The IT framework implemented to carry out the experiments can be replicated for studies referring to the territory of other countries all over the world.
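
The study performs this computation with PostGIS spatial functions inside SQL queries against the Geo DB; the following stand-alone sketch shows the same geometric quantity (maximum centroid-to-boundary distance) computed with shapely on a made-up administrative-unit polygon:

```python
# Coordinates below are illustrative only, not real administrative boundaries.
from shapely.geometry import Polygon, Point

unit = Polygon([(0, 0), (12, 0), (14, 6), (7, 11), (-2, 7)])  # hypothetical boundary (km)
centroid = unit.centroid

# The maximum centroid-to-boundary distance of a polygon is attained at a vertex.
max_error = max(centroid.distance(Point(v)) for v in unit.exterior.coords)
print(f"centroid: ({centroid.x:.2f}, {centroid.y:.2f}), max distance error: {max_error:.2f} km")
```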

Keywords: quality of life, distance measurement error, Italian administrative units, spatial database

Procedia PDF Downloads 371
2137 Confidence Envelopes for Parametric Model Selection Inference and Post-Model Selection Inference

Authors: I. M. L. Nadeesha Jayaweera, Adao Alex Trindade

Abstract:

In choosing a candidate model in likelihood-based modeling via an information criterion, the practitioner is often faced with the difficult task of deciding just how far up the ranked list to look. Motivated by this pragmatic necessity, we construct an uncertainty band for a generalized (model selection) information criterion (GIC), defined as a criterion for which the limit in probability is identical to that of the normalized log-likelihood. This includes common special cases such as AIC and BIC. The method starts from the asymptotic normality of the GIC for the joint distribution of the candidate models in an independent and identically distributed (IID) data framework and proceeds by deriving the (asymptotically) exact distribution of the minimum. The calculation of an upper quantile for its distribution then involves the computation of multivariate Gaussian integrals, which is amenable to efficient implementation via the R package "mvtnorm". The performance of the methodology is tested on simulated data by checking the coverage probability of nominal upper quantiles and compared to the bootstrap. Both methods give coverages close to nominal for large samples, but the bootstrap is two orders of magnitude slower. The methodology is subsequently extended to two other commonly used model structures: regression and time series. In the regression case, we derive the corresponding asymptotically exact distribution of the minimum GIC by invoking Lindeberg-Feller type conditions for triangular arrays and are thus able to similarly calculate upper quantiles for its distribution via multivariate Gaussian integration. The bootstrap once again provides a default competing procedure, and we find that similar comparative performance holds as for the IID case. The time series case is complicated by a far more intricate asymptotic regime for the joint distribution of the model GIC statistics. Under a Gaussian likelihood, the default in most packages, one needs to derive the limiting distribution of a normalized quadratic form for a realization from a stationary series. Under conditions on the process satisfied by ARMA models, a multivariate normal limit is once again achieved. The bootstrap can, however, be employed for its computation, whence we are once again in the multivariate Gaussian integration paradigm for upper quantile evaluation. Comparisons of this bootstrap-aided semi-exact method with the full-blown bootstrap once again reveal similar performance but faster computation speeds. One of the most difficult problems in contemporary statistical methodological research is to account for the extra variability introduced by model selection uncertainty, the so-called post-model selection inference (PMSI). We explore ways in which the GIC uncertainty band can be inverted to make inferences on the parameters. This is attempted in the IID case by pivoting the CDF of the asymptotically exact distribution of the minimum GIC. For inference on one parameter at a time and a small number of candidate models, this works well, although the attained PMSI confidence intervals are wider than the MLE-based Wald intervals, as expected.
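
A minimal sketch of the quantile computation described above: an upper quantile of the minimum of a jointly Gaussian vector of (normalized) GIC values is obtained from multivariate normal integrals. The paper uses the R package "mvtnorm"; scipy's multivariate_normal is used here instead, and the mean vector and covariance matrix are hypothetical placeholders:

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.optimize import brentq

mu = np.array([1.0, 1.3, 1.8])                       # hypothetical GIC means
cov = np.array([[1.0, 0.5, 0.3],
                [0.5, 1.0, 0.4],
                [0.3, 0.4, 1.0]])                     # hypothetical covariance

def cdf_min(t):
    # P(min_i X_i <= t) = 1 - P(X_1 > t, ..., X_k > t) = 1 - P(-X <= -t*1)
    return 1.0 - multivariate_normal.cdf(np.full(len(mu), -t), mean=-mu, cov=cov)

alpha = 0.05
upper_quantile = brentq(lambda t: cdf_min(t) - (1.0 - alpha), -10.0, 10.0)
print(f"{100*(1-alpha):.0f}% upper quantile of the minimum GIC: {upper_quantile:.3f}")
```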

Keywords: model selection inference, generalized information criteria, post model selection, asymptotic theory

Procedia PDF Downloads 89
2136 Localization of Buried People Using Received Signal Strength Indication Measurement of Wireless Sensor

Authors: Feng Tao, Han Ye, Shaoyi Liao

Abstract:

Buildings collapse after earthquakes, and people are buried under the ruins. Search and rescue should be conducted as soon as possible to save them. Therefore, considering the complicated environment, irregular aftershocks, and the fact that rescue allows no delay, a target localization method based on RSSI (Received Signal Strength Indication) is proposed in this article. Target localization technology based on RSSI, with its features of low cost and low complexity, has been widely applied to node localization in WSN (Wireless Sensor Networks). Based on the theory of RSSI transmission and the environmental impact on RSSI, experiments were conducted in five scenes, and multiple filtering algorithms were applied to the original RSSI values in order to establish, for each scene, the signal propagation model with minimum test error. The target location can then be calculated, through an improved centroid algorithm, from the distances estimated with the signal propagation model. Results show that localization technology based on RSSI is suitable for large-scale node localization. Among the filtering algorithms, the mixed filtering algorithm (average of the average, median, and Gaussian filtering) performs better than any single filtering algorithm, and, by using the signal propagation model, the minimum error of the distance between the known nodes and the target node in the five scenes is about 3.06 m.
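
A sketch of the RSSI ranging and centroid step described above. The log-distance path-loss model and its parameters (A, n), the filtered RSSI readings, and the anchor positions are all assumptions made for illustration:

```python
import numpy as np

A, n = -45.0, 2.7   # hypothetical: RSSI at 1 m (dBm) and path-loss exponent

def rssi_to_distance(rssi_dbm):
    """Invert the log-distance model RSSI = A - 10*n*log10(d)."""
    return 10 ** ((A - rssi_dbm) / (10.0 * n))

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0]])  # known nodes (m)
rssi = np.array([-62.0, -58.0, -70.0, -66.0])   # mixed-filtered RSSI values (hypothetical)

d = rssi_to_distance(rssi)
weights = 1.0 / d              # improved (weighted) centroid: nearer anchors weigh more
estimate = (weights[:, None] * anchors).sum(axis=0) / weights.sum()
print("estimated target position:", np.round(estimate, 2))
```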

Keywords: signal propagation model, centroid algorithm, localization, mixed filtering, RSSI

Procedia PDF Downloads 300
2135 Classification of Barley Varieties by Artificial Neural Networks

Authors: Alper Taner, Yesim Benal Oztekin, Huseyin Duran

Abstract:

In this study, an Artificial Neural Network (ANN) was developed in order to classify barley varieties. For this purpose, physical properties of barley varieties were determined and ANN techniques were used. The physical properties of 8 barley varieties grown in Turkey, namely thousand kernel weight, geometric mean diameter, sphericity, kernel volume, surface area, bulk density, true density, porosity and colour parameters of the grain, were determined, and it was found that these properties were statistically significant with respect to variety. Three ANN models, N-1, N-2 and N-3, were constructed and their performances were compared. It was determined that the best-fit model was N-1. In the N-1 model, the structure was designed with an input layer of 11 neurons, 2 hidden layers and 1 output layer. Thousand kernel weight, geometric mean diameter, sphericity, kernel volume, surface area, bulk density, true density, porosity and colour parameters of the grain were used as input parameters, and variety as the output parameter. R², Root Mean Square Error and Mean Error for the N-1 model were found to be 99.99%, 0.00074 and 0.009%, respectively. All results obtained by the N-1 model were observed to be quite consistent with the real data. With this model, it would be possible to construct automation systems for classification and cleaning in flour mills.
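
A minimal sketch of the N-1 topology described above (11 inputs, two hidden layers, one output over the 8 varieties), using scikit-learn's MLPClassifier. The feature matrix is random placeholder data rather than the measured grain properties, and the hidden-layer sizes are assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(240, 11))          # 11 physical/colour features per sample (placeholder)
y = rng.integers(0, 8, size=240)        # 8 barley varieties (placeholder labels)

X = StandardScaler().fit_transform(X)
model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(X, y)
print("training accuracy (placeholder data):", model.score(X, y))
```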

Keywords: physical properties, artificial neural networks, barley, classification

Procedia PDF Downloads 178
2134 Of an 80 Gbps Passive Optical Network Using Time and Wavelength Division Multiplexing

Authors: Malik Muhammad Arslan, Muneeb Ullah, Dai Shihan, Faizan Khan, Xiaodong Yang

Abstract:

Internet service providers face endless demands for higher bandwidth and data throughput as new services and applications appear, and users want immediate and accurate data delivery. This article focuses on converting old conventional networks into passive optical networks based on time-division and wavelength-division multiplexing. The main focus of this research is to use a hybrid of time-division multiplexing and wavelength-division multiplexing to improve network efficiency and performance. In this paper, we design an 80 Gbps Passive Optical Network (PON), which meets the requirements of Next Generation PON Stage 2 (NGPON2). According to the Full Service Access Network (FSAN) group, the hybrid of time and wavelength division multiplexing (TWDM) is considered the best solution for the implementation of NGPON2. To co-exist with or replace the current PON technologies, many wavelengths of the TWDM scheme can be implemented simultaneously. Eight pairs of wavelengths are multiplexed and transmitted over optical fiber for 40 km and, on the receiving side, distributed among 256 users, which shows that the solution is reliable for implementation with an acceptable data rate. From the results, it can be concluded that the overall performance, quality factor, and bandwidth of the network are increased, and the bit error rate is minimized by the integration of this approach.
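
A back-of-the-envelope check of the figures quoted above, under the assumption (not stated in the abstract, which only gives the aggregate rate) that each of the 8 multiplexed wavelength pairs carries a 10 Gbps channel:

```python
wavelength_pairs = 8
rate_per_pair_gbps = 10          # assumed per-channel line rate
users = 256
aggregate_gbps = wavelength_pairs * rate_per_pair_gbps
print(f"aggregate: {aggregate_gbps} Gbps over 40 km")
print(f"average per user if fully shared: {aggregate_gbps * 1000 / users:.1f} Mbps")
```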

Keywords: bit error rate, fiber to the home, passive optical network, time and wavelength division multiplexing

Procedia PDF Downloads 70
2133 Fire Safety Assessment of At-Risk Groups

Authors: Naser Kazemi Eilaki, Carolyn Ahmer, Ilona Heldal, Bjarne Christian Hagen

Abstract:

Older people and people with disabilities are recognized as at-risk groups when it comes to egress and travel from a hazard zone to a safe place. A person's disability can negatively influence his or her escape time, and this becomes even more important when people from this target group live alone. This research deals with the fire safety of such people's buildings by means of probabilistic methods. For this purpose, fire safety is addressed by modeling the egress of our target group from a hazardous zone to a safe zone. A common type of detached house with a prevalent plan has been chosen for the safety analysis, and a limit state function has been developed according to the time-line evacuation model, which is based on a two-zone smoke development model. An analytical computer model (B-Risk) is used to consider smoke development. Since most of the parameters involved in the fire development model carry uncertainty, an appropriate probability distribution function has been considered for each of the variables with an indeterministic nature. To assess the safety and reliability of the at-risk groups, the fire safety index method has been chosen to define the probability of failure (casualties) and the safety index (beta index). An improved harmony search meta-heuristic optimization algorithm has been used to determine the beta index. Sensitivity analysis has been performed to identify the most important and effective parameters for the fire safety of the at-risk group. Results showed that the area of openings and the distance to egress exits are the more important parameters in buildings, and that the safety of occupants improves with increasing dimensions of the occupant space (building). Fire growth is more critical than other parameters in a home without a detector and fire extinguishing system, but in a home equipped with these facilities it is less important. The type of disability has a great effect on the safety level of people who live in the same home layout, and people with visual impairment face a greater risk of being trapped compared to people with other types of disabilities.
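
A minimal Monte Carlo sketch of the reliability quantities described above: a time-line limit state g = ASET - RSET (available minus required safe egress time), the probability of failure, and the corresponding beta index. All distributions and parameters are hypothetical placeholders, and plain Monte Carlo sampling stands in for the harmony-search procedure used in the paper:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 200_000
aset = rng.lognormal(mean=np.log(300), sigma=0.25, size=n)         # s, hypothetical smoke-model output
pre_movement = rng.lognormal(mean=np.log(90), sigma=0.4, size=n)   # s, hypothetical
travel = rng.normal(loc=120, scale=30, size=n)                     # s, hypothetical (slower for at-risk groups)

g = aset - (pre_movement + travel)      # limit state: failure when g < 0
pf = np.mean(g < 0)
beta = norm.ppf(1.0 - pf)
print(f"P(failure) = {pf:.4f}, beta index = {beta:.2f}")
```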

Keywords: fire safety, at-risk groups, zone model, egress time, uncertainty

Procedia PDF Downloads 103
2132 Impact Position Method Based on Distributed Structure Multi-Agent Coordination with JADE

Authors: YU Kaijun, Liang Dong, Zhang Yarong, Jin Zhenzhou, Yang Zhaobao

Abstract:

For the impact monitoring of distributed structures, the traditional positioning methods are based on the time difference; these include the four-point arc positioning method and the triangulation positioning method. In actual operation, however, both methods have errors. In this paper, the Multi-Agent Blackboard Coordination Principle is used to combine the two methods. Fusion steps: (1) The four-point arc locating agent calculates the initial point and records it in the blackboard module. (2) The triangulation agent gets its initial parameters by accessing the initial point. (3) The triangulation agent constantly accesses the blackboard module to update its initial parameters, and it also logs its calculated point into the blackboard. (4) When the subsequent calculation point and the initial calculation point are within the allowable error, the whole coordination fusion process is finished. This paper presents a Multi-Agent collaboration method whose agent framework is JADE. The JADE platform consists of several agent containers, with an agent running in each container. Because of the management and debugging tools of JADE, it is very convenient to deal with complex data in a large structure. Finally, based on the data in JADE, the results show that the impact location method based on Multi-Agent coordination fusion can reduce the error of the two methods.
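
A language-agnostic sketch of the blackboard fusion loop in steps (1)-(4) above. The paper implements the agents on the JADE (Java) platform; here only the coordination logic is shown in Python, with stand-in localization functions (arc_locate, triangulate) that are hypothetical placeholders, and the stopping rule simplified to agreement between successive estimates:

```python
import math

blackboard = {}

def arc_locate():
    return (1.20, 0.80)                       # placeholder four-point-arc estimate

def triangulate(seed):
    x, y = seed
    return (0.7 * x + 0.3 * 1.00, 0.7 * y + 0.3 * 1.10)  # placeholder refinement

blackboard["initial"] = arc_locate()          # step (1): arc agent posts its initial point
point = blackboard["initial"]                 # step (2): triangulation agent reads it

tolerance = 1e-3
for _ in range(100):                          # step (3): iterate, logging each estimate
    new_point = triangulate(point)
    blackboard["latest"] = new_point
    if math.dist(new_point, point) < tolerance:   # step (4): stop within the allowable error
        break
    point = new_point

print("fused impact position estimate:", blackboard["latest"])
```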

Keywords: impact monitoring, structural health monitoring (SHM), multi-agent system (MAS), blackboard coordination, JADE

Procedia PDF Downloads 178
2131 Robust Control of a Parallel 3-RRR Robotic Manipulator via μ-Synthesis Method

Authors: A. Abbasi Moshaii, M. Soltan Rezaee, M. Mohammadi Moghaddam

Abstract:

Control of some mechanisms is difficult because of their complex dynamic equations. If part of the complexity results from uncertainties, an efficient way of dealing with it is robust control. In this way, the control procedure can be simple and fast and, finally, a simple controller can be designed. One such mechanism is the 3-RRR, which is a parallel mechanism with three revolute joints. This paper aims to robustly control a 3-RRR planar mechanism and shows that the approach could be used for other mechanisms, so that a significant problem in mechanism control can be solved. The relevant diagrams are drawn, and they show the correctness of the control process.

Keywords: 3-RRR, dynamic equations, mechanisms control, structural uncertainty

Procedia PDF Downloads 558
2130 Relationship between Electricity Consumption and Economic Growth: Evidence from Nigeria (1971-2012)

Authors: N. E Okoligwe, Okezie A. Ihugba

Abstract:

Few scholars disagree that electricity consumption is an important supporting factor for economic growth. However, the relationship between electricity consumption and economic growth manifests differently in different countries, according to previous studies. This paper examines the causal relationship between electricity consumption and economic growth for Nigeria. In an attempt to do this, the paper tests the validity of the modernization or dependence hypothesis by employing various econometric tools such as the Augmented Dickey-Fuller (ADF) and Johansen co-integration tests, the Error Correction Mechanism (ECM) and the Granger causality test on time series data from 1971-2012. Granger causality is found not to run from electricity consumption to real GDP, nor from GDP to electricity consumption, during the period of study. The null hypothesis is accepted at the 5 per cent level of significance, since the probability values (0.2251 and 0.8251) are greater than 5 per cent. Both variables are probably determined by other factors, such as the increase in the urban population, the unemployment rate, and the number of Nigerians who benefit from the increase in GDP; likewise, the increase in electricity demand is not determined by the increase in GDP (income) over the period of study, because electricity demand has always been greater than consumption. Consequently, policy makers in Nigeria should give priority, in the early stages of reconstruction, to building capacity additions and infrastructure development in the electric power sector, as this would drive sustainable economic growth in Nigeria.
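
A minimal sketch of the unit-root and Granger causality steps described above, using statsmodels on synthetic placeholder series (the 1971-2012 Nigerian electricity and GDP data are not reproduced here):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller, grangercausalitytests

rng = np.random.default_rng(42)
n = 42                                    # 1971-2012 gives 42 annual observations
gdp_growth = rng.normal(3.0, 1.5, n)      # placeholder series
elec_growth = rng.normal(2.0, 2.0, n)     # placeholder series
data = pd.DataFrame({"gdp": gdp_growth, "elec": elec_growth})

print("ADF p-value, gdp:", adfuller(data["gdp"])[1])

# Does electricity consumption Granger-cause GDP? (tests whether the 2nd column causes the 1st)
results = grangercausalitytests(data[["gdp", "elec"]], maxlag=2)
```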

Keywords: economic growth, electricity consumption, error correction mechanism, Granger causality test

Procedia PDF Downloads 310
2129 Research on Pilot Sequence Design Method of Multiple Input Multiple Output Orthogonal Frequency Division Multiplexing System Based on High Power Joint Criterion

Authors: Linyu Wang, Jiahui Ma, Jianhong Xiang, Hanyu Jiang

Abstract:

For the pilot design of the sparse channel estimation model in Multiple Input Multiple Output Orthogonal Frequency Division Multiplexing (MIMO-OFDM) systems, the observation matrices constructed according to the matrix cross-correlation criterion, the total correlation criterion and other optimization criteria are not optimal, resulting in inaccurate channel estimation and a high bit error rate at the receiver. This paper proposes a pilot design method combining the high-power sum and high-power variance criteria, which can estimate the channel more accurately. First, the pilot insertion positions are designed according to the high-power variance criterion under the condition of equal power. Then, according to the high-power sum criterion, the pilot power allocation is converted into a cone programming problem, and the power allocation is carried out. Finally, the optimal pilot is determined by calculating the weighted sum of the high-power sum and the high-power variance. Compared with the traditional pilot under the same conditions, the constructed MIMO-OFDM system uses the optimal pilot for channel estimation, and the communication bit error rate performance obtains a gain of 6-7 dB.

Keywords: MIMO-OFDM, pilot optimization, compressed sensing, channel estimation

Procedia PDF Downloads 149
2128 The Application of Robotic Surgical Approaches in the Management of Midgut Neuroendocrine Tumours: A Systematic Review

Authors: Jatin Sridhar Naidu, Aryan Arora, Zainab Shafiq, Reza Mirnezami

Abstract:

Background: Robotic-assisted surgery (RAS) promises good outcomes in midgut adenocarcinoma surgery. However, its effectiveness in midgut neuroendocrine tumours (MNETs) is unknown. This study aimed to assess the current use, user interface, and any emerging developments of RAS in MNET treatment using the available literature. Methods: This review was carried out using PRISMA guidelines. MEDLINE, EMBASE, and Web of Science were searched on 22nd October 2022. All studies reporting primary data on robotic surgery in midgut neuroendocrine tumours or carcinoid tumours were included. The midgut was defined as extending from the duodenojejunal flexure to the splenic flexure. Methodological quality was assessed using the Joanna Briggs critical appraisal tool. Results: According to our systematic review protocol, nineteen studies were selected. A total of twenty-six patients were identified. RAS was used for right colectomies, right hemicolectomies, ileal resections, caecal resections, intracorporeal anastomoses, and complete mesocolic excisions. It offered an optimal user interface with enhanced visuals, fine dexterity, and an ergonomic working position. Innovative developments in the visualisation of tumour-healthy tissue boundaries and vasculature were reported. Conclusion: RAS for MNETs is safe and feasible, although the evidence base is limited. We recommend large prospective randomised controlled trials comparing it with laparoscopy and open surgery. Developments in intraoperative contrast dyes and tumour-specific probes are very promising.

Keywords: robotic surgery, colorectal surgery, neuroendocrine neoplasms, midgut neoplasms

Procedia PDF Downloads 88
2127 Usage of the Point Analysis Algorithm (SANN) in Drought Analysis

Authors: Khosro Shafie Motlaghi, Amir Reza Salemian

Abstract:

In arid and semi-arid regions such as our country, evapotranspiration accounts for the greatest portion of the water resource. Therefore, knowledge of its changes and of other climate parameters plays an important role in the planning, development, and management of water resources. In this study, the long-term trends of evapotranspiration (ET0), average temperature, and monthly rainfall were tested. To do so, all synoptic stations in Iran were classified according to the De Martonne climate classification. The present research was done in the semi-arid climate of Iran, in which 14 synoptic stations with 30-year records were investigated with three methods: least squares error, Mann-Kendall, and Wald-Wolfowitz. Evapotranspiration was calculated using the FAO Penman method. The results over the statistical period show that the trend of the evapotranspiration parameter is positive for 24 percent of the stations, negative for 2 percent, and without any trend for 47 percent. Similarly, the trend of the temperature parameter is positive for 22 percent of the stations, negative for 19 percent, and without any trend for 64 percent. The results of the rainfall trend analysis show that the amount of rainfall in most stations does not display a meaningful trend. The results of the Mann-Kendall method are similar to those of the least squares error method. Based on the results obtained, it can be stated that in future years some regions will face increases in temperature and evapotranspiration.
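
A minimal implementation of the Mann-Kendall trend test mentioned above, applied to a placeholder annual ET0 series (the station data are not reproduced, and the variance formula ignores ties, which is adequate for a sketch):

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0            # no-ties variance
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2 * (1 - norm.cdf(abs(z)))                       # two-sided p-value
    trend = "increasing" if z > 0 else "decreasing" if z < 0 else "no trend"
    return z, p, trend

rng = np.random.default_rng(7)
et0 = 1200 + 2.5 * np.arange(30) + rng.normal(0, 40, 30)  # hypothetical 30-year ET0 series (mm)
print(mann_kendall(et0))
```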

Keywords: analysis, algorithm, SANN, ET0

Procedia PDF Downloads 296
2126 Model Averaging for Poisson Regression

Authors: Zhou Jianhong

Abstract:

Model averaging is a desirable approach to dealing with model uncertainty, which, however, has rarely been explored for Poisson regression. In this paper, we propose a model averaging procedure based on an unbiased estimator of the expected Kullback-Leibler distance for the Poisson regression. A simulation study shows that the proposed model average estimator outperforms some other commonly used model selection and model average estimators in some situations. Our proposed method is further applied to a real data example, and the advantage of this method is demonstrated again.

Keywords: model averaging, Poisson regression, Kullback-Leibler distance, statistics

Procedia PDF Downloads 520
2125 Error Analysis of Pronunciation of French by Sinhala Speaking Learners

Authors: Chandeera Gunawardena

Abstract:

The present research analyzes the pronunciation errors encountered by thirty Sinhala-speaking learners of French, on the assumption that the pronunciation errors are systematic and reflect the interference of the learners' native language. The thirty participants were selected using a random sampling method. At the time of the study, the subjects were studying French as a foreign language for their Bachelor of Arts degree at the University of Kelaniya, Sri Lanka. The participants were from a homogeneous linguistic background: all participants speak the same native language (Sinhala), had completed their secondary education in the Sinhala medium, and during it had also learnt French as a foreign language. A battery-operated audio tape recorder and 120-minute blank cassettes were used for recording. A list of 60 words representing all French phonemes was used to diagnose pronunciation difficulties. Before the recording process commenced, the subjects were requested to familiarize themselves with the words by reading them several times. The recording was conducted individually in a quiet classroom, and each recording took approximately fifteen minutes. Each subject was required to read at a normal speed. After the completion of the recording, the recordings were replayed to identify common errors, which were immediately transcribed using the International Phonetic Alphabet. Results show that Sinhala-speaking learners face problems with French nasal vowels and French initial consonant clusters. The learners also exhibit errors which occur because of interference from their second language (English).

Keywords: error analysis, pronunciation difficulties, pronunciation errors, Sinhala speaking learners of French

Procedia PDF Downloads 210
2124 Observationally Constrained Estimates of Aerosol Indirect Radiative Forcing over Indian Ocean

Authors: Sofiya Rao, Sagnik Dey

Abstract:

Aerosol-cloud-precipitation interaction continues to be one of the largest sources of uncertainty in quantifying aerosol climate forcing. The uncertainty increases from the global to the regional scale. This problem remains unresolved due to the large discrepancies in the representation of cloud processes in climate models. Most of the studies on aerosol-cloud-climate interaction and aerosol-cloud-precipitation over the Indian Ocean (e.g., the INDOEX and CAIPEEX campaigns) are restricted either to a particular season or to a particular region. Here we developed a theoretical framework to quantify aerosol indirect radiative forcing using Moderate Resolution Imaging Spectroradiometer (MODIS) aerosol and cloud products over a 15-year period (2000-2015) over the Indian Ocean. This framework relies on an observationally constrained estimate of the aerosol-induced change in cloud albedo. We partitioned the change in cloud albedo into the change in Liquid Water Path (LWP) and Effective Radius of Clouds (Reff) in response to a change in aerosol optical depth (AOD). The cloud albedo response to an increase in AOD is most sensitive in the LWP range of 120-300 g/m² for Reff varying from 8-24 micrometers, which means cloud albedo is most sensitive to aerosols in this range of LWP and Reff. Using this framework, the aerosol forcing during a transition from the indirect to the semi-direct effect is also calculated. The outcome of this analysis shows the best results over the Arabian Sea in comparison with the Bay of Bengal and the South Indian Ocean because of the heterogeneity in aerosol species over the Arabian Sea. Over the Arabian Sea, more absorbing aerosols dominate during the winter season, while during the pre-monsoon season dust (coarse-mode aerosol particles) dominates. In winter and pre-monsoon, aerosol forcing is the more dominant, while during the monsoon and post-monsoon seasons meteorological forcing is the more dominant. Over the South Indian Ocean, more or less the same type of aerosol (sea salt) is present. Over the Arabian Sea, the aerosol indirect radiative forcing is about -5 ± 4.5 W/m² for the winter season, while in other seasons it is reduced. The results provide observationally constrained estimates of aerosol indirect forcing in the Indian Ocean, which can be helpful in evaluating climate model performance in the context of such complex interactions.
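
The partitioning described above can be written, as an assumed chain-rule decomposition (the abstract does not give the exact formulation used by the authors), as

$$\frac{d\alpha_c}{d\tau_a} \;=\; \frac{\partial \alpha_c}{\partial \mathrm{LWP}}\,\frac{d\,\mathrm{LWP}}{d\tau_a} \;+\; \frac{\partial \alpha_c}{\partial r_{\mathrm{eff}}}\,\frac{d r_{\mathrm{eff}}}{d\tau_a}$$

where $\alpha_c$ is the cloud albedo, $\tau_a$ the aerosol optical depth, LWP the liquid water path, and $r_{\mathrm{eff}}$ the cloud effective radius.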

Keywords: aerosol-cloud-precipitation interaction, aerosol-cloud-climate interaction, indirect radiative forcing, climate model

Procedia PDF Downloads 176
2123 In-Flight Aircraft Performance Model Enhancement Using Adaptive Lookup Tables

Authors: Georges Ghazi, Magali Gelhaye, Ruxandra Botez

Abstract:

Over the years, the Flight Management System (FMS) has experienced continuous improvement of its many features, to the point of becoming the pilot’s primary interface for flight planning operations on the airplane. With the assistance of the FMS, the concepts of distance and time have been completely revolutionized, providing the crew members with the determination of the optimized route (or flight plan) from the departure airport to the arrival airport. To accomplish this function, the FMS needs an accurate Aircraft Performance Model (APM) of the aircraft. In general, the APMs that equip most modern FMSs are established before the entry into service of an individual aircraft and result from the combination of a set of ordinary differential equations and a set of performance databases. Unfortunately, an aircraft in service is constantly exposed to dynamic loads that degrade its flight characteristics. These degradations have two main origins: airframe deterioration (control surface rigging, seals missing or damaged, etc.) and engine performance degradation (fuel consumption increase for a given thrust). Thus, after several years of service, the performance databases and the APM associated with a specific aircraft are no longer representative enough of the actual aircraft performance. It is important to monitor the trend of the performance deterioration and correct the uncertainties of the aircraft model in order to improve the accuracy of the flight management system predictions. The basis of this research lies in the new ability to continuously update an Aircraft Performance Model (APM) during flight using an adaptive lookup table technique. This methodology was developed and applied to the well-known Cessna Citation X business aircraft. For the purpose of this study, a level D Research Aircraft Flight Simulator (RAFS) was used as a test aircraft; according to the Federal Aviation Administration, level D is the highest certification level for flight dynamics modeling. Basically, using data available in the Flight Crew Operating Manual (FCOM), a first APM describing the variation of the engine fan speed and aircraft fuel flow with respect to flight conditions was derived. This model was then improved using the proposed methodology. To do that, several cruise flights were performed using the RAFS. An algorithm was developed to frequently sample the aircraft sensor measurements during the flight and compare the model predictions with the actual measurements. Based on these comparisons, a correction was applied to the current APM in order to minimize the error between the predicted data and the measured data. In this way, as the aircraft flies, the APM is continuously enhanced, making the FMS more and more precise and the prediction of trajectories more realistic and more reliable. The results obtained are very encouraging. Indeed, using the tables initialized with the FCOM data, only a few iterations were needed to reduce the fuel flow prediction error from an average relative error of 12% to 0.3%. Similarly, the error of the FCOM prediction for the engine fan speed was reduced from a maximum deviation of 5.0% to 0.2% after only ten flights.
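
A minimal sketch of the adaptive lookup-table idea described above: an in-flight fuel-flow table indexed by (altitude, Mach) bins is nudged toward sensor measurements each time a sample falls in a bin. The bin edges, initial table value and correction gain are hypothetical placeholders, not the Citation X data or the authors' exact update rule:

```python
import numpy as np

alt_bins = np.array([30_000, 35_000, 40_000, 45_000])      # ft (assumed bin edges)
mach_bins = np.array([0.70, 0.75, 0.80, 0.85])
fuel_flow_table = np.full((len(alt_bins), len(mach_bins)), 1500.0)  # lb/h, FCOM-like seed value

def update(table, altitude, mach, measured_flow, gain=0.2):
    i = np.clip(np.searchsorted(alt_bins, altitude) - 1, 0, len(alt_bins) - 1)
    j = np.clip(np.searchsorted(mach_bins, mach) - 1, 0, len(mach_bins) - 1)
    error = measured_flow - table[i, j]
    table[i, j] += gain * error            # move the table entry toward the measurement
    return error

# Simulated cruise samples: (altitude ft, Mach, measured fuel flow lb/h) - placeholders
for alt, mach, flow in [(41_000, 0.82, 1612.0), (41_000, 0.82, 1598.0), (41_000, 0.82, 1605.0)]:
    e = update(fuel_flow_table, alt, mach, flow)
    print(f"prediction error before update: {e:7.1f} lb/h")
```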

Keywords: aircraft performance, cruise, trajectory optimization, adaptive lookup tables, Cessna Citation X

Procedia PDF Downloads 264
2122 An Application of Vector Error Correction Model to Assess Financial Innovation Impact on Economic Growth of Bangladesh

Authors: Md. Qamruzzaman, Wei Jianguo

Abstract:

Over the past decade, it has been observed that financial development, through financial innovation, not only accelerates the development of an efficient and effective financial system but also acts as a catalyst in the economic development process. In this study, we explore how financial innovation drives economic growth in Bangladesh by using a Vector Error Correction Model (VECM) for the period 1990-2014. The cointegration test confirms the existence of a long-run association between financial innovation and economic growth. To investigate directional causality, we apply the Granger causality test; the estimation shows that long-run growth is affected by capital flow from non-bank financial institutions and by inflation in the economy, but changes in the growth rate do not have any long-run impact on capital flow in the economy or on the level of inflation. In contrast, growth and market capitalization, as well as market capitalization and capital flow, confirm the feedback hypothesis. Variance decomposition suggests that any innovation in the financial sector can cause fluctuations in GDP in both the long run and the short run. Financial innovation promotes efficiency and reduces the cost of financial transactions in the financial system, and can boost the economic development process. The study proposes two policy recommendations for further development. First, an innovation-friendly financial policy should be formulated to encourage the adoption and diffusion of financial innovation in the financial system. Second, the operation of the financial and capital markets should be regulated through the implementation of rules and regulations to create a conducive environment.
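
A minimal sketch of the VECM workflow described above (Johansen cointegration test, then VECM estimation), using statsmodels on synthetic placeholder series; the 1990-2014 Bangladesh data are not reproduced here:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, coint_johansen

rng = np.random.default_rng(3)
n = 25                                              # 1990-2014: 25 annual observations
common_trend = np.cumsum(rng.normal(0.5, 1.0, n))   # shared stochastic trend (placeholder)
data = pd.DataFrame({
    "gdp_growth":   common_trend + rng.normal(0, 0.5, n),
    "capital_flow": 0.8 * common_trend + rng.normal(0, 0.5, n),
    "inflation":    rng.normal(6.0, 1.0, n),
})

johansen = coint_johansen(data, det_order=0, k_ar_diff=1)
print("Johansen trace statistics:", johansen.lr1)

res = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="ci").fit()
print(res.summary())
```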

Keywords: financial innovation, economic growth, GDP, financial institution, VECM

Procedia PDF Downloads 272
2121 Neural Network Models for Actual Cost and Actual Duration Estimation in Construction Projects: Findings from Greece

Authors: Panagiotis Karadimos, Leonidas Anthopoulos

Abstract:

Predicting the actual cost and duration of construction projects is a continuing problem for the construction sector. This paper addresses this problem with modern methods and data available from past public construction projects. 39 bridge projects constructed in Greece, with a similar type of available data, were examined. Considering each project’s attributes together with the actual cost and the actual duration, correlation analysis is performed and the most appropriate predictive project variables are defined. Additionally, the most efficient subgroup of variables is selected with the use of the WEKA application, through its attribute selection function. The selected variables are then used as input neurons for the neural network models. For constructing the neural network models, the application FANN Tool is used. The optimum neural network model for predicting the actual cost produced a mean squared error of 3.84886e-05 and was based on the budgeted cost and the quantity of deck concrete. The optimum neural network model for predicting the actual duration produced a mean squared error of 5.89463e-05 and was also based on the budgeted cost and the quantity of deck concrete.

Keywords: actual cost and duration, attribute selection, bridge construction, neural networks, predicting models, FANN TOOL, WEKA

Procedia PDF Downloads 134
2120 A Comparative Study of Optimization Techniques and Models for Forecasting Dengue Fever

Authors: Sudha T., Naveen C.

Abstract:

Dengue is a serious public health issue that imposes significant annual economic and welfare burdens on nations. However, enhanced optimization techniques and quantitative modeling approaches can predict the incidence of dengue. By advocating a data-driven approach, public health officials can make informed decisions, thereby improving the overall effectiveness of sudden disease outbreak control efforts. The National Oceanic and Atmospheric Administration and the Centers for Disease Control and Prevention are two of the U.S. Federal Government agencies from which this study uses environmental data. Based on environmental data that describe changes in temperature, precipitation, vegetation, and other factors known to affect dengue incidence, several predictive models are constructed that use different machine learning methods to estimate weekly dengue cases. The first step involves preparing the data, which includes handling outliers and missing values to make sure the data are ready for subsequent processing and for the creation of an accurate forecasting model. In the second phase, multiple feature selection procedures are applied using various machine learning models and optimization techniques. In the third phase of the research, machine learning models such as the Huber Regressor, Support Vector Machine, Gradient Boosting Regressor (GBR), and Support Vector Regressor (SVR) are compared with several optimization techniques for feature selection, such as Harmony Search and the Genetic Algorithm. In the fourth stage, model performance is evaluated using Mean Square Error (MSE), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE). The goal is to select the optimization strategy with the fewest errors, lowest cost, greatest productivity, or best potential results. Optimization is widely employed in a variety of fields, including engineering, science, management, mathematics, finance, and medicine. An effective optimization method based on harmony search and an integrated genetic algorithm is introduced for input feature selection, and it shows an important improvement in the model's predictive accuracy. The predictive models with the Huber Regressor as the foundation perform the best for both optimization and prediction.
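
A minimal sketch of the model-comparison step described above: two of the named regressors are fitted and scored with MSE, MAE and RMSE. The feature matrix is random placeholder data, not the NOAA/CDC environmental series:

```python
import numpy as np
from sklearn.linear_model import HuberRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error, mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))                               # placeholder weekly climate features
y = 20 + 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(0, 2, 400)  # placeholder weekly case counts

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
for model in (HuberRegressor(), GradientBoostingRegressor(random_state=0)):
    pred = model.fit(X_tr, y_tr).predict(X_te)
    mse = mean_squared_error(y_te, pred)
    mae = mean_absolute_error(y_te, pred)
    print(f"{type(model).__name__}: MSE={mse:.2f}, MAE={mae:.2f}, RMSE={np.sqrt(mse):.2f}")
```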

Keywords: deep learning model, dengue fever, prediction, optimization

Procedia PDF Downloads 65
2119 Performance Analysis of Geophysical Database Referenced Navigation: The Combination of Gravity Gradient and Terrain Using Extended Kalman Filter

Authors: Jisun Lee, Jay Hyoun Kwon

Abstract:

As an alternative way to compensate for the INS (inertial navigation system) error in a non-GNSS (Global Navigation Satellite System) environment, geophysical database referenced navigation is being studied. In this study, gravity gradient and terrain data were combined to complement the weakness of a single geophysical data source as well as to improve the stability of the positioning. The main process to compensate for the INS error using the geophysical database was constructed on the basis of the EKF (Extended Kalman Filter). In detail, two types of combination methods, a centralized and a decentralized filter, were applied to check the pros and cons of each algorithm and to find more robust results. The performance of each navigation algorithm was evaluated in simulation by supposing that the aircraft flies, with a precise geophysical DB and sensors, along nine different trajectories. In particular, the results were compared to those from navigation referenced to a single geophysical database to check the improvement due to the combination of heterogeneous geophysical databases. It was found that the overall navigation performance was improved, but not all trajectories produced better navigation results with the combination of gravity gradient and terrain data. It was also found that the centralized filter generally showed more stable results. This is because the weight allocation for the decentralized filter could not be optimized due to local inconsistencies in the geophysical data. In the future, switching between geophysical data sources or combining different navigation algorithms will be necessary to obtain more robust navigation results.
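
A generic sketch of the EKF measurement update underlying the centralized combination described above: gravity gradient and terrain measurements (compared against the database at the predicted position) correct an INS position-error state. The matrices and values are hypothetical placeholders, not the study's navigation filter:

```python
import numpy as np

x = np.zeros(3)                         # INS error state: [dN, dE, dD] (m), predicted
P = np.diag([100.0, 100.0, 225.0])      # predicted error covariance

H = np.array([[0.02, 0.01, 0.0],        # linearized gravity-gradient sensitivity (assumed)
              [0.0,  0.0,  1.0]])       # terrain height observes the vertical error
R = np.diag([0.5, 25.0])                # measurement noise (gradient, terrain) (assumed)
z = np.array([0.3, -4.0])               # measured-minus-database innovation (placeholder)

# Standard EKF measurement update
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)
x = x + K @ (z - H @ x)
P = (np.eye(3) - K @ H) @ P
print("corrected INS error estimate:", np.round(x, 2))
```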

Keywords: Extended Kalman Filter, geophysical database referenced navigation, gravity gradient, terrain

Procedia PDF Downloads 349
2118 Novel Animal Drawn Wheel-Axle Mechanism Actuated Knapsack Boom Sprayer

Authors: Ibrahim O. Abdulmalik, Michael C. Amonye, Mahdi Makoyo

Abstract:

The manual knapsack sprayer is the most popular means of farm spraying in Nigeria, but it has its limitations. Apart from the human fatigue, which leads to unsteady walking steps, its field capacity is small: it barely covers about 0.2 hectare per hour. The small swath implies that a sizeable farm would take several days to cover. Weather changes are erratic, and it is often desired to spray a large farm within hours or a few days for an even effect, uniformity, and to avoid adverse weather interference. It is also often required that a large farm be covered within a short period to avoid re-emergence of weeds before crop emergence. Deployment of many knapsack operators to large farms has not been successful. Human error in taking equally spaced swaths usually results in overdosage at overlaps and in unapplied areas due to errors at swath edges. Large farm spraying therefore requires boom equipment with a larger swath, which assures reduced error in swath overlaps and spraying within the shortest possible time. Tractor boom sprayers would readily overcome these problems and achieve greater coverage, but they are not available in the country. Tractor hire for cultivation is very costly, with an attendant lack of spare parts and specialized maintenance technicians, so farmers find it difficult to engage tractors for cultivation and would avoid considering the employment of a tractor boom sprayer. Animal traction in farming is predominant in Nigeria, especially in the northern part of the country. The development of boom sprayers drawn by work animals therefore implies the maximization of animal utilization in farming. The Hydraulic Equipment Development Institute, Kano, in keeping with its mandate of targeted R&D in hydraulic and pneumatic systems, has developed an Animal Drawn Knapsack Boom Sprayer with four nozzles, using the axle mechanism of a two-wheeled cart to actuate the piston pumps of two knapsack sprayers, in line with the appropriate-technology demands of the country. It is hoped that the introduction of this novel contrivance will enhance crop protection practice and lead to greater crop and food production in Nigeria.

Keywords: boom, knapsack, farm, sprayer, wheel axle

Procedia PDF Downloads 283
2117 The Influence of Different Flux Patterns on Magnetic Losses in Electric Machine Cores

Authors: Natheer Alatawneh

Abstract:

Finite element analysis of magnetic fields in electromagnetic devices shows that machine cores experience different flux patterns, including alternating and rotating fields. The rotating fields are generated in different configurations, ranging between circular and elliptical, with different ratios between the major and minor axes of the flux locus. Experimental measurements on electrical steel exposed to different flux patterns disclose different magnetic losses in the samples under test. Consequently, electric machines require special attention to the flux patterns during the core loss calculation process. In this study, a circular rotational single sheet tester is employed to measure the core losses in an electrical steel sample of M36G29. The sample was exposed to an alternating field, a circular field, and elliptical fields with axis ratios of 0.2, 0.4, 0.6 and 0.8. The measured data were applied to a 6-4 switched reluctance motor at three frequencies of interest to industry: 60 Hz, 400 Hz, and 1 kHz. The results disclose the high margin of error that may occur during the loss calculations if the flux pattern issue is neglected. The error in different parts of the machine, when the flux patterns are not considered, can be around 50%, 10%, and 2% at 60 Hz, 400 Hz, and 1 kHz, respectively. Future work will focus on the optimization of the machine's geometrical shape, which has a primary effect on the flux pattern, in order to minimize the magnetic losses in machine cores.

Keywords: alternating core losses, electric machines, finite element analysis, rotational core losses

Procedia PDF Downloads 252
2116 Video Compression Using Contourlet Transform

Authors: Delara Kazempour, Mashallah Abasi Dezfuli, Reza Javidan

Abstract:

Video compression is used for channels with limited bandwidth and for storage devices with limited capacity. One of the most popular approaches in video compression is the use of different transforms. The discrete cosine transform is one such method, but it has problems such as blocking, noise, and high distortion, with an adverse effect on the compression ratio. The wavelet transform is another approach, better than the cosine transform at balancing compression and quality, but its ability to represent curved features is limited. Because of the importance of compression and the problems of the cosine and wavelet transforms, the contourlet transform has become popular in video compression. In the newly proposed method, we used the contourlet transform for video image compression. The contourlet transform can preserve image details better than the previous transforms because it is multi-scale and directional. This transform can capture discontinuities such as edges, so less data is lost than with previous approaches. The contourlet transform captures the discrete spatial structure and is useful for representing two-dimensional smooth images. It produces compressed images with a high compression ratio along with texture and edge preservation. Finally, the results show that, for the majority of images, the mean square error and peak signal-to-noise ratio of the new contourlet-based method are improved compared with the wavelet transform, but for most images the mean square error and peak signal-to-noise ratio of the cosine transform are better than those of the contourlet-based method.

Keywords: video compression, contourlet transform, discrete cosine transform, wavelet transform

Procedia PDF Downloads 444
2115 Prediction of PM₂.₅ Concentration in Ulaanbaatar with Deep Learning Models

Authors: Suriya

Abstract:

Rapid socio-economic development and urbanization have led to an increasingly serious air pollution problem in Ulaanbaatar (UB), the capital of Mongolia. PM₂.₅ pollution has become the most pressing aspect of UB air pollution. Therefore, monitoring and predicting the PM₂.₅ concentration in UB is of great significance for the health of the local people and for environmental management. As yet, very few studies have used models to predict PM₂.₅ concentrations in UB. Using data from 0:00 on June 1, 2018, to 23:00 on April 30, 2020, we proposed two deep learning models based on Bayesian-optimized LSTM (Bayes-LSTM) and CNN-LSTM. We utilized hourly observed data, including Himawari8 (H8) aerosol optical depth (AOD), meteorology, and PM₂.₅ concentration, as input for the prediction of PM₂.₅ concentrations. The correlation strengths between meteorology, AOD, and PM₂.₅ were analyzed using the gray correlation analysis method; the performance improvement obtained by using the AOD input was tested, and the performance of the models was evaluated using the mean absolute error (MAE) and root mean square error (RMSE). The prediction accuracies of the Bayes-LSTM and CNN-LSTM deep learning models were both improved when AOD was included as an input parameter. The improvement in the prediction accuracy of the CNN-LSTM model was particularly pronounced in the non-heating season; in the heating season, the prediction accuracy of the Bayes-LSTM model slightly improved, while the prediction accuracy of the CNN-LSTM model slightly decreased. We propose two novel deep learning models for PM₂.₅ concentration prediction in UB, the Bayes-LSTM and CNN-LSTM deep learning models. We pioneer the use of AOD data from H8 and demonstrate that including AOD input data improves the performance of our two proposed deep learning models.

Keywords: deep learning, AOD, PM2.5, prediction, Ulaanbaatar

Procedia PDF Downloads 48
2114 Track and Trace Solution on Land Certificate Production: Indonesian Land Certificate

Authors: Adrian Rifqi, Febe Napitupulu, Erdi Hermawan, Edwin Putra, Yang Leprilian

Abstract:

This article focuses on the implementation of a production improvement process for the Indonesian land certificate product printed at Perum Peruri, a state-owned enterprise. Based on the data obtained, there were several customer complaints about the 2019 land certificate production, and these complaints represent a negative mark with Perum Peruri's loyal customers. Almost all the complaints refer to ‘defective printouts and the difference between products in packaging and packaging labels, both in terms of type and quantity’. To overcome this problem, we intend to improve the production process, focusing on the complaint ‘there is a difference between products in packaging and packaging labels’. The improvements in the land certificate production process rely on weighing-scale technology and a QR code on the packaging label. In addition, using the QR code on the packaging label will facilitate the process of tracking product data. With this method, we hope to reduce to 0% both the error rate between products in packaging and the packaging label, in terms of quantity, type, and product number on the land certificate, and the error rate of sending land certificates, which are dispatched to many places. With this solution, we also hope to obtain precise data and real-time reports on the production of land certificates in the near future, so that track and trace can be implemented as the solution for land certificate production.

Keywords: land certificates, QR code, track and trace, packaging

Procedia PDF Downloads 161
2113 National Digital Soil Mapping Initiatives in Europe: A Review and Some Examples

Authors: Dominique Arrouays, Songchao Chen, Anne C. Richer-De-Forges

Abstract:

Soils are at the crossroads of many issues, such as food and water security, sustainable energy, climate change mitigation and adaptation, biodiversity protection, and human health and well-being. They deliver many ecosystem services that are essential to life on Earth. Therefore, there is a growing demand for soil information on a national and global scale. Unfortunately, many countries do not have detailed soil maps, and, where they exist, these maps are generally based on more or less complex and often non-harmonized soil classifications. An estimate of their uncertainty is also often missing. Thus, they are not easy to understand and often not properly used by end-users. Therefore, there is an urgent need to provide end-users with spatially exhaustive grids of essential soil properties, together with an estimate of their uncertainty. One way to achieve this is digital soil mapping (DSM). The concept of DSM relies on the hypothesis that soils and their properties are not randomly distributed, but depend on the main soil-forming factors, which are climate, organisms, relief, parent material, time (age), and position in space. All these forming factors can be approximated using several exhaustive spatial products such as climatic grids, remote sensing products or vegetation maps, digital elevation models, geological or lithological maps, spatial coordinates of soil information, etc. Thus, DSM generally relies on models calibrated with existing observed soil data (point observations or maps) and so-called “ancillary co-variates” that come from other available spatial products. The model is then generalized on grids where soil parameters are unknown in order to predict them, and the prediction performances are validated using various methods. With the growing demand for soil information at national and global scales and the increase in available spatial co-variates, national and continental DSM initiatives are continuously increasing. This short review illustrates the main national and continental advances in Europe, the diversity of the approaches and the databases that are used, the validation techniques, and the main scientific and other issues. Examples from several countries illustrate the variety of products that were delivered during the last ten years. The scientific production on this topic is continuously increasing, and new models and approaches are being developed at an incredible speed. Most digital soil mapping (DSM) products rely mainly on machine learning (ML) prediction models and/or the use of pedotransfer functions (PTFs), in which calibration data come from soil analyses performed in labs or from existing conventional maps. However, some scientific issues remain to be solved, as well as political and legal ones related, for instance, to data sharing and to different laws in different countries. Other issues relate to communication with end-users and to education, especially on the use of uncertainty. Overall, the progress is very important, and the willingness of institutes and countries to join their efforts is increasing. Harmonization issues still remain, mainly due to differences in classifications or in laboratory standards between countries. However, numerous initiatives are ongoing at the EU level and also at the global level. All this progress is scientifically stimulating and also promises to provide tools to improve and monitor soil quality at the national, EU, and global levels.

Keywords: digital soil mapping, global soil mapping, national and European initiatives, global soil mapping products, mini-review

Procedia PDF Downloads 184
2112 Pilot-Assisted Direct-Current Biased Optical Orthogonal Frequency Division Multiplexing Visible Light Communication System

Authors: Ayad A. Abdulkafi, Shahir F. Nawaf, Mohammed K. Hussein, Ibrahim K. Sileh, Fouad A. Abdulkafi

Abstract:

Visible light communication (VLC) is a new approach to optical wireless communication proposed to relieve the congested radio frequency (RF) spectrum. VLC systems are combined with orthogonal frequency division multiplexing (OFDM) to achieve high-rate transmission and high spectral efficiency. In this paper, we investigate Pilot-Assisted Channel Estimation for DC-biased Optical OFDM (PACE-DCO-OFDM) systems to reduce the effects of distortion on the transmitted signal. Least-squares (LS) and linear minimum mean-squared error (LMMSE) estimators are implemented in MATLAB/Simulink to enhance the bit-error-rate (BER) performance of PACE-DCO-OFDM. Results show that the DCO-OFDM system based on the PACE scheme achieves better BER performance compared to the conventional system without pilot-assisted channel estimation. Simulation results show that the proposed PACE-DCO-OFDM based on the LMMSE algorithm can estimate the channel more accurately and achieves better BER performance when compared to the LS-based PACE-DCO-OFDM and the traditional system without PACE. For the same signal-to-noise ratio (SNR) of 25 dB, the achieved BER is about 5×10⁻⁴ for LMMSE-PACE and 4.2×10⁻³ with LS-PACE, while it is about 2×10⁻¹ for the system without the PACE scheme.
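
A numerical sketch of the two pilot-based estimators compared above, on a toy frequency-domain channel; the exact LMMSE formulation and the DCO-OFDM/VLC simulation chain of the paper are not reproduced, and the channel covariance and SNR below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
n_pilots = 16
# Toy correlated frequency-domain channel with known covariance R_hh (assumed model)
idx = np.arange(n_pilots)
R_hh = 0.9 ** np.abs(idx[:, None] - idx[None, :])
h_true = np.linalg.cholesky(R_hh) @ (rng.normal(size=n_pilots) + 1j * rng.normal(size=n_pilots)) / np.sqrt(2)

x_pilot = np.ones(n_pilots)                       # unit-power pilot symbols
noise_var = 10 ** (-25 / 10)                      # SNR = 25 dB
noise = np.sqrt(noise_var / 2) * (rng.normal(size=n_pilots) + 1j * rng.normal(size=n_pilots))
y = x_pilot * h_true + noise

h_ls = y / x_pilot                                # least-squares estimate at the pilots
w = R_hh @ np.linalg.inv(R_hh + noise_var * np.eye(n_pilots))
h_lmmse = w @ h_ls                                # LMMSE smoothing of the LS estimate

for name, h_hat in (("LS", h_ls), ("LMMSE", h_lmmse)):
    mse = np.mean(np.abs(h_hat - h_true) ** 2)
    print(f"{name} channel-estimation MSE: {mse:.4f}")
```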

Keywords: channel estimation, OFDM, pilot-assist, VLC

Procedia PDF Downloads 180
2111 High Accuracy Analytic Approximations for Modified Bessel Functions I₀(x)

Authors: Pablo Martin, Jorge Olivares, Fernando Maass

Abstract:

A method to obtain analytic approximations for special functions of interest in engineering and physics is described here. Each approximate function is valid for every positive value of the variable, and its accuracy is high and increases with the number of parameters to be determined. The general technique is shown through an application to the modified Bessel function of order zero, I₀(x). The form and the calculation of the parameters are performed with the simultaneous use of the power series and the asymptotic expansion. As in the Padé method, rational functions are used, but here they are combined with other elementary functions such as fractional powers, hyperbolic, trigonometric and exponential functions, and others. The elementary function is determined by considering that the approximate function should be a bridge between the power series and the asymptotic expansion. In the case of the I₀(x) function, two analytic approximations have already been determined. The simplest one is (1+x²/4)⁻¹/⁴(1+0.24273x²) cosh(x)/(1+0.43023x²). Its parameters were determined using the leading term of the asymptotic expansion and two coefficients of the power series, and the maximum relative error is 0.05. In a second case, two terms of the asymptotic expansion and four coefficients of the power series were used, and the maximum relative error is 0.001 at x≈9.5. Approximations with much higher accuracy will also be shown. In conclusion, a new technique is described to obtain analytic approximations to some functions of interest in the sciences, such that they have high accuracy, they are valid for every positive value of the variable, they can be integrated and differentiated like the usual functions, and, furthermore, they can be calculated easily even with a regular pocket calculator.
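
A quick numerical check of the first (simplest) approximation quoted above against scipy's modified Bessel function I₀(x); the formula is taken directly from the abstract, and the evaluation grid is an arbitrary choice:

```python
import numpy as np
from scipy.special import i0

def i0_approx(x):
    return (1 + x**2 / 4) ** (-0.25) * (1 + 0.24273 * x**2) * np.cosh(x) / (1 + 0.43023 * x**2)

x = np.linspace(1e-6, 40, 4000)
rel_err = np.abs(i0_approx(x) - i0(x)) / i0(x)
print(f"maximum relative error on (0, 40]: {rel_err.max():.3f}")
```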

Keywords: analytic approximations, mathematical-physics applications, quasi-rational functions, special functions

Procedia PDF Downloads 251
2110 Monte Carlo Estimation of Heteroscedasticity and Periodicity Effects in a Panel Data Regression Model

Authors: Nureni O. Adeboye, Dawud A. Agunbiade

Abstract:

This research investigates the effects of heteroscedasticity and periodicity in a Panel Data Regression Model (PDRM) by extending previous work on balanced panel data estimation within the context of fitting a PDRM for banks' audit fees. The estimation of such a model was achieved through the derivation of a joint Lagrange Multiplier (LM) test for homoscedasticity and zero serial correlation, a conditional LM test for zero serial correlation given heteroscedasticity of varying degrees, as well as a conditional LM test for homoscedasticity given first-order positive serial correlation, via a two-way error component model. Monte Carlo simulations were carried out for 81 different variations, whose design assumed a uniform distribution under a linear heteroscedasticity function. Each variation was iterated 1000 times, and the assessment of the three estimators considered is based on the variance, absolute bias (ABIAS), mean square error (MSE), and root mean square error (RMSE) of the parameter estimates. Eighteen different models under different specified conditions were fitted, and the best-fitting model is that of the within estimator when heteroscedasticity is severe at either a zero or a positive serial correlation value. The LM test results showed that the tests have good size and power, as all three tests are significant at 5% for the specified linear form of the heteroscedasticity function, which establishes that banks' operations are severely heteroscedastic in nature, with little or no periodicity effects.

Keywords: audit fee, heteroscedasticity, Lagrange multiplier test, Monte Carlo scheme, periodicity

Procedia PDF Downloads 141