Search results for: estimation after selection
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4107

3477 Travel Time Estimation of Public Transport Networks Based on Commercial Incidence Areas in Quito Historic Center

Authors: M. Fernanda Salgado, Alfonso Tierra, David S. Sandoval, Wilbert G. Aguilar

Abstract:

Public transportation buses usually vary their speed depending on the location and the number of passengers boarding. Efficient travel planning, that is, a plan that helps choose the fastest route, therefore requires an estimation tool that determines the travel time of each route and clearly establishes the possibilities. In this work, we give a practical solution that makes use of a concept we define as areas of commercial incidence. These areas are based on the hypothesis that in commercial places there is a greater flow of people and therefore the buses remain longer at the stops. Each area covers one or more route segments and has an incidence factor that allows the travel times to be estimated. In addition, we present initial results that support the hypothesis and adequately predict the travel times. In future work, we will use this approach to build an efficient travel planning system.

Keywords: commercial incidence, planning, public transport, speed travel, travel time

Procedia PDF Downloads 231
3476 Towards a Smart Irrigation System Based on Wireless Sensor Networks

Authors: Loubna Hamami, Bouchaib Nassereddine

Abstract:

Due to the evolution of technologies, the need to observe and manage hostile environments, and reductions in size, wireless sensor networks (WSNs) are becoming essential and are involved in most fields of life. WSNs enable us to change the way we live, work and interact with the physical environment. The agricultural sector is one of the sectors where WSNs are successfully used to obtain various benefits. For successful agricultural production, the irrigation system is one of the most important factors and plays a strategic role in the agricultural process. However, it is also the largest consumer of freshwater. Water scarcity, drought, and the waste of the limited available water resources are among the critical issues that affect almost all sectors, notably agricultural services. These facts are leading governments around the world to rethink water saving and to reduce the volume of water used, which requires the development of irrigation practices towards a complete and autonomous system that manages irrigation more efficiently. Consequently, the adoption of WSNs in irrigation systems has benefited the development of the agricultural sector. In this work, we propose a prototype for a complete and intelligent irrigation system based on wireless sensor networks, and we present and discuss its design. The prototype aims at saving water, energy and time. It controls the irrigation water supply by monitoring soil temperature, soil moisture and weather conditions to estimate the water requirements of each plant.

Keywords: precision irrigation, sensor, wireless sensor networks, water resources

Procedia PDF Downloads 139
3475 Item-Trait Pattern Recognition of Replenished Items in Multidimensional Computerized Adaptive Testing

Authors: Jianan Sun, Ziwen Ye

Abstract:

Multidimensional computerized adaptive testing (MCAT) is a popular research topic in psychometrics. It is important for practitioners to clearly know the item-trait patterns of administered items when a test like MCAT is operated. Item-trait pattern recognition refers to detecting which latent traits in a psychological test are measured by each of the specified items. If the item-trait patterns of the replenished items in an MCAT item pool are well detected, the interpretability of the items can be improved, which can further allow the abilities of the examinees attending the MCAT to be estimated more accurately. This research explores how to solve the item-trait pattern recognition problem of the replenished items in the MCAT item pool from the perspective of statistical variable selection. The popular multidimensional item response theory model, the multidimensional two-parameter logistic (M2PL) model, is assumed to fit the response data of MCAT. The proposed method uses the least absolute shrinkage and selection operator (LASSO) to detect the item-trait patterns of replenished items based on the essential information of item responses and the ability estimates of examinees collected from a designed MCAT procedure. Several advantages of the proposed method are outlined. First, the proposed method does not strictly depend on the relative order between the replenished items and the selected operational items, so it allows the replenished items to be mixed into the operational items in a reasonable order, for example to satisfy content constraints or other test requirements. Second, the LASSO used in this research improves the interpretability of the multidimensional replenished items in MCAT. Third, the proposed method exploits the advantages of the shrinkage idea for variable selection, so it can help to check item quality and the key dimensional features of replenished items, and it saves more time and labor in response data collection than the traditional factor analysis method. Moreover, the proposed method ensures that the dimensions of replenished items are recognized consistently with the dimensions of operational items in the MCAT item pool. Simulation studies are conducted to investigate the performance of the proposed method under different conditions, varying the dimensionality of the item pool, the latent trait correlation, the item discrimination, the test length and the item selection criteria in MCAT. Results show that the proposed method can accurately detect the item-trait patterns of the replenished items in two-dimensional and three-dimensional item pools. Selecting enough operational items from an item pool of highly discriminating items by Bayesian A-optimality in MCAT improves the recognition accuracy of the item-trait patterns of replenished items for the proposed method. The pattern recognition accuracy for conditions with correlated traits is better than for those with independent traits, especially for item pools consisting of comparatively low discriminating items. To sum up, the proposed data-driven method based on the LASSO can accurately and efficiently detect the item-trait patterns of replenished items in MCAT.
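
As a minimal illustration of the LASSO step (not the authors' exact procedure), one can regress the binary responses to a replenished item on the examinees' ability estimates with an L1 penalty; the nonzero coefficients then indicate the traits the item measures. All names and values below are simulated placeholders.

```python
# Minimal sketch: detect the item-trait pattern of one replenished item
# with L1-penalized logistic regression. `theta_hat` stands in for the
# MCAT ability estimates and `y` for the item's binary responses.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, k = 2000, 3                       # examinees, latent traits
theta_hat = rng.normal(size=(n, k))  # stand-in for MCAT ability estimates
a_true = np.array([1.2, 0.0, 0.8])   # item loads on traits 1 and 3 only
logit = theta_hat @ a_true - 0.5     # M2PL form: a'theta + d
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.01)
lasso.fit(theta_hat, y)
pattern = (np.abs(lasso.coef_.ravel()) > 1e-6).astype(int)
print("estimated item-trait pattern:", pattern)  # expect [1, 0, 1]
```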

Keywords: item-trait pattern recognition, least absolute shrinkage and selection operator, multidimensional computerized adaptive testing, variable selection

Procedia PDF Downloads 116
3474 Hybrid Robust Estimation via Median Filter and Wavelet Thresholding with Automatic Boundary Correction

Authors: Alsaidi M. Altaher, Mohd Tahir Ismail

Abstract:

Wavelet thresholding has been a powerful tool in curve estimation and data analysis, but in the presence of outliers this nonparametric estimator cannot suppress the outliers involved. This study proposes a new two-stage combined method that uses the median filter as a primary step before applying wavelet thresholding. After suppressing the outliers in a signal through the median filter, classical wavelet thresholding is applied to remove the remaining noise. We use automatic boundary corrections, fitting a low-order polynomial model or a local polynomial model as a more realistic rule to correct the bias at the boundary region, instead of the classical assumptions such as periodicity or symmetry. A simulation experiment has been conducted to evaluate the numerical performance of the proposed method. Results show strong evidence that the proposed method is extremely effective in correcting the boundary bias and eliminating sensitivity to outliers.
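
A minimal sketch of the two-stage idea, median filtering followed by classical wavelet thresholding, is given below using SciPy and PyWavelets; the window length, wavelet and universal threshold are illustrative choices, not the paper's tuned settings, and the boundary-correction step is omitted.

```python
# Two-stage robust denoising sketch: median filter to knock out outliers,
# then soft wavelet thresholding for the remaining Gaussian noise.
# Wavelet ('db4'), window (5) and the universal threshold are assumptions.
import numpy as np
import pywt
from scipy.signal import medfilt

def robust_denoise(y, kernel_size=5, wavelet="db4", level=4):
    y_med = medfilt(y, kernel_size=kernel_size)         # stage 1: outliers
    coeffs = pywt.wavedec(y_med, wavelet, level=level)  # stage 2: wavelets
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # robust noise scale
    thr = sigma * np.sqrt(2 * np.log(len(y)))           # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                            for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(y)]

t = np.linspace(0, 1, 1024)
clean = np.sin(4 * np.pi * t)
noisy = clean + 0.1 * np.random.randn(t.size)
noisy[::97] += 3.0                                      # sparse outliers
print("MSE:", np.mean((robust_denoise(noisy) - clean) ** 2))
```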

Keywords: boundary correction, median filter, simulation, wavelet thresholding

Procedia PDF Downloads 419
3473 A New Method to Estimate the Low Income Proportion: Monte Carlo Simulations

Authors: Encarnación Álvarez, Rosa M. García-Fernández, Juan F. Muñoz

Abstract:

Estimation of a proportion has many applications in economics and social studies. A common application is the estimation of the low income proportion, which gives the proportion of people in a population classified as poor. In this paper, we present this poverty indicator and propose to use the logistic regression estimator for the problem of estimating the low income proportion. Various sampling designs are presented. Using a real data set obtained from the European Survey on Income and Living Conditions, Monte Carlo simulation studies are carried out to analyze the empirical performance of the logistic regression estimator under the various sampling designs considered in this paper. Results derived from these simulation studies indicate that the logistic regression estimator can be more accurate than the customary estimator under the sampling designs considered. The stratified sampling design can also provide more accurate results.
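
The logistic regression estimator can be sketched as follows on simulated data: fit the probability of being poor given an auxiliary variable on the sample, then average the predicted probabilities over the population frame. The data-generating process below is invented; in the paper, the EU-SILC data play this role.

```python
# Sketch of a logistic-regression estimator of the low income proportion.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
N, n = 50_000, 500
x = rng.lognormal(mean=10, sigma=0.4, size=N)      # auxiliary variable
income = x * rng.lognormal(sigma=0.3, size=N)      # true income
poverty_line = 0.6 * np.median(income)             # at-risk-of-poverty line
poor = (income < poverty_line).astype(int)

sample = rng.choice(N, size=n, replace=False)      # simple random sample
model = LogisticRegression().fit(np.log(x[sample, None]), poor[sample])

customary = poor[sample].mean()                    # plain sample proportion
logistic = model.predict_proba(np.log(x[:, None])).mean(axis=0)[1]
print(f"true {poor.mean():.4f}  customary {customary:.4f}  logistic {logistic:.4f}")
```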

Keywords: poverty line, risk of poverty, auxiliary variable, ratio method

Procedia PDF Downloads 444
3472 A Comparative Analysis Approach Based on Fuzzy AHP, TOPSIS and PROMETHEE for the Selection Problem of GSCM Solutions

Authors: Omar Boutkhoum, Mohamed Hanine, Abdessadek Bendarag

Abstract:

Sustainable economic growth is nowadays driving firms toward the adoption of many green supply chain management (GSCM) solutions. However, the evaluation and selection of these solutions requires serious decisions and involves complexity owing to the presence of various associated factors. To resolve this problem, a comparative analysis approach based on multi-criteria decision-making methods is proposed for the adequate evaluation of sustainable supply chain management solutions. In the present paper, we propose an integrated decision-making model based on FAHP (Fuzzy Analytic Hierarchy Process), TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution) and PROMETHEE (Preference Ranking Organisation METHod for Enrichment Evaluations) to contribute to a better understanding and development of new sustainable strategies for industrial organizations. Because the selected criteria vary in importance, FAHP is used to identify the evaluation criteria and assign the importance weights for each criterion, while the TOPSIS and PROMETHEE methods employ these weighted criteria as inputs to evaluate and rank the alternatives. The main objective is to provide a comparative analysis based on the TOPSIS and PROMETHEE processes to help make sound and reasoned decisions on the GSCM solution selection problem.
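
To make the ranking step concrete, here is a minimal TOPSIS sketch; the decision matrix, the weights (standing in for FAHP output) and the benefit/cost labels are invented illustrations, not data from the study.

```python
# Minimal TOPSIS sketch for ranking alternatives with weighted criteria.
import numpy as np

def topsis(X, w, benefit):
    """X: alternatives x criteria, w: weights summing to 1,
    benefit: True for criteria to maximize, False to minimize."""
    R = X / np.linalg.norm(X, axis=0)           # vector-normalize columns
    V = R * w                                   # weight the criteria
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti, axis=1)
    return d_minus / (d_plus + d_minus)         # closeness: higher is better

X = np.array([[0.7, 120.0, 3.0],                # 3 hypothetical GSCM solutions,
              [0.9, 150.0, 2.0],                # 3 criteria (e.g., greenness,
              [0.6,  90.0, 4.0]])               # cost, flexibility)
w = np.array([0.5, 0.3, 0.2])                   # e.g., FAHP-derived weights
scores = topsis(X, w, benefit=np.array([True, False, True]))
print("ranking, best first:", np.argsort(scores)[::-1])
```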

Keywords: GSCM solutions, multi-criteria analysis, decision support system, TOPSIS, FAHP, PROMETHEE

Procedia PDF Downloads 150
3471 Regional Variations in Spouse Selection Patterns of Women in India

Authors: Nivedita Paul

Abstract:

Marriages in India are part and parcel of kinship and cultural practices. Marriage practices differ across India because of cross-regional diversities in social relations, which have themselves evolved out of a causal relationship between space and culture. As place is important for the formation of culture and other social structures, there is regional differentiation in cultural practices and marital customs. Based on these cultural practices, some scholars have divided India into northern and southern kinship regions, where women in the North marry early and have less autonomy than women in the South, where marriages are mostly consanguineous. However, with the emergence of new modes and alternative strategies such as matrimonial advertisements becoming popular, as well as the increase in women's literacy and workforce participation, the matchmaking process in India has changed to some extent. The present study uses data from the Indian Human Development Survey II (2011-12), a nationally representative multi-topic survey covering 41,554 households. Currently married women aged 15-49 in their first marriage, married between the 1970s and the 2000s, have been taken for the study. Based on spouse selection experiences, the sample of women has been divided into three marriage categories: self-arranged, semi-arranged and family arranged. In a self-arranged or love marriage, the woman is the sole decision maker in choosing the partner; in a semi-arranged marriage, or arranged marriage with consent, the parents and the woman take the decision together; whereas in a family arranged marriage, or arranged marriage without consent, only the parents decide. The main aim of the study is to show the spatial and regional variations in spouse selection decision making. The basis for regionalization has been taken from Irawati Karve's pioneering work on kinship studies in India, Kinship Organization in India, which divides India into four kinship regions: North, Central, South and East. Since this work was formulated in 1953, some of the states have experienced changes due to modernization; hence these have been regrouped. After mapping spouse selection patterns using GIS software, it is found that the northern region has mostly family arranged marriages (around 64.6%); the central zone shows a mixed pattern, with fewer family arranged marriages than the North but more than the South, and more semi-arranged marriages than the North but fewer than the South; the southern zone is dominated by semi-arranged marriages (around 55%); and the eastern zone also has more semi-arranged marriages (around 53%) but with a high percentage of self-arranged marriages (around 42%). Thus, arranged marriage is the dominant form of marriage in all four regions, but with differences in the degree of involvement of the woman and her parents and relatives.

Keywords: spouse selection, consent, kinship, regional pattern

Procedia PDF Downloads 154
3470 Fault Diagnosis by Thresholding and Decision Tree with Neuro-Fuzzy System

Authors: Y. Kourd, D. Lefebvre

Abstract:

The monitoring of industrial processes is required to ensure the operating conditions of industrial systems through automatic detection and isolation of faults. This paper proposes a fault diagnosis method based on a neuro-fuzzy hybrid structure that combines threshold selection and a decision tree. The method is validated on the DAMADICS benchmark. In the first phase, a model representing the normal state of the system is constructed for fault detection. Fault signatures are obtained through residual analysis and the selection of appropriate thresholds. These signatures yield groups of non-separable faults. In the second phase, we build faulty models to capture the faults in the system that cannot be isolated in the first phase. In the final phase, we construct the decision tree that isolates these faults.
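
The residual-thresholding step of the first phase can be sketched generically as below; the signal, the injected fault and the 3-sigma threshold rule are illustrative assumptions rather than the DAMADICS settings.

```python
# Sketch of residual thresholding for fault detection: compare the plant
# output with a nominal model, pick a threshold from fault-free residuals,
# and emit a binary fault signature.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
y_model = np.sin(np.linspace(0, 20, n))           # nominal model output
y_plant = y_model + 0.05 * rng.standard_normal(n)
y_plant[600:] += 0.4                              # abrupt fault at sample 600

residual = y_plant - y_model
thr = 3 * residual[:300].std()                    # threshold from fault-free data
signature = np.abs(residual) > thr                # binary fault signature
print("first alarm at sample", int(np.argmax(signature)))
```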

Keywords: decision tree, residuals analysis, ANFIS, fault diagnosis

Procedia PDF Downloads 613
3469 Estimation of Energy Losses of Photovoltaic Systems in France Using Real Monitoring Data

Authors: Mohamed Amhal, Jose Sayritupac

Abstract:

Photovoltaic (PV) systems have risen as one of the modern renewable energy sources that are widely used to produce electricity and deliver it to the electrical grid. In parallel, monitoring systems have been deployed as a key element to track energy production and to forecast the total production for the coming days. The reliability of PV energy production has become a crucial point in the analysis of PV systems. A deeper understanding of each phenomenon that causes a gain or a loss of energy is needed to better design, operate and maintain PV systems. This work analyzes the distribution of losses in PV systems starting from the available solar energy, going through the DC side and the AC side, to the delivery point. Most of the phenomena linked to energy losses and gains are considered and modeled, based on real-time monitoring data and the datasheets of the PV system components. The order of magnitude of each loss is compared to the current literature and commercial software. To date, the analysis of PV system performance based on a breakdown structure of energy losses and gains is not well covered in the literature, except in some software where the concept is very common. The cutting edge of the current analysis is the implementation of software tools for energy loss estimation in PV systems based on several energy loss definitions and estimation techniques. The developed tools have been validated and tested on several PV plants in France that have been operating for years. Among the major findings of the current study: first, PV plants in France show very low rates of soiling and aging; second, the distribution of other losses is comparable to the literature; third, all reported losses are correlated with operational and environmental conditions. For future work, an extended analysis of further PV plants in France and abroad will be performed.
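
A simplified sketch of such a loss breakdown, from plane-of-array irradiance down to the AC delivery point, is shown below; the loss fractions are invented placeholders, whereas in the study they are derived from component datasheets and monitoring data.

```python
# Simplified loss-breakdown sketch: reference DC energy from plane-of-array
# insolation, then named loss buckets down to the AC delivery point.
g_stc = 1.0                        # STC irradiance, kW/m^2
h_poa = 160.0                      # monthly plane-of-array insolation, kWh/m^2
p_stc = 1000.0                     # installed DC capacity, kW
e_ref = h_poa / g_stc * p_stc      # reference yield x capacity, kWh

losses = {"soiling": 0.01, "shading": 0.02, "temperature": 0.05,
          "dc_mismatch": 0.02, "inverter": 0.03, "ac_wiring": 0.01}
e = e_ref
for name, frac in losses.items():  # peel off each loss in sequence
    print(f"{name:12s} -{frac * e:9.0f} kWh")
    e *= 1.0 - frac
print(f"delivered    {e:10.0f} kWh   (performance ratio {e / e_ref:.3f})")
```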

Keywords: energy gains, energy losses, losses distribution, monitoring, photovoltaic, photovoltaic systems

Procedia PDF Downloads 157
3468 SNR Classification Using Multiple CNNs

Authors: Thinh Ngo, Paul Rad, Brian Kelley

Abstract:

Noise estimation is essential in today's wireless systems for power control, adaptive modulation, interference suppression and quality of service. Deep learning (DL) has already been applied in the physical layer for modulation and signal classification. An unacceptably low accuracy of less than 50% is found to undermine the traditional application of DL classification to SNR prediction. In this paper, we use a divide-and-conquer algorithm and a classifier fusion method to simplify SNR classification and thereby enhance DL learning and prediction. Specifically, multiple CNNs are used for classification rather than a single CNN. Each CNN performs a binary classification against a single SNR threshold, with two labels: less than, or greater than or equal. Together, the multiple CNNs are combined to effectively classify over a range of SNR values from −20 ≤ SNR ≤ 32 dB. We use pre-trained CNNs to predict SNR over a wide range of joint channel parameters, including multiple Doppler shifts (0, 60, 120 Hz), power-delay profiles, and signal modulation types (QPSK, 16-QAM, 64-QAM). The approach achieves an individual SNR prediction accuracy of 92%, a composite accuracy of 70%, and prediction convergence one order of magnitude faster than that of traditional estimation.
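
The fusion step can be illustrated with a thermometer-style decoding of the per-threshold binary decisions, sketched below; the vote-counting rule is our assumption, not necessarily the paper's exact classifier fusion method, and the perfect votes stand in for CNN outputs.

```python
# Sketch of fusing per-threshold binary classifiers into one SNR class.
# Each model answers "is SNR >= t ?" for its own threshold t; counting
# positive votes gives an ordinal estimate that degrades gracefully if a
# single classifier is inconsistent.
import numpy as np

thresholds = np.arange(-20, 33, 4)                  # dB, one binary CNN each

def fuse(binary_votes):
    """binary_votes[i] = 1 if CNN_i says SNR >= thresholds[i]."""
    k = int(np.sum(binary_votes))                   # ideal pattern: 1..1,0..0
    return thresholds[k - 1] if k > 0 else thresholds[0] - 4

true_snr = 10.0
votes = (thresholds <= true_snr).astype(int)        # perfect CNNs for the demo
votes[2] ^= 1                                       # flip one vote: still close
print("estimated SNR class:", fuse(votes), "dB")
```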

Keywords: classification, CNN, deep learning, prediction, SNR

Procedia PDF Downloads 125
3467 Integrative System of GDP, Emissions, Health Services and Population Health in Vietnam: Dynamic Panel Data Estimation

Authors: Ha Hai Duong, Amnon Levy Livermore, Kankesu Jayanthakumaran, Oleg Yerokhin

Abstract:

The issues of economic development, the environment and human health have been investigated since the 1990s. Previous researchers have found different empirical evidence on the relationship between income and environmental pollution, on health as a determinant of economic growth, and on the effects of income and environmental pollution on health in various regions of the world. This paper concentrates on an integrative relationship analysis of GDP, carbon dioxide emissions, health services and population health in the context of Vietnam. We applied dynamic generalized method of moments (GMM) estimation to datasets covering Vietnam's sixty-three provinces for the years 2000-2010. Our results show a significant positive effect of GDP on emissions and the dependence of population health on emissions and health services. We find a significant relationship between population health and GDP. Additionally, health services are significantly affected by population health and GDP. Finally, population size is another important determinant of both emissions and GDP.

Keywords: economic development, emissions, environmental pollution, health

Procedia PDF Downloads 608
3466 The Acquisition of Case in Biological Domain Based on Text Mining

Authors: Shen Jian, Hu Jie, Qi Jin, Liu Wei Jie, Chen Ji Yi, Peng Ying Hong

Abstract:

In order to address the problem of acquiring biological cases related to design problems, a biological instance acquisition method based on text mining is presented. Through the construction of a corpus text vector space and knowledge mining, the feature selection, similarity measure and case retrieval methods for text in the field of biology are studied. First, we establish a vector space model of the corpus in the biological field and complete the preprocessing steps. Then, the corpus is retrieved using the vector space model combined with functional keywords to obtain the biological domain examples related to the design problems. Finally, we verify the validity of the method on example texts.
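
The vector-space retrieval step can be sketched with TF-IDF and cosine similarity; the toy corpus and the functional-keyword query below are placeholders for the biological corpus of the paper.

```python
# Sketch of vector-space case retrieval: TF-IDF vectors for a tiny biology
# corpus, ranked by cosine similarity against a functional-keyword query.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "gecko feet adhere to surfaces via microscopic setae",
    "lotus leaves repel water through hierarchical surface structures",
    "burrs attach to animal fur with tiny hooks",
]
query = ["attach reversibly to a smooth surface"]      # functional keywords

vec = TfidfVectorizer(stop_words="english")
doc_vectors = vec.fit_transform(corpus)
sims = cosine_similarity(vec.transform(query), doc_vectors).ravel()
print("best-matching case:", corpus[sims.argmax()])
```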

Keywords: text mining, vector space model, feature selection, biologically inspired design

Procedia PDF Downloads 249
3465 Uncertainty Estimation in Neural Networks through Transfer Learning

Authors: Ashish James, Anusha James

Abstract:

The impressive predictive performance of deep learning techniques on a wide range of tasks has led to their widespread use. Estimating the confidence of these predictions is paramount for improving the safety and reliability of such systems. However, the uncertainty estimates provided by neural networks (NNs) tend to be overconfident and unreasonable. Ensembles of NNs typically produce good predictions, but their uncertainty estimates tend to be inconsistent. Inspired by these observations, this paper presents a framework that can quantitatively estimate uncertainties by leveraging advances in transfer learning, through a slight modification to existing training pipelines. This promising algorithm is developed with the intention of deployment in real-world problems that already boast good predictive performance, by reusing the pretrained models. The idea is to capture the behavior of the NN trained for the base task by augmenting it with uncertainty estimates from a supplementary network. A series of experiments with known and unknown distributions shows that the proposed approach produces well-calibrated uncertainty estimates with high-quality predictions.
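
One plausible reading of the framework, sketched below under our own assumptions about the "slight modification", freezes a pretrained base network for the mean prediction and trains a small supplementary head for a per-input variance with the Gaussian negative log-likelihood; the architecture sizes are illustrative.

```python
# Sketch: frozen pretrained base for the mean, supplementary head for the
# per-input log-variance, trained with Gaussian NLL on regression data.
import torch
import torch.nn as nn

base = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))
for p in base.parameters():          # stands in for the pretrained base model
    p.requires_grad_(False)

var_head = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(var_head.parameters(), lr=1e-3)

x = torch.randn(256, 8)
y = x.sum(dim=1, keepdim=True) + 0.5 * torch.randn(256, 1)

for _ in range(200):
    mu = base(x)                     # frozen mean prediction
    log_var = var_head(x)            # supplementary uncertainty estimate
    nll = 0.5 * (log_var + (y - mu) ** 2 / log_var.exp()).mean()
    opt.zero_grad(); nll.backward(); opt.step()

print("predictive std on new input:",
      (0.5 * var_head(torch.randn(1, 8))).exp().item())
```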

Keywords: uncertainty estimation, neural networks, transfer learning, regression

Procedia PDF Downloads 116
3464 Investigating the Impact of Task Demand and Duration on Passage of Time Judgements and Duration Estimates

Authors: Jesika A. Walker, Mohammed Aswad, Guy Lacroix, Denis Cousineau

Abstract:

There is a fundamental disconnect between the experience of time passing and the chronometric units by which time is quantified. Specifically, there appears to be no relationship between passage of time judgements (PoTJs) and verbal duration estimates at short durations (e.g., < 2000 milliseconds). When a duration is longer than several minutes, however, evidence suggests that a slower feeling of time passing is predictive of overestimation. Might the length of a task moderate the relation between PoTJs and duration estimates? Similarly, the estimation paradigm (prospective vs. retrospective) and the mental effort demanded by a task (task demand) have both been found to influence duration estimates. However, only a handful of experiments have investigated these effects for tasks of long durations, and the results have been mixed. Thus, might the length of a task also moderate the effects of the estimation paradigm and task demand on duration estimates? To investigate these questions, 273 participants performed either an easy or a difficult visual and memory search task for either eight or 58 minutes, under prospective or retrospective instructions. Afterward, participants provided a duration estimate in minutes, followed by a PoTJ on a Likert scale (1 = very slow, 7 = very fast). A 2 (prospective vs. retrospective) × 2 (eight minutes vs. 58 minutes) × 2 (high vs. low difficulty) between-subjects ANOVA revealed a two-way interaction between task demand and task duration on PoTJs, p = .02. Specifically, time felt faster in the more challenging task, but only in the eight-minute condition, p < .01. Duration estimates were transformed into ratios (estimate/actual duration) to standardize estimates across durations. An ANOVA revealed a two-way interaction between estimation paradigm and task duration, p = .03. Specifically, participants overestimated the task more if they were given prospective instructions, but only in the eight-minute task. Surprisingly, there was no effect of task difficulty on duration estimates. Thus, the demands of a task may influence the 'feeling of time' and the 'estimation of time' differently, supporting the existing theory that these two forms of time judgement rely on separate underlying cognitive mechanisms. Finally, a significant main effect of task duration was found for both PoTJs and duration estimates (ps < .001). Participants underestimated the 58-minute task (m = 42.5 minutes) and overestimated the eight-minute task (m = 10.7 minutes). Yet, they reported the 58-minute task as passing significantly slower on the Likert scale (m = 2.5) compared to the eight-minute task (m = 4.1). In fact, a significant correlation was found between PoTJs and duration estimates (r = .27, p < .001). This experiment thus provides evidence for a compensatory effect at longer durations, in which people underestimate a 'slow-feeling' condition and overestimate a 'fast-feeling' condition. The results are discussed in relation to heuristics that might alter the relationship between these two variables when conditions range from several minutes up to almost an hour.

Keywords: duration estimates, long durations, passage of time judgements, task demands

Procedia PDF Downloads 117
3463 Exploring the Importance of Different Product Cues on the Selection for Chocolate from the Consumer Perspective

Authors: Ezeni Brzovska, Durdana Ozretic-Dosen

Abstract:

The purpose of this paper is to deepen the understanding of the product cues that influence the purchase decision for a specific product category, chocolate, and to identify demographic differences in buying behavior. ANOVA was employed to analyze the significance level of nine product cues, and the survey showed statistically significant differences among different age and gender groups and between respondents with different levels of education. From the theoretical perspective, the study adds to the existing knowledge by contributing research results from a new environment (Southeast Europe, Macedonia), which has been neglected so far. Establishing the level of significance of the product cues that affect buying behavior in the chocolate consumption context might help managers improve marketing decision-making and better meet consumer needs by identifying opportunities for packaging innovations and/or personalization toward different target groups.

Keywords: chocolate consumption context, chocolate selection, demographic characteristics, product cues

Procedia PDF Downloads 237
3462 Supply Chain Risk Management (SCRM): A Simplified Alternative for Implementing SCRM for Small and Medium Enterprises

Authors: Paul W. Murray, Marco Barajas

Abstract:

Recent changes in supply chains, especially globalization and collaboration, have created new risks for enterprises of all sizes. A variety of complex frameworks, often based on enterprise risk management strategies, have been presented under the heading of Supply Chain Risk Management (SCRM). The literature promotes the benefits of a robust SCRM strategy; however, implementing SCRM is difficult and resource demanding for large enterprises (LEs), and essentially out of reach for small and medium enterprises (SMEs). This research debunks the idea that SCRM is necessary for all enterprises and instead proposes a simple and effective Vendor Selection Template (VST). Empirical testing and a survey of supply chain practitioners provide a measure of validation for the VST. The resulting VST is a valuable contribution because it is easy to use, provides practical results, and is sufficiently flexible to be universally applied to SMEs.

Keywords: multiple regression analysis, supply chain management, risk assessment, vendor selection

Procedia PDF Downloads 451
3461 Optical Variability of Faint Quasars

Authors: Kassa Endalamaw Rewnu

Abstract:

The variability properties of a quasar sample, spectroscopically complete to magnitude J = 22.0, are investigated on a time baseline of two years using three different photometric bands (U, J and F). The original sample was obtained using a combination of different selection criteria: colors, slitless spectroscopy and variability, based on a time baseline of one year. The main goals of this work are two-fold: first, to derive the percentage of variable quasars on a relatively short time baseline; second, to search for new quasar candidates missed by the other selection criteria and thus to estimate the completeness of the spectroscopic sample. In order to achieve these goals, we extracted all the candidate variable objects from a sample of about 1800 stellar or quasi-stellar objects with limiting magnitude J = 22.50 over an area of about 0.50 deg². We find that > 65% of all the objects selected as possible variables are either confirmed quasars or quasar candidates on the basis of their colors. This percentage increases even further if we exclude from our lists of variable candidates a number of objects equal to that expected on the basis of 'contamination' induced by our photometric errors. The percentage of variable quasars in the spectroscopic sample is also high, reaching about 50%. On the basis of these results, we estimate that the incompleteness of the original spectroscopic sample is < 12%. We conclude that variability analysis of data with small photometric errors can be successfully used as an efficient and independent (or at least auxiliary) selection method in quasar surveys, even when the time baseline is relatively short. Finally, when corrected for the different intrinsic time lags corresponding to a fixed observed time baseline, our data do not show a statistically significant correlation between variability and either absolute luminosity or redshift.

Keywords: nuclear activity, galaxies, active quasars, variability

Procedia PDF Downloads 66
3460 Poster: Incident Signals Estimation Based on a Modified MCA Learning Algorithm

Authors: Rashid Ahmed, John N. Avaritsiotis

Abstract:

Many signal subspace-based approaches have already been proposed for determining the fixed Direction of Arrival (DOA) of plane waves impinging on an array of sensors. Two procedures for neural-network-based DOA estimation are presented. First, Principal Component Analysis (PCA) is employed to extract the maximum eigenvalue and its eigenvector from the signal subspace to estimate the DOA. Second, Minor Component Analysis (MCA) is a statistical method for extracting the eigenvector associated with the smallest eigenvalue of the covariance matrix. In this paper, we modify an MCA(R) learning algorithm to enhance convergence, since convergence is essential for MCA algorithms in practical applications. The learning rate parameter is also examined: it ensures fast convergence of the algorithm because it directly affects the convergence of the weight vector, and the error level is affected by its value. MCA is then performed to determine the estimated DOA. Preliminary results are furnished to illustrate the convergence results achieved.
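
For illustration, a generic minor-component extraction sketch follows; it uses plain Rayleigh-quotient descent with renormalization rather than the authors' modified MCA(R) rule, and the learning rate is an illustrative choice.

```python
# Generic MCA sketch: projected gradient descent on the Rayleigh quotient
# of the sample covariance converges to the eigenvector of the smallest
# eigenvalue (the minor component).
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((5000, 4)) @ np.diag([3.0, 2.0, 1.0, 0.2])
C = np.cov(X, rowvar=False)                 # sample covariance

w = rng.standard_normal(4)
w /= np.linalg.norm(w)
eta = 0.05                                  # learning rate (see abstract)
for _ in range(2000):
    w -= eta * (C @ w - (w @ C @ w) * w)    # Rayleigh-quotient descent step
    w /= np.linalg.norm(w)                  # keep the weight vector unit-norm

evals, evecs = np.linalg.eigh(C)            # eigh: ascending eigenvalues
print("alignment with true minor component:", abs(w @ evecs[:, 0]))  # ~1.0
```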

Keywords: Direction of Arrival, neural networks, Principal Component Analysis, Minor Component Analysis

Procedia PDF Downloads 436
3459 Sensor Fault-Tolerant Model Predictive Control for Linear Parameter Varying Systems

Authors: Yushuai Wang, Feng Xu, Junbo Tan, Xueqian Wang, Bin Liang

Abstract:

In this paper, a sensor fault-tolerant control (FTC) scheme using robust model predictive control (RMPC) and set-theoretic fault detection and isolation (FDI) is extended to linear parameter varying (LPV) systems. First, a group of set-valued observers is designed for passive fault detection (FD), and the observer gains are obtained by minimizing the size of the invariant set of the state estimation-error dynamics. Second, an input set for fault isolation (FI) is designed offline through set theory in order to actively isolate faults after FD. Third, an RMPC controller based on state estimation for LPV systems is designed to control the system in the presence of disturbance and measurement noise and to tolerate faults. Besides, an FTC algorithm is proposed to keep the plant operating in the corresponding mode when a fault occurs. Finally, a numerical example is used to show the effectiveness of the proposed results.

Keywords: fault detection, linear parameter varying, model predictive control, set theory

Procedia PDF Downloads 232
3458 Airborne SAR Data Analysis for Impact of Doppler Centroid on Image Quality and Registration Accuracy

Authors: Chhabi Nigam, S. Ramakrishnan

Abstract:

This paper presents an analysis of airborne Synthetic Aperture Radar (SAR) data to study the impact of the Doppler centroid on image quality and geocoding accuracy from the perspective of Stripmap mode data acquisition. Although in Stripmap mode the radar beam points at 90 degrees broadside (side looking), a shift in the Doppler centroid is inevitable due to platform motion. Inaccurate estimation of the Doppler centroid leads to poor image quality and image misregistration. The effect of the Doppler centroid is analyzed in this paper using multiple data sets collected from an airborne platform. Occurrences of ghost (ambiguous) targets and their power levels have been analyzed, which impacts the appropriate choice of PRF. The effect of aircraft attitudes (roll, pitch and yaw) on the Doppler centroid is also analyzed with the collected data sets. The various stages of the Range Doppler Algorithm (RDA) used for image formation in Stripmap mode, range compression, Doppler centroid estimation, azimuth compression, and range cell migration correction, are analyzed to find the performance limits and the dependence of the imaging geometry on the final image. The ability of Doppler centroid estimation to enhance the imaging accuracy for registration is also illustrated in this paper. The paper also covers the processing of low-squint SAR data, the challenges, and the performance limits imposed by the imaging geometry and the platform dynamics on the final image quality metrics. Finally, the effect on various terrain types, including land, water and bright scatterers, is also presented.
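
One standard way to estimate the Doppler centroid from raw azimuth data, shown here as a generic illustration rather than the paper's processing chain, is the correlation (pulse-pair) estimator attributed to Madsen, sketched below on simulated data with illustrative parameter values.

```python
# Correlation-based Doppler centroid sketch: the phase of the average
# azimuth lag-1 correlation of the raw data gives the (fractional-PRF)
# Doppler centroid.
import numpy as np
from scipy.ndimage import uniform_filter1d

prf = 1000.0                                   # pulse repetition frequency, Hz
f_dc_true = 123.0                              # simulated Doppler centroid, Hz
n_az, n_rg = 4096, 64
rng = np.random.default_rng(4)

w = rng.standard_normal((n_az, n_rg)) + 1j * rng.standard_normal((n_az, n_rg))
base = (uniform_filter1d(w.real, 9, axis=0)    # azimuth-correlated clutter
        + 1j * uniform_filter1d(w.imag, 9, axis=0))
t = np.arange(n_az) / prf
signal = base * np.exp(2j * np.pi * f_dc_true * t)[:, None]

acc = np.mean(np.conj(signal[:-1, :]) * signal[1:, :])  # avg lag-1 correlation
f_dc_est = prf * np.angle(acc) / (2 * np.pi)
print(f"estimated Doppler centroid: {f_dc_est:.1f} Hz")
```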

Keywords: ambiguous target, Doppler centroid, image registration, airborne SAR

Procedia PDF Downloads 206
3457 Composite Laminate and Thin-Walled Beam Correlations for Aircraft Wing Box Design

Authors: S. J. M. Mohd Saleh, S. Guo

Abstract:

Composite materials have become an important option for the primary structure of aircraft due to their design flexibility and their ability to improve overall performance. At present, the choice of composites in aircraft components is largely based on experience, knowledge, benchmarking, and is partly market driven. The inevitable design iterations during the design stage and validation process increase development time and cost. This paper presents the correlation between laminate and composite thin-walled beam structures, containing theoretical and numerical investigations of the stiffness estimation of composite aerostructures with application to aircraft wings. Classical laminate theory and thin-walled beam theory were applied to define the correlation between the 1-dimensional composite laminate and the 2-dimensional composite beam structure, respectively. An FE model was then created to represent the 3-dimensional structure. A detailed study of the stiffness matrix of composite laminates has been carried out to understand the effects of the stacking sequence on the coupling between extension, shear, bending and torsional deformation of wing box structures for 1-dimensional, 2-dimensional and 3-dimensional structures. Relationships between composite laminates and composite wing box structures of the same material have been developed in this study. These correlations serve as guidelines for design engineers to predict the stiffness of the wing box structure during the material selection process and the laminate design stage.
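
As an illustration of the classical-laminate-theory step, the sketch below assembles the A, B and D stiffness matrices for a stacking sequence; the ply properties and layup are generic carbon/epoxy placeholders rather than the paper's data.

```python
# Classical lamination theory sketch: A, B, D matrices from ply properties
# and a stacking sequence (Qbar via the Reuter-matrix transformation).
import numpy as np

E1, E2, G12, nu12 = 140e9, 10e9, 5e9, 0.3      # Pa, generic CFRP ply
nu21 = nu12 * E2 / E1
d = 1 - nu12 * nu21
Q = np.array([[E1 / d, nu12 * E2 / d, 0],
              [nu12 * E2 / d, E2 / d, 0],
              [0, 0, G12]])                     # reduced ply stiffness

def Qbar(theta):
    c, s = np.cos(theta), np.sin(theta)
    T = np.array([[c * c, s * s, 2 * c * s],
                  [s * s, c * c, -2 * c * s],
                  [-c * s, c * s, c * c - s * s]])
    R = np.diag([1.0, 1.0, 2.0])                # engineering-strain correction
    return np.linalg.inv(T) @ Q @ R @ T @ np.linalg.inv(R)

plies = np.deg2rad([0, 45, -45, 90, 90, -45, 45, 0])  # symmetric layup
t_ply = 0.125e-3
z = (np.arange(len(plies) + 1) - len(plies) / 2) * t_ply
A = sum(Qbar(th) * (z[k + 1] - z[k]) for k, th in enumerate(plies))
B = sum(Qbar(th) * (z[k + 1] ** 2 - z[k] ** 2) / 2 for k, th in enumerate(plies))
D = sum(Qbar(th) * (z[k + 1] ** 3 - z[k] ** 3) / 3 for k, th in enumerate(plies))
print("max |B| (≈ 0 for a symmetric laminate):", np.abs(B).max())
```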

Keywords: aircraft design, aircraft structures, classical lamination theory, composite structures, laminate theory, structural design, thin-walled beam theory, wing box design

Procedia PDF Downloads 216
3456 Estimation of Fouling in a Cross-Flow Heat Exchanger Using Artificial Neural Network Approach

Authors: Rania Jradi, Christophe Marvillet, Mohamed Razak Jeday

Abstract:

One of the most frequently encountered problems in industrial heat exchangers is fouling, which degrades the thermal and hydraulic performance of this type of equipment, leading to failure if undetected. Fouling occurs due to the accumulation of undesired material on the heat transfer surface, so it is necessary to understand the fouling dynamics of the heat exchanger in order to plan mitigation strategies and ensure sustainable and safe operation. This paper proposes an Artificial Neural Network (ANN) approach to estimate the fouling resistance in a cross-flow heat exchanger, using operating data collected from the phosphoric acid concentration loop. A set of 361 operating data points was used to validate the proposed model. The ANN attains AARD = 0.048%, MSE = 1.811×10⁻¹¹, RMSE = 4.256×10⁻⁶ and r² = 99.5%, which confirms that it is a credible and valuable approach for industrialists and technologists who are faced with the drawbacks of fouling in heat exchangers.
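
A minimal sketch of such an ANN estimator follows, with synthetic stand-ins for the acid-loop variables; the feature set, network size and toy ground truth are assumptions, not the paper's configuration.

```python
# Sketch of an ANN fouling-resistance estimator: a small MLP regressor
# mapping loop operating variables to fouling resistance.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
n = 361                                       # matches the reported dataset size
X = rng.uniform([40, 1.0, 20], [80, 3.0, 50], size=(n, 3))  # T_in, flow, conc.
r_fouling = 1e-4 * (0.5 + 0.01 * X[:, 0] - 0.05 * X[:, 1]) \
            + 1e-6 * rng.standard_normal(n)   # toy ground truth

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 16),
                                   max_iter=5000, random_state=0))
model.fit(X[:300], r_fouling[:300])           # train / hold-out split
pred = model.predict(X[300:])
print("hold-out RMSE:", np.sqrt(np.mean((pred - r_fouling[300:]) ** 2)))
```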

Keywords: cross-flow heat exchanger, fouling, estimation, phosphoric acid concentration loop, artificial neural network approach

Procedia PDF Downloads 191
3455 A Framework for Evaluating the QoS and Cost of Web Services Based on Its Functional Performance

Authors: M. Mohemmed Sha, T. Manesh, A. Ahmed Mohamed Mustaq

Abstract:

In the corporate world, the technology of Web services has grown rapidly, and its significance for the development of web-based applications has gradually risen over time. The success of business-to-business integration relies on finding novel partners and their services in a global business environment. The selection of the most suitable Web service from a list of services with identical functionality is therefore vital. The satisfaction level of the customer and the provider's reputation primarily depend on the extent to which the service meets the customer's requirements. In most cases, the customer of the Web service feels that he is paying for a service that is undelivered, because the real functionality of the web service is not achieved; this leads to frequent changes of service. In this paper, a framework is proposed to evaluate the Quality of Service (QoS) and its cost, establishing the optimal correlation between the two. This research also proposes management decisions against functional deviations of the web service that are guaranteed at the time of selection.

Keywords: web service, service level agreement, quality of a service, cost of a service, QoS, CoS, SOA, WSLA, WsRF

Procedia PDF Downloads 401
3454 A Spatial Hypergraph Based Semi-Supervised Band Selection Method for Hyperspectral Imagery Semantic Interpretation

Authors: Akrem Sellami, Imed Riadh Farah

Abstract:

Hyperspectral imagery (HSI) typically provides a wealth of information captured in a wide range of the electromagnetic spectrum for each pixel in the image. Hence, a pixel in HSI is a high-dimensional vector of intensities with a large spectral range and a high spectral resolution. Therefore, semantic interpretation is a challenging task of HSI analysis. In this paper, we focus on object classification as HSI semantic interpretation. However, HSI classification still faces some issues, among which are the following: the spatial variability of spectral signatures, the high number of spectral bands, and the high cost of true sample labeling. The high number of spectral bands combined with the low number of training samples poses the problem of the curse of dimensionality. In order to resolve this problem, we propose to introduce a dimensionality reduction process to improve the classification of HSI. The presented approach is a semi-supervised band selection method based on a spatial hypergraph embedding model that represents higher-order relationships with different weights for the spatial neighbors of each centroid pixel. This semi-supervised band selection has been developed to select useful bands for object classification. The approach is evaluated on the AVIRIS and ROSIS HSIs and compared to other dimensionality reduction methods. The experimental results demonstrate the efficacy of our approach compared to many existing dimensionality reduction methods for HSI classification.

Keywords: dimensionality reduction, hyperspectral image, semantic interpretation, spatial hypergraph

Procedia PDF Downloads 299
3453 Multi-Objective Discrete Optimization of External Thermal Insulation Composite Systems in Terms of Thermal and Embodied Energy Performance

Authors: Berfin Yildiz

Abstract:

These days, increasing global warming effects and the limited amount of energy resources necessitate an awareness that must be present in every professional group. The architecture and construction sectors are responsible for both the embodied and the operational energy of building materials. This responsibility has led designers to seek alternative solutions for energy-efficient material selection. The choice of an energy-efficient material requires consideration of the entire life cycle, including the building's production, use and disposal energy. The aim of this study is to investigate the material selection method for external thermal insulation composite systems (ETICS). Embodied and in-use energy values of material alternatives were used for the evaluation in this study. The operational energy is calculated according to the U-value calculation method defined in the TS 825 (Thermal Insulation Requirements) standard for Turkey, and the embodied energy is calculated based on the manufacturer's Environmental Product Declaration (EPD). ETICS consists of wall, adhesive, insulation, lining, mechanical fixing, mesh and exterior finishing materials. In this study, the lining, mechanical fixing and mesh materials were ignored because EPD documents could not be obtained. The material selection problem is designed around a hypothetical volume (5×5×3 m) and defined as a multi-objective discrete optimization problem for external thermal insulation composite systems. Defining the problem as a discrete optimization problem is important in order to choose between materials of various thicknesses and sizes. Since the production and use energy values, which are set as the optimization objectives in this study, are often conflicting, material selection is defined as a multi-objective optimization problem, and the aim is to obtain many solution alternatives by using the Hypervolume (HypE) algorithm. The evolutionary run started with a population of 100 individuals and continued for 50 generations. According to the obtained results, autoclaved aerated concrete and Ponce block as wall materials, and glass wool as insulation material, gave better results.
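
The multi-objective comparison at the heart of the method can be illustrated with a plain Pareto-front filter; the sketch below uses invented embodied/operational energy values and leaves out the HypE machinery of the actual study.

```python
# Sketch: filter ETICS material configurations to the Pareto front when
# minimizing both embodied and operational energy.
import numpy as np

# columns: embodied energy (MJ/m2), annual operational energy (kWh/m2.a)
configs = np.array([[450, 38], [620, 31], [380, 47], [700, 30], [520, 40]])

def pareto_front(F):
    """Boolean mask of non-dominated rows (both objectives minimized)."""
    mask = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominated.any():        # some other config is better everywhere
            mask[i] = False
    return mask

print("Pareto-optimal configurations:", np.nonzero(pareto_front(configs))[0])
```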

Keywords: embodied energy, multi-objective discrete optimization, performative design, thermal insulation

Procedia PDF Downloads 123
3452 BER of the Leaky Feeder under Rayleigh Fading Multichannel Reception with Imperfect Phase Estimation

Authors: Hasan Farahneh, Xavier Fernando

Abstract:

The leaky feeder (LF) has been a proven technology for many decades, and it promises broadband wireless access at short range, yet it has been overlooked until now. The LF is a natural MIMO transceiver, ideal for micro and pico cells. In this work, the LF is considered as a linear antenna array in a Multi-Input Single-Output (MISO) configuration, and we derive the average bit error rate (BER) in a Rayleigh fading channel assuming ideal, independent and identically distributed (iid) paths, i.e., no correlation or mutual coupling between the transmit antennas (slots) or the receiver antenna, with QPSK modulation and imperfect phase estimation. We consider maximal ratio transmission (MRT) at the transmitting end and maximal ratio combining (MRC) at the receiving end. Analytical expressions are derived for the BER with radiating cable transmitters. The effects of slot spacing and carrier frequency on the BER are also studied. Numerical evaluations show that the radiating cable transmitter offers a much lower BER than a single-antenna transmitter at the same SNR.
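
As a rough numerical counterpart to such analytical expressions, here is a Monte Carlo sketch of QPSK with MRT over iid Rayleigh branches and a Gaussian phase error on each branch estimate; the branch count, phase-error spread and plain slicer detection are illustrative assumptions, not the paper's system parameters.

```python
# Monte Carlo BER sketch: QPSK, MRT over iid Rayleigh branches, imperfect
# per-branch phase estimates.
import numpy as np

rng = np.random.default_rng(6)
L, n_sym, snr_db, phase_std = 4, 200_000, 10.0, 0.2   # phase error in rad
snr = 10 ** (snr_db / 10)

bits = rng.integers(0, 2, size=(n_sym, 2))
s = ((1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])) / np.sqrt(2)  # Gray QPSK

h = (rng.standard_normal((n_sym, L)) + 1j * rng.standard_normal((n_sym, L))) / np.sqrt(2)
h_est = h * np.exp(1j * phase_std * rng.standard_normal((n_sym, L)))  # phase error
w = np.conj(h_est) / np.linalg.norm(h_est, axis=1, keepdims=True)     # MRT weights

noise = (rng.standard_normal(n_sym) + 1j * rng.standard_normal(n_sym)) / np.sqrt(2 * snr)
r = np.sum(h * w, axis=1) * s + noise            # effective channel + noise
bits_hat = np.stack([(r.real < 0), (r.imag < 0)], axis=1).astype(int)
print("BER:", np.mean(bits_hat != bits))
```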

Keywords: leaky feeder, BER, QPSK, Rayleigh fading, channel gain, phase mismatch

Procedia PDF Downloads 368
3451 Process for Analyzing Information Security Risks Associated with the Incorporation of Online Dispute Resolution Systems in the Context of Conciliation in Colombia

Authors: Jefferson Camacho Mejia, Jenny Paola Forero Pachon, Luis Carlos Gomez Florez

Abstract:

The innumerable possibilities offered by the use of Information Technology (IT) in the development of different socio-economic activities have produced a change in the social paradigm and the emergence of the so-called information and knowledge society. The Colombian government, aware of this reality, has been promoting the use of IT as part of the e-government strategy adopted in the country. However, it is well known that the use of IT implies the existence of certain threats that put the security of information in the digital environment at risk. One of the priorities of the Colombian government is to improve access to alternative justice through IT, in particular, access to Alternative Dispute Resolution (ADR): conciliation, arbitration and friendly composition, by means of which citizens directly resolve their differences. To this end, a trend has been identified in the use of Online Dispute Resolution (ODR) systems, which extend the benefits of ADR to the digital environment through the use of IT. This article presents a process for the analysis of information security risks associated with the incorporation of ODR systems in the context of conciliation in Colombia, based on four fundamental stages identified in the literature: (I) identification of assets, (II) identification of threats and vulnerabilities, (III) estimation of the impact, and (IV) estimation of risk levels. The methodological design adopted for this research was grounded theory, since it involves interactions that are applied to a specific context and from the perspective of diverse participants. As a result of this investigation, the activities to be followed in carrying out an analysis of information security risks in the context of conciliation in Colombia supported by ODR systems are defined, thus contributing to the estimation of the risks and making their subsequent treatment possible.
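
As a generic illustration of stages (III) and (IV), the sketch below combines impact and likelihood scores into risk levels; the assets, threats, scales and thresholds are invented examples, not findings of the grounded-theory study.

```python
# Generic risk-level sketch: risk = impact x likelihood per asset-threat pair.
ASSETS = {"case file repository": 5, "videoconference channel": 4}  # impact 1-5
THREATS = {"unauthorized access": 4, "service unavailability": 3}   # likelihood 1-5

def risk_level(score):
    return "high" if score >= 15 else "medium" if score >= 8 else "low"

for asset, impact in ASSETS.items():
    for threat, likelihood in THREATS.items():
        score = impact * likelihood
        print(f"{asset} / {threat}: {score} ({risk_level(score)})")
```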

Keywords: alternative dispute resolution, conciliation, information security, online dispute resolution systems, process, risk analysis

Procedia PDF Downloads 227
3450 A Game of Information in Defense/Attack Strategies: Case of Poisson Attacks

Authors: Asma Ben Yaghlane, Mohamed Naceur Azaiez

Abstract:

In this paper, we briefly introduce the concept of Poisson attacks in the case of defense/attack strategies where attacks are assumed to be continuous. We suggest a game model in which the attacker combines two criteria, a sufficient confidence level of a successful attack and a reasonably small size of the estimation error, in order to launch an attack. Here, the estimation error arises from assessing the system failure upon attack using aggregate data at the system level; the corresponding error is referred to as aggregation error. On the other hand, the defender attempts to deter the attack by making one or both criteria inapplicable. The defender builds his/her strategy by strengthening the targeted system and/or increasing the size of the error. We formulate the defender problem based on appropriate optimization models. The attacker opts for Bayesian updating in assessing the impact of the improvement made by the defender, and then evaluates the feasibility of the attack before deciding whether or not to launch it. We provide illustrations to better explain the process.

Keywords: attacker, defender, game theory, information

Procedia PDF Downloads 453
3449 Confidence Envelopes for Parametric Model Selection Inference and Post-Model Selection Inference

Authors: I. M. L. Nadeesha Jayaweera, Adao Alex Trindade

Abstract:

In choosing a candidate model in likelihood-based modeling via an information criterion, the practitioner is often faced with the difficult task of deciding just how far up the ranked list to look. Motivated by this pragmatic necessity, we construct an uncertainty band for a generalized (model selection) information criterion (GIC), defined as a criterion whose limit in probability is identical to that of the normalized log-likelihood. This includes common special cases such as AIC and BIC. The method starts from the asymptotic normality of the GIC for the joint distribution of the candidate models in an independent and identically distributed (IID) data framework and proceeds by deriving the (asymptotically) exact distribution of the minimum. The calculation of an upper quantile for its distribution then involves the computation of multivariate Gaussian integrals, which is amenable to efficient implementation via the R package "mvtnorm". The performance of the methodology is tested on simulated data by checking the coverage probability of nominal upper quantiles and comparing it to the bootstrap. Both methods give coverage close to nominal for large samples, but the bootstrap is two orders of magnitude slower. The methodology is subsequently extended to two other commonly used model structures: regression and time series. In the regression case, we derive the corresponding asymptotically exact distribution of the minimum GIC by invoking Lindeberg-Feller type conditions for triangular arrays, and are thus able to similarly calculate upper quantiles for its distribution via multivariate Gaussian integration. The bootstrap once again provides a default competing procedure, and we find that similar comparative performance metrics hold as in the IID case. The time series case is complicated by a far more intricate asymptotic regime for the joint distribution of the model GIC statistics. Under a Gaussian likelihood, the default in most packages, one needs to derive the limiting distribution of a normalized quadratic form for a realization from a stationary series. Under conditions on the process satisfied by ARMA models, a multivariate normal limit is once again achieved. The bootstrap can, however, be employed for its computation, whence we are once again in the multivariate Gaussian integration paradigm for upper quantile evaluation. Comparisons of this bootstrap-aided semi-exact method with the full-blown bootstrap once again reveal similar performance but faster computation speeds. One of the most difficult problems in contemporary statistical methodological research is to account for the extra variability introduced by model selection uncertainty, the so-called post-model selection inference (PMSI). We explore ways in which the GIC uncertainty band can be inverted to make inferences on the parameters. This is attempted in the IID case by pivoting the CDF of the asymptotically exact distribution of the minimum GIC. For inference on one parameter at a time and a small number of candidate models, this works well, whence the attained PMSI confidence intervals are wider than the MLE-based Wald intervals, as expected.
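
The orthant-probability computation behind the upper quantile can be sketched as follows; the means and covariance are illustrative stand-ins, and SciPy replaces the R package "mvtnorm" mentioned above.

```python
# Sketch of the quantile computation for the minimum of jointly Gaussian
# GIC statistics: P(min_i G_i <= t) = 1 - P(all G_i > t), an orthant
# probability computable by multivariate-normal integration.
import numpy as np
from scipy.stats import multivariate_normal
from scipy.optimize import brentq

mu = np.array([0.0, 0.3, 0.8])                 # asymptotic GIC means (assumed)
Sigma = 0.5 * np.eye(3) + 0.5                  # equicorrelated covariance

def cdf_min(t):
    # P(all G_i > t) = P(-G_i < -t for all i), and -G ~ N(-mu, Sigma)
    upper_orthant = multivariate_normal.cdf(-t * np.ones(3), mean=-mu, cov=Sigma)
    return 1.0 - upper_orthant                 # P(min <= t)

alpha = 0.95
t_alpha = brentq(lambda t: cdf_min(t) - alpha, -10, 10)  # upper quantile
print(f"{alpha:.0%} upper quantile of the minimum GIC: {t_alpha:.3f}")
```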

Keywords: model selection inference, generalized information criteria, post model selection, asymptotic theory

Procedia PDF Downloads 73
3448 Method for Selecting and Prioritising Smart Services in Manufacturing Companies

Authors: Till Gramberg, Max Kellner, Erwin Gross

Abstract:

This paper presents a comprehensive investigation into the topic of smart services and IIoT-Platforms, focusing on their selection and prioritization in manufacturing organizations. First, a literature review is conducted to provide a basic understanding of the current state of research in the area of smart services. Based on discussed and established definitions, a definition approach for this paper is developed. In addition, value propositions for smart services are identified based on the literature and expert interviews. Furthermore, the general requirements for the provision of smart services are presented. Subsequently, existing approaches for the selection and development of smart services are identified and described. In order to determine the requirements for the selection of smart services, expert opinions from successful companies that have already implemented smart services are collected through semi-structured interviews. Based on the results, criteria for the evaluation of existing methods are derived. The existing methods are then evaluated according to the identified criteria. Furthermore, a novel method for the selection of smart services in manufacturing companies is developed, taking into account the identified criteria and the existing approaches. The developed concept for the method is verified in expert interviews. The method includes a collection of relevant smart services identified in the literature. The actual relevance of the use cases in the industrial environment was validated in an online survey. The required data and sensors are assigned to the smart service use cases. The value proposition of the use cases is evaluated in an expert workshop using different indicators. Based on this, a comparison is made between the identified value proposition and the required data, leading to a prioritization process. The prioritization process follows an established procedure for evaluating technical decision-making processes. In addition to the technical requirements, the prioritization process includes other evaluation criteria such as the economic benefit, the conformity of the new service offering with the company strategy, or the customer retention enabled by the smart service. Finally, the method is applied and validated in an industrial environment. The results of these experiments are critically reflected upon and an outlook on future developments in the area of smart services is given. This research contributes to a deeper understanding of the selection and prioritization process as well as the technical considerations associated with smart service implementation in manufacturing organizations. The proposed method serves as a valuable guide for decision makers, helping them to effectively select the most appropriate smart services for their specific organizational needs.

Keywords: smart services, IIoT, Industrie 4.0, IIoT platform, big data

Procedia PDF Downloads 69