Search results for: time estimation
19015 Determination of Tide Height Using Global Navigation Satellite Systems (GNSS)
Authors: Faisal Alsaaq
Abstract:
Hydrographic surveys have traditionally relied on the availability of tide information for the reduction of sounding observations to a common datum. In most cases, tide information is obtained from tide gauge observations and/or tide predictions over space and time using local, regional or global tide models. While the latter often provide a rather crude approximation, the former rely on tide gauge stations that are spatially restricted and often have a sparse and limited distribution. A more recent method that is increasingly being used is Global Navigation Satellite System (GNSS) positioning, which can be utilised to monitor height variations of a vessel or buoy, thus providing information on sea level variations during the time of a hydrographic survey. However, GNSS heights obtained under the dynamic environment of a survey vessel are affected by “non-tidal” processes such as wave activity and the attitude of the vessel (roll, pitch, heave and dynamic draft). This research seeks to examine techniques that separate the tide signal from other non-tidal signals that may be contained in GNSS heights. This requires an investigation of the processes involved and their temporal, spectral and stochastic properties in order to apply suitable techniques for the recovery of tide information. In addition, different post-mission and near real-time GNSS positioning techniques will be investigated, with a focus on height estimation at sea. Furthermore, the study will investigate the possibility of transferring chart datums to the locations of tide gauges.
Keywords: hydrography, GNSS, datum, tide gauge
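The separation of the slowly varying tide signal from wave- and attitude-induced height variations rests on a simple frequency-domain argument: tides evolve over hours while waves and heave act over seconds. The sketch below only illustrates that idea with a zero-phase low-pass filter; it is not the paper's method, and the sampling rate, cutoff period and signal mix are assumed values.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def extract_tide(gnss_heights, fs_hz, cutoff_hz=1.0 / 600.0, order=4):
    """Zero-phase low-pass filter of a GNSS height series: keep slow (tidal)
    variations, reject wave/heave energy at much higher frequencies."""
    nyq = 0.5 * fs_hz
    b, a = butter(order, cutoff_hz / nyq, btype="low")
    return filtfilt(b, a, gnss_heights)

# Illustrative use: 1 Hz heights over 6 hours, semi-diurnal tide + 8 s swell + noise.
t = np.arange(0, 6 * 3600.0)
tide = 0.8 * np.sin(2 * np.pi * t / (12.42 * 3600.0))
waves = 0.3 * np.sin(2 * np.pi * t / 8.0)
heights = tide + waves + 0.05 * np.random.randn(t.size)
tide_estimate = extract_tide(heights, fs_hz=1.0)   # 10-minute cutoff period
```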
Procedia PDF Downloads 262
19014 Allocating Channels and Flow Estimation at Flood Prone Area in Desert, Example from AlKharj City, Saudi Arabia
Authors: Farhan Aljuaidi
Abstract:
The rapid expansion of Alkharj city, Saudi Arabia, towards the outlet of Wadi AlAin is critical for planners and decision makers. Nowadays, two major projects, the Salman bin Abdulaziz University compound and a new industrial area, are being developed in this flood-prone area, where no channels are clear and identified. The main contribution of this study is to divert the flow away from these vital projects by reconstructing new channels. To do so, Lidar data were used to generate contour lines for the actual elevation of the highways and local roads. These data were analyzed and compared to the contour lines derived from the 1:50,000 topographical maps. The magnitude of the expected flow was estimated using Snyder's model based on the morphometric data acquired from a DEM of the catchment area. The results indicate that the peak discharge reaches 2694.3 m³/s, the mean is 303.7 m³/s and the minimum is 74.3 m³/s. The runoff was estimated at 252.2 × 10⁶ m³, with a mean of 41.5 × 10⁶ m³ and a minimum of 12.4 × 10⁶ m³.
Keywords: desert flood, Saudi Arabia, Snyder's Model, flow estimation
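For readers unfamiliar with Snyder's model, the sketch below shows the commonly quoted SI-unit synthetic unit hydrograph relations for basin lag and peak discharge. The catchment values and the regional coefficients Ct and Cp are placeholders for illustration; they are not the Wadi AlAin morphometric data used in the study.

```python
def snyder_peak_discharge(area_km2, L_km, Lc_km, Ct=1.5, Cp=0.6, rain_cm=1.0):
    """Snyder synthetic unit hydrograph relations (SI form, commonly quoted):
         t_p = Ct * (L * Lc) ** 0.3      basin lag [h]
         Q_p = 2.78 * Cp * A / t_p       peak discharge [m^3/s] per cm of effective rain
    L: main channel length [km], Lc: length to basin centroid [km], A: area [km^2]."""
    t_p = Ct * (L_km * Lc_km) ** 0.3
    q_p = 2.78 * Cp * area_km2 / t_p
    return t_p, q_p * rain_cm

# Hypothetical catchment, for illustration only.
lag_h, peak_m3s = snyder_peak_discharge(area_km2=850.0, L_km=60.0, Lc_km=28.0)
print(lag_h, peak_m3s)
```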
Procedia PDF Downloads 309
19013 Influence of Parameters of Modeling and Data Distribution for Optimal Condition on Locally Weighted Projection Regression Method
Authors: Farhad Asadi, Mohammad Javad Mollakazemi, Aref Ghafouri
Abstract:
Recent research in neural network science and neuroscience on modeling complex time series data and statistical learning has focused mostly on learning from high-dimensional input spaces and signals. Local linear models are a strong choice for modeling local nonlinearity in data series. Locally weighted projection regression (LWPR) is a flexible and powerful algorithm for nonlinear approximation in high-dimensional signal spaces. In this paper, different learning scenarios for one- and two-dimensional data series with different distributions are simulated, and noise is added to the data to create disordered distributions in the time series, in order to evaluate the algorithm's ability to predict local nonlinearity. The performance of the algorithm is then simulated, and its sensitivity to the data distribution, together with the influence of the important local-validity parameter, is examined for cases where the data are widely distributed or the number of data points is small.
Keywords: local nonlinear estimation, LWPR algorithm, online training method, locally weighted projection regression method
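LWPR builds many local linear models whose receptive-field widths control "local validity". The snippet below is not the full LWPR algorithm (no projection directions, no incremental distance-metric learning); it is a minimal locally weighted linear regression sketch that shows how a kernel width parameter determines the locality of each fit. The width value and toy data are assumptions.

```python
import numpy as np

def loclin_predict(x_query, x, y, width=0.3):
    """Locally weighted linear prediction at x_query: each training point is
    weighted by a Gaussian kernel whose width plays the role of the
    local-validity (receptive-field) parameter discussed above."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    w = np.exp(-0.5 * ((x - x_query) / width) ** 2)
    A = np.column_stack([np.ones_like(x), x])          # [1, x] local linear model
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return beta[0] + beta[1] * x_query

# Noisy 1-D nonlinear series; accuracy depends on data density, noise and width.
xs = np.linspace(0.0, 2.0 * np.pi, 200)
ys = np.sin(xs) + 0.1 * np.random.randn(xs.size)
print(loclin_predict(1.5, xs, ys))
```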
Procedia PDF Downloads 502
19012 Hybrid Robust Estimation via Median Filter and Wavelet Thresholding with Automatic Boundary Correction
Authors: Alsaidi M. Altaher, Mohd Tahir Ismail
Abstract:
Wavelet thresholding has been a powerful tool in curve estimation and data analysis. In the presence of outliers, this nonparametric estimator cannot suppress the outliers involved. This study proposes a new two-stage combined method based on the use of the median filter as a primary step before applying wavelet thresholding. After suppressing the outliers in a signal through the median filter, classical wavelet thresholding is then applied to remove the remaining noise. We use automatic boundary corrections, using a low-order polynomial model or a local polynomial model as a more realistic rule to correct the bias at the boundary region, instead of the classical assumptions such as periodicity or symmetry. A simulation experiment has been conducted to evaluate the numerical performance of the proposed method. Results show strong evidence that the proposed method is extremely effective in correcting the boundary bias and eliminating sensitivity to outliers.
Keywords: boundary correction, median filter, simulation, wavelet thresholding
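A minimal sketch of the two-stage estimator described above: a median filter to suppress outliers, followed by soft wavelet thresholding with the universal threshold. It uses PyWavelets' default (symmetric) boundary extension rather than the polynomial boundary correction proposed in the paper, and the wavelet, decomposition level and kernel size are assumed choices.

```python
import numpy as np
import pywt
from scipy.signal import medfilt

def robust_denoise(signal, kernel_size=5, wavelet="sym8", level=4):
    """Stage 1: median filter removes isolated outliers.
       Stage 2: soft wavelet thresholding removes the remaining noise."""
    pre = medfilt(signal, kernel_size=kernel_size)
    coeffs = pywt.wavedec(pre, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # robust noise estimate
    thr = sigma * np.sqrt(2.0 * np.log(len(pre)))         # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

# Example: noisy sine with a few gross outliers.
x = np.sin(np.linspace(0, 4 * np.pi, 1024)) + 0.1 * np.random.randn(1024)
x[[100, 500, 900]] += 5.0
x_hat = robust_denoise(x)
```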
Procedia PDF Downloads 428
19011 A New Method to Estimate the Low Income Proportion: Monte Carlo Simulations
Authors: Encarnación Álvarez, Rosa M. García-Fernández, Juan F. Muñoz
Abstract:
Estimation of a proportion has many applications in economics and social studies. A common application is the estimation of the low income proportion, which gives the proportion of people classified as poor within a population. In this paper, we present this poverty indicator and propose the use of the logistic regression estimator for the problem of estimating the low income proportion. Various sampling designs are presented. Using a real data set obtained from the European Survey on Income and Living Conditions, Monte Carlo simulation studies are carried out to analyze the empirical performance of the logistic regression estimator under the various sampling designs considered in this paper. Results derived from the Monte Carlo simulation studies indicate that the logistic regression estimator can be more accurate than the customary estimator under these sampling designs. The stratified sampling design can also provide more accurate results.
Keywords: poverty line, risk of poverty, auxiliary variable, ratio method
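The logistic regression estimator of a proportion can be illustrated as a model-assisted estimator: fit P(poor | auxiliary variable) on the sample and average the predicted probabilities over the frame. The sketch below uses a synthetic population and simple random sampling rather than the EU-SILC data and the paper's sampling designs; all variable definitions are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic population: auxiliary variable x (e.g., a register income proxy, in
# thousands) is known for every unit; poverty status y is observed only in samples.
N = 20_000
x_pop = rng.gamma(shape=2.0, scale=10.0, size=N)
income = x_pop * rng.normal(1.0, 0.2, N)
y_pop = (income < 0.6 * np.median(income)).astype(int)   # below 60% of median income

def logistic_proportion(idx):
    """Fit P(poor | x) on the sample, then average predictions over the whole frame."""
    model = LogisticRegression().fit(x_pop[idx, None], y_pop[idx])
    return model.predict_proba(x_pop[:, None])[:, 1].mean()

# Monte Carlo comparison with the customary (sample proportion) estimator under SRS.
B, n, p_true = 500, 300, y_pop.mean()
lr = np.array([logistic_proportion(rng.choice(N, n, replace=False)) for _ in range(B)])
cu = np.array([y_pop[rng.choice(N, n, replace=False)].mean() for _ in range(B)])
print("MSE logistic:", np.mean((lr - p_true) ** 2), "MSE customary:", np.mean((cu - p_true) ** 2))
```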
Procedia PDF Downloads 456
19010 A Study of Fatigue Life Estimation of a Modular Unmanned Aerial Vehicle by Developing a Structural Health Monitoring System
Authors: Zain Ul Hassan, Muhammad Zain Ul Abadin, Muhammad Zubair Khan
Abstract:
Unmanned aerial vehicles (UAVs) have now become of predominant importance for various operations, and an immense amount of work is going on in this specific category. The structural stability and life of these UAVs is a key factor that should be considered while deploying them on different intelligent operations, as their failure leads to the loss of sensitive real-time data and cost. This paper presents applied research on the development of a structural health monitoring system for a UAV designed and fabricated using a modular approach. Firstly, a modular UAV has been designed which allows the components of the UAV to be dismantled and reassembled without affecting the whole assembly. This novel approach makes the vehicle very sustainable and decreases its maintenance cost significantly by making it possible to replace only the part leading to failure. The SHM for the designed architecture of the UAV was then specified as a combination of wings integrated with strain gauges, an on-board data logger, bridge circuitry and the ground station. For the purposes of this research, sensors have only been attached to the wings, these being the most load-bearing parts as per the analysis done in ANSYS. On the basis of analysis of the load-time spectrum obtained by the data logger during flight, the fatigue life of the respective component has been predicted using the fracture mechanics techniques of the rainflow method and Miner's rule, thus allowing the health of a specified component to be monitored from time to time and helping to avoid failure.
Keywords: fracture mechanics, rain flow method, structural health monitoring system, unmanned aerial vehicle
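The final damage-accumulation step mentioned above (rainflow-counted cycles combined through Miner's rule) reduces to a short calculation once the cycle ranges and counts are available. The S-N curve constants and the example spectrum below are placeholders, not values from the UAV wing analysis.

```python
import numpy as np

def miners_damage(stress_ranges_mpa, counts, sn_C=1.0e12, sn_m=3.0):
    """Palmgren-Miner cumulative damage from rainflow-counted cycles.
       N_allow(S) = C * S**(-m)   (Basquin-type S-N curve, placeholder constants)
       damage     = sum(n_i / N_allow_i); life in spectrum repetitions = 1 / damage."""
    S = np.asarray(stress_ranges_mpa, float)
    n = np.asarray(counts, float)
    n_allow = sn_C * S ** (-sn_m)
    damage = np.sum(n / n_allow)
    return damage, (np.inf if damage == 0 else 1.0 / damage)

# Hypothetical spectrum, as if produced by rainflow counting of one flight's strain log.
damage_per_flight, flights_to_failure = miners_damage([40, 80, 120], [5000, 600, 40])
print(damage_per_flight, flights_to_failure)
```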
Procedia PDF Downloads 294
19009 Optimizing Protection of Medieval Glass Mosaic
Authors: J. Valach, S. Pospisil, S. Kuznecov
Abstract:
The paper deals with experimental estimation of the future environmental load on the medieval mosaic of the Last Judgement above the entrance to St. Vitus Cathedral at Prague Castle. The mosaic suffers from seasonal changes in weather patterns, as well as from rain and its acidity, deposition of dust and soot particles from polluted air, and freeze-thaw cycles. These phenomena influence the state of the mosaic. The mosaic elements, tesserae, are mostly made of glass prone to weathering. To establish the best future maintenance procedure, the relation between various weather scenarios and their effect on the mosaic was investigated. At the same time, a local method for evaluation of the protective coating was developed. Together, both methods will contribute to better care for the mosaic and also to the visitors' aesthetic experience.
Keywords: environmental load, cultural heritage, glass mosaic, protection
Procedia PDF Downloads 280
19008 A Hazard Rate Function for the Time of Ruin
Authors: Sule Sahin, Basak Bulut Karageyik
Abstract:
This paper introduces a hazard rate function for the time of ruin to calculate the conditional probability of ruin over very small intervals. We call this function the force of ruin (FoR). We obtain the expected time of ruin and the conditional expected time of ruin from the exact finite-time ruin probability with exponential claim amounts. Then we introduce the FoR, which gives the conditional probability of ruin given that ruin has not occurred by time t. We analyse the behaviour of the FoR function for different initial surpluses over a specific time interval. We also obtain the FoR under the excess of loss reinsurance arrangement and examine the effect of reinsurance on the FoR.
Keywords: conditional time of ruin, finite time ruin probability, force of ruin, reinsurance
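The force of ruin can also be approximated numerically. The sketch below simulates a classical compound-Poisson surplus process with exponential claims and estimates FoR(t) as the probability of ruin in (t, t + dt] given survival to t; the premium rate, claim intensity and initial surplus are illustrative assumptions, and the paper's exact finite-time formulas are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def ruin_time(u, c, lam, mu, horizon):
    """First time the surplus u + c*t - (sum of claims) drops below zero;
       returns np.inf if no ruin occurs before the horizon."""
    t, claims = 0.0, 0.0
    while True:
        t += rng.exponential(1.0 / lam)          # next claim arrival
        if t >= horizon:
            return np.inf
        claims += rng.exponential(mu)            # claim size ~ Exp(mean mu)
        if u + c * t - claims < 0:
            return t

def force_of_ruin(ruin_times, grid, dt):
    """FoR(t) estimated as P(ruin in (t, t+dt] | no ruin by t)."""
    rt = np.asarray(ruin_times)
    out = []
    for t in grid:
        alive = rt > t
        out.append(np.mean(rt[alive] <= t + dt) if alive.any() else np.nan)
    return np.array(out)

T = [ruin_time(u=10.0, c=1.2, lam=1.0, mu=1.0, horizon=50.0) for _ in range(20_000)]
print(force_of_ruin(T, grid=np.arange(0.0, 20.0, 1.0), dt=1.0))
```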
Procedia PDF Downloads 405
19007 SNR Classification Using Multiple CNNs
Authors: Thinh Ngo, Paul Rad, Brian Kelley
Abstract:
Noise estimation is essential in today's wireless systems for power control, adaptive modulation, interference suppression and quality of service. Deep learning (DL) has already been applied in the physical layer for modulation and signal classification. An unacceptably low accuracy of less than 50% is found to undermine the traditional application of DL classification to SNR prediction. In this paper, we use a divide-and-conquer algorithm and a classifier fusion method to simplify SNR classification and therefore enhance DL learning and prediction. Specifically, multiple CNNs are used for classification rather than a single CNN. Each CNN performs a binary classification for a single SNR with two labels: less than, or greater than or equal. Together, the multiple CNNs are combined to effectively classify over a range of SNR values from −20 ≤ SNR ≤ 32 dB. We use pre-trained CNNs to predict SNR over a wide range of joint channel parameters, including multiple Doppler shifts (0, 60, 120 Hz), power-delay profiles, and signal modulation types (QPSK, 16-QAM, 64-QAM). The approach achieves an individual SNR prediction accuracy of 92%, a composite accuracy of 70% and prediction convergence one order of magnitude faster than that of traditional estimation.
Keywords: classification, CNN, deep learning, prediction, SNR
Procedia PDF Downloads 133
19006 Integrative System of GDP, Emissions, Health Services and Population Health in Vietnam: Dynamic Panel Data Estimation
Authors: Ha Hai Duong, Amnon Levy Livermore, Kankesu Jayanthakumaran, Oleg Yerokhin
Abstract:
The issues of economic development, the environment and human health have been investigated since the 1990s. Previous researchers have found different empirical evidence on the relationship between income and environmental pollution, on health as a determinant of economic growth, and on the effects of income and environmental pollution on health in various regions of the world. This paper concentrates on an integrative relationship analysis of GDP, carbon dioxide emissions, health services and population health in the context of Vietnam. We applied dynamic generalized method of moments (GMM) estimation to datasets for Vietnam's sixty-three provinces for the years 2000-2010. Our results show a significant positive effect of GDP on emissions and the dependence of population health on emissions and health services. We find a significant relationship between population health and GDP. Additionally, health services are significantly affected by population health and GDP. Finally, population size is another important determinant of both emissions and GDP.
Keywords: economic development, emissions, environmental pollution, health
Procedia PDF Downloads 625
19005 Hybridized Approach for Distance Estimation Using K-Means Clustering
Authors: Ritu Vashistha, Jitender Kumar
Abstract:
Clustering using the K-means algorithm is a very common way to understand and analyze the obtained output data. Grouping similar objects is the basis of clustering. There are K clusters for C objects, where K is always supposed to be less than C and each cluster has its own centroid; the major problem, however, is how to identify whether the clustering is correct based on the data. The formation of clusters is not a one-off task for every tuple, row, record or entity; it is done by an iterative process. Each record, tuple or entity is checked and examined, and its similarity and dissimilarity are evaluated. This iterative process is therefore very lengthy and unable to give an optimal output for the clusters within an acceptable time. To overcome this drawback, we propose a formula to find the clusters at run time, so that this approach can give us optimal results. The proposed approach uses the Euclidean distance formula, as well as melanosis, to find the minimum distance between slots (which we technically call clusters), and the same approach has also been applied to the Ant Colony Optimization (ACO) algorithm, which results in the production of two- and multi-dimensional matrices.
Keywords: ant colony optimization, data clustering, centroids, data mining, k-means
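Since the abstract does not spell out the proposed run-time formula or the ACO coupling, the sketch below only makes the baseline explicit: the iterative k-means loop in which Euclidean distances to the centroids drive the assignments. The toy data are assumed; the distance matrix computed here is the same quantity an ACO-style search could reuse.

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Plain k-means: assign each point to the nearest centroid (Euclidean
    distance), then move each centroid to the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, float)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)  # distance matrix
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# Two-dimensional toy data with two well-separated groups.
X = np.vstack([np.random.randn(100, 2), np.random.randn(100, 2) + 4.0])
labels, centers = kmeans(X, k=2)
```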
Procedia PDF Downloads 128
19004 Uncertainty Estimation in Neural Networks through Transfer Learning
Authors: Ashish James, Anusha James
Abstract:
The impressive predictive performance of deep learning techniques on a wide range of tasks has led to their widespread use. Estimating the confidence of these predictions is paramount for improving the safety and reliability of such systems. However, the uncertainty estimates provided by neural networks (NNs) tend to be overconfident and unreasonable. Ensembles of NNs typically produce good predictions, but their uncertainty estimates tend to be inconsistent. Inspired by this, this paper presents a framework that can quantitatively estimate the uncertainties by leveraging advances in transfer learning through slight modifications to existing training pipelines. This promising algorithm is developed with the intention of deployment in real-world problems that already boast good predictive performance, by reusing the pretrained models. The idea is to capture the behavior of the NNs trained for the base task by augmenting them with uncertainty estimates from a supplementary network. A series of experiments with known and unknown distributions show that the proposed approach produces well-calibrated uncertainty estimates with high-quality predictions.
Keywords: uncertainty estimation, neural networks, transfer learning, regression
Procedia PDF Downloads 135
19003 Fast Prediction Unit Partition Decision and Accelerating the Algorithm Using CUDA for Intra and Inter Prediction of HEVC
Authors: Qiang Zhang, Chun Yuan
Abstract:
Since the PU (Prediction Unit) decision process is the most time-consuming part of the emerging HEVC (High Efficiency Video Coding) standard in intra- and inter-frame coding, this paper proposes a fast PU decision algorithm and speeds it up using CUDA (Compute Unified Device Architecture). In intra-frame coding, the fast PU decision algorithm uses texture features to skip intra-frame prediction or to terminate intra-frame prediction for smaller PU sizes. In inter-frame coding of HEVC, the fast PU decision algorithm makes use of the similarity of a PU's own two Nx2N-size motion vectors and the hierarchical structure of the CU (Coding Unit) partition to skip some PU partition modes, so as to reduce the number of motion estimation runs. The accelerated algorithm using CUDA is based on the fast PU decision algorithm and uses the GPU so that the motion search and the gradient computation can be performed in parallel. The proposed algorithm achieves up to 57% time saving compared to HM 10.0 with little rate-distortion loss (0.043 dB drop and 1.82% bitrate increase on average).
Keywords: HEVC, PU decision, inter prediction, intra prediction, CUDA, parallel
Procedia PDF Downloads 399
19002 Poster: Incident Signals Estimation Based on a Modified MCA Learning Algorithm
Authors: Rashid Ahmed, John N. Avaritsiotis
Abstract:
Many signal subspace-based approaches have already been proposed for determining the fixed Direction of Arrival (DOA) of plane waves impinging on an array of sensors. Two procedures for DOA estimation based on neural networks are presented. First, Principal Component Analysis (PCA) is employed to extract the maximum eigenvalue and eigenvector from the signal subspace to estimate the DOA. Second, Minor Component Analysis (MCA) is a statistical method for extracting the eigenvector associated with the smallest eigenvalue of the covariance matrix. In this paper, we modify a Minor Component Analysis (MCA(R)) learning algorithm to enhance its convergence, since convergence is essential for the MCA algorithm in practical applications. The learning rate parameter is also presented; it ensures fast convergence of the algorithm because it directly affects the convergence of the weight vector, and the error level is affected by this value. MCA is performed to determine the estimated DOA. Preliminary results are furnished to illustrate the convergence results achieved.
Keywords: direction of arrival, neural networks, principal component analysis, minor component analysis
Procedia PDF Downloads 451
19001 Sensor Fault-Tolerant Model Predictive Control for Linear Parameter Varying Systems
Authors: Yushuai Wang, Feng Xu, Junbo Tan, Xueqian Wang, Bin Liang
Abstract:
In this paper, a sensor fault-tolerant control (FTC) scheme using robust model predictive control (RMPC) and set-theoretic fault detection and isolation (FDI) is extended to linear parameter varying (LPV) systems. First, a group of set-valued observers is designed for passive fault detection (FD), and the observer gains are obtained by minimizing the size of the invariant set of the state estimation-error dynamics. Second, an input set for fault isolation (FI) is designed offline through set theory for actively isolating faults after FD. Third, an RMPC controller based on state estimation for LPV systems is designed to control the system in the presence of disturbance and measurement noise and to tolerate faults. Besides, an FTC algorithm is proposed to keep the plant operating in the corresponding mode when a fault occurs. Finally, a numerical example is used to show the effectiveness of the proposed results.
Keywords: fault detection, linear parameter varying, model predictive control, set theory
Procedia PDF Downloads 252
19000 Airborne SAR Data Analysis for Impact of Doppler Centroid on Image Quality and Registration Accuracy
Authors: Chhabi Nigam, S. Ramakrishnan
Abstract:
This paper presents an analysis of airborne Synthetic Aperture Radar (SAR) data to study the impact of the Doppler centroid on image quality and geocoding accuracy from the perspective of Stripmap mode data acquisition. Although in Stripmap mode the radar beam points at 90 degrees broadside (side-looking), a shift in the Doppler centroid is unavoidable due to platform motion. Inaccurate estimation of the Doppler centroid leads to poor image quality and image misregistration. The effect of the Doppler centroid is analyzed in this paper using multiple data sets collected from an airborne platform. Occurrences of ghost (ambiguous) targets and their power levels have been analyzed, which impacts the appropriate choice of PRF. The effect of aircraft attitude (roll, pitch and yaw) on the Doppler centroid is also analyzed with the collected data sets. The various stages of the Range Doppler Algorithm (RDA) used for image formation in Stripmap mode, namely range compression, Doppler centroid estimation, azimuth compression and range cell migration correction, are analyzed to find the performance limits and the dependence of the imaging geometry on the final image. The ability of Doppler centroid estimation to enhance the imaging accuracy for registration is also illustrated in this paper. The paper also covers the processing of low-squint SAR data and the challenges and performance limits imposed by the imaging geometry and the platform dynamics on the final image quality metrics. Finally, the effect on various terrain types, including land, water and bright scatterers, is also presented.
Keywords: ambiguous target, Doppler centroid, image registration, airborne SAR
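One standard way to estimate the baseband Doppler centroid referred to above is the pulse-pair correlation estimator applied along azimuth; resolving the PRF ambiguity is a separate step. The sketch below is an illustration with a synthetic Doppler ramp, and the sign convention is an assumption that depends on the processor's definitions.

```python
import numpy as np

def doppler_centroid(raw, prf_hz):
    """Pulse-pair (correlation) Doppler centroid estimate.

    raw : complex range-compressed data, shape (n_azimuth, n_range).
    The average phase increment between consecutive azimuth samples maps to a
    Doppler frequency within +/- PRF/2 (ambiguity number resolved separately)."""
    acc = np.vdot(raw[:-1, :].ravel(), raw[1:, :].ravel())   # sum conj(x_n) * x_{n+1}
    return prf_hz * np.angle(acc) / (2.0 * np.pi)

# Synthetic check: a pure Doppler ramp of 180 Hz sampled at PRF = 1500 Hz.
prf, fdc, n_az, n_rg = 1500.0, 180.0, 2048, 64
t = np.arange(n_az) / prf
raw = np.exp(2j * np.pi * fdc * t)[:, None] * np.ones((1, n_rg))
raw += 0.1 * (np.random.randn(n_az, n_rg) + 1j * np.random.randn(n_az, n_rg))
print(doppler_centroid(raw, prf))   # close to 180 Hz
```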
Procedia PDF Downloads 218
18999 Chemometric Estimation of Phytochemicals Affecting the Antioxidant Potential of Lettuce
Authors: Milica Karadzic, Lidija Jevric, Sanja Podunavac-Kuzmanovic, Strahinja Kovacevic, Aleksandra Tepic-Horecki, Zdravko Sumic
Abstract:
In this paper, the influence of six different phytochemical contents (phenols, carotenoids, chlorophyll a, chlorophyll b, chlorophyll a + b and vitamin C) on the antioxidant potential of the Murai and Levistro lettuce varieties was evaluated. Variable selection was performed by the generalized pair correlation method (GPCM) as a novel ranking method. This method is used for discrimination between two variables that correlate almost equally with a dependent variable. Fisher's conditional exact test and McNemar's test were carried out. The established multiple linear regression (MLR) models were statistically evaluated. Chlorophyll a, chlorophyll a + b and total carotenoid content stand out as the best phytochemicals for predicting the antioxidant potential. This was confirmed through both GPCM and MLR, and the predictive ability of the obtained MLR models can be used for antioxidant potential estimation for similar lettuce samples. This article is based upon work from the project of the Provincial Secretariat for Science and Technological Development of Vojvodina (No. 114-451-347/2015-02).
Keywords: antioxidant activity, generalized pair correlation method, lettuce, regression analysis
Procedia PDF Downloads 387
18998 Estimation of Fouling in a Cross-Flow Heat Exchanger Using Artificial Neural Network Approach
Authors: Rania Jradi, Christophe Marvillet, Mohamed Razak Jeday
Abstract:
One of the most frequently encountered problems in industrial heat exchangers is fouling, which degrades the thermal and hydraulic performance of this type of equipment and can lead to failure if undetected. It occurs due to the accumulation of undesired material on the heat transfer surface. It is therefore necessary to understand the heat exchanger fouling dynamics in order to plan mitigation strategies and ensure sustainable and safe operation. This paper proposes an Artificial Neural Network (ANN) approach to estimate the fouling resistance in a cross-flow heat exchanger from operating data collected on a phosphoric acid concentration loop. A set of 361 operating data points was used to validate the proposed model. The ANN attains AARD = 0.048%, MSE = 1.811×10⁻¹¹, RMSE = 4.256×10⁻⁶ and r² = 99.5%, an accuracy which confirms that it is a credible and valuable approach for industrialists and technologists who are faced with the drawbacks of fouling in heat exchangers.
Keywords: cross-flow heat exchanger, fouling, estimation, phosphoric acid concentration loop, artificial neural network approach
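A minimal sketch of an ANN regression of fouling resistance on loop operating variables, reporting AARD, RMSE and r² as in the abstract. The input features, target relationship and network size are synthetic placeholders, not the 361 industrial data points or the architecture used in the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Placeholder data: 5 hypothetical operating variables and a fouling resistance target.
rng = np.random.default_rng(0)
X = rng.random((361, 5))
y = 1e-4 + 2e-4 * X[:, 0] + 1e-4 * X[:, 1] ** 2 + 1e-5 * rng.standard_normal(361)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))
model.fit(X_tr, y_tr)
pred = model.predict(X_te)

aard = 100.0 * np.mean(np.abs((y_te - pred) / y_te))            # average absolute relative deviation, %
rmse = np.sqrt(np.mean((y_te - pred) ** 2))
r2 = 1.0 - np.sum((y_te - pred) ** 2) / np.sum((y_te - y_te.mean()) ** 2)
print(aard, rmse, r2)
```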
Procedia PDF Downloads 197
18997 Statistical Inferences for GQARCH-Itô-Jumps Model Based on the Realized Range Volatility
Authors: Fu Jinyu, Lin Jinguan
Abstract:
This paper introduces a novel approach that unifies two types of models: the continuous-time jump-diffusion used to model high-frequency data, and the discrete-time GQARCH employed to model low-frequency financial data, by embedding the discrete GQARCH structure with jumps in the instantaneous volatility process. This model is named the “GQARCH-Itô-Jumps model.” We adopt realized range-based threshold estimation for the high-frequency financial data rather than realized return-based volatility estimators, which entail the loss of intra-day information on the price movement. Meanwhile, a quasi-likelihood function for the low-frequency GQARCH structure with jumps is developed for the parametric estimates. The asymptotic theory is mainly established for the proposed estimators in the case of finite-activity jumps. Moreover, simulation studies are implemented to check the finite sample performance of the proposed methodology. Specifically, it is demonstrated how our proposed approaches can be practically used on financial data.
Keywords: Itô process, GQARCH, leverage effects, threshold, realized range-based volatility estimator, quasi-maximum likelihood estimate
Procedia PDF Downloads 155
18996 Adequacy of Advanced Earthquake Intensity Measures for Estimation of Damage under Seismic Excitation with Arbitrary Orientation
Authors: Konstantinos G. Kostinakis, Manthos K. Papadopoulos, Asimina M. Athanatopoulou
Abstract:
An important area of research in seismic risk analysis is the evaluation of the expected seismic damage of structures under a specific earthquake ground motion. Several conventional intensity measures of ground motion have been used to estimate their damage potential to structures. Yet, none of them has proved able to adequately predict the seismic damage of any structural system. Therefore, alternative advanced intensity measures which take into account not only ground motion characteristics but also structural information have been proposed. The adequacy of a number of advanced earthquake intensity measures in predicting the structural damage of 3D R/C buildings under seismic excitation attacking the building at an arbitrary incident angle is investigated in the present paper. To achieve this purpose, a plan-symmetric and a plan-asymmetric 5-story R/C building are studied. The two buildings are subjected to 20 bidirectional earthquake ground motions. The two horizontal accelerograms of each ground motion are applied along horizontal orthogonal axes forming 72 different angles with the structural axes. The response is computed by non-linear time history analysis. The structural damage is expressed in terms of the maximum interstory drift as well as the overall structural damage index. The values of the aforementioned seismic damage measures determined for an incident angle of 0°, as well as their maximum values over all seismic incident angles, are correlated with 9 structure-specific ground motion intensity measures. The research identified certain intensity measures which exhibited strong correlation with the seismic damage of the two buildings. However, their adequacy for estimation of the structural damage depends on the response parameter adopted. Furthermore, it was confirmed that the widely used spectral acceleration at the fundamental period of the structure is a good indicator of the expected earthquake damage level.
Keywords: damage indices, non-linear response, seismic excitation angle, structure-specific intensity measures
Procedia PDF Downloads 493
18995 A Partially Accelerated Life Test Planning with Competing Risks and Linear Degradation Path under Tampered Failure Rate Model
Authors: Fariba Azizi, Firoozeh Haghighi, Viliam Makis
Abstract:
In this paper, we propose a method to model the relationship between failure time and degradation for a simple step-stress test where the underlying degradation path is linear and different causes of failure are possible. It is assumed that the intensity function depends only on the degradation value. No assumptions are made about the distribution of the failure times. A simple step-stress test is used to shorten the failure time of products, and a tampered failure rate (TFR) model is proposed to describe the effect of the changing stress on the intensities. We assume that some of the products that fail during the test have a cause of failure that is only known to belong to a certain subset of all possible failures. This case is known as masking. In the presence of masking, the maximum likelihood estimates (MLEs) of the model parameters are obtained through an expectation-maximization (EM) algorithm by treating the causes of failure as missing values. The effect of incomplete information on the estimation of parameters is studied through a Monte Carlo simulation. Finally, a real example is analyzed to illustrate the application of the proposed methods.
Keywords: cause of failure, linear degradation path, reliability function, expectation-maximization algorithm, intensity, masked data
Procedia PDF Downloads 333
18994 BER of the Leaky Feeder under Rayleigh Fading Multichannel Reception with Imperfect Phase Estimation
Authors: Hasan Farahneh, Xavier Fernando
Abstract:
The leaky feeder (LF) has been a proven technology for many decades; it promises broadband wireless access at short range but has been overlooked until now. The LF is a natural MIMO transceiver, ideal for micro- and picocells. In this work, the LF is considered as a linear antenna array in a multiple-input single-output (MISO) configuration, and we derive the average bit error rate (BER) in a Rayleigh fading channel for QPSK modulation with imperfect phase estimation, assuming ideal, independent and identically distributed (i.i.d.) paths, i.e., no correlation or mutual coupling between the transmit antennas (slots) or at the receiver antenna. We consider maximal ratio transmission (MRT) at the transmit end and maximal ratio combining (MRC) at the receiving end. Analytical expressions are derived for the BER with radiating cable transmitters. The effects of slot spacing and carrier frequency on the BER are also studied. Numerical evaluations show that the radiating cable transmitter offers a much lower BER than a single-antenna transmitter at the same SNR.
Keywords: leaky feeder, BER, QPSK, Rayleigh fading, channel gain, phase mismatch
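The effect of imperfect phase estimation on diversity combining can be checked with a quick Monte Carlo experiment. The sketch below simulates Gray-coded QPSK over i.i.d. Rayleigh branches with maximal ratio combining whose weights carry Gaussian phase errors; it is a generic illustration, not the paper's analytical MISO/MRT derivation, and the SNR definition and phase-error model are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def qpsk_ber_mrc(n_branches=4, snr_db=10.0, phase_std_rad=0.2, n_sym=200_000):
    """Monte Carlo BER of Gray-coded QPSK with maximal ratio combining over
    i.i.d. Rayleigh branches when the combining phases carry Gaussian errors."""
    bits = rng.integers(0, 2, (n_sym, 2))
    s = ((1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])) / np.sqrt(2)   # unit-energy QPSK
    h = (rng.standard_normal((n_sym, n_branches)) +
         1j * rng.standard_normal((n_sym, n_branches))) / np.sqrt(2)
    noise_std = np.sqrt(0.5 / 10 ** (snr_db / 10))
    n = noise_std * (rng.standard_normal((n_sym, n_branches)) +
                     1j * rng.standard_normal((n_sym, n_branches)))
    r = h * s[:, None] + n
    # MRC weights built from an imperfect phase estimate of each branch gain.
    h_hat = np.abs(h) * np.exp(1j * (np.angle(h) + phase_std_rad *
                                     rng.standard_normal((n_sym, n_branches))))
    z = np.sum(np.conj(h_hat) * r, axis=1)
    bit_hat = np.column_stack([(z.real < 0).astype(int), (z.imag < 0).astype(int)])
    return np.mean(bit_hat != bits)

print(qpsk_ber_mrc(phase_std_rad=0.0), qpsk_ber_mrc(phase_std_rad=0.3))
```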
Procedia PDF Downloads 381
18993 Process for Analyzing Information Security Risks Associated with the Incorporation of Online Dispute Resolution Systems in the Context of Conciliation in Colombia
Authors: Jefferson Camacho Mejia, Jenny Paola Forero Pachon, Luis Carlos Gomez Florez
Abstract:
The innumerable possibilities offered by the use of Information Technology (IT) in the development of different socio-economic activities have brought about a change in the social paradigm and the emergence of the so-called information and knowledge society. The Colombian government, aware of this reality, has been promoting the use of IT as part of the e-government strategy adopted in the country. However, it is well known that the use of IT implies the existence of certain threats that put the security of information in the digital environment at risk. One of the priorities of the Colombian government is to improve access to alternative justice through IT, in particular access to Alternative Dispute Resolution (ADR): conciliation, arbitration and friendly composition, by means of which citizens are encouraged to resolve their differences directly. To this end, a trend has been identified in the use of Online Dispute Resolution (ODR) systems, which extend the benefits of ADR to the digital environment through the use of IT. This article presents a process for the analysis of information security risks associated with the incorporation of ODR systems in the context of conciliation in Colombia, based on four fundamental stages identified in the literature: (i) identification of assets, (ii) identification of threats and vulnerabilities, (iii) estimation of the impact, and (iv) estimation of risk levels. The methodological design adopted for this research was grounded theory, since it involves interactions that are applied to a specific context and from the perspective of diverse participants. As a result of this investigation, the activities to be followed in carrying out an analysis of information security risks, in the context of conciliation in Colombia supported by ODR systems, are defined, thus contributing to the estimation of the risks and making their subsequent treatment possible.
Keywords: alternative dispute resolution, conciliation, information security, online dispute resolution systems, process, risk analysis
Procedia PDF Downloads 239
18992 Impacts of Climate Change on Food Grain Yield and Its Variability across Seasons and Altitudes in Odisha
Authors: Dibakar Sahoo, Sridevi Gummadi
Abstract:
The focus of the study is to empirically analyse the climatic impacts on foodgrain yield and its variability across seasons and altitudes in Odisha, one of the most vulnerable states in India. The study uses the Just-Pope stochastic production function, estimated by two-step feasible generalized least squares (FGLS): a mean equation and a variance equation. The study uses panel data on foodgrain yield, rainfall and temperature for 13 districts during the period 1984-2013 and considers four seasons: winter (December-February), summer (March-May), rainy (June-September) and autumn (October-November). The districts under consideration have been categorized into three altitude regions: low (<70 masl), middle (153-305 masl) and high (>305 masl). The results show that an increase in the standard deviation of monthly rainfall during the rainy and autumn seasons has a significantly adverse impact on the mean yield of foodgrains in Odisha. Summer temperature has a beneficial effect, significantly increasing mean yield, as the summer season is associated with the harvesting stage of rabi crops. The changing pattern of temperature increases the yield variability of foodgrains during the summer season, whereas it decreases yield variability during the rainy season. Moreover, the positive expected signs of the trend variable in both the mean and variance equations suggest that foodgrain yield and its variability increase with time. On the other hand, changes in the mean levels of rainfall and temperature during different seasons have heterogeneous impacts, either harmful or beneficial, depending on the altitude. These findings imply that adaptation strategies should be tailor-made to minimize the adverse impacts of climate change and variability for sustainable development across seasons and altitudes in Odisha agriculture.
Keywords: altitude, adaptation strategies, climate change, foodgrain
Procedia PDF Downloads 242
18991 A Game of Information in Defense/Attack Strategies: Case of Poisson Attacks
Authors: Asma Ben Yaghlane, Mohamed Naceur Azaiez
Abstract:
In this paper, we briefly introduce the concept of Poisson attacks in the case of defense/attack strategies where attacks are assumed to be continuous. We suggest a game model in which the attacker combines two criteria, a sufficient confidence level for a successful attack and a reasonably small estimation error, in order to launch an attack. Here, the estimation error arises from assessing the system failure upon attack using aggregate data at the system level; the corresponding error is referred to as aggregation error. On the other hand, the defender attempts to deter the attack by making one or both criteria inapplicable. The defender builds his/her strategy both by strengthening the targeted system and by increasing the size of the error. We formulate the defender's problem based on appropriate optimization models. The attacker opts for Bayesian updating in assessing the impact of the improvement made by the defender, and then evaluates the feasibility of the attack before deciding whether or not to launch it. We provide illustrations to better explain the process.
Keywords: attacker, defender, game theory, information
Procedia PDF Downloads 468
18990 Model Based Fault Diagnostic Approach for Limit Switches
Authors: Zafar Mahmood, Surayya Naz, Nazir Shah Khattak
Abstract:
The degree of freedom relates to our capability to observe or model the energy paths within a system. The higher the number of energy paths being modeled, the higher the degree of freedom, but this increases the modeling time and complexity, rendering the approach impractical given today's need for minimum time to market. Since the number of residuals that can be uniquely isolated depends on the number of independent outputs of the system, the number of sensors required increases. Examples of discrete position sensors that may be used to form an array include limit switches, Hall effect sensors, optical sensors, magnetic sensors, etc. Their mechanical design can usually be tailored to fit in the transitional path of an STME in a variety of mechanical configurations. Case studies of a multi-sensor system were carried out, and actual data from the sensors are used to test this generic framework. It is investigated how the proper modeling of limit switches as timing sensors could lead to a unified and neutral residual space while keeping the implementation cost reasonably low.
Keywords: low-cost limit sensors, fault diagnostics, Single Throw Mechanical Equipment (STME), parameter estimation, parity-space
Procedia PDF Downloads 617
18989 In vivo Estimation of Mutation Rate of the Aleutian Mink Disease Virus
Authors: P.P. Rupasinghe, A.H. Farid
Abstract:
The Aleutian mink disease virus (AMDV, Carnivore amdoparvovirus 1) causes persistent infection, plasmacytosis, and the formation and deposition of immune complexes in various organs in adult mink, leading to glomerulonephritis, arteritis and sometimes death. The disease has neither a cure nor an effective vaccine, and identification and culling of mink positive for anti-AMDV antibodies have not been successful in controlling the infection in many countries. The failure to eradicate the virus from infected farms may be caused by keeping false-negative individuals on the farm, or by virus transmission from wild animals or neighboring farms. The identification of sources of infection, which can be performed by comparing viral sequences, is important to the success of viral eradication programs. High mutation rates could cause inaccuracies when viral sequences are used to trace an infection back to its origin. There is no published information on the mutation rate of AMDV, either in vivo or in vitro. In vivo estimation is the most accurate method, but it is difficult to perform because of the inherent technical complexities, namely infecting live animals, the unknown number of viral generations (i.e., infection cycles), the removal of deleterious mutations over time, and genetic drift. The objective of this study was to determine the mutation rate of AMDV, on which no information was available. A homogenate was prepared from the spleen of one naturally infected American mink (Neovison vison) from Nova Scotia, Canada (parental template). The near full-length genome of this isolate (91.6%, 4,143 bp) was bidirectionally sequenced. A group of black mink was inoculated with this homogenate (descendant mink). Spleen samples were collected from 10 descendant mink at 16 weeks post-inoculation (wpi) and from another 10 mink at 176 wpi, and their near full-length genomes were bidirectionally sequenced. The sequences of these mink were compared with each other and with the sequence of the parental template. The number of nucleotide substitutions at 176 wpi was 3.1 times greater than that at 16 wpi (113 vs 36), whereas the estimate of the mutation rate at 16 wpi was 3.1 times higher than that at 176 wpi (2.85×10⁻³ vs 9.13×10⁻⁴ substitutions/site/year), showing a decreasing trend in the mutation rate per unit of time. Although there is no report of an in vivo estimate of the mutation rate of DNA viruses in animals using the same method as in the current study, these estimates are at the higher end of the reported values for DNA viruses determined by various techniques. These high estimates are logical given the wide range of diversity and pathogenicity of AMDV isolates. The results suggest that increases in the number of nucleotide substitutions over time and the subsequent divergence make it difficult to accurately trace AMDV isolates back to their origin when several years have elapsed between the two samplings.
Keywords: Aleutian mink disease virus, American mink, mutation rate, nucleotide substitution
Procedia PDF Downloads 124
18988 Estimating Estimators: An Empirical Comparison of Non-Invasive Analysis Methods
Authors: Yan Torres, Fernanda Simoes, Francisco Petrucci-Fonseca, Freddie-Jeanne Richard
Abstract:
Non-invasive samples are an alternative to collecting genetic samples directly. Non-invasive samples are collected without manipulation of the animal (e.g., scats, feathers and hairs). Nevertheless, the use of non-invasive samples has some limitations. The main issue is degraded DNA, leading to poorer extraction efficiency and genotyping; these errors delayed the widespread use of non-invasive genetic information for some years. Genotyping errors can be limited by using analysis methods that can accommodate the errors and singularities of non-invasive samples. Genotype matching and population estimation algorithms can be highlighted as important analysis tools that have been adapted to deal with those errors. Despite this recent development of analysis methods, there is still a lack of empirical performance comparisons between them. A comparison of methods on datasets differing in size and structure can be useful for future studies, since non-invasive samples are a powerful tool for obtaining information, especially for endangered and rare populations. To compare the analysis methods, four different datasets obtained from the Dryad digital repository were used. Three different matching algorithms (Cervus, Colony and Error Tolerant Likelihood Matching - ETLM) are used for matching genotypes, and two different ones for population estimation (Capwire and BayesN). The three matching algorithms showed different patterns of results. ETLM produced a smaller number of unique individuals and recaptures. A similarity in the matched genotypes between Colony and Cervus was observed, which is not a surprise given the similarity between those methods in their pairwise likelihood and clustering algorithms. The matches produced by ETLM showed almost no similarity with the genotypes matched by the other methods. The different clustering algorithm and error model of ETLM seem to lead to a more rigorous selection, although the processing time and interface friendliness of ETLM were the worst among the compared methods. The population estimators performed differently depending on the dataset; there was consensus between the different estimators for only one dataset. BayesN produced both higher and lower estimates when compared with Capwire. BayesN does not consider the total number of recaptures, as Capwire does, but only the recapture events, which makes the estimator sensitive to data heterogeneity. Heterogeneity here means different capture rates between individuals. In these examples, tolerance of heterogeneity seems to be crucial for BayesN to work properly. Both methods are user-friendly and have reasonable processing times. An expanded analysis with simulated genotype data could clarify the sensitivity of the algorithms. The present comparison of the matching methods indicates that Colony seems to be the most appropriate for general use, considering the balance of time, interface and robustness. The heterogeneity of the recaptures strongly affected the BayesN estimates, leading to over- and underestimation of population numbers. Capwire is therefore advisable for general use, since it performs better in a wide range of situations.
Keywords: algorithms, genetics, matching, population
Procedia PDF Downloads 143
18987 Thermodynamics of Aqueous Solutions of Organic Molecule and Electrolyte: Use Cloud Point to Obtain Better Estimates of Thermodynamic Parameters
Authors: Jyoti Sahu, Vinay A. Juvekar
Abstract:
Electrolytes are often used to bring about salting-in and salting-out of organic molecules and polymers (e.g., polyethylene glycols, proteins) from aqueous solutions. For quantification of these phenomena, a thermodynamic model which can accurately predict the activity coefficient of the electrolyte as a function of temperature is needed. The thermodynamic models available in the literature contain a large number of empirical parameters. These parameters are estimated using the lower/upper critical solution temperature of the solution of the electrolyte and the organic molecule at different temperatures. Since the number of parameters is large, inaccuracies can creep in during their estimation, which can affect the reliability of prediction beyond the range in which these parameters are estimated. The cloud point of a solution is related to its free energy through its temperature and composition derivatives. Hence, cloud point measurements can be used for accurate estimation of the temperature and composition dependence of the parameters in the model for the free energy. Hence, if we use a two-pronged procedure in which we first use the cloud point of the solution to estimate some of the parameters of the thermodynamic model and determine the rest using osmotic coefficient data, we gain on two counts. First, since fewer parameters are estimated in each of the two steps, we achieve higher accuracy of estimation. The second and more important gain is that the resulting model parameters are more sensitive to temperature. This is crucial when we wish to use the model outside the temperature window within which the parameters were estimated. The focus of the present work is to prove this proposition. We have used electrolyte (NaCl/Na₂CO₃)-water-organic molecule (isopropanol/ethanol) as the model system. The Robinson-Stokes-Glueckauf model is modified by incorporating temperature-dependent Flory-Huggins interaction parameters. The Helmholtz free energy expression contains, in addition to the electrostatic and translational entropic contributions, three Flory-Huggins pairwise interaction contributions, χwp, χws and χps (w: water, p: polymer, s: salt). These parameters depend both on temperature and on concentration. The concentration dependence is expressed in the form of a quadratic expression involving the volume fractions of the interacting species, and the temperature dependence is expressed through an assumed functional form of temperature. To obtain the temperature-dependent interaction parameters for the organic molecule-water and electrolyte-water systems, the critical solution temperature of the electrolyte-water-organic molecule mixture is measured using a cloud point measuring apparatus. The temperature- and composition-dependent interaction parameters for the electrolyte-water-organic molecule system are estimated through measurement of the cloud point of the solution. The model is used to estimate the critical solution temperature (CST) of electrolyte-water-organic molecule solutions. We have experimentally determined the critical solution temperature of different compositions of the electrolyte-water-organic molecule solution and compared the results with the estimates based on our model. The two sets of values show good agreement. On the other hand, when only osmotic coefficients are used for estimation of the free energy model, the CST predicted using the resulting model shows poor agreement with the experiments. Thus, the importance of the CST data in the estimation of the parameters of the thermodynamic model is confirmed through this work.
Keywords: concentrated electrolytes, Debye-Hückel theory, interaction parameters, Robinson-Stokes-Glueckauf model, Flory-Huggins model, critical solution temperature
Procedia PDF Downloads 391
18986 Evaluation of Reliability Indices Using Monte Carlo Simulation Accounting Time to Switch
Authors: Sajjad Asefi, Hossein Afrakhte
Abstract:
This paper presents the evaluation of the reliability indices of an electrical distribution system using the Monte Carlo simulation technique, accounting for the Time To Switch (TTS) of each section. In this paper, random repair time omission has been accounted for in the distribution system model. For simplicity, the reliability analysis is assumed to be based on the exponential law. Each segment has a specified failure rate (λ) and repair time (r), which give the mean up time and mean down time of each section in the distribution system. After calculating the modified mean up time (MUT) in years, the mean down time (MDT) in hours and the unavailability (U) in h/year, the TTS has been added to the time during which the system is not available, i.e., the MDT. In this paper, the TTS is assumed to be a random variable with a log-normal distribution.
Keywords: distribution system, Monte Carlo simulation, reliability, repair time, time to switch (TTS)
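A sequential Monte Carlo sketch of the idea described above: each interruption's duration is the (exponential) repair time plus a log-normally distributed time to switch, and the failure rate, unavailability and mean outage duration are accumulated over many simulated years. The numerical parameters are placeholders, not the paper's test system, and down time is assumed small relative to up time so the failure clock is not paused during outages.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_load_point(lam_per_year, repair_mean_h, tts_mu, tts_sigma, n_years=200_000):
    """Sequential Monte Carlo for one feeder section seen from a load point.
       Outage duration = exponential repair time + log-normal time to switch."""
    t, down_h, n_fail = 0.0, 0.0, 0
    while True:
        t += rng.exponential(1.0 / lam_per_year)      # time to next failure [years]
        if t >= n_years:
            break
        down_h += rng.exponential(repair_mean_h) + rng.lognormal(tts_mu, tts_sigma)
        n_fail += 1
    failure_rate = n_fail / n_years                   # [1/yr]
    U = down_h / n_years                              # unavailability [h/yr]
    return failure_rate, U, U / failure_rate          # rate, U, mean outage duration [h]

# Illustrative parameters only: 0.4 failures/yr, 5 h mean repair, ~0.5 h median TTS.
print(simulate_load_point(lam_per_year=0.4, repair_mean_h=5.0,
                          tts_mu=np.log(0.5), tts_sigma=0.4))
```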
Procedia PDF Downloads 425