Search results for: travel time estimation
19152 Uncertainty Estimation in Neural Networks through Transfer Learning
Authors: Ashish James, Anusha James
Abstract:
The impressive predictive performance of deep learning techniques on a wide range of tasks has led to their widespread use. Estimating the confidence of these predictions is paramount for improving the safety and reliability of such systems. However, the uncertainty estimates provided by neural networks (NNs) tend to be overconfident and unreasonable. Ensembles of NNs typically produce good predictions, but their uncertainty estimates tend to be inconsistent. Inspired by these observations, this paper presents a framework that can quantitatively estimate uncertainties by leveraging advances in transfer learning, through a slight modification of existing training pipelines. This promising algorithm is developed with the intention of deployment in real-world problems that already boast good predictive performance, by reusing the pretrained models. The idea is to capture the behavior of the NN trained for the base task by augmenting it with uncertainty estimates from a supplementary network. A series of experiments with known and unknown distributions shows that the proposed approach produces well-calibrated uncertainty estimates with high-quality predictions.
Keywords: uncertainty estimation, neural networks, transfer learning, regression
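As a rough illustration of the idea described above (a frozen, pretrained base model augmented with a supplementary network that learns its error), the following Python sketch is one plausible realization; the architecture, data and hyperparameters are invented for demonstration and are not taken from the paper.

```python
# Hedged sketch: the base model stands in for a "pretrained" network; the
# supplementary network learns the log squared residual as a variance proxy.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1 + 0.1 * np.abs(X[:, 0]))  # heteroscedastic noise

base = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X, y)  # base task
residual_sq = (y - base.predict(X)) ** 2

# Supplementary network predicts log-variance from the same inputs; base stays frozen.
aux = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000).fit(X, np.log(residual_sq + 1e-8))

x_test = np.array([[0.5], [2.5]])
mean = base.predict(x_test)
std = np.sqrt(np.exp(aux.predict(x_test)))  # larger where noise grows with |x|
print(mean, std)
```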
Procedia PDF Downloads 135
19151 Fast Prediction Unit Partition Decision and Accelerating the Algorithm Using CUDA for Intra and Inter Prediction of HEVC
Authors: Qiang Zhang, Chun Yuan
Abstract:
Since the PU (Prediction Unit) decision process is the most time-consuming part of the emerging HEVC (High Efficiency Video Coding) standard in intra- and inter-frame coding, this paper proposes a fast PU decision algorithm and speeds it up using CUDA (Compute Unified Device Architecture). In intra-frame coding, the fast PU decision algorithm uses texture features to skip intra-frame prediction or to terminate it for smaller PU sizes. In inter-frame coding of HEVC, the fast PU decision algorithm makes use of the similarity of the motion vectors of a CU's own two Nx2N-size PUs and the hierarchical structure of the CU (Coding Unit) partition to skip some PU partition modes, so as to reduce the number of motion estimations. The CUDA acceleration builds on the fast PU decision algorithm and uses the GPU so that the motion search and the gradient computation can be performed in parallel. The proposed algorithm achieves up to 57% time saving compared to HM 10.0 with little rate-distortion loss (0.043 dB drop and 1.82% bitrate increase on average).
Keywords: HEVC, PU decision, inter prediction, intra prediction, CUDA, parallel
Procedia PDF Downloads 399
19150 Poster: Incident Signals Estimation Based on a Modified MCA Learning Algorithm
Authors: Rashid Ahmed, John N. Avaritsiotis
Abstract:
Many signal subspace-based approaches have already been proposed for determining the fixed Direction of Arrival (DOA) of plane waves impinging on an array of sensors. Two procedures for DOA estimation based on neural networks are presented. First, Principal Component Analysis (PCA) is employed to extract the maximum eigenvalue and its eigenvector from the signal subspace to estimate the DOA. Second, Minor Component Analysis (MCA) is a statistical method for extracting the eigenvector associated with the smallest eigenvalue of the covariance matrix. In this paper, we modify an MCA(R) learning algorithm to enhance convergence, which is essential for moving the MCA algorithm toward practical applications. A learning rate parameter that ensures fast convergence is also presented, since it directly affects the convergence of the weight vector, and the error level is affected by its value. MCA is performed to determine the estimated DOA. Preliminary results are furnished to illustrate the convergence achieved.
Keywords: Direction of Arrival, neural networks, Principal Component Analysis, Minor Component Analysis
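To make the minor-component idea concrete, here is a hedged numpy sketch of a generic anti-Hebbian MCA update (not the authors' specific MCA(R) variant): it drives a weight vector toward the eigenvector of the covariance matrix with the smallest eigenvalue.

```python
# Minor-component learning rule: projected gradient descent on the Rayleigh
# quotient, with renormalization each step for numerical stability.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
C = A @ A.T                          # sample covariance surrogate
w = rng.normal(size=4); w /= np.linalg.norm(w)

eta = 0.01                           # learning rate: too large diverges, too small is slow
for _ in range(5000):
    w -= eta * (C @ w - (w @ C @ w) * w)   # anti-Oja (minor component) step
    w /= np.linalg.norm(w)

evals, evecs = np.linalg.eigh(C)
print(abs(w @ evecs[:, 0]))          # ~1.0: aligned with the smallest-eigenvalue eigenvector
```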
Procedia PDF Downloads 451
19149 Sensor Fault-Tolerant Model Predictive Control for Linear Parameter Varying Systems
Authors: Yushuai Wang, Feng Xu, Junbo Tan, Xueqian Wang, Bin Liang
Abstract:
In this paper, a sensor fault-tolerant control (FTC) scheme using robust model predictive control (RMPC) and set-theoretic fault detection and isolation (FDI) is extended to linear parameter varying (LPV) systems. First, a group of set-valued observers is designed for passive fault detection (FD), and the observer gains are obtained by minimizing the size of the invariant set of the state estimation-error dynamics. Second, an input set for fault isolation (FI) is designed offline through set theory for actively isolating faults after FD. Third, an RMPC controller based on state estimation for LPV systems is designed to control the system in the presence of disturbance and measurement noise and to tolerate faults. Besides, an FTC algorithm is proposed to keep the plant operating in the corresponding mode when a fault occurs. Finally, a numerical example is used to show the effectiveness of the proposed results.
Keywords: fault detection, linear parameter varying, model predictive control, set theory
Procedia PDF Downloads 252
19148 Airborne SAR Data Analysis for Impact of Doppler Centroid on Image Quality and Registration Accuracy
Authors: Chhabi Nigam, S. Ramakrishnan
Abstract:
This paper brings out the analysis of airborne Synthetic Aperture Radar (SAR) data to study the impact of the Doppler centroid on image quality and geocoding accuracy from the perspective of Stripmap-mode data acquisition. Although in Stripmap mode the radar beam points at 90 degrees broadside (side-looking), a shift in the Doppler centroid is inevitable due to platform motion. Inaccurate estimation of the Doppler centroid leads to poor image quality and image mis-registration. The effect of the Doppler centroid is analyzed in this paper using multiple data sets collected from an airborne platform. Occurrences of ghost (ambiguous) targets and their power levels, which affect the appropriate choice of PRF, have been analyzed. The effect of aircraft attitudes (roll, pitch and yaw) on the Doppler centroid is also analyzed with the collected data sets. The various stages of the Range Doppler Algorithm (RDA) used for image formation in Stripmap mode (range compression, Doppler centroid estimation, range cell migration correction and azimuth compression) are analyzed to find the performance limits and the dependence of the imaging geometry on the final image. The ability of Doppler centroid estimation to enhance the imaging accuracy for registration is also illustrated in this paper. The paper also tries to bring out the processing of low-squint SAR data, the challenges, and the performance limits imposed by the imaging geometry and the platform dynamics on the final image quality metrics. Finally, the effect on various terrain types, including land, water and bright scatterers, is also presented.
Keywords: ambiguous target, Doppler centroid, image registration, airborne SAR
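For intuition, one widely used Doppler-centroid estimator is the pulse-pair (average cross-correlation) method sketched below; it is a standard building block of RDA chains, though not necessarily the exact estimator used in the paper.

```python
# Hedged sketch: the azimuth lag-1 correlation angle gives the fractional-PRF
# Doppler centroid (ambiguous modulo PRF).
import numpy as np

def doppler_centroid(raw, prf):
    """raw: complex SAR data, shape (azimuth_pulses, range_bins)."""
    acc = np.sum(raw[1:, :] * np.conj(raw[:-1, :]))  # azimuth lag-1 correlation
    return prf * np.angle(acc) / (2 * np.pi)

# synthetic check: a pure azimuth Doppler ramp is recovered
prf, fdc_true = 1500.0, 230.0
n = np.arange(1024)[:, None]
raw = np.exp(2j * np.pi * fdc_true * n / prf) * np.ones((1, 64))
print(doppler_centroid(raw, prf))   # ~230 Hz
```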
Procedia PDF Downloads 218
19147 A Robust Optimization Model for the Single-Depot Capacitated Location-Routing Problem
Authors: Abdolsalam Ghaderi
Abstract:
In this paper, the single-depot capacitated location-routing problem under uncertainty is presented. The problem aims to find the optimal location of a single depot and the routing of vehicles to serve the customers when the parameters may change under different circumstances. This problem has many applications, especially in the areas of supply chain management and distribution systems. To get closer to real-world situations, the travel time of vehicles, the fixed cost of vehicle usage and customer demand are considered as sources of uncertainty. A combined approach including robust optimization and stochastic programming is presented to deal with the uncertainty in the problem at hand. For this purpose, a mixed integer programming model is developed, and a heuristic algorithm based on Variable Neighborhood Search (VNS) is presented to solve the model. Finally, the computational results are presented and future research directions are discussed.
Keywords: location-routing problem, robust optimization, stochastic programming, variable neighborhood search
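The VNS metaheuristic mentioned above follows a standard shake/local-search loop; the sketch below shows the generic scheme on a toy routing cost (2-opt moves on a single tour). The neighborhoods and cost function here are illustrative placeholders, not the paper's model.

```python
import random, math

def tour_len(route, pts):
    return sum(math.dist(pts[route[i]], pts[route[(i + 1) % len(route)]])
               for i in range(len(route)))

def shake(route, k):
    r = route[:]
    for _ in range(k):                       # perturbation strength grows with k
        i, j = sorted(random.sample(range(len(r)), 2))
        r[i:j] = reversed(r[i:j])
    return r

def two_opt(route, pts):                     # simple descent to a local optimum
    improved = True
    while improved:
        improved = False
        for i in range(1, len(route) - 1):
            for j in range(i + 1, len(route)):
                cand = route[:i] + route[i:j][::-1] + route[j:]
                if tour_len(cand, pts) < tour_len(route, pts):
                    route, improved = cand, True
    return route

def vns(pts, kmax=4, iters=50):
    best = list(range(len(pts)))
    for _ in range(iters):
        k = 1
        while k <= kmax:
            cand = two_opt(shake(best, k), pts)
            if tour_len(cand, pts) < tour_len(best, pts):
                best, k = cand, 1            # improvement: restart from smallest shake
            else:
                k += 1                       # otherwise widen the neighborhood
    return best

pts = [(random.random(), random.random()) for _ in range(20)]
print(tour_len(vns(pts), pts))
```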
Procedia PDF Downloads 270
19146 Genetic Algorithms Based ACPS Safety
Authors: Emine Laarouchi, Daniela Cancila, Laurent Soulier, Hakima Chaouchi
Abstract:
Cyber-Physical Systems such as drones have proved their efficiency in supporting emergency applications. For these particular applications, travel time and autonomous navigation algorithms are of paramount importance, especially when missions are performed in urban environments with high obstacle density. In this context, however, safety properties are not properly addressed. Our ambition is to optimize the system safety level under autonomous navigation while preserving the performance of the CPS. To this aim, we introduce genetic algorithms into the autonomous navigation process of the drone to better infer its trajectory considering possible obstacles. We first model the desired safety requirements through a cost function and then seek to optimize it through genetic algorithms (GA). The main advantage of using GA is the ability to consider different parameters together, for example, the battery level, for navigation system selection. Our tests show that introducing GA into autonomous navigation systems minimizes the risk of safety loss. Finally, although our simulation has been tested on autonomous drones, our approach and results could be extended to other autonomous navigation systems such as autonomous cars, robots, etc.
Keywords: safety, unmanned aerial vehicles, CPS, ACPS, drones, path planning, genetic algorithms
Procedia PDF Downloads 181
19145 The Traveling Behavior and Needs for Tourist Support Facilities of Inbound Tourists Visiting Ratanakosin Island
Authors: Sakul Jariyachamsit
Abstract:
The objectives of this research were to study the behaviour of inbound tourists who visited Ratanakosin Island and to study their needs concerning support facilities. The independent variables included gender, age, level of education, occupation, and income, while the dependent variables were classified into two groups: tourists' behaviour variables and tourists' needs for supporting facilities. A simple random sampling method was utilized to obtain 225 respondents. The respondents were male and female in equal proportion, and most were between 21 and 30 years old. Most were married and held a graduate degree. The average income of the respondents was between $20,000 and $30,000. The findings revealed that the majority of respondents came to Thailand for the first time, spent about 8 days in Thailand, and preferred to travel in small groups. Their decision to come to Thailand was influenced by word of mouth. When they first thought of Thailand, they thought of Thai food. The needs of tourists around Ratanakosin Island, ranked by importance, are as follows: a tourist centre, somebody who can speak English, a trustworthy agency, police patrols, and the availability of maps and brochures.
Keywords: Rattanakosin Island, tourist, travelling behaviour, media engineering
Procedia PDF Downloads 357
19144 Chemometric Estimation of Phytochemicals Affecting the Antioxidant Potential of Lettuce
Authors: Milica Karadzic, Lidija Jevric, Sanja Podunavac-Kuzmanovic, Strahinja Kovacevic, Aleksandra Tepic-Horecki, Zdravko Sumic
Abstract:
In this paper, the influence of the contents of six phytochemicals (phenols, carotenoids, chlorophyll a, chlorophyll b, chlorophyll a + b and vitamin C) on the antioxidant potential of the Murai and Levistro lettuce varieties was evaluated. Variable selection was made by the generalized pair correlation method (GPCM) as a novel ranking method. This method is used for discrimination between two variables that correlate almost equally with a dependent variable. Fisher's conditional exact test and McNemar's test were carried out. The established multiple linear regression (MLR) models were statistically evaluated. Chlorophyll a, chlorophyll a + b and total carotenoid content stand out as the best phytochemicals for antioxidant potential prediction. This was confirmed through both GPCM and MLR, and the predictive ability of the obtained MLR models can be used for antioxidant potential estimation of similar lettuce samples. This article is based upon work from the project of the Provincial Secretariat for Science and Technological Development of Vojvodina (No. 114-451-347/2015-02).
Keywords: antioxidant activity, generalized pair correlation method, lettuce, regression analysis
Procedia PDF Downloads 387
19143 Estimation of Fouling in a Cross-Flow Heat Exchanger Using Artificial Neural Network Approach
Authors: Rania Jradi, Christophe Marvillet, Mohamed Razak Jeday
Abstract:
One of the most frequently encountered problems in industrial heat exchangers is fouling, which degrades the thermal and hydraulic performance of this type of equipment, leading to failure if undetected; it occurs due to the accumulation of undesired material on the heat transfer surface. It is therefore necessary to understand heat exchanger fouling dynamics in order to plan mitigation strategies and ensure sustainable and safe operation. This paper proposes an Artificial Neural Network (ANN) approach to estimate the fouling resistance in a cross-flow heat exchanger from operating data collected on a phosphoric acid concentration loop. A set of 361 operating data points was used to validate the proposed model. The ANN attains AARD = 0.048%, MSE = 1.811×10⁻¹¹, RMSE = 4.256×10⁻⁶ and r² = 99.5%, which confirms that it is a credible and valuable approach for industrialists and technologists who face the drawbacks of fouling in heat exchangers.
Keywords: cross-flow heat exchanger, fouling, estimation, phosphoric acid concentration loop, artificial neural network approach
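For reference, the error metrics quoted above can be computed as in this short sketch; the array names and toy data are illustrative, not the paper's measurements.

```python
import numpy as np

def fit_metrics(y_true, y_pred):
    aard = 100 * np.mean(np.abs((y_pred - y_true) / y_true))  # average absolute relative deviation, %
    mse = np.mean((y_pred - y_true) ** 2)
    rmse = np.sqrt(mse)
    r2 = 1 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    return aard, mse, rmse, r2

# toy usage with synthetic fouling-resistance values (m²·K/W)
y = np.linspace(1e-4, 9e-4, 50)
print(fit_metrics(y, y + np.random.normal(0, 5e-6, 50)))
```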
Procedia PDF Downloads 198
19142 Statistical Inferences for the GQARCH-Itô-Jumps Model Based on the Realized Range Volatility
Authors: Fu Jinyu, Lin Jinguan
Abstract:
This paper introduces a novel approach that unifies two types of models: the continuous-time jump-diffusion used to model high-frequency data, and the discrete-time GQARCH employed to model low-frequency financial data, by embedding the discrete GQARCH structure with jumps in the instantaneous volatility process. This model is named the "GQARCH-Itô-Jumps model." We adopt realized range-based threshold estimation for the high-frequency financial data rather than realized return-based volatility estimators, which entail the loss of intra-day information on the price movement. Meanwhile, a quasi-likelihood function for the low-frequency GQARCH structure with jumps is developed for the parametric estimates. The asymptotic theory is mainly established for the proposed estimators in the case of finite-activity jumps. Moreover, simulation studies are implemented to check the finite-sample performance of the proposed methodology. Specifically, it is demonstrated how our proposed approaches can be practically used on financial data.
Keywords: Itô process, GQARCH, leverage effects, threshold, realized range-based volatility estimator, quasi-maximum likelihood estimate
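The realized range idea can be illustrated with the classic Parkinson scaling: summing squared log high/low ranges over intraday blocks and dividing by 4 ln 2 estimates the daily integrated variance. The sketch below shows this baseline estimator only; the paper's jump-thresholding refinement is omitted.

```python
import numpy as np

def realized_range(high, low):
    """Sum of squared log ranges over intraday blocks, Parkinson-scaled."""
    return np.sum(np.log(high / low) ** 2) / (4 * np.log(2.0))

# synthetic day: 390 one-minute log-price steps, daily volatility ~1%
rng = np.random.default_rng(7)
logp = np.cumsum(rng.normal(0, 0.01 / np.sqrt(390), 390))
px = 100 * np.exp(logp).reshape(78, 5)            # 78 five-minute blocks
print(np.sqrt(realized_range(px.max(axis=1), px.min(axis=1))))  # ~0.01, biased low by discreteness
```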
Procedia PDF Downloads 156
19141 Adequacy of Advanced Earthquake Intensity Measures for Estimation of Damage under Seismic Excitation with Arbitrary Orientation
Authors: Konstantinos G. Kostinakis, Manthos K. Papadopoulos, Asimina M. Athanatopoulou
Abstract:
An important area of research in seismic risk analysis is the evaluation of the expected seismic damage of structures under a specific earthquake ground motion. Several conventional intensity measures of ground motion have been used to estimate their damage potential to structures. Yet, none of them has been proved able to predict adequately the seismic damage of any structural system. Therefore, alternative advanced intensity measures, which take into account not only ground motion characteristics but also structural information, have been proposed. The adequacy of a number of advanced earthquake intensity measures for predicting the structural damage of 3D R/C buildings under seismic excitation attacking the building at an arbitrary incident angle is investigated in the present paper. To achieve this purpose, a plan-symmetric and a plan-asymmetric 5-story R/C building are studied. The two buildings are subjected to 20 bidirectional earthquake ground motions. The two horizontal accelerograms of each ground motion are applied along horizontal orthogonal axes forming 72 different angles with the structural axes. The response is computed by non-linear time history analysis. The structural damage is expressed in terms of the maximum interstory drift as well as the overall structural damage index. The values of the aforementioned seismic damage measures determined for incident angle 0°, as well as their maximum values over all seismic incident angles, are correlated with 9 structure-specific ground motion intensity measures. The research identified certain intensity measures which exhibited strong correlation with the seismic damage of the two buildings. However, their adequacy for estimation of the structural damage depends on the response parameter adopted. Furthermore, it was confirmed that the widely used spectral acceleration at the fundamental period of the structure is a good indicator of the expected earthquake damage level.
Keywords: damage indices, non-linear response, seismic excitation angle, structure-specific intensity measures
Procedia PDF Downloads 493
19140 A Partially Accelerated Life Test Planning with Competing Risks and Linear Degradation Path under Tampered Failure Rate Model
Authors: Fariba Azizi, Firoozeh Haghighi, Viliam Makis
Abstract:
In this paper, we propose a method to model the relationship between failure time and degradation for a simple step-stress test where the underlying degradation path is linear and different causes of failure are possible. It is assumed that the intensity function depends only on the degradation value. No assumptions are made about the distribution of the failure times. A simple step-stress test is used to shorten the failure time of products, and a tampered failure rate (TFR) model is proposed to describe the effect of the changing stress on the intensities. We assume that some of the products that fail during the test have a cause of failure that is only known to belong to a certain subset of all possible failures. This case is known as masking. In the presence of masking, the maximum likelihood estimates (MLEs) of the model parameters are obtained through an expectation-maximization (EM) algorithm by treating the causes of failure as missing values. The effect of incomplete information on the estimation of parameters is studied through a Monte Carlo simulation. Finally, a real example is analyzed to illustrate the application of the proposed methods.
Keywords: cause of failure, linear degradation path, reliability function, expectation-maximization algorithm, intensity, masked data
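The EM treatment of masked causes can be illustrated on a much simpler model than the paper's TFR setup: two constant-intensity exponential competing risks, where masked failures contribute expected cause counts in the E-step. This toy sketch is our simplification, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
lam_true = np.array([0.5, 1.5])
t1 = rng.exponential(1 / lam_true[0], 500)
t2 = rng.exponential(1 / lam_true[1], 500)
t = np.minimum(t1, t2)                   # observed failure time
cause = (t2 < t1).astype(int)            # true cause: 0 or 1
masked = rng.random(500) < 0.4           # 40% of causes unobserved

lam = np.array([1.0, 1.0])
for _ in range(200):
    w = np.zeros((500, 2))
    w[~masked, cause[~masked]] = 1.0     # known causes: hard assignment
    w[masked] = lam / lam.sum()          # E-step: P(cause j) = lam_j / sum(lam)
    lam = w.sum(axis=0) / t.sum()        # M-step: expected counts / total exposure
print(lam)                               # ≈ lam_true
```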
Procedia PDF Downloads 334
19139 BER of the Leaky Feeder under Rayleigh Fading Multichannel Reception with Imperfect Phase Estimation
Authors: Hasan Farahneh, Xavier Fernando
Abstract:
The Leaky Feeder (LF) has been a proven technology for many decades, and it promises broadband wireless access over short ranges, but it has been overlooked until now. The LF is a natural MIMO transceiver ideal for micro and pico cells. In this work, the LF is considered as a linear antenna array in a Multi-Input Single-Output (MISO) configuration, and the average bit error rate (BER) in a Rayleigh fading channel is derived assuming ideal, independent and identically distributed (iid) paths, i.e., no correlation or mutual coupling between the transmit antennas (slots) or the receiver antenna, considering QPSK modulation with imperfect phase estimation. We consider maximal ratio transmission (MRT) at the transmitting end and maximal ratio combining (MRC) at the receiving end. Analytical expressions are derived for the BER with radiating cable transmitters. The effects of slot spacing and carrier frequency on the BER are also studied. Numerical evaluations show that the radiating cable transmitter offers a much lower BER than a single-antenna transmitter at the same SNR.
Keywords: leaky feeder, BER, QPSK, Rayleigh fading, channel gain, phase mismatch
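A Monte Carlo counterpart of this setup is sketched below: QPSK over an iid Rayleigh MISO channel with MRT, where Gaussian phase-estimation error corrupts the transmit weights and degrades the effective combining gain. The parameters are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)
Nt, nbits, snr_db, sigma_phi = 4, 200_000, 10.0, 0.2   # sigma_phi: phase error std (rad)
snr = 10 ** (snr_db / 10)

bits = rng.integers(0, 2, (nbits // 2, 2))
s = ((1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])) / np.sqrt(2)   # Gray-mapped QPSK

h = (rng.normal(size=(len(s), Nt)) + 1j * rng.normal(size=(len(s), Nt))) / np.sqrt(2)
phi = rng.normal(0, sigma_phi, size=h.shape)
w = np.conj(h) * np.exp(1j * phi)                 # MRT weights with phase mismatch
w /= np.linalg.norm(w, axis=1, keepdims=True)     # unit transmit power

g = np.sum(h * w, axis=1)                         # effective gain, reduced by mismatch
n = (rng.normal(size=len(s)) + 1j * rng.normal(size=len(s))) / np.sqrt(2 * snr)
z = (g * s + n) * np.exp(-1j * np.angle(g))       # coherent detection
err_i = (np.real(z) < 0) != (bits[:, 0] == 1)
err_q = (np.imag(z) < 0) != (bits[:, 1] == 1)
print((err_i.sum() + err_q.sum()) / nbits)        # BER; try sigma_phi=0 as a baseline
```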
Procedia PDF Downloads 381
19138 Process for Analyzing Information Security Risks Associated with the Incorporation of Online Dispute Resolution Systems in the Context of Conciliation in Colombia
Authors: Jefferson Camacho Mejia, Jenny Paola Forero Pachon, Luis Carlos Gomez Florez
Abstract:
The innumerable possibilities offered by the use of Information Technology (IT) in the development of different socio-economic activities have changed the social paradigm and led to the emergence of the so-called information and knowledge society. The Colombian government, aware of this reality, has been promoting the use of IT as part of the E-government strategy adopted in the country. However, it is well known that the use of IT implies the existence of certain threats that put the security of information in the digital environment at risk. One of the priorities of the Colombian government is to improve access to alternative justice through IT, in particular, access to Alternative Dispute Resolution (ADR): conciliation, arbitration and friendly composition, by means of which citizens directly resolve their differences. To this end, a trend has been identified in the use of Online Dispute Resolution (ODR) systems, which extend the benefits of ADR to the digital environment through the use of IT. This article presents a process for the analysis of information security risks associated with the incorporation of ODR systems in the context of conciliation in Colombia, based on four fundamental stages identified in the literature: (I) identification of assets, (II) identification of threats and vulnerabilities, (III) estimation of the impact, and (IV) estimation of risk levels. The methodological design adopted for this research was grounded theory, since it involves interactions that are applied to a specific context and from the perspective of diverse participants. As a result of this investigation, the activities to be followed in carrying out an analysis of information security risks, in the context of conciliation in Colombia supported by ODR systems, are defined, thus contributing to the estimation of the risks and making their subsequent treatment possible.
Keywords: alternative dispute resolution, conciliation, information security, online dispute resolution systems, process, risk analysis
Procedia PDF Downloads 239
19137 Statistical and Land Planning Study of Tourist Arrivals in Greece during 2005-2016
Authors: Dimitra Alexiou
Abstract:
During the last 10 years, in spite of the economic crisis, the number of tourists arriving in Greece has increased, particularly during the tourist season from April to October. In this paper, the number of annual tourist arrivals is studied to explore tourists' preferences with regard to the month of travel and the selected destinations, as well as the amount of money spent. The collected data are processed with statistical methods, yielding numerical and graphical results. From the computation of statistical parameters and forecasting with exponential smoothing, useful conclusions are drawn that can be used by the Greek tourism authorities, as well as by tourist organizations, for planning for the coming years. The results of this paper and the computed forecast can also be used for decision making by private tourist enterprises that are investing in Greece. With regard to the statistical methods, Simple Exponential Smoothing of time series data is employed. The search for the best forecast for 2017 and 2018 provides the value of the smoothing coefficient. All statistical computations and graphics were done in Microsoft Excel.
Keywords: tourism, statistical methods, exponential smoothing, land spatial planning, economy
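Simple exponential smoothing with a grid search for the smoothing coefficient, as described above, looks like the following sketch (the arrivals series here is invented for illustration, not the paper's data):

```python
import numpy as np

def ses_fitted(y, alpha):
    f = [y[0]]                             # initialize with the first observation
    for t in range(1, len(y)):
        f.append(alpha * y[t - 1] + (1 - alpha) * f[-1])
    return np.array(f)                     # one-step-ahead fitted values

# illustrative annual arrivals (millions)
y = np.array([14.0, 15.0, 15.9, 17.9, 20.1, 22.0, 23.6, 24.8, 26.2, 27.2, 28.1])
alphas = np.linspace(0.05, 0.95, 19)
sse = [np.sum((y[1:] - ses_fitted(y, a)[1:]) ** 2) for a in alphas]
a_best = alphas[int(np.argmin(sse))]
forecast = a_best * y[-1] + (1 - a_best) * ses_fitted(y, a_best)[-1]
print(a_best, forecast)                    # chosen coefficient and next-year forecast
```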
Procedia PDF Downloads 265
19136 Impacts of Climate Change on Food Grain Yield and Its Variability across Seasons and Altitudes in Odisha
Authors: Dibakar Sahoo, Sridevi Gummadi
Abstract:
The focus of the study is to empirically analyse the climatic impacts on foodgrain yield and its variability across seasons and altitudes in Odisha, one of the most vulnerable states in India. The study uses the Just-Pope stochastic production function estimated by two-step feasible generalized least squares (FGLS): mean equation estimation and variance equation estimation. The study uses panel data on foodgrain yield, rainfall and temperature for 13 districts during the period 1984-2013. The study considers four seasons: winter (December-February), summer (March-May), rainy (June-September) and autumn (October-November). The districts under consideration have been categorized into three altitude regions: low (<70 masl), middle (153-305 masl) and high (>305 masl). The results show that an increase in the standard deviation of monthly rainfall during the rainy and autumn seasons has a significantly adverse impact on the mean yield of foodgrains in Odisha. Summer temperature has beneficial effects, significantly increasing mean yield, as the summer season is associated with the harvesting stage of Rabi crops. The changing pattern of temperature increases the yield variability of foodgrains during the summer season, whereas it decreases yield variability during the rainy season. Moreover, the positive expected signs of the trend variable in both the mean and variance equations suggest that foodgrain yield and its variability increase with time. On the other hand, changes in the mean levels of rainfall and temperature during different seasons have heterogeneous impacts, either harmful or beneficial, depending on the altitude. These findings imply that adaptation strategies should be tailor-made to minimize the adverse impacts of climate change and variability for sustainable development across seasons and altitudes in Odisha's agriculture.
Keywords: altitude, adaptation strategies, climate change, foodgrain
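The two-step FGLS estimation of a Just-Pope specification can be sketched as follows: OLS for the mean equation, a log-variance regression on the squared residuals, then weighted least squares. The data and regressors below are synthetic stand-ins, not the Odisha panel.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 400
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # [1, rainfall, temperature]
beta, gamma = np.array([2.0, 0.5, -0.3]), np.array([-1.0, 0.0, 0.6])
y = X @ beta + np.exp(X @ gamma / 2) * rng.normal(size=n)    # variance driven by X

b_ols = np.linalg.lstsq(X, y, rcond=None)[0]                 # step 1: mean equation
u2 = (y - X @ b_ols) ** 2
g = np.linalg.lstsq(X, np.log(u2), rcond=None)[0]            # step 2: variance equation
w = 1.0 / np.sqrt(np.exp(X @ g))                             # FGLS weights
b_fgls = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)[0]
print(b_fgls, g)   # mean effects; positive g entries are risk-increasing inputs
```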
Procedia PDF Downloads 242
19135 A Game of Information in Defense/Attack Strategies: Case of Poisson Attacks
Authors: Asma Ben Yaghlane, Mohamed Naceur Azaiez
Abstract:
In this paper, we briefly introduce the concept of Poisson attacks in the case of defense/attack strategies where attacks are assumed to be continuous. We suggest a game model in which the attacker combines two criteria, a sufficient confidence level of a successful attack and a reasonably small size of the estimation error, in order to launch an attack. Here, the estimation error arises from assessing the system failure upon attack using aggregate data at the system level; the corresponding error is referred to as aggregation error. On the other hand, the defender will attempt to deter the attack by making one or both criteria inapplicable. The defender will build his/her strategy by both strengthening the targeted system and increasing the size of the error. We formulate the defender problem based on appropriate optimization models. The attacker will opt for Bayesian updating in assessing the impact of the improvements made by the defender. Then, the attacker will evaluate the feasibility of the attack before deciding whether or not to launch it. We provide illustrations to better explain the process.
Keywords: attacker, defender, game theory, information
Procedia PDF Downloads 468
19134 Model Based Fault Diagnostic Approach for Limit Switches
Authors: Zafar Mahmood, Surayya Naz, Nazir Shah Khattak
Abstract:
The degree of freedom relates to our capability to observe or model the energy paths within the system. A higher number of modeled energy paths gives us a higher degree of freedom, but it increases the time and modeling complexity, rendering the model useless for today's need for minimum time to market. Since the number of residuals that can be uniquely isolated depends on the number of independent outputs of the system, the number of sensors required increases. Examples of discrete position sensors that may be used to form an array include limit switches, Hall effect sensors, optical sensors, magnetic sensors, etc. Their mechanical design can usually be tailored to fit in the transitional path of an STME in a variety of mechanical configurations. Case studies on multi-sensor systems were carried out, and actual data from sensors are used to test this generic framework. It is investigated how the proper modeling of limit switches as timing sensors could lead to a unified and neutral residual space while keeping the implementation cost reasonably low.
Keywords: low-cost limit sensors, fault diagnostics, Single Throw Mechanical Equipment (STME), parameter estimation, parity-space
Procedia PDF Downloads 617
19133 In vivo Estimation of Mutation Rate of the Aleutian Mink Disease Virus
Authors: P.P. Rupasinghe, A.H. Farid
Abstract:
The Aleutian mink disease virus (AMDV, Carnivore amdoparvovirus 1) causes persistent infection, plasmacytosis, and the formation and deposition of immune complexes in various organs of adult mink, leading to glomerulonephritis, arteritis and sometimes death. The disease has no cure and no effective vaccine, and the identification and culling of mink positive for anti-AMDV antibodies have not been successful in controlling the infection in many countries. The failure to eradicate the virus from infected farms may be caused by keeping false-negative individuals on the farm, or by virus transmission from wild animals or neighboring farms. The identification of sources of infection, which can be performed by comparing viral sequences, is important for the success of viral eradication programs. High mutation rates could cause inaccuracies when viral sequences are used to trace an infection back to its origin. There is no published information on the mutation rate of AMDV, either in vivo or in vitro. In vivo estimation is the most accurate method, but it is difficult to perform because of the inherent technical complexities, namely infecting live animals, the unknown number of viral generations (i.e., infection cycles), the removal of deleterious mutations over time, and genetic drift. The objective of this study was to determine the mutation rate of AMDV, on which no information was available. A homogenate was prepared from the spleen of one naturally infected American mink (Neovison vison) from Nova Scotia, Canada (parental template). The near full-length genome of this isolate (91.6%, 4,143 bp) was bidirectionally sequenced. A group of black mink was inoculated with this homogenate (descendant mink). Spleen samples were collected from 10 descendant mink at 16 weeks post-inoculation (wpi) and from another 10 mink at 176 wpi, and their near full-length genomes were bidirectionally sequenced. The sequences of these mink were compared with each other and with the sequence of the parental template. The number of nucleotide substitutions at 176 wpi was 3.1 times greater than that at 16 wpi (113 vs. 36), whereas the estimate of the mutation rate at 16 wpi was 3.1 times higher than that at 176 wpi (2.85×10⁻³ vs. 9.13×10⁻⁴ substitutions/site/year), showing a decreasing trend in the mutation rate per unit of time. Although there is no report of an in vivo estimate of the mutation rate of DNA viruses in animals using the same method as in the current study, these estimates are at the higher end of the values reported for DNA viruses determined by various techniques. These high estimates are logical given the wide range of diversity and pathogenicity of AMDV isolates. The results suggest that increases in the number of nucleotide substitutions over time, and the subsequent divergence, make it difficult to trace AMDV isolates back to their origin accurately when several years elapse between the two samplings.
Keywords: Aleutian mink disease virus, American mink, mutation rate, nucleotide substitution
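As a sanity check, the reported rates are roughly reproduced if the substitution counts are averaged over the 10 sequenced descendants; this per-animal averaging is our interpretation, not a calculation stated in the abstract, and the small residual gap at 176 wpi suggests a slightly different averaging was used.

```python
# Back-of-envelope reconstruction (assumed averaging over 10 mink and 4,143 sites)
sites, n_mink = 4143, 10
for subs, weeks in [(36, 16), (113, 176)]:
    rate = (subs / n_mink) / sites / (weeks / 52.18)   # substitutions/site/year
    print(f"{weeks} wpi: {rate:.2e}")
# -> roughly 2.8e-03 and 8.1e-04, near the reported 2.85e-03 and 9.13e-04
```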
Procedia PDF Downloads 125
19132 Estimating Estimators: An Empirical Comparison of Non-Invasive Analysis Methods
Authors: Yan Torres, Fernanda Simoes, Francisco Petrucci-Fonseca, Freddie-Jeanne Richard
Abstract:
Non-invasive samples are an alternative to collecting genetic samples directly: they are collected without manipulating the animal (e.g., scats, feathers and hairs). Nevertheless, the use of non-invasive samples has some limitations. The main issue is degraded DNA, leading to poorer extraction efficiency and genotyping. Those errors delayed the widespread use of non-invasive genetic information for some years. Genotyping errors can be limited by using analysis methods that can accommodate the errors and singularities of non-invasive samples. Genotype matching and population estimation algorithms can be highlighted as important analysis tools that have been adapted to deal with those errors. Despite this recent development of analysis methods, there is still a lack of empirical performance comparisons among them. A comparison of methods with datasets differing in size and structure can be useful for future studies, since non-invasive samples are a powerful tool for obtaining information, especially on endangered and rare populations. To compare the analysis methods, four different datasets obtained from the Dryad digital repository were used. Three different matching algorithms (Cervus, Colony and Error Tolerant Likelihood Matching - ETLM) are used for matching genotypes, and two different ones for population estimation (Capwire and BayesN). The three matching algorithms showed different patterns of results. ETLM produced a smaller number of unique individuals and recaptures. A similarity in the matched genotypes between Colony and Cervus was observed, which is not a surprise given the similarity between those methods in their pairwise likelihood and clustering algorithms. The matching of ETLM showed almost no similarity with the genotypes matched by the other methods. The different clustering algorithm and error model of ETLM seem to lead to a more discerning selection, although the processing time and user-friendliness of ETLM were the worst among the compared methods. The population estimators performed differently depending on the dataset. There was consensus between the different estimators for only one dataset. BayesN showed both higher and lower estimates when compared with Capwire. BayesN does not consider the total number of recaptures as Capwire does, only the recapture events, which makes the estimator sensitive to data heterogeneity, heterogeneity here meaning different capture rates between individuals. In those examples, homogeneity of capture rates seems to be crucial for BayesN to work properly. Both methods are user-friendly and have reasonable processing times. An expanded analysis with simulated genotype data could clarify the sensitivity of the algorithms. The present comparison of the matching methods indicates that Colony seems to be the most appropriate for general use, considering a time/interface/robustness balance. The heterogeneity of the recaptures strongly affected the BayesN estimates, leading to over- and underestimation of population numbers. Capwire is therefore advisable for general use, since it performs better in a wide range of situations.
Keywords: algorithms, genetics, matching, population
Procedia PDF Downloads 143
19131 Thermodynamics of Aqueous Solutions of Organic Molecule and Electrolyte: Use Cloud Point to Obtain Better Estimates of Thermodynamic Parameters
Authors: Jyoti Sahu, Vinay A. Juvekar
Abstract:
Electrolytes are often used to bring about salting-in and salting-out of organic molecules and polymers (e.g., polyethylene glycols/proteins) from aqueous solutions. For quantification of these phenomena, a thermodynamic model which can accurately predict the activity coefficient of the electrolyte as a function of temperature is needed. The thermodynamic models available in the literature contain a large number of empirical parameters. These parameters are estimated using the lower/upper critical solution temperatures of the electrolyte/organic molecule solution at different temperatures. Since the number of parameters is large, inaccuracy can creep in during their estimation, which can affect the reliability of prediction beyond the range in which these parameters are estimated. The cloud point of a solution is related to its free energy through temperature and composition derivatives. Hence, cloud point measurement can be used for accurate estimation of the temperature and composition dependence of the parameters in the model for free energy. Hence, if we use a two-pronged procedure in which we first use the cloud point of the solution to estimate some of the parameters of the thermodynamic model and determine the rest using osmotic coefficient data, we gain on two counts. First, since the parameters estimated in each of the two steps are fewer, we achieve higher accuracy of estimation. The second and more important gain is that the resulting model parameters are more sensitive to temperature. This is crucial when we wish to use the model outside the temperature window within which the parameter estimation is sought. The focus of the present work is to prove this proposition. We have used electrolyte (NaCl/Na2CO3)-water-organic molecule (isopropanol/ethanol) as the model system. The model of Robinson-Stokes-Glueckauf is modified by incorporating temperature-dependent Flory-Huggins interaction parameters. The Helmholtz free energy expression contains, in addition to the electrostatic and translational entropic contributions, three Flory-Huggins pairwise interaction contributions, viz., χ_wp, χ_ws and χ_ps (w: water, p: polymer, s: salt). These parameters depend both on temperature and concentration. The concentration dependence is expressed in the form of a quadratic expression involving the volume fractions of the interacting species. The temperature dependence is expressed in a prescribed parametric form. To obtain the temperature-dependent interaction parameters for the organic molecule-water and electrolyte-water systems, the critical solution temperature of electrolyte-water-organic molecule solutions is measured using a cloud point measuring apparatus. The temperature- and composition-dependent interaction parameters for the electrolyte-water-organic molecule system are estimated through measurement of the cloud point of the solution. The model is used to estimate the critical solution temperature (CST) of electrolyte-water-organic molecule solutions. We have experimentally determined the critical solution temperatures of different compositions of electrolyte-water-organic molecule solutions and compared the results with the estimates based on our model. The two sets of values show good agreement. On the other hand, when only osmotic coefficients are used for estimation of the free energy model, the CST predicted using the resulting model shows poor agreement with the experiments. Thus, the importance of the CST data in the estimation of the parameters of the thermodynamic model is confirmed through this work.
Keywords: concentrated electrolytes, Debye-Hückel theory, interaction parameters, Robinson-Stokes-Glueckauf model, Flory-Huggins model, critical solution temperature
Procedia PDF Downloads 391
19130 Evaluation of Reliability Indices Using Monte Carlo Simulation Accounting Time to Switch
Authors: Sajjad Asefi, Hossein Afrakhte
Abstract:
This paper presents the evaluation of the reliability indices of an electrical distribution system using the Monte Carlo simulation technique, accounting for the Time To Switch (TTS) of each section. In this paper, random repair times are omitted from the distribution system model. For simplicity, we have assumed the reliability analysis to be based on the exponential law. Each segment has a specified failure rate (λ) and repair time (r), which give us the mean up time and mean down time of each section in the distribution system. After calculating the modified mean up time (MUT) in years, the mean down time (MDT) in hours and the unavailability (U) in h/year, the TTS has been added to the time during which the system is not available, i.e., the MDT. In this paper, we have assumed the TTS to be a random variable with a Log-Normal distribution.
Keywords: distribution system, Monte Carlo simulation, reliability, repair time, time to switch (TTS)
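The scheme above lends itself to a compact simulation: sample exponential failure times per section, add a fixed repair time plus a lognormal TTS to each outage, and accumulate the unavailability. The section parameters in this sketch are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
lam, r_hours = 0.2, 4.0                   # failures/year and repair time per section
tts_mu, tts_sigma = np.log(0.5), 0.4      # lognormal TTS, median 0.5 h
years, n_sections = 1000.0, 3

down = 0.0
for _ in range(n_sections):
    t = 0.0
    while True:
        t += rng.exponential(1.0 / lam)   # time to next failure (years)
        if t > years:
            break
        down += r_hours + rng.lognormal(tts_mu, tts_sigma)   # outage = repair + TTS
u = down / years                          # unavailability, h/year
print(f"U = {u:.2f} h/year, MDT = {down / (lam * years * n_sections):.2f} h")
```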
Procedia PDF Downloads 427
19129 Electrical Machine Winding Temperature Estimation Using Stateful Long Short-Term Memory Networks (LSTM) and Truncated Backpropagation Through Time (TBPTT)
Authors: Yujiang Wu
Abstract:
As electrical machine (e-machine) power density requirements become more stringent in vehicle electrification, mounting a temperature sensor on e-machine stator windings becomes increasingly difficult. This can lead to higher manufacturing costs, complicated harnesses, and reduced reliability. In this paper, we propose a deep-learning method for predicting electric machine winding temperature, which can either replace the sensor entirely or serve as a backup to an existing sensor. We compare the performance of our method, stateful long short-term memory networks (LSTM) with truncated backpropagation through time (TBPTT), with that of linear regression, as well as stateless LSTM with and without residual connections. Our results demonstrate the strength of combining stateful LSTM and TBPTT in tackling nonlinear time series prediction problems with long sequence lengths. Additionally, in industrial applications, prediction accuracy in the high-temperature region is more important, because winding temperature sensing is typically used for derating machine power when the temperature is high. To evaluate the performance of our algorithm, we developed a temperature-stratified MSE. We also propose a simple but effective data preprocessing trick to improve prediction accuracy in the high-temperature region. Our experimental results demonstrate the effectiveness of the proposed method in accurately predicting winding temperature, particularly in high-temperature regions, while also reducing manufacturing costs and improving reliability.
Keywords: deep learning, electrical machine, functional safety, long short-term memory networks (LSTM), thermal management, time series prediction
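The core mechanism, a stateful LSTM trained with TBPTT, amounts to carrying the hidden state across chunks while detaching it so gradients stop at chunk boundaries. The PyTorch skeleton below shows this pattern; shapes and hyperparameters are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

model = nn.LSTM(input_size=8, hidden_size=64, batch_first=True)
head = nn.Linear(64, 1)
opt = torch.optim.Adam(list(model.parameters()) + list(head.parameters()), lr=1e-3)

x = torch.randn(4, 10_000, 8)          # (batch, long sequence, features)
y = torch.randn(4, 10_000, 1)          # winding temperature target (synthetic here)

state, k = None, 200                   # k: truncation length
for t0 in range(0, x.size(1), k):
    xb, yb = x[:, t0:t0 + k], y[:, t0:t0 + k]
    out, state = model(xb, state)      # state carried across chunks (stateful)
    loss = nn.functional.mse_loss(head(out), yb)
    opt.zero_grad()
    loss.backward()
    opt.step()
    state = tuple(s.detach() for s in state)   # keep the state, truncate the graph
```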
Procedia PDF Downloads 99
19128 Prioritizing Roads Safety Based on the Quasi-Induced Exposure Method and Utilization of the Analytical Hierarchy Process
Authors: Hamed Nafar, Sajad Rezaei, Hamid Behbahani
Abstract:
Safety analysis of roads through accident rates, one of the most widely used tools, is based on the direct exposure method, which relies on the ratio of vehicle-kilometers traveled and vehicle-travel time. However, due to some fundamental flaws in its theory, difficulties in gaining access to the required data (such as traffic volume, distance and duration of the trip), and various problems in determining the exposure for specific time, place, and individual categories, there is a need for an algorithm for prioritizing road safety so that, with a new exposure method, the problems of the previous approaches are resolved. In this way, an efficient application may lead to more realistic comparisons, and the new method would be applicable to a wider range of time, place, and individual categories. Therefore, an algorithm is introduced to prioritize road safety using the quasi-induced exposure method and the analytical hierarchy process. For this research, 11 provinces of Iran were chosen as case study locations. A rural accident database was created for these provinces, the validity of the quasi-induced exposure method for Iran's accident database was explored, and the involvement ratio for different characteristics of drivers and vehicles was measured. Results showed that the quasi-induced exposure method is valid in determining the real exposure in the provinces under study. Results also showed a significant difference between the prioritizations based on the new and traditional approaches. This difference mostly stems from the perspective of the quasi-induced exposure method in determining exposure, the opinion of experts, and the quantity of accident data. Overall, the results show that prioritization based on the new approach is more comprehensive and reliable compared to the traditional approach, which depends on various parameters, including driver-vehicle characteristics.
Keywords: road safety, prioritizing, quasi-induced exposure, analytical hierarchy process
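The involvement ratio at the heart of quasi-induced exposure compares each group's share of at-fault drivers with its share of not-at-fault drivers (the latter serving as the exposure proxy in two-vehicle crashes). The pandas sketch below shows the computation on hypothetical columns; it is not the study's dataset.

```python
import pandas as pd

crashes = pd.DataFrame({
    "age_group": ["18-24", "25-64", "65+", "18-24", "25-64", "25-64"],
    "at_fault":  [True, False, False, True, False, True],
})

atf = crashes[crashes.at_fault].age_group.value_counts(normalize=True)
naf = crashes[~crashes.at_fault].age_group.value_counts(normalize=True)  # exposure proxy
involvement_ratio = (atf / naf).rename("involvement_ratio")
print(involvement_ratio)   # >1 suggests over-involvement relative to exposure
```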
Procedia PDF Downloads 338
19127 Exploring Influence Range of Tainan City Using Electronic Toll Collection Big Data
Authors: Chen Chou, Feng-Tyan Lin
Abstract:
Big Data has attracted a lot of attention in many fields for analyzing research issues based on large volumes of data. Electronic Toll Collection (ETC) is one of the Intelligent Transportation System (ITS) applications in Taiwan, used to record the starting point, end point, distance and travel time of vehicles on the national freeway. This study, taking advantage of ETC big data combined with urban planning theory, attempts to explore various phenomena of inter-city transportation activities. ETC data, part of the government's open data, are numerous, complete and quickly updated. One may recall that living areas have been delimited by location, population, area and subjective consciousness. However, these factors cannot appropriately reflect people's movement paths in daily life. In this study, the concept of "Living Area" is replaced by "Influence Range" to capture dynamics and variation with time and the purposes of activities. This study uses data mining with Python and Excel, visualizes the numbers of trips with GIS, explores the influence range of Tainan city and the purposes of trips, and discusses the living area as currently delimited. It creates a dialogue between the concepts of "Central Place Theory" and "Living Area", presents a new point of view, and integrates applications of big data, urban planning and transportation. The findings will be valuable for resource allocation and land apportionment in spatial planning.
Keywords: Big Data, ITS, influence range, living area, central place theory, visualization
Procedia PDF Downloads 279
19126 Dimensionality Reduction in Modal Analysis for Structural Health Monitoring
Authors: Elia Favarelli, Enrico Testi, Andrea Giorgetti
Abstract:
Autonomous structural health monitoring (SHM) of many structures and bridges has become a topic of paramount importance for maintenance purposes and safety reasons. This paper proposes a set of machine learning (ML) tools to perform automatic feature selection and detection of anomalies in a bridge from vibrational data, and compares different feature extraction schemes to increase the accuracy and reduce the amount of data collected. As a case study, the Z-24 bridge is considered because of its extensive database of accelerometric data in both standard and damaged conditions. The proposed framework starts from the first four fundamental frequencies extracted through operational modal analysis (OMA) and clustering, followed by density-based time-domain filtering (tracking). The extracted fundamental frequencies are then fed to a dimensionality reduction block implemented through two different approaches: feature selection (intelligent multiplexer), which tries to estimate the most reliable frequencies based on the evaluation of some statistical features (i.e., mean value, variance, kurtosis), and feature extraction (auto-associative neural network, ANN), which combines the fundamental frequencies to extract new damage-sensitive features in a low-dimensional feature space. Finally, one-class classifier (OCC) algorithms perform anomaly detection, trained with standard-condition points and tested with normal and anomalous ones. In particular, a new anomaly detection strategy is proposed, namely one-class classifier neural network two (OCCNN2), which exploits the classification capability of standard classifiers in an anomaly detection problem, finding the standard class (the boundary of the feature space in normal operating conditions) through a two-step approach: coarse and fine boundary estimation. The coarse estimation uses classic OCC techniques, while the fine estimation is performed through a feedforward neural network (NN) that exploits the boundaries estimated in the coarse step. The detection algorithms are then compared with known methods based on principal component analysis (PCA), kernel principal component analysis (KPCA), and the auto-associative neural network (ANN). In many cases, the proposed solution increases the performance with respect to the standard OCC algorithms in terms of F1 score and accuracy. In particular, by evaluating the correct features, anomalies can be detected with accuracy and an F1 score greater than 96% with the proposed method.
Keywords: anomaly detection, frequency selection, modal analysis, neural network, sensor network, structural health monitoring, vibration measurement
Procedia PDF Downloads 123
19125 Performance Evaluation of a Minimum Mean Square Error-Based Physical Sidelink Shared Channel Receiver under Fading Channel
Authors: Yang Fu, Jaime Rodrigo Navarro, Jose F. Monserrat, Faiza Bouchmal, Oscar Carrasco Quilis
Abstract:
Cellular Vehicle-to-Everything (C-V2X) is considered a promising solution for future autonomous driving. From Release 16 to Release 17, the Third Generation Partnership Project (3GPP) has introduced the definitions and services for 5G New Radio (NR) V2X. Experience from previous generations has shown that establishing a simulator for C-V2X communications is an essential preliminary step to achieving reliable and stable communication links. This paper proposes a complete framework for a link-level simulator based on the 3GPP specifications for the Physical Sidelink Shared Channel (PSSCH) of the 5G NR Physical Layer (PHY). In this framework, several algorithms for the receiver, i.e., sliding-window channel estimation and Minimum Mean Square Error (MMSE)-based equalization, are developed. Finally, the performance of the developed PSSCH receiver is validated through extensive simulations under different assumptions.
Keywords: C-V2X, channel estimation, link-level simulator, sidelink, 3GPP
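MMSE equalization at a single resource element follows the standard form G = Hᴴ(HHᴴ + σ²I)⁻¹; the numpy sketch below shows this generic building block only, not the paper's full PSSCH pipeline.

```python
import numpy as np

def mmse_equalize(y, H, noise_var):
    """y: (nrx,) received samples; H: (nrx, ntx) channel estimate at one RE."""
    G = np.conj(H).T @ np.linalg.inv(H @ np.conj(H).T + noise_var * np.eye(H.shape[0]))
    return G @ y

H = np.array([[0.9 + 0.2j], [0.1 - 0.6j]])          # 2 rx antennas, 1 layer
s = (1 + 1j) / np.sqrt(2)                           # QPSK symbol
y = H[:, 0] * s + 0.05 * (np.random.randn(2) + 1j * np.random.randn(2))
print(mmse_equalize(y, H, noise_var=2 * 0.05 ** 2))  # ≈ s
```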
Procedia PDF Downloads 130
19124 Economic Evaluation of an Offshore Wind Project under Uncertainty and Risk Circumstances
Authors: Sayed Amir Hamzeh Mirkheshti
Abstract:
Offshore wind energy, as a strategic renewable energy, has been growing rapidly due to its availability, abundance and clean nature. On the other hand, the budget of such a project is considerably higher than that of other renewable energies, and it takes longer to complete. Accordingly, precise estimation of time and cost is needed to promote awareness among developers and society and to convince them to develop this kind of energy despite its difficulties. Risks occurring during the project cause its duration and cost to change constantly. Therefore, to develop offshore wind power, it is critical to consider all potential risks which impact the project and to simulate their effects. Hence, knowing these risks can help in selecting the most influential strategies, such as avoidance, transfer, and acceptance, in order to decrease their probability and impact. This paper presents an evaluation of the feasibility of a 500 MW offshore wind project in the Persian Gulf and examines its situation under uncertain resources and risk. The purpose of this study is to evaluate the time and cost of an offshore wind project under risk circumstances and uncertain resources by using Monte Carlo simulation. We analyzed each risk and activity along with their distribution functions and their effects on the project.
Keywords: wind energy project, uncertain resources, risks, Monte Carlo simulation
Procedia PDF Downloads 352
19123 The Economics of Ecosystem Services and Biodiversity: Valuing Ecotourism-Local Perspectives to Global Discourses-Stakeholders' Analysis
Authors: Diptimayee Nayak
Abstract:
Ecotourism has been recognised as a popular component of alternative tourism, which claims to safeguard the host's local environment and economy. This concept of ecological tourism (ecotourism) has become more meaningful in evaluating the recreational function and services of any pristine ecosystem in the context of 'The Economics of Ecosystems and Biodiversity (TEEB)'. Ecotourism is said to be a local solution to the global problem of conserving ecosystems and optimising the utilisation of their services. This paper takes the case of the recreational services of an Indian protected-area ecosystem, the Bhitarakanika mangrove protected area, discussing how ecotourism functions from the perspectives of different stakeholders. Specific stakeholders, viz., tourists and local people, are taken for analysis, as they are believed to be the major beneficiaries of ecotourism. The stakeholder analysis is based on travel cost techniques (using a truncated Poisson distribution model) for tourists, and on descriptive and analytical tools for local people. The evaluation of the stakeholder analysis of ecotourism has gained impetus after the formulation of the Ecotourism Guidelines by the Ministry of Environment and Forest (MoEF), Government of India. The paper concludes that ecotourism issues and challenges are site-specific and region-specific; without critically focusing on the challenges of ecotourism faced at the local level, the discourses of ecotourism at the global level cannot be tackled. Mere integration and replication of global-level policies to be followed at the local level will not be successful (top-down). Rather, mainstreaming the local-level decision-making process into the global policy structure helps to solve global issues to a greater extent (bottom-up).
Keywords: ecosystem services, ecotourism, TEEB, economic valuation, stakeholders, travel cost techniques
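The truncated Poisson trip-count model used in such travel cost studies accounts for the fact that on-site samples never observe zero trips, so the likelihood is truncated at one. The sketch below fits a zero-truncated Poisson by maximum likelihood on synthetic data; the variables and the data-generating shortcut are illustrative, not the paper's survey.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
cost = rng.uniform(5, 50, 300)
trips = rng.poisson(np.exp(1.8 - 0.03 * cost)) + 1   # crude stand-in for an on-site sample

def negll(theta):
    mu = np.exp(theta[0] + theta[1] * cost)
    # zero-truncated Poisson: P(k) = e^{-mu} mu^k / (k! (1 - e^{-mu})); k! dropped (constant)
    return -np.sum(trips * np.log(mu) - mu - np.log(1 - np.exp(-mu)))

res = minimize(negll, x0=[0.0, 0.0], method="BFGS")
b0, b1 = res.x
print(b1, -1 / b1)    # cost coefficient; consumer surplus per trip is roughly -1/b1
```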
Procedia PDF Downloads 249