Search results for: sieve extremum estimates
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 743

533 Mass Flux and Forensic Assessment: Informed Remediation Decision Making at One of Canada’s Most Polluted Sites

Authors: Tony R. Walker, N. Devin MacAskill, Andrew Thalhiemer

Abstract:

Sydney Harbour, Nova Scotia, Canada has long been subject to effluent and atmospheric inputs of contaminants, including thousands of tons of PAHs from a large coking and steel plant which operated in Sydney for nearly a century. The contaminants comprised coal tar residues which were discharged from coking ovens into a small tidal tributary, which became known as the Sydney Tar Ponds (STPs), and subsequently discharged into Sydney Harbour. An Environmental Impact Statement concluded that mobilization of contaminated sediments posed unacceptable ecological risks; therefore, immobilizing contaminants in the STPs using solidification and stabilization was identified as a primary source control remediation option to mitigate continued transport of contaminated sediments from the STPs into Sydney Harbour. Recent developments in contaminant mass flux techniques focus on understanding “mobile” vs. “immobile” contaminants at remediation sites. Forensic source evaluations are also increasingly used for understanding the origins of PAH contaminants in soils or sediments. Flux and forensic source evaluation-informed remediation decision-making uses this information to develop remediation end point goals aimed at reducing off-site exposure and managing potential ecological risk. This study included reviews of previous flux studies, calculation of current mass flux estimates, and a forensic assessment using PAH fingerprint techniques during remediation of one of Canada’s most polluted sites at the STPs. Historically, the STPs were thought to be the major source of PAH contamination in Sydney Harbour, with estimated discharges of nearly 800 kg/year of PAHs. However, during three years of remediation monitoring only 17-97 kg/year of PAHs were discharged from the STPs, which was corroborated by an independent PAH flux study during the first year of remediation that estimated 119 kg/year. The estimated mass efflux of PAHs from the STPs during remediation was in stark contrast to the ~2000 kg loading thought necessary to cause a short-term increase in harbour sediment PAH concentrations. These mass flux estimates during remediation were also three to eight times lower than the PAHs discharged from the STPs a decade prior to remediation, a period during which government studies demonstrated ongoing reductions in PAH concentrations in harbour sediments. Flux results were also corroborated by forensic source evaluations using PAH fingerprint techniques, which found a common source of PAHs for urban soils and marine and aquatic sediments in and around Sydney. Coal combustion (from historical coking) and coal dust transshipment (from current coal transshipment facilities) are likely the principal sources of PAHs in these media, and not migration of PAH-laden sediments from the STPs during a large-scale remediation project.

Keywords: contaminated sediment, mass flux, forensic source evaluations, remediation

Procedia PDF Downloads 222
532 Determination of Genetic Markers, Microsatellites Type, Linked to Milk Production Traits in Goats

Authors: Mohamed Fawzy Elzarei, Yousef Mohammed Al-Dakheel, Ali Mohamed Alseaf

Abstract:

Modern molecular techniques, such as single-marker analysis of traits linked to these markers, can provide rapid and accurate genetic results. In the last two decades of the twentieth century, applications of molecular techniques in cattle, sheep, and pigs advanced considerably. In goats, especially in our region, the application of molecular techniques still lags far behind other species. As reported by many researchers, microsatellite markers are among the most suitable markers for linkage studies. Single-marker analysis of traits of interest is one technique that allows animals to be selected early without the need to map the entire genome. The simplicity, applicability, and low cost of this technique have given it a wide range of applications in many areas of genetics and molecular biology. It also provides a useful approach for evaluating genetic differentiation, particularly in populations that are genetically poorly characterized. The expected breeding value (EBV) and yield deviation (YD) are the parameters most used for studying linkage between quantitative traits and molecular markers, since these values are raw data corrected for non-genetic factors. A total of 17 microsatellite markers (from chromosomes 6, 14, 18, 20 and 23) were used in this study to search for chromosomal regions that could be responsible for genetic variability in some milk traits and that explain part of the phenotypic variance. Results of single-marker analyses were used to identify linkage between microsatellite markers and variation in the EBVs of the following traits: milk yield, protein percentage, fat percentage, litter size and weight at birth, and litter size and weight at weaning. In the parameter estimates from the forward and backward solutions of the stepwise regression procedure for the milk yield trait, only two markers, OARCP9 and AGLA29, showed a highly significant effect (p≤0.01) in both solutions. The forward solutions of the different equations indicated that the R2 of these equations depended heavily on only the two partial regression coefficients (βi) for these markers. For the milk protein trait, four markers showed significant effects: BMS2361 and CSSM66 (p≤0.01), and BMS2626 and OARCP9 (p≤0.05). Similarly, four markers (MCM147, BM1225, INRA006, and INRA133) showed highly significant effects (p≤0.01) in both the backward and forward solutions in association with the milk fat trait. For the litter size at birth and at weaning traits, only one marker each (BM143 (p≤0.01) and RJH1 (p≤0.05), respectively) showed a significant effect in the backward and forward solutions. For the litter weight at birth (LWB) trait, the stepwise regression parameter estimates showed that one marker (MCM147) had a highly significant effect (p≤0.01) and two markers (ILSTS011, CSSM66) had significant effects (p≤0.05) in the backward and forward solutions.
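
A minimal sketch of the single-marker screening idea described above, using forward/backward stepwise selection of markers against EBVs. The genotype matrix, marker effects, and selection size are hypothetical placeholders, not the goat data of the paper.

```python
# Hypothetical sketch: screen microsatellite markers against EBVs of one milk
# trait with forward and backward stepwise selection.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import SequentialFeatureSelector

rng = np.random.default_rng(0)
n_animals, n_markers = 120, 17
X = rng.integers(0, 3, size=(n_animals, n_markers)).astype(float)   # toy allele counts
ebv = 0.8 * X[:, 3] - 0.5 * X[:, 9] + rng.normal(0, 1, n_animals)   # toy EBVs

for direction in ("forward", "backward"):
    sfs = SequentialFeatureSelector(LinearRegression(),
                                    n_features_to_select=2,
                                    direction=direction)
    sfs.fit(X, ebv)
    print(direction, "selected marker indices:", np.where(sfs.get_support())[0])
```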

Keywords: microsatellites marker, estimated breeding value, stepwise regression, milk traits

Procedia PDF Downloads 66
531 A Survey on Quasi-Likelihood Estimation Approaches for Longitudinal Set-ups

Authors: Naushad Mamode Khan

Abstract:

The Com-Poisson (CMP) model is one of the most popular discrete generalized linear models (GLMs) that handles equi-, over-, and under-dispersed data. In the longitudinal context, an integer-valued autoregressive (INAR(1)) process that incorporates covariate specification has been developed to model longitudinal CMP counts. However, the joint CMP likelihood function is difficult to specify and thus restricts likelihood-based estimation methodology. The joint generalized quasi-likelihood approach (GQL-I) was instead considered, but it is rather computationally intensive and may not even estimate the regression effects due to a complex and frequently ill-conditioned covariance structure. This paper proposes a new GQL approach for estimating the regression parameters (GQL-III) that is based on a single score vector representation. The performance of GQL-III is compared with GQL-I and separate marginal GQLs (GQL-II) through simulation experiments and is shown to yield estimates as efficient as those of GQL-I while being far more computationally stable.
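
The sketch below shows the generic GQL-type Newton iteration behind such estimators, under simplifying assumptions: a log link, an AR(1) working correlation, and a Poisson-type variance in place of the CMP-specific variance terms of GQL-I/III, which are beyond this illustration.

```python
# GQL-style Newton iteration for longitudinal count data (simplified sketch).
import numpy as np

def gql_fit(X, y, rho=0.5, n_iter=25):
    """X: (subjects, T, p) covariate array; y: (subjects, T) count responses."""
    n, T, p = X.shape
    R = rho ** np.abs(np.subtract.outer(np.arange(T), np.arange(T)))  # AR(1) working correlation
    beta = np.zeros(p)
    for _ in range(n_iter):
        score = np.zeros(p)
        info = np.zeros((p, p))
        for i in range(n):
            mu = np.exp(X[i] @ beta)          # mean under the log link
            D = mu[:, None] * X[i]            # derivative of mu w.r.t. beta
            A = np.diag(np.sqrt(mu))
            V = A @ R @ A                     # working covariance
            Vinv = np.linalg.inv(V)
            score += D.T @ Vinv @ (y[i] - mu)
            info += D.T @ Vinv @ D
        beta = beta + np.linalg.solve(info, score)
    return beta

# toy data with true coefficients (0.3, 0.5)
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 6, 2)); X[..., 0] = 1.0
y = rng.poisson(np.exp(X @ np.array([0.3, 0.5])))
print(gql_fit(X, y))
```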

Keywords: longitudinal, Com-Poisson, ill-conditioned, INAR(1), GLMs, GQL

Procedia PDF Downloads 338
530 Random Access in IoT Using Naïve Bayes Classification

Authors: Alhusein Almahjoub, Dongyu Qiu

Abstract:

This paper deals with the random access procedure in next-generation networks and presents a solution to reduce the total service time (TST), which is one of the most important performance metrics in current and future internet of things (IoT) based networks. The proposed solution focuses on the calculation of the optimal transmission probability, which maximizes the success probability and reduces TST. It uses the number of idle preambles observed in every time slot and, based on it, estimates the number of backlogged IoT devices using Naïve Bayes estimation, which is a type of supervised learning in the machine learning domain. The estimation of backlogged devices is necessary since the optimal transmission probability depends on it and the eNodeB does not have this information. Simulations carried out in MATLAB verify that the proposed solution gives excellent performance.
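
A hypothetical sketch of the idea: infer the backlog from the observed number of idle preambles with a Bayes classifier, then transmit with probability about M/N̂, the classic throughput-maximizing choice for slotted random access. The binomial likelihood for the idle-preamble count is an approximation, not the paper's exact model, and all numbers are placeholders.

```python
import numpy as np
from scipy.stats import binom

M = 54                                    # available preambles per slot (assumed)
candidates = np.arange(1, 401)            # candidate backlog sizes (classes)
prior = np.ones_like(candidates, dtype=float) / candidates.size

def estimate_backlog(idle_count, p_tx):
    # P(one preamble stays idle) when each of N devices transmits w.p. p_tx
    p_idle = (1.0 - p_tx / M) ** candidates
    like = binom.pmf(idle_count, M, p_idle)   # idle preambles ~ Binomial(M, p_idle), approx.
    post = like * prior
    return candidates[np.argmax(post)]        # MAP backlog estimate

n_hat = estimate_backlog(idle_count=20, p_tx=0.3)
p_opt = min(1.0, M / n_hat)                   # transmission probability for the next slot
print(n_hat, round(p_opt, 3))
```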

Keywords: random access, LTE/LTE-A, 5G, machine learning, Naïve Bayes estimation

Procedia PDF Downloads 127
529 Opportunities and Challenges of Omni Channel Retailing in the Emerging Market

Authors: Salma Ahmed, Anil Kumar

Abstract:

This paper develops and estimates a model for understanding the drivers of and barriers to Omni-channel retail. The study serves as one of the first attempts to empirically test the effect of various factors on Omni-channel retail. Since Omni-channel retail is relatively new and evolving, we hypothesize three drivers: (1) innovative sales and marketing opportunities, (2) channel migration, and (3) cross-channel synergies; and three barriers: (1) integrated sales and marketing operations, (2) visibility and synchronization, and (3) integration and technology challenges. The findings strongly support that Omni-channel effects exist for cross-channel synergy and channel migration, and partially support innovative sales and marketing opportunities. We also found that the variables we identified as barriers to Omni-channel retail have a strong impact on Omni-channel retail.

Keywords: retailing, multichannel, Omni-channel, emerging market

Procedia PDF Downloads 514
528 Does "R and D" Investment Drive Economic Growth? Evidence from Africa

Authors: Boopen Seetanah, R. V. Sannassee, Sheereen Fauzel, Robin Nunkoo

Abstract:

The bulk of research on the impact of research and development (R&D) has been carried out in developed economies, where the intensity of R&D expenditure has been relatively high and stable for many years. However, there is a paucity of similar studies in developing countries. In this paper, we provide empirical estimates of the impact of R&D investment on economic growth in a developing African economy (Mauritius), where R&D expenditure intensity was initially low but has been rising, albeit moderately, in recent years. Using a dynamic time series analysis over the period 1980 to 2014 in a vector autoregressive framework, R&D is shown to have a positive and significant effect on the economic progress of the island, although the impact is considerably smaller when compared both to other ingredients of growth and to reported elasticities from developed economies. Interestingly, there is evidence of bi-directional causality between R&D and growth. Furthermore, R&D positively impacts both domestic and foreign investment, suggesting the possibility of indirect effects.
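
A minimal sketch, not the authors' exact specification: a VAR on growth rates with Granger-causality tests in both directions to probe the bi-directional causality mentioned above. The series, column names, and lag choice are placeholders.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
df = pd.DataFrame({"gdp": np.cumsum(rng.normal(0.03, 0.02, 35)),   # placeholder log GDP
                   "rd": np.cumsum(rng.normal(0.05, 0.04, 35))})   # placeholder log R&D

model = VAR(df.diff().dropna())          # VAR on growth rates
res = model.fit(maxlags=2, ic="aic")

print(res.test_causality("gdp", ["rd"], kind="f").summary())   # R&D -> growth
print(res.test_causality("rd", ["gdp"], kind="f").summary())   # growth -> R&D
```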

Keywords: R & D, VECM, Africa, Mauritius

Procedia PDF Downloads 410
527 Stochastic Approach for Technical-Economic Viability Analysis of Electricity Generation Projects with Natural Gas Pressure Reduction Turbines

Authors: Roberto M. G. Velásquez, Jonas R. Gazoli, Nelson Ponce Jr, Valério L. Borges, Alessandro Sete, Fernanda M. C. Tomé, Julian D. Hunt, Heitor C. Lira, Cristiano L. de Souza, Fabio T. Bindemann, Wilmar Wounnsoscky

Abstract:

Nowadays, society is working toward reducing energy losses and greenhouse gas emissions, as well as seeking clean energy sources, as a result of the constant increase in energy demand and emissions. Energy loss occurs in the gas pressure reduction stations at the delivery points in natural gas distribution systems (city gates). Installing pressure reduction turbines (PRT) parallel to the static reduction valves at the city gates enhances the energy efficiency of the system by recovering the enthalpy of the pressurized natural gas, obtaining shaft work in the pressure-lowering process and generating electrical power. Currently, the Brazilian natural gas transportation network extends over 9,409 km, while the system has 16 national and 3 international natural gas processing plants and more than 143 delivery points to final consumers. Thus, the potential for installing PRTs in Brazil is 66 MW of power, which could yearly avoid the emission of 235,800 tons of CO2 and generate 333 GWh/year of electricity. On the other hand, the economic viability analysis of these energy efficiency projects is commonly carried out based on estimates of the project's cash flow obtained from forecasts of several variables. Usually, the cash flow analysis is performed using representative values of these variables, obtaining a deterministic set of financial indicators associated with the project. However, in most cases, these variables cannot be predicted with sufficient accuracy, resulting in the need to consider, to a greater or lesser degree, the risk associated with the calculated financial return. This paper presents an approach to the technical-economic viability analysis of PRT projects that explicitly considers the uncertainties associated with the input parameters of the financial model, such as the gas pressure at the delivery point, the amount of energy generated by the PRT, and the future price of energy, among others, using sensitivity analysis techniques, scenario analysis, and Monte Carlo methods. In the latter case, estimates of several financial risk indicators, as well as their empirical probability distributions, can be obtained. The result is a methodology for the financial risk analysis of PRT projects. The results of this paper allow a more accurate assessment of the financial feasibility of potential PRT projects in Brazil. This methodology will be tested at the Cuiabá thermoelectric plant, located in the state of Mato Grosso, Brazil, and can be applied to study the potential in other countries.
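
An illustrative Monte Carlo sketch of the financial-risk step: sample uncertain inputs, compute an NPV distribution, and report risk indicators such as P(NPV < 0). All distributions and cost figures are made up for illustration, not values from the Cuiabá case.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims, years, rate = 100_000, 15, 0.10
capex = 2.5e6                                              # assumed investment (USD)

energy = rng.normal(5.0, 0.8, (n_sims, years))             # GWh/year generated by the PRT
price = rng.lognormal(np.log(60), 0.25, (n_sims, years))   # USD/MWh electricity price
opex = rng.normal(0.15e6, 0.02e6, (n_sims, years))         # USD/year O&M

cash = energy * 1_000 * price - opex                       # yearly net cash flow
discount = (1 + rate) ** -np.arange(1, years + 1)
npv = cash @ discount - capex

print(f"mean NPV = {npv.mean():,.0f} USD")
print(f"P(NPV < 0) = {(npv < 0).mean():.2%}")
print(f"5th percentile NPV = {np.percentile(npv, 5):,.0f} USD")
```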

Keywords: pressure reduction turbine, natural gas pressure drop station, energy efficiency, electricity generation, monte carlo methods

Procedia PDF Downloads 92
526 Mean Monthly Rainfall Prediction at Benina Station Using Artificial Neural Networks

Authors: Hasan G. Elmazoghi, Aisha I. Alzayani, Lubna S. Bentaher

Abstract:

Rainfall is a highly non-linear phenomenon, which requires the application of powerful supervised data mining techniques for its accurate prediction. In this study, the artificial neural network (ANN) technique is used to predict the mean monthly historical rainfall data collected at Benina station in Benghazi over 31 years, the period 1977-2006, and the results are compared against the observed values. A specific objective was to determine the best combination of weather variables to be used as inputs for the ANN model. Several statistical parameters were calculated, and an uncertainty analysis of the results is also presented. The best ANN model is then applied to the data of one year (2007) as a case study in order to evaluate the performance of the model. Simulation results reveal that the application of the ANN technique is promising and can provide reliable estimates of rainfall.
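
A hedged sketch of the approach: a small multilayer perceptron trained on monthly climatic variables to predict mean monthly rainfall. The feature set, network size, and data are placeholders, not the Benina records used in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)
n = 360                                    # 30 years x 12 months of synthetic data
X = rng.normal(size=(n, 4))                # e.g. temperature, humidity, pressure, wind
y = 20 + 5 * X[:, 0] - 3 * X[:, 1] + rng.normal(0, 2, n)   # synthetic monthly rainfall (mm)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))
ann.fit(X_tr, y_tr)
pred = ann.predict(X_te)
print(f"RMSE = {mean_squared_error(y_te, pred) ** 0.5:.2f} mm, R2 = {r2_score(y_te, pred):.2f}")
```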

Keywords: neural networks, rainfall, prediction, climatic variables

Procedia PDF Downloads 463
525 Racial Bias by Prosecutors: Evidence from Random Assignment

Authors: CarlyWill Sloan

Abstract:

Racial disparities in criminal justice outcomes are well-documented. However, there is little evidence on the extent to which racial bias by prosecutors is responsible for these disparities. This paper tests for racial bias in convictions by prosecutors. To identify effects, this paper leverages as-good-as-random variation in prosecutor race using detailed administrative data on the case assignment process and case outcomes in New York County, New York. This paper shows that the assignment of an opposite-race prosecutor leads to a 5 percentage point (~8 percent) increase in the likelihood of conviction for property crimes. There is no evidence of effects for other types of crimes. Additional results indicate that decreased dismissals by opposite-race prosecutors likely drive the property crime estimates.

Keywords: criminal justice, discrimination, prosecutors, racial disparities

Procedia PDF Downloads 176
524 Laboratory Investigation of the Pavement Condition in Lebanon: Implementation of Reclaimed Asphalt Pavement in the Base Course and Asphalt Layer

Authors: Marinelle El-Khoury, Lina Bouhaya, Nivine Abbas, Hassan Sleiman

Abstract:

The road network in the north of Lebanon is a prime example of the lack of pavement design and execution in Lebanon. These roads show major distresses and hence should be tested and evaluated. The aim of this research is to investigate and determine the deficiencies in road surface design in Lebanon, and to propose an environmentally friendly asphalt mix design. This paper consists of several parts: (i) evaluating pavement performance and structural behavior, (ii) identifying the distresses using visual examination followed by laboratory tests, (iii) deciding on the optimal solution, whether rehabilitation or reconstruction is required, and finally (iv) identifying a sustainable method, which uses recycled material in the proposed mix. The asphalt formula contains Reclaimed Asphalt Pavement (RAP) in the base course layer and in the asphalt layer. Visual inspection of the roads in Tripoli shows that these roads face a high level of distress severity. Consequently, the pavement should be reconstructed rather than simply rehabilitated. Coring was done to determine the pavement layer thickness. The results were compared to the American Association of State Highway and Transportation Officials (AASHTO) design methodology and showed that the existing asphalt thickness is lower than the required asphalt thickness. Prior to the pavement reconstruction, the road materials were tested according to American Society for Testing and Materials (ASTM) specifications to identify whether the materials are suitable. Accordingly, the ASTM tests performed on the base course were sieve analysis, Atterberg limits, modified Proctor, Los Angeles abrasion, and California Bearing Ratio (CBR) tests. Results show a CBR value higher than 70%; hence, these aggregates could be used as a base course layer. The asphalt layer was also tested, and the results of the Marshall flow and stability tests meet the ASTM specifications. In the last section, an environmentally friendly mix was proposed. An optimal RAP percentage of 30%, which produced a well-graded base course and asphalt mix, was determined through a series of trials.

Keywords: asphalt mix, reclaimed asphalt pavement, California bearing ratio, sustainability

Procedia PDF Downloads 102
523 Earthquake Risk Assessment Using Out-of-Sequence Thrust Movement

Authors: Rajkumar Ghosh

Abstract:

Earthquakes are natural disasters that pose a significant risk to human life and infrastructure. Effective earthquake mitigation measures require a thorough understanding of the dynamics of seismic occurrences, including thrust movement. Traditionally, estimating thrust movement has relied on typical techniques that may not capture the full complexity of these events. Therefore, investigating alternative approaches, such as incorporating out-of-sequence thrust movement data, could enhance earthquake mitigation strategies. This review aims to provide an overview of the applications of out-of-sequence thrust movement in earthquake mitigation. By examining existing research and studies, the objective is to understand how precise estimation of thrust movement can contribute to improving structural design, analyzing infrastructure risk, and developing early warning systems. The study demonstrates how to estimate out-of-sequence thrust movement using multiple data sources, including GPS measurements, satellite imagery, and seismic recordings. By analyzing and synthesizing these diverse datasets, researchers can gain a more comprehensive understanding of thrust movement dynamics during seismic occurrences. The review identifies potential advantages of incorporating out-of-sequence data in earthquake mitigation techniques. These include improving the efficiency of structural design, enhancing infrastructure risk analysis, and developing more accurate early warning systems. By considering out-of-sequence thrust movement estimates, researchers and policymakers can make informed decisions to mitigate the impact of earthquakes. This study contributes to the field of seismic monitoring and earthquake risk assessment by highlighting the benefits of incorporating out-of-sequence thrust movement data. By broadening the scope of analysis beyond traditional techniques, researchers can enhance their knowledge of earthquake dynamics and improve the effectiveness of mitigation measures. The study collects data from various sources, including GPS measurements, satellite imagery, and seismic recordings. These datasets are then analyzed using appropriate statistical and computational techniques to estimate out-of-sequence thrust movement. The review integrates findings from multiple studies to provide a comprehensive assessment of the topic. The study concludes that incorporating out-of-sequence thrust movement data can significantly enhance earthquake mitigation measures. By utilizing diverse data sources, researchers and policymakers can gain a more comprehensive understanding of seismic dynamics and make informed decisions. However, challenges exist, such as data quality difficulties, modelling uncertainties, and computational complications. To address these obstacles and improve the accuracy of estimates, further research and advancements in methodology are recommended. Overall, this review serves as a valuable resource for researchers, engineers, and policymakers involved in earthquake mitigation, as it encourages the development of innovative strategies based on a better understanding of thrust movement dynamics.

Keywords: earthquake, out-of-sequence thrust, disaster, human life

Procedia PDF Downloads 53
522 Estimation of Location and Scale Parameters of Extended Exponential Distribution Based on Record Statistics

Authors: E. Krishna

Abstract:

An extended form of the exponential distribution using the Marshall and Olkin method is introduced. The location-scale family of these distributions is considered. For the location-scale-free family, exact expressions for single and product moments of upper record statistics are derived. The mean, variance, and covariance of record values are computed for various values of the shape parameter. Using these, the BLUEs of the location and scale parameters are derived, and the variances and covariance of the estimates are obtained. Through Monte Carlo simulation, confidence intervals for the location and scale parameters are constructed. The best linear unbiased predictor (BLUP) of future records is also discussed.
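
A generic sketch of the BLUE computation for a location-scale family based on upper records: given the means and covariances of the standardized record values (computed in the paper for the extended exponential model), the BLUEs of (mu, sigma) follow from generalized least squares. The toy illustration uses the standard exponential, for which these moments are known in closed form.

```python
import numpy as np

def blue_location_scale(records, alpha, Sigma):
    """BLUE of (mu, sigma) from records with standardized means alpha, covariances Sigma."""
    r = np.asarray(records, dtype=float)
    X = np.column_stack([np.ones_like(alpha), alpha])   # design matrix [1, alpha_i]
    W = np.linalg.inv(Sigma)
    cov = np.linalg.inv(X.T @ W @ X)                    # covariance of estimates / sigma^2
    mu_hat, sigma_hat = cov @ X.T @ W @ r
    return mu_hat, sigma_hat, cov

# toy illustration: standard-exponential records have alpha_i = i, Sigma_ij = min(i, j)
m = 5
alpha = np.arange(1, m + 1, dtype=float)
Sigma = np.minimum.outer(alpha, alpha)
records = 2.0 + 1.5 * alpha                              # noiseless records with mu=2, sigma=1.5
print(blue_location_scale(records, alpha, Sigma))
```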

Keywords: BLUE, BLUP, confidence interval, Marshall-Olkin distribution, Monte Carlo simulation, prediction of future records, record statistics

Procedia PDF Downloads 401
521 A Deep Learning Based Integrated Model For Spatial Flood Prediction

Authors: Vinayaka Gude Divya Sampath

Abstract:

The research introduces an integrated prediction model to assess the susceptibility of roads in a future flooding event. The model consists of a deep learning algorithm for forecasting gauge-height data and the Flood Inundation Mapper (FIM) for spatial flooding. An optimal architecture for the long short-term memory (LSTM) network was identified for the gauge located on the Tangipahoa River at Robert, LA. Dropout was applied to the model to evaluate the uncertainty associated with the predictions. The estimates are then used along with FIM to identify the spatial flooding. Further geoprocessing in ArcGIS provides the susceptibility values for different roads. The model was validated based on the devastating flood of August 2016. The paper discusses the challenges of generalizing the methodology to other locations and to various types of flooding. The developed model can be used by transportation departments and other emergency response organizations for effective disaster management.
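
A minimal sketch of the forecasting component, assuming a 1-D array of gauge-height readings: an LSTM forecaster with dropout kept active at prediction time (Monte Carlo dropout) to obtain an uncertainty band. The architecture, window length, and data are illustrative, not the paper's tuned configuration.

```python
import numpy as np
import tensorflow as tf

lookback = 48
series = np.sin(np.arange(2000) / 24.0) + np.random.normal(0, 0.05, 2000)  # placeholder gauge data
X = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])[..., None]
y = series[lookback:]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(lookback, 1)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)

x_last = X[-1:]                                   # most recent window
mc = np.stack([model(x_last, training=True).numpy().ravel() for _ in range(100)])
print(f"forecast {mc.mean():.3f} ± {1.96 * mc.std():.3f}")   # dropout-based uncertainty
```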

Keywords: deep learning, disaster management, flood prediction, urban flooding

Procedia PDF Downloads 124
520 Analysis of the Aquifer Vulnerability of a Miopliocene Arid Area Using Drastic and SI Models

Authors: H. Majour, L. Djabri

Abstract:

Many methods for assessing groundwater vulnerability have been developed around the world (e.g., PRAST, DRIST, APRON/ARAA, PRASTCHIM, GOD). In this study, our choice fell on two recent complementary methods based on index category mapping with weighting criteria (point count system models, MSCP), namely the standard DRASTIC method and the Susceptibility Index (SI). At present, these two methods are the most used for mapping the intrinsic vulnerability of groundwater. Two classes of groundwater vulnerability in the Biskra sandy aquifer were identified by the DRASTIC method (average and high) and the SI method (very high and high). Integrated analysis revealed that the high class is predominant for the DRASTIC method, whereas for SI the very high class predominates. Furthermore, we notice that the SI method better estimates vulnerability to nitrate pollution, with an agreement rate of 85% between groundwater nitrate concentrations and the established vulnerability classes, against 75% for the DRASTIC method. By including the land use parameter, the SI method produced more realistic results.
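
A toy illustration of the DRASTIC index computation referenced above: a weighted sum of ratings for the seven hydrogeological parameters using the standard DRASTIC weights. The cell ratings are hypothetical, not values from the Biskra aquifer.

```python
# Standard DRASTIC weights: Depth to water, net Recharge, Aquifer media,
# Soil media, Topography, Impact of vadose zone, hydraulic Conductivity.
weights = {"D": 5, "R": 4, "A": 3, "S": 2, "T": 1, "I": 5, "C": 3}

def drastic_index(ratings):
    """ratings: dict of 1-10 ratings for the seven DRASTIC parameters."""
    return sum(weights[k] * ratings[k] for k in weights)

cell = {"D": 7, "R": 6, "A": 8, "S": 6, "T": 9, "I": 8, "C": 4}   # one hypothetical grid cell
print(drastic_index(cell))          # higher index = higher intrinsic vulnerability
```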

Keywords: DRASTIC, SI, GIS, Biskra sandy aquifer, Algeria

Procedia PDF Downloads 468
519 Measuring Banking Risk

Authors: Mike Tsionas

Abstract:

The paper develops new indices of financial stability based on an explicit model of expected utility maximization by financial institutions subject to the classical technology restrictions of neoclassical production theory. The model can be estimated using standard econometric techniques, like GMM for dynamic panel data and latent factor analysis for the estimation of covariance matrices. An explicit functional form for the utility function is not needed, and we show how measures of risk aversion and prudence (downside risk aversion) can be derived and estimated from the model. The model is estimated using data for Eurozone countries, and we focus particularly on (i) the use of the modeling approach as an “early warning mechanism”, (ii) the bank- and country-specific estimates of risk aversion and prudence (downside risk aversion), and (iii) the derivation of a generalized measure of risk that relies on loan-price uncertainty.

Keywords: financial stability, banking, expected utility maximization, sub-prime crisis, financial crisis, eurozone, PIIGS

Procedia PDF Downloads 322
518 The Social Origin Pay Gap in the UK Household Longitudinal Study

Authors: Michael Vallely

Abstract:

This paper uses data from waves 1 to 10 (2009-2019) of the UK Household Longitudinal Study to examine the social origin pay gap in the UK labour market. We find that regardless of how we proxy social origin, whether it be using the dominance approach, total parental occupation, parental education, total parental education, or the higher parental occupation and higher parental education, the results have one thing in common; in all cases, we observe a significant social origin pay gap for those from the lower social origins with the largest pay gap observed for those from the ‘lowest’ social origin. The results may indicate that when we consider the occupational status and education of both parents, previous estimates of social origin pay gaps and the number of individuals affected may have been underestimated. We also observe social origin pay gaps within educational attainment groups, such as degree holders, and within professional and managerial occupations. Therefore, this paper makes a valuable contribution to the social origin pay gap literature as it provides empirical evidence of a social origin pay gap using a large-scale UK dataset and challenges the argument that education is the great ‘social leveller’.

Keywords: social class, social origin, pay gaps, wage inequality

Procedia PDF Downloads 120
517 Reduced Power Consumption by Randomization for DSI3

Authors: David Levy

Abstract:

The newly released Distributed System Interface 3 (DSI3) Bus Standard specification defines 3 modulation levels from which 16 valid symbols are coded. This structure causes the power consumption to vary with the transmitted data by a factor of more than 2 between minimum and maximum. The power generation unit therefore has to be designed for the worst-case maximum consumption at all times. This paper proposes a method to reduce both the average current consumption and the worst-case current consumption. The transmitter randomizes the data using several pseudo-random sequences. It then estimates the energy consumption of the generated frames and selects for transmission the one that consumes the least. The transmitter also prepends the index of the pseudo-random sequence, which is not randomized, to allow the receiver to recover the original data using the correct sequence. We show that when the frame occupies most of the DSI3 synchronization period, the average power consumption is reduced by up to 13% and the worst-case power consumption by 17.7%.
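
An illustrative sketch of the selection step (the per-symbol energy table and scrambling sequences are made up, not taken from the DSI3 specification): scramble the payload with each candidate pseudo-random sequence, estimate the frame energy, and transmit the cheapest frame with its unscrambled sequence index prepended.

```python
import random

SYMBOL_ENERGY = {s: 1.0 + 1.5 * bin(s).count("1") / 4 for s in range(16)}  # toy 16-symbol cost table

def frame_energy(symbols):
    return sum(SYMBOL_ENERGY[s] for s in symbols)

def scramble(symbols, seed):
    rng = random.Random(seed)
    return [s ^ rng.randrange(16) for s in symbols]    # XOR with a pseudo-random sequence

def encode(payload, n_sequences=4):
    candidates = [(frame_energy(scramble(payload, k)), k, scramble(payload, k))
                  for k in range(n_sequences)]
    energy, k, frame = min(candidates)                 # pick the lowest-energy frame
    return [k] + frame                                 # prepend the sequence index for the receiver

payload = [random.randrange(16) for _ in range(32)]
print(encode(payload)[:8])
```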

Keywords: DSI3, energy, power consumption, randomization

Procedia PDF Downloads 513
516 Volatility Transmission between Oil Price and Stock Return of Emerging and Developed Countries

Authors: Algia Hammami, Abdelfatteh Bouri

Abstract:

In this work, our objective is to study the transmission of volatility between oil and stock markets in developed (USA, Germany, Italy, France and Japan) and emerging countries (Tunisia, Thailand, Brazil, Argentina, and Jordan) for the period 1998-2015. Our methodology consists of analyzing monthly data with the GARCH-BEKK model to capture the effect of oil price volatility on the different stock markets. The empirical results for the emerging countries indicate that the relationships are unidirectional from the stock market to the oil market. For the developed countries, we find that the transmission of volatility is unidirectional from the oil market to the stock market. For the USA and Italy, we find no transmission between the two markets. The transmission is bi-directional only in Thailand. Based on our estimates, we also notice that the emerging countries are influenced to almost the same extent as the developed countries, although there is a big difference in the transmission of volatility. The GARCH-BEKK model is more effective than the other versions in minimizing the risk of an oil-stock portfolio.
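
A minimal sketch of the bivariate BEKK(1,1) covariance recursion behind such spillover analyses, in one common parameterization H_t = CC' + A e_{t-1} e_{t-1}' A' + B H_{t-1} B'; the off-diagonal elements of A and B carry the volatility transmission. Parameter values and residuals below are arbitrary, not estimates from the paper.

```python
import numpy as np

def bekk_cov_path(eps, C, A, B):
    """eps: (T, 2) residuals for (oil, stock); returns the (T, 2, 2) path of H_t."""
    T = eps.shape[0]
    H = np.empty((T, 2, 2))
    H[0] = np.cov(eps.T)                       # initialize with the sample covariance
    for t in range(1, T):
        e = eps[t - 1][:, None]
        H[t] = C @ C.T + A @ (e @ e.T) @ A.T + B @ H[t - 1] @ B.T
    return H

rng = np.random.default_rng(0)
eps = rng.normal(0, 1, (240, 2)) * [0.08, 0.05]          # placeholder monthly residuals
C = np.array([[0.02, 0.0], [0.01, 0.02]])                # lower-triangular constant term
A = np.array([[0.30, 0.05], [0.08, 0.25]])               # shock spillovers
B = np.array([[0.90, 0.02], [0.03, 0.92]])               # volatility spillovers
H = bekk_cov_path(eps, C, A, B)
print(H[-1])                                             # latest conditional covariance matrix
```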

Keywords: GARCH, oil prices, stock market, volatility transmission

Procedia PDF Downloads 414
515 Role of Macro and Technical Indicators in Equity Risk Premium Prediction: A Principal Component Analysis Approach

Authors: Naveed Ul Hassan, Bilal Aziz, Maryam Mushtaq, Imran Ameen Khan

Abstract:

The equity risk premium (ERP) is the stock return in excess of the risk-free return. Although it is an essential topic in finance, there is still no common consensus on its forecasting. For forecasting ERP, attention is devoted to technical indicators as well as macroeconomic variables. For this purpose, a set of 14 technical and 14 macroeconomic variables is selected, and all forecasts are generated based on a standard predictive regression framework, where ERP is regressed on a constant and a lag of a macroeconomic variable or technical indicator. The comparative results show that technical indicators provide better indications of ERP than macroeconomic variables. The relative strength of ERP predictability is also investigated using National Bureau of Economic Research (NBER) data on business cycle expansions and recessions, and ERP predictability is found to be more than twice as strong in recessions as in expansions.
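
A sketch of the standard predictive regression ERP_{t+1} = a + b·x_t + e_{t+1}, run separately for each candidate predictor. The column names and data are hypothetical, not the 28 predictors used in the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "erp": rng.normal(0.004, 0.04, n),                    # placeholder monthly excess returns
    "dividend_yield": rng.normal(0.03, 0.005, n),         # a macroeconomic predictor
    "ma_signal": rng.integers(0, 2, n).astype(float),     # a moving-average technical indicator
})

for predictor in ["dividend_yield", "ma_signal"]:
    X = sm.add_constant(df[predictor].shift(1)).dropna()  # lagged predictor plus constant
    y = df["erp"].loc[X.index]
    fit = sm.OLS(y, X).fit()
    print(predictor, "slope t-stat:", round(fit.tvalues[predictor], 2),
          "in-sample R2:", round(fit.rsquared, 4))
```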

Keywords: equity risk premium, forecasting, macroeconomic indicators, technical indicators

Procedia PDF Downloads 284
514 Cascaded Neural Network for Internal Temperature Forecasting in Induction Motor

Authors: Hidir S. Nogay

Abstract:

In this study, two systems were created to predict the interior temperature of an induction motor. One of them consisted of a simple ANN model with two layers, ten input parameters, and one output parameter. The other consisted of eight ANN models connected to each other in cascade. The cascaded ANN system has 17 inputs. The main reason for using a cascaded system in this study is to accomplish more accurate estimation by increasing the number of inputs to the ANN system. The cascaded ANN system is compared with the simple conventional ANN model to demonstrate these advantages. The dataset was obtained from experimental applications; a small part of it was used to obtain more readable graphs. The number of data points is 329, and 30% of the data was used for testing and validation. Test data and validation data were determined for each ANN model separately, and the reliability of each model was tested. As a result of this study, it has been understood that the cascaded ANN system produced more accurate estimates than the conventional ANN model.
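
A schematic sketch of the cascade idea (the split of the 17 inputs across stages, the number of stages, and the layer sizes are assumptions, not the paper's topology): each stage is a small MLP whose estimate is appended to the inputs of the next stage, so later stages see both raw measurements and earlier estimates.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 329
X = rng.normal(size=(n, 17))                                  # 17 measured inputs
y = X[:, :5].sum(axis=1) + 0.5 * np.sin(X[:, 6]) + rng.normal(0, 0.1, n)  # toy temperature target

groups = np.array_split(np.arange(17), 3)   # feed the inputs to the cascade in three groups
carry = np.empty((n, 0))
for g in groups:
    stage_in = np.hstack([X[:, g], carry])
    stage = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000, random_state=0)
    stage.fit(stage_in, y)
    carry = np.hstack([carry, stage.predict(stage_in)[:, None]])   # pass the estimate onward

print("final-stage training RMSE:", np.sqrt(np.mean((carry[:, -1] - y) ** 2)))
```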

Keywords: cascaded neural network, internal temperature, inverter, three-phase induction motor

Procedia PDF Downloads 325
513 An Efficient Separation for Convolutive Mixtures

Authors: Salah Al-Din I. Badran, Samad Ahmadi, Dylan Menzies, Ismail Shahin

Abstract:

This paper describes a new efficient blind source separation method, in which we use a non-uniform filter bank and a new structure with different sub-bands. This method provides reduced permutation and increased convergence speed compared to the full-band algorithm. Recently, some structures have been suggested to deal with two problems: reducing permutation and increasing the speed of convergence of the adaptive algorithm for correlated input signals. The permutation problem is avoided with the use of adaptive filters of order lower than that of the full-band adaptive filter, which operate at a sampling rate lower than the sampling rate of the input signal. The signals decomposed by the analysis filter bank are less correlated in each sub-band than the input signal at full band, and can promote better rates of convergence.

Keywords: Blind source separation, estimates, full-band, mixtures, sub-band

Procedia PDF Downloads 428
512 Model-Based Control for Piezoelectric-Actuated Systems Using Inverse Prandtl-Ishlinskii Model and Particle Swarm Optimization

Authors: Jin-Wei Liang, Hung-Yi Chen, Lung Lin

Abstract:

In this paper, a feedforward controller is designed to eliminate the nonlinear hysteresis behavior of a piezoelectric stack actuator (PSA) driven system. The control design is based on an inverse Prandtl-Ishlinskii (P-I) hysteresis model identified using the particle swarm optimization (PSO) technique. Based on the identified P-I model, both the inverse P-I hysteresis model and the feedforward controller can be determined. Experimental results obtained using the inverse P-I feedforward control are compared with their counterparts using hysteresis estimates obtained from the identified Bouc-Wen model. The effectiveness of the proposed feedforward control scheme is demonstrated. To improve control performance, feedback compensation using a traditional PID scheme is integrated with the feedforward controller.
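
A minimal sketch of the forward Prandtl-Ishlinskii model referenced above: the output is a weighted sum of play (backlash) operators with thresholds r_i. The thresholds and weights below are illustrative; in the paper they are identified with PSO.

```python
import numpy as np

def play_operator(x, r, z0=0.0):
    z = np.empty_like(x)
    prev = z0
    for k, xk in enumerate(x):
        prev = max(xk - r, min(xk + r, prev))   # backlash (play) recursion
        z[k] = prev
    return z

def prandtl_ishlinskii(x, thresholds, weights):
    return sum(w * play_operator(x, r) for r, w in zip(thresholds, weights))

t = np.linspace(0, 4 * np.pi, 800)
u = 5 * np.sin(t) * np.exp(-0.05 * t)                 # toy driving voltage profile
thresholds = np.array([0.0, 0.5, 1.0, 2.0, 3.0])      # assumed play radii
weights = np.array([0.8, 0.3, 0.2, 0.15, 0.1])        # assumed weights
y = prandtl_ishlinskii(u, thresholds, weights)        # hysteretic displacement output
print(y[:5])
```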

Keywords: the Bouc-Wen hysteresis model, particle swarm optimization, Prandtl-Ishlinskii model, automation engineering

Procedia PDF Downloads 495
511 Estimating Lost Digital Video Frames Using Unidirectional and Bidirectional Estimation Based on Autoregressive Time Model

Authors: Navid Daryasafar, Nima Farshidfar

Abstract:

In this article, we attempt to conceal errors in video with an emphasis on the temporal use of autoregressive (AR) models. To formulate this problem, we assume that all information in one or more video frames is lost. Lost frames are then estimated using the temporal information of corresponding pixels in successive frames. Accordingly, after presenting autoregressive models and how they are applied to estimate lost frames, two general methods of using these models are presented. The first method, which is the standard autoregressive approach, estimates the lost frame unidirectionally; in this case, information from previous frames is used to estimate the lost frame. In the second method, information from both the previous and the next frames is used to estimate the lost frame, so this method is known as bidirectional estimation. A series of tests is then carried out to assess the performance of each method in different modes, and the results are compared.
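
A toy sketch of the two estimators on per-pixel time series (frame size and AR order are arbitrary, not the paper's settings): fit AR coefficients by least squares on the available frames, predict the missing frame forward only (unidirectional), and average forward and backward predictions (bidirectional).

```python
import numpy as np

def ar_predict(history, p=3):
    """history: (T, H, W) frames preceding the lost frame (pass reversed frames for backward)."""
    T, H, W = history.shape
    X = np.stack([history[T - p + k] for k in range(p)], axis=0).reshape(p, -1)   # last p frames
    rows = [history[t:t + p].reshape(p, -1) for t in range(T - p)]                # training windows
    targets = [history[t + p].reshape(-1) for t in range(T - p)]
    A = np.stack(rows, axis=0)            # (N, p, H*W)
    b = np.stack(targets, axis=0)         # (N, H*W)
    pred = np.empty(H * W)
    for j in range(H * W):
        coef, *_ = np.linalg.lstsq(A[:, :, j], b[:, j], rcond=None)   # per-pixel AR fit
        pred[j] = X[:, j] @ coef
    return pred.reshape(H, W)

rng = np.random.default_rng(0)
video = rng.normal(size=(14, 8, 8)).cumsum(axis=0)    # synthetic slowly varying frames
lost_idx = 6
uni = ar_predict(video[:lost_idx])                            # unidirectional (forward only)
bi = 0.5 * (uni + ar_predict(video[lost_idx + 1:][::-1]))     # bidirectional (forward + backward)
print(np.abs(uni - video[lost_idx]).mean(), np.abs(bi - video[lost_idx]).mean())
```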

Keywords: error steganography, unidirectional estimation, bidirectional estimation, AR linear estimation

Procedia PDF Downloads 512
510 Simple Rheological Method to Estimate the Branch Structures of Polyethylene under Reactive Modification

Authors: Mahdi Golriz

Abstract:

The aim of this work is to show that the change in molecular structure of linear low-density polyethylene (LLDPE) during peroxide modification can be detected by a simple rheological method. For this purpose, a commercial grade LLDPE (Exxon Mobile™ LL4004EL) was reacted with different doses of dicumyl peroxide (DCP). The samples were analyzed by size-exclusion chromatography coupled with a light scattering detector. The dynamic shear oscillatory measurements showed a deviation of the δ-|G*| curve from that of the linear LLDPE, which can be attributed to the presence of long-chain branching (LCB). Using a simple method based on melt rheology, the transformations in molecular architecture induced in an originally linear low-density polyethylene during the early stages of reactive modification were identified. Reasonable and consistent estimates are obtained for the degree of LCB and the volume fractions of the various molecular species produced in the peroxide modification of LLDPE.

Keywords: linear low-density polyethylene, peroxide modification, long-chain branching, rheological method

Procedia PDF Downloads 131
509 Convergence Analysis of Cubic B-Spline Collocation Method for Time Dependent Parabolic Advection-Diffusion Equations

Authors: Bharti Gupta, V. K. Kukreja

Abstract:

A comprehensive numerical study is presented for the solution of time-dependent advection-diffusion problems using the cubic B-spline collocation method. The linear combination of cubic B-spline basis functions, taken as the approximating function, is evaluated using the zeros of shifted Chebyshev polynomials as collocation points in each element to obtain the best approximation. A comparison, on the basis of efficiency and accuracy, with previous techniques is made, which confirms the superiority of the proposed method. An asymptotic convergence analysis of the technique is also discussed, and the method is found to be of order two. The theoretical analysis is supported with suitable examples showing second-order convergence of the technique. Different numerical examples are simulated using MATLAB, in which 3-D graphical presentations are given at different time steps as well as over different domains of interest.

Keywords: cubic B-spline basis, spectral norms, shifted Chebyshev polynomials, collocation points, error estimates

Procedia PDF Downloads 203
508 Continuous Land Cover Change Detection in Subtropical Thicket Ecosystems

Authors: Craig Mahlasi

Abstract:

The Subtropical Thicket Biome has been in peril of transformation. Estimates indicate that as much as 63% of the Subtropical Thicket Biome is severely degraded. Agricultural expansion is the main driver of transformation. While several studies have sought to document and map the long-term transformations, there is a lack of information on disturbance events that would allow for timely intervention by authorities. Furthermore, tools that seek to perform continuous land cover change detection are often developed for forests and thus tend to perform poorly in thicket ecosystems. This study investigates the utility of Earth Observation data for continuous land cover change detection in Subtropical Thicket ecosystems. Temporal neural networks are implemented on a time series of Sentinel-2 observations. The model obtained 0.93 accuracy, a recall score of 0.93, and a precision score of 0.91 in detecting thicket disturbances. The study demonstrates the potential of continuous land cover change detection in Subtropical Thicket ecosystems.

Keywords: remote sensing, land cover change detection, subtropical thickets, near-real time

Procedia PDF Downloads 138
507 Generalized Additive Model Approach for the Chilean Hake Population in a Bio-Economic Context

Authors: Selin Guney, Andres Riquelme

Abstract:

The traditional bio-economic method for fisheries modeling uses estimates of the growth parameters and the system carrying capacity from a biological model of the population dynamics (usually a logistic population growth model), which is then analyzed as a traditional production function. The stock dynamic is transformed into a revenue function and then compared with the extraction costs to estimate the maximum economic yield. In this paper, the logistic population growth model for the population is combined with a forecast of the abundance and location of the stock obtained using a generalized additive model approach. The paper focuses on the Chilean hake population. This method allows for the incorporation of climatic variables and interactions with other marine species, which in turn will increase the reliability of the estimates and generate better extraction paths for different conservation objectives, such as the maximum biological yield or the maximum economic yield.
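
A minimal sketch of the abundance-forecasting component only, assuming the pygam library and hypothetical predictors (sea-surface temperature, depth, month); the coupling to the logistic growth and revenue model described above is not shown here.

```python
import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(0)
n = 500
sst = rng.uniform(10, 18, n)          # sea-surface temperature (deg C), placeholder
depth = rng.uniform(50, 400, n)       # haul depth (m), placeholder
month = rng.integers(1, 13, n).astype(float)
abundance = (np.exp(-((sst - 13) ** 2) / 4) * 50 + 0.05 * depth
             + 5 * np.sin(2 * np.pi * month / 12) + rng.normal(0, 2, n))

X = np.column_stack([sst, depth, month])
gam = LinearGAM(s(0) + s(1) + s(2)).fit(X, abundance)   # one smooth term per covariate
gam.summary()
```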

Keywords: bio-economic, fisheries, GAM, production

Procedia PDF Downloads 230
506 Thermodynamics of Aqueous Solutions of Organic Molecule and Electrolyte: Use Cloud Point to Obtain Better Estimates of Thermodynamic Parameters

Authors: Jyoti Sahu, Vinay A. Juvekar

Abstract:

Electrolytes are often used to bring about salting-in and salting-out of organic molecules and polymers (e.g., polyethylene glycols/proteins) from aqueous solutions. For quantification of these phenomena, a thermodynamic model which can accurately predict the activity coefficient of the electrolyte as a function of temperature is needed. The thermodynamic models available in the literature contain a large number of empirical parameters. These parameters are estimated using the lower/upper critical solution temperature of the solution in the electrolyte/organic molecule at different temperatures. Since the number of parameters is large, inaccuracies can creep in during their estimation, which can affect the reliability of prediction beyond the range in which these parameters are estimated. The cloud point of a solution is related to its free energy through temperature and composition derivatives. Hence, cloud point measurements can be used for accurate estimation of the temperature and composition dependence of the parameters in the free energy model. Therefore, if we use a two-pronged procedure in which we first use the cloud point of the solution to estimate some of the parameters of the thermodynamic model and determine the rest using osmotic coefficient data, we gain on two counts. First, since the parameters estimated in each of the two steps are fewer, we achieve higher accuracy of estimation. The second and more important gain is that the resulting model parameters are more sensitive to temperature. This is crucial when we wish to use the model outside the temperature window within which the parameters are estimated. The focus of the present work is to prove this proposition. We have used electrolyte (NaCl/Na2CO3)-water-organic molecule (iso-propanol/ethanol) as the model system. The Robinson-Stokes-Glueckauf model is modified by incorporating temperature-dependent Flory-Huggins interaction parameters. The Helmholtz free energy expression contains, in addition to the electrostatic and translational entropic contributions, three Flory-Huggins pairwise interaction contributions, viz. the w-p, w-s, and p-s interactions (w-water, p-polymer, s-salt). These parameters depend both on temperature and concentration. The concentration dependence is expressed in the form of a quadratic expression involving the volume fractions of the interacting species, and the temperature dependence is expressed in a prescribed parametric form. To obtain the temperature-dependent interaction parameters for the organic molecule-water and electrolyte-water systems, the critical solution temperature of the electrolyte-water-organic molecule system is measured using a cloud point measuring apparatus. The temperature and composition dependent interaction parameters for the electrolyte-water-organic molecule system are estimated through measurement of the cloud point of the solution. The model is used to estimate the critical solution temperature (CST) of electrolyte-water-organic molecule solutions. We have experimentally determined the critical solution temperature of different compositions of the electrolyte-water-organic molecule solution and compared the results with the estimates based on our model. The two sets of values show good agreement. On the other hand, when only osmotic coefficients are used for estimation of the free energy model, the CST predicted using the resulting model shows poor agreement with the experiments. Thus, the importance of CST data in the estimation of the parameters of the thermodynamic model is confirmed through this work.

Keywords: concentrated electrolytes, Debye-Hückel theory, interaction parameters, Robinson-Stokes-Glueckauf model, Flory-Huggins model, critical solution temperature

Procedia PDF Downloads 366
505 Bayesian Meta-Analysis to Account for Heterogeneity in Studies Relating Life Events to Disease

Authors: Elizabeth Stojanovski

Abstract:

Associations between life events and various forms of cancer have been identified. The purpose of a recent random-effects meta-analysis was to identify studies that examined the association between breast cancer risk and adverse events associated with changes to financial status, including decreased income. The same association was examined in four separate studies that were not consistent in traits such as study design, location, and time frame. It was of interest to pool information from the various studies to help identify characteristics that differentiated the study results. Two random-effects Bayesian meta-analysis models are proposed to combine the reported estimates of the described studies. The proposed models allow major sources of variation to be taken into account, including study-level characteristics, between-study variance, and within-study variance, and illustrate the ease with which uncertainty can be incorporated using a hierarchical Bayesian modelling approach.
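
A minimal sketch of a hierarchical random-effects meta-analysis of this kind, assuming PyMC is available; the effect estimates and standard errors below are made-up numbers standing in for the four studies' reported values, and study-level covariates are omitted.

```python
import numpy as np
import pymc as pm
import arviz as az

# hypothetical per-study effect estimates (e.g. log relative risks) and standard errors
y = np.array([0.18, 0.35, -0.05, 0.22])
se = np.array([0.10, 0.15, 0.12, 0.20])

with pm.Model() as re_meta:
    mu = pm.Normal("mu", 0.0, 1.0)                       # pooled effect
    tau = pm.HalfNormal("tau", 0.5)                      # between-study standard deviation
    theta = pm.Normal("theta", mu, tau, shape=len(y))    # study-specific true effects
    pm.Normal("obs", theta, se, observed=y)              # within-study sampling variation
    idata = pm.sample(2000, tune=1000, target_accept=0.9, random_seed=1)

print(az.summary(idata, var_names=["mu", "tau"]))
```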

Keywords: random-effects, meta-analysis, Bayesian, variation

Procedia PDF Downloads 139
504 Estimation of Seismic Deformation Demands of Tall Buildings with Symmetric Setbacks

Authors: Amir Alirezaei, Shahram Vahdani

Abstract:

This study estimates the seismic demands of tall buildings with central symmetric setbacks by using nonlinear time history analysis. Three setback structures, all 60 stories high with setbacks at three levels, are used for the evaluation. The effects of the irregularities caused by the setbacks are evaluated by determining the global drift, story displacement, and story drift. Story displacement is modified by the roof displacement and the first-story displacement, and story drift is modified by the global drift. All results are calculated at the center of mass and in the x and y directions, and the absolute values of these quantities are determined. The results show that increasing the vertical irregularity increases the global drift of the structure and enlarges the deformations over the height of the structure. It is also observed that the effects of geometric irregularity on the seismic deformations of setback structures are greater than those of mass irregularity.

Keywords: deformation demand, drift, setback, tall building

Procedia PDF Downloads 402