Search results for: conditional random fields
4438 Combining a Continuum of Hidden Regimes and a Heteroskedastic Three-Factor Model in Option Pricing
Authors: Rachid Belhachemi, Pierre Rostan, Alexandra Rostan
Abstract:
This paper develops a discrete-time option pricing model for index options. The model consists of two key ingredients. First, daily stock return innovations are driven by a continuous hidden threshold mixed skew-normal (HTSN) distribution, which generates the conditional non-normality needed to fit daily index returns. The most important feature of the HTSN is the inclusion of a latent state variable with a continuum of states, unlike traditional mixture distributions where the state variable is discrete with a small number of states. The HTSN distribution belongs to the class of univariate probability distributions whose parameters capture the dependence between the variable of interest and the continuous latent state variable (the regime). The distribution has an interpretation in terms of a mixture distribution with time-varying mixing probabilities. It has been shown empirically that this distribution outperforms its main competitor, the mixed normal (MN) distribution, in capturing the stylized facts known for stock returns, namely volatility clustering, leverage effect, skewness, kurtosis and regime dependence. Second, heteroscedasticity in the model is captured by a three-exogenous-factor GARCH model (GARCHX), where the factors are extracted from a matrix of world indices by principal component analysis (PCA), and an application to option pricing is presented. The empirically determined factors are uncorrelated and represent truly different common components driving the returns. Both the factors and the eight parameters inherent to the HTSN distribution aim at capturing the impact of the state of the economy on price levels, since the distribution parameters have economic interpretations in terms of conditional volatilities and correlations of the returns with the hidden continuous state. The PCA identifies statistically independent factors affecting the random evolution of a given pool of assets -in our paper a pool of international stock indices- and sorts them by order of relative importance. The PCA computes a historical cross-asset covariance matrix and identifies principal components representing independent factors. In our paper, the factors are used to calibrate the HTSN-GARCHX model and are ultimately responsible for the nature of the distribution of the random variables being generated. We benchmark our model against the MN-GARCHX model following the same PCA methodology and against the standard Black-Scholes model. We show that our model outperforms the MN-GARCHX benchmark in terms of RMSE in dollar losses for put and call options, which in turn outperforms the analytical Black-Scholes model by capturing the stylized facts known for index returns, namely volatility clustering, leverage effect, skewness, kurtosis and regime dependence.
Keywords: continuous hidden threshold, factor models, GARCHX models, option pricing, risk-premium
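A minimal sketch of the kind of GARCHX(1,1) variance recursion described in the abstract above, with exogenous PCA factors entering the conditional variance. The parameter values, factor series, and function name are illustrative assumptions, not the calibrated HTSN-GARCHX model from the paper.

```python
import numpy as np

def garchx_variance(returns, factors, omega=1e-6, alpha=0.05, beta=0.90, gamma=None):
    """Conditional variance h_t = omega + alpha*eps_{t-1}^2 + beta*h_{t-1} + gamma'x_{t-1}.

    returns: array of length T (demeaned daily returns)
    factors: T x K array of exogenous factors (e.g., PCA components of world indices)
    Parameter values here are placeholders, not calibrated estimates.
    """
    T, K = factors.shape
    gamma = np.full(K, 1e-7) if gamma is None else np.asarray(gamma)
    h = np.empty(T)
    h[0] = np.var(returns)                          # initialize at the sample variance
    for t in range(1, T):
        h[t] = (omega
                + alpha * returns[t - 1] ** 2       # ARCH term
                + beta * h[t - 1]                   # GARCH persistence
                + gamma @ np.abs(factors[t - 1]))   # exogenous factor contribution (kept positive)
    return h

# toy usage with simulated data
rng = np.random.default_rng(0)
r = rng.normal(0, 0.01, 1000)
x = rng.normal(0, 1, (1000, 3))
h = garchx_variance(r, x)
```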
Procedia PDF Downloads 297
4437 Modelling the Dynamics of Corporate Bonds Spreads with Asymmetric GARCH Models
Authors: Sélima Baccar, Ephraim Clark
Abstract:
This paper offers a new perspective on the analysis of credit spreads. A comprehensive empirical analysis of the conditional variance of credit spread indices is performed using various GARCH models. Based on a comparison between traditional and asymmetric GARCH models with alternative functional forms of the conditional density, we intend to identify the macroeconomic and financial factors that have driven daily changes in US Dollar credit spreads in the period from January 2011 through January 2013. The results reveal a strong interdependence between credit spreads and explanatory factors related to interest rate conditions, the state of the stock market, bond market liquidity and exchange rate risk. The empirical findings support the use of asymmetric GARCH models: the AGARCH and GJR models outperform the traditional GARCH in credit spread modelling. We also show that the leptokurtic Student-t assumption is better than the Gaussian distribution and improves the quality of the estimates, whatever the rating or maturity.
Keywords: corporate bonds, default risk, credit spreads, asymmetric GARCH models, Student-t distribution
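A compact sketch of the GJR-type asymmetric variance recursion referred to above; the coefficients are illustrative placeholders rather than estimates from the study.

```python
import numpy as np

def gjr_garch_variance(eps, omega=1e-6, alpha=0.05, gamma=0.08, beta=0.88):
    """GJR-GARCH(1,1): h_t = omega + (alpha + gamma*I[eps_{t-1}<0])*eps_{t-1}^2 + beta*h_{t-1}.

    The extra gamma term raises volatility after negative shocks (the leverage effect).
    Parameter values are illustrative placeholders, not estimates from the paper.
    """
    h = np.empty(len(eps))
    h[0] = np.var(eps)
    for t in range(1, len(eps)):
        leverage = gamma if eps[t - 1] < 0 else 0.0   # asymmetry indicator
        h[t] = omega + (alpha + leverage) * eps[t - 1] ** 2 + beta * h[t - 1]
    return h
```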
Procedia PDF Downloads 475
4436 A Periodogram-Based Spectral Method Approach: The Relationship between Tourism and Economic Growth in Turkey
Authors: Mesut BALIBEY, Serpil TÜRKYILMAZ
Abstract:
A popular topic in the econometrics and time series area is the cointegrating relationships among the components of a nonstationary time series. Engle and Granger's least squares method and Johansen's conditional maximum likelihood method are the most widely used methods to determine the relationships among variables. Furthermore, a method proposed to test for a unit root based on the periodogram ordinates has certain advantages over conventional tests: periodograms can be calculated without any model specification, and the exact distribution under the assumption of a unit root is obtained. For higher order processes the distribution remains the same asymptotically. In this study, in order to illustrate these advantages of the periodogram approach over conventional tests, we examine a possible relationship between tourism and economic growth during the period 1999:01-2010:12 for Turkey by using the periodogram method, Johansen's conditional maximum likelihood method, and Engle and Granger's ordinary least squares method.
Keywords: cointegration, economic growth, periodogram ordinate, tourism
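A small sketch of computing periodogram ordinates at the Fourier frequencies, the quantity the test above is built on; the scaling convention used here is one common choice and may differ from the paper's.

```python
import numpy as np

def periodogram_ordinates(x):
    """Periodogram I(w_j) = |sum_t x_t exp(-i w_j t)|^2 / (2*pi*n) at Fourier
    frequencies w_j = 2*pi*j/n, computed without any model specification."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    x = x - x.mean()
    dft = np.fft.rfft(x)
    freqs = 2 * np.pi * np.arange(len(dft)) / n
    I = np.abs(dft) ** 2 / (2 * np.pi * n)
    return freqs[1:], I[1:]          # drop the zero frequency

# toy usage: a random walk (unit-root process) has a periodogram dominated by low frequencies
rng = np.random.default_rng(1)
w, I = periodogram_ordinates(np.cumsum(rng.normal(size=144)))
```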
Procedia PDF Downloads 270
4435 Influence of Random Fibre Packing on the Compressive Strength of Fibre Reinforced Plastic
Authors: Y. Wang, S. Zhang, X. Chen
Abstract:
The longitudinal compressive strength of fibre reinforced plastic (FRP) exhibits large stochastic variability, which limits the efficient application of composite structures. This study addresses how random fibre packing affects the uncertainty of FRP compressive strength. A novel approach is proposed to generate random fibre packing configurations by a combination of Latin hypercube sampling and random sequential expansion. A 3D nonlinear finite element model is built which incorporates both matrix plasticity and fibre geometrical instability. The matrix is modeled by isotropic ideal elasto-plastic solid elements, and the fibres are modeled by linear-elastic rebar elements. Composites with a series of different nominal fibre volume fractions are studied. Premature fibre waviness of different magnitudes and directions is introduced in the finite element model. Compressive tests on uni-directional CFRP (carbon fibre reinforced plastic) are conducted following ASTM D6641. A comparison of the 3D FE models and the compressive tests clearly shows that the stochastic variation of compressive strength is partly caused by the random fibre packing, and a normal or lognormal distribution tends to be a good fit for the probabilistic compressive strength. Furthermore, it is also observed that different random fibre packing can trigger two different fibre micro-buckling modes under longitudinal compression: out-of-plane buckling and twisted buckling. The out-of-plane buckling mode results in a much larger compressive strength, and this is the major reason why the random fibre packing produces a large uncertainty in FRP compressive strength. This study contributes to new approaches to the quality control of FRP aimed at higher compressive strength or lower uncertainty.
Keywords: compressive strength, FRP, micro-buckling, random fibre packing
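A minimal sketch of the generic Latin hypercube sampling step mentioned above; the subsequent random sequential expansion that turns samples into fibre positions is not shown, and the sample sizes are arbitrary.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, seed=None):
    """Basic Latin hypercube sample on the unit hypercube: each dimension is split into
    n_samples equal strata and exactly one point is drawn per stratum, with the strata
    randomly permuted across dimensions."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=(n_samples, n_dims))                       # jitter within each stratum
    strata = np.array([rng.permutation(n_samples) for _ in range(n_dims)]).T
    return (strata + u) / n_samples

# toy usage: 50 samples of (x, y) fibre-centre coordinates in a unit cell
pts = latin_hypercube(50, 2, seed=42)
```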
Procedia PDF Downloads 273
4434 Comparison of Different Machine Learning Algorithms for Solubility Prediction
Authors: Muhammet Baldan, Emel Timuçin
Abstract:
Molecular solubility prediction plays a crucial role in various fields, such as drug discovery, environmental science, and material science. In this study, we compare the performance of five machine learning algorithms—linear regression, support vector machines (SVM), random forests, gradient boosting machines (GBM), and neural networks—for predicting molecular solubility using the AqSolDB dataset. The dataset consists of 9981 data points with their corresponding solubility values. MACCS keys (166 bits), RDKit properties (20 properties), and structural properties (3) are extracted for every SMILES representation in the dataset, giving a total of 189 features used for training and testing for every molecule. Each algorithm is trained on a subset of the dataset and evaluated using accuracy scores. Additionally, the computational time for training and testing is recorded to assess the efficiency of each algorithm. Our results demonstrate that the random forest model outperformed the other algorithms in terms of predictive accuracy, achieving a 0.93 accuracy score. Gradient boosting machines and neural networks also exhibit strong performance, closely followed by support vector machines. Linear regression, while simpler in nature, demonstrates competitive performance but with slightly higher errors compared to the ensemble methods. Overall, this study provides valuable insights into the performance of machine learning algorithms for molecular solubility prediction, highlighting the importance of algorithm selection in achieving accurate and efficient predictions in practical applications.
Keywords: random forest, machine learning, comparison, feature extraction
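A minimal sketch of the random forest workflow described above. Since the abstract reports accuracy scores, this sketch assumes a binary soluble/insoluble label; the synthetic feature matrix stands in for the 189 AqSolDB-derived descriptors and is not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder data standing in for the 189 molecular descriptors (MACCS + RDKit + structural)
# and a binary soluble/insoluble label; the real study uses AqSolDB-derived features.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 189))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
```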
Procedia PDF Downloads 42
4433 Analysis of National Science and Technology Policies: The Case of South Korea
Authors: Jeonghwan Jeon
Abstract:
As science and technology (S&T) advance rapidly, national governments attempt to reflect changes in S&T when promoting public R&D activities and economic development. Due to the rapid advances and changes in S&T, it becomes important to analyze the trends of S&T policies for formulating new policy and investigating promising S&T fields. Thus, this paper aims to trace the national S&T policies of the past decade in order to analyze the change of major S&T fields in the case of South Korea. As one of the organizations for S&T policy in South Korea, the National Science and Technology Council (NSTC) has been established to coordinate inter-ministerial policies and programs and to determine the national and public S&T policy of South Korea. In this regard, the items on national S&T policy determined by the NSTC are useful for understanding the needs of major S&T fields and adapting to the rapid change of S&T. To this end, we first gathered data on 512 items on the S&T agenda from 1999 to 2013. Based on these items, the trend of S&T policies is monitored and the major S&T fields are derived. Differences in policy purposes between S&T fields are identified to provide guidelines for policy making, such as budget allocation or investment promotion.
Keywords: national science and technology, policy, trends, S&T field
Procedia PDF Downloads 553
4432 Monitoring Trends of Science and Technology Policies in South Korea
Authors: Jeonghwan Jeon
Abstract:
As science and technology (S&T) advance rapidly, national governments attempt to reflect changes in S&T when promoting public R&D activities and economic development. Due to the rapid advances and changes in S&T, it becomes important to monitor the trends of S&T policies for formulating new policy and investigating promising S&T fields. Thus, this paper aims to trace the national S&T policies of the past decade in order to monitor the change of major S&T fields in the case of South Korea. As one of the organizations for S&T policy in South Korea, the National Science and Technology Council (NSTC) has been established to coordinate inter-ministerial policies and programs and to determine the national and public S&T policy of South Korea. In this regard, the items on national S&T policy determined by the NSTC are useful for understanding the needs of major S&T fields and adapting to the rapid change of S&T. To this end, we first gathered data on 512 items on the S&T agenda from 1999 to 2013. Based on these items, the trend of S&T policies is monitored and the major S&T fields are derived. Differences in policy purposes between S&T fields are identified to provide guidelines for policy making, such as budget allocation or investment promotion.
Keywords: science and technology policy, trends, S&T field, monitoring
Procedia PDF Downloads 323
4431 Concurrent Hazard Fragility Analysis with Consideration of Structural Uncertainties
Authors: Ling-Han Liu, Qian-Qian Yu, Xiang-Lin Gu
Abstract:
In this paper, a fragility analysis for concurrent earthquake-strong wind hazards considering structural uncertainties was conducted. Eleven sets of structural uncertainty parameters were considered, and random structural models were generated using Latin hypercube sampling. The uncertainties in seismic ground motion and wind load inputs were incorporated, and the conditional failure probability of the structures was calculated. A 12-story concrete building was used as an example, with the inter-story drift ratio (IDR) as the performance indicator. The failure probabilities under individual and multiple hazards were compared, along with a comparison of fragility analysis results with and without considering structural uncertainties. The numerical simulations show that including structural uncertainties increases the structural failure probability by 20%. The peak stress and strain of core-restrained concrete, the structural damping ratio, and the peak stress of unrestrained concrete are found to be decisive factors in the structural response.
Keywords: structural uncertainty, incremental dynamic analysis, multi-hazard fragility, Latin hypercube sampling
Procedia PDF Downloads 6
4430 Estimating the Probability of Winning the Best Actor/Actress Award Conditional on the Best Picture Nomination with Bayesian Hierarchical Models
Authors: Svetlana K. Eden
Abstract:
Movies and TV shows have long become part of modern culture. We all have our preferred genres, stories, actors, and actresses. However, can we objectively discern good acting from bad? As laymen, we are probably not objective, but what about the Oscar academy members? Are their votes based on objective measures? Oscar academy members are probably also biased due to many factors, including their professional affiliations or advertisement exposure. Heavily advertised films bring more publicity to their cast and are likely to have bigger budgets. Because a bigger budget may also help earn a Best Picture (BP) nomination, we hypothesize that best actor/actress (BA) nominees from BP-nominated movies have higher chances of winning the award than BA nominees from non-BP-nominated films. To test this hypothesis, three Bayesian hierarchical models are proposed, and their performance is evaluated. The results from all three models largely support our hypothesis. Depending on the proportion of BP nominations among BA nominees, the odds ratios (estimated over expected) of winning the BA award conditional on BP nomination vary from 2.8 [0.8, 7.0] to 4.3 [2.0, 15.8] for actors and from 1.5 [0.0, 12.2] to 5.4 [2.7, 14.2] for actresses.
Keywords: Oscar, best picture, best actor/actress, bias
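A small sketch of the elementary 2x2-table version of the quantity of interest, the odds ratio of winning conditional on a BP nomination; the paper estimates it with Bayesian hierarchical models, which are not reproduced here, and the counts below are made up.

```python
import numpy as np

def odds_ratio(wins_bp, losses_bp, wins_nobp, losses_nobp):
    """Sample odds ratio of winning the BA award for BP-nominated vs. non-BP-nominated nominees,
    with a rough 95% CI from the log-odds-ratio standard error."""
    or_hat = (wins_bp * losses_nobp) / (losses_bp * wins_nobp)
    se = np.sqrt(1 / wins_bp + 1 / losses_bp + 1 / wins_nobp + 1 / losses_nobp)
    lo, hi = np.exp(np.log(or_hat) + np.array([-1.96, 1.96]) * se)
    return or_hat, (lo, hi)

# toy usage with invented counts
print(odds_ratio(wins_bp=30, losses_bp=120, wins_nobp=12, losses_nobp=140))
```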
Procedia PDF Downloads 223
4429 Segmentation of Liver Using Random Forest Classifier
Authors: Gajendra Kumar Mourya, Dinesh Bhatia, Akash Handique, Sunita Warjri, Syed Achaab Amir
Abstract:
Nowadays, medical imaging has become an integral part of modern healthcare. Abdominal CT images are an invaluable means for abdominal organ investigation and have been widely studied in recent years. Diagnosis of liver pathologies is one of the major areas of current interest in the field of medical image processing and is still an open problem. To study and diagnose the liver in depth, segmentation of the liver is performed to identify which part of the liver is most affected. Manual segmentation of the liver in CT images is time-consuming and suffers from inter- and intra-observer differences. However, automatic or semi-automatic computer-aided segmentation of the liver is a challenging task due to inter-patient variability in liver shape and size. In this paper, we present a technique for automatically segmenting the liver from CT images using a Random Forest classifier. Random forests, or random decision forests, are an ensemble learning method for classification that operates by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes of the individual trees. After comparing with various other techniques, it was found that the Random Forest classifier provides better segmentation results with respect to accuracy and speed. We have validated our results using various techniques, and they show above 89% accuracy in all cases.
Keywords: CT images, image validation, random forest, segmentation
Procedia PDF Downloads 313
4428 Neural Network Approaches for Sea Surface Height Predictability Using Sea Surface Temperature
Authors: Luther Ollier, Sylvie Thiria, Anastase Charantonis, Carlos E. Mejia, Michel Crépon
Abstract:
Sea Surface Height Anomaly (SLA) is a signature of the sub-mesoscale dynamics of the upper ocean. Sea Surface Temperature (SST) is driven by these dynamics and can be used to improve the spatial interpolation of SLA fields. In this study, we focused on the temporal evolution of SLA fields. We explored the capacity of deep learning (DL) methods to predict short-term SLA fields using SST fields. We used simulated daily SLA and SST data from the Mercator Global Analysis and Forecasting System, with a resolution of (1/12)◦ in the North Atlantic Ocean (26.5-44.42◦N, -64.25–41.83◦E), covering the period from 1993 to 2019. Using a slightly modified image-to-image convolutional DL architecture, we demonstrated that SST is a relevant variable for controlling the SLA prediction. With a learning process inspired by the teaching-forcing method, we managed to improve the SLA forecast at five days by using the SST fields as additional information. We obtained predictions of a 12 cm (20 cm) error of SLA evolution for scales smaller than mesoscales and at time scales of 5 days (20 days), respectively. Moreover, the information provided by the SST allows us to limit the SLA error to 16 cm at 20 days when learning the trajectory.Keywords: deep-learning, altimetry, sea surface temperature, forecast
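A minimal sketch of an image-to-image convolutional network mapping stacked SST and past SLA fields to a future SLA field, to illustrate the kind of architecture the abstract refers to; the layer sizes, channel counts, and class name are illustrative assumptions, not the authors' network.

```python
import torch
import torch.nn as nn

class SLAForecaster(nn.Module):
    """Toy image-to-image CNN: input channels hold past SLA and SST fields on the same grid,
    output is a single predicted SLA field. Layer sizes are arbitrary placeholders."""
    def __init__(self, in_channels=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),   # predicted SLA field
        )

    def forward(self, x):
        return self.net(x)

# toy usage: batch of 4 samples, 2 input fields (SLA_t, SST_t) on a 64x64 grid
model = SLAForecaster(in_channels=2)
pred = model(torch.randn(4, 2, 64, 64))   # -> shape (4, 1, 64, 64)
```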
Procedia PDF Downloads 90
4427 Examines the Proportionality between the Needs of Industry and Technical and Vocational Training of Male and Female Vocational Schools
Authors: Khalil Aryanfar, Pariya Gholipor, Elmira Hafez
Abstract:
This study examines the proportionality between the needs of industry and the technical and vocational training offered by male and female vocational schools. The research method was descriptive and was conducted in two parts, documentary analysis and needs assessment; the Delphi method was used in the needs assessment. The statistical population of the study included 312 individuals from industry sector employers, and 52 of them were selected through stratified random sampling. The data collection sources in this study included upstream documents: the document on the development of technical and vocational training, the Statistical Yearbook 1393 in Tehran, and the available documents in the Isfahan Planning Department. The findings indicate that there is an approximate proportionality between the needs of industry and the vocational training of male and female vocational schools in the fields of welding, industrial electronics, electrotechnics, industrial drawing, auto mechanics, design, packaging, machine tools, metalworking, construction, accounting, computer graphics and administrative affairs. The findings also indicate that there is no proportionality between the needs of industry and the vocational training of male and female vocational schools in the fields of thermal-cooling systems, building electricity, building drawing, interior architecture, car electricity and motor repair.
Keywords: needs assessment, technical and vocational training, industry
Procedia PDF Downloads 455
4426 Three-Stage Multivariate Stratified Sample Surveys with Probabilistic Cost Constraint and Random Variance
Authors: Sanam Haseen, Abdul Bari
Abstract:
In this paper, a three-stage multivariate programming problem with random survey costs and variances as random variables has been formulated as a non-linear stochastic programming problem. The problem has been converted into an equivalent deterministic form using chance constraint programming and modified E-modeling. An empirical study of the problem is presented at the end of the paper using R simulation.
Keywords: chance constraint programming, modified E-model, stochastic programming, stratified sample surveys, three stage sample surveys
Procedia PDF Downloads 458
4425 Dynamic Correlations and Portfolio Optimization between Islamic and Conventional Equity Indexes: A Vine Copula-Based Approach
Authors: Imen Dhaou
Abstract:
This study examines conditional Value at Risk by applying the GJR-EVT-Copula model and finds the optimal portfolio for eight Dow Jones Islamic-conventional pairs. Our methodology consists of modeling the data by a bivariate GJR-GARCH model, from which we extract the filtered residuals, and then applying the peaks-over-threshold (POT) model to fit the residual tails in order to model the marginal distributions. After that, we use pair-copulas to find the optimal portfolio risk dependence structure. Finally, with Monte Carlo simulations, we estimate the Value at Risk (VaR) and the conditional Value at Risk (CVaR). The empirical results show the VaR and CVaR values for an equally weighted portfolio of Dow Jones Islamic-conventional pairs. In sum, we found that the optimal investment focuses on the Islamic-conventional US market index pair because of its high investment proportion, whereas all other index pairs have low investment proportions. These results have practical repercussions for portfolio managers and policymakers concerning optimal asset allocation, portfolio risk management and the diversification advantages of these markets.
Keywords: CVaR, Dow Jones Islamic index, GJR-GARCH-EVT-pair copula, portfolio optimization
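The full GJR-GARCH-EVT-copula pipeline is not reproduced here; the sketch below only illustrates the final step, estimating VaR and CVaR from Monte Carlo portfolio return scenarios, with a heavy-tailed toy generator standing in for the copula draws.

```python
import numpy as np

def var_cvar(returns, alpha=0.05):
    """Monte-Carlo/historical VaR and CVaR at level alpha for a vector of portfolio returns.
    VaR is the loss quantile; CVaR is the mean loss beyond the VaR."""
    losses = -np.asarray(returns)
    var = np.quantile(losses, 1 - alpha)
    cvar = losses[losses >= var].mean()
    return var, cvar

# toy usage: equally weighted two-asset portfolio from simulated return scenarios
rng = np.random.default_rng(7)
scenarios = rng.standard_t(df=5, size=(100_000, 2)) * 0.01   # heavy-tailed stand-in for copula draws
portfolio = scenarios.mean(axis=1)
print(var_cvar(portfolio, alpha=0.05))
```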
Procedia PDF Downloads 256
4424 Estimation of a Finite Population Mean under Random Non Response Using Improved Nadaraya and Watson Kernel Weights
Authors: Nelson Bii, Christopher Ouma, John Odhiambo
Abstract:
Non-response is a potential source of errors in sample surveys. It introduces bias and large variance into the estimation of finite population parameters. Regression models have been recognized as one of the techniques for reducing bias and variance due to random non-response using auxiliary data. In this study, it is assumed that random non-response occurs in the survey variable in the second stage of cluster sampling, with full auxiliary information assumed to be available throughout. Auxiliary information is used at the estimation stage via a regression model to address the problem of random non-response. In particular, the auxiliary information is used via an improved Nadaraya-Watson kernel regression technique to compensate for random non-response. The asymptotic bias and mean squared error of the proposed estimator are derived. In addition, a simulation study indicates that the proposed estimator has smaller bias and smaller mean squared error compared to existing estimators of the finite population mean. The proposed estimator is also shown to have tighter confidence interval lengths at a 95% coverage rate. The results obtained in this study are useful, for instance, in choosing efficient estimators of the finite population mean in demographic sample surveys.
Keywords: mean squared error, random non-response, two-stage cluster sampling, confidence interval lengths
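A minimal sketch of the classic Nadaraya-Watson kernel regression estimator that the improved technique builds on; the "improved" weights used by the paper for non-response compensation are not reproduced, and the bandwidth and kernel choice are assumptions.

```python
import numpy as np

def nadaraya_watson(x_query, x_obs, y_obs, bandwidth=0.5):
    """Nadaraya-Watson estimate m(x) = sum_i K((x - x_i)/h) * y_i / sum_i K((x - x_i)/h)
    with a Gaussian kernel."""
    x_query = np.atleast_1d(x_query)[:, None]
    w = np.exp(-0.5 * ((x_query - x_obs) / bandwidth) ** 2)    # Gaussian kernel weights
    return (w * y_obs).sum(axis=1) / w.sum(axis=1)

# toy usage: recover a smooth mean function from noisy responses
rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 200)
y = np.sin(x) + rng.normal(scale=0.3, size=200)
print(nadaraya_watson([2.0, 5.0, 8.0], x, y, bandwidth=0.6))
```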
Procedia PDF Downloads 141
4423 Blocking of Random Chat Apps at Home Routers for Juvenile Protection in South Korea
Authors: Min Jin Kwon, Seung Won Kim, Eui Yeon Kim, Haeyoung Lee
Abstract:
Numerous anonymous chat apps that help people connect with random strangers have been released in South Korea. However, they have become a serious problem for young people, since young people often use them as channels for prostitution or sexual violence. Although ISPs in South Korea are responsible for making inappropriate content inaccessible on their networks, they do not block the traffic of random chat apps since 1) the use of random chat apps is entirely legal, and 2) they reportedly use HTTP proxy blocking, so that non-HTTP traffic cannot be blocked. In this paper, we propose a service model that can block random chat apps at home routers. A service provider manages a blacklist that contains blocked apps' information. Home routers that subscribe to the service filter out the traffic of the apps using deep packet inspection. We have implemented a prototype of the proposed model, including a centralized server providing the blacklist, a Raspberry Pi-based home router that can filter out the traffic of the apps, and an Android app used by the router's administrator to locally customize the blacklist.
Keywords: deep packet inspection, internet filtering, juvenile protection, technical blocking
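A toy illustration of the blacklist-matching step a deep-packet-inspection filter performs on packet payloads; the actual prototype runs on a Raspberry Pi router and inspects live traffic, which is not reproduced here, and the signatures below are invented placeholders.

```python
# Toy illustration of the blacklist-matching step of a DPI filter: a packet payload is
# dropped if it contains any signature (e.g., a server hostname) from the blacklist.
# The signatures are invented placeholders, not the apps targeted in the paper.
BLACKLIST = [b"chat.example-app-a.com", b"api.example-app-b.net"]

def should_drop(payload: bytes) -> bool:
    """Return True if the raw packet payload matches any blacklisted signature."""
    return any(sig in payload for sig in BLACKLIST)

# toy usage with a fake handshake-like payload carrying a server name
packet = b"\x16\x03\x01...server_name=chat.example-app-a.com..."
print(should_drop(packet))   # True -> the router would discard this packet
```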
Procedia PDF Downloads 350
4422 On Hankel Matrices Approach to Interpolation Problem in Infinite and Finite Fields
Authors: Ivan Baravy
Abstract:
The interpolation problem, as it was initially posed in terms of polynomials, is well researched. However, further mathematical developments have extended it significantly. Trigonometric interpolation is widely used in Fourier analysis, while its generalized representation as exponential interpolation is applicable to such problems of mathematical physics as the modelling of Ziegler-Biersack-Littmark repulsive interatomic potentials. Formulated for finite fields, this problem arises in decoding Reed-Solomon codes. This paper shows the relation between different interpretations of the problem through a class of matrices of special structure - Hankel matrices.
Keywords: Berlekamp-Massey algorithm, exponential interpolation, finite fields, Hankel matrices, Hankel polynomials
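A small sketch of how a Hankel matrix built from a sequence exposes its linear recurrence, the structure exploited by Berlekamp-Massey-type decoding; this toy works with floating-point arithmetic over the reals, whereas the paper also treats finite fields.

```python
import numpy as np

def hankel(seq, rows, cols):
    """Hankel matrix H[i, j] = seq[i + j]."""
    return np.array([[seq[i + j] for j in range(cols)] for i in range(rows)])

# A sequence satisfying s[n] = 3*s[n-1] - 2*s[n-2]  (so s[n] = 2^n + 1)
s = [2, 3, 5, 9, 17, 33, 65, 129]

# Solve H * c = rhs for recurrence coefficients with s[n] = c1*s[n-1] + c2*s[n-2]
H = hankel(s, rows=4, cols=2)                 # row i holds the window (s[i], s[i+1])
rhs = np.array(s[2:6])
c2, c1 = np.linalg.lstsq(H, rhs, rcond=None)[0]
print(round(c1, 6), round(c2, 6))             # -> 3.0 -2.0
```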
Procedia PDF Downloads 521
4421 Programming with Grammars
Authors: Peter M. Maurer
Abstract:
DGL is a context-free grammar-based tool for generating random data. Many types of simulator input data require some computation to be placed in the proper format. For example, it might be necessary to generate ordered triples in which the third element is the sum of the first two elements, or it might be necessary to generate random numbers in some sorted order. Although DGL is universal in computational power, generating these types of data is extremely difficult. To overcome this problem, we have enhanced DGL to include features that permit direct computation within the structure of a context-free grammar. The features have been implemented as special types of productions, preserving the context-free flavor of DGL specifications.
Keywords: DGL, enhanced context free grammars, programming constructs, random data generation
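A toy sketch of plain grammar-driven random data generation, the baseline that DGL extends; the grammar, symbol names, and generator function are illustrative, and the "computational" productions the paper adds are not shown.

```python
import random

# A toy context-free grammar: nonterminals map to lists of alternatives, and each
# alternative is a sequence of nonterminals and terminal strings.
GRAMMAR = {
    "pair":   [["number", ",", "number"]],
    "number": [["digit"], ["digit", "number"]],
    "digit":  [[d] for d in "0123456789"],
}

def generate(symbol, grammar=GRAMMAR, rng=random):
    if symbol not in grammar:                      # terminal symbol
        return symbol
    alternative = rng.choice(grammar[symbol])      # pick a production at random
    return "".join(generate(s, grammar, rng) for s in alternative)

print(generate("pair"))    # e.g. "42,7"
```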
Procedia PDF Downloads 149
4420 Reliability Analysis of Construction Schedule Plan Based on Building Information Modelling
Authors: Lu Ren, You-Liang Fang, Yan-Gang Zhao
Abstract:
In recent years, the application of BIM (Building Information Modelling) to construction schedule planning has been the focus of more and more researchers. In order to assess whether a BIM-based construction schedule plan is reasonable, that is, whether the schedule can be completed on time, some researchers have introduced reliability theory for the evaluation. In the evaluation process, the uncertain factors affecting the construction schedule plan are regarded as random variables, and the probability distributions of the random variables are assumed to be normal, determined by two parameters evaluated from the mean and standard deviation of statistical data. However, in practical engineering, most of the uncertain influencing factors are not normal random variables, so the evaluation results of the construction schedule plan will be unreasonable under the assumption that the probability distributions of the random variables follow the normal distribution. Therefore, in order to obtain a more reasonable evaluation result, it is necessary to describe the distribution of the random variables more comprehensively. For this purpose, the cubic normal distribution is introduced in this paper to describe the distribution of arbitrary random variables; it is determined by the first four moments (mean, standard deviation, skewness and kurtosis). In this paper, the BIM model is built first according to the design information of the structure, and the construction schedule plan is made based on BIM; then the cubic normal distribution is used to describe the distribution of the random variables, based on statistical data collected for the random factors influencing the construction schedule plan. Next, the reliability analysis of the construction schedule plan based on BIM can be carried out more reasonably. Finally, more accurate evaluation results can be given, providing a reference for the implementation of the actual construction schedule plan. In the last part of this paper, the improved efficiency and accuracy of the proposed methodology for the reliability analysis of the construction schedule plan based on BIM are demonstrated through a practical engineering case.
Keywords: BIM, construction schedule plan, cubic normal distribution, reliability analysis
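A small sketch of the moment-based idea above: the first four sample moments of a duration factor are computed, and a cubic transformation of a standard normal is used to generate samples for a probability estimate. The duration data and the cubic coefficients are illustrative placeholders; fitting the coefficients to the four moments, as the cubic normal distribution requires, is not shown.

```python
import numpy as np
from scipy import stats

# Illustrative activity-duration samples (days) standing in for collected statistical data.
durations = np.array([9.5, 10.2, 11.0, 9.8, 12.4, 10.6, 11.8, 10.1, 13.0, 10.9])

# First four moments used by the cubic normal distribution
mean, std = durations.mean(), durations.std(ddof=1)
skew, kurt = stats.skew(durations), stats.kurtosis(durations, fisher=False)
print(mean, std, skew, kurt)

# A cubic transformation of a standard normal, Y = a + b*Z + c*Z**2 + d*Z**3, can reproduce
# these four moments; the coefficients below are placeholders, not values fitted to the data.
rng = np.random.default_rng(0)
z = rng.standard_normal(100_000)
y = mean + std * (0.98 * z + 0.05 * (z**2 - 1) + 0.01 * z**3)
print("P(activity duration > 12 days) ≈", (y > 12).mean())
```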
Procedia PDF Downloads 149
4419 A New Concept for Deriving the Expected Value of Fuzzy Random Variables
Authors: Liang-Hsuan Chen, Chia-Jung Chang
Abstract:
Fuzzy random variables (FRVs) have been introduced as an imprecise concept of numeric values for characterizing imprecise knowledge. Descriptive parameters can be used to describe the primary features of a set of fuzzy random observations. In fuzzy environments, expected values are usually represented as fuzzy-valued, interval-valued or numeric-valued descriptive parameters using various metrics. Instead of the concept of an area metric that is usually adopted in the relevant studies, a numeric expected value is proposed in this study based on the concept of a distance metric, reflecting the two characteristics (fuzziness and randomness) of FRVs. Compared with existing measures, the results show that the proposed numeric expected value is the same as those obtained using the other metrics if only triangular membership functions are used. However, the proposed approach has the advantages of intuitiveness and computational efficiency when the membership functions are not of triangular type. An example with three datasets is provided to verify the proposed approach.
Keywords: fuzzy random variables, distance measure, expected value, descriptive parameters
Procedia PDF Downloads 345
4418 Radio Frequency Identification Encryption via Modified Two Dimensional Logistic Map
Authors: Hongmin Deng, Qionghua Wang
Abstract:
A modified two-dimensional (2D) logistic map based on cross feedback control is proposed. This 2D map exhibits more random chaotic dynamical properties than the classic one-dimensional (1D) logistic map in a statistical characteristics analysis, so it is utilized as a pseudo-random (PN) sequence generator: the obtained real-valued PN sequence is first quantized and then applied to a radio frequency identification (RFID) communication system in this paper. The system is experimentally validated on a Cortex-M0 development board, which demonstrates its effectiveness in key generation, the size of the key space, and security. Finally, further cryptanalysis is carried out using the test suite of the National Institute of Standards and Technology (NIST).
Keywords: chaos encryption, logistic map, pseudo-random sequence, RFID
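A minimal sketch of a coupled 2D logistic map with threshold quantization, to illustrate the PN-generator idea; the cross-feedback coupling form, parameters, and quantization rule here are assumptions of this sketch and differ from the modified map defined in the paper.

```python
import numpy as np

def coupled_logistic_prn(n_bits, r=3.99, k=0.2, x0=0.31, y0=0.72):
    """Generate a pseudo-random bit stream from a 2D logistic map with a simple
    cross-feedback coupling between the two state variables."""
    x, y = x0, y0
    bits = np.empty(n_bits, dtype=np.uint8)
    for i in range(n_bits):
        x_new = r * x * (1 - x) + k * y          # cross feedback from y into x
        y_new = r * y * (1 - y) + k * x          # cross feedback from x into y
        x, y = x_new % 1.0, y_new % 1.0          # keep the state in [0, 1)
        bits[i] = 1 if x > 0.5 else 0            # simple threshold quantization
    return bits

key_stream = coupled_logistic_prn(128)
print(key_stream[:16])
```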
Procedia PDF Downloads 402
4417 Occupational Exposure to Electromagnetic Fields Can Increase the Release of Mercury from Dental Amalgam Fillings
Authors: Ghazal Mortazavi, S. M. J. Mortazavi
Abstract:
Electricians, power line engineers and power station workers, welders, aluminum reduction workers, MRI operators and railway workers are occupationally exposed to different levels of electromagnetic fields. Mercury is among the most toxic metals. Dental amalgam fillings cause significant exposure to elemental mercury vapour in the general population. Today, substantial evidence indicates that mercury, even at low doses, may lead to toxicity. Increased release of mercury from dental amalgam fillings after exposure to MRI or to microwave radiation emitted by mobile phones has previously been shown by our team. Moreover, our recent studies on the effects of stronger magnetic fields entirely confirmed our previous findings. From another point of view, we have also shown that papers which reported no increased release of mercury after MRI may have some methodological flaws. Over the past several years, our lab has focused on the health effects of exposure of laboratory animals and humans to different sources of electromagnetic fields such as mobile phones and their base stations, mobile phone jammers, laptop computers, radars, dentistry cavitrons, and MRI. As a strong association between exposure to electromagnetic fields and mercury level has been found in our studies, our findings lead us to the conclusion that occupational exposure to electromagnetic fields in workers with dental amalgam fillings can lead to elevated levels of mercury. Studies reporting that exposure to mercury can be a risk factor for Alzheimer's disease (AD) due to the accumulation of amyloid beta protein (Aβ) in the brain, and those reporting that long-term occupational exposure to high levels of electromagnetic fields can increase the risk of Alzheimer's disease and dementia in male workers, support our concept and confirm the significant role of occupational exposure to electromagnetic fields in increasing the mercury level in workers with amalgam fillings.
Keywords: occupational exposure, electromagnetic fields, workers, mercury release, dental amalgam, restorative dentistry
Procedia PDF Downloads 434
4416 [Keynote Speech]: Feature Selection and Predictive Modeling of Housing Data Using Random Forest
Authors: Bharatendra Rai
Abstract:
Predictive data analysis and modeling involving machine learning techniques become challenging in the presence of too many explanatory variables or features. The presence of too many features in machine learning is known not only to cause algorithms to slow down, but it can also lead to a decrease in model prediction accuracy. This study involves a housing dataset with 79 quantitative and qualitative features that describe various aspects people consider while buying a new house. The Boruta algorithm, which supports feature selection using a wrapper approach built around random forest, is used in this study. This feature selection process leads to 49 confirmed features, which are then used for developing predictive random forest models. The study also explores five different data partitioning ratios, and their impact on model accuracy is captured using the coefficient of determination (r-square) and root mean square error (RMSE).
Keywords: housing data, feature selection, random forest, Boruta algorithm, root mean square error
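A one-pass sketch of the shadow-feature idea behind Boruta: shuffled copies of the features are added, a random forest is fit, and a real feature is kept only if its importance beats the best shadow importance. The full Boruta algorithm iterates this with statistical tests, which is omitted, and the data below are synthetic placeholders rather than the housing dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def shadow_feature_selection(X, y, random_state=0):
    """Keep features whose random-forest importance exceeds the maximum importance of
    column-shuffled 'shadow' copies (a simplified, single-pass Boruta-style screen)."""
    rng = np.random.default_rng(random_state)
    shadows = np.apply_along_axis(rng.permutation, 0, X)     # column-wise shuffled copies
    rf = RandomForestRegressor(n_estimators=300, random_state=random_state)
    rf.fit(np.hstack([X, shadows]), y)
    n = X.shape[1]
    real_imp, shadow_imp = rf.feature_importances_[:n], rf.feature_importances_[n:]
    return np.where(real_imp > shadow_imp.max())[0]          # indices of confirmed features

# toy usage: 10 features, only the first two actually drive the response
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=500)
print(shadow_feature_selection(X, y))    # typically -> [0 1]
```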
Procedia PDF Downloads 324
4415 Evaluating Performance of Value at Risk Models for the MENA Islamic Stock Market Portfolios
Authors: Abderrazek Ben Maatoug, Ibrahim Fatnassi, Wassim Ben Ayed
Abstract:
In this paper, we investigate the issue of market risk quantification for Middle East and North Africa (MENA) Islamic equity markets. We use Value-at-Risk (VaR) as a measure of potential risk in Islamic stock markets, for long and short positions, based on the RiskMetrics model and conditional parametric ARCH-class volatility models with normal, Student and skewed Student distributions. The sample consists of daily data over 2006-2014 for 11 Islamic stock market indices. We conduct Kupiec and Engle and Manganelli tests to evaluate the performance of each model. The main findings of our empirical results show that (i) VaR models based on the Student and skewed Student distributions perform best, for a significance level of α=1%, for all Islamic stock market indices and for both long and short trading positions, and (ii) the RiskMetrics model and the VaR model based on conditional volatility with a normal distribution provide the most accurate VaR estimations for both long and short trading positions for a significance level of α=5%.
Keywords: value-at-risk, risk management, Islamic finance, GARCH models
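A small sketch of the Kupiec unconditional-coverage backtest mentioned above; the violation count and sample size in the usage line are invented for illustration.

```python
import numpy as np
from scipy.stats import chi2

def kupiec_pof(violations, n_obs, alpha=0.01):
    """Kupiec proportion-of-failures (POF) likelihood-ratio test for VaR backtesting:
    LR = -2*log[(1-alpha)^(n-x) * alpha^x / ((1-x/n)^(n-x) * (x/n)^x)] ~ chi2(1)
    under the null that the VaR violation rate equals alpha (assumes 0 < x < n)."""
    x, n = violations, n_obs
    pi_hat = x / n
    log_lik_null = (n - x) * np.log(1 - alpha) + x * np.log(alpha)
    log_lik_alt = (n - x) * np.log(1 - pi_hat) + x * np.log(pi_hat)
    lr = -2 * (log_lik_null - log_lik_alt)
    return lr, chi2.sf(lr, df=1)     # (statistic, p-value)

# toy usage: 2000 out-of-sample days, 31 VaR(1%) violations observed
print(kupiec_pof(violations=31, n_obs=2000, alpha=0.01))
```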
Procedia PDF Downloads 592
4414 Solving Process Planning, Weighted Apparent Tardiness Cost Dispatching, and Weighted Processing plus Weight Due-Date Assignment Simultaneously Using a Hybrid Search
Authors: Halil Ibrahim Demir, Caner Erden, Abdullah Hulusi Kokcam, Mumtaz Ipek
Abstract:
Process planning, scheduling, and due date assignment are three important manufacturing functions that are usually studied independently in the literature. There are hundreds of works on the IPPS and SWDDA problems but only a few on the IPPSDDA problem. Integrating these three functions is crucial because of the strong relationships between them. Since the scheduling problem is in the NP-hard class even without any integration, the integrated problem is even harder to solve. This study focuses on the integration of these functions. The sum of weighted tardiness, earliness, and due-date-related costs is used as the penalty function. Random search and hybrid metaheuristics are used to solve the integrated problem. The marginal improvement of random search is very high in the early iterations and decreases sharply in later iterations; at that point, directed search contributes more to the marginal improvement than random search. In this study, random and genetic search methods are therefore combined to find better solutions. Results show that overall performance becomes better as the integration level increases.
Keywords: process planning, genetic algorithm, hybrid search, random search, weighted due-date assignment, weighted scheduling
Procedia PDF Downloads 364
4413 Disintegration of Deuterons by Photons Reaction Model for GEANT4 with Dibaryon Formalism
Authors: Jae Won Shin, Chang Ho Hyun
Abstract:
A disintegration of deuterons by photons (dγ → np) reaction model for GEANT4 is developed in this work. For the description of two-nucleon interactions, we employ an effective field theory with dibaryon fields, the so-called pionless theory with dibaryon fields (dEFT). By introducing a dibaryon field, we can take into account the effective-range contribution to the propagator up to infinite order, which consequently makes the convergence of the theory better than that of the pionless effective field theory without dibaryon fields. In spite of its simplicity, the theory has proven very effective and useful in applications to various two-nucleon systems and processes at low energies. We apply the new GEANT4 model (G4dEFT) to the calculation of total and differential cross sections in dγ → np and obtain good agreement with experimental data for a wide range of incoming photon energies.
Keywords: dγ → np, dibaryon fields, effective field theory, GEANT4
Procedia PDF Downloads 380
4412 Long Term Love Relationships Analyzed as a Dynamic System with Random Variations
Authors: Nini Johana Marín Rodríguez, William Fernando Oquendo Patino
Abstract:
In this work, we model a coupled system in which we explore the effects of steady and random behavior on a linear system, as an extension of the classic Strogatz model. This is exemplified by modeling the love dynamics of a couple as a linear system of two coupled differential equations and studying its stability for four types of lovers, chosen as CC='Cautious-Cautious', OO='Only other feelings', OP='Opposites' and RR='Romeo the Robot'. We explore the effects of, first, introducing saturation, and second, adding a random variation to one of the CC-type lovers, which shapes his character by modeling how its variability influences the dynamics between love and hate in a couple in a long-term relationship. This work could also be useful for modeling other kinds of systems where interactions can be modeled as linear systems with external or internal random influence. We found that the final results are not easy to predict, and a strong dependence on initial conditions appears, which is a signature of chaos.
Keywords: differential equations, dynamical systems, linear system, love dynamics
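A minimal sketch of integrating a Strogatz-style coupled linear system with a random variation added to one partner; the coefficients, noise level, and the partner receiving the random term are illustrative choices, not the paper's calibrated lover types.

```python
import numpy as np

def simulate_couple(a=-0.2, b=0.8, c=-0.6, d=0.1, noise=0.05, dt=0.01, steps=20_000, seed=0):
    """Euler-Maruyama integration of
        dR/dt = a*R + b*J,   dJ/dt = c*R + d*J + random variation,
    where R and J are the two partners' feelings over time."""
    rng = np.random.default_rng(seed)
    R, J = np.empty(steps), np.empty(steps)
    R[0], J[0] = 1.0, 0.0                        # initial feelings
    for t in range(1, steps):
        R[t] = R[t - 1] + dt * (a * R[t - 1] + b * J[t - 1])
        J[t] = (J[t - 1] + dt * (c * R[t - 1] + d * J[t - 1])
                + noise * np.sqrt(dt) * rng.standard_normal())
    return R, J

R, J = simulate_couple()
print(R[-1], J[-1])
```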
Procedia PDF Downloads 354
4411 Software Reliability Prediction Model Analysis
Authors: Lela Mirtskhulava, Mariam Khunjgurua, Nino Lomineishvili, Koba Bakuria
Abstract:
Software reliability prediction provides a great opportunity to measure the software failure rate at any point throughout system testing, and a software reliability prediction model provides a technique for improving reliability. Software reliability is a very important factor in estimating overall system reliability, which depends on the individual component reliabilities. It differs from hardware reliability in that it reflects design perfection. The main reason for software reliability problems is the high complexity of software. Various approaches can be used to improve the reliability of software. We focus on a software reliability model in this article, assuming that there is time redundancy, the value of which (the number of repeated transmissions of basic blocks) can be an optimization parameter. We consider the given mathematical model under the assumption that the system may suffer not only irreversible failures but also failures that can be treated as self-repairing, which significantly affect the reliability and accuracy of information transfer. The main task of this paper is to find the time distribution function (DF) of the transmission of an instruction sequence that consists of a random number of basic blocks. We consider the system software unreliable; the time between adjacent failures has an exponential distribution.
Keywords: exponential distribution, conditional mean time to failure, distribution function, mathematical model, software reliability
Procedia PDF Downloads 465
4410 The Effects of Plantation Size and Internal Transport on Energy Efficiency of Biofuel Production
Authors: Olga Orynycz, Andrzej Wasiak
Abstract:
A mathematical model describing the energetic efficiency (defined as the ratio of the energy obtained in the form of biofuel to the sum of the energy inputs necessary to facilitate production) of the agricultural subsystem as a function of technological parameters was developed. The production technology is characterized by the parameters of the machinery, the topological characteristics of the plantation, and the transportation routes inside and outside the plantation. The relationship between the energetic efficiency of the agricultural and industrial subsystems is also derived. Due to the assumed large area of the individual field, the operations last for several days, increasing the inter-field routes because of several returns. The total distance driven outside the fields is, however, small compared to the distance driven inside the fields. This results in a relatively small energy consumption during inter-field transport, which nevertheless causes a substantial decrease in the energetic effectiveness of the whole system.
Keywords: biofuel, energetic efficiency, EROEI, mathematical modelling, production system
Procedia PDF Downloads 346
4409 Numerical Computation of Specific Absorption Rate and Induced Current for Workers Exposed to Static Magnetic Fields of MRI Scanners
Authors: Sherine Farrag
Abstract:
Currently used MRI scanners in Cairo City possess static magnetic fields (SMF) that vary from 0.25 T up to 3 T; more than half of them have an SMF of 1.5 T. The SMF of the magnet determines the diagnostic power of a scanner, but not the worker's exposure profile. This research paper presents an approach for the numerical computation of induced electric fields and SAR values through estimation of the fringe static magnetic field. The iso-gauss lines of the MR scanner were mapped, and a polynomial function of the 7th degree was generated and tested. The current field induced by worker motion in the SMF and the SAR values for organs and tissues were calculated. The results illustrate that the computational tool used permits quick and accurate MRI iso-gauss mapping and calculation of SAR values, which can then be used for assessment of the occupational exposure profile of MRI operators.
Keywords: MRI occupational exposure, MRI safety, induced current density, specific absorption rate, static magnetic fields
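A small sketch of the two steps described above: fitting a 7th-degree polynomial to fringe-field measurements and estimating the time-varying field a moving worker experiences (dB/dt = dB/dx · v), which is the quantity that drives the induced electric field. The distance/field values and the walking speed are made-up placeholders, not the Cairo survey data.

```python
import numpy as np

# Illustrative fringe-field measurements (distance from the bore in metres vs. flux density in tesla).
distance = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5])
B = np.array([1.20, 0.55, 0.26, 0.13, 0.07, 0.04, 0.025, 0.015, 0.010])

# 7th-degree polynomial model of the fringe field, as described in the abstract
coeffs = np.polyfit(distance, B, deg=7)
fringe = np.poly1d(coeffs)

# A worker walking at v m/s through the gradient sees dB/dt = (dB/dx) * v,
# a first-order estimate of the time-varying field experienced during motion.
v = 1.0                                    # walking speed, m/s (assumed)
dBdx = np.polyder(fringe)(1.2)             # field gradient at 1.2 m from the bore
print("dB/dt ≈", abs(dBdx * v), "T/s")
```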
Procedia PDF Downloads 430