**Commenced:** January 2007

**Frequency:** Monthly

**Edition:** International

**Paper Count:** 1678

# World Academy of Science, Engineering and Technology

## Mathematical and Computational Sciences

### Online ISSN: 1307-6892

##### 1678 Designing Social Care Policies in the Long Term: A Study Using Regression, Clustering and Backpropagation Neural Nets

**Authors:**
Sotirios Raptis

**Abstract:**

Linking social needs to social classes using different criteria may lead to social services misuse. This paper discusses using machine learning (ML) and Neural Networks (NNs) to link public services in Scotland in the long term, and argues that this can reduce service costs by connecting the resources needed by groups with similar services. The paper combines typical regression models with clustering and cross-correlation as complementary constituents to predict demand. Insurance companies and public policymakers can bundle linked services, such as those offered to the elderly or to low-income people, over the longer term. The work is based on public data from 22 services offered by Public Health Services (PHS) Scotland and by the Scottish Government (SG) from 1981 to 2019, broken into 110 time series called factors, and uses Linear Regression (LR), Autoregression (ARMA), and three types of back-propagation (BP) Neural Networks (BPNN) to link them under specific conditions. Relationships were found between smoking-related healthcare provision, mental-health-related health services, and epidemiological weight (Primary 1 (Education) Body Mass Index (BMI) in children). Principal component analysis (PCA) found 11 significant factors, while C-Means (CM) clustering gave 5 major clusters of factors.

**Keywords:**
Probability,
cohorts,
data frames,
services,
prediction.

##### 1677 Base Change for Fisher Metrics: Case of the q−Gaussian Inverse Distribution

**Authors:**
Gabriel I. Loaiza O.,
Carlos A. Cadavid M.,
Juan C. Arango P.

**Abstract:**

It is known that the Riemannian manifold determined by the family of inverse Gaussian distributions endowed with the Fisher metric has constant negative curvature κ = −1/2, as does the family of usual Gaussian distributions. In the present paper, we first arrive at this result by following a different path, much simpler than previous ones: we put the family in exponential form, thus endowing it with a new set of parameters, or coordinates, θ1, θ2; we then determine the matrix of the Fisher metric in terms of these parameters; and finally we compute this matrix in the original parameters. Secondly, we define the inverse q−Gaussian distribution family (q < 3) as the family obtained by replacing the usual exponential function with the Tsallis q−exponential function in the expression for the inverse Gaussian distribution, and observe that it supports two possible geometries, the Fisher and the q−Fisher geometry. Finally, we apply our strategy to obtain results about the Fisher and q−Fisher geometry of the inverse q−Gaussian distribution family, similar to those obtained in the case of the inverse Gaussian distribution family.

**Keywords:**
Base change,
Information Geometry,
Inverse Gaussian distribution,
Inverse q-Gaussian distribution,
Statistical Manifolds.
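For reference, a standard parameterization of the inverse Gaussian family and its Fisher metric in the original parameters (conventional notation, not taken from the paper) is:

```latex
f(x;\mu,\lambda)=\sqrt{\frac{\lambda}{2\pi x^{3}}}
\exp\!\left(-\frac{\lambda(x-\mu)^{2}}{2\mu^{2}x}\right),\quad x>0;
\qquad
\bigl(g_{ij}\bigr)=
\begin{pmatrix}
\dfrac{\lambda}{\mu^{3}} & 0\\[4pt]
0 & \dfrac{1}{2\lambda^{2}}
\end{pmatrix}.
```

Re-expressing the family in exponential form, as the abstract describes, amounts to a change of coordinates under which this metric transforms by the usual base-change formula $g' = J^{\top} g\, J$ for the Jacobian $J$ of the reparameterization.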

##### 1676 Solution of Two-Point Nonlinear Boundary Problems Using Taylor Series Approximation and the Ying Buzu Shu Algorithm

**Authors:**
U. C. Amadi,
N. A. Udoh

**Abstract:**

One of the major challenges in solving initial and boundary value problems is finding approximate solutions with minimal deviation from the exact solution, without excessive rigor and complication. The Taylor series method provides a simple way of obtaining an infinite series that converges to the exact solution for initial value problems, but this method is somewhat limited for a two-point boundary problem, since the infinite series must be truncated to include the boundary conditions. In this paper, the Ying Buzu Shu algorithm is used to solve a two-point boundary nonlinear diffusion problem for the fourth- and sixth-order solutions, and their relative errors and rates of convergence to the exact solution are compared.

**Keywords:**
Ying Buzu Shu,
nonlinear boundary problem,
Taylor series algorithm,
infinite series.
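The Ying Buzu Shu (excess-and-deficiency) idea can be sketched as a false-position update on the unknown initial slope of a shooting method. The boundary value problem below, y'' = 6x with y(0) = 0, y(1) = 1 (exact solution y = x³), is a hypothetical stand-in, not the diffusion problem from the paper:

```python
# Shooting method for a two-point BVP with a Ying Buzu Shu (false-position)
# update on the unknown initial slope. Test problem: y'' = 6x, y(0) = 0,
# y(1) = 1, whose exact solution is y = x^3 with y'(0) = 0.

def integrate(slope, n=1000):
    """RK4-integrate y'' = 6x from x = 0 with y(0) = 0, y'(0) = slope; return y(1)."""
    h = 1.0 / n
    x, y, v = 0.0, 0.0, slope          # v = y'
    f = lambda x: 6.0 * x              # right-hand side of y'' = f(x)
    for _ in range(n):
        k1y, k1v = v, f(x)
        k2y, k2v = v + 0.5 * h * k1v, f(x + 0.5 * h)
        k3y, k3v = v + 0.5 * h * k2v, f(x + 0.5 * h)
        k4y, k4v = v + h * k3v, f(x + h)
        y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        x += h
    return y

def ying_buzu_shu(target=1.0, s1=-1.0, s2=2.0, tol=1e-10, max_iter=50):
    """Excess-and-deficiency interpolation on the boundary residual."""
    r1, r2 = integrate(s1) - target, integrate(s2) - target
    for _ in range(max_iter):
        s = s2 - r2 * (s2 - s1) / (r2 - r1)   # false-position update
        r = integrate(s) - target
        if abs(r) < tol:
            return s
        s1, r1, s2, r2 = s2, r2, s, r
    return s

slope = ying_buzu_shu()   # exact slope is y'(0) = 0
```

For this linear test problem the boundary residual is linear in the slope, so the interpolation lands on the exact slope in a single update; the nonlinear problem treated in the paper requires the full iteration.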

##### 1675 The Analysis of Different Classes of Weighted Fuzzy Petri Nets and Their Features

**Authors:**
Yurii Bloshko,
Oksana Olar

**Abstract:**

This paper presents an analysis of six different classes of Petri nets: fuzzy Petri nets (FPN), generalized fuzzy Petri nets (GFPN), parameterized fuzzy Petri nets (PFPN), T2GFPN, flexible generalized fuzzy Petri nets (FGFPN), and binary Petri nets (BPN). These classes were simulated in the dedicated software PNeS® to analyze their pros and cons on the example of models devoted to the decision-making process of passenger transport logistics. The paper analyzes two approaches: one in which the input values are filled with experts’ knowledge, and one in which fuzzy expectations, represented by output values, are added to them. These approaches exercise the possibilities of the triples of functions, which are replaced with different combinations of t-/s-norms.

**Keywords:**
Fuzzy Petri net,
intelligent computational techniques,
knowledge representation,
triangular norms.
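As a minimal illustration of how a triple of functions can be realized by t-/s-norms in fuzzy-Petri-net reasoning (toy rule values, not from the paper's transport-logistics models), one classic choice combines the input truth degrees with a t-norm and merges competing conclusions with an s-norm:

```python
# One classic fuzzy Petri net firing rule: a transition's output truth degree
# is t_norm(inputs) scaled by the rule's certainty factor, and rules that
# conclude the same output place are merged with an s-norm.

t_norm = min   # conjunction of input places
s_norm = max   # aggregation of competing conclusions

def fire(inputs, certainty):
    """Truth degree produced by one transition firing."""
    return t_norm(*inputs) * certainty

# Two hypothetical rules concluding the same output place:
d1 = fire([0.8, 0.6], 0.9)   # rule 1: min(0.8, 0.6) * 0.9 = 0.54
d2 = fire([0.7, 0.9], 0.8)   # rule 2: min(0.7, 0.9) * 0.8 = 0.56
output = s_norm(d1, d2)      # aggregated degree at the output place
```

Swapping `min`/`max` for other t-/s-norm pairs (product, probabilistic sum, Łukasiewicz, and so on) is exactly the kind of replacement the abstract's "different combinations of t-/s-norms" refers to.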

##### 1674 Optimizing Data Evaluation Metrics for Fraud Detection Using Machine Learning

**Authors:**
Jennifer Leach,
Umashanger Thayasivam

**Abstract:**

The use of technology has benefited society in more ways than one ever thought possible. Unfortunately, as society’s knowledge of technology has advanced, so has its knowledge of ways to use technology to manipulate others. This has led to a simultaneous advancement in the world of fraud. Machine learning techniques can offer a possible solution to help counter these advancements. This research explores how various machine learning techniques can aid in detecting fraudulent activity across two different types of fraudulent datasets; the accuracy, precision, recall, and F1 score were recorded for each method. Each machine learning model was also tested across five different training and testing splits in order to discover which split and technique would lead to the most optimal results.

**Keywords:**
Data science,
fraud detection,
machine learning,
supervised learning.
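The four metrics recorded in the study can all be computed from the binary confusion counts; the sketch below uses hypothetical labels, with 1 marking fraud:

```python
# Accuracy, precision, recall, and F1 from the confusion counts of a binary
# fraud / not-fraud classifier.

def binary_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy  = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall    = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Toy example: 1 = fraud, 0 = legitimate.
m = binary_metrics([1, 0, 1, 1, 0, 0, 1, 0],
                   [1, 0, 0, 1, 0, 1, 1, 0])
```

On imbalanced fraud data, accuracy alone is misleading, which is why precision, recall, and F1 are recorded alongside it.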

##### 1673 The Possibility of Solving a 3x3 Rubik’s Cube under 3 Seconds

**Authors:**
Chung To Kong,
Siu Ming Yiu

**Abstract:**

The Rubik's cube was invented in 1974. Since then, speedcubers all over the world have tried their best to break the world record again and again. The newest record is 3.47 seconds. Many factors affect the timing, including turns per second (tps), the algorithm, finger tricks, and the hardware of the cube. In this paper, the lower bound of the cube-solving time is discussed using convex optimization. Extended analysis of the world records is used to understand how to improve the timing. With an understanding of each part of the solving steps, the paper suggests a list of speed-improvement techniques. Based on the analysis of the world record, there is a high possibility that the 3-second mark will be broken soon.

**Keywords:**
Rubik’s cube,
convex optimization,
speed cubing,
CFOP.

##### 1672 Matrix Completion with Heterogeneous Observation Cost Using Sparsity-Number of Column-Space

**Authors:**
Ilqar Ramazanli

**Abstract:**

The matrix completion problem has been studied broadly under many underlying conditions. In many real-life scenarios, we could expect entries from distinct columns or distinct positions to have different costs. In this paper, we explore this generalization under adaptive conditions. We approach the problem under two cost models. In the first, entries from different columns have different observation costs, but within the same column each entry has a uniform cost. In the second, any two entries may have different observation costs, whether they lie in the same column or not. We provide complexity analysis of our algorithms and provide tightness guarantees.

**Keywords:**
Matrix completion,
adaptive learning,
heterogeneous cost,
Matroid optimization.

##### 1671 Using Statistical Significance and Prediction to Test Long/Short Term Public Services and Patients Cohorts: A Case Study in Scotland

**Authors:**
Sotirios Raptis

**Abstract:**

Health and Social care (HSc) services planning and scheduling are facing unprecedented challenges due to pandemic pressure, and also suffer from unplanned spending negatively impacted by the global financial crisis. Data-driven approaches can help improve policies and plan and design service provision schedules, using algorithms that help healthcare managers face unexpected demand with fewer resources. The paper discusses service packing using statistical significance tests and machine learning (ML) to evaluate demand similarity and coupling. This is achieved by predicting the range of the demand (class) using ML methods such as Classification and Regression Trees (CART), Random Forests (RF), and Logistic Regression (LGR). The Chi-Squared and Student's t significance tests are used on data spanning the 39 years for which data exist for services delivered in Scotland. The demands are associated using probabilities and form parts of statistical hypotheses. These hypotheses, as their null part, assume that the target demand is statistically dependent on other services' demands, and this linking is checked against the data. In addition, ML methods are used to linearly predict the target demands from the statistically found associations and to extend the linear dependence of the target's demand to independent demands, thus forming groups of services. Statistical tests confirmed the ML coupling, made the prediction statistically meaningful, and proved that a target service can be matched reliably to other services, while ML showed that such marked relationships can also be linear. Zero padding was used for missing years' records and illustrated such relationships better, both for limited year ranges and for the entire span, offering long-term data visualizations, while limited-year periods explained how well patient numbers can be related over short periods of time, or how they can change over time as opposed to behaviours across more years.
The prediction performance of the associations was measured using metrics such as the Receiver Operating Characteristic (ROC), Area Under the Curve (AUC), and Accuracy (ACC), as well as the Chi-Squared and Student's t statistical tests. Co-plots and comparison tables for the RF, CART, and LGR methods, as well as the p-values from the tests and Information Exchange (IE/MIE) measures, are provided, showing the relative performance of the ML methods and of the statistical tests, as well as the behaviour at different learning ratios. The impact of k-nearest neighbours classification (k-NN), Cross-Correlation (CC), and C-Means (CM) first groupings was also studied over limited years and over the entire span. It was found that CART was generally behind RF and LGR, but in some interesting cases LGR reached an AUC = 0, falling below CART, while the ACC was as high as 0.912, showing that ML methods can be confused by zero padding, by irregularities in the data, or by outliers. On average, 3 linear predictors were sufficient; LGR was found to compete well with RF, and CART followed with the same performance at higher learning ratios. Services were packed only when the significance level (p-value) of their association coefficient was more than 0.05. Social-factor relationships were observed between home care services and the treatment of old people, low birth weights, alcoholism, drug abuse, and emergency admissions. The work found that different HSc services can be packed well into plans of limited duration, across various service sectors and learning configurations, as confirmed by using statistical hypotheses.

**Keywords:**
Class,
cohorts,
data frames,
grouping,
prediction,
probabilities,
services.
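The Chi-Squared association step can be sketched in a few lines. The sketch below uses toy yearly series (not the Scottish services data) and a hypothetical high/low binning; the resulting statistic would be compared against a chi-squared critical value to accept or reject the coupling:

```python
# Pearson chi-squared statistic for testing whether two binned yearly demand
# series are associated (and hence candidates for packing).

def chi_squared_2x2(table):
    """Pearson chi-squared statistic for a 2x2 contingency table."""
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    total = sum(row)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / total
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

def to_high_low(series):
    """Bin a yearly series into 1 (above mean) / 0 (below mean)."""
    mean = sum(series) / len(series)
    return [1 if v > mean else 0 for v in series]

a = to_high_low([3, 9, 8, 2, 7, 1, 9, 2])   # toy demand series A
b = to_high_low([4, 8, 9, 3, 8, 2, 7, 1])   # toy demand series B
table = [[sum(1 for x, y in zip(a, b) if x == i and y == j)
          for j in (0, 1)] for i in (0, 1)]
stat = chi_squared_2x2(table)   # compare against a chi-squared critical value
```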

##### 1670 A Large Dataset Imputation Approach Applied to Country Conflict Prediction Data

**Authors:**
Benjamin D. Leiby,
Darryl K. Ahner

**Abstract:**

This study demonstrates an alternative stochastic imputation approach for large datasets when preferred commercial packages struggle to iterate due to numerical problems. A large country conflict dataset motivates the search to impute missing values well over a common threshold of 20% missingness. The methodology capitalizes on correlation while using model residuals to provide the uncertainty in estimating unknown values. Examination of the methodology provides insight toward choosing linear or nonlinear modeling terms. Static tolerances common in most packages are replaced with tailorable tolerances that exploit residuals to fit each data element. The methodology evaluation includes observing computation time, model fit, and the comparison of known values to replaced values created through imputation. Overall, the country conflict dataset illustrates promise with modeling first-order interactions, while presenting a need for further refinement that mimics predictive mean matching.

**Keywords:**
Correlation,
country conflict,
imputation,
stochastic regression.
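The stochastic-regression step described above can be sketched as follows, on synthetic data rather than the country conflict dataset: fit a linear model on the complete cases, then replace each missing value with the model prediction plus a residual drawn from the fitted model's residuals, so the imputed values carry the fit's uncertainty:

```python
# Stochastic regression imputation on a synthetic two-column dataset.

import random

random.seed(0)

# Synthetic data: y ~ 2x + 1 + noise, with 20% of y missing.
x = [i / 10 for i in range(100)]
y = [2 * xi + 1 + random.gauss(0, 0.3) for xi in x]
missing = set(random.sample(range(100), 20))
y_obs = [None if i in missing else yi for i, yi in enumerate(y)]

# Least-squares fit on the complete cases.
pairs = [(xi, yi) for xi, yi in zip(x, y_obs) if yi is not None]
n = len(pairs)
mx = sum(p[0] for p in pairs) / n
my = sum(p[1] for p in pairs) / n
beta = (sum((xi - mx) * (yi - my) for xi, yi in pairs)
        / sum((xi - mx) ** 2 for xi, _ in pairs))
alpha = my - beta * mx
residuals = [yi - (alpha + beta * xi) for xi, yi in pairs]

# Impute: model prediction plus a randomly drawn model residual.
y_imputed = [(alpha + beta * xi + random.choice(residuals)) if yi is None else yi
             for xi, yi in zip(x, y_obs)]
```

Drawing from the empirical residuals, rather than imputing the bare prediction, is what keeps the imputed column's spread realistic; the paper's tailorable tolerances refine this same residual-based idea.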

##### 1669 On Deterministic Chaos: Disclosing the Missing Mathematics from the Lorenz-Haken Equations

**Authors:**
Belkacem Meziane

**Abstract:**

The original 3D Lorenz-Haken equations, which describe laser dynamics, are converted into two second-order differential equations, out of which the so-far-missing mathematics is extracted. Leaning on high-order trigonometry, important outcomes are drawn: a fundamental result attributes chaos to forbidden periodic solutions inside a precisely delimited region of the control-parameter space that governs self-pulsing.

**Keywords:**
chaos,
Lorenz-Haken equations,
laser dynamics,
nonlinearities.

##### 1668 Freighter Aircraft Selection Using Entropic Programming for Multiple Criteria Decision Making Analysis

**Authors:**
C. Ardil

**Abstract:**

This paper proposes entropic programming for the freighter aircraft selection problem using the multiple criteria decision analysis method. The study aims to propose a systematic and comprehensive framework focusing on the perspective of freighter aircraft selection. To achieve this goal, an integrated entropic programming approach was proposed to evaluate and rank the alternatives. The decision criteria and aircraft alternatives were identified from the research data analysis. The objective criteria weights were determined by the mean weight method and the standard deviation method. The proposed entropic programming model was applied to a practical decision problem for evaluating and selecting freighter aircraft. The proposed entropic programming technique gives robust, reliable, and efficient results in modeling decision-making analysis problems. As a result of the entropic programming analysis, the Boeing B747-8F, a freighter aircraft alternative (a3), was chosen as the most suitable freighter aircraft candidate.

**Keywords:**
entropic programming,
additive weighted model,
multiple criteria decision making analysis,
MCDMA,
TOPSIS,
aircraft selection,
freighter aircraft,
Boeing B747-8F,
Boeing B777F,
Airbus A350F.
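Entropic weighting, which the method builds on, assigns larger objective weights to criteria whose values vary more across alternatives; the sketch below uses a toy decision matrix, and the paper's exact formulation, criteria, and alternatives may differ:

```python
# Entropy-based objective criterion weights: normalize each criterion column
# to a distribution, compute its Shannon entropy, and weight criteria by
# their divergence (1 - entropy), renormalized to sum to 1.

import math

# Rows = alternatives, columns = decision criteria (hypothetical values).
matrix = [
    [0.7, 120, 4.5],
    [0.9, 100, 3.8],
    [0.8, 140, 4.9],
]

m, n = len(matrix), len(matrix[0])

# Normalize each column to a probability distribution.
col_sums = [sum(row[j] for row in matrix) for j in range(n)]
p = [[row[j] / col_sums[j] for j in range(n)] for row in matrix]

# Shannon entropy per criterion, scaled to [0, 1] by ln(m).
k = 1.0 / math.log(m)
entropy = [-k * sum(p[i][j] * math.log(p[i][j]) for i in range(m))
           for j in range(n)]

# Degree of divergence -> normalized objective weights.
divergence = [1 - e for e in entropy]
weights = [d / sum(divergence) for d in divergence]
```

A criterion on which all alternatives score identically has entropy 1 and hence weight 0: it carries no information for discriminating between candidates.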

##### 1667 On the Algorithmic Iterative Solutions of Conjugate Gradient, Gauss-Seidel and Jacobi Methods for Solving Systems of Linear Equations

**Authors:**
H. D. Ibrahim,
H. C. Chinwenyi,
H. N. Ude

**Abstract:**

In this paper, we examine and compare the algorithmic iterative solutions of the conjugate gradient method against other methods, such as the Gauss-Seidel and Jacobi approaches, for solving systems of linear equations of the form Ax = b, where A is a real n x n symmetric and positive definite matrix. We performed the algorithmic iterative steps and obtained analytical solutions of a typical 3 x 3 symmetric and positive definite matrix using the three methods described in this paper (the Gauss-Seidel, Jacobi, and Conjugate Gradient methods). From the results obtained, we found that the Conjugate Gradient method converges to the exact solution in fewer iterative steps than the other two methods, which required many more iterations and much more time while only tending toward the exact solution.

**Keywords:**
conjugate gradient,
linear equations,
symmetric and positive definite matrix,
Gauss-Seidel,
Jacobi,
algorithm.
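The comparison can be illustrated on a hypothetical 3 x 3 symmetric positive definite system (not the one used in the paper); in exact arithmetic, conjugate gradient terminates in at most n = 3 steps, while Jacobi only tends toward the solution:

```python
# Jacobi vs. conjugate gradient iteration counts on a small SPD system Ax = b.

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def jacobi(A, b, tol=1e-10, max_iter=1000):
    x = [0.0] * len(b)
    for it in range(1, max_iter + 1):
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(len(b)) if j != i))
                 / A[i][i] for i in range(len(b))]
        if max(abs(u - v) for u, v in zip(x_new, x)) < tol:
            return x_new, it
        x = x_new
    return x, max_iter

def conjugate_gradient(A, b, tol=1e-10):
    x = [0.0] * len(b)
    r = b[:]                          # residual b - Ax for x0 = 0
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for it in range(1, len(b) + 1):   # exact in at most n steps for SPD A
        Ap = matvec(A, p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new ** 0.5 < tol:
            return x, it
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x, len(b)

x_j, it_j = jacobi(A, b)
x_cg, it_cg = conjugate_gradient(A, b)   # finishes in at most 3 iterations
```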

##### 1666 Adaptive Kalman Filter for Noise Estimation and Identification with Bayesian Approach

**Authors:**
Farhad Asadi,
S. Hossein Sadati

**Abstract:**

The Bayesian approach can be used for parameter identification and extraction in state-space models, and its ability to analyze sequences of data in dynamical systems has been proved in the literature. In this paper, an adaptive Kalman filter with a Bayesian approach for identifying the variance of the measurement noise is developed and then applied to the estimation of the dynamical state and measurement data in a discrete linear dynamical system. At each time step, the algorithm estimates the measurement-noise variance and the state of the system with a Kalman filter. An approximation is then designed at each step separately, and consequently the sufficient statistics of the state and noise variances are computed with a fixed-point iteration of the adaptive Kalman filter. Different simulations show the influence of the measurement-noise variance on the algorithm. First, the effect of the noise variance and its distribution on detection and identification performance is simulated in a Kalman filter without the Bayesian formulation. Then, the simulation is applied to the adaptive Kalman filter with the ability to track the noise variance of the measurement data. In these simulations, the influence of the noise distribution of the measurement data at each step is estimated, and the true variance of the data is obtained by the algorithm and compared across different scenarios. Afterwards, a typical nonlinear state-space model with induced measurement noise is simulated with this approach. Finally, the performance and the important limitations of the algorithm in these simulations are explained.

**Keywords:**
adaptive filtering,
Bayesian approach,
Kalman filtering,
variance tracking.
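A minimal sketch of the idea, on a toy scalar system rather than the paper's formulation: the filter maintains a running estimate of the measurement-noise variance R from the innovation sequence (using E[innovation²] = P + R) and feeds that estimate back into the Kalman gain. All values below are hypothetical:

```python
# Adaptive scalar Kalman filter that tracks the measurement-noise variance R
# online while estimating a constant state.

import random

random.seed(1)

true_state = 5.0
true_R = 4.0                 # measurement-noise variance to be recovered
Q = 1e-4                     # small process-noise variance

x, P = 0.0, 100.0            # state estimate and its error variance
R_est, n = 1.0, 0            # running estimate of R and sample count

for _ in range(5000):
    z = true_state + random.gauss(0.0, true_R ** 0.5)  # noisy measurement

    P = P + Q                # predict step (static state, F = 1)

    # Innovation-based adaptation: E[innovation^2] = P + R (H = 1), so
    # innovation^2 - P is a one-sample estimate of R; average it online
    # and clamp it at a small positive floor before use.
    innov = z - x
    n += 1
    R_est += ((innov * innov - P) - R_est) / n
    R_used = max(R_est, 0.1)

    K = P / (P + R_used)     # Kalman gain with the adapted R
    x = x + K * innov        # state update
    P = (1 - K) * P          # error-variance update
```

This innovation-averaging rule is a common stand-in for the paper's fixed-point Bayesian update; both exploit the same identity relating the innovation variance to P and R.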

##### 1665 Military Fighter Aircraft Selection Using Multiplicative Multiple Criteria Decision Making Analysis Method

**Authors:**
C. Ardil

**Abstract:**

The multiplicative multiple criteria decision-making analysis (MCDMA) method is a systematic decision support system that helps decision makers reach appropriate decisions. The application of multiplicative MCDMA to the military aircraft selection problem is significant for a proper decision-making process, which is the decisive factor in minimizing expenditures and increasing defense capability and capacity. Nine military fighter aircraft alternatives were evaluated against ten decision criteria to solve the decision-making problem. In this study, the multiplicative MCDMA model aims to evaluate and select an appropriate military fighter aircraft for Air Force fleet planning. The ranking results of the multiplicative MCDMA model were compared with the ranking results of the additive MCDMA, logarithmic MCDMA, and regrettive MCDMA models under the L2 norm data normalization technique to substantiate the robustness of the proposed method. The final ranking results indicate the military fighter aircraft Su-57 as the best available solution.

**Keywords:**
Aircraft Selection,
Military Fighter Aircraft Selection,
Air Force Fleet Planning,
Multiplicative MCDMA,
Additive MCDMA,
Logarithmic MCDMA,
Regrettive MCDMA,
Mean Weight,
Multiple Criteria Decision Making Analysis,
Sensitivity Analysis.
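The multiplicative (weighted product) scoring at the core of the method can be sketched as follows, with hypothetical normalized scores and weights rather than the study's nine aircraft and ten criteria:

```python
# Weighted product model: each alternative's score is the product of its
# normalized criterion values raised to the criterion weights.

# Rows = aircraft alternatives, columns = benefit criteria (toy values in (0, 1]).
alternatives = {
    "a1": [0.8, 0.6, 0.9],
    "a2": [0.7, 0.9, 0.8],
    "a3": [0.9, 0.8, 0.7],
}
weights = [0.5, 0.3, 0.2]   # criterion weights, summing to 1

def weighted_product_score(values, weights):
    score = 1.0
    for v, w in zip(values, weights):
        score *= v ** w
    return score

ranking = sorted(alternatives,
                 key=lambda a: weighted_product_score(alternatives[a], weights),
                 reverse=True)
```

Because each score is a product of powers, rescaling any criterion column by a positive constant multiplies every alternative's score by the same factor, so the ranking is unaffected; this invariance is the usual motivation for the multiplicative form over the additive one.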

##### 1664 Discrete Breeding Swarm for Cost Minimization of Parallel Job Shop Scheduling Problem

**Authors:**
Tarek Aboueldah,
Hanan Farag

**Abstract:**

The Parallel Job Shop Scheduling Problem (JSSP) is a multi-objective, multi-constraint NP optimization problem. Traditional Artificial Intelligence techniques have been widely used; however, they can be trapped in a local minimum without reaching the optimal solution. Thus, we propose a hybrid Artificial Intelligence (AI) model in which Discrete Breeding Swarm (DBS) is added to traditional AI to avoid this trapping. The model is applied to cost minimization of the Car Sequencing and Operator Allocation (CSOA) problem. Practical experiments show that our model outperforms other techniques in cost minimization.

**Keywords:**
Parallel Job Shop Scheduling Problem,
Artificial Intelligence,
Discrete Breeding Swarm,
Car Sequencing and Operator Allocation,
cost minimization.

##### 1663 The Contribution of Edgeworth, Bootstrap and Monte Carlo Methods in Financial Data

**Authors:**
Edlira Donefski,
Tina Donefski,
Lorenc Ekonomi

**Abstract:**

The Edgeworth approximation, the bootstrap, and Monte Carlo simulations have a considerable impact on achieving certain results in different problems under study. In our paper, we treat a financial case concerning the effect that the components of the cash flow of one of the most successful businesses in the world (the financing activity, the operational activity, and the investing activity) have on the cash and cash equivalents at the end of the three-month period. To get a better view of this case, we created a Vector Autoregression model, and then generated the impulse responses in terms of asymptotic analysis (Edgeworth approximation), Monte Carlo simulations, and residual bootstrap based on the standard errors of every series created. The generated results showed common tendencies across the three methods applied, which consequently verified the advantage of these methods in the optimization of a model that contains many variants.

**Keywords:**
Autoregression,
Bootstrap,
Edgeworth Expansion,
Monte Carlo Method.

##### 1662 Identifying Network Subgraph-Associated Essential Genes in Molecular Networks

**Authors:**
Efendi Zaenudin,
Chien-Hung Huang,
Ka-Lok Ng

**Abstract:**

Essential genes play an important role in the survival of an organism. It has been shown that cancer-associated essential genes are genes necessary for cancer cell proliferation, where these genes are potential therapeutic targets. Also, it was demonstrated that mutations of the cancer-associated essential genes give rise to the resistance of immunotherapy for patients with tumors. In the present study, we focus on studying the biological effects of the essential genes from a network perspective. We hypothesize that one can analyze a biological molecular network by decomposing it into both three-node and four-node digraphs (subgraphs). These network subgraphs encode the regulatory interaction information among the network’s genetic elements. In this study, the frequency of occurrence of the subgraph-associated essential genes in a molecular network was quantified by using the statistical parameter, odds ratio. Biological effects of subgraph-associated essential genes are discussed. In summary, the subgraph approach provides a systematic method for analyzing molecular networks and it can capture useful biological information for biomedical research.

**Keywords:**
Biological molecular networks,
essential genes,
graph theory,
network subgraphs.
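The odds-ratio quantification mentioned above reduces to a 2 x 2 cross-product ratio; the counts below are hypothetical, not the study's:

```python
# Odds ratio measuring how strongly essential genes are enriched in a given
# network-subgraph type, from a 2x2 contingency table of gene counts.

def odds_ratio(a, b, c, d):
    """a = essential genes inside the subgraph type, b = essential outside,
    c = non-essential inside, d = non-essential outside."""
    return (a * d) / (b * c)

ratio = odds_ratio(30, 70, 10, 90)   # a ratio > 1 suggests enrichment
```

In practice the ratio is paired with a confidence interval or a significance test before concluding that a subgraph type is genuinely enriched for essential genes.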

##### 1661 Bayesian Geostatistical Modelling of COVID-19 Datasets

**Authors:**
I. Oloyede

**Abstract:**

The COVID-19 dataset was obtained by extracting the weather, longitude, latitude, ISO 3166 code, cases, and deaths of coronavirus patients across the globe. The data were extracted over a period of eight days, choosing a uniform time within the specified period. Mappings of cases and deaths with reference to continents were then obtained. Bayesian geostatistical modelling was carried out on the dataset. The study found that countries in the tropical region suffered fewer deaths/attacks compared to countries in the temperate region, which is attributed to the high temperature in the tropical region.

**Keywords:**
COVID-19,
Bayesian,
geostatistical modelling,
prior,
posterior.

##### 1660 The Fallacy around Inserting Brackets to Evaluate Expressions Involving Multiplication and Division

**Authors:**
Manduth Ramchander

**Abstract:**

Evaluating expressions involving multiplication and division can give rise to the fallacy that brackets can be arbitrarily inserted into expressions involving multiplication and division. The aim of this article was to draw upon mathematical theory to prove that brackets cannot be arbitrarily inserted into expressions involving multiplication and division and in particular in expressions where division precedes multiplication. In doing so, it demonstrates that the notion that two different answers are possible, when evaluating expressions involving multiplication and division, is indeed a false one. Searches conducted in a number of scholarly databases unearthed the rules to be applied when removing brackets from expressions, which revealed that consideration needs to be given to sign changes when brackets are removed. The rule pertaining to expressions involving multiplication and division was then extended upon, in its reverse format, to prove that brackets cannot be arbitrarily inserted into expressions involving multiplication and division. The application of the rule demonstrates that an expression involving multiplication and division can have only one correct answer. It is recommended that both the rule and its reverse be included in the curriculum, preferably at the juncture when manipulation with brackets is introduced.

**Keywords:**
Brackets,
multiplication,
division,
operations,
order.
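The fallacy is easy to demonstrate numerically; a two-line check (the example values are mine, not the article's) shows the two readings disagree, so the bracketed form cannot be equivalent to the original:

```python
# Inserting brackets into an expression where division precedes
# multiplication changes its value.

original = 8 / 4 * 2          # evaluated left to right: (8 / 4) * 2
bracketed = 8 / (4 * 2)       # brackets inserted arbitrarily

print(original)    # 4.0
print(bracketed)   # 1.0
```

Left-to-right evaluation of operators of equal precedence yields the single correct answer, 4.0; the bracketed reading, 1.0, arises only from the invalid insertion the article refutes.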

##### 1659 Hermite–Hadamard Type Integral Inequalities Involving k–Riemann–Liouville Fractional Integrals and Their Applications

**Authors:**
Artion Kashuri,
Rozana Liko

**Abstract:**

**Keywords:**
Hermite–Hadamard’s inequalities,
k–Riemann–Liouville fractional integral,
Hölder’s inequality,
Special means.

##### 1658 Integral Domains and Their Algebras: Topological Aspects

**Authors:**
Shai Sarussi

**Abstract:**

**Keywords:**
Algebras over integral domains,
Alexandroff topology,
valuation domains,
integral domains.

##### 1657 Algebras over an Integral Domain and Immediate Neighbors

**Authors:**
Shai Sarussi

**Abstract:**

**Keywords:**
Algebras over integral domains,
Alexandroff topology,
immediate neighbors,
integral domains.

##### 1656 Utility Analysis of API Economy Based on Multi-Sided Platform Markets Model

**Authors:**
Mami Sugiura,
Shinichi Arakawa,
Masayuki Murata,
Satoshi Imai,
Toru Katagiri,
Motoyoshi Sekiya

**Abstract:**

**Keywords:**
API economy,
multi-sided markets,
API evaluator,
platform,
platform provider.

##### 1655 Model of Optimal Centroids Approach for Multivariate Data Classification

**Authors:**
Pham Van Nha,
Le Cam Binh

**Abstract:**

**Keywords:**
Analysis of optimization,
artificial intelligence-based optimization,
optimization for learning and data analysis,
global optimization.

##### 1654 Derivation of Fractional Black-Scholes Equations Driven by Fractional G-Brownian Motion and Their Application in European Option Pricing

**Authors:**
Changhong Guo,
Shaomei Fang,
Yong He

**Abstract:**

**Keywords:**
European option pricing,
fractional Black-Scholes equations,
fractional G-Brownian motion,
Taylor’s series of fractional order,
uncertain volatility.

##### 1653 Multi-Criteria Based Robust Markowitz Model under Box Uncertainty

**Authors:**
Pulak Swain,
A. K. Ojha

**Abstract:**

**Keywords:**
Portfolio optimization,
multi-objective optimization,
E-constraint method,
box uncertainty,
robust optimization.

##### 1652 The Non-Uniqueness of Partial Differential Equations Options Price Valuation Formula for Heston Stochastic Volatility Model

**Authors:**
H. D. Ibrahim,
H. C. Chinwenyi,
T. Danjuma

**Abstract:**

An option is defined as a financial contract that provides the holder the right, but not the obligation, to buy or sell a specified quantity of an underlying asset in the future at a fixed price (called the strike price) on or before the expiration date of the option. This paper examines two approaches to deriving the Partial Differential Equation (PDE) option price valuation formula for the Heston stochastic volatility model. We obtain PDE option price valuation formulas using the riskless portfolio method and the application of the Feynman-Kac theorem, respectively. From the results obtained, we see that the two derived PDEs for the Heston model are distinct and non-unique. This establishes the incompleteness of the model for option price valuation.

**Keywords:**
Option price valuation,
Partial Differential Equations,
Black-Scholes PDEs,
Ito process.
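For reference, the Heston stochastic volatility dynamics referred to above are conventionally written as a pair of correlated SDEs (standard notation, not taken from the paper):

```latex
dS_t = \mu S_t\,dt + \sqrt{v_t}\,S_t\,dW_t^{(1)}, \qquad
dv_t = \kappa(\theta - v_t)\,dt + \sigma\sqrt{v_t}\,dW_t^{(2)}, \qquad
d\langle W^{(1)}, W^{(2)}\rangle_t = \rho\,dt,
```

where $S_t$ is the asset price, $v_t$ the instantaneous variance, $\kappa$ the mean-reversion speed, $\theta$ the long-run variance, $\sigma$ the volatility of variance, and $\rho$ the correlation of the two Brownian motions. The incompleteness noted in the abstract stems from volatility being a second, non-traded source of risk in these dynamics.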

##### 1651 Step Method for Solving Nonlinear Two Delays Differential Equation in Parkinson’s Disease

**Authors:**
H. N. Agiza,
M. A. Sohaly,
M. A. Elfouly

**Abstract:**

Parkinson's disease (PD) is a heterogeneous disorder with common age of onset, symptoms, and progression levels. In this paper, we solve the PD model analytically as a nonlinear delay differential equation using the method of steps. The method of steps transforms a system of delay differential equations (DDEs) into systems of ordinary differential equations (ODEs). In some numerical examples the analytical solution is difficult to obtain, so we approximate it using the Picard and Taylor methods for the resulting ODEs.

**Keywords:**
Parkinson's disease,
Step method,
delay differential equation,
simulation.
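The method of steps can be sketched on the textbook test equation y'(t) = −y(t − 1) with constant history y(t) = 1 for t ≤ 0 (a stand-in, not the PD model): on each interval [k, k + 1] the delayed term is already known from the previous interval, so the DDE reduces to an ODE, integrated here with Euler steps:

```python
# Method of steps for the DDE y'(t) = -y(t - 1), y(t) = 1 for t <= 0:
# solve successive unit intervals, each as an ordinary ODE whose delayed
# term comes from the segment computed one interval earlier.

N = 1000                      # Euler steps per unit interval
h = 1.0 / N

history = [1.0] * (N + 1)     # y on [-1, 0]: constant history
segments = [history]

for k in range(3):            # solve on [0,1], [1,2], [2,3]
    prev = segments[-1]       # y on [k-1, k], supplying y(t - 1)
    y = [prev[-1]]            # continuity at t = k
    for i in range(N):
        y.append(y[-1] - h * prev[i])   # Euler step for y' = -y(t - 1)
    segments.append(y)

y1 = segments[1][-1]          # y(1); exact value is 1 - 1 = 0
```

On [0, 1] the exact solution is y = 1 − t, so y(1) = 0 and y(2) = −1/2; the sketch reproduces both, segment by segment, which is exactly the reduction to ODEs that the Picard and Taylor approximations in the paper operate on.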

##### 1650 Dual-Actuated Vibration Isolation Technology for a Rotary System’s Position Control on a Vibrating Frame: Disturbance Rejection and Active Damping

**Authors:**
Kamand Bagherian,
Nariman Niknejad

**Abstract:**

**Keywords:**
Vibration isolation,
position control,
discrete-time nonlinear controller,
active damping,
disturbance tracking algorithm,
oscillation transmitting support,
stability robustness.

##### 1649 Hybrid Equity Warrants Pricing Formulation under Stochastic Dynamics

**Authors:**
Teh Raihana Nazirah Roslan,
Siti Zulaiha Ibrahim,
Sharmila Karim

**Abstract:**

A warrant is a financial contract that confers the right, but not the obligation, to buy or sell a security at a certain price before expiration. The standard procedure of valuing equity warrants using call option pricing models such as the Black–Scholes model has been proven to contain many flaws, such as the assumptions of constant interest rate and constant volatility. In fact, existing alternative models were found to focus more on demonstrating techniques for pricing than on empirical testing. Therefore, a mathematical model for pricing and analyzing equity warrants which comprises stochastic interest rates and stochastic volatility is essential to incorporate the dynamic relationships between the identified variables and to illustrate the real market. Here, the aim is to develop dynamic pricing formulations for hybrid equity warrants by incorporating stochastic interest rates from the Cox-Ingersoll-Ross (CIR) model, along with stochastic volatility from the Heston model. The development of the model involves the derivation of the stochastic differential equations that govern the model dynamics. The resulting equations, which involve a Cauchy problem and heat equations, are then solved using partial differential equation approaches. The analytical pricing formulas obtained in this study comply with the form of the analytical expressions embedded in the Black-Scholes model and other existing pricing models for equity warrants. This facilitates the practicality of the proposed formula for comparison purposes and further empirical study.

**Keywords:**
Cox-Ingersoll-Ross model,
equity warrants,
Heston model,
hybrid models,
stochastic.