Abstracts | Mathematical and Computational Sciences
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1394

World Academy of Science, Engineering and Technology

[Mathematical and Computational Sciences]

Online ISSN: 1307-6892

1034 Operational Matrix Method for Fuzzy Fractional Reaction Diffusion Equation

Authors: Sachin Kumar

Abstract:

The fuzzy fractional diffusion equation is widely used to describe physical processes arising in physics, biology, and hydrology. The aim of this article is to deal with this equation. We study a mathematical model of the fuzzy space-time fractional diffusion equation in which the unknown function, the coefficients, and the initial-boundary conditions are fuzzy numbers. First, we derive a fuzzy Legendre operational matrix for the Caputo-type fuzzy fractional derivative with a non-singular Mittag-Leffler kernel. The main advantage of this method is that it reduces the fuzzy fractional partial differential equation (FFPDE) to a system of fuzzy algebraic equations from which the solution of the problem can be obtained. The feasibility of our approach is shown by numerical examples. Hence, the method is well suited to FFPDEs and has good accuracy.

Keywords: fractional PDE, fuzzy valued function, diffusion equation, Legendre polynomial, spectral method

Procedia PDF Downloads 160
1033 Robust Numerical Scheme for Pricing American Options under Jump Diffusion Models

Authors: Salah Alrabeei, Mohammad Yousuf

Abstract:

The goal of option pricing theory is to help investors manage their money, enhance returns, and control their financial future by theoretically valuing their options. However, most option pricing models have no analytical solution. Furthermore, not all numerical methods are efficient for these models because the payoffs are non-smooth or have discontinuous derivatives at the exercise price. In this paper, we solve the American option under jump diffusion models using efficient time-dependent numerical methods. Several techniques are integrated to overcome the computational complexity. The Fast Fourier Transform (FFT) algorithm is used as a matrix-vector multiplication solver, which reduces the complexity from O(M²) to O(M log M). A partial fraction decomposition technique is applied to rational approximation schemes to overcome the complexity of inverting matrix polynomials. The proposed method is easy to implement in serial or parallel versions. Numerical results are presented to demonstrate the accuracy and efficiency of the proposed method.
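
The complexity reduction comes from the fact that the discretized jump-integral term has Toeplitz structure, so its product with a vector can be embedded in a circulant matrix and computed with the FFT. A minimal sketch of that building block is given below (illustrative only, with a made-up kernel; it is not the authors' implementation):

    # Fast multiplication of a Toeplitz matrix with a vector via circulant
    # embedding and the FFT, reducing the cost from O(M^2) to O(M log M).
    import numpy as np
    from scipy.linalg import toeplitz

    def toeplitz_matvec_fft(first_col, first_row, x):
        """Compute T @ x where T = toeplitz(first_col, first_row)."""
        m = len(x)
        # First column of the 2m x 2m circulant matrix that embeds T
        c = np.concatenate([first_col, [0.0], first_row[:0:-1]])
        x_pad = np.concatenate([x, np.zeros(m)])
        y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x_pad))
        return y[:m].real

    # Quick check against the dense O(M^2) product, with a made-up decaying kernel
    m = 512
    col = np.exp(-0.1 * np.arange(m))
    row = np.exp(-0.2 * np.arange(m))
    row[0] = col[0]
    x = np.random.default_rng(0).standard_normal(m)
    dense = toeplitz(col, row) @ x
    fast = toeplitz_matvec_fft(col, row, x)
    print(np.allclose(dense, fast))   # True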

Keywords: integral differential equations, jump–diffusion model, American options, rational approximation

Procedia PDF Downloads 90
1032 Inverse Scattering for a Second-Order Discrete System via Transmission Eigenvalues

Authors: Abdon Choque-Rivero

Abstract:

The Jacobi system with the Dirichlet boundary condition is considered on a half-line lattice when the coefficients are real valued. The inverse problem of recovery of the coefficients from various data sets containing the so-called transmission eigenvalues is analyzed. The Marchenko method is utilized to solve the corresponding inverse problem.

Keywords: inverse scattering, discrete system, transmission eigenvalues, Marchenko method

Procedia PDF Downloads 115
1031 The Structure of Invariant Manifolds after a Supercritical Hamiltonian Hopf Bifurcation

Authors: Matthaios Katsanikas

Abstract:

We study the structure of the invariant manifolds of complex unstable periodic orbits of a family of periodic orbits, in a 3D autonomous Hamiltonian system of galactic type, after a transition of this family from stability to complex instability (Hamiltonian Hopf bifurcation). We consider the case of a supercritical Hamiltonian Hopf bifurcation. The invariant manifolds of complex unstable periodic orbits have two kinds of structures. The first kind is represented by a disk confined structure on the 4D space of section. The second kind is represented by a complicated central tube structure that is associated with an extended network of tube structures, strips and flat structures of sheet type on the 4D space of section.

Keywords: dynamical systems, galactic dynamics, chaos, phase space

Procedia PDF Downloads 113
1030 A Coupled System of Caputo-Type Katugampola Fractional Differential Equations with Integral Boundary Conditions

Authors: Yacine Arioua

Abstract:

In this paper, we investigate the existence and uniqueness of solutions for a coupled system of nonlinear Caputo-type Katugampola fractional differential equations with integral boundary conditions. Based upon the contraction mapping principle and Schauder's fixed point theorem, some new existence and uniqueness results for the given problems are obtained. As an application, some examples are given to illustrate the usefulness of our main results.

Keywords: fractional differential equations, coupled system, Caputo-Katugampola derivative, fixed point theorems, existence, uniqueness

Procedia PDF Downloads 235
1029 Algebraic Characterization of Sheaves over Boolean Spaces

Authors: U. M. Swamy

Abstract:

A compact, Hausdorff, totally disconnected topological space is known as a Boolean space, in view of the Stone duality between Boolean algebras and such topological spaces. A sheaf over X is a triple (S, p, X) where S and X are topological spaces and p is a local homeomorphism of S onto X (that is, for each element s in S, there exist open sets U and G containing s and p(s) in S and X respectively such that the restriction of p to U is a homeomorphism of U onto G). Here we are mainly concerned with sheaves over Boolean spaces. From a given sheaf over a Boolean space, we obtain an algebraic structure in such a way that there is a one-to-one correspondence between these algebraic structures and sheaves over Boolean spaces.

Keywords: Boolean algebra, Boolean space, sheaf, Stone duality

Procedia PDF Downloads 321
1028 Impact Evaluation of Vaccination against Eight Child-Killer Diseases on Under-Five Child Mortality in Mbale District, Uganda

Authors: Lukman Abiodun Nafiu

Abstract:

This study examines the impact of vaccination against eight child-killer diseases on under-five child mortality in Mbale District. It was driven by three specific objectives: to determine the proportion of under-five child mortality due to the eight child-killer diseases relative to total under-five child mortality; to establish the cause-effect relationship between the eight child-killer diseases and under-five child mortality; and to establish the dependence of under-five child mortality on location within Mbale District. A community-based cross-sectional and longitudinal (panel) study design involving both quantitative and qualitative (focus group discussion and in-depth interview) approaches was employed over a period of 36 months. A multi-stage cluster design involving Health Sub-District (HSD), Forms of Ownership (FOO), and Health Facility Centres (HFC) as the first, second, and third stages respectively was used. Data were collected on the eight child-killer diseases, namely measles, pneumonia, pertussis (whooping cough), diphtheria, poliomyelitis (polio), tetanus, Haemophilus influenzae, and rotavirus gastroenteritis, and on mortality among immunized and non-immunized children aged 0-59 months. The children were monitored over a period of 24 months. The study used a sample of 384 children out of all the registered children for each year at Mbale Referral Hospital and other primary health care centres (HCIV, HCIII and HCII) in Mbale District between 2015 and 2019. These children were followed from birth to their current state (living or dead). The data collected in this study were analysed using cross tabulation and the chi-square test. The study concluded that the majority of mothers in Mbale District took their children for immunization, thus reducing the occurrence of under-five child mortality. Overall, 2.3%, 4.6%, 3.1%, 5.4%, 1.5%, 3.8%, 0.0% and 0.0% of under-five children had polio, tetanus, diphtheria, measles, pertussis, pneumonia, Haemophilus influenzae and rotavirus gastroenteritis respectively across all the sub-counties in Mbale District during the period considered. Also, different locations (sub-counties) did not have a significant influence on the occurrence of these eight child-killer diseases among under-five children in Mbale District. The study therefore recommends that the government and agencies continue to work together to implement vaccination programmes and increase access to basic health care, with continuous improvement of social interventions, to improve child survival.
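
The dependence of disease occurrence on location described above is the kind of question a chi-square test of independence on a cross-tabulation answers. A minimal sketch with hypothetical counts (not the study data) follows:

    # Chi-square test of independence between sub-county and occurrence of an
    # under-five disease, using made-up counts purely for illustration.
    import numpy as np
    from scipy.stats import chi2_contingency

    # Rows: sub-counties, columns: [cases, non-cases] among sampled children
    table = np.array([
        [6, 90],
        [4, 92],
        [7, 89],
        [5, 91],
    ])
    chi2, p_value, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
    # A large p-value would indicate no significant dependence of disease
    # occurrence on location, consistent with the study's conclusion.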

Keywords: diseases, mortality, children, vaccination

Procedia PDF Downloads 101
1027 Hybrid Equity Warrants Pricing Formulation under Stochastic Dynamics

Authors: Teh Raihana Nazirah Roslan, Siti Zulaiha Ibrahim, Sharmila Karim

Abstract:

A warrant is a financial contract that confers the right, but not the obligation, to buy or sell a security at a certain price before expiration. The standard procedure of valuing equity warrants using call option pricing models such as the Black-Scholes model has been shown to contain many flaws, such as the assumption of constant interest rate and constant volatility. In fact, existing alternative models were found to focus more on demonstrating pricing techniques rather than on empirical testing. Therefore, a mathematical model for pricing and analyzing equity warrants which comprises stochastic interest rates and stochastic volatility is essential to incorporate the dynamic relationships between the identified variables and reflect the real market. Here, the aim is to develop dynamic pricing formulations for hybrid equity warrants by incorporating stochastic interest rates from the Cox-Ingersoll-Ross (CIR) model, along with stochastic volatility from the Heston model. The development of the model involves the derivation of the stochastic differential equations that govern the model dynamics. The resulting equations, which involve a Cauchy problem and heat equations, are then solved using partial differential equation approaches. The analytical pricing formulas obtained in this study comply with the form of analytical expressions embedded in the Black-Scholes model and other existing pricing models for equity warrants. This facilitates the practicality of the proposed formula for comparison purposes and further empirical study.
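
For readers who prefer a simulation view of the same dynamics, the sketch below runs an Euler-Maruyama Monte Carlo of the hybrid Heston-CIR model; all parameter values are hypothetical, and the paper itself derives closed-form formulas rather than simulating paths:

    # Euler-Maruyama simulation of equity price S under Heston stochastic
    # variance v and a CIR short rate r; a call-style warrant payoff is priced
    # by discounting along each simulated path. Parameters are illustrative.
    import numpy as np

    rng = np.random.default_rng(42)
    T, n_steps, n_paths = 1.0, 252, 20_000
    dt = T / n_steps

    S0, v0, r0 = 100.0, 0.04, 0.03
    kappa_v, theta_v, sigma_v = 2.0, 0.04, 0.3   # Heston variance parameters
    kappa_r, theta_r, sigma_r = 1.5, 0.03, 0.1   # CIR short-rate parameters
    rho = -0.7                                   # equity/variance correlation

    S = np.full(n_paths, S0)
    v = np.full(n_paths, v0)
    r = np.full(n_paths, r0)
    discount = np.zeros(n_paths)

    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
        z3 = rng.standard_normal(n_paths)
        discount += r * dt
        S *= np.exp((r - 0.5 * v) * dt + np.sqrt(v * dt) * z1)
        # Reflection keeps the variance and short rate nonnegative
        v = np.abs(v + kappa_v * (theta_v - v) * dt + sigma_v * np.sqrt(v * dt) * z2)
        r = np.abs(r + kappa_r * (theta_r - r) * dt + sigma_r * np.sqrt(r * dt) * z3)

    K = 100.0                                    # exercise price of the warrant
    payoff = np.maximum(S - K, 0.0)
    price = np.mean(np.exp(-discount) * payoff)
    print(f"Monte Carlo call-style warrant value: {price:.4f}")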

Keywords: Cox-Ingersoll-Ross model, equity warrants, Heston model, hybrid models, stochastic

Procedia PDF Downloads 87
1026 Spatial-Temporal Awareness Approach for Extensive Re-Identification

Authors: Tyng-Rong Roan, Fuji Foo, Wenwey Hseush

Abstract:

Recent developments in AI and edge computing play a critical role in capturing meaningful events such as the detection of an unattended bag. One of the core problems is re-identification across multiple CCTVs. Immediately following the detection of a meaningful event, the objects related to the event must be tracked and traced. In an extensive environment, the challenge becomes severe when the number of CCTVs increases substantially, imposing difficulties in achieving high accuracy while maintaining real-time performance. The algorithm that re-identifies cross-boundary objects for extensive tracking is referred to as Extensive Re-Identification, which emphasizes the issues arising from the complexity behind a great number of CCTVs. The Spatial-Temporal Awareness approach challenges the conventional thinking and concept of operations, which is labor-intensive and time-consuming. The ability to perform Extensive Re-Identification through a multi-sensory network provides next-level insights, creating value beyond traditional risk management.

Keywords: long-short-term memory, re-identification, security critical application, spatial-temporal awareness

Procedia PDF Downloads 90
1025 An Adaptive Conversational AI Approach for Self-Learning

Authors: Airy Huang, Fuji Foo, Aries Prasetya Wibowo

Abstract:

In recent years, the focus of Natural Language Processing (NLP) development has been gradually shifting from the semantics-based approach to a deep learning one, which performs faster with fewer resources. Although it performs well in many applications, the deep learning approach, due to its lack of semantic understanding, has difficulties in noticing and expressing a novel business case within a pre-defined scope. In order to meet the requirements of specific robotic services, the deep learning approach is very labor-intensive and time-consuming. It is very difficult to improve the capabilities of conversational AI in a short time, and it is even more difficult to self-learn from experience to deliver the same service in a better way. In this paper, we present an adaptive conversational AI algorithm that combines both semantic knowledge and deep learning to address this issue by learning new business cases through conversations. After self-learning from experience, the robot adapts to business cases originally out of scope. The idea is to build new or extended robotic services in a systematic and fast-training manner with self-configured programs and constructed dialog flows. For every cycle in which a chatbot (conversational AI) delivers a given set of business cases, it self-measures its performance and revisits every unknown dialog flow to improve the service by retraining with those new business cases. If the training process reaches a bottleneck and incurs difficulties, human personnel are informed and given further instructions; they may retrain the chatbot with newly configured programs or new dialog flows for new services. One approach employs semantic analysis to learn the dialogues for new business cases and then establish the necessary ontology for the new service. With the newly learned programs, it completes the understanding of the reaction behavior and finally uses dialog flows to connect all the understanding results and programs, achieving the goal of the self-learning process. We have developed a chatbot service mounted on a kiosk, with a camera for facial recognition and a directional microphone array for voice capture. The chatbot serves as a concierge with polite conversation for visitors. As a proof of concept, we have demonstrated completion of 90% of reception services with limited self-learning capability.

Keywords: conversational AI, chatbot, dialog management, semantic analysis

Procedia PDF Downloads 101
1024 Air Handling Units Power Consumption Using Generalized Additive Model for Anomaly Detection: A Case Study in a Singapore Campus

Authors: Ju Peng Poh, Jun Yu Charles Lee, Jonathan Chew Hoe Khoo

Abstract:

The emergence of digital twin technology, a digital replica of the physical world, has improved real-time access to data from sensors about the performance of buildings. This digital transformation has opened up many opportunities to improve the management of a building by using the collected data to monitor consumption patterns and energy leakages. One example is the integration of predictive models for anomaly detection. In this paper, we use the GAM (Generalised Additive Model) for anomaly detection of the Air Handling Unit (AHU) power consumption pattern. There is ample research work on the use of GAM for the prediction of power consumption at the office-building and nation-wide level. However, there is limited illustration of its anomaly detection capabilities, prescriptive analytics case studies, and its integration with the latest developments in digital twin technology. In this paper, we applied the general GAM modelling framework to the historical data of the AHU power consumption and cooling load of a building from Jan 2018 to Aug 2019 at an education campus in Singapore to train prediction models that, in turn, yield predicted values and ranges. The historical data are seamlessly extracted from the digital twin for modelling purposes. We enhanced the utility of the GAM model by using it to power a real-time anomaly detection system based on the forward predicted ranges. The magnitude of deviation from the upper and lower bounds of the uncertainty intervals is used to inform and identify anomalous data points, all based on historical data, without explicit intervention from domain experts. Notwithstanding, the domain expert fits in through an optional feedback loop through which iterative data cleansing is performed. After an anomalously high or low level of power consumption is detected, a set of rule-based conditions is evaluated in real time to help determine the next course of action for the facilities manager. The performance of GAM is then compared with other approaches to evaluate its effectiveness. Lastly, we discuss the successful deployment of this approach for the detection of anomalous power consumption patterns and illustrate it with real-world use cases.
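
A minimal sketch of the underlying idea, assuming the pygam package and hypothetical feature names (cooling load and hour of day), is to fit a GAM and flag readings that fall outside its prediction interval:

    # Fit a GAM of AHU power on cooling load and hour of day, then flag
    # readings outside the model's prediction interval as anomalies.
    # The data below are synthetic stand-ins, not the campus dataset.
    import numpy as np
    from pygam import LinearGAM, s

    rng = np.random.default_rng(1)
    X = np.column_stack([rng.uniform(50, 300, 2000), rng.integers(0, 24, 2000)])
    y = 0.3 * X[:, 0] + 5 * np.sin(X[:, 1] / 24 * 2 * np.pi) + rng.normal(0, 3, 2000)

    gam = LinearGAM(s(0) + s(1)).fit(X, y)

    def flag_anomalies(X_new, y_new, width=0.95):
        """Return a boolean mask and deviation magnitude for out-of-range readings."""
        lower, upper = gam.prediction_intervals(X_new, width=width).T
        outside = (y_new < lower) | (y_new > upper)
        deviation = np.where(y_new > upper, y_new - upper,
                             np.where(y_new < lower, lower - y_new, 0.0))
        return outside, deviation

    # Inject one artificially high reading and check that it gets flagged
    mask, dev = flag_anomalies(X[:10], y[:10] + np.array([0, 0, 25, 0, 0, 0, 0, 0, 0, 0]))
    print(mask, dev.round(1))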

Keywords: anomaly detection, digital twin, generalised additive model, GAM, power consumption, supervised learning

Procedia PDF Downloads 125
1023 Finite Element and Split Bregman Methods for Solving a Family of Optimal Control Problem with Partial Differential Equation Constraint

Authors: Mahmoud Lot

Abstract:

In this article, we discuss the solution of an elliptic optimal control problem. First, by using the finite element method, we obtain the discrete form of the problem. The resulting discrete problem is a large-scale constrained optimization problem; solving it with traditional methods is difficult and requires a lot of CPU time and memory. The split Bregman method, however, converts the constrained problem into an unconstrained one, and hence saves time and memory. We then use the split Bregman method to solve this problem, and examples show the speed and accuracy of split Bregman methods for solving these types of problems. We also use the SQP method to solve the examples and compare it with the split Bregman method.
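
The splitting-and-shrinkage mechanics of the split Bregman method can be illustrated on a much smaller problem than the elliptic control one; the sketch below applies it to a 1D total-variation-regularized least-squares problem and is not the paper's solver:

    # Split Bregman iteration for min (mu/2)||u - f||^2 + ||D u||_1, where D is
    # the forward-difference operator: alternate a linear u-subproblem, a
    # soft-thresholding d-subproblem, and a Bregman variable update.
    import numpy as np

    def shrink(x, gamma):
        """Soft-thresholding: closed-form solution of the d-subproblem."""
        return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

    def split_bregman_tv(f, mu=50.0, lam=1.0, n_iter=100):
        n = len(f)
        D = np.diff(np.eye(n), axis=0)          # (n-1) x n difference matrix
        A = mu * np.eye(n) + lam * D.T @ D      # system matrix of the u-subproblem
        u = f.copy()
        d = np.zeros(n - 1)
        b = np.zeros(n - 1)
        for _ in range(n_iter):
            u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))
            d = shrink(D @ u + b, 1.0 / lam)
            b = b + D @ u - d
        return u

    # Noisy piecewise-constant signal: the recovered u should be nearly flat per piece
    rng = np.random.default_rng(0)
    f = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.standard_normal(100)
    u = split_bregman_tv(f)
    print(np.round(u[45:55], 2))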

Keywords: split Bregman method, optimal control with elliptic partial differential equation constraint, finite element method

Procedia PDF Downloads 115
1022 Evaluation of a Piecewise Linear Mixed-Effects Model in the Analysis of Randomized Cross-over Trial

Authors: Moses Mwangi, Geert Verbeke, Geert Molenberghs

Abstract:

Cross-over designs are commonly used in randomized clinical trials to estimate the efficacy of a new treatment with respect to a reference treatment (placebo or standard). The main advantage of the cross-over design over the conventional parallel design is its flexibility: every subject becomes his or her own control, thereby reducing confounding effects. Jones & Kenward discuss in detail more recent developments in the analysis of cross-over trials. We revisit the simple piecewise linear mixed-effects model proposed by Mwangi et al. (in press) for its first application in the analysis of cross-over trials. We compared the performance of the proposed piecewise linear mixed-effects model with two commonly cited statistical models used in estimating the treatment effect in randomized cross-over trials, namely (1) the Grizzle model and (2) the Jones & Kenward model. We estimated two performance measures (mean square error (MSE) and coverage probability) for the three methods, using data simulated from the proposed piecewise linear mixed-effects model. The piecewise linear mixed-effects model yielded the lowest MSE estimates compared to the Grizzle and Jones & Kenward models for both small (Nobs=20) and large (Nobs=600) sample sizes. Its coverage probabilities were the highest compared to the Grizzle and Jones & Kenward models for both small and large sample sizes. A piecewise linear mixed-effects model is therefore a better estimator of the treatment effect than its two competitors (the Grizzle and Jones & Kenward models) in the analysis of cross-over trials. The data-generating mechanism used in this paper captures two time periods for a simple 2-treatment x 2-period cross-over design. Its application is extendible to more complex cross-over designs with multiple treatments and periods. In addition, it is important to note that, even for single-response models, adding more random effects increases the complexity of the model and thus may make it difficult or impossible to fit in some cases.
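
As background, a minimal simulation-and-fit sketch for a 2x2 cross-over trial with a random subject intercept (using statsmodels; the paper's piecewise linear model adds a piecewise time component on top of such a structure) might look as follows:

    # Simulate a 2x2 cross-over trial (sequences AB and BA) and recover the
    # treatment effect with a linear mixed model; values are illustrative.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(7)
    n_subj = 100
    subj = np.repeat(np.arange(n_subj), 2)
    period = np.tile([1, 2], n_subj)
    # First half of subjects follow sequence AB, second half BA
    treatment = np.where((subj < n_subj // 2) == (period == 1), "A", "B")

    b_subj = rng.normal(0, 1.0, n_subj)            # random subject effect
    effect = np.where(treatment == "B", 0.5, 0.0)  # true treatment effect = 0.5
    y = 10 + effect + 0.2 * (period - 1) + b_subj[subj] + rng.normal(0, 0.8, 2 * n_subj)

    df = pd.DataFrame({"y": y, "subject": subj, "period": period, "treatment": treatment})
    model = smf.mixedlm("y ~ C(treatment) + C(period)", df, groups=df["subject"]).fit()
    print(model.summary())                         # treatment coefficient near 0.5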

Keywords: evaluation, Grizzle model, Jones & Kenward model, performance measures, simulation

Procedia PDF Downloads 93
1021 Integrated Nested Laplace Approximations for Quantile Regression

Authors: Kajingulu Malandala, Ranganai Edmore

Abstract:

The asymmetric Laplace distribution (ALD) is commonly used as the likelihood function in Bayesian quantile regression, and it offers different families of likelihood methods for quantile regression. Notwithstanding its popularity and practicality, the ALD is not smooth, which makes its likelihood difficult to maximize. Furthermore, Bayesian inference is time-consuming, and the selection of the likelihood may mislead the inference, as Bayes' theorem does not automatically establish the posterior inference. In addition, the ALD does not account for greater skewness and kurtosis. This paper develops a new quantile regression approach for count data based on the inverse of the cumulative distribution function of the Poisson, binomial, and Delaporte distributions, using integrated nested Laplace approximations (INLA). Our results validate the benefit of using integrated nested Laplace approximations and support the approach for count data.

Keywords: quantile regression, Delaporte distribution, count data, integrated nested Laplace approximation

Procedia PDF Downloads 134
1020 Using Linear Logistic Regression to Evaluate the Patient and System Delay and Effective Factors in Mortality of Patients with Acute Myocardial Infarction

Authors: Firouz Amani, Adalat Hoseinian, Sajjad Hakimian

Abstract:

Background: Mortality due to Myocardial Infarction (MI) often occurs during the first hours after the onset of symptoms. Taking the necessary treatment promptly, and thereby decreasing the mortality rate, depends on timely arrival at the hospital. The aim of this study was to investigate the impact of effective factors on the mortality of MI patients by using linear logistic regression. Materials and Methods: In this case-control study, all patients with acute MI who were referred to the Ardabil city hospital were studied. All patients who died were considered the case group (n=27), and we selected 27 matched patients without acute MI as a control group. Data were collected for all patients in the two groups using the same checklist and then analyzed with SPSS version 24 software using statistical methods. We used the linear logistic regression model to determine the effective factors on the mortality of MI patients. Results: The mean age of patients in the case group was significantly higher than in the control group (75.1±11.7 vs. 63.1±11.6, p=0.001). The history of non-cardiac diseases in the case group (44.4%) was significantly higher than in the control group (7.4%) (p=0.002). The proportion of performed PCIs in the case group (40.7%) was significantly lower than in the control group (74.1%) (p=0.013). The time between hospital admission and performed PCI in the case group (110.9 min) was significantly longer than in the control group (56 min) (p=0.001). The mean delay time from onset of symptoms to hospital admission (patient delay) and the mean delay time from hospital admission to receiving treatment (system delay) were similar between the two groups. Using the logistic regression model, we found that a history of non-cardiac diseases (OR=283) and the number of performed PCIs (OR=24.5) had a significant impact on the mortality of MI patients compared to other factors. Conclusion: The results of this study showed that, of all studied factors, the number of performed PCIs, a history of non-cardiac illness, and the interval between onset of symptoms and performed PCI have a significant relation with the mortality of MI patients, while other factors were not meaningful. Conducting further studies with a larger sample and investigating other factors such as smoking and weather is recommended.
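
The kind of model described above, a logistic regression whose coefficients are reported as odds ratios, can be sketched as follows on hypothetical data (the variable names are stand-ins, not the study dataset):

    # Fit a logistic regression for in-hospital death and report odds ratios
    # with 95% confidence intervals; all data here are simulated for illustration.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n = 200
    df = pd.DataFrame({
        "age": rng.normal(68, 12, n),
        "non_cardiac_disease": rng.integers(0, 2, n),   # comorbidity indicator
        "pci_performed": rng.integers(0, 2, n),
        "door_to_pci_min": rng.normal(80, 30, n),
    })
    # Hypothetical outcome model, only for generating an example response
    logit = 0.04 * (df.age - 68) + 1.2 * df.non_cardiac_disease - 1.0 * df.pci_performed
    df["died"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    X = sm.add_constant(df[["age", "non_cardiac_disease", "pci_performed", "door_to_pci_min"]])
    fit = sm.Logit(df["died"], X).fit(disp=False)

    odds_ratios = pd.DataFrame({
        "OR": np.exp(fit.params),
        "CI lower": np.exp(fit.conf_int()[0]),
        "CI upper": np.exp(fit.conf_int()[1]),
    })
    print(odds_ratios.round(2))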

Keywords: acute MI, mortality, heart failure, arrhythmia

Procedia PDF Downloads 104
1019 Rayleigh Wave Propagation in an Orthotropic Medium under the Influence of Exponentially Varying Inhomogeneities

Authors: Sumit Kumar Vishwakarma

Abstract:

The aim of this paper is to investigate the influence of inhomogeneity associated with the elastic constants and density of an orthotropic medium. The inhomogeneity is considered as an exponential function of depth, and the impact of gravity is discussed. Using the concept of separation of variables, the system of partial differential equations (the equations of motion) is converted into ordinary differential equations, which are coupled in nature. It further reduces to a biquadratic equation whose roots were found using MATLAB. A suitable boundary condition is employed to derive the dispersion equation in closed form. Numerical simulations were performed to show the influence of the inhomogeneity parameter. It was observed that as the numerical value of the inhomogeneity parameter increases, the phase velocity of Rayleigh waves decreases at a particular wavenumber. Graphical illustrations were drawn to visualize the effect of increasing and decreasing values of the inhomogeneity parameter. It can be concluded that the inhomogeneity has a remarkable bearing on the phase velocity as well as the damping velocity.

Keywords: Rayleigh waves, orthotropic medium, gravity field, inhomogeneity

Procedia PDF Downloads 102
1018 Forecasting Amman Stock Market Data Using a Hybrid Method

Authors: Ahmad Awajan, Sadam Al Wadi

Abstract:

In this study, a hybrid method based on Empirical Mode Decomposition and Holt-Winters (EMD-HW) is used to forecast Amman stock market data. First, the data are decomposed by the EMD method into Intrinsic Mode Functions (IMFs) and a residual component. Then, all components are forecasted by the HW technique. Finally, the forecast values are aggregated to obtain the forecast value of the stock market data. Empirical results showed that the EMD-HW outperforms individual forecasting models. The strength of EMD-HW lies in its ability to forecast non-stationary and non-linear time series without the need for any transformation method. Moreover, EMD-HW has relatively high accuracy compared with eight existing forecasting methods based on five forecast error measures.
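
A compact sketch of the EMD-HW pipeline on a synthetic series, assuming the PyEMD and statsmodels packages (and omitting the seasonal component of Holt-Winters for brevity), is:

    # Decompose a series into IMFs with EMD, forecast each component with
    # exponential smoothing, and sum the component forecasts. The series below
    # is synthetic, not the Amman stock market data.
    import numpy as np
    from PyEMD import EMD
    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    rng = np.random.default_rng(0)
    t = np.arange(500)
    y = 0.01 * t + np.sin(t / 10) + 0.3 * rng.standard_normal(500)

    horizon = 20
    components = EMD().emd(y)        # IMFs plus the residual trend as the last row

    forecast = np.zeros(horizon)
    for comp in components:
        fit = ExponentialSmoothing(comp, trend="add").fit()
        forecast += fit.forecast(horizon)

    print(forecast.round(3))         # aggregated EMD-HW forecast for the next 20 steps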

Keywords: Holt-Winters method, empirical mode decomposition, forecasting, time series

Procedia PDF Downloads 98
1017 An Improved Approach to Solve Two-Level Hierarchical Time Minimization Transportation Problem

Authors: Kalpana Dahiya

Abstract:

This paper discusses a two-level hierarchical time minimization transportation problem, which is an important class of transportation problems arising in industries. This problem has been studied by various researchers, and a number of polynomial time iterative algorithms are available to find its solution. All the existing algorithms, though efficient, have some shortcomings. The current study proposes an alternate solution algorithm for the problem that is more efficient in terms of computational time than the existing algorithms. The results justifying the underlying theory of the proposed algorithm are given. Further, a detailed comparison of the computational behaviour of all the algorithms for randomly generated instances of this problem of different sizes validates the efficiency of the proposed algorithm.

Keywords: global optimization, hierarchical optimization, transportation problem, concave minimization

Procedia PDF Downloads 121
1016 An Approach for Estimation in Hierarchical Clustered Data Applicable to Rare Diseases

Authors: Daniel C. Bonzo

Abstract:

Practical considerations lead to the use of units of analysis within subjects, e.g., bleeding episodes or treatment-related adverse events, in rare disease settings. This is coupled with data augmentation techniques such as extrapolation to enlarge the subject base. In general, one can think of extrapolation of data as extending information and conclusions from one estimand to another estimand. This approach induces hierarchical clustered data with varying cluster sizes. Extrapolation of clinical trial data is being accepted increasingly by regulatory agencies as a means of generating data in diverse situations during the drug development process. Under certain circumstances, data can be extrapolated to a different population, a different but related indication, or a different but similar product. We consider here the problem of estimation (point and interval) using a mixed-models approach under extrapolation. It is proposed that estimators (point and interval) be constructed using weighting schemes for the clusters, e.g., equally weighted or with weights proportional to cluster size. Simulated data generated under varying scenarios are then used to evaluate the performance of this approach. In conclusion, the evaluation results showed that the approach is a useful means of improving statistical inference in rare disease settings and thus aids not only signal detection but risk-benefit evaluation as well.
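
The contrast between the two weighting schemes can be made concrete with a toy example (hypothetical clusters, not the paper's estimator):

    # Point estimates of an overall mean from hierarchical clustered data,
    # weighting clusters equally versus proportionally to cluster size.
    import numpy as np

    rng = np.random.default_rng(5)
    # Hypothetical subjects with varying numbers of episodes (clusters of
    # unequal size); each value is an episode-level outcome.
    clusters = [rng.normal(loc=rng.normal(2.0, 0.5), scale=1.0, size=rng.integers(1, 15))
                for _ in range(30)]

    cluster_means = np.array([c.mean() for c in clusters])
    cluster_sizes = np.array([len(c) for c in clusters])

    equal_weighted = cluster_means.mean()
    size_weighted = np.average(cluster_means, weights=cluster_sizes)

    print(f"equally weighted estimate: {equal_weighted:.3f}")
    print(f"size-weighted estimate:    {size_weighted:.3f}")
    # With informative cluster sizes the two estimands can differ; simulation,
    # as in the abstract, is one way to compare their bias and interval coverage.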

Keywords: clustered data, estimand, extrapolation, mixed model

Procedia PDF Downloads 106
1015 A Bivariate Inverse Generalized Exponential Distribution and Its Applications in Dependent Competing Risks Model

Authors: Fatemah A. Alqallaf, Debasis Kundu

Abstract:

The aim of this paper is to introduce a bivariate inverse generalized exponential distribution which has a singular component. The proposed bivariate distribution can be used when the marginals have heavy-tailed distributions and non-monotone hazard functions. Due to the presence of the singular component, it can be used quite effectively when there are ties in the data. Since it has four parameters, it is a very flexible bivariate distribution, and it can be used quite effectively for analyzing various bivariate data sets. Several dependency properties and dependency measures have been obtained. The maximum likelihood estimators cannot be obtained in closed form, and their computation involves solving a four-dimensional optimization problem. To avoid that, we propose to use an EM algorithm, which involves solving only one non-linear equation at each E-step. Hence, the implementation of the proposed EM algorithm is very straightforward in practice. Extensive simulation experiments and the analysis of one data set have been performed. We have observed that the proposed bivariate inverse generalized exponential distribution can be used for modeling dependent competing risks data. One data set has been analyzed to show the effectiveness of the proposed model.

Keywords: Block and Basu bivariate distributions, competing risks, EM algorithm, Marshall-Olkin bivariate exponential distribution, maximum likelihood estimators

Procedia PDF Downloads 110
1014 Bayesian Variable Selection in Quantile Regression with Application to the Health and Retirement Study

Authors: Priya Kedia, Kiranmoy Das

Abstract:

There is a rich literature on variable selection in the regression setting. However, most of these methods assume normality of the response variable for implementing the methodology and establishing the statistical properties of the estimates. In many real applications, the distribution of the response variable may be non-Gaussian, and one might be interested in finding the best subset of covariates at some predetermined quantile level. We develop a dynamic Bayesian approach for variable selection in the quantile regression framework. We use a zero-inflated mixture prior for the regression coefficients and consider the asymmetric Laplace distribution for the response variable for modeling different quantiles of its distribution. An efficient Gibbs sampler is developed for our computation. Our proposed approach is assessed through extensive simulation studies, and a real application of the proposed approach is also illustrated. We consider data from the Health and Retirement Study conducted by the University of Michigan and select the important predictors when the outcome of interest is out-of-pocket medical cost, which is considered an important measure of financial risk. Our analysis finds important predictors at different quantiles of the outcome and thus enhances our understanding of the effects of different predictors on out-of-pocket medical cost.

Keywords: variable selection, quantile regression, Gibbs sampler, asymmetric Laplace distribution

Procedia PDF Downloads 125
1013 Constructing Orthogonal De Bruijn and Kautz Sequences and Applications

Authors: Yaw-Ling Lin

Abstract:

A de Bruijn graph of order k is a graph whose vertices represent all length-k sequences, with edges joining pairs of vertices whose sequences have the maximum possible overlap (length k−1). Every Hamiltonian cycle of this graph defines a distinct, minimum-length de Bruijn sequence containing all k-mers exactly once. A Kautz sequence is the minimal generating sequence, that is, the sequence of minimal length that produces all possible length-k sequences under the restriction that every two consecutive alphabets in the sequence must be different. A collection of de Bruijn/Kautz sequences is orthogonal if any two sequences differ maximally in sequence composition; that is, the maximum length of their common substrings is k. In this paper, we discuss how such a collection of (maximal) orthogonal de Bruijn/Kautz sequences can be constructed and use the algorithm to build a web application service for synthesized DNA and other related biomolecular sequences.
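
For background, the standard FKM (Lyndon-word) construction of a single de Bruijn sequence is sketched below; the paper's construction of a collection of mutually orthogonal sequences is not reproduced here:

    # Classical FKM algorithm: concatenating Lyndon words whose lengths divide n
    # yields a cyclic de Bruijn sequence containing every n-mer exactly once.
    def de_bruijn(k, n):
        """Return a cyclic de Bruijn sequence over alphabet {0,...,k-1} of order n."""
        a = [0] * (k * n)
        sequence = []

        def db(t, p):
            if t > n:
                if n % p == 0:
                    sequence.extend(a[1:p + 1])
            else:
                a[t] = a[t - p]
                db(t + 1, p)
                for j in range(a[t - p] + 1, k):
                    a[t] = j
                    db(t + 1, t)

        db(1, 1)
        return sequence

    seq = de_bruijn(k=4, n=3)          # e.g. a DNA-like 4-letter alphabet
    cyclic = seq + seq[:2]             # wrap around to read cyclic 3-mers
    kmers = {tuple(cyclic[i:i + 3]) for i in range(len(seq))}
    print(len(seq), len(kmers))        # 64 64: every 3-mer appears exactly once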

Keywords: biomolecular sequence synthesis, de Bruijn sequences, Eulerian cycle, Hamiltonian cycle, Kautz sequences, orthogonal sequences

Procedia PDF Downloads 125
1012 Estimating the Probability of Winning the Best Actor/Actress Award Conditional on the Best Picture Nomination with Bayesian Hierarchical Models

Authors: Svetlana K. Eden

Abstract:

Movies and TV shows have long become part of modern culture. We all have our preferred genre, story, actors, and actresses. However, can we objectively discern good acting from the bad? As laymen, we are probably not objective, but what about the Oscar academy members? Are their votes based on objective measures? Oscar academy members are probably also biased due to many factors, including their professional affiliations or advertisement exposure. Heavily advertised films bring more publicity to their cast and are likely to have bigger budgets. Because a bigger budget may also help earn a Best Picture (BP) nomination, we hypothesize that best actor/actress (BA) nominees from BP-nominated movies would have higher chances of winning the award than those BA nominees from non-BP-nominated films. To test this hypothesis, three Bayesian hierarchical models are proposed, and their performance is evaluated. The results from all three models largely support our hypothesis. Depending on the proportion of BP nominations among BA nominees, the odds ratios (estimated over expected) of winning the BA award conditional on BP nomination vary from 2.8 [0.8, 7.0] to 4.3 [2.0, 15.8] for actors and from 1.5 [0.0, 12.2] to 5.4 [2.7, 14.2] for actresses.

Keywords: Oscar, best picture, best actor/actress, bias

Procedia PDF Downloads 196
1011 Identifying Coloring in Graphs with Twins

Authors: Souad Slimani, Sylvain Gravier, Simon Schmidt

Abstract:

Recently, several vertex-identifying notions were introduced (identifying coloring, lid-coloring, ...); these notions were inspired by identifying codes. All of them, as well as the original identifying codes, are based on separating two vertices according to some conditions on their closed neighborhoods. Therefore, twins cannot be identified, and most known results focus on twin-free graphs. Here, we show how twins can modify the optimal value of vertex-identifying parameters for identifying coloring and locally identifying coloring.

Keywords: identifying coloring, locally identifying coloring, twins, separating

Procedia PDF Downloads 113
1010 Stochastic Multicast Routing Protocol for Flying Ad-Hoc Networks

Authors: Hyunsun Lee, Yi Zhu

Abstract:

A wireless ad-hoc network is a decentralized type of temporary machine-to-machine connection that is spontaneous or impromptu, relying on no fixed infrastructure or centralized administration. Unmanned aerial vehicles (UAVs), also called drones, have recently become more accessible and widely utilized in military and civilian domains such as surveillance, search and detection missions, traffic monitoring, remote filming, and product delivery, to name a few. Communication between these UAVs is made possible and materialized through Flying Ad-hoc Networks (FANETs). However, due to the high mobility of UAVs, which may cause different types of transmission interference, it is vital to design robust routing protocols for FANETs. In this talk, a multicast routing method based on a modified stochastic branching process is proposed. The stochastic branching process is often used to describe the early stage of an infectious disease outbreak, and the reproductive number in the process is used to classify the outbreak as major or minor. The reproductive number regulating the local transmission rate is adapted and modified for flying ad-hoc network communication. The performance of the proposed routing method is compared with other well-known methods such as the flooding method and the gossip method based on three measures: average reachability, average node usage, and average branching factor. The proposed routing method achieves average reachability very close to the flooding method, average node usage close to the gossip method, and an outstanding average branching factor among the methods. It can be concluded that the proposed multicast routing scheme is more efficient than well-known routing schemes such as flooding and gossip while maintaining high performance.
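
A toy simulation of the branching-process view of message spread, in which the reproductive number R sets how many neighbours each relaying node forwards to, illustrates the reachability/overhead trade-off (the parameters are illustrative and this is not the paper's protocol):

    # Message spread modeled as a stochastic branching process with Poisson
    # offspring; R below 1 tends to die out, R above 1 tends to reach most nodes.
    import numpy as np

    def simulate_spread(n_nodes=200, R=1.8, max_hops=30, seed=0):
        """Return the fraction of nodes reached and the number of transmissions."""
        rng = np.random.default_rng(seed)
        reached = {0}                    # node 0 originates the message
        frontier = [0]
        transmissions = 0
        for _ in range(max_hops):
            next_frontier = []
            for node in frontier:
                n_offspring = rng.poisson(R)           # Poisson offspring distribution
                targets = rng.integers(0, n_nodes, n_offspring)
                transmissions += n_offspring
                for t in targets:
                    if t not in reached:
                        reached.add(int(t))
                        next_frontier.append(int(t))
            if not next_frontier:
                break
            frontier = next_frontier
        return len(reached) / n_nodes, transmissions

    for R in (0.8, 1.5, 3.0):
        reach, cost = simulate_spread(R=R)
        print(f"R = {R:3.1f}: reachability = {reach:.2f}, transmissions = {cost}")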

Keywords: flying ad-hoc networks, multicast routing, stochastic branching process, unmanned aerial vehicles

Procedia PDF Downloads 86
1009 Network Meta-Analysis to Identify the Most Effective Dressings to Treat Pressure Injury

Authors: Lukman Thalib, Luis Furuya-Kanamori, Rachel Walker, Brigid Gillespie, Suhail Doi

Abstract:

Background and objectives: There are many topical treatments available for pressure injury (PI), yet there is a lack of evidence regarding the most effective treatment. The objective of this study was to compare the effects of various topical treatments and identify the best treatment choice(s) for PI healing. Methods: A network meta-analysis of published randomized controlled trials that compared two or more of the following dressing groups: basic, foam, active, hydroactive, and other wound dressings. The outcome was complete healing following treatment, and the generalised pairwise modelling framework was used to generate mixed treatment effects against hydroactive wound dressing, currently the standard of treatment for PIs. All treatments were then ranked by their point estimates. Main Results: 40 studies (1,757 participants) comparing 5 dressing groups were included in the analysis. All dressing groups ranked better than basic dressings (i.e., saline gauze or similar inert dressings). The foam (RR 1.18; 95% CI 0.95-1.48) and active wound dressings (RR 1.16; 95% CI 0.92-1.47) ranked better than hydroactive wound dressing in terms of healing of PIs when the latter was used as the reference group. Conclusion and Recommendations: There was considerable uncertainty around the estimates; yet, hydroactive wound dressings appear to perform better than basic dressings. The foam and active wound dressing groups show promise and need further investigation. High-quality research on the clinical effectiveness of topical treatments is warranted to identify whether foam and active wound dressings do provide advantages over hydroactive dressings.

Keywords: network meta-analysis, pressure injury, dressing, pressure ulcer

Procedia PDF Downloads 93
1008 Reducing the Computational Overhead of Metaheuristics Parameterization with Exploratory Landscape Analysis

Authors: Iannick Gagnon, Alain April

Abstract:

The performance of a metaheuristic on a given problem class depends on the class itself and the choice of parameters. Parameter tuning is the most time-consuming phase of the optimization process after the main calculations and it often nullifies the speed advantage of metaheuristics over traditional optimization algorithms. Several off-the-shelf parameter tuning algorithms are available, but when the objective function is expensive to evaluate, these can be prohibitively expensive to use. This paper presents a surrogate-like method for finding adequate parameters using fitness landscape analysis on simple benchmark functions and real-world objective functions. The result is a simple compound similarity metric based on the empirical correlation coefficient and a measure of convexity. It is then used to find the best benchmark functions to serve as surrogates. The near-optimal parameter set is then found using fractional factorial design. The real-world problem of NACA airfoil lift coefficient maximization is used as a preliminary proof of concept. The overall aim of this research is to reduce the computational overhead of metaheuristics parameterization.
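
One way to make such a compound similarity metric concrete is sketched below: correlate fitness values of the expensive objective and a benchmark function on a shared sample and combine that with a crude convexity estimate. The exact metric and weighting used in the paper are not reproduced; the benchmark functions, weights, and the stand-in objective here are assumptions:

    # Compare an expensive objective with cheap benchmark functions on a shared
    # sample, combining empirical correlation with a simple convexity estimate.
    import numpy as np

    def convexity_estimate(f, X, rng, n_pairs=500):
        """Fraction of random point pairs whose midpoint value lies below the
        average of the endpoint values (1.0 for a convex function)."""
        i = rng.integers(0, len(X), n_pairs)
        j = rng.integers(0, len(X), n_pairs)
        mid = 0.5 * (X[i] + X[j])
        return np.mean(f(mid) <= 0.5 * (f(X[i]) + f(X[j])))

    def similarity(f_target, f_bench, X, rng):
        corr = np.corrcoef(f_target(X), f_bench(X))[0, 1]
        conv_gap = abs(convexity_estimate(f_target, X, rng) -
                       convexity_estimate(f_bench, X, rng))
        return 0.5 * corr + 0.5 * (1.0 - conv_gap)   # illustrative weighting

    rng = np.random.default_rng(0)
    X = rng.uniform(-5, 5, size=(400, 2))

    sphere = lambda X: np.sum(X**2, axis=1)
    rastrigin = lambda X: 10 * X.shape[1] + np.sum(X**2 - 10 * np.cos(2 * np.pi * X), axis=1)
    expensive = lambda X: np.sum((X - 1)**2, axis=1) + 5.0   # stand-in for a costly objective

    for name, bench in (("sphere", sphere), ("rastrigin", rastrigin)):
        print(name, round(similarity(expensive, bench, X, rng), 3))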

Keywords: metaheuristics, stochastic optimization, particle swarm optimization, exploratory landscape analysis

Procedia PDF Downloads 121
1007 Modeling of Carbon Monoxide Distribution under the Sky-Train Stations

Authors: Suranath Chomcheon, Nathnarong Khajohnsaksumeth, Benchawan Wiwatanapataphee

Abstract:

Carbon monoxide is one of the harmful gases; it is colorless, odorless, and tasteless. Too much carbon monoxide taken into the human body reduces oxygen transport to the body's cells, leading to many symptoms including headache, nausea, vomiting, loss of consciousness, and death. Carbon monoxide is considered one of the air pollution indicators. It is mainly released, together with soot, from the exhaust pipe as a result of incomplete combustion in vehicle engines. Nowadays, the increase in vehicle usage and the slow movement of vehicles stuck in traffic jams have created large amounts of carbon monoxide, which accumulates in street canyon areas. In this research, we study the effect of parameters such as wind speed and the aspect ratio of building height on ventilation. We consider a model of the pollutant under the Bangkok Transit System (BTS) stations in a two-dimensional geometrical domain. The convection-diffusion equation and the Reynolds-averaged Navier-Stokes equations are used to describe the concentration and the turbulent flow of carbon monoxide. The finite element method is applied to obtain the numerical result. The result shows that our model can describe the dispersion patterns of carbon monoxide for different wind speeds.

Keywords: air pollution, carbon monoxide, finite element, street canyon

Procedia PDF Downloads 96
1006 A Non-Standard Finite Difference Scheme for the Solution of Laplace Equation with Dirichlet Boundary Conditions

Authors: Khaled Moaddy

Abstract:

In this paper, we present a fast and accurate numerical scheme for the solution of a Laplace equation with Dirichlet boundary conditions. The non-standard finite difference scheme (NSFD) is applied to construct the numerical solutions of a Laplace equation with two different Dirichlet boundary conditions. The solutions obtained using NSFD are compared with the solutions obtained using the standard finite difference scheme (SFD). The NSFD scheme is demonstrated to be reliable and efficient.
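
For reference, the classical five-point (standard) finite difference discretization of the Laplace equation with Dirichlet data can be solved by Jacobi iteration as below; an NSFD scheme would replace the classical step-size denominators with a problem-dependent denominator function, whose particular form in the paper is not reproduced here:

    # Jacobi iteration for the standard five-point discretization of the Laplace
    # equation on the unit square, with u = 1 on the top edge and 0 elsewhere.
    import numpy as np

    n = 41                               # grid points per side
    u = np.zeros((n, n))
    u[-1, :] = 1.0                       # Dirichlet data on the top edge

    for _ in range(5000):                # Jacobi sweeps
        u_new = u.copy()
        u_new[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] +
                                    u[1:-1, 2:] + u[1:-1, :-2])
        if np.max(np.abs(u_new - u)) < 1e-6:
            u = u_new
            break
        u = u_new

    print(f"u at the centre of the square: {u[n // 2, n // 2]:.4f}")   # ~0.25 by symmetry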

Keywords: standard finite difference schemes, non-standard schemes, Laplace equation, Dirichlet boundary conditions

Procedia PDF Downloads 106
1005 Proximal Method of Solving Split System of Minimization Problem

Authors: Anteneh Getachew Gebrie, Rabian Wangkeeree

Abstract:

The purpose of this paper is to introduce an iterative algorithm for solving a split system of minimization problems, given as the task of finding a common minimizer of a finite family of proper, lower semicontinuous convex functions whose image under a bounded linear operator is also a common minimizer of another finite family of proper, lower semicontinuous convex functions. We obtain strong convergence of the sequence generated by our algorithm under suitable conditions on the parameters. The iterative schemes are developed with a way of selecting the step sizes such that knowledge of the operator norm is not required. Some applications and numerical experiments are given to analyse the efficiency of our algorithm.

Keywords: Hilbert space, minimization problems, Moreau-Yosida approximation, split feasibility problem

Procedia PDF Downloads 108