World Academy of Science, Engineering and Technology
[Mathematical and Computational Sciences]
Online ISSN : 1307-6892
1226 Magnetohemodynamic of Blood Flow Having Impact of Radiative Flux Due to Infrared Magnetic Hyperthermia: Spectral Relaxation Approach
Authors: Ebenezer O. Ige, Funmilayo H. Oyelami, Joshua Olutayo-Irheren, Joseph T. Okunlola
Abstract:
Hyperthermia therapy is an adjuvant procedure during which perfused body tissue is subjected to an elevated temperature range in a bid to improve drug potency and the efficacy of cancer treatment. While one class of hyperthermia techniques rests on thermal radiation derived from a single electro-radiative source, there are deliberations on combining dual radiation field sources in an attempt to improve the delivery of the therapy procedure. This paper numerically explores the thermal effectiveness of combined infrared hyperthermia with nanoparticle recirculation in the vicinity of an imposed magnetic field on the subcutaneous strata of a model lesion as an ablation scheme. A spectral relaxation method (SRM) was formulated to handle the coupled momentum and thermal equilibrium equations in the blood-perfused domain of a spongy fibrous tissue. Thermal diffusion regimes in the presence of an imposed external magnetic field were described using the well-known Rosseland diffusion approximation to delineate the impact of radiative flux within the computational domain. The contribution of tissue sponginess was examined using pore-scale porosity mechanics over a selection of clinically informed scenarios. Our observations showed that, for a substantial depth of spongy lesion, the magnetic field architecture constitutes the controlling regime of hemodynamics at the blood-tissue interface while facilitating thermal transport across the depth of the model lesion. This parameter-indicator could be utilized to control the dispensing of hyperthermia treatment in intravenously perfused tissue.
Keywords: spectral relaxation scheme, thermal equilibrium, Rosseland diffusion approximation, hyperthermia therapy
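For context, the Rosseland diffusion approximation the abstract leans on has the standard form below (a reference sketch, not taken from the paper; σ* is the Stefan-Boltzmann constant, k* the mean absorption coefficient, and the linearization is about the ambient temperature T∞):

```latex
% Rosseland approximation for the radiative heat flux (standard form)
q_r = -\frac{4\sigma^*}{3k^*}\,\frac{\partial T^4}{\partial y},
\qquad
T^4 \approx 4T_\infty^3 T - 3T_\infty^4 ,
% so that \partial q_r / \partial y becomes linear in T for small
% temperature differences within the flow.
```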
Procedia PDF Downloads 117
1225 Virtual Assessment of Measurement Error in the Fractional Flow Reserve
Authors: Keltoum Chahour, Mickael Binois
Abstract:
Due to a lack of standardization during the invasive fractional flow reserve (FFR) procedure, the index is subject to many sources of uncertainty. In this paper, we investigate, through simulation, the effect of the FFR device position and configuration on the obtained value of the FFR fraction. For this purpose, we use computational fluid dynamics (CFD) in a 3D domain corresponding to a diseased arterial portion. The FFR pressure captor is introduced inside it, with a given length and coefficient of bending, to capture the FFR value. To get over the computational limitations (the simulation takes about 2 h 15 min for one FFR value), we generate a Gaussian process (GP) model for FFR prediction. The GP model shows good accuracy and demonstrates the effective measurement error created by the random configuration of the pressure captor.
Keywords: fractional flow reserve, Gaussian processes, computational fluid dynamics, drift
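A minimal sketch of the surrogate-modeling idea with scikit-learn; the feature choices (captor length, bending coefficient) and the toy FFR values stand in for the expensive CFD runs and are assumptions, not the authors' data:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Illustrative design: each row is one CFD run (captor length, bending coefficient)
X = np.array([[5.0, 0.1], [5.0, 0.4], [10.0, 0.1], [10.0, 0.4], [7.5, 0.25]])
y = np.array([0.82, 0.79, 0.80, 0.76, 0.785])  # hypothetical FFR values from CFD

# GP surrogate: fit once on the expensive runs, then predict cheaply
kernel = ConstantKernel(1.0) * RBF(length_scale=[2.0, 0.2])
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Predict FFR (with uncertainty) for a new captor configuration
mean, std = gp.predict(np.array([[8.0, 0.3]]), return_std=True)
print(f"predicted FFR = {mean[0]:.3f} +/- {std[0]:.3f}")
```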
Procedia PDF Downloads 132
1224 Diagonal Vector Autoregressive Models and Their Properties
Authors: Usoro Anthony E., Udoh Emediong
Abstract:
Diagonal vector autoregressive (VAR) models are special classes of the general VAR models, identified under certain conditions, in which the parameters are restricted to the diagonal elements of the coefficient matrices. The variance, autocovariance, and autocorrelation properties of the upper- and lower-diagonal VAR models are derived. The new set of VAR models is verified with empirical data and is found to perform favourably compared with the general VAR models. The advantage of the diagonal models over the existing models is that they are parsimonious, given the reduction in the interactive coefficients of the general VAR models.
Keywords: VAR models, diagonal VAR models, variance, autocovariance, autocorrelations
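A minimal simulation sketch of a diagonal VAR(1) with illustrative coefficients; because the coefficient matrix is diagonal, each series can be fit by a univariate regression on its own lag:

```python
import numpy as np

rng = np.random.default_rng(0)
# Diagonal VAR(1): each series depends only on its own lag (assumed coefficients)
A = np.diag([0.6, -0.3, 0.8])
T, k = 500, 3
y = np.zeros((T, k))
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.normal(size=k)

# Equation-by-equation OLS: regress each series on its own lag only
for i in range(k):
    x, z = y[:-1, i], y[1:, i]
    a_hat = (x @ z) / (x @ x)
    print(f"series {i}: true a = {A[i, i]:+.2f}, estimated a = {a_hat:+.2f}")
```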
Procedia PDF Downloads 115
1223 The Effectiveness of a Hybrid Diffie-Hellman-RSA-Advanced Encryption Standard Model
Authors: Abdellahi Cheikh
Abstract:
With the emergence of quantum computers with very powerful capabilities, the security of the exchange of shared keys between two interlocutors poses a big problem in light of the rapid development of technologies such as computing power and computing speed. The Diffie-Hellman (DH) algorithm is therefore more vulnerable than ever: no mechanism guarantees the security of the key exchange, so if an intermediary manages to intercept it, the key is compromised. In this regard, several studies have been conducted to improve the security of key exchange between two interlocutors, which has led to interesting results. We modify our Diffie-Hellman-RSA-AES (DRA) model, which encrypts the information exchanged between two users using the three encryption algorithms DH, RSA, and AES, by using steganographic photos to hide the contents of the p, g, and ClesAES values that are otherwise sent unencrypted in the DRA model to calculate each user's public key. This work includes a comparative study between the DRA model and existing solutions, as well as the modification made to this model, with an emphasis on reliability in terms of security. The study presents a simulation to demonstrate the effectiveness of the modification made to the DRA model. The obtained results show that our model has a security advantage over the existing solutions, which is why we made these changes to reinforce the security of the DRA model.
Keywords: Diffie-Hellman, DRA, RSA, advanced encryption standard
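A toy, standard-library-only sketch of the DH-then-symmetric-key pattern that the DRA model builds on; the parameters are deliberately tiny and insecure, and the steganographic hiding of p, g, and ClesAES is not reproduced here:

```python
import hashlib
import secrets

# Toy finite-field Diffie-Hellman (parameters far too small for real use)
p, g = 0xFFFFFFFB, 5  # illustrative prime (2**32 - 5) and base, NOT secure values

a = secrets.randbelow(p - 2) + 1      # Alice's private key
b = secrets.randbelow(p - 2) + 1      # Bob's private key
A, B = pow(g, a, p), pow(g, b, p)     # public keys, exchanged in the clear

shared_alice = pow(B, a, p)           # both sides derive the same secret
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob

# Hash the shared secret down to a 256-bit key suitable for AES
aes_key = hashlib.sha256(shared_alice.to_bytes(8, "big")).digest()
print(aes_key.hex())
```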
Procedia PDF Downloads 93
1222 Graph Neural Network-Based Classification for Disease Prediction in Health Care Heterogeneous Data Structures of Electronic Health Record
Authors: Raghavi C. Janaswamy
Abstract:
In the healthcare sector, heterogeneous data elements such as patients, diagnoses, symptoms, conditions, observation text from physician notes, and prescriptions form the essentials of the electronic health record (EHR). The data, in the form of clear text and images, are stored or processed in a relational format in most systems. However, the intrinsic structure restrictions and complex joins of relational databases limit their widespread utility. In this regard, the design and development of realistic mappings and deep connections as real-time objects offer unparalleled advantages. Herein, a graph neural network-based classification of EHR data has been developed. Patient conditions have been predicted as a node classification task using graph-based open-source EHR data, the Synthea database, stored in TigerGraph. The Synthea dataset is leveraged because it is voluminous and closely represents real-world data. The graph model is built from the heterogeneous EHR data using Python modules: pyTigerGraph to get nodes and edges from the TigerGraph database, PyTorch to tensorize the nodes and edges, and PyTorch Geometric (PyG) to train the graph neural network (GNN), adopting self-supervised learning techniques with autoencoders to generate the node embeddings and eventually perform the node classification using those embeddings. The model predicts patient conditions ranging from common to rare. The outcome is deemed to open up opportunities for data querying toward better predictions and accuracy.
Keywords: electronic health record, graph neural network, heterogeneous data, prediction
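A minimal node-classification sketch with PyTorch Geometric in the spirit of the described pipeline; the toy graph and labels stand in for the Synthea/TigerGraph data, and the autoencoder-based self-supervised pretraining step is omitted:

```python
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

# Toy graph standing in for EHR entities: 6 nodes, undirected edges as index pairs
edge_index = torch.tensor([[0, 1, 1, 2, 3, 4, 4, 5],
                           [1, 0, 2, 1, 4, 3, 5, 4]], dtype=torch.long)
x = torch.randn(6, 16)                # 16-dim node features (e.g., embeddings)
y = torch.tensor([0, 0, 0, 1, 1, 1])  # condition labels for node classification

data = Data(x=x, edge_index=edge_index, y=y)

class GCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(16, 32)
        self.conv2 = GCNConv(32, 2)   # two illustrative condition classes

    def forward(self, data):
        h = F.relu(self.conv1(data.x, data.edge_index))
        return self.conv2(h, data.edge_index)

model = GCN()
opt = torch.optim.Adam(model.parameters(), lr=0.01)
for epoch in range(100):
    opt.zero_grad()
    loss = F.cross_entropy(model(data), data.y)
    loss.backward()
    opt.step()
print("final training loss:", float(loss))
```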
Procedia PDF Downloads 85
1221 Confidence Envelopes for Parametric Model Selection Inference and Post-Model Selection Inference
Authors: I. M. L. Nadeesha Jayaweera, Adao Alex Trindade
Abstract:
In choosing a candidate model in likelihood-based modeling via an information criterion, the practitioner is often faced with the difficult task of deciding just how far up the ranked list to look. Motivated by this pragmatic necessity, we construct an uncertainty band for a generalized (model selection) information criterion (GIC), defined as a criterion whose limit in probability is identical to that of the normalized log-likelihood; this includes common special cases such as AIC and BIC. The method starts from the asymptotic normality of the GIC for the joint distribution of the candidate models in an independent and identically distributed (IID) data framework and proceeds by deriving the (asymptotically) exact distribution of the minimum. The calculation of an upper quantile for its distribution then involves the computation of multivariate Gaussian integrals, which is amenable to efficient implementation via the R package "mvtnorm". The performance of the methodology is tested on simulated data by checking the coverage probability of nominal upper quantiles and compared to the bootstrap. Both methods give coverage close to nominal for large samples, but the bootstrap is two orders of magnitude slower. The methodology is subsequently extended to two other commonly used model structures: regression and time series. In the regression case, we derive the corresponding asymptotically exact distribution of the minimum GIC by invoking Lindeberg-Feller type conditions for triangular arrays and are thus able to similarly calculate upper quantiles for its distribution via multivariate Gaussian integration. The bootstrap once again provides a default competing procedure, and we find that similar comparative performance metrics hold as for the IID case. The time series case is complicated by a far more intricate asymptotic regime for the joint distribution of the model GIC statistics. Under a Gaussian likelihood, the default in most packages, one needs to derive the limiting distribution of a normalized quadratic form for a realization from a stationary series. Under conditions on the process satisfied by ARMA models, a multivariate normal limit is once again achieved. The bootstrap can, however, be employed for its computation, whence we are once again in the multivariate Gaussian integration paradigm for upper quantile evaluation. Comparisons of this bootstrap-aided semi-exact method with the full-blown bootstrap once again reveal similar performance but faster computation speeds. One of the most difficult problems in contemporary statistical methodological research is accounting for the extra variability introduced by model selection uncertainty, the so-called post-model selection inference (PMSI). We explore ways in which the GIC uncertainty band can be inverted to make inferences on the parameters. This is attempted in the IID case by pivoting the CDF of the asymptotically exact distribution of the minimum GIC. For inference on one parameter at a time and a small number of candidate models, this works well, whence the attained PMSI confidence intervals are wider than the MLE-based Wald intervals, as expected.
Keywords: model selection inference, generalized information criteria, post model selection, asymptotic theory
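A Python analogue of the multivariate Gaussian computation (the paper uses R's mvtnorm); the mean vector and covariance of the candidate GIC statistics below are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import multivariate_normal

# Illustrative joint law of k = 3 candidate GIC statistics (assumed mean/covariance)
mu = np.array([0.0, 0.5, 1.0])
cov = np.array([[1.0, 0.6, 0.3],
                [0.6, 1.0, 0.6],
                [0.3, 0.6, 1.0]])

def cdf_min(t):
    # P(min_i X_i <= t) = 1 - P(X_1 > t, ..., X_k > t); the survival probability
    # is an MVN orthant probability, computed here via the CDF of -X ~ N(-mu, cov)
    upper = multivariate_normal(mean=-mu, cov=cov).cdf(np.full(3, -t))
    return 1.0 - upper

# Upper 95% quantile of the minimum GIC, by root finding on the CDF
q95 = brentq(lambda t: cdf_min(t) - 0.95, -10, 10)
print(f"95% quantile of min GIC: {q95:.3f}")
```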
Procedia PDF Downloads 86
1220 Complex Dynamics of a Four Species Food-Web Model: An Analysis through Beddington-Deangelis Functional Response in the Presence of Additional Food
Authors: Surbhi Rani, Sunita Gakkhar
Abstract:
A four-dimensional food web system consisting of two prey species, a generalist middle predator, and a top predator is proposed and investigated. The middle predator predates both prey species with a modified Holling type-II functional response. The food web model is found to be well-posed, bounded, and dissipative. The proposed model's essential dynamical features are studied in terms of local stability. The survival of all four species is explored, and persistence conditions are established. The numerical simulations reveal persistence in the form of a chaotic attractor or a stable focus. The conclusion is that providing additional food to the middle predator may help to control chaos in the food chain.
Keywords: predator-prey model, existence of equilibrium points, local stability, chaos, numerical simulations
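A minimal sketch of integrating a four-species food web with Holling type-II responses using SciPy; all parameter values are assumptions, and the paper's Beddington-DeAngelis response and additional-food terms are not reproduced:

```python
from scipy.integrate import solve_ivp

# Illustrative 4-species food web: prey x1, x2; middle predator y; top predator z.
r1, r2, a1, a2, b, c, d1, d2, e = 1.0, 0.8, 0.5, 0.4, 0.3, 0.25, 0.2, 0.15, 0.6

def rhs(t, u):
    x1, x2, y, z = u
    f1 = a1 * x1 / (1 + b * x1)   # Holling II: prey 1 -> middle predator
    f2 = a2 * x2 / (1 + b * x2)   # Holling II: prey 2 -> middle predator
    g = c * y / (1 + e * y)       # Holling II: middle -> top predator
    return [x1 * (r1 - x1) - f1 * y,
            x2 * (r2 - x2) - f2 * y,
            y * (f1 + f2 - d1) - g * z,
            z * (g - d2)]

sol = solve_ivp(rhs, (0, 500), [0.5, 0.5, 0.3, 0.2], dense_output=True)
print("state at t=500:", sol.y[:, -1])
```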
Procedia PDF Downloads 107
1219 An Audit of Climate Change and Sustainability Teaching in Medical School
Authors: M. Tiachachat, M. Mihoubi
Abstract:
The Bell polynomials are special polynomials in combinatorial analysis with a wide range of applications in mathematics, and they have interested many authors. The exponential partial Bell polynomials reduce to several special combinatorial sequences, and numerous researchers have already studied them, as evidenced by many articles in the literature. Inspired by this work, we propose a family of special polynomials named the 2-successive partial Bell polynomials. Using a combinatorial approach, we prove the properties of these numbers, derive several identities, and discuss some special cases. This family includes well-known numbers and polynomials such as the Stirling numbers and the Bell numbers and polynomials. We investigate their properties by employing generating functions.
Keywords: 2-associated r-Stirling numbers, the exponential partial Bell polynomials, generating function, combinatorial interpretation
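For reference, the standard definition of the exponential partial Bell polynomials the abstract starts from (textbook material, not the paper's new 2-successive variant):

```latex
% Exponential partial Bell polynomials (standard definition)
B_{n,k}(x_1,\dots,x_{n-k+1})
  = \sum \frac{n!}{j_1!\, j_2! \cdots j_{n-k+1}!}
    \prod_{i=1}^{n-k+1} \left(\frac{x_i}{i!}\right)^{j_i},
% where the sum runs over j_1 + j_2 + \cdots = k and j_1 + 2j_2 + 3j_3 + \cdots = n.
% Equivalently, via the generating function:
\frac{1}{k!}\Bigl(\sum_{m\ge 1} x_m \frac{t^m}{m!}\Bigr)^{\!k}
  = \sum_{n\ge k} B_{n,k}(x_1,\dots,x_{n-k+1}) \frac{t^n}{n!}.
```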
Procedia PDF Downloads 109
1218 Mathematical Modelling of Spatial Distribution of Covid-19 Outbreak Using Diffusion Equation
Authors: Kayode Oshinubi, Brice Kammegne, Jacques Demongeot
Abstract:
The use of mathematical tools like partial differential equations and ordinary differential equations has become very important for predicting the evolution of a viral disease in a population, in order to take preventive and curative measures. In December 2019, a novel variety of coronavirus (SARS-CoV-2) was identified in Wuhan, Hubei Province, China, causing a severe and potentially fatal respiratory syndrome, i.e., COVID-19. It then became a pandemic, declared by the World Health Organization (WHO) on March 11, 2020, which spread around the globe. A reaction-diffusion system is a mathematical model that describes the evolution of a phenomenon subject to two processes: a reaction process, in which different substances are transformed, and a diffusion process, which causes a distribution in space. This article provides a mathematical study of the Susceptible, Exposed, Infected, Recovered, and Vaccinated (SEIRV) population model of the COVID-19 pandemic by means of reaction-diffusion equations. Local and global asymptotic stability conditions for the disease-free and endemic equilibria are determined using Lyapunov functions, and the endemic equilibrium point exists and is stable if it satisfies the Routh-Hurwitz criteria. Also, adequate conditions for the existence and uniqueness of the solution of the model have been proved. We show the spatial distribution of the model compartments when the basic reproduction number satisfies $\mathcal{R}_0 < 1$ and $\mathcal{R}_0 > 1$, and a sensitivity analysis is performed in order to determine the most sensitive parameters in the proposed model. We demonstrate the model's effectiveness by performing numerical simulations, and we investigate the impact of vaccination and the significance of spatial distribution parameters in the spread of COVID-19. The findings indicate that reducing contact with infected persons and increasing the proportion of susceptible people who receive high-efficacy vaccination will lessen the burden of COVID-19 in the population. For public health policymakers, we offer a better understanding of COVID-19 management.
Keywords: COVID-19, SEIRV epidemic model, reaction-diffusion equation, basic reproduction number, vaccination, spatial distribution
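A minimal 1-D reaction-diffusion sketch of the modeling idea, reduced to an SIR skeleton with explicit finite differences; the paper's SEIRV compartments and vaccination terms are not reproduced, and all constants here are assumptions:

```python
import numpy as np

# Minimal 1-D reaction-diffusion SIR sketch (explicit finite differences).
nx, L, n_steps = 100, 1.0, 20000
dx, dt = L / nx, 1e-4
beta, gamma, D_S, D_I = 3.0, 1.0, 1e-3, 1e-3  # illustrative parameters

S = np.ones(nx)
I = np.zeros(nx)
I[nx // 2] = 0.1          # localized initial outbreak in the middle
S -= I

def laplacian(u):
    # second-order central differences with no-flux (Neumann) boundaries
    out = np.zeros_like(u)
    out[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    out[0], out[-1] = out[1], out[-2]
    return out

for _ in range(n_steps):
    new_inf = beta * S * I
    S = S + dt * (D_S * laplacian(S) - new_inf)
    I = I + dt * (D_I * laplacian(I) + new_inf - gamma * I)

print("peak infection fraction across space:", I.max())
```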
Procedia PDF Downloads 122
1217 Chebyshev Wavelets and Applications
Authors: Emanuel Guariglia
Abstract:
In this paper, we deal with Chebyshev wavelets. We analyze their properties by computing their Fourier transform. Moreover, we discuss the differential properties of Chebyshev wavelets due to the connection coefficients. These differential properties, expressed by the connection coefficients (also called refinable integrals), are given by finite series in terms of the Kronecker delta. Moreover, we treat the p-order derivative of Chebyshev wavelets and compute its Fourier transform. Finally, we expand the mother wavelet in a Taylor series, with applications both in fractional calculus and fractal geometry.
Keywords: Chebyshev wavelets, Fourier transform, connection coefficients, Taylor series, local fractional derivative, Cantor set
Procedia PDF Downloads 122
1216 Vehicle to Vehicle Communication: Collision Avoidance Scenarios
Authors: Ahmed Emad, Ahmed Salah, Abdelrahman Magdy, Omar Rashid, Mohammed Adel
Abstract:
This research paper discusses vehicle-to-vehicle (V2V) technology as an important application of linear algebra. This communication technology represents an efficient and promising way to help ensure driver safety by warning drivers when a crash is imminent. The major link between our topic and linear algebra is the Laplacian matrix. Key definitions used in V2V are illustrated, such as VANET and its characteristics. V2V technology can be applied in different applications with different traffic scenarios and various ways of warning drivers. These scenarios were simulated in programs such as MATLAB and Python to test how the V2V system would respond to the different scenarios and warn drivers exposed to the threat of collision.
Keywords: V2V communication, vehicle to vehicle scenarios, VANET, FCW, EEBL, IMA, Laplacian matrix
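A small sketch of the linear-algebra link: build the Laplacian matrix of an assumed VANET topology and read connectivity off its zero eigenvalues:

```python
import numpy as np

# Adjacency matrix of a small illustrative VANET: vehicles are nodes,
# an edge means two vehicles are within radio range (assumed topology).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]])

D = np.diag(A.sum(axis=1))   # degree matrix
L = D - A                    # graph Laplacian

# The multiplicity of the zero eigenvalue equals the number of connected
# components, i.e., groups of vehicles that can relay warnings to each other.
eigvals = np.linalg.eigvalsh(L)
print("Laplacian eigenvalues:", np.round(eigvals, 3))
print("connected components:", np.sum(np.isclose(eigvals, 0)))
```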
Procedia PDF Downloads 162
1215 Notes on Frames in Weighted Hardy Spaces and Generalized Weighted Composition Operators
Authors: Shams Alyusof
Abstract:
This work aims to enrich the study of frames, given their prominent role in pure and applied mathematics and their many applications in computer science and engineering. Recently, there have been remarkable studies of operators that preserve frames on some spaces, and this research can be considered an extension of such studies. Indeed, in this paper we characterize the weighted composition operators that preserve frames in weighted Hardy spaces on the open unit disk. Moreover, we show that this characterization does not apply to generalized weighted composition operators on such spaces. Nevertheless, this study could be extended to provide more specific characterizations.
Keywords: frames, generalized weighted composition operators, weighted Hardy spaces, analytic functions
Procedia PDF Downloads 120
1214 Step into the Escalator’s Fractal Behavior by Using the Poincare Map
Authors: Ali Albadri
Abstract:
The step band of an escalator moves in a cyclic, periodic pattern, and most, if not all, of the components and sub-assemblies in the escalator operate in the same way. If you mark one step in the step band and stand next to the escalator incline to watch the marked step pass by, you may ask yourself: does the marked step behave exactly the same way during each revolution, every time it passes? There is some similarity between this example and that of an astronomer who watches planets in the sky and asks whether each planet intersects the plane of observation in the same position on every rotation. We know the answer to the second example is no: scientists, astronomers, and mathematicians have proven that planets deviate from their paths to take new paths during their planetary motion, albeit with minimal change. But what about the answer to the question in the first example, considering the increasing wear of components in the step, the step band, the tracks, and many other places in the escalator, as well as the accumulation of fatigue in the components and sub-assemblies? This research is part of several studies we are conducting to address the question in the first example, using the fractal dimension as a quantitative tool and the Poincare map as a qualitative tool. This study has shown that the fractal dimension value and the shape and distribution of the orbits in the Poincare map have a significant correlation with the quality of the mechanical components and sub-assemblies in the escalator.
Keywords: fractal dimension, Poincare map, rugby ball orbit, worm orbit
Procedia PDF Downloads 59
1213 Singular Stochastic Control Model with Carrying Capacity of Population Management Policy for Squirrels in Durian Orchards
Authors: Sasiwimol Auepong, Raywat Tanadkithirun
Abstract:
In this work, we consider the problem of squirrels ruining durian, an economically important fruit in Thailand. We seek a strategy for durian farmers to eliminate the squirrels, under the consideration that squirrels also provide an ecosystem service. The population dynamics of the squirrels are constructed with a carrying capacity, since we consider the population in a confined area. A performance index indicating the total benefit of a given elimination strategy is provided; it comprises the cost of countermeasures, the loss of resources, and the ecosystem service provided by the squirrels. The optimal performance index is solved numerically through the variational inequality using the finite difference method. The optimal strategy to control the squirrel population is also given numerically.
Keywords: controlled stochastic differential equation, durian, finite difference method, performance index, singular stochastic control model, squirrel
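A minimal Euler-Maruyama sketch of logistic (carrying-capacity) population dynamics of the kind such a model builds on; the drift and noise parameters are assumptions, and the singular control and performance index are not shown:

```python
import numpy as np

# Euler-Maruyama for dX_t = r X_t (1 - X_t / K) dt + sigma X_t dW_t
rng = np.random.default_rng(1)
r, K, sigma = 0.8, 100.0, 0.15   # illustrative growth rate, capacity, volatility
T, n = 20.0, 2000
dt = T / n

x = np.empty(n + 1)
x[0] = 10.0                      # initial squirrel population
for i in range(n):
    dW = rng.normal(scale=np.sqrt(dt))
    x[i + 1] = x[i] + r * x[i] * (1 - x[i] / K) * dt + sigma * x[i] * dW
    x[i + 1] = max(x[i + 1], 0.0)  # population cannot go negative

print(f"population after {T:.0f} time units: {x[-1]:.1f} (K = {K:.0f})")
```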
Procedia PDF Downloads 88
1212 The Logistics Equation and Fractal Dimension in Escalators Operations
Authors: Ali Albadri
Abstract:
The logistic equation has never been used or studied in scientific fields outside ecology, and it has never been used to understand the behavior of a dynamic system of mechanical machines such as an escalator. We have studied the compatibility of the logistic map against real measurements from an escalator. This study has proven that there is good compatibility between the logistic equation and the experimental measurements, and it has discovered a potential relationship between the fractal dimension and the nonlinearity parameter R in the logistic equation. The fractal dimension increases as the parameter R (the nonlinear parameter) increases, implying that the fractal dimension increases as the life span of the machine moves from the steady/stable phase, to the period-doubling phase, to a chaotic phase. The fractal dimension and the parameter R can therefore be used as tools to verify and check the health of machines. We have developed a theory that there are three stages of behavior into which the life span of a machine can be classified: a steady/stable stage, a period-doubling stage, and a chaotic stage. The level of attention the machine requires differs depending on the stage it is in, and the rate of faults increases as the machine moves through these three stages. During the period-doubling and chaotic stages, the number of faults starts to increase and becomes less predictable; predictability improves as our monitoring of the changes in the fractal dimension and the parameter R improves. The principles and foundations of our theory have, and will have, a profound impact on the design, operation, and maintenance schedules of systems, whether mechanical, electrical, or electronic. The methodology discussed in this paper will give businesses the chance to be more careful at the design stage and in planning maintenance to control costs. The findings can be used to correlate the three stages of a mechanical system with more in-depth mechanical parameters such as wear and fatigue life.
Keywords: logistic map, bifurcation map, fractal dimension, logistic equation
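A short sketch of the period-doubling route the abstract describes, iterating the logistic map for a few illustrative values of R:

```python
# Logistic map x_{n+1} = R x_n (1 - x_n): the long-run orbit moves from a fixed
# point, through period doubling, to chaos as R increases (R values illustrative).
def attractor(R, n_transient=1000, n_keep=64):
    x = 0.5
    for _ in range(n_transient):      # discard transient behavior
        x = R * x * (1 - x)
    orbit = set()
    for _ in range(n_keep):           # collect the settled orbit
        x = R * x * (1 - x)
        orbit.add(round(x, 6))
    return sorted(orbit)

for R in (2.8, 3.2, 3.5, 3.9):
    pts = attractor(R)
    kind = "chaotic" if len(pts) > 16 else f"period-{len(pts)}"
    print(f"R = {R}: {kind}, sample points {pts[:4]}")
```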
Procedia PDF Downloads 106
1211 Vibration of Nanobeam Subjected to Constant Magnetic Field and Ramp-Type Thermal Loading under Non-Fourier Heat Conduction Law of Lord-Shulman
Authors: Hamdy M. Youssef
Abstract:
In this work, the usual Euler-Bernoulli nanobeam has been modeled in the context of the Lord-Shulman thermoelastic theory, which contains a non-Fourier heat conduction law. The nanobeam is subjected to a constant magnetic field and ramp-type thermal loading. The Laplace transform has been applied to the governing equations, and the solutions have been obtained using a direct approach. The inversions of the Laplace transform have been calculated numerically using the Tzou approximation method. The solutions have been applied to a nanobeam made of silicon nitride. The distributions of the temperature increment, lateral deflection, strain, stress, and strain-energy density are represented in figures for different values of the magnetic field intensity and the ramp-time heat parameter. Both the magnetic field intensity and the ramp-time heat parameter have significant effects on all the studied functions, and they could be used as tuners to control the energy generated through the nanobeam.
Keywords: nanobeam, vibration, constant magnetic field, ramp-type thermal loading, non-Fourier heat conduction law
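For context, the non-Fourier conduction law underlying the Lord-Shulman model is the standard Maxwell-Cattaneo form (a reference sketch, not taken from the paper; τ0 is the thermal relaxation time):

```latex
% Maxwell-Cattaneo heat conduction law (standard form): the relaxation time
% \tau_0 removes the infinite propagation speed implied by Fourier's law
\vec{q} + \tau_0 \,\frac{\partial \vec{q}}{\partial t} = -K \,\nabla T
```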
Procedia PDF Downloads 137
1210 The Analysis of the Two Dimensional Huxley Equation Using the Galerkin Method
Authors: Pius W. Molo Chin
Abstract:
Real-life problems such as the Huxley equation are often modeled as nonlinear differential equations, and these problems need accurate and reliable methods for their solutions. In this paper, we propose a nonstandard finite difference method in time, combined with the Galerkin and compactness methods in the space variables. This coupled method is used to analyze a two-dimensional Huxley equation for the existence and uniqueness of the continuous solution of the problem in appropriate spaces to be defined. We proceed to design a numerical scheme consisting of the aforementioned methods and show that the scheme is stable. We further show that the stable scheme converges at a rate that is optimal in both the L2 and H1 norms. Furthermore, we show that the scheme replicates the decaying qualities of the exact solution. Numerical experiments are presented with the help of an example to justify the validity of the designed scheme.
Keywords: Huxley equations, non-standard finite difference method, Galerkin method, optimal rate of convergence
Procedia PDF Downloads 215
1209 Spatially Encoded Hyperspectral Compressive Microscope for Broadband VIS/NIR Imaging
Authors: Lukáš Klein, Karel Žídek
Abstract:
Hyperspectral imaging counts among the most frequently used multidimensional sensing methods. While there are many approaches to capturing a hyperspectral data cube, optical compression is emerging as a valuable tool to reduce setup complexity and the amount of data storage needed. Hyperspectral compressive imagers have been created in the past; however, they have primarily focused on relatively narrow sections of the electromagnetic spectrum. A broader spectral study of samples can provide helpful information, especially for applications involving harmonic generation and advanced material characterization. We demonstrate a broadband hyperspectral microscope based on the single-pixel camera principle. Captured spatially encoded data are processed to reconstruct a hyperspectral cube in the combined visible and near-infrared spectrum (from 400 to 2500 nm). Hyperspectral cubes can be reconstructed with a spectral resolution of up to 3 nm and a spatial resolution of up to 7 µm (subject to diffraction) at a high compression ratio.
Keywords: compressive imaging, hyperspectral imaging, near-infrared spectrum, single-pixel camera, visible spectrum
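A toy single-pixel measurement-and-reconstruction sketch (1-D, one spectral band); random binary masks and an ISTA solver with an L1 sparsity prior are standard stand-ins, not the authors' optics or algorithm:

```python
import numpy as np

# Single-pixel camera toy model: each measurement sums the scene through one
# random binary mask (y = A @ x); reconstruction assumes the scene is sparse.
rng = np.random.default_rng(2)
n_pixels, n_meas = 256, 96               # ~2.7x compression (illustrative)
x_true = np.zeros(n_pixels)
x_true[rng.choice(n_pixels, 8, replace=False)] = rng.uniform(1, 2, 8)

A = rng.integers(0, 2, (n_meas, n_pixels)).astype(float)  # random binary masks
y = A @ x_true                                            # single-pixel readings

# ISTA: gradient step on ||y - Ax||^2 followed by soft-thresholding (L1 prior)
step = 1.0 / np.linalg.norm(A, 2) ** 2
lam = 0.1
x = np.zeros(n_pixels)
for _ in range(500):
    x = x + step * A.T @ (y - A @ x)
    x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)

print("relative reconstruction error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```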
Procedia PDF Downloads 89
1208 The Duties of the Immortals and the Name of Anauša or Anušiya
Authors: Behzad Moeini Sam, Sara Mohammadi Avandi
Abstract:
One of the reasons for the success of the Achaemenids was the innovation and precise organization they used in the administrative and military fields. Of course, these organizations had their roots in previous governments, with changes made in what was borrowed. The units of the Achaemenid army are also among the cases that have their origins in the ancient East. In this article, the attempt is to find the sources of the Immortal Army based on the writings of ancient and modern authors and archaeological documents, and to assess the name mentioned by Herodotus and rejected by some authors. Linguistic data are also emphasized to lead to a better deduction. It is thus concluded that 'anauša' is more probable than 'anušiya'.
Keywords: army, immortal, ten thousand, Anauša, Anušiya
Procedia PDF Downloads 72
1207 Artificial Intelligence Technologies Used in Healthcare: Its Implication on the Healthcare Workforce and Applications in the Diagnosis of Diseases
Authors: Rowanda Daoud Ahmed, Mansoor Abdulhak, Muhammad Azeem Afzal, Sezer Filiz, Usama Ahmad Mughal
Abstract:
This paper discusses important aspects of AI in the healthcare domain. The increase of healthcare data, both in size and complexity, opens more room for artificial intelligence applications. Our focus is to review the main AI methods within the scope of the healthcare domain. The results of the review show that recommendations for diagnosis, recommendations for treatment, patient engagement, and administrative tasks are the key applications of AI in healthcare. Understanding the potential of AI methods in the domain of healthcare would benefit healthcare practitioners and will improve patient outcomes.
Keywords: AI in healthcare, technologies of AI, neural network, future of AI in healthcare
Procedia PDF Downloads 111
1206 Finding Data Envelopment Analysis Targets Using Multi-Objective Programming in DEA-R with Stochastic Data
Authors: R. Shamsi, F. Sharifi
Abstract:
In this paper, we obtain the projection of inefficient units in data envelopment analysis (DEA) in the case of stochastic inputs and outputs, using a multi-objective programming (MOP) structure. In some problems, the inputs might be stochastic while the outputs are deterministic, and vice versa. In such cases, we propose a multi-objective DEA-R model because, in some cases (e.g., when unnecessary and irrational weights in the BCC model reduce the efficiency score), an efficient decision-making unit (DMU) is introduced as inefficient by the BCC model, whereas the DMU is considered efficient by the DEA-R model. In other cases, only the ratio of stochastic data may be available (e.g., the ratio of stochastic inputs to stochastic outputs). Thus, we provide a multi-objective DEA model without explicit outputs and prove that the input-oriented MOP DEA-R model in the constant returns to scale case can be replaced by the MOP-DEA model without explicit outputs in the variable returns to scale case, and vice versa. Using interactive methods to solve the proposed model yields a projection corresponding to the viewpoint of the decision-maker and the analyst, which is nearer to reality and more practical. Finally, an application is provided.
Keywords: DEA-R, multi-objective programming, stochastic data, data envelopment analysis
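For orientation, a sketch of the deterministic input-oriented CCR baseline that DEA variants build on, via SciPy's linprog; the data are illustrative, and the stochastic multi-objective DEA-R machinery of the paper is not reproduced:

```python
import numpy as np
from scipy.optimize import linprog

# Input-oriented CCR DEA: efficiency of DMU k = min theta such that a
# lambda-weighted composite of peers uses <= theta * inputs_k and >= outputs_k.
X = np.array([[4.0, 2.0], [6.0, 3.0], [8.0, 5.0], [5.0, 4.0]])  # inputs (n x m)
Y = np.array([[2.0], [3.0], [4.0], [2.5]])                      # outputs (n x s)

def ccr_efficiency(k):
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(1 + n)
    c[0] = 1.0                                   # minimize theta
    # inputs:  sum_j lambda_j x_ij - theta * x_ik <= 0
    A_in = np.hstack([-X[k].reshape(m, 1), X.T])
    # outputs: -sum_j lambda_j y_rj <= -y_rk
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[k]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + n))
    return res.fun

for k in range(len(X)):
    print(f"DMU {k}: efficiency = {ccr_efficiency(k):.3f}")
```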
Procedia PDF Downloads 105
1205 Evolution under Length Constraints for Convolutional Neural Networks Architecture Design
Authors: Ousmane Youme, Jean Marie Dembele, Eugene Ezin, Christophe Cambier
Abstract:
In recent years, convolutional neural network (CNN) architectures designed by evolutionary algorithms have proven to be competitive with handcrafted architectures designed by experts. However, these algorithms need a lot of computational power, which is beyond the capabilities of most researchers and engineers. To overcome this problem, we propose an evolutionary architecture search under length constraints. It consists of two algorithms: a length-search strategy to find an optimal space, and an architecture-search strategy based on a genetic algorithm to find the best individual in the optimal space. Our algorithms drastically reduce resource costs while keeping good performance. On the CIFAR-10 dataset, our framework presents outstanding performance, with an error rate of 5.12% and only 4.6 GPU-days to converge to the optimal individual, 22 GPU-days fewer than the lowest-cost automatic evolutionary algorithm in the peer competition.
Keywords: CNN architecture, genetic algorithm, evolution algorithm, length constraints
Procedia PDF Downloads 127
1204 Integral Domains and Alexandroff Topology
Authors: Shai Sarussi
Abstract:
Let S be an integral domain which is not a field, let F be its field of fractions, and let A be an F-algebra. An S-subalgebra R of A is called S-nice if R ∩ F = S and FR = A. A topological space whose set of open sets is closed under arbitrary intersections is called an Alexandroff space. Inspired by the well-known Zariski-Riemann space and the Zariski topology on the set of prime ideals of a commutative ring, we define a topology on the set of all S-nice subalgebras of A. Consequently, we get an interplay between algebra and topology that gives us a better understanding of the S-nice subalgebras of A. It is shown that every irreducible subset of the set of S-nice subalgebras of A has a supremum, and a characterization of the irreducible components is given in terms of maximal S-nice subalgebras of A.
Keywords: Alexandroff topology, integral domains, Zariski-Riemann space, S-nice subalgebras
Procedia PDF Downloads 109
1203 A Comparison of Methods for Estimating Dichotomous Treatment Effects: A Simulation Study
Authors: Jacqueline Y. Thompson, Sam Watson, Lee Middleton, Karla Hemming
Abstract:
Introduction: The odds ratio (estimated via logistic regression) is a well-established and common approach for estimating covariate-adjusted binary treatment effects when comparing a treatment and a control group with dichotomous outcomes. Its popularity is primarily due to its stability and robustness to model misspecification. However, the situation is different for the relative risk and the risk difference, which are arguably easier to interpret and better suited to specific designs such as non-inferiority studies. So far, there is no equivalent, widely accepted approach to estimating an adjusted relative risk or risk difference in clinical trials, partly due to the lack of a comprehensive evaluation of the available candidate methods. Methods/Approach: A simulation study is designed to evaluate the performance of the relevant candidate methods for estimating relative risks, representing conditional and marginal estimation approaches. We consider the log-binomial generalised linear model (GLM) with iteratively reweighted least squares (IWLS) and model-based standard errors (SEs); the log-binomial GLM with convex optimisation and model-based SEs; the log-binomial GLM with convex optimisation and permutation tests; the modified-Poisson GLM with IWLS and robust SEs; log-binomial generalised estimating equations (GEE) with robust SEs; marginal standardisation with delta-method SEs; and marginal standardisation with permutation-test SEs. Independent and identically distributed datasets are simulated from a randomised controlled trial to evaluate these candidate methods. Simulations are replicated 10,000 times for each scenario, across all combinations of sample sizes (200, 1000, and 5000), outcome rates (10%, 50%, and 80%), and covariate effects (ranging from -0.05 to 0.7), representing weak, moderate, or strong relationships. Treatment effects (0, -0.5, and 1 on the log scale) cover the null (H0) and alternative (H1) hypotheses, to evaluate coverage and power in realistic scenarios. Performance measures (bias, mean square error (MSE), relative efficiency, and convergence rates) are evaluated across scenarios covering a range of sample sizes, event rates, covariate prognostic strengths, and model misspecifications. Potential Results, Relevance & Impact: There are several methods for estimating unadjusted and adjusted relative risks. However, it is unclear which method is the most efficient, preserves the type-I error rate, is robust to model misspecification, or is the most powerful when adjusting for non-prognostic and prognostic covariates. GEE estimates may be biased when the outcome distributions are not from marginal binary data. Also, it appears that marginal standardisation and convex optimisation may perform better than the GLM IWLS log-binomial approach.
Keywords: binary outcomes, statistical methods, clinical trials, simulation study
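A minimal sketch of one candidate method, the modified-Poisson GLM with robust (sandwich) standard errors, using statsmodels on simulated data; the simulation design here is illustrative, not the paper's:

```python
import numpy as np
import statsmodels.api as sm

# Modified-Poisson approach: Poisson GLM on binary outcomes with robust SEs,
# so exp(coefficient) is interpreted as an adjusted relative risk.
rng = np.random.default_rng(3)
n = 2000
treat = rng.integers(0, 2, n)                    # randomised treatment arm
covar = rng.normal(size=n)                       # a prognostic covariate
p = 1 / (1 + np.exp(-(-1.0 + 0.4 * treat + 0.5 * covar)))
y = rng.binomial(1, p)                           # binary outcome

X = sm.add_constant(np.column_stack([treat, covar]))
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC1")

rr = np.exp(fit.params[1])                       # adjusted RR for treatment
lo, hi = np.exp(fit.conf_int()[1])               # 95% CI on the RR scale
print(f"adjusted RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```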
Procedia PDF Downloads 112
1202 Large Neural Networks Learning From Scratch With Very Few Data and Without Explicit Regularization
Authors: Christoph Linse, Thomas Martinetz
Abstract:
Recent findings have shown that neural networks generalize even in over-parametrized regimes with zero training error. This is surprising, since it goes completely against traditional machine learning wisdom. In our empirical study, we corroborate these findings in the domain of fine-grained image classification. We show that very large convolutional neural networks (CNNs) with millions of weights do learn with only a handful of training samples and without image augmentation, explicit regularization, or pretraining. We train the architectures ResNet18, ResNet101, and VGG19 on subsets of the difficult benchmark datasets Caltech101, CUB_200_2011, FGVCAircraft, Flowers102, and StanfordCars with 100 classes and more, perform a comprehensive comparative study, and draw implications for the practical application of CNNs. Finally, we show that VGG19, with 140 million weights, learns to distinguish airplanes and motorbikes with up to 95% accuracy using only 20 training samples per class.
Keywords: convolutional neural networks, fine-grained image classification, generalization, image recognition, over-parameterized, small data sets
Procedia PDF Downloads 87
1201 Mostar Type Indices and QSPR Analysis of Octane Isomers
Authors: B. Roopa Sri, Y Lakshmi Naidu
Abstract:
Chemical graph theory (CGT) is the branch of mathematical chemistry in which molecules are modeled to study their physicochemical properties using molecular descriptors. Amongst these descriptors, topological indices play a vital role in predicting properties by encoding the graph topology of the molecule. Recently, a bond-additive topological index known as the Mostar index has been proposed. In this paper, we compute the Mostar-type indices of octane isomers and use the data obtained to perform a QSPR analysis. Furthermore, we show the correlation between the Mostar-type indices and the properties.
Keywords: chemical graph theory, Mostar-type indices, octane isomers, QSPR analysis, topological index
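A small sketch of the (plain) Mostar index computation with networkx, using its standard bond-additive definition; n-octane is modeled as a hydrogen-suppressed path graph:

```python
import networkx as nx

# Mostar index: Mo(G) = sum over edges uv of |n_u - n_v|, where n_u counts
# vertices strictly closer to u than to v (standard definition of the index).
def mostar_index(G):
    dist = dict(nx.all_pairs_shortest_path_length(G))
    total = 0
    for u, v in G.edges():
        n_u = sum(1 for w in G if dist[u][w] < dist[v][w])
        n_v = sum(1 for w in G if dist[v][w] < dist[u][w])
        total += abs(n_u - n_v)
    return total

# n-octane as a path graph on 8 carbon atoms (hydrogens suppressed, as usual)
octane = nx.path_graph(8)
print("Mostar index of n-octane:", mostar_index(octane))  # prints 24
```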
Procedia PDF Downloads 128
1200 Solving Mean Field Problems: A Survey of Numerical Methods and Applications
Authors: Amal Machtalay
Abstract:
In this survey, we review the rapidly growing literature on numerical methods for solving different forms of mean field problems, namely mean field games (MFG), mean field control (MFC), potential MFGs, and master equations, as well as their recent applications. We distinguish two families of numerical methods: iterative methods based on mesh generation, and so-called mesh-free methods, normally related to neural networks and learning frameworks.
Keywords: mean-field games, numerical schemes, partial differential equations, complex systems, machine learning
Procedia PDF Downloads 112
1199 Parallel Random Number Generation for the Modern Supercomputer Architectures
Authors: Roman Snytsar
Abstract:
Pseudo-random numbers are often used in scientific computing, such as in Monte Carlo simulations or quantum-inspired optimization. The requirements for a parallel random number generator running in a modern multi-core vector environment are more stringent than those for sequential random number generators. As well as passing the usual quality tests, the output of the parallel random number generator must be verifiable and reproducible throughout the concurrent execution. We propose a family of vectorized permuted congruential generators. Implementations are available for multiple modern vector computer architectures. Besides demonstrating good single-core performance, the generators scale easily across many processor cores and multiple distributed nodes. We provide a performance and parallel speedup analysis and comparisons between the implementations.
Keywords: pseudo-random numbers, quantum optimization, SIMD, parallel computing
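A minimal vectorized sketch of the permuted congruential generator idea (the PCG32 XSH-RR output function) with NumPy lanes; the seeding constants are arbitrary assumptions, and the paper's SIMD implementations are not reproduced here:

```python
import numpy as np

# Vectorized PCG32 (XSH-RR): several lanes advance in lockstep, each with its
# own odd increment, so the streams are independent and reproducible.
MULT = np.uint64(6364136223846793005)

def pcg32_vector(state, inc):
    """Advance each lane once; returns (new_state, one uint32 output per lane)."""
    old = state
    state = old * MULT + inc                # 64-bit LCG step, wraps mod 2**64
    xorshifted = (((old >> np.uint64(18)) ^ old) >> np.uint64(27)).astype(np.uint32)
    rot = (old >> np.uint64(59)).astype(np.uint32)
    out = (xorshifted >> rot) | (xorshifted << ((np.uint32(32) - rot) & np.uint32(31)))
    return state, out

lanes = 8
state = np.arange(1, lanes + 1, dtype=np.uint64) * np.uint64(0x9E3779B97F4A7C15)
inc = (np.arange(lanes, dtype=np.uint64) << np.uint64(1)) | np.uint64(1)  # odd

state, out = pcg32_vector(state, inc)
print(out)  # one 32-bit draw per lane
```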
Procedia PDF Downloads 119
1198 Modified Form of Margin Based Angular Softmax Loss for Speaker Verification
Authors: Jamshaid ul Rahman, Akhter Ali, Adnan Manzoor
Abstract:
Learning-based systems have received increasing interest in recent years, and recognition structures, including end-to-end speaker recognition, are one of the hot topics in this area. A well-known work on end-to-end speaker verification using the angular softmax loss gained significant importance and is considered useful for directly training a discriminative model instead of the traditionally adopted i-vector approach. The margin-based strategy in angular softmax is beneficial for learning discriminative speaker embeddings, but the random selection of margin values is a big issue in both the additive angular margin and the multiplicative angular margin. As a better solution to this matter, we present an alternative approach by introducing a similar form of an additive parameter that was originally introduced for face recognition; it has the capacity to adjust automatically with the corresponding margin values and is applicable to learning more discriminative features than the softmax. Experiments are conducted on part of the Fisher dataset, where it is observed that using the additive parameter with angular softmax to train the front-end, with probabilistic linear discriminant analysis (PLDA) in the back-end, boosts the performance of the structure.
Keywords: additive parameter, angular softmax, speaker verification, PLDA
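A sketch of the additive-angular-margin mechanism from face recognition that the paper's parameter is modeled after; this shows the margin idea only, not the authors' automatically adjusting parameter:

```python
import torch
import torch.nn.functional as F

# Additive angular margin: the margin m (radians) is added to the angle of the
# target class only, before the rescaled softmax cross-entropy is applied.
def additive_angular_margin_loss(embeddings, weights, labels, s=30.0, m=0.2):
    # cosine of angles between L2-normalized embeddings and class weights
    cos = F.normalize(embeddings) @ F.normalize(weights).T
    theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
    onehot = F.one_hot(labels, weights.shape[0]).float()
    logits = s * torch.cos(theta + m * onehot)
    return F.cross_entropy(logits, labels)

emb = torch.randn(4, 64)                      # batch of 4 speaker embeddings
W = torch.randn(10, 64, requires_grad=True)   # 10 illustrative speaker classes
y = torch.tensor([0, 3, 3, 7])
loss = additive_angular_margin_loss(emb, W, y)
loss.backward()
print(float(loss))
```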
Procedia PDF Downloads 100
1197 An Efficient Propensity Score Method for Causal Analysis With Application to Case-Control Study in Breast Cancer Research
Authors: Ms Azam Najafkouchak, David Todem, Dorothy Pathak, Pramod Pathak, Joseph Gardiner
Abstract:
Propensity score (PS) methods have recently become a standard tool for causal inference in observational studies, where exposure is not randomly assigned and confounding can thus impact the estimation of the treatment effect on the outcome. For a binary outcome, the effect of treatment can be estimated by odds ratios, relative risks, and risk differences. However, different PS methods may give different estimates of the treatment effect on the outcome. Several methods of PS analysis have been used, mainly including matching, inverse probability weighting, stratification, and covariate adjustment on the PS. Given the dangers of discretizing continuous variables (exposure, covariates), the focus of this paper is on how variation in cut-points or boundaries affects the average treatment effect (ATE) in the stratification PS method. We therefore try to avoid choosing arbitrary cut-points; instead, we continuously discretize the PS and accumulate information across all cut-points for inference. We use Monte Carlo simulation to evaluate the ATE, focusing on two PS methods: stratification and covariate adjustment on the PS. We then show how this can be observed in analyses of data from a case-control study of breast cancer, the Polish Women's Health Study.
Keywords: average treatment effect, propensity score, stratification, covariate adjustment, Monte Carlo estimation, breast cancer, case-control study
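A minimal sketch of the fixed-quintile PS stratification whose arbitrary cut-points motivate the paper, on simulated data; the data-generating process and all constants are assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 5000
x = rng.normal(size=(n, 2))                          # confounders
ps_true = 1 / (1 + np.exp(-(0.8 * x[:, 0] - 0.5 * x[:, 1])))
t = rng.binomial(1, ps_true)                         # confounded treatment
y = 1.0 * t + x[:, 0] + rng.normal(size=n)           # true ATE = 1.0

# Estimate the PS, then stratify on fixed quintile cut-points
ps = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]
edges = np.quantile(ps, [0.2, 0.4, 0.6, 0.8])
stratum = np.digitize(ps, edges)

# ATE = weighted average of within-stratum treated-vs-control mean differences
ate = 0.0
for s in range(5):
    m = stratum == s
    diff = y[m & (t == 1)].mean() - y[m & (t == 0)].mean()
    ate += diff * m.mean()
print(f"stratified ATE estimate: {ate:.2f} (true effect 1.00)")
```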
Procedia PDF Downloads 105