World Academy of Science, Engineering and Technology
[Mathematical and Computational Sciences]
Online ISSN : 1307-6892
1286 Efficient Reconstruction of DNA Distance Matrices Using an Inverse Problem Approach
Authors: Boris Melnikov, Ye Zhang, Dmitrii Chaikovskii
Abstract:
We continue to consider one of the cybernetic methods in computational biology related to the study of DNA chains. Specifically, we consider the problem of reconstructing a partially filled distance matrix of DNA chains. In practice, it turns out that on a modern computer of average capabilities, creating even a small distance matrix for mitochondrial DNA sequences with standard algorithms is quite time-consuming. As the size of the matrix grows, the computational effort increases significantly, potentially requiring several weeks to months of non-stop processing. Hence, calculating the distance matrix on conventional computers is hardly feasible, and supercomputers are usually not available. For this reason, we previously published our variants of algorithms for calculating the distance between two DNA chains, followed by algorithms for restoring partially filled matrices, i.e., the inverse problem of matrix processing. In this paper, we propose an algorithm for restoring the distance matrix for DNA chains; the primary focus is on enhancing the algorithms that shape the greedy function within the branch-and-bound framework.
Keywords: DNA chains, distance matrix, optimization problem, restoring algorithm, greedy algorithm, heuristics
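A minimal illustration of one generic completion strategy (not the authors' branch-and-bound algorithm): when the distances form a metric, each missing entry is bracketed by triangle-inequality bounds through every third chain whose distances to both endpoints are known. The sketch below fills a missing entry with the midpoint of those bounds; the matrix and the midpoint rule are illustrative assumptions.

```python
import numpy as np

def fill_missing_distances(D):
    """Fill NaN entries of a symmetric distance matrix using
    triangle-inequality bounds through known intermediate points."""
    D = D.copy()
    n = D.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            if np.isnan(D[i, j]):
                ks = [k for k in range(n)
                      if k not in (i, j)
                      and not np.isnan(D[i, k]) and not np.isnan(D[k, j])]
                if not ks:
                    continue  # no third chain with both distances known
                upper = min(D[i, k] + D[k, j] for k in ks)
                lower = max(abs(D[i, k] - D[k, j]) for k in ks)
                D[i, j] = D[j, i] = 0.5 * (lower + upper)
    return D

# toy 4x4 matrix with one unknown entry
D = np.array([[0.0, 2.0, 3.0, np.nan],
              [2.0, 0.0, 2.5, 4.0],
              [3.0, 2.5, 0.0, 2.0],
              [np.nan, 4.0, 2.0, 0.0]])
print(fill_missing_distances(D))
```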
Procedia PDF Downloads 116
1285 A Study of Families of Bistar and Corona Product of Graph: Reverse Topological Indices
Authors: Gowtham Kalkere Jayanna, Mohamad Nazri Husin
Abstract:
Graph theory, chemistry, and technology are all combined in cheminformatics. The structure and physicochemical properties of organic substances are linked using useful graph invariants and the corresponding molecular graph. In this paper, we study specific reverse topological indices, such as the reverse sum-connectivity index, the reverse Zagreb index, the reverse arithmetic-geometric and reverse geometric-arithmetic indices, the reverse Sombor index, and the reverse Nirmala index, for the bistar graphs B(n: m) and the corona product Kₘ∘Kₙ', where Kₙ' represents the complement of the complete graph Kₙ.
Keywords: reverse topological indices, bistar graph, the corona product, graph
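As a small illustration of how such indices are evaluated, the sketch below builds the bistar B(n, m) and computes two reverse Zagreb-type quantities from Kulli's reverse vertex degree c(v) = Δ(G) − d(v) + 1. The edge-sum forms used here are one common convention and may differ from the exact definitions used in the paper.

```python
import networkx as nx

def bistar(n, m):
    """Bistar B(n, m): two adjacent centers, one with n leaves, one with m."""
    G = nx.Graph()
    G.add_edge("u", "v")
    G.add_edges_from(("u", f"u{i}") for i in range(n))
    G.add_edges_from(("v", f"v{j}") for j in range(m))
    return G

def reverse_zagreb_indices(G):
    """Reverse degrees c(v) = Delta - d(v) + 1; two edge-sum Zagreb variants."""
    delta = max(d for _, d in G.degree())
    c = {v: delta - d + 1 for v, d in G.degree()}
    cm1 = sum(c[u] + c[v] for u, v in G.edges())   # first reverse Zagreb type
    cm2 = sum(c[u] * c[v] for u, v in G.edges())   # second reverse Zagreb type
    return cm1, cm2

print(reverse_zagreb_indices(bistar(3, 4)))
```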
Procedia PDF Downloads 95
1284 Identifying Factors Contributing to the Spread of Lyme Disease: A Regression Analysis of Virginia’s Data
Authors: Fatemeh Valizadeh Gamchi, Edward L. Boone
Abstract:
This research focuses on Lyme disease, a widespread infectious condition in the United States caused by the bacterium Borrelia burgdorferi sensu stricto. It is critical to identify the environmental and economic factors that contribute to the spread of the disease. This study examined data from Virginia to identify a subset of explanatory variables significant for Lyme disease case counts. To identify relevant variables and avoid overfitting, a linear Poisson model and regularized regression methods with ridge, lasso, and elastic net penalties were employed. Cross-validation was performed to acquire tuning parameters. The proposed methods can automatically identify relevant disease-count covariates. The efficacy of the techniques was assessed using four criteria on three simulated datasets. Finally, using the Virginia Department of Health’s Lyme disease data set, the study successfully identified key factors, and the results were consistent with previous studies.
Keywords: Lyme disease, Poisson generalized linear model, ridge regression, lasso regression, elastic net regression
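A compact sketch of the modelling recipe on simulated data (the predictors and penalty weight are placeholders, not the Virginia covariates): a Poisson GLM fit with lasso, ridge, and elastic net penalties via statsmodels. In practice the penalty weight `alpha` would be chosen by cross-validation, as in the study.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, p = 200, 8                     # hypothetical county-level predictors
X = rng.normal(size=(n, p))
beta = np.array([0.8, -0.5, 0.0, 0.0, 0.3, 0.0, 0.0, 0.0])  # sparse truth
y = rng.poisson(np.exp(0.5 + X @ beta))

Xc = sm.add_constant(X)
model = sm.GLM(y, Xc, family=sm.families.Poisson())

# L1_wt = 1 -> lasso, 0 -> ridge, in between -> elastic net
for l1_wt, name in [(1.0, "lasso"), (0.0, "ridge"), (0.5, "elastic net")]:
    res = model.fit_regularized(method="elastic_net", alpha=0.05, L1_wt=l1_wt)
    print(name, np.round(res.params, 2))
```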
Procedia PDF Downloads 135
1283 Role of Water Supply in the Functioning of the MLDB Systems
Authors: Ramanpreet Kaur, Upasana Sharma
Abstract:
The purpose of this paper is to address the challenges faced by the MLDB system at a piston foundry plant due to interruptions in the water supply. For the MLDB system to work, two sub-units must be connected to the robotic main unit. The system cannot function without the robotics and the water supply by the fan (WSF). Insufficient water supply is a cause of system failure. The system operates at top performance when both sub-units are working; if one sub-unit fails, the system capacity is reduced. Priority of repair is given to the main unit, i.e., the robotics and the WSF. To analyze the problem, the semi-Markov process and the regenerative point technique are used. Relevant graphs are also included for a particular case.
Keywords: MLDB system, robotic, semi-Markov process, regenerative point technique
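To make the reliability bookkeeping concrete, here is a much-simplified continuous-time Markov chain sketch of a two-sub-unit system with a single repair facility. The actual paper uses a semi-Markov model with regenerative points and priority repair, which this toy does not reproduce; the rates are hypothetical.

```python
import numpy as np

# States: 0 = both sub-units up (full capacity), 1 = one up (reduced),
# 2 = none up (system down). lam = sub-unit failure rate, mu = repair rate.
lam, mu = 0.02, 0.5
Q = np.array([[-2 * lam,  2 * lam,        0.0],
              [mu,       -(mu + lam),     lam],
              [0.0,       mu,            -mu]])   # generator, rows sum to 0

# stationary distribution: pi Q = 0 with sum(pi) = 1
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(f"full-capacity fraction:    {pi[0]:.4f}")
print(f"reduced-capacity fraction: {pi[1]:.4f}")
print(f"downtime fraction:         {pi[2]:.6f}")
```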
Procedia PDF Downloads 74
1282 The Shannon Entropy and Multifractional Markets
Authors: Massimiliano Frezza, Sergio Bianchi, Augusto Pianese
Abstract:
Introduced by Shannon in 1948 in the field of information theory as the average rate at which information is produced by a stochastic set of data, the concept of entropy has gained much attention as a measure of the uncertainty and unpredictability associated with a dynamical system, eventually depicted by a stochastic process. In particular, the Shannon entropy measures the degree of order/disorder of a given signal and provides useful information about the underlying dynamical process. It has found widespread application in a variety of fields, such as cryptography, statistical physics, and finance. In this regard, many contributions have employed different measures of entropy in an attempt to characterize financial time series in terms of market efficiency, market crashes, and/or financial crises. The Shannon entropy has also been considered as a measure of the risk of a portfolio or as a tool in asset pricing. This work investigates the theoretical link between the Shannon entropy and the multifractional Brownian motion (mBm), a stochastic process that has recently been the focus of renewed interest in finance as a driving model of stochastic volatility. In particular, after exploring the current state of research in this area and highlighting some of the key results and open questions that remain, we show a well-defined relationship between the Shannon (log)entropy and the memory function H(t) of the mBm. In detail, we allow both the length of the time series and the time scale to vary across the analysis, to study how the relationship changes. On the one hand, applications are developed after generating surrogates of mBm trajectories based on different memory functions; on the other hand, an empirical analysis of several international stock indexes, which confirms the previous results, concludes the work.
Keywords: Shannon entropy, multifractional Brownian motion, Hurst–Hölder exponent, stock indexes
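For readers who want the basic estimator in hand, the sketch below computes a histogram-based Shannon entropy of a signal's increments on two toy paths. It illustrates only the estimator itself, not the mBm relationship or the surrogate-generation step of the paper.

```python
import numpy as np

def shannon_entropy(x, bins=30):
    """Histogram estimate of the Shannon entropy of a signal's increments."""
    incr = np.diff(x)
    counts, _ = np.histogram(incr, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(1)
noise = rng.normal(size=5000)
rough = np.cumsum(noise)                                  # Brownian-like path
smooth = np.cumsum(np.convolve(noise, np.ones(20) / 20, mode="same"))

print("entropy of rough-path increments:   ", shannon_entropy(rough))
print("entropy of smoothed-path increments:", shannon_entropy(smooth))
```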
Procedia PDF Downloads 110
1281 The Volume–Volatility Relationship Conditional to Market Efficiency
Authors: Massimiliano Frezza, Sergio Bianchi, Augusto Pianese
Abstract:
The relation between stock price volatility and trading volume represents a controversial issue which has received remarkable attention over the past decades. In fact, an extensive literature shows a positive relation between price volatility and trading volume in financial markets, but the causal relationship originating such association is an open question, from both a theoretical and an empirical point of view. In this regard, various models, which can be considered complementary rather than competitive, have been introduced to explain this relationship. They include the long-debated Mixture of Distributions Hypothesis (MDH), the Sequential Arrival of Information Hypothesis (SAIH), the Dispersion of Beliefs Hypothesis (DBH), and the Noise Trader Hypothesis (NTH). In this work, we analyze whether stock market efficiency can explain the diversity of results achieved over the years. For this purpose, we propose an alternative measure of market efficiency, based on the pointwise regularity of a stochastic process: the Hurst–Hölder dynamic exponent. In particular, we model the stock market by means of the multifractional Brownian motion (mBm), which displays the property of a time-changing regularity. Such models have in common the fact that they locally behave as a fractional Brownian motion, in the sense that their local regularity at time t₀ (measured by the local Hurst–Hölder exponent in a neighborhood of t₀) equals the exponent of a fractional Brownian motion of parameter H(t₀). Assuming that the stock price follows an mBm, we introduce and theoretically justify the Hurst–Hölder dynamical exponent as a measure of market efficiency. This allows us to measure, at any time t, the market's departure from the martingale property, i.e., from efficiency as stated by the Efficient Market Hypothesis. This approach is applied to financial markets; using data for the S&P 500 index from 1978 to 2017, on the one hand, we find that when efficiency is not accounted for, a positive contemporaneous relationship emerges and is stable over time. Conversely, it disappears as soon as efficiency is taken into account. In particular, this association is more pronounced during time frames of high volatility and tends to disappear when the market becomes fully efficient.
Keywords: volume–volatility relationship, efficient market hypothesis, martingale model, Hurst–Hölder exponent
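As a point of reference, the sketch below implements the classical rescaled-range (R/S) estimator of a single global Hurst exponent on simulated Brownian increments. The paper instead tracks a pointwise (local) Hurst–Hölder exponent over time, which this global estimator does not capture.

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Classical rescaled-range (R/S) estimate of a global Hurst exponent."""
    n = len(x)
    sizes = [s for s in (2**k for k in range(3, 20)) if min_chunk <= s <= n // 2]
    log_s, log_rs = [], []
    for s in sizes:
        rs_vals = []
        for start in range(0, n - s + 1, s):
            chunk = x[start:start + s]
            z = np.cumsum(chunk - chunk.mean())
            r, sd = z.max() - z.min(), chunk.std()
            if sd > 0:
                rs_vals.append(r / sd)
        if rs_vals:
            log_s.append(np.log(s))
            log_rs.append(np.log(np.mean(rs_vals)))
    slope, _ = np.polyfit(log_s, log_rs, 1)   # slope of log(R/S) vs log(n)
    return slope

rng = np.random.default_rng(2)
increments = rng.normal(size=4096)            # ordinary Brownian increments
# expected near 0.5 (R/S is known to be biased upward in small samples)
print("estimated H:", round(hurst_rs(increments), 3))
```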
Procedia PDF Downloads 76
1280 The Algorithm to Solve the Extend General Malfatti’s Problem in a Convex Circular Triangle
Authors: Ching-Shoei Chiang
Abstract:
Malfatti’s problem asks for 3 circles fitted into a right triangle such that the 3 circles are tangent to each other and each circle is also tangent to a pair of the triangle’s sides. This problem has been extended to any triangle (called the general Malfatti’s problem). Furthermore, the problem has been extended to 1+2+…+n circles inside the triangle with special tangency properties among the circles and the triangle’s sides; we call this the extended general Malfatti’s problem. In the extended general Malfatti’s problem, denoted Tri(Tn), where Tn is the triangle number, closed-form solutions exist for the Tri(T₁) problem (inscribed circle) and the Tri(T₂) problem (the 3 Malfatti circles). These problems become more complex when n is greater than 2, and algorithms have been proposed to solve them numerically. With a similar idea, this paper proposes an algorithm to find the radii of circles with the same tangency properties. Instead of the triangle boundary being straight lines, we use convex circular arcs as the boundary and try to find Tn circles inside this convex circular triangle with the same tangency properties among the circles and the boundary arcs; we call these the Carc(Tn) problems. The CPU time it takes for the Carc(T₁₆) problem, which finds 136 circles inside a convex circular triangle with the specified tangency properties, is less than one second.
Keywords: circle packing, computer-aided geometric design, geometric constraint solver, Malfatti’s problem
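To see the kind of tangency system such algorithms solve, here is a sketch for the classical Tri(T₂) case in a right triangle with straight sides (the circular-arc boundary of the Carc problems only changes the point-to-boundary distance function): five unknowns, five tangency residuals, handed to a generic root finder. The triangle and initial guess are illustrative, and convergence depends on the guess.

```python
import numpy as np
from scipy.optimize import fsolve

# Right triangle with legs a, b on the axes; hypotenuse: b*x + a*y - a*b = 0.
# Circle 1 sits in the right-angle corner (center (r1, r1)); circles 2 and 3
# sit along the legs and touch the hypotenuse; all three pairwise tangent.
a, b = 4.0, 3.0
hyp_norm = np.hypot(a, b)

def dist_to_hyp(x, y):
    return abs(b * x + a * y - a * b) / hyp_norm

def residuals(v):
    r1, x2, r2, y3, r3 = v
    c1, c2, c3 = np.array([r1, r1]), np.array([x2, r2]), np.array([r3, y3])
    return [dist_to_hyp(*c2) - r2,                # circle 2 touches hypotenuse
            dist_to_hyp(*c3) - r3,                # circle 3 touches hypotenuse
            np.linalg.norm(c1 - c2) - (r1 + r2),  # pairwise tangencies
            np.linalg.norm(c1 - c3) - (r1 + r3),
            np.linalg.norm(c2 - c3) - (r2 + r3)]

sol = fsolve(residuals, [0.8, 2.5, 0.5, 1.8, 0.5])  # guess for a 3-4-5 triangle
r1, x2, r2, y3, r3 = sol
print("radii:", round(r1, 4), round(r2, 4), round(r3, 4))
```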
Procedia PDF Downloads 109
1279 Sensitivity Analysis and Solitary Wave Solutions to the (2+1)-Dimensional Boussinesq Equation in Dispersive Media
Authors: Naila Nasreen, Dianchen Lu
Abstract:
This paper explores the dynamical behavior of the (2+1)-dimensional Boussinesq equation, a nonlinear water wave equation used to model wave packets in dispersive media with weak nonlinearity. This equation depicts how long waves generated in shallow water propagate under the influence of gravity. The (2+1)-dimensional Boussinesq equation combines the two-way propagation of the classical Boussinesq equation with dependence on a second spatial variable, as occurs in the two-dimensional Kadomtsev–Petviashvili equation. The equation describes the head-on collision of oblique waves and possesses some interesting properties. The governing model is treated with the Riccati equation mapping method, an effective integration tool. Solutions have been extracted in different forms: solitary wave solutions as well as hyperbolic and periodic solutions. Moreover, a sensitivity analysis is demonstrated for the wave profiles of the designed dynamical structural system, where the soliton wave velocity and wave number parameters regulate the water wave singularity. In addition to being helpful for elucidating nonlinear partial differential equations, the method in use recovers previously extracted solutions and extracts fresh exact solutions. Assuming the right values for the parameters, various graphs in different shapes are sketched to provide information about the visual format of the obtained results. This paper’s findings support the efficacy of the approach taken in analyzing nonlinear dynamical behavior. We believe this research will be of interest to a wide variety of engineers who work with engineering models. The findings show the effectiveness, simplicity, and generalizability of the chosen computational approach, even when applied to complicated systems in a variety of fields, especially in ocean engineering.
Keywords: (2+1)-dimensional Boussinesq equation, solitary wave solutions, Riccati equation mapping approach, nonlinear phenomena
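For orientation, one common form of the governing equation and the core of the Riccati mapping are sketched below; coefficient conventions vary across the literature, so this is indicative rather than the paper's exact formulation.

```latex
% One common form of the (2+1)-dimensional Boussinesq equation:
\[
  u_{tt} - u_{xx} - u_{yy} - \alpha\,(u^2)_{xx} - \beta\,u_{xxxx} = 0 .
\]
% Riccati-equation mapping: seek travelling-wave solutions
% u(x, y, t) = U(\xi), \xi = kx + ly - \omega t, expanded in powers of a
% function \phi(\xi) solving the Riccati equation
\[
  \phi'(\xi) = \sigma + \phi^2(\xi),
\]
% whose solutions supply the hyperbolic (\sigma < 0), trigonometric
% (\sigma > 0) and rational (\sigma = 0) building blocks, e.g.
\[
  \phi(\xi) = -\sqrt{-\sigma}\,\tanh\!\left(\sqrt{-\sigma}\,\xi\right),
  \qquad \sigma < 0 .
\]
```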
Procedia PDF Downloads 98
1278 Large Strain Creep Analysis of Composite Thick-Walled Anisotropic Cylinders
Authors: Vinod Kumar Arya
Abstract:
A creep analysis of a thick-walled composite anisotropic cylinder under internal pressure, considering large strains, is presented. Using a threshold creep law for composite materials, expressions for stresses, strains, and strain rates are derived for several anisotropic cases. Numerical results, presented through several graphs and tables, depict the effect of anisotropy on the stress, strain, and strain rate distributions. Since, for a specific type of material anisotropy described in the paper, these quantities are found to have their lowest values at the inner radius (the potential location of cylinder failure), it is concluded that employing such an anisotropic material in the design of a thick-walled cylinder may yield a longer service life.
Keywords: creep, composites, large strains, thick-walled cylinders, anisotropy
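The abstract's "threshold creep law" is of the following generic shape (a Norton–Bailey-type power law with a stress threshold; the paper's exact anisotropic form is not reproduced here):

```latex
% Below the threshold stress \sigma_0 no creep occurs; above it the
% effective creep strain rate grows as a power of the excess stress:
\[
  \dot{\bar{\varepsilon}} =
  \begin{cases}
    A\,(\bar{\sigma} - \sigma_0)^{n}, & \bar{\sigma} > \sigma_0,\\[2pt]
    0, & \bar{\sigma} \le \sigma_0,
  \end{cases}
\]
% where \bar{\sigma} is an effective (anisotropic) stress measure and
% A, n are material constants.
```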
Procedia PDF Downloads 148
1277 Modelling Optimal Control of Diabetes in the Workplace
Authors: Eunice Christabel Chukwu
Abstract:
Introduction: Diabetes is a chronic medical condition characterized by high levels of glucose in the blood and urine; it is usually diagnosed by means of a glucose tolerance test (GTT). Diabetes can lead to a range of serious complications if left unmanaged. It is essential to manage the condition effectively, particularly in the workplace, where the impact on work productivity can be significant. This paper discusses the modelling of optimal control of diabetes in the workplace using a control theory approach. Background: Diabetes mellitus is a condition caused by too much glucose in the blood. Insulin, a hormone produced by the pancreas, controls the blood sugar level by regulating the production and storage of glucose. In diabetes, there may be a decrease in the body’s ability to respond to insulin or a decrease in the insulin produced by the pancreas, which leads to abnormalities in the metabolism of carbohydrates, proteins, and fats. In addition to the health implications, the condition can also have a significant impact on work productivity, as employees with uncontrolled diabetes are at risk of absenteeism, reduced performance, and increased healthcare costs. While several interventions are available to manage diabetes, the most effective approach is to control blood glucose levels through a combination of lifestyle modifications and medication. Methodology: The control theory approach involves modelling the dynamics of the system and designing a controller that can regulate the system to achieve optimal performance. In the case of diabetes, the system dynamics can be modelled using a mathematical model that describes the relationship between insulin, glucose, and other variables. The controller can then be designed to regulate glucose levels so as to maintain them within a healthy range. Results: The modelling of optimal control of diabetes in the workplace using a control theory approach has shown promising results. The model has been able to predict the optimal dose of insulin required to maintain glucose levels within a healthy range, taking into account the individual’s lifestyle, medication regimen, and other relevant factors. The approach has also been used to design interventions that can improve diabetes management in the workplace, such as regular glucose monitoring and education programs. Conclusion: The modelling of optimal control of diabetes in the workplace using a control theory approach has significant potential to improve diabetes management and work productivity. By using a mathematical model and a controller to regulate glucose levels, the approach can help individuals with diabetes achieve optimal health outcomes while minimizing the impact of the condition on their work performance. Further research is needed to validate the model and develop interventions that can be implemented in the workplace.
Keywords: mathematical model, blood, insulin, pancreas, model, glucose
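One widely used candidate for the "mathematical model that describes the relationship between insulin, glucose, and other variables" is the Bergman-type minimal model. The sketch below simulates it under a hypothetical meal disturbance and open-loop insulin input; parameter values and inputs are illustrative assumptions, and the controller itself is not implemented here.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Bergman-type minimal model (a classical glucose-insulin model, used here
# only to illustrate the plant that a controller would regulate):
#   dG/dt = -p1*(G - Gb) - X*G + D(t)     (plasma glucose, mg/dL)
#   dX/dt = -p2*X + p3*(I - Ib)           (remote insulin action)
p1, p2, p3 = 0.03, 0.02, 1.3e-5
Gb, Ib = 90.0, 7.0            # basal glucose (mg/dL) and insulin (mU/L)

def meal(t):                  # hypothetical meal disturbance around t = 60 min
    return 4.0 * np.exp(-0.01 * (t - 60.0) ** 2)

def insulin(t):               # hypothetical (open-loop) insulin input
    return Ib + 30.0 * np.exp(-0.005 * (t - 70.0) ** 2)

def rhs(t, y):
    G, X = y
    dG = -p1 * (G - Gb) - X * G + meal(t)
    dX = -p2 * X + p3 * (insulin(t) - Ib)
    return [dG, dX]

sol = solve_ivp(rhs, (0.0, 400.0), [Gb, 0.0], max_step=1.0)
print("peak glucose: ", round(sol.y[0].max(), 1), "mg/dL")
print("final glucose:", round(sol.y[0][-1], 1), "mg/dL")
```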
Procedia PDF Downloads 59
1276 Lattice Boltzmann Simulation of Fluid Flow and Heat Transfer Through Porous Media by Means of Pore-Scale Approach: Effect of Obstacles Size and Arrangement on Tortuosity and Heat Transfer for a Porosity Degree
Authors: Annunziata D’Orazio, Arash Karimipour, Iman Moradi
Abstract:
The size and arrangement of the obstacles in a porous medium have a significant effect on the fluid flow and heat transfer, even at the same porosity. In the present study, several different numbers of obstacles, in both regular and staggered arrangements, at the same porosity have therefore been simulated in a channel, in order to compare the effect of staggered and regular arrangements, as well as of different quantities of obstacles at the same porosity, on fluid flow and heat transfer. The single-relaxation-time lattice Boltzmann method, with the Bhatnagar–Gross–Krook (BGK) approximation and the D2Q9 model, is implemented for the numerical simulation. The temperature field is modeled through a double distribution function (DDF) approach. Results are presented in terms of velocity and temperature fields, streamlines, percentage of pressure drop, and the Nusselt number of the obstacle walls. A correlation between tortuosity and the Nusselt number of the obstacle walls, for both regular and staggered arrangements, is proposed. The results illustrate that increasing the number of obstacles, as well as changing their arrangement from regular to staggered, at the same porosity, increases both the tortuosity and the Nusselt number of the obstacle walls.
Keywords: lattice Boltzmann method, heat transfer, porous media, pore-scale, porosity, tortuosity
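A minimal D2Q9 single-relaxation-time (BGK) core with bounce-back obstacles is sketched below to show the building blocks named in the abstract. It covers the flow solver only, with periodic boundaries, no body force, and no thermal double-distribution-function field; the grid, relaxation time, and obstacle layout are illustrative.

```python
import numpy as np

nx, ny, tau = 200, 50, 0.8
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
e = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
opp = [0, 3, 4, 1, 2, 7, 8, 5, 6]                   # opposite directions

# staggered array of square obstacles inside the channel
solid = np.zeros((nx, ny), dtype=bool)
for k, cx in enumerate(range(30, 180, 30)):
    cy = 15 if k % 2 == 0 else 35
    solid[cx-4:cx+4, cy-4:cy+4] = True

def equilibrium(rho, ux, uy):
    eu = e[:, 0, None, None] * ux + e[:, 1, None, None] * uy
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3*eu + 4.5*eu**2 - 1.5*usq)

rho = np.ones((nx, ny))
ux = np.full((nx, ny), 0.05); uy = np.zeros((nx, ny))
f = equilibrium(rho, ux, uy)

for step in range(1000):
    for i in range(9):                              # streaming
        f[i] = np.roll(np.roll(f[i], e[i, 0], axis=0), e[i, 1], axis=1)
    f[:, solid] = f[:, solid][opp]                  # full bounce-back at solids
    rho = f.sum(axis=0)
    ux = (f * e[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * e[:, 1, None, None]).sum(axis=0) / rho
    ux[solid] = uy[solid] = 0.0
    feq = equilibrium(rho, ux, uy)                  # BGK collision (fluid only)
    f[:, ~solid] += -(f[:, ~solid] - feq[:, ~solid]) / tau

print("mean streamwise velocity:", ux[~solid].mean())
```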
Procedia PDF Downloads 85
1275 AER Model: An Integrated Artificial Society Modeling Method for Cloud Manufacturing Service Economic System
Authors: Deyu Zhou, Xiao Xue, Lizhen Cui
Abstract:
With the increasing collaboration among various services and the growing complexity of user demands, more and more factors affect the stable development of the cloud manufacturing service economic system (CMSE). This poses new challenges to the evolution analysis of the CMSE. Many researchers have modeled and analyzed the evolution process of the CMSE from the perspectives of individual learning and internal factors influencing the system, but without considering other important characteristics of the system's individuals (such as heterogeneity and bounded rationality) or the impact of external environmental factors. Therefore, this paper proposes an integrated artificial society model for the cloud manufacturing service economic system which considers both the characteristics of the system's individuals and the internal and external influencing factors of the system. The model consists of three parts: the agent model, the environment model, and the rules model (Agent-Environment-Rules, AER). (1) The agent model captures important features of the individuals, such as heterogeneity and bounded rationality, based on the adaptive behavior mechanisms of perception, action, and decision-making; (2) the environment model describes the activity space of the individuals (real or virtual environment); (3) the rules model, as the driving force of system evolution, describes the mechanism of the entire system's operation and evolution. Finally, this paper verifies the effectiveness of the AER model through computational experiments.
Keywords: cloud manufacturing service economic system (CMSE), AER model, artificial social modeling, integrated framework, computing experiment, agent-based modeling, social networks
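The AER decomposition can be pictured as a perceive-decide-act loop wrapped by a rules layer. The skeleton below is a deliberately simplified illustration with hypothetical class names, payoff rules, and parameters, not the authors' implementation.

```python
import random

class ServiceAgent:
    """Heterogeneous, boundedly rational service provider."""
    def __init__(self, quality, price):
        self.quality, self.price = quality, price
        self.payoffs = []                       # memory of past outcomes

    def perceive(self, environment):
        return environment.demand_signal()

    def decide(self, signal):
        # boundedly rational rule: adjust price using only recent payoffs
        trend = sum(self.payoffs[-3:]) - sum(self.payoffs[-6:-3])
        self.price *= 1.02 if (signal > 0.5 and trend >= 0) else 0.98

    def act(self, environment):
        self.payoffs.append(environment.transact(self))

class Environment:
    """Activity space of the agents (a stand-in for the CMSE platform)."""
    def demand_signal(self):
        return random.random()
    def transact(self, agent):
        return max(0.0, agent.quality - 0.5 * agent.price
                   + random.gauss(0, 0.1))

def rules_step(agents, env):
    """Rules model: one synchronous perceive-decide-act round."""
    for a in agents:
        a.act(env)
        a.decide(a.perceive(env))

env = Environment()
agents = [ServiceAgent(quality=random.uniform(0.5, 1.5), price=1.0)
          for _ in range(10)]
for _ in range(100):
    rules_step(agents, env)
print("price dispersion:", sorted(round(a.price, 2) for a in agents))
```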
Procedia PDF Downloads 78
1274 Climate Changes in Albania and Their Effect on Cereal Yield
Authors: Lule Basha, Eralda Gjika
Abstract:
This study focuses on analyzing climate change in Albania and its potential effects on cereal yields. Initially, monthly temperatures and rainfall in Albania were studied for the period 1960-2021. Climatic variables are important when trying to model cereal yield behavior, especially when significant changes in weather conditions are observed. For this purpose, in the second part of the study, linear and nonlinear models explaining cereal yield are constructed for the same period, 1960-2021. Multiple linear regression analysis and the lasso regression method are applied to model the relationship between cereal yield and the independent variables: average temperature, average rainfall, fertilizer consumption, arable land, land under cereal production, and nitrous oxide emissions. In our regression model, heteroscedasticity is not observed, the data follow a normal distribution, and there is low correlation between factors, so we do not have a multicollinearity problem. Machine-learning methods, such as random forest, are used to predict cereal yield responses to climatic and other variables. Random forest showed high accuracy compared to the other statistical models in the prediction of cereal yield. We found that changes in average temperature negatively affect cereal yield. The coefficients of fertilizer consumption, arable land, and land under cereal production positively affect production. Our results show that the random forest method is an effective and versatile machine-learning method for cereal yield prediction compared to the other two methods.
Keywords: cereal yield, climate change, machine learning, multiple regression model, random forest
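A short sketch of the random forest step on synthetic stand-in data (the column names and the data-generating process are hypothetical, mimicking the study's 1960-2021 annual variables): fit, cross-validate, and rank feature importances.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 62                                     # 62 annual observations, 1960-2021
df = pd.DataFrame({
    "avg_temp":      rng.normal(12.5, 0.8, n),
    "avg_rainfall":  rng.normal(1400, 150, n),
    "fertilizer":    rng.normal(80, 20, n),
    "arable_land":   rng.normal(400, 30, n),
    "cereal_land":   rng.normal(150, 15, n),
    "n2o_emissions": rng.normal(2.0, 0.3, n),
})
# synthetic yield: warming hurts, inputs and cereal area help
df["yield"] = (4.0 - 0.3 * (df["avg_temp"] - 12.5)
               + 0.004 * df["fertilizer"] + 0.002 * df["cereal_land"]
               + rng.normal(0, 0.2, n))

X, y = df.drop(columns="yield"), df["yield"]
rf = RandomForestRegressor(n_estimators=500, random_state=0)
print("CV R^2:", cross_val_score(rf, X, y, cv=5, scoring="r2").mean().round(3))
rf.fit(X, y)
for name, imp in sorted(zip(X.columns, rf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name:14s} {imp:.3f}")
```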
Procedia PDF Downloads 90
1273 Evidence Theory Based Emergency Multi-Attribute Group Decision-Making: Application in Facility Location Problem
Authors: Bidzina Matsaberidze
Abstract:
It is known that, in emergency situations, multi-attribute group decision-making (MAGDM) models are characterized by insufficient objective data and a lack of time to respond to the task. Evidence theory is an effective tool for describing such incomplete information in decision-making models when the expert and his knowledge are involved in the estimation of the MAGDM parameters. We consider an emergency decision-making model where expert assessments on humanitarian aid from the distribution centers (HADCs) are represented as q-rung orthopair fuzzy numbers, and the data structure is described in terms of bodies of evidence. Based on the construction of focal probabilities and the experts' evaluations, an objective function, a ranking index for the selection of distribution centers, is constructed. Our approach to solving the constructed bicriteria partitioning problem consists of two phases. In the first phase, based on the covering matrix, we generate a matrix whose columns allow us to find all possible partitionings of the HADCs with the service centers; some constraints are also taken into consideration while generating the matrix. In the second phase, based on this matrix and using our exact algorithm, we find the partitionings (allocations of the HADCs to the centers) which correspond to the Pareto-optimal solutions. For an illustration of the obtained results, a numerical example is given for the facility location-selection problem.
Keywords: emergency MAGDM, q-rung orthopair fuzzy sets, evidence theory, HADC, facility location problem, multi-objective combinatorial optimization problem, Pareto-optimal solutions
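For concreteness, q-rung orthopair fuzzy assessments are pairs (μ, ν) with μ^q + ν^q ≤ 1, commonly ranked by Yager's score function. The sketch below applies that score to hypothetical HADC assessments; the paper's actual ranking index is built from focal probabilities, which this does not reproduce.

```python
# Score and accuracy functions for q-rung orthopair fuzzy numbers (Yager).
def qrofn_score(mu, nu, q):
    """Score S = mu^q - nu^q, valid when mu^q + nu^q <= 1."""
    assert mu**q + nu**q <= 1 + 1e-12, "not a valid q-rung orthopair pair"
    return mu**q - nu**q

def qrofn_accuracy(mu, nu, q):
    return mu**q + nu**q

# hypothetical expert assessments (membership, non-membership) for 3 HADCs
assessments = {"HADC-1": (0.8, 0.6), "HADC-2": (0.7, 0.5), "HADC-3": (0.9, 0.4)}
q = 3
ranking = sorted(assessments.items(),
                 key=lambda kv: qrofn_score(*kv[1], q), reverse=True)
for name, (mu, nu) in ranking:
    print(name, round(qrofn_score(mu, nu, q), 3))
```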
Procedia PDF Downloads 92
1272 Factors of Scientific Rise and Fall of the Islamic Empire
Authors: Saeed Seyed Agha Banihashemi
Abstract:
The history of mathematics, as one of the branches of the field of mathematics, has special importance, and in most of the important universities of the world this branch is taught and researched. In teaching the history of mathematics, special attention is paid to the scientific works of the four Greek, Indian, Islamic, and European civilizations, although the history of mathematics in China and East Asia is a special category due to its ancient civilization. This article examines mathematics in the Islamic empire and studies the factors behind the scientific rise and fall of the Islamic empire, mathematics included. The conclusions draw on my own research as well as the other sources cited.
Keywords: history of mathematics, al-Kindi, cryptology, manuscripts
Procedia PDF Downloads 113
1271 Stability and Boundedness Theorems of Solutions of Certain Systems of Differential Equations
Authors: Adetunji A. Adeyanju., Mathew O. Omeike, Johnson O. Adeniran, Biodun S. Badmus
Abstract:
In this paper, we discuss certain conditions for the uniform asymptotic stability and uniform ultimate boundedness of solutions to some systems of Aizerman-type differential equations by means of the second method of Lyapunov. In achieving our goal, some Lyapunov functions are constructed to serve as basic tools. The stability results in this paper extend some stability results for Aizerman-type differential equations found in the literature. Also, we prove some results on the uniform boundedness and uniform ultimate boundedness of solutions of the systems of equations studied.
Keywords: Aizerman, boundedness, first order, Lyapunov function, stability
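For readers less familiar with the second method, the shape of the hypotheses involved is roughly the following (a generic template, not the paper's precise conditions):

```latex
% For a system \dot{x} = F(x, t), one constructs V(x, t) with
\[
  a(\|x\|) \le V(x,t) \le b(\|x\|), \qquad
  \dot{V}(x,t)\big|_{\dot{x} = F} \le -c(\|x\|) + d,
\]
% for class-K functions a, b, c and a constant d \ge 0. With d = 0 this
% yields uniform asymptotic stability; with d > 0 the solutions are
% uniformly ultimately bounded (they eventually enter and remain in a
% fixed ball). The constructed Lyapunov functions play exactly this role
% for the Aizerman-type systems considered in the paper.
```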
Procedia PDF Downloads 83
1270 Effective Use of Visuals in Teaching Mathematics
Authors: Gohar Marikyan
Abstract:
This article investigates how to use visuals effectively in teaching introductory mathematics. The analysis showed that the use of visuals in teaching introductory mathematics can be an effective tool for enhancing students’ learning of, and engagement with, mathematics. The use of visuals was particularly effective for teaching concepts of numbers, operations with whole numbers, and properties of operations. The analysis also provides strong evidence that the effectiveness of visuals varies depending on the way the visuals are used. Furthermore, the analysis revealed that the use of visuals in mathematics instruction had a positive impact on students’ attitudes toward mathematics, with students showing higher levels of motivation and enjoyment in mathematics classes.
Keywords: analytical thinking skills, instructional strategies with visuals, introductory mathematics, student engagement and motivation
Procedia PDF Downloads 121
1269 Computer-Integrated Surgery of the Human Brain, New Possibilities
Authors: Ugo Galvanetto, Pirto G. Pavan, Mirco Zaccariotto
Abstract:
The discipline of computer-integrated surgery (CIS) will provide equipment able to improve the efficiency of healthcare systems and, more importantly, clinical results. Surgeons and machines will cooperate in new ways that will extend surgeons’ ability to train, plan, and carry out surgery. Patient-specific CIS of the brain requires several steps. 1 – Fast generation of brain models: based on recognition of MR images and equipped with artificial intelligence, image recognition techniques should differentiate among all brain tissues and segment them; automatic mesh generation should then create the mathematical model of the brain in which the various tissues (white matter, grey matter, cerebrospinal fluid, …) are clearly located in the correct positions. 2 – Reliable and fast simulation of the surgical process: computational mechanics will be the crucial aspect of the entire procedure; new algorithms will be used to simulate the mechanics of cutting through cerebral tissues. 3 – Real-time provision of visual and haptic feedback: a sophisticated human-machine interface based on ergonomics and psychology will provide the feedback to the surgeon. The present work addresses in particular point 2. Modelling the cutting of soft tissue in a structure as complex as the human brain is an extremely challenging problem in computational mechanics. The finite element method (FEM), which accurately represents complex geometries and accounts for material and geometrical nonlinearities, is the most used computational tool to simulate the mechanical response of soft tissues. However, the main drawback of FEM lies in the mechanics theory on which it is based, classical continuum mechanics, which assumes matter is a continuum with no discontinuities. FEM must resort to complex tools such as pre-defined cohesive zones, external phase-field variables, and demanding remeshing techniques to include discontinuities. However, all approaches to equip FEM computational methods with the capability to describe material separation, such as interface elements with cohesive zone models, X-FEM, element erosion, and phase-field methods, have drawbacks that make them unsuitable for surgery simulation. Interface elements require a priori knowledge of crack paths. The use of X-FEM in 3D is cumbersome. Element erosion does not conserve mass. The phase-field approach adopts a diffusive crack model instead of describing the true tissue separation typical of surgical procedures. Modelling discontinuities, so difficult for computational approaches based on classical continuum mechanics, is instead easy for novel computational methods based on peridynamics (PD). PD is a non-local theory of mechanics formulated with no use of spatial derivatives. Its governing equations are valid at points or surfaces of discontinuity, and it is therefore especially suited to describing crack propagation and fragmentation problems. Moreover, PD does not require any criterion to decide the direction of crack propagation or the conditions for crack branching or coalescence; in PD-based computational methods, cracks develop spontaneously in the way that is most convenient from an energy point of view. Therefore, in PD computational methods, crack propagation in 3D is as easy as in 2D, a remarkable advantage with respect to all other computational techniques.
Keywords: computational mechanics, peridynamics, finite element, biomechanics
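The derivative-free structure that makes this possible is visible in the standard bond-based peridynamic equation of motion, reproduced here for reference:

```latex
% The acceleration at a material point x is driven by an integral of
% pairwise bond forces over its neighborhood H_x (a ball of radius
% \delta, the horizon), not by spatial derivatives:
\[
  \rho(\mathbf{x})\,\ddot{\mathbf{u}}(\mathbf{x},t)
  = \int_{H_{\mathbf{x}}}
    \mathbf{f}\bigl(\mathbf{u}(\mathbf{x}',t)-\mathbf{u}(\mathbf{x},t),\,
                    \mathbf{x}'-\mathbf{x}\bigr)\, dV_{\mathbf{x}'}
  + \mathbf{b}(\mathbf{x},t).
\]
% Because no derivatives of u appear, the equation remains valid across a
% cut: severing a bond (setting its force f to zero once the bond is
% broken) is all that is needed to represent tissue separation.
```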
Procedia PDF Downloads 80
1268 Home Range and Spatial Interaction Modelling of Black Bears
Authors: Fekadu L. Bayisa, Elvan Ceyhan, Todd D. Steury
Abstract:
Interaction between individuals within the same species is an important component of population dynamics. An interaction can be either static (based on spatial overlap) or dynamic (based on movement interactions). Using GPS collar data, we can quantify both static and dynamic interactions between black bears. The goal of this work is to determine the level of black bear interactions using the 95% and 50% home ranges, as well as to model black bear spatial interactions, which could be attraction, avoidance/repulsion, or a lack of interaction at all, to gain new insights and improve our understanding of ecological processes. Recent methodological developments in home range estimation, inhomogeneous multitype/cross-type summary statistics, and envelope testing methods are explored to study the nature of black bear interactions. Our findings, in general, indicate that the black bears of one type in our data set tend to cluster around another type.
Keywords: autocorrelated kernel density estimator, cross-type summary function, inhomogeneous multitype Poisson process, kernel density estimator, minimum convex polygon, pointwise and global envelope tests
Procedia PDF Downloads 81
1267 Complex Dynamics in a Model of Management of the Protected Areas
Authors: Paolo Russu
Abstract:
This paper investigates the economic and ecological dynamics that emerge in protected areas (PAs) due to interactions between visitors and the animals that live there. The PAs contain two species whose interactions are determined by a Lotka–Volterra system of equations. Visitors’ decisions to visit PAs are influenced by the entrance fee required to enter the park and the chance of witnessing the species living there. Visitors have contradictory effects on the species and thus on the sustainability of the protected areas: on the one hand, an increase in the number of tourists damages the natural habitat of the regions and thus the species living there; on the other hand, it increases the total amount of entrance fees that the managing body of the PAs can use for defensive expenditures that protect the species from extinction. For a given set of parameter values, saddle-node bifurcations, Hopf bifurcations, homoclinic orbits, and a Bogdanov–Takens bifurcation of codimension two have been investigated. The system displays period doubling and chaotic solutions, as numerical examples demonstrate. Pontryagin’s maximum principle was utilised to develop an optimal admission charge policy that maximises social gain and ecosystem conservation.
Keywords: chaos, bifurcation points, dynamical model, optimal control
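A sketch of the ecological core only: Lotka-Volterra prey-predator dynamics with a visitor-pressure term. The coupling of visitors to the entrance fee and to defensive expenditures below is a hypothetical illustration of the mechanism described in the abstract, not the paper's model.

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, gamma, delta = 1.0, 0.4, 0.8, 0.2
fee = 2.0                          # entrance fee (the control variable)

def visitors(x):                   # demand falls with the fee, rises with wildlife
    return max(0.0, x - fee)

def rhs(t, z):
    x, y = z                       # prey, predator
    V = visitors(x)
    damage = 0.01 * V              # habitat damage per unit visitor pressure
    defense = 0.005 * fee * V      # defensive expenditures funded by fees
    dx = alpha * x - beta * x * y - damage * x + defense * x
    dy = -gamma * y + delta * x * y - damage * y + defense * y
    return [dx, dy]

sol = solve_ivp(rhs, (0, 100), [5.0, 2.0], max_step=0.05)
print("prey range:    ", sol.y[0].min().round(2), "-", sol.y[0].max().round(2))
print("predator range:", sol.y[1].min().round(2), "-", sol.y[1].max().round(2))
```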
Procedia PDF Downloads 81
1266 A Generalization of the Secret Sharing Scheme Codes Over Certain Ring
Authors: Ibrahim Özbek, Erdoğan Mehmet Özkan
Abstract:
In this study, we generalize the (k,n) threshold secret sharing scheme of the study by Özbek and Siap to codes over the ring Fq + αFq. In this way, we show that the method obtained in that article can also be used on codes over rings, with new advantages to be gained. The method of securely sharing a key in cryptography, which Shamir first systematized and Massey carried over to codes, thereby becomes usable for all error-correcting codes. The security of this scheme is based on the hardness of the syndrome decoding problem. An open study area is also left for those working on other rings and code classes. All error-correcting codes fall within the working area of this method.
Keywords: secret sharing scheme, linear codes, algebra, finite rings
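As background, here is the classical Shamir (k, n) scheme over a prime field that the construction generalizes; the modulus and parameters below are illustrative, and the paper works instead over the ring Fq + αFq via linear codes and syndrome decoding.

```python
import random

P = 2**127 - 1                    # a Mersenne prime modulus

def make_shares(secret, k, n):
    """Hide `secret` in the constant term of a random degree-(k-1) poly."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = make_shares(secret=123456789, k=3, n=5)
print(reconstruct(shares[:3]))    # any 3 of 5 shares suffice -> 123456789
print(reconstruct(shares[1:4]))
```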
Procedia PDF Downloads 72
1265 On the Cyclic Property of Groups of Prime Order
Authors: Ying Yi Wu
Abstract:
The study of finite groups is a central topic in algebraic structures, and one of the most fundamental questions in this field is the classification of finite groups up to isomorphism. In this paper, we investigate the cyclic property of groups of prime order, which is a crucial result in the classification of finite abelian groups. We prove the following statement: if p is a prime, then every group G of order p is cyclic. Our proof utilizes the properties of group actions and the class equation, which provide a powerful tool for studying the structure of finite groups. In particular, we first show that any non-identity element of G generates a cyclic subgroup of G. Then, we establish the existence of an element of order p, which implies that G is generated by a single element. Finally, we demonstrate that any two generators of G are conjugate, which shows that G is a cyclic group. Our result has significant implications in the classification of finite groups, as it implies that any group of prime order is isomorphic to the cyclic group of the same order. Moreover, it provides a useful tool for understanding the structure of more complicated finite groups, as any finite abelian group can be decomposed into a direct product of cyclic groups. Our proof technique can also be extended to other areas of group theory, such as the classification of finite p-groups, where p is a prime. Therefore, our work has implications beyond the specific result we prove and can contribute to further research in algebraic structures.
Keywords: group theory, finite groups, cyclic groups, prime order, classification
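A compact alternative route to the same theorem, via Lagrange's theorem, may help orient the reader (a sketch in standard notation, complementary to the group-action proof in the paper):

```latex
% If |G| = p is prime and g \in G with g \ne e, then the order of the
% cyclic subgroup \langle g \rangle divides |G| and exceeds 1:
\[
  \bigl|\langle g \rangle\bigr| \;\Big|\; |G| = p,
  \qquad \bigl|\langle g \rangle\bigr| > 1
  \;\Longrightarrow\; \bigl|\langle g \rangle\bigr| = p,
\]
% so \langle g \rangle = G: the group is cyclic, generated by any
% non-identity element. The class-equation argument in the paper reaches
% the same conclusion while exercising the machinery of group actions.
```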
Procedia PDF Downloads 83
1264 A Proof of the Fact that a Finite Morphism is Proper
Authors: Ying Yi Wu
Abstract:
In this paper, we present a proof of the fact that a finite morphism is proper. We show that a finite morphism is universally closed and of finite type, which, together with separatedness (automatic here, since a finite morphism is affine), are the conditions for properness. Our proof is based on the theory of schemes and involves the use of the projection formula and the base change theorem. We first show that a finite morphism is of finite type and then proceed to show that it is universally closed. We use the fact that a finite morphism is also an affine morphism, which allows us to use the theory of coherent sheaves and their modules. We then show that the map induced by a finite morphism is flat and that the module it induces is of finite type, and we use these facts to show that a finite morphism is universally closed. Our proof is constructive, and we provide details for each step of the argument.
Keywords: finite, morphism, schemes, projection
Procedia PDF Downloads 107
1263 Assessment of ASEI-PDSI Method on Students’ Attitude and Achievement in Junior Secondary Schools Mathematics in FCT-Abuja
Authors: Amenaghawon Clement Osemwinyen
Abstract:
The Activity, Student-centred, Experiment, Improvisation – Plan, Do, See, Improve (ASEI-PDSI) method championed by the Strengthening Mathematics And Science Education (SMASE) - Nigeria Project is an attempt to improve the quality of mathematics education, which has consistently declined over the years in both public primary and secondary schools across the country. This study thus assessed the ASEI-PDSI method’s effect on students’ attitudes and achievement in junior secondary school (JSS) mathematics in FCT-Abuja. A survey research design was adopted, and 100 mathematics teachers, selected using a stratified random sampling method, took part in the study. The data were collected using structured questionnaires and analyzed using descriptive statistics. The findings showed that the ASEI-PDSI method has significantly improved students’ attitudes toward mathematics. The study also revealed that the ASEI-PDSI method significantly influenced junior secondary school (JSS) students’ mathematics achievement. Among the recommendations were that teachers should be encouraged to adopt the ASEI-PDSI method in the teaching and learning of mathematics in order to create a mathematically stimulating classroom environment, which could positively influence junior secondary school (JSS) students’ attitudes and academic performance in mathematics. Also, regular in-service training programs should be organized by stakeholders (government and other interest groups) so as to improve teachers’ teaching strategies, especially as they relate to the ASEI-PDSI method.
Keywords: achievement, ASEI-PDSI method, attitude, mathematics, SMASE
Procedia PDF Downloads 111
1262 Assessing the Resilience of the Insurance Industry under Solvency II
Authors: Vincenzo Russo, Rosella Giacometti
Abstract:
The paper aims to assess the insurance industry's resilience under Solvency II against adverse scenarios. Starting from the economic balance sheet available under Solvency II for insurance and reinsurance undertakings, we assume that assets and liabilities follow a bivariate geometric Brownian motion (GBM). Then, using the results available under Margrabe's formula, we establish an analytical solution to calibrate the volatility of the asset-liability ratio. In this way, we can estimate the probability of default and the probability of breaching the undertaking's Solvency Capital Requirement (SCR). Furthermore, since estimating the volatility of the solvency ratio became crucial for insurers in light of the financial crises of recent decades, we introduce a novel measure that we call the Resiliency Ratio. The Resiliency Ratio can be used, in addition to the solvency ratio, to evaluate the insurance industry's resilience under adverse scenarios. Finally, we introduce a simplified stress test tool to evaluate the economic balance sheet under stressed conditions. The model we propose is characterized by analytical tractability and a fast calibration procedure in which only the disclosed data available under Solvency II public reporting are needed. Using the data published regularly by the European Insurance and Occupational Pensions Authority (EIOPA) in aggregated form by country, an empirical analysis has been performed to calibrate the model and provide the related results at the country level.
Keywords: Solvency II, solvency ratio, volatility of the asset-liability ratio, probability of default, probability to breach the SCR, resilience ratio, stress test
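A sketch of the bivariate-GBM calculation behind the abstract: if assets A and liabilities L each follow a GBM with correlation ρ, the log of the ratio A_T/L_T is normal, so P(A_T < L_T) is a closed-form Gaussian tail (the same lognormal-ratio structure that underlies Margrabe's formula). The balance-sheet values and parameters below are hypothetical.

```python
import numpy as np
from scipy.stats import norm

A0, L0 = 120.0, 100.0             # hypothetical assets and liabilities
mu_A, mu_L = 0.03, 0.02
sig_A, sig_L, rho = 0.10, 0.05, 0.3
T = 1.0

# volatility of the asset-liability ratio
sig = np.sqrt(sig_A**2 + sig_L**2 - 2 * rho * sig_A * sig_L)
# mean of ln(A_T / L_T)
m = np.log(A0 / L0) + (mu_A - mu_L - 0.5 * (sig_A**2 - sig_L**2)) * T

p_default = norm.cdf(-m / (sig * np.sqrt(T)))
print(f"volatility of A/L ratio: {sig:.4f}")
print(f"P(A_T < L_T) over {T:.0f}y: {p_default:.4%}")

# same machinery for breaching a solvency floor: A_T < c * L_T
c = 1.2                           # hypothetical solvency-ratio floor
p_breach = norm.cdf(-(m - np.log(c)) / (sig * np.sqrt(T)))
print(f"P(A_T < {c} * L_T):      {p_breach:.4%}")
```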
Procedia PDF Downloads 81
1261 Comparison of the Boundary Element Method and the Method of Fundamental Solutions for Analysis of Potential and Elasticity
Authors: S. Zenhari, M. R. Hematiyan, A. Khosravifard, M. R. Feizi
Abstract:
The boundary element method (BEM) and the method of fundamental solutions (MFS) are well-known fundamental-solution-based methods for solving a variety of problems. Both methods are boundary-type techniques and can provide accurate results. In comparison to the finite element method (FEM), which is a domain-type method, the BEM and the MFS need less manual effort to solve a problem. The aim of this study is to compare the accuracy and reliability of the BEM and the MFS. This comparison is made for 2D potential and elasticity problems with different boundary and loading conditions. In the comparisons, both convex and concave domains are considered. Both linear and quadratic elements are employed for the boundary element analysis of the examples. The discretization of the problem domain in the BEM, i.e., converting the boundary of the problem into boundary elements, is relatively simple; in the MFS, however, obtaining appropriate locations of the collocation and source points needs more attention if reliable solutions are to be obtained. The results obtained from the presented examples show that both methods lead to accurate solutions for convex domains, whereas the BEM is more suitable than the MFS for concave domains.
Keywords: boundary element method, method of fundamental solutions, elasticity, potential problem, convex domain, concave domain
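The MFS side of the comparison fits in a few lines for a 2D Laplace (potential) problem: place sources on a fictitious boundary outside the domain, collocate the fundamental solution on the true boundary, and solve the resulting dense system. The geometry and source-circle radius below are illustrative choices; moving the sources is exactly the sensitivity the abstract warns about.

```python
import numpy as np

N = 60                                        # collocation = source count
theta = 2 * np.pi * np.arange(N) / N
bnd = np.c_[np.cos(theta), np.sin(theta)]     # collocation points on r = 1
src = 1.8 * bnd                               # source points on r = 1.8

def g(p):                                     # exact harmonic data: x^2 - y^2
    return p[:, 0]**2 - p[:, 1]**2

# fundamental solution of the 2D Laplacian: G(x, s) = -ln|x - s| / (2*pi)
r = np.linalg.norm(bnd[:, None, :] - src[None, :, :], axis=2)
A = -np.log(r) / (2 * np.pi)                  # collocation matrix
coef, *_ = np.linalg.lstsq(A, g(bnd), rcond=None)

# evaluate at interior test points and compare with the exact solution
test = np.array([[0.3, 0.2], [-0.5, 0.1], [0.0, 0.7]])
rt = np.linalg.norm(test[:, None, :] - src[None, :, :], axis=2)
u = (-np.log(rt) / (2 * np.pi)) @ coef
print("MFS:  ", np.round(u, 6))
print("exact:", np.round(g(test), 6))
```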
Procedia PDF Downloads 89
1260 A Look at the Quantum Theory of Atoms in Molecules from the Discrete Morse Theory
Authors: Dairo Jose Hernandez Paez
Abstract:
The quantum theory of atoms in molecules (QTAIM) allows us to obtain topological information on the electron density in quantum mechanical systems. The QTAIM starts by considering the electron density as a continuous mathematical object. On the other hand, the discretized electron density is also a mathematical object, which, from the standpoint of discrete mathematics, allows a new approach to its topological study. From this point of view, it is necessary to develop a series of steps that provide the theoretical support guaranteeing its application. Some of the steps that we consider most important are the following: (1) obtain good representations of the electron density through computational calculations, (2) design a methodology for the discretization of the electron density and construct the simplicial complex, (3) analyze the discrete vector field associated with the simplicial complex, and (4) finally, in this research, we propose to use discrete Morse theory as a mathematical tool to carry out studies of the topology of the electron density.
Keywords: discrete mathematics, discrete Morse theory, electronic density, computational calculations
Procedia PDF Downloads 101
1259 Multiple Version of Roman Domination in Graphs
Authors: J. C. Valenzuela-Tripodoro, P. Álvarez-Ruíz, M. A. Mateos-Camacho, M. Cera
Abstract:
In 2004, the concept of Roman domination in graphs was introduced. This concept was initially inspired by and related to the defensive strategy of the Roman Empire. An undefended place is a city in which no legions are stationed, whereas a strong place is a city in which two legions are deployed. This situation may be modeled by labeling the vertices of a finite simple graph with labels {0, 1, 2}, satisfying the condition that any 0-vertex must be adjacent to at least one 2-vertex. Roman domination in graphs is a variant of classic domination. Clearly, the main aim is to obtain such a labeling of the vertices of the graph with minimum cost, that is to say, minimum weight (sum of all vertex labels). Formally, a function f: V(G) → {0, 1, 2} is a Roman dominating function (RDF) in the graph G = (V, E) if f(u) = 0 implies that f(v) = 2 for at least one vertex v adjacent to u. The weight of an RDF is the positive integer w(f) = Σ_{v∈V} f(v). The Roman domination number, γ_R(G), is the minimum weight among all Roman dominating functions. Obviously, the set of vertices with a positive label under an RDF f is a dominating set in the graph, and hence γ(G) ≤ γ_R(G). In this work, we begin the study of a generalization of RDFs in which we consider that any undefended place should be defended from a sudden attack by at least k legions, which can be deployed in the city or in any of its neighbours. A function f: V → {0, 1, . . . , k + 1} such that f(N[u]) ≥ k + |AN(u)| for every vertex u with f(u) < k, where AN(u) represents the set of active neighbours (i.e., those with a positive label) of vertex u, is called a [k]-multiple Roman dominating function and is denoted by [k]-MRDF. The minimum weight of a [k]-MRDF in the graph G is the [k]-multiple Roman domination number ([k]-MRDN) of G, denoted by γ_[kR](G). First, we prove that the [k]-multiple Roman domination decision problem is NP-complete even when restricted to bipartite and chordal graphs, a question that had been resolved for other variants and that we wished to generalize. We know the difficulty of calculating the exact value of the [k]-MRD number, even for particular families of graphs. Here, we present several upper and lower bounds for the [k]-MRD number that permit us to estimate it with as much precision as possible. Finally, some graphs for which the exact value of this parameter is attained are characterized.
Keywords: multiple Roman domination function, decision problem NP-complete, bounds, exact values
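Because the decision problem is NP-complete, exact values on small graphs are typically obtained by exhaustive search. The sketch below does this for the classical case k = 1 (labels {0, 1, 2}) and reproduces the known values γ_R(C₅) = 4 and γ_R(P₄) = 3.

```python
import itertools
import networkx as nx

def roman_domination_number(G):
    """Brute-force gamma_R(G): try all labelings in {0,1,2}^V and keep the
    minimum weight among those where every 0-vertex has a 2-neighbor."""
    nodes = list(G.nodes())
    best = None
    for labels in itertools.product((0, 1, 2), repeat=len(nodes)):
        f = dict(zip(nodes, labels))
        if all(f[u] > 0 or any(f[v] == 2 for v in G[u]) for u in nodes):
            w = sum(labels)
            best = w if best is None else min(best, w)
    return best

print(roman_domination_number(nx.cycle_graph(5)))   # C5 -> 4
print(roman_domination_number(nx.path_graph(4)))    # P4 -> 3
```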
Procedia PDF Downloads 107
1258 TDApplied: An R Package for Machine Learning and Inference with Persistence Diagrams
Authors: Shael Brown, Reza Farivar
Abstract:
Persistence diagrams capture valuable topological features of datasets that other methods cannot uncover. Still, their adoption in data pipelines has been limited due to the lack of publicly available tools in R (and Python) for analyzing groups of them with machine learning and statistical inference. In an easy-to-use and scalable R package called TDApplied, we implement several applied analysis methods tailored to groups of persistence diagrams. The two main contributions of our package are comprehensiveness (most functions do not have implementations elsewhere) and speed (shown through benchmarking against other R packages). We demonstrate applications of the tools on simulated data to illustrate how easily practical analyses of any dataset can be enhanced with topological information.
Keywords: machine learning, persistence diagrams, R, statistical inference
Procedia PDF Downloads 84
1257 Time of Week Intensity Estimation from Interval Censored Data with Application to Police Patrol Planning
Authors: Jiahao Tian, Michael D. Porter
Abstract:
Law enforcement agencies are tasked with crime prevention and crime reduction under limited resources. Having an accurate temporal estimate of the crime rate would be valuable in achieving such a goal. However, estimation is usually complicated by the interval-censored nature of crime data. We cast the problem of intensity estimation as a Poisson regression, using an EM algorithm to estimate the parameters. Two special penalties are added that provide smoothness over the time of day and the day of the week. The approach presented here provides accurate intensity estimates and can also uncover day-of-week clusters that share the same intensity patterns. Anticipating where and when crimes might occur is a key element of successful policing strategies. However, this task is complicated by the presence of interval-censored data. Censored data refers to data in which the event time is only known to lie within an interval instead of being observed exactly. This type of data is prevalent in criminology because of the absence of victims for certain types of crime. Despite its importance, research in the temporal analysis of crime has lagged behind the spatial component. Inspired by the success of solving crime-related problems with a statistical approach, we propose a statistical model for the temporal intensity estimation of crime with censored data. The model is built on Poisson regression and has special penalty terms added to the likelihood. An EM algorithm was derived to obtain maximum likelihood estimates, and the resulting model shows superior performance to the competing model. Our research is in line with the Smart Policing Initiative (SPI) proposed by the Bureau of Justice Assistance (BJA) as an effort to support law enforcement agencies in building evidence-based, data-driven law enforcement tactics. The goal is to identify strategic approaches that are effective in crime prevention and reduction. In our case, we allow agencies to deploy their resources for a relatively short period of time to achieve the maximum level of crime reduction. By analyzing a particular area within cities where data are available, our proposed approach can provide not only an accurate estimate of intensities for the time unit considered but also a time-varying crime incidence pattern. Both will be helpful in the allocation of limited resources, either by improving the existing patrol plan using the discovered day-of-week clusters or by supporting the deployment of extra resources.
Keywords: cluster detection, EM algorithm, interval censoring, intensity estimation
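A stripped-down EM sketch for hour-of-week intensities with interval-censored events: in the E-step, each event is spread over its censoring window in proportion to the current intensity estimate; in the M-step, each hour's rate is re-estimated from the expected counts. The paper adds Poisson regression with smoothness penalties over time-of-day and day-of-week, which this core deliberately omits; the window width and simulation are illustrative.

```python
import numpy as np

H = 168                                        # hours in a week
rng = np.random.default_rng(0)
true_lam = 0.5 + 0.4 * np.sin(2 * np.pi * np.arange(H) / 24)

# simulate events over many weeks, then censor each to an 8-hour window
weeks = 50
events = [h for h in range(H) for _ in range(rng.poisson(true_lam[h] * weeks))]
windows = [np.arange(h, h + 8) % H for h in events]   # interval censoring

lam = np.ones(H)
for _ in range(100):
    expected = np.zeros(H)
    for win in windows:                        # E-step: allocate each event
        p = lam[win] / lam[win].sum()
        expected[win] += p
    lam = expected / weeks                     # M-step: rate = counts/exposure

print("max abs error:", np.abs(lam - true_lam).max().round(3))
```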
Procedia PDF Downloads 64