Search results for: Bertrand-Chebyshev theorem
35 Analytical Solutions for Tunnel Collapse Mechanisms in Circular Cross-Section Tunnels under Seepage and Seismic Forces
Authors: Zhenyu Yang, Qiunan Chen, Xiaocheng Huang
Abstract:
Reliable prediction of tunnel collapse remains a prominent challenge in civil engineering. In this study, leveraging the nonlinear Hoek-Brown failure criterion and the upper-bound theorem, an analytical solution for the collapse surface of shallowly buried circular tunnels was derived, taking into account the coupled effects of surface loads and pore water pressures. First, surface loads and pore water pressures were introduced as external force terms, and the internal energy dissipation rate was equated to the rate of work done by the external forces, yielding the objective function. The variational method was then employed for optimization, and the outcomes were compared with previous research findings. Furthermore, the derived equation set was used to systematically analyze the influence of various rock mass parameters on collapse shape and extent. To validate the analytical solutions, a comparison with prior studies was carried out; the agreement underscores the efficacy of the proposed methodology and offers valuable insights for collapse risk assessment in practical engineering applications.
Keywords: tunnel roof stability, analytical solution, Hoek-Brown failure criterion, limit analysis
Procedia PDF Downloads 84
34 The Data-Driven Localized Wave Solution of the Fokas-Lenells Equation Using Physics-Informed Neural Network
Authors: Gautam Kumar Saharia, Sagardeep Talukdar, Riki Dutta, Sudipta Nandy
Abstract:
The physics-informed neural network (PINN) method opens up an approach for numerically solving nonlinear partial differential equations, leveraging the fast calculation speed and high precision of modern computing systems. We construct the PINN based on the universal approximation theorem, apply the initial-boundary value data and residual collocation points to weakly impose the initial and boundary conditions on the neural network, and use the adaptive moment estimation (ADAM) and limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithms to optimize the learnable parameters of the network. We then improve the PINN with a weighted loss function to obtain both the bright and dark soliton solutions of the Fokas-Lenells equation (FLE). We find that the proposed scheme of adjustable weight coefficients in the PINN has a better convergence rate and generalizability than the basic PINN algorithm. We believe that the PINN approach to solving partial differential equations appearing in nonlinear optics will be useful in studying various optical phenomena.
Keywords: deep learning, optical soliton, physics informed neural network, partial differential equation
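The weighted-loss idea above can be sketched on a toy problem. Below, the network is replaced by a cubic polynomial and ADAM/L-BFGS by a closed-form weighted least-squares solve (the problem is linear in the coefficients); the weights, collocation points, and test equation u'(x) = u(x), u(0) = 1 are all illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Toy stand-in for a weighted PINN loss: residual collocation term plus a
# weakly imposed initial-condition term, combined with adjustable weights.
x = np.linspace(0.0, 1.0, 20)                # residual collocation points
w_res, w_ic = 1.0, 10.0                      # adjustable loss weights

# Row i of A_res encodes u'(x_i) - u(x_i) for the monomial basis 1, x, x^2, x^3.
A_res = np.stack([-x**0,                     # d/dx(1)   - 1
                  1 - x,                     # d/dx(x)   - x
                  2*x - x**2,                # d/dx(x^2) - x^2
                  3*x**2 - x**3], axis=1)    # d/dx(x^3) - x^3
A_ic = np.array([[1.0, 0.0, 0.0, 0.0]])      # u(0) term
b_ic = np.array([1.0])                       # u(0) = 1

# Minimize w_res * ||residual||^2 + w_ic * ||u(0) - 1||^2 in one lstsq call.
A = np.vstack([np.sqrt(w_res) * A_res, np.sqrt(w_ic) * A_ic])
b = np.concatenate([np.zeros(len(x)), np.sqrt(w_ic) * b_ic])
theta, *_ = np.linalg.lstsq(A, b, rcond=None)

u1 = theta.sum()                             # u(1); the exact answer is e
```

Raising `w_ic` relative to `w_res` enforces the initial condition more strictly at the cost of a larger interior residual, which is the trade-off the paper's adjustable weight coefficients tune.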
Procedia PDF Downloads 70
33 Nonlinear Evolution on Graphs
Authors: Benniche Omar
Abstract:
We are concerned with abstract fully nonlinear differential equations of the form y'(t) = Ay(t) + f(t, y(t)), where A is an m-dissipative operator (possibly multi-valued) defined on a subset D(A) of a Banach space X with values in X, and f is a given function defined on I×X with values in X. We consider a graph K in I×X. Recall that K is said to be viable with respect to the above differential equation if, for each initial datum in K, there exists at least one trajectory starting from that datum and remaining in K at least for a short time. The viability problem has been studied by many authors using various techniques and frameworks. If K is closed, it is known that a tangency condition, mainly linked to the dynamics, is crucial for viability; when X is infinite dimensional, compactness and convexity assumptions are also needed. In this paper, we are concerned with the notion of near viability for a given graph K with respect to y'(t) = Ay(t) + f(t, y(t)). Roughly speaking, K is said to be near viable if, for each initial datum in K, there exists at least one trajectory remaining arbitrarily close to K at least for a short time. It is interesting to note that near viability is equivalent to an appropriate tangency condition under mild assumptions on the dynamics; adding natural convexity and compactness assumptions on the dynamics, we may recover (exact) viability. Here we investigate near viability for a graph K in I×X with respect to y'(t) = Ay(t) + f(t, y(t)), where A and f are as above. We emphasize that the t-dependence of the perturbation f leads us to introduce a new tangency concept. On the basis of tangency conditions expressed in terms of this concept, we formulate criteria for K to be near viable with respect to y'(t) = Ay(t) + f(t, y(t)). As an application, an abstract null-controllability theorem is given.
Keywords: abstract differential equation, graph, tangency condition, viability
Procedia PDF Downloads 144
32 Time Delayed Susceptible-Vaccinated-Infected-Recovered-Susceptible Epidemic Model along with Nonlinear Incidence and Nonlinear Treatment
Authors: Kanica Goel, Nilam
Abstract:
Infectious diseases are a leading cause of death worldwide and hence a great challenge for every nation; it is therefore essential to prevent and reduce their spread among humans. Mathematical models help us better understand the transmission dynamics and spread of infections. To this end, in the present article we propose a nonlinear time-delayed SVIRS (Susceptible-Vaccinated-Infected-Recovered-Susceptible) mathematical model with a nonlinear incidence rate and a nonlinear treatment rate. Analytical study shows that the model exhibits two types of equilibrium points: the disease-free equilibrium and the endemic equilibrium. For the long-term behavior of the model, stability is discussed in terms of the basic reproduction number R₀: the disease-free equilibrium is locally asymptotically stable if R₀ is less than one and unstable if R₀ is greater than one, for any time lag τ ≥ 0. Furthermore, when R₀ equals one, using center manifold theory and the Castillo-Chavez and Song theorem, we show that the model undergoes a transcritical bifurcation. Numerical simulations are carried out in MATLAB 2012b to illustrate the theoretical results.
Keywords: nonlinear incidence rate, nonlinear treatment rate, stability, time delayed SVIRS epidemic model
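The R₀ threshold behaviour described above can be illustrated numerically. The sketch below uses a plain SIR model with Euler time-stepping, not the paper's delayed SVIRS model with nonlinear incidence and treatment; all parameter values are invented for illustration.

```python
# Minimal illustration of the R0 threshold: with R0 = beta/gamma, an infection
# dies out when R0 < 1 and produces an epidemic peak when R0 > 1.
def simulate_sir(beta, gamma, days=200, dt=0.01, i0=0.01):
    s, i = 1.0 - i0, i0          # normalized susceptible / infected fractions
    peak = i
    for _ in range(int(days / dt)):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        s += dt * ds
        i += dt * di
        peak = max(peak, i)
    return i, peak

i_sub, peak_sub = simulate_sir(beta=0.05, gamma=0.1)   # R0 = 0.5
i_sup, peak_sup = simulate_sir(beta=0.3,  gamma=0.1)   # R0 = 3.0
```

For R₀ = 0.5 the infected fraction decays monotonically from its initial value; for R₀ = 3 it rises to a substantial peak before burning out, which is the qualitative dichotomy the stability analysis formalizes.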
Procedia PDF Downloads 149
31 An Ensemble Learning Method for Applying Particle Swarm Optimization Algorithms to Systems Engineering Problems
Authors: Ken Hampshire, Thomas Mazzuchi, Shahram Sarkani
Abstract:
As a subset of metaheuristics, nature-inspired optimization algorithms such as particle swarm optimization (PSO) have shown promise both in solving intractable problems and in their extensibility to novel problem formulations, owing to a general approach requiring few assumptions. Unfortunately, a single instantiation of an algorithm requires detailed tuning of parameters and cannot be proven best suited to a particular problem, on account of the "no free lunch" (NFL) theorem. Using these algorithms in real-world problems requires exquisite knowledge of the many techniques and is not conducive to reconciling the various approaches to given classes of problems. This research presents a unified view of PSO-based approaches from the perspective of relevant systems engineering problems, with the express purpose of eliciting the best solution for any problem formulation via an ensemble learning "bucket of models" approach. The central hypothesis is that extending the PSO algorithms found in the literature to real-world optimization problems requires a general ensemble-based method for all problem formulations but a specific implementation and solution for any instance. The main results are a problem-based literature survey and a general method for finding more globally optimal solutions for any systems engineering optimization problem.
Keywords: particle swarm optimization, nature-inspired optimization, metaheuristics, systems engineering, ensemble learning
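A single PSO instantiation of the kind the abstract argues must be wrapped in an ensemble can be sketched in a few lines. The inertia and acceleration parameters below (w, c1, c2) are the hand-tuned knobs the NFL argument is about; the sphere function is a standard test objective, not a systems engineering problem from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(f, dim=2, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best PSO on an objective f mapping R^dim -> R."""
    x = rng.uniform(-5, 5, (n_particles, dim))      # positions
    v = np.zeros_like(x)                            # velocities
    pbest = x.copy()                                # personal bests
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)]                 # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # inertia + cognitive pull toward pbest + social pull toward gbest
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[np.argmin(pbest_val)]
    return g, f(g)

best_x, best_val = pso(lambda p: np.sum(p**2))      # sphere test function
```

An ensemble method in the paper's sense would run several such instantiations with different parameter settings and select among their outputs, rather than trusting any single configuration.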
Procedia PDF Downloads 98
30 Theoretical Study of Structural, Magnetic, and Magneto-Optical Properties of Ultrathin Films of Fe/Cu (001)
Authors: Mebarek Boukelkoul, Abdelhalim Haroun
Abstract:
By means of first-principles calculation, we investigate the structural, magnetic, and magneto-optical properties of ultrathin films of Feₙ/Cu(001) with n = 1, 2, 3. We adopt a relativistic approach using density functional theory (DFT) within the local spin density approximation (LSDA). The electronic structure is computed in the framework of the spin-polarized relativistic (SPR) linear muffin-tin orbital (LMTO) method with the atomic sphere approximation (ASA). In the variational procedure, the crystal wave function is expressed as a linear combination of Bloch sums of the so-called relativistic muffin-tin orbitals centered on the atomic sites. The crystalline structure is obtained after an atomic relaxation process by optimizing the total energy with respect to the atomic interplane distance. A body-centered tetragonal (BCT) pseudomorphic crystalline structure with a tetragonality ratio c/a larger than unity is found. The magnetic behaviour is characterized by an enhanced magnetic moment and ferromagnetic interplane coupling. The polar magneto-optical Kerr effect spectra are given over a photon energy range extending to 15 eV, and the microscopic origin of the most interesting features is interpreted in terms of interband transitions. Unlike thin layers, the anisotropy in the ultrathin films is characterized by a magnetization perpendicular to the film plane.
Keywords: ultrathin films, magnetism, magneto-optics, pseudomorphic structure
Procedia PDF Downloads 335
29 Anthropomorphism in the Primate Mind-Reading Debate: A Critique of Sober's Justification Argument
Authors: Boyun Lee
Abstract:
This study discusses whether the anthropomorphism some scientists employ in cross-species comparison can be justified epistemologically, especially in the primate mind-reading debate. Concretely, it critically analyzes Elliott Sober's argument about the mind-reading hypothesis (MRH), an anthropomorphic hypothesis stating that nonhuman primates (e.g., chimpanzees) are mind-readers like humans. Although many scientists consider anthropomorphism an error, and regard choosing an anthropomorphic hypothesis like MRH without definite evidence as invalid, Sober argues that anthropomorphism is supported by cladistic parsimony, which favours the simplest hypothesis postulating the minimum number of evolutionary changes, and that it can thereby be justified epistemologically in the mind-reading debate. However, his argument has several problems. First, Reichenbach's theorem, which Sober uses in showing that MRH has a higher likelihood than its competitor, the behavior-reading hypothesis (BRH), does not fit the context of inferring evolutionary relationships. Second, the phylogenetic tree Sober supports is only one possible scenario for MRH, and even setting this aside, it is difficult to prove that the probability that nonhuman primate species and humans share mind-reading ability is higher than the probability of the alternative, considering how evolution occurs. Consequently, it seems hard to justify the anthropomorphism of MRH on Sober's argument. Some scientists and philosophers note that anthropomorphism sometimes helps in observing interesting phenomena or generating hypotheses in comparative biology. Nonetheless, in its current state it does not tell us why and how those phenomena appear, or which hypothesis is better, at least in the mind-reading debate.
Keywords: anthropomorphism, cladistic parsimony, comparative biology, mind-reading debate
Procedia PDF Downloads 172
28 Voice Liveness Detection Using Kolmogorov Arnold Networks
Authors: Arth J. Shah, Madhu R. Kamble
Abstract:
Voice biometric liveness detection aims to certify that the voice data presented in an authentication process is genuine and not a recording or synthetic voice. With the rise of deepfakes and equally sophisticated spoofing techniques, it is becoming challenging to ensure that the person on the other end is a live speaker. A Voice Liveness Detection (VLD) system is a group of security measures that detect and prevent voice spoofing attacks. Motivated by the recent development of the Kolmogorov-Arnold Network (KAN), based on the Kolmogorov-Arnold representation theorem, we propose KAN for the VLD task. To date, multilayer perceptron (MLP) based classifiers have been used for such classification tasks; we aim to capture not only the compositional structure of the model but also to optimize the values of the univariate functions. This study presents a mathematical as well as experimental analysis of KAN for VLD tasks, thereby opening a new perspective for researchers working on speech and signal processing tasks. It combines traditional signal processing with new deep learning models, a combination which proves well suited to VLD. Experiments are performed on the POCO and ASVspoof 2017 V2 databases. We use constant-Q transform (CQT), Mel, and short-time Fourier transform (STFT) based front-end features, with CNN, BiLSTM, and KAN back-end classifiers. The best accuracy is 91.26% on the POCO database using STFT features with the KAN classifier. On the ASVspoof 2017 V2 database, the lowest EER obtained is 26.42%, using CQT features with the KAN classifier.
Keywords: Kolmogorov Arnold networks, multilayer perceptron, pop noise, voice liveness detection
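The "sum of learned univariate functions" flavour behind KANs can be shown on a toy regression. The sketch below fits an additive model phi₁(x₁) + phi₂(x₂) with per-coordinate polynomials via least squares; a real KAN composes such layers with learnable splines, so this one-layer additive fit is only an illustration of the representation idea, and the target function and degrees are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (500, 2))
y = np.sin(np.pi * X[:, 0]) + X[:, 1] ** 2   # additive ground truth

def univariate_features(x, degree=5):
    """Monomial features x, x^2, ..., x^degree of one input coordinate."""
    return np.stack([x**k for k in range(1, degree + 1)], axis=1)

# One block of univariate features per input coordinate, plus a shared bias:
# the fitted model is bias + phi_1(x1) + phi_2(x2).
A = np.hstack([np.ones((len(X), 1)),
               univariate_features(X[:, 0]),
               univariate_features(X[:, 1])])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
rmse = np.sqrt(np.mean((A @ coef - y) ** 2))
```

An MLP entangles all inputs in every hidden unit, whereas this additive structure keeps each learned function univariate, which is the interpretability angle KAN papers emphasize.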
Procedia PDF Downloads 39
27 Derivation of BCK/BCI-Algebras
Authors: Tumadhir Fahim M Alsulami
Abstract:
This paper connects two important notions: fuzzy ideals of BCK-algebras and derivations of BCI-algebras. The result is a new concept called derivation fuzzy ideals of BCI-algebras, followed by various results and important theorems on different types of ideals. Chapter 1 presents the basic and fundamental concepts of BCK/BCI-algebras: BCK/BCI-algebras, BCK subalgebras, bounded BCK-algebras, positive implicative BCK-algebras, commutative BCK-algebras, and implicative BCK-algebras. It also discusses ideals of BCK-algebras: positive implicative ideals, implicative ideals, and commutative ideals. The last section of Chapter 1 introduces the notion of derivations of BCI-algebras and regular derivations of BCI-algebras, with basic definitions and properties. Chapter 2 comprises three sections. Section 1 contains elementary concepts of fuzzy sets and fuzzy set operations. Section 2 presents O. G. Xi's idea of applying the fuzzy set concept to BCK-algebras, and studies fuzzy subalgebras as well. Section 3 covers fuzzy ideals of BCK-algebras: basic definitions, closed fuzzy ideals, fuzzy commutative ideals, fuzzy positive implicative ideals, fuzzy implicative ideals, fuzzy H-ideals, and fuzzy p-ideals, investigated through diverse theorems and propositions. Chapter 3 introduces the main concept of the thesis, derivation fuzzy ideals of BCI-algebras, in four sections. Section 1 gives general definitions and important theorems of derivation fuzzy ideal theory. Sections 2 and 3 treat derivation fuzzy p-ideals and derivation fuzzy H-ideals of BCI-algebras, with several important theorems and propositions. The last section studies derivation fuzzy implicative ideals of BCI-algebras, including new theorems and results. Furthermore, we present a new theorem that relates derivation fuzzy implicative ideals, derivation fuzzy positive implicative ideals, and derivation fuzzy commutative ideals. The new results obtained in Chapter 3 were submitted in two separate articles and accepted for publication.
Keywords: BCK/BCI-algebras, derivation
Procedia PDF Downloads 124
26 Setting Uncertainty Conditions Using Singular Values for Repetitive Control in State Feedback
Authors: Muhammad A. Alsubaie, Mubarak K. H. Alhajri, Tarek S. Altowaim
Abstract:
A repetitive controller designed to accommodate periodic disturbances via state feedback is discussed. Periodic disturbances can be represented by a time delay model in a positive feedback loop acting on the system output. A direct use of the small gain theorem solves the periodic disturbance problem by 1) isolating the delay model, 2) finding the overall system representation around the delay model, and 3) designing a feedback controller that ensures overall system stability and tracking error convergence. This paper addresses uncertainty conditions for the repetitive controller designed in state feedback, in either past error feedforward or current error feedback form, using singular values. The uncertainty investigation is based on the overall system found and its associated stability condition, which, depending on the scheme used, sets an upper or lower limit on the weighting parameter. This creates a region that should not be exceeded when selecting the weighting parameter, which in turn assures performance improvement against system uncertainty. The repetitive control problem can be described in lifted form, which allows the singular value principle to be used in setting the range for the weighting parameter selection. Simulation results show tracking error convergence under dynamic system perturbation when the weighting parameter is chosen within the obtained range; they also show the advantage of using the weighting parameter compared to the case where it is omitted.
Keywords: model mismatch, repetitive control, singular values, state feedback
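The singular-value bound on the weighting parameter can be sketched numerically. The matrix G below is a random stand-in for the lifted system seen around the delay model, not the paper's actual plant: by the small gain theorem, a feedback loop of the (norm-one) delay with the weighted operator alpha·G is stable when the largest singular value of alpha·G stays below one, which bounds alpha by 1/σ_max(G).

```python
import numpy as np

rng = np.random.default_rng(2)
G = rng.normal(size=(5, 5))                    # hypothetical lifted system matrix

# Largest singular value = induced 2-norm of G.
sigma_max = np.linalg.svd(G, compute_uv=False)[0]
alpha_limit = 1.0 / sigma_max                  # admissible-region boundary

alpha = 0.9 * alpha_limit                      # any choice inside the region
loop_gain = np.linalg.svd(alpha * G, compute_uv=False)[0]
```

Any alpha chosen inside the region keeps the loop gain strictly below one; stepping outside it violates the small gain condition, which is the "region that should not be exceeded" described above.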
Procedia PDF Downloads 155
25 A Stochastic Diffusion Process Based on the Two-Parameters Weibull Density Function
Authors: Meriem Bahij, Ahmed Nafidi, Boujemâa Achchab, Sílvio M. A. Gama, José A. O. Matos
Abstract:
Stochastic modeling concerns the use of probability to model real-world situations in which uncertainty is present; its purpose is to estimate the probability of outcomes within a forecast, i.e., to predict what might happen under different conditions or decisions. In the present study, we present a stochastic diffusion process based on the two-parameter Weibull distribution (its trend is proportional to the two-parameter Weibull probability density function). In general, the Weibull distribution can assume the characteristics of many different types of distributions. This has made it very popular among engineers and quality practitioners, for whom it is the most commonly used distribution in problems such as modeling reliability data, accelerated life testing, and maintainability modeling and analysis. We first obtain the probabilistic characteristics of this model: the explicit expression of the process, its trend functions, and its distribution, by transforming the diffusion process into a Wiener process as shown in the Ricciardi theorem. We then develop the statistical inference of the model using the maximum likelihood methodology. Finally, we analyse, on simulated data, the computational problems associated with the parameters, an issue of great importance for application to real data, using convergence analysis methods. Overall, the use of a stochastic model reflects a pragmatic decision on the part of the modeler: given the available data and the universe of models known to the modeler, this model represents the best currently available description of the phenomenon under consideration.
Keywords: diffusion process, discrete sampling, likelihood estimation method, simulation, stochastic diffusion process, trend functions, two-parameter Weibull density function
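A diffusion whose trend is proportional to a two-parameter Weibull density can be simulated with the Euler-Maruyama scheme. The sketch below is a simplified stand-in for the paper's process (the paper's exact drift and diffusion terms may differ): for dX = k·f(t)·X dt + s·X dW with f the Weibull pdf, the exact mean is E[X_T] = x₀·exp(k·F(T)) with F the Weibull cdf, which the Monte Carlo average should reproduce. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
lam, shape = 1.0, 2.0                           # Weibull scale and shape
k, s, x0 = 0.8, 0.1, 1.0                        # drift gain, volatility, start
T, dt, n_paths = 2.0, 0.001, 20000

def weibull_pdf(t):
    return (shape / lam) * (t / lam) ** (shape - 1) * np.exp(-(t / lam) ** shape)

# Euler-Maruyama: X_{n+1} = X_n + drift*dt + diffusion*dW over all paths at once.
X = np.full(n_paths, x0)
for t in np.arange(0.0, T, dt):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    X = X + k * weibull_pdf(t) * X * dt + s * X * dW

F_T = 1.0 - np.exp(-(T / lam) ** shape)         # Weibull cdf at T
exact_mean = x0 * np.exp(k * F_T)               # exact trend of the process
mc_mean = X.mean()
```

Comparing `mc_mean` against `exact_mean` is the kind of simulation-based convergence check the abstract describes for validating parameter estimates on synthetic data.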
Procedia PDF Downloads 307
24 Design of Robust and Intelligent Controller for Active Removal of Space Debris
Authors: Shabadini Sampath, Jinglang Feng
Abstract:
With huge kinetic energy, space debris poses a major threat to astronauts' activities and spacecraft in orbit if a collision happens. Active removal of space debris is required in order to avoid frequent collisions; otherwise, the amount of debris will increase uncontrollably, threatening the safety of the entire space system. The safe and reliable removal of large-scale space debris, however, has remained a huge challenge to date. While capturing and deorbiting debris, the space manipulator has to achieve high control precision, but uncertainties and unknown disturbances make coordinated control of the manipulator difficult. To address this challenge, this paper develops a robust and intelligent control algorithm that controls joint movement and restricts it to the sliding manifold by reducing uncertainties. A neural network adaptive sliding mode controller (NNASMC) is applied with the objective of finding a control law such that the joint motions of the space manipulator follow the given trajectory. Computed torque control (CTC), an effective motion control strategy, is used to compute the manipulator arm torque that generates the required motion. Based on the Lyapunov stability theorem, the proposed NNASMC together with CTC guarantees the robustness and global asymptotic stability of the closed-loop control system. Finally, the controllers are modeled and simulated in MATLAB Simulink, and results are presented to demonstrate the effectiveness of the proposed approach.
Keywords: GNC, active removal of space debris, AI controllers, MATLAB Simulink
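The computed torque control inner loop can be sketched for a single-link arm (the NNASMC adaptive layer is omitted, and the pendulum model and gains below are illustrative, not the paper's manipulator). With plant m·l²·q̈ + m·g·l·sin(q) = τ, CTC inverts the model so the tracking error obeys ë + Kd·ė + Kp·e = 0 and decays exponentially.

```python
import numpy as np

m, l, g = 1.0, 0.5, 9.81                   # link mass, length, gravity
Kp, Kd = 100.0, 20.0                       # critically damped: Kd^2 = 4*Kp
dt, T = 0.001, 2.0

q, qd = 0.5, 0.0                           # start 0.5 rad off the trajectory
for step in range(int(T / dt)):
    t = step * dt
    q_des, qd_des, qdd_des = np.sin(t), np.cos(t), -np.sin(t)
    e, ed = q_des - q, qd_des - qd
    # feedback-linearizing torque using the (assumed exact) model
    tau = m * l**2 * (qdd_des + Kd * ed + Kp * e) + m * g * l * np.sin(q)
    qdd = (tau - m * g * l * np.sin(q)) / (m * l**2)   # plant dynamics
    qd += dt * qdd                          # semi-implicit Euler integration
    q += dt * qd

final_error = abs(np.sin(T) - q)
```

When the model is exact, as here, the gravity terms cancel and the error dynamics are linear; the role of the NNASMC layer in the paper is precisely to compensate the residual when they do not cancel.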
Procedia PDF Downloads 132
23 Application of Rapidly Exploring Random Tree Star-Smart and G2 Quintic Pythagorean Hodograph Curves to the UAV Path Planning Problem
Authors: Luiz G. Véras, Felipe L. Medeiros, Lamartine F. Guimarães
Abstract:
This work approaches the automatic planning of paths for unmanned aerial vehicles (UAVs) through the Rapidly Exploring Random Tree Star-Smart (RRT*-Smart) algorithm. RRT*-Smart samples positions of a navigation environment through a tree-type graph: the algorithm randomly expands a tree from an initial position (root node) until one of its branches reaches the final position of the path to be planned, and it guarantees the shortest path as the number of iterations tends to infinity. When a new node is inserted into the tree, each neighbor of the new node is connected to it if and only if the path from the root node to that neighbor through the new connection is shorter than the current path between those two nodes. RRT*-Smart uses an intelligent sampling strategy to plan less extensive routes with fewer iterations, based on creating samples/nodes near the convex vertices of the obstacles in the navigation environment. The planned paths are then smoothed using quintic Pythagorean hodograph curves; this smoothing converts a route into a dynamically viable one, based on the kinematic constraints of the vehicle, by modeling the hodograph components of a curve with polynomials that obey the Pythagorean theorem. The advantage of the resulting structure is that the curve length can be computed exactly, without quadrature techniques for the resolution of integrals.
Keywords: path planning, path smoothing, Pythagorean hodograph curve, RRT*-Smart
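The exact-arc-length property of Pythagorean hodograph (PH) curves can be checked directly. Building the hodograph from a preimage polynomial pair (u(t), v(t)) as x' = u² − v², y' = 2uv makes the speed √(x'² + y'²) = u² + v² itself a polynomial, so the arc length is a closed-form polynomial integral. The linear preimage and its coefficients below are arbitrary illustration values (a low-degree case, not the quintic G² construction of the paper).

```python
import numpy as np

u0, u1 = 1.0, 2.0                      # preimage u(t) = u0 + u1*t
v0, v1 = 0.5, -1.0                     # preimage v(t) = v0 + v1*t

# speed sigma(t) = u(t)^2 + v(t)^2 = a + b*t + c*t^2 (exactly polynomial)
a = u0**2 + v0**2
b = 2 * (u0 * u1 + v0 * v1)
c = u1**2 + v1**2
exact_length = a + b / 2 + c / 3       # closed-form integral over [0, 1]

# cross-check against brute-force quadrature of sqrt(x'^2 + y'^2)
t = np.linspace(0.0, 1.0, 100001)
u, v = u0 + u1 * t, v0 + v1 * t
speed = np.sqrt((u**2 - v**2) ** 2 + (2 * u * v) ** 2)
numeric_length = float(np.sum((speed[1:] + speed[:-1]) / 2 * np.diff(t)))
```

For an ordinary polynomial curve the speed involves a square root and the length integral has no closed form, which is exactly the problem the PH construction removes.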
Procedia PDF Downloads 166
22 Modeling of Drug Distribution in the Human Vitreous
Authors: Judith Stein, Elfriede Friedmann
Abstract:
The injection of a drug into the vitreous body for the treatment of retinal diseases like wet age-related macular degeneration (AMD) is among the most common medical interventions worldwide. We develop mathematical models for drug transport in the vitreous body of a human eye to analyse the impact of different rheological models of the vitreous on drug distribution. In addition to the convection-diffusion equation characterizing the drug spreading, we use porous media modeling for the healthy vitreous with its dense collagen network, and include the steady permeating flow of the aqueous humor described by Darcy's law, driven by a pressure drop. The vitreous body in a healthy human eye behaves like a viscoelastic gel, through the collagen fibers suspended in the network of hyaluronic acid, and acts as a drug depot for the treatment of retinal diseases. For a completely liquefied vitreous, we couple the drug diffusion with the classical Navier-Stokes flow equations. We prove the global existence and uniqueness of the weak solution of the developed initial-boundary value problem describing the drug distribution in the healthy vitreous, considering the permeating aqueous humor flow, in a realistic three-dimensional setting. In particular, for the drug diffusion equation, results from the literature are extended from homogeneous Dirichlet boundary conditions to our mixed boundary conditions describing the eye, via Galerkin's method using the Cauchy-Schwarz inequality and the trace theorem. Because the effective drug concentration range is small and higher concentrations may be toxic, the ability to model drug transport could improve therapy by accounting for individual patient differences, and gives a better understanding of the physiological and pathological processes in the vitreous.
Keywords: coupled PDE systems, drug diffusion, mixed boundary conditions, vitreous body
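The mixed-boundary diffusion part of the model can be illustrated with a crude 1D finite-difference sketch: an injected bolus diffusing between a no-flux (Neumann) boundary on one side and an absorbing (Dirichlet) boundary on the other, a stand-in for the paper's 3D mixed boundary conditions. The diffusivity, domain, and bolus placement are illustrative, not physiological.

```python
import numpy as np

D, L, nx = 1e-2, 1.0, 101
dx = L / (nx - 1)
dt = 0.4 * dx**2 / D                         # respects explicit stability limit
c = np.zeros(nx)
c[40:60] = 1.0                               # initial drug bolus mid-domain
mass0 = c.sum() * dx

for _ in range(2000):
    lap = np.zeros(nx)
    lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
    c[1:-1] += dt * D * lap[1:-1]            # explicit diffusion update
    c[0] = c[1]                              # no-flux boundary (e.g. lens side)
    c[-1] = 0.0                              # absorbing boundary (retina side)

mass = c.sum() * dx                          # drug remaining in the domain
```

The absorbing boundary drains mass over time while the peak concentration flattens, the two quantities a therapeutic window argument (effective versus toxic concentration) would track.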
Procedia PDF Downloads 137
21 Derivation of a Risk-Based Level of Service Index for Surface Street Network Using Reliability Analysis
Authors: Chang-Jen Lan
Abstract:
The current level of service (LOS) index adopted in the Highway Capacity Manual (HCM) for signalized intersections on surface streets is based on the intersection average delay. The delay thresholds defining LOS grades are subjective and unrelated to critical traffic conditions: an intersection delay of 80 sec per vehicle for the failing grade F, for example, does not necessarily correspond to the intersection capacity, and a specific average delay may result from delay minimization, delay equalization, or other optimization criteria. To that end, a reliability version of the intersection critical degree of saturation (v/c) is introduced as the LOS index. Traditionally, the level of saturation at a signalized intersection is defined as the ratio of the critical volume sum (per lane) to the average saturation flow (per lane) during all available effective green time within a cycle, where the critical sum is the sum of the maximal conflicting movement-pair volumes in the northbound-southbound and eastbound-westbound rights of way. In this study, both movement volume and saturation flow are assumed to be log-normally distributed. Because, when the conditions of the central limit theorem hold, multiplication of independent, positive random variables tends in the limit to a log-normally distributed outcome, the critical degree of saturation is expected to be log-normal as well. Derivation of the risk index predictive limits is complex due to the maximum and absolute value operators, as well as the ratio of random variables; nevertheless, a fairly accurate functional form for the predictive limit at a user-specified significance level is obtained. The predictive limit is then compared with the designated LOS thresholds for the intersection critical degree of saturation (denoted as X).
Keywords: reliability analysis, level of service, intersection critical degree of saturation, risk based index
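The distributional backbone of the index can be checked by Monte Carlo: if the critical volume sum V and the saturation flow S are log-normal, the ratio X = V/S is log-normal, and a predictive limit at significance level α has the closed form exp(μ_V − μ_S + z_α·√(σ_V² + σ_S²)). This sketch ignores the maximum and absolute-value operators the paper handles, and the parameter values are illustrative, not calibrated to any intersection.

```python
import numpy as np

rng = np.random.default_rng(4)
mu_v, sig_v = np.log(1400.0), 0.10          # log-space parameters of volume sum
mu_s, sig_s = np.log(1800.0), 0.08          # log-space parameters of sat. flow
z95 = 1.6449                                 # standard normal 95th percentile

n = 200_000
V = rng.lognormal(mu_v, sig_v, n)
S = rng.lognormal(mu_s, sig_s, n)
X = V / S                                    # critical degree of saturation

# ln X is Normal(mu_v - mu_s, sqrt(sig_v^2 + sig_s^2)), so the 95% predictive
# limit of X is available in closed form.
limit_exact = np.exp(mu_v - mu_s + z95 * np.hypot(sig_v, sig_s))
limit_mc = np.quantile(X, 0.95)
```

Comparing such a predictive limit against a fixed threshold (e.g. X = 1, the capacity condition) is what makes the index risk-based rather than delay-based.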
Procedia PDF Downloads 131
20 AI Peer Review Challenge: Standard Model of Physics vs 4D GEM EOS
Authors: David A. Harness
Abstract:
A natural evolution of automated theorem proving (ATP) cognitive systems is to meet AI peer review standards. The ATP process of axiom selection from Mizar to prove a conjecture would be further refined, as in all human and machine learning, by solving the real-world problem of the proposed AI peer review challenge: determine which conjecture forms the higher-confidence-level constructive proof between the Standard Model of Physics SU(n) lattice gauge group operation and the present non-standard 4D GEM EOS SU(n) lattice gauge group spatially extended operation, in which the photon and electron are the first two trace angular momentum invariants of a gravitoelectromagnetic (GEM) energy momentum density tensor wavetrain integration spin-stress pressure-volume equation of state (EOS), initiated via 32 lines of Mathematica code. The resulting gravitoelectromagnetic spectrum ranges from compressive through rarefactive of the central cosmological-constant vacuum energy density in units of pascals. Said self-adjoint group operation operates exclusively on the stress energy momentum tensor of the Einstein field equations, introducing quantization directly at the 4D spacetime level, essentially reformulating the Yang-Mills virtual superpositioned-particle compounded lattice gauge group quantization of the vacuum into a single hyper-complex multi-valued GEM U(1) × SU(1,3) lattice gauge group Planck spacetime mesh quantization of the vacuum. Thus the Mizar corpus already contains all of the axioms required for relevant DeepMath premise selection and unambiguous formal natural language parsing in context deep learning.
Keywords: automated theorem proving, constructive quantum field theory, information theory, neural networks
Procedia PDF Downloads 179
19 The Impact of City Mobility on Propagation of Infectious Diseases: Mathematical Modelling Approach
Authors: Asrat M. Belachew, Tiago Pereira (Institute of Mathematics and Computer Sciences, Avenida Trabalhador São Carlense, 400, São Carlos, 13566-590, Brazil)
Abstract:
Infectious diseases are among the most prominent threats to human beings. They cause morbidity and mortality to an individual and collapse the social, economic, and political systems of the whole world collectively. Mathematical models are fundamental tools and provide a comprehensive understanding of how infectious diseases spread and designing the control strategy to mitigate infectious diseases from the host population. Modeling the spread of infectious diseases using a compartmental model of inhomogeneous populations is good in terms of complexity. However, in the real world, there is a situation that accounts for heterogeneity, such as ages, locations, and contact patterns of the population which are ignored in a homogeneous setting. In this work, we study how classical an SEIR infectious disease spreading of the compartmental model can be extended by incorporating the mobility of population between heterogeneous cities during an outbreak of infectious disease. We have formulated an SEIR multi-cities epidemic spreading model using a system of 4k ordinary differential equations to describe the disease transmission dynamics in k-cities during the day and night. We have shownthat the model is epidemiologically (i.e., variables have biological interpretation) and mathematically (i.e., a unique bounded solution exists all the time) well-posed. We constructed the next-generation matrix (NGM) for the model and calculated the basic reproduction number R0for SEIR-epidemic spreading model with cities mobility. R0of the disease depends on the spectral radius mobility operator, and it is a threshold between asymptotic stability of the disease-free equilibrium and disease persistence. Using the eigenvalue perturbation theorem, we showed that sending a fraction of the population between cities decreases the reproduction number of diseases in interconnected cities. 
As a result, disease transmission decreases in the population. Keywords: SEIR-model, mathematical model, city mobility, epidemic spreading
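The R0 computation described above can be sketched numerically. The following is an illustrative two-city SEIR example (not the authors' k-city model): the infected-compartment linearization is split into new-infection and transition blocks, and R0 is taken as the spectral radius of the next-generation matrix. All parameter values and the symmetric mobility operator are hypothetical.

```python
import numpy as np

# Hypothetical 2-city SEIR sketch: R0 as the spectral radius of the
# next-generation matrix K = F V^-1, with infections mediated by a
# symmetric mobility operator M. Parameter values are illustrative only.
beta = np.array([0.30, 0.45])  # city-specific transmission rates
sigma = 1 / 5.0                # incubation rate (E -> I)
gamma = 1 / 7.0                # recovery rate (I -> R)
m = 0.10                       # fraction of residents exchanged between cities

# Mobility operator: rows give where each city's residents spend their time
M = np.array([[1 - m, m],
              [m, 1 - m]])

# Linearization at the disease-free equilibrium, ordered (E1, E2, I1, I2):
# F holds new infections, V holds compartment transitions.
F = np.zeros((4, 4))
F[0:2, 2:4] = M @ np.diag(beta) @ M.T  # infections mediated by mobility
V = np.block([
    [sigma * np.eye(2), np.zeros((2, 2))],
    [-sigma * np.eye(2), gamma * np.eye(2)],
])

K = F @ np.linalg.inv(V)             # next-generation matrix
R0 = max(abs(np.linalg.eigvals(K)))  # spectral radius

R0_isolated = max(beta) / gamma      # no mobility: the worst city dominates
print(R0, R0_isolated)
```

With these numbers, mixing 10% of the population lowers R0 below the no-mobility value, consistent with the perturbation argument in the abstract.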
Procedia PDF Downloads 109
18 A Location-Based Search Approach According to Users’ Application Scenario
Authors: Shih-Ting Yang, Chih-Yun Lin, Ming-Yu Li, Jhong-Ting Syue, Wei-Ming Huang
Abstract:
The global positioning system (GPS) has become increasingly precise in recent years, and location-based services (LBS) have developed rapidly. Take the example of finding a parking lot (such as parking apps). A location-based service can offer immediate information about a nearby parking lot, including the number of remaining parking spaces. However, it cannot provide the expected search results according to the requirement situations of users. For that reason, this paper develops a “Location-based Search Approach according to Users’ Application Scenario” based on location-based search and demand determination to help users obtain information consistent with their requirements. The “Location-based Search Approach based on Users’ Application Scenario” of this paper consists of one mechanism and three kernel modules. First, in the Information Pre-processing Mechanism (IPM), this paper uses the cosine theorem to categorize the locations of users. Then, in the Information Category Evaluation Module (ICEM), kNN (k-Nearest Neighbors) is employed to classify the browsing records of users. After that, in the Information Volume Level Determination Module (IVLDM), this paper compares the number of users clicking on the information at different locations with the average number of users clicking on the information at a specific location, so as to evaluate the urgency of demand; the two-dimensional space is then used to estimate the application situations of users. For the last step, in the Location-based Search Module (LBSM), this paper compares all search results with the average number of characters of the search results, categorizes the search results with the Manhattan distance, and selects the results according to the application scenario of users. Additionally, this paper develops a Web-based system according to this methodology to demonstrate its practical application.
The application scenario-based estimation and the location-based search are used to evaluate the type and abundance of the information expected by the public at a specific location, so that information demanders can obtain information consistent with their application situations at that location. Keywords: data mining, knowledge management, location-based service, user application scenario
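The ICEM step above classifies users' browsing records with kNN. A minimal standard-library sketch of that idea follows; the two-dimensional feature vectors, the labels, and k = 3 are hypothetical choices, not the paper's actual features.

```python
from collections import Counter
import math

# Minimal kNN sketch of the Information Category Evaluation Module (ICEM)
# idea: classify a browsing record into an information category by majority
# vote among its k nearest labelled records. Features and labels are toys.

def knn_classify(records, query, k=3):
    """records: list of (feature_vector, label); query: feature_vector."""
    neighbours = sorted(records, key=lambda r: math.dist(r[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Toy browsing records: (clicks on parking info, clicks on dining info)
records = [
    ((9, 1), "parking"), ((8, 2), "parking"), ((7, 0), "parking"),
    ((1, 9), "dining"), ((2, 8), "dining"), ((0, 7), "dining"),
]
print(knn_classify(records, (8, 1)))  # nearest records are all "parking"
```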
Procedia PDF Downloads 123
17 2D Ferromagnetism in Van der Waals Bonded Fe₃GeTe₂
Authors: Ankita Tiwari, Jyoti Saini, Subhasis Ghosh
Abstract:
For many years, researchers have been fascinated by the question of how properties evolve as dimensionality is lowered. As proposed by Hohenberg, Mermin, and Wagner (HMW), thermal fluctuations destroy long-range (LR) magnetic order in a low-dimensional system (d < 3) with continuous symmetry, which is responsible for the absence of Heisenberg ferromagnetism in two dimensions (2D); early on, however, it was shown that the presence of a significant magnetic anisotropy can compensate for this, allowing an LR magnetic order to stabilize in 2D even in the presence of the stronger thermal fluctuations. Van der Waals (vdW) ferromagnets, including CrI₃, CrTe₂, Cr₂X₂Te₆ (X = Si and Ge), and Fe₃GeTe₂, offer a nearly ideal platform for studying ferromagnetism in 2D. Fe₃GeTe₂ is the subject of extensive investigation due to its tunable magnetic properties, high Curie temperature (Tc ~ 220 K), and perpendicular magnetic anisotropy; these appealing features have stimulated considerable activity in spintronics device development. Although it is known that LR interaction is necessary to get around the HMW theorem in 2D, the experimental realization of Heisenberg 2D ferromagnetism remains elusive in condensed matter systems. Here, we show that Fe₃GeTe₂ hosts both localized and delocalized spins, resulting in itinerant and local-moment ferromagnetism. The presence of the LR itinerant interaction helps stabilize a Heisenberg ferromagnet in 2D. With the help of Rhodes-Wohlfarth (RW) and generalized RW-based analysis, Fe₃GeTe₂ has been shown to be a 2D ferromagnet with itinerant magnetism that can be modulated by an external magnetic field. Hence, the presence of both local-moment and itinerant magnetism has made this system interesting for research in low dimensions. We have also rigorously performed critical analysis using an improvised method. We show that the variable critical exponents are typical signatures of 2D ferromagnetism in Fe₃GeTe₂.
The spontaneous magnetization exponent β changes the universality class from mean-field to 2D Heisenberg with the applied field. We have also confirmed the range of interaction via renormalization group (RG) theory; according to RG theory, Fe₃GeTe₂ is a 2D ferromagnet with LR interactions. Keywords: Van der Waals ferromagnet, 2D ferromagnetism, phase transition, itinerant ferromagnetism, long-range order
Procedia PDF Downloads 71
16 The Dressing Field Method of Gauge Symmetries Reduction: Presentation and Examples
Authors: Jeremy Attard, Jordan François, Serge Lazzarini, Thierry Masson
Abstract:
Gauge theories are the natural framework for describing fundamental interactions geometrically, using principal and associated fiber bundles as dynamical entities. The central notion of these theories is their local gauge symmetry, implemented by the local action of a Lie group H. There exist several methods used to reduce the symmetry of a gauge theory, such as gauge fixing, the bundle reduction theorem, or the spontaneous symmetry breaking mechanism (SSBM). This paper presents another method of gauge symmetry reduction, distinct from those three. Given a symmetry group H acting on a fiber bundle and its naturally associated fields (Ehresmann (or Cartan) connection, curvature, matter fields, etc.), there sometimes exists a way to erase (in whole or in part) the H-action by just reconfiguring these fields, i.e., by making a mere change of field variables in order to get new (‘composite’) fields on which H (in whole or in part) does not act anymore. Two examples will be discussed: the re-interpretation of the BEHGHK (Higgs) mechanism, on the one hand, and the top-down construction of Tractor and Penrose's Twistor spaces and connections in the framework of conformal Cartan geometry, on the other. They have, of course, nothing to do with each other, but the dressing field method can be applied to both to gain new insight. In the first example, it turns out that the generation of masses in the Standard Model can be separated from the symmetry breaking, the latter being a mere change of field variables, i.e., a dressing. This offers an interpretation in opposition to the one usually found in textbooks. In the second case, the dressing field method applied to conformal Cartan geometry offers a way of understanding the deep geometric nature of the so-called Tractors and Twistors.
The dressing field method, distinct from a gauge transformation (even if it can apparently take the same form), is a systematic way of finding and erasing artificial symmetries of a theory by a mere change of field variables which redistributes the degrees of freedom of the theory. Keywords: BEHGHK (Higgs) mechanism, conformal gravity, gauge theory, spontaneous symmetry breaking, symmetry reduction, twistors and tractors
Procedia PDF Downloads 237
15 Dual Duality for Unifying Spacetime and Internal Symmetry
Authors: David C. Ni
Abstract:
The current efforts toward a Grand Unification Theory (GUT) can be classified into General Relativity, Quantum Mechanics, String Theory, and the related formalisms. The geometric approaches extending General Relativity seek to establish global and local invariance embedded in metric formalisms, whereby additional dimensions are constructed for unifying canonical formulations, such as the Hamiltonian and Lagrangian formulations. The approaches extending Quantum Mechanics adopt the symmetry principle to formulate algebra-group theories, which evolved from the Maxwell formulation to the Yang-Mills non-abelian gauge formulation and thereafter manifested in the Standard Model. This thread of efforts has been constructing super-symmetry for mapping fermions and bosons as well as gluons and gravitons. The efforts in String Theory have currently been evolving toward the so-called gauge/gravity correspondence, particularly the equivalence between type IIB string theory compactified on AdS5 × S5 and N = 4 supersymmetric Yang-Mills theory. Other efforts adopt cross-breeding approaches among the above three formalisms as well as competing formalisms; nevertheless, the related symmetries, dualities, and correspondences are outlined as principles and techniques, even though these terminologies are defined diversely and often generically coined as ‘duality’. In this paper, we first classify these dualities from the perspective of physics. We then examine the hierarchical structure of the classes from a mathematical perspective, referring to the Coleman-Mandula theorem, Hidden Local Symmetry, Groupoid-Categorization, and others.
Based on the Fundamental Theorem of Algebra, we argue that, rather than imposing effective constraints on different algebras and the related extensions (which are mainly constructed by self-breeding or self-mapping methodologies for sustaining invariance), a new addition is needed: we propose a momentum-angular momentum duality at the level of the electromagnetic duality for rationalizing the duality algebras, and we then characterize this duality numerically in an attempt to address some unsolved problems in physics and astrophysics. Keywords: general relativity, quantum mechanics, string theory, duality, symmetry, correspondence, algebra, momentum-angular-momentum
Procedia PDF Downloads 397
14 Hardy Type Inequalities of Two-Dimensional on Time Scales via Steklov Operator
Authors: Wedad Albalawi
Abstract:
Mathematical inequalities have been at the core of mathematical study and are used in almost all branches of mathematics as well as in various areas of science and engineering. The inequalities of Hardy, Littlewood, and Polya were the first significant compilation of this kind; their work presented fundamental ideas, results, and techniques, and it has had much influence on research in various branches of analysis. Since 1934, various inequalities have been produced and studied in the literature. Furthermore, some inequalities have been formulated through operators: in 1989, weighted Hardy inequalities were obtained for integration operators, and weighted estimates were then obtained for Steklov operators, which were used in the solution of the Cauchy problem for the wave equation. These were improved upon in 2011 to include the boundedness of integral operators from a weighted Sobolev space to a weighted Lebesgue space. Some inequalities have been demonstrated and improved using the Hardy-Steklov operator. Recently, many integral inequalities have been improved by differential operators. The Hardy inequality has been one of the tools used to study the integrability of solutions of differential equations. Dynamic inequalities of Hardy and Copson type have since been extended and improved by various integral operators. These inequalities are interesting to apply in different fields of mathematics (function spaces, partial differential equations, mathematical modeling). Some inequalities have appeared involving Copson and Hardy inequalities on time scales, yielding new special versions of them. A time scale is defined as a closed subset of the real numbers. Inequalities in their time-scale versions have received a lot of attention and form a major field in both pure and applied mathematics.
There are many applications of dynamic equations on time scales to quantum mechanics, electrical engineering, neural networks, heat transfer, combinatorics, and population dynamics. This study focuses on double integrals to obtain new time-scale inequalities of Copson type driven by the Steklov operator. They will be applied in the solution of the Cauchy problem for the wave equation. The proofs are carried out by introducing restrictions on the operator in several cases. In addition, the obtained inequalities are derived using concepts from the time-scale calculus, such as Fubini's theorem and Hölder's inequality. Keywords: time scales, inequality of Hardy, inequality of Copson, Steklov operator
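For orientation, the classical continuous prototype underlying the weighted and time-scale versions discussed above is Hardy's 1925 integral inequality:

```latex
% Classical integral Hardy inequality (p > 1), the prototype of the
% weighted and time-scale generalizations discussed in the abstract:
\[
\int_0^\infty \left( \frac{1}{x}\int_0^x f(t)\,dt \right)^{p} dx
\;\le\; \left( \frac{p}{p-1} \right)^{p} \int_0^\infty f(x)^{p}\,dx,
\qquad f \ge 0,\; p > 1,
\]
% where the constant (p/(p-1))^p is best possible.
```

The operator-driven versions replace the averaging operator on the left by weighted integration or Steklov-type operators, and the time-scale versions replace the integrals by delta-integrals over a time scale.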
Procedia PDF Downloads 76
13 Optimization of a High-Growth Investment Portfolio for the South African Market Using Predictive Analytics
Authors: Mia Françoise
Abstract:
This report aims to develop a strategy for assisting short-term investors to benefit from the current economic climate in South Africa by utilizing technical analysis techniques and predictive analytics. As part of this research, value investing and technical analysis principles will be combined to maximize returns for South African investors while optimizing volatility. As an emerging market, South Africa offers many opportunities for high growth in sectors where other developed countries cannot grow at the same rate. Investing in South African companies with significant growth potential can be extremely rewarding. Although the risk involved is more significant in countries with less developed markets and infrastructure, there is more room for growth in these countries. According to recent research, the offshore market is expected to outperform the local market over the long term; however, short-term investments in the local market will likely be more profitable, as the Johannesburg Stock Exchange is predicted to outperform the S&P500 over the short term. The instabilities in the economy contribute to increased market volatility, which can benefit investors if appropriately utilized. Price prediction and portfolio optimization comprise the two primary components of this methodology. As part of this process, statistics and other predictive modeling techniques will be used to predict the future performance of stocks listed on the Johannesburg Stock Exchange. Following predictive data analysis, Modern Portfolio Theory, based on Markowitz's Mean-Variance Theorem, will be applied to optimize the allocation of assets within an investment portfolio. By combining different assets within an investment portfolio, this optimization method produces a portfolio with an optimal ratio of expected risk to expected return. 
This methodology aims to provide short-term investors with a stock portfolio that offers the best risk-to-return profile for stocks listed on the JSE by combining price prediction and portfolio optimization. Keywords: financial stocks, optimized asset allocation, prediction modelling, South Africa
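The mean-variance allocation step described above can be sketched in closed form for the simplest case, the minimum-variance portfolio under the full-investment constraint. The covariance matrix below is a hypothetical stand-in for one estimated from predicted JSE stock returns, not the study's data.

```python
import numpy as np

# Markowitz sketch: closed-form minimum-variance weights
#   w = (Sigma^-1 1) / (1' Sigma^-1 1)
# under the full-investment constraint sum(w) = 1 (short sales allowed).
# The covariance matrix is a hypothetical 3-asset example.
Sigma = np.array([
    [0.040, 0.006, 0.004],
    [0.006, 0.090, 0.010],
    [0.004, 0.010, 0.160],
])

ones = np.ones(3)
w = np.linalg.solve(Sigma, ones)
w /= ones @ w                  # normalize so the weights sum to 1

port_var = w @ Sigma @ w       # variance of the optimized portfolio
print(w, port_var)
```

By construction, the combined portfolio's variance is no larger than that of any single asset, which is the diversification effect the abstract relies on.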
Procedia PDF Downloads 97
12 Seismic Active Earth Pressure on Retaining Walls with Reinforced Backfill
Authors: Jagdish Prasad Sahoo
Abstract:
The increase in active earth pressure during an earthquake results in sliding, overturning, and tilting of earth-retaining structures. In order to improve the stability of these structures, the soil mass is often reinforced with various types of reinforcements such as metal strips, geotextiles, and geogrids. The stresses generated in the soil mass are transferred to the reinforcements through the interface friction between the earth and the reinforcement, which in turn reduces the lateral earth pressure on the retaining walls. Hence, the evaluation of earth pressure in the presence of seismic forces with the inclusion of reinforcements is important for the design of retaining walls in seismically active zones. In the present analysis, the effect of reinforcing sand backfill with horizontal layers of reinforcements in the form of sheets (geotextiles and geogrids) on reducing the active earth pressure due to earthquake body forces has been studied. For carrying out the analysis, a pseudo-static approach has been adopted, employing the upper-bound theorem of limit analysis in combination with finite elements and linear optimization. The computations have been performed with and without reinforcements for internal friction angles of sand varying from 30° to 45°. The effectiveness of the reinforcement in reducing the active earth pressure on the retaining walls is examined in terms of the active earth pressure coefficient, so that the solutions can be presented in a non-dimensional form. The active earth pressure coefficient is expressed as a function of the internal friction angle of the sand, the interface friction angle between the sand and the reinforcement, the soil-wall interface roughness conditions, and the coefficient of horizontal seismic acceleration.
It has been found that (i) there always exists a certain optimum depth of the reinforcement layers at which the active earth pressure coefficient attains its minimum, and (ii) the active earth pressure coefficient decreases significantly with an increase in the length of the reinforcements only up to a certain length, beyond which a further increase in length hardly causes any reduction in the active earth pressure. The optimum depth of the reinforcement layers and the required length of reinforcements corresponding to this optimum depth have been established. The numerical results developed in this analysis are expected to be useful for the design of retaining walls. Keywords: active, finite elements, limit analysis, pseudo-static, reinforcement
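As a point of reference for the pseudo-static effect quantified above, the classical Mononobe-Okabe expression gives the unreinforced active earth pressure coefficient under a horizontal seismic coefficient. The sketch below is that textbook baseline, not the paper's upper-bound limit-analysis solution, and the parameter values are illustrative.

```python
import math

# Classical Mononobe-Okabe pseudo-static active earth pressure coefficient
# K_AE for cohesionless backfill (textbook baseline, not the upper-bound
# limit-analysis solution of the study above). Angles in radians.
def k_ae(phi, delta=0.0, kh=0.0, kv=0.0, beta=0.0, i=0.0):
    """phi: soil friction angle; delta: wall friction angle; kh, kv:
    horizontal/vertical seismic coefficients; beta: wall back inclination;
    i: backfill slope."""
    theta = math.atan2(kh, 1.0 - kv)   # seismic inertia angle
    num = math.cos(phi - theta - beta) ** 2
    den = (math.cos(theta) * math.cos(beta) ** 2
           * math.cos(delta + beta + theta)
           * (1 + math.sqrt(
               math.sin(phi + delta) * math.sin(phi - theta - i)
               / (math.cos(delta + beta + theta) * math.cos(i - beta))
           )) ** 2)
    return num / den

phi = math.radians(30)
print(k_ae(phi))          # static case: reduces to Rankine's 1/3
print(k_ae(phi, kh=0.2))  # seismic case: larger active pressure
```

In the static limit (kh = 0, smooth vertical wall, horizontal backfill) the expression collapses to the Rankine coefficient (1 − sin φ)/(1 + sin φ), a useful sanity check.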
Procedia PDF Downloads 365
11 Expert Supporting System for Diagnosing Lymphoid Neoplasms Using Probabilistic Decision Tree Algorithm and Immunohistochemistry Profile Database
Authors: Yosep Chong, Yejin Kim, Jingyun Choi, Hwanjo Yu, Eun Jung Lee, Chang Suk Kang
Abstract:
For the past decades, immunohistochemistry (IHC) has been playing an important role in the diagnosis of human neoplasms by helping pathologists make clearer decisions on differential diagnosis, subtyping, personalized treatment planning, and, finally, prognosis prediction. However, the IHC performed on the various tumors of daily practice often shows conflicting results that are very challenging to interpret. Even a comprehensive diagnosis synthesizing clinical, histologic, and immunohistochemical findings can be helpless in some twisted cases. Another important issue is that IHC data are increasing exponentially, and more and more information has to be taken into account. For these reasons, we conceived the idea of developing an expert supporting system to help pathologists make better decisions when diagnosing human neoplasms with IHC results. We devised a probabilistic decision tree algorithm and tested it with real case data of lymphoid neoplasms, in which the IHC profile is more important for making a proper diagnosis than in other human neoplasms. We designed the probabilistic decision tree based on Bayes' theorem, programmed the computational process using MATLAB (The MathWorks, Inc., USA), and prepared an IHC profile database (about 104 disease categories and 88 IHC antibodies) based on the WHO classification by reviewing the literature. The initial probability of each neoplasm was set using the epidemiologic data on lymphoid neoplasms in Korea. With the IHC results of 131 sequentially selected patients, the top three presumptive diagnoses for each case were made and compared with the original diagnoses. After review of the data, 124 out of the 131 cases were used for the final analysis. As a result, the presumptive diagnoses were concordant with the original diagnoses in 118 cases (93.7%). The major reason for the discordant cases was the similarity of the IHC profiles between two or three different neoplasms.
The expert supporting system algorithm presented in this study is at an elementary stage and needs more optimization using more advanced technology, such as deep learning with data from real cases, especially for differentiating T-cell lymphomas. Although it needs more refinement, it may be used to aid pathological decision-making in the future. A further application to determine the IHC antibodies for a certain subset of differential diagnoses might be possible in the near future. Keywords: database, expert supporting system, immunohistochemistry, probabilistic decision tree
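The Bayes'-theorem update at the heart of the system described above can be sketched as a naive posterior over candidate diagnoses: epidemiologic priors are multiplied by the likelihood of each observed IHC marker result and renormalized. All priors and likelihoods below are hypothetical toy numbers, not values from the study's 104-category, 88-antibody database.

```python
# Sketch of a Bayes-theorem update over candidate lymphoma diagnoses.
# Priors and marker likelihoods are invented for illustration only.
priors = {"DLBCL": 0.40, "Follicular lymphoma": 0.25, "MALT lymphoma": 0.35}

# P(marker positive | diagnosis), for a few hypothetical markers
likelihood_pos = {
    "CD20": {"DLBCL": 0.95, "Follicular lymphoma": 0.95, "MALT lymphoma": 0.95},
    "CD10": {"DLBCL": 0.40, "Follicular lymphoma": 0.90, "MALT lymphoma": 0.05},
    "BCL6": {"DLBCL": 0.70, "Follicular lymphoma": 0.90, "MALT lymphoma": 0.10},
}

def posterior(observed):
    """observed: dict marker -> True/False (positive/negative result)."""
    post = dict(priors)
    for marker, result in observed.items():
        for dx in post:
            p = likelihood_pos[marker][dx]
            post[dx] *= p if result else (1.0 - p)
    total = sum(post.values())
    return {dx: v / total for dx, v in post.items()}

# A CD20+/CD10+/BCL6+ profile favors follicular lymphoma here
ranked = sorted(posterior({"CD20": True, "CD10": True, "BCL6": True}).items(),
                key=lambda kv: kv[1], reverse=True)
print(ranked)  # top presumptive diagnoses, most probable first
```

Reporting the top three entries of `ranked` mirrors the "top three presumptive diagnoses" protocol used in the evaluation above.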
Procedia PDF Downloads 224
10 Periodicity of Solutions to Impulsive Equations
Authors: Jin Liang, James H. Liu, Ti-Jun Xiao
Abstract:
It is known that there exist many physical phenomena where abrupt or impulsive changes occur either in the system dynamics, for example, ad-hoc network, or in the input forces containing impacts, for example, the bombardment of space antenna by micrometeorites. There are many other examples such as ultra high-speed optical signals over communication networks, the collision of particles, inventory control, government decisions, interest changes, changes in stock price, etc. These are impulsive phenomena. Hence, as a combination of the traditional initial value problems and the short-term perturbations whose duration can be negligible in comparison with the duration of the process, the systems with impulsive conditions (i.e., impulsive systems) are more realistic models for describing the impulsive phenomenon. Such a situation is also suitable for the delay systems, which include some of the past states of the system. So far, there have been a lot of research results in the study of impulsive systems with delay both in finite and infinite dimensional spaces. In this paper, we investigate the periodicity of solutions to the nonautonomous impulsive evolution equations with infinite delay in Banach spaces, where the coefficient operators (possibly unbounded) in the linear part depend on the time, which are impulsive systems in infinite dimensional spaces and come from the optimal control theory. It was indicated that the study of periodic solutions for these impulsive evolution equations with infinite delay was challenging because the fixed point theorems requiring some compactness conditions are not applicable to them due to the impulsive condition and the infinite delay. We are happy to report that after detailed analysis, we are able to combine the techniques developed in our previous papers, and some new ideas in this paper, to attack these impulsive evolution equations and derive periodic solutions. 
More specifically, by virtue of the related transition operator family (evolution family), we present a Poincaré operator given by the nonautonomous impulsive evolution system with infinite delay, and then show that the operator is a condensing operator with respect to Kuratowski's measure of non-compactness in a phase space by using a lemma of Amann. Finally, we derive periodic solutions from bounded solutions in view of the Sadovskii fixed point theorem. We also present a relationship between the boundedness and the periodicity of the solutions of the nonautonomous impulsive evolution system. The new results obtained here extend earlier results in this area for evolution equations without impulsive conditions or without infinite delay. Keywords: impulsive, nonautonomous evolution equation, optimal control, periodic solution
Procedia PDF Downloads 250
9 Application of Compressed Sensing and Different Sampling Trajectories for Data Reduction of Small Animal Magnetic Resonance Image
Authors: Matheus Madureira Matos, Alexandre Rodrigues Farias
Abstract:
Magnetic Resonance Imaging (MRI) is a vital imaging technique used in both clinical and pre-clinical settings to obtain detailed anatomical and functional information. However, MRI scans can be expensive and time-consuming and often require the administration of anesthetics to keep animals still during the imaging process. Prolonged or repeated exposure to anesthetics can have adverse effects on animals, including physiological alterations and potential toxicity. Minimizing the duration and frequency of anesthesia is, therefore, crucial for the well-being of research animals. In recent years, various sampling trajectories have been investigated to reduce the number of MRI measurements, leading to shorter scanning times and minimizing the duration of the animals' exposure to the effects of anesthetics. Compressed sensing (CS) and sampling trajectories such as Cartesian, spiral, and radial have emerged as powerful tools to reduce MRI data while preserving diagnostic quality. This work aims to apply CS with Cartesian, spiral, and radial sampling trajectories to the reconstruction of abdominal MRI of mice sub-sampled at levels below that defined by the Nyquist theorem. The methodology consists of using a fully sampled reference MRI of a female C57BL/6 mouse, acquired experimentally in a 4.7 Tesla MRI scanner for small animals using spin echo pulse sequences. The image is down-sampled along Cartesian, radial, and spiral sampling paths and then reconstructed by CS. The quality of the reconstructed images is objectively assessed by three quality assessment metrics: RMSE (Root Mean Square Error), PSNR (Peak Signal-to-Noise Ratio), and SSIM (Structural Similarity Index Measure).
The utilization of optimized sampling trajectories and CS technique has demonstrated the potential for a significant reduction of up to 70% of image data acquisition. This result translates into shorter scan times, minimizing the duration and frequency of anesthesia administration and reducing the potential risks associated with it.Keywords: compressed sensing, magnetic resonance, sampling trajectories, small animals
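The RMSE and PSNR metrics used above to grade the reconstructions are straightforward to compute; a minimal sketch for 8-bit image arrays follows (SSIM is omitted here, as it requires local windowed statistics). The arrays are synthetic stand-ins for a reference slice and its reconstruction, not the study's MRI data.

```python
import numpy as np

# RMSE and PSNR for 8-bit images, as used to assess reconstruction
# quality above. The "reference" and "noisy" arrays are synthetic.
def rmse(ref, rec):
    return float(np.sqrt(np.mean((ref.astype(float) - rec.astype(float)) ** 2)))

def psnr(ref, rec, peak=255.0):
    e = rmse(ref, rec)
    return float("inf") if e == 0 else 20 * np.log10(peak / e)

rng = np.random.default_rng(0)
reference = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
# Simulated imperfect reconstruction: small uniform perturbation
noisy = np.clip(reference.astype(int) + rng.integers(-5, 6, size=(64, 64)),
                0, 255).astype(np.uint8)

print(rmse(reference, noisy), psnr(reference, noisy))
print(psnr(reference, reference))  # identical images give infinite PSNR
```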
Procedia PDF Downloads 73
8 Clustering Ethno-Informatics of Naming Village in Java Island Using Data Mining
Authors: Atje Setiawan Abdullah, Budi Nurani Ruchjana, I. Gede Nyoman Mindra Jaya, Eddy Hermawan
Abstract:
Ethnoscience examines culture from a scientific perspective, which may help to understand how people develop various forms of knowledge and belief, initially focusing on the ecology and history of the contributions made there. One of the areas studied in ethnoscience is ethno-informatics, the application of informatics to culture. In this study, the informatics tool used is data mining, a process for automatically extracting knowledge from large databases to obtain interesting patterns and, from them, knowledge. The cultural application is described by the village-naming database for the island of Java, obtained from the Geospatial Information Agency of Indonesia (BIG) in 2014. The purposes of this study are: first, to classify the naming of villages on the island of Java based on the word structure of the village name, including the prefix of the word, the syllables contained, and the complete word; second, to classify the meanings of village names based on specific categories, as well as their role in the behavioral characteristics of the community; third, to visualize village naming on a location map, to see the similarity of village naming in each province. In this research, we have developed two theorems: an area theorem, obtained by collecting the intersections of village names in each province on the island of Java, and a wedge composition theorem on the sets of provinces in Java, used to view the peculiarities of a study location. The methodology of this study is based on the Knowledge Discovery in Databases (KDD) process of data mining, which includes preprocessing, data mining, and postprocessing. The results show that the Javanese community prioritizes merit in conducting life, always works hard to achieve a more prosperous life, and values water and environmental sustainability.
Village naming in adjacent provinces shows a high degree of similarity, and the provinces influence each other. Cultural similarity among Central Java, East Java, and West Java-Banten is high, whereas between Jakarta and Yogyakarta it is low. This research captures the cultural character of communities in the meanings of village names on the island of Java; this character is expected to serve as a guide to the daily behavior of the people of Java. Keywords: ethnoscience, ethno-informatics, data mining, clustering, Java island culture
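The word-structure classification described above (grouping names by prefix) can be sketched with a simple counter. The prefix list and toy village names below are illustrative examples, not the BIG database or the study's actual category scheme.

```python
from collections import Counter

# Sketch of prefix-based classification of village names: assign each
# name to the first matching prefix and count each group. The prefixes
# and names below are illustrative toys.
PREFIXES = ("karang", "sido", "suka", "banyu")

def classify(name):
    name = name.lower()
    for p in PREFIXES:
        if name.startswith(p):
            return p
    return "other"

villages = ["Karanganyar", "Karangrejo", "Sidomulyo", "Sidoarjo",
            "Sukamaju", "Banyuwangi", "Ngawi"]
counts = Counter(classify(v) for v in villages)
print(counts)
```

In the KDD pipeline above, such per-prefix counts per province would feed the clustering and the province-intersection ("area theorem") analysis.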
Procedia PDF Downloads 283
7 Modeling Standpipe Pressure Using Multivariable Regression Analysis by Combining Drilling Parameters and a Herschel-Bulkley Model
Authors: Seydou Sinde
Abstract:
The aim of this paper is to formulate mathematical expressions that can be used to estimate the standpipe pressure (SPP). The developed formulas take into account the main factors that, directly or indirectly, affect the behavior of SPP values. Fluid rheology and well hydraulics are some of these essential factors. Mud plastic viscosity, yield point, flow power-law index, consistency index, flow rate, and drillstring and annular geometries are represented by the frictional pressure (Pf), which is one of the input independent parameters and is calculated, in this paper, using the Herschel-Bulkley rheological model. Other input independent parameters include the rate of penetration (ROP), the applied load or weight on bit (WOB), bit revolutions per minute (RPM), bit torque (TRQ), and the hole inclination and direction coupled in the hole curvature or dogleg (DL). The technique of repeating variables and the Buckingham Pi theorem are used to reduce the input independent parameters to the dimensionless revolutions per minute (RPMd), the dimensionless torque (TRQd), and the dogleg, which is already in the dimensionless form of radians. Multivariable linear and polynomial regression using PTC Mathcad Prime 4.0 is applied to analyze and determine the exact relationships between the dependent parameter, SPP, and the remaining three dimensionless groups. Three models proved sufficiently satisfactory to estimate the standpipe pressure: multivariable linear regression model 1, containing three regression coefficients, for vertical wells; multivariable linear regression model 2, containing four regression coefficients, for deviated wells; and a multivariable polynomial quadratic regression model, containing six regression coefficients, for both vertical and deviated wells.
Although linear regression model 2 (with four coefficients) is relatively more complex and contains an additional term compared with linear regression model 1 (with three coefficients), the former did not really add significant improvement over the latter except for some minor values. Thus, the effect of the hole curvature or dogleg is insignificant and can be omitted from the input independent parameters without a significant loss of accuracy. The polynomial quadratic regression model is considered the most accurate model due to its relatively higher accuracy in most of the cases. Data from nine wells in the Middle East were used to run the developed models, with satisfactory results provided by all of them, even if the multivariable polynomial quadratic regression model gave the best and most accurate results. The development of these models is useful not only to monitor and predict, with accuracy, the values of SPP but also to check early for the integrity of the well hydraulics and to take corrective actions should any unexpected problems appear, such as pipe washouts, jet plugging, excessive mud losses, fluid gains, kicks, etc. Keywords: standpipe, pressure, hydraulics, nondimensionalization, parameters, regression
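The fitting step behind "model 1" above (three regression coefficients: an intercept plus the two dimensionless groups RPMd and TRQd) amounts to ordinary least squares. The sketch below uses synthetic data generated from known coefficients plus noise, not the nine-well field data set, and standard least squares rather than Mathcad's solver.

```python
import numpy as np

# Least-squares sketch of a three-coefficient linear model,
#   SPP = b0 + b1 * RPMd + b2 * TRQd,
# fitted to synthetic data generated from known coefficients plus noise.
rng = np.random.default_rng(1)
n = 50
RPMd = rng.uniform(0.5, 2.0, n)            # dimensionless RPM (synthetic)
TRQd = rng.uniform(0.1, 1.0, n)            # dimensionless torque (synthetic)
true_coeffs = np.array([120.0, 35.0, 80.0])  # b0, b1, b2 (ground truth)
SPP = (true_coeffs[0] + true_coeffs[1] * RPMd + true_coeffs[2] * TRQd
       + rng.normal(0.0, 1.0, n))          # add measurement noise

X = np.column_stack([np.ones(n), RPMd, TRQd])   # design matrix
coeffs, *_ = np.linalg.lstsq(X, SPP, rcond=None)
print(coeffs)  # should roughly recover the ground-truth coefficients
```

The four-coefficient "model 2" simply adds a dogleg column to the design matrix, and the quadratic model adds squared and cross terms.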
Procedia PDF Downloads 84
6 A Proposal for an Excessivist Social Welfare Ordering
Authors: V. De Sandi
Abstract:
In this paper, we characterize a class of rank-weighted social welfare orderings that we call ”Excessivist.” The Excessivist social welfare ordering (eSWO) judges incomes above a fixed threshold θ as detrimental to society. To accomplish this, the identification of a richness or affluence line is necessary; we employ a fixed, exogenous line of excess. We define an eSWF in the form of a weighted sum of individual incomes. This requires introducing n+1 vectors of weights, one for each possible number of individuals below the threshold. To do this, the paper introduces a slight modification of the rank-weighted class of social welfare functions: in our excessivist social welfare ordering, we allow the weights to be both positive (for individuals below the line) and negative (for individuals above it). We then introduce ethical concerns through an axiomatic approach. The following axioms are required: continuity above and below the threshold (Ca, Cb), anonymity (A), absolute aversion to excessive richness (AER), Pigou-Dalton positive-weights-preserving transfer (PDwpT), sign-rank-preserving full comparability (SwpFC), and strong Pareto below the threshold (SPb). Ca and Cb require that small changes in two income distributions above and below θ do not lead to changes in their ordering. AER says that if two distributions are identical in every respect except for one individual above the threshold, who is richer in the first, then the second should be preferred by society. This means that we do not care about the waste of resources above the threshold; the priority is the reduction of excessive income. According to PDwpT, a transfer from a better-off individual to a worse-off individual, regardless of their positions relative to the threshold and without reversing their ranks, leads to an improved distribution if the number of individuals below the threshold is the same after the transfer or has increased.
SPb holds only for individuals below the threshold. The weakening of strong Pareto and our ethics need to be justified; we support them through the notion of comparative egalitarianism and of income as a source of power. SwpFC is necessary to ensure that, following a positive affine transformation, an individual does not become excessively rich in only one distribution, thereby reversing the ordering of the distributions. Given the axioms above, we characterize the class of eSWOs, obtaining the following result through proofs by contradiction and exhaustion: Theorem 1. A social welfare ordering satisfies the axioms of continuity above and below the threshold, anonymity, sign-rank-preserving full comparability, aversion to excessive richness, Pigou-Dalton positive-weights-preserving transfer, and strong Pareto below the threshold if and only if it is an Excessivist social welfare ordering. A discussion of the implementation of different threshold lines, reviewing the primary contributions in this field, follows. What commonly implemented social welfare functions have been overlooking is concern for extreme richness at the top. The characterization of the Excessivist social welfare ordering, given the axioms above, aims to fill this gap. Keywords: comparative egalitarianism, excess income, inequality aversion, social welfare ordering
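A toy evaluation in the spirit of the ordering described above: positive rank weights for incomes below the exogenous line θ (poorer ranks weighted more) and a negative weight on income in excess of θ. The particular weight scheme and numbers below are an illustrative choice, not the axiomatized eSWO.

```python
# Toy excessivist-style rank-weighted evaluation. Incomes at or below
# theta receive positive rank weights (poorest weighted most); income
# above theta counts negatively. Weights are illustrative only.
THETA = 100.0
PENALTY = -1.0  # weight per unit of income above theta

def welfare(incomes, theta=THETA):
    below = sorted(x for x in incomes if x <= theta)
    above = [x for x in incomes if x > theta]
    k = len(below)
    # rank weights k, k-1, ..., 1 from poorest to richest below theta
    w_below = sum((k - r) * x for r, x in enumerate(below))
    w_above = sum(PENALTY * (x - theta) for x in above)  # excess is harmful
    return w_below + w_above

equal = [50, 50, 50, 50]
excessive = [10, 10, 10, 170]  # same total income, one person in excess
print(welfare(equal), welfare(excessive))
```

Note the AER flavor: making the rich individual in `excessive` richer only lowers the evaluation, while the equal distribution of the same total is strictly preferred.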
Procedia PDF Downloads 63