Search results for: squeeze theorem

58 Open Forging of Cylindrical Blanks Subjected to Lateral Instability

Authors: A. H. Elkholy, D. M. Almutairi

Abstract:

The successful and efficient execution of a forging process depends on the correct analysis of loading and metal flow of blanks. This paper investigates the Upper Bound Technique (UBT) and its application to the analysis of the open forging process when a possibility of blank bulging exists. The UBT is one of the energy-rate minimization methods for the solution of metal forming processes based on the upper bound theorem. In this regard, the kinematically admissible velocity field is obtained by minimizing the total forging energy rate. A computer program is developed in this research to implement the UBT. The significant advantages of this method are its speed of execution, while maintaining a fairly high degree of accuracy, and its wide prediction capability. The information from this analysis is useful for the design of forging processes and dies. Results for the prediction of forging loads and stresses, metal flow, and surface profiles, with the attendant benefits in terms of press selection and blank preform design, are outlined in some detail. The obtained predictions are ready for comparison with both laboratory and industrial results.

Keywords: forging, upper bound technique, metal forming, forging energy, forging die/platen

Procedia PDF Downloads 254
57 Theory and Practice of Wavelets in Signal Processing

Authors: Jalal Karam

Abstract:

The methods of the Fourier, Laplace, and Wavelet Transforms provide transfer functions and relationships between the input and output signals of linear time-invariant systems. This paper shows the equivalence among these three methods, in each case presenting an application of the appropriate transform (Fourier, Laplace, or Wavelet) to the convolution theorem. In addition, it is shown that the same holds for a direct integration method. The biorthogonal wavelets Bior3.5 and Bior3.9 are examined, and the zero distributions of their associated filter polynomials are located. This paper also presents the significance of utilizing wavelets as effective tools in processing speech signals for common multimedia applications in general, and for recognition and compression in particular. Theoretically and practically, wavelets have proved to be effective and competitive. The practical use of the Continuous Wavelet Transform (CWT) in the processing and analysis of speech is then presented, along with explanations of how the human ear can be thought of as a natural wavelet transformer of speech. This motivates a variety of approaches for applying the CWT to many paradigms for analysing speech, sound, and music. For perception, the flexibility of implementation of this transform allows the construction of numerous scales, and we include two of them. Results for speech recognition and speech compression are then included.
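
A minimal sketch of the two computational ingredients mentioned above, assuming Python with NumPy and PyWavelets available: it locates the zeros of the polynomials associated with the Bior3.5 and Bior3.9 decomposition low-pass filters, and runs a continuous wavelet transform on a synthetic chirp standing in for a speech frame. The choice of the low-pass filter, the Morlet mother wavelet, the sampling rate, and the chirp signal are assumptions for illustration, not details from the paper.

```python
import numpy as np
import pywt

# Locate the zeros of the polynomials associated with the decomposition
# low-pass filters of the two biorthogonal wavelets discussed above.
for name in ("bior3.5", "bior3.9"):
    w = pywt.Wavelet(name)
    zeros = np.roots(w.dec_lo)          # filter taps as polynomial coefficients
    print(name, "low-pass filter zeros:", np.round(zeros, 4))
    print(name, "zero magnitudes:      ", np.round(np.abs(zeros), 4))

# Continuous wavelet transform of a toy "speech-like" chirp signal.
fs = 8000                                        # assumed sampling rate (Hz)
t = np.arange(0, 0.25, 1 / fs)
signal = np.sin(2 * np.pi * (200 + 800 * t) * t)  # synthetic chirp
scales = np.arange(1, 64)
coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)
print("CWT coefficient matrix shape:", coeffs.shape)   # (n_scales, n_samples)
```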

Keywords: continuous wavelet transform, biorthogonal wavelets, speech perception, recognition and compression

Procedia PDF Downloads 376
56 Vibration of Nanobeam Subjected to Constant Magnetic Field and Ramp-Type Thermal Loading under Non-Fourier Heat Conduction Law of Lord-Shulman

Authors: Hamdy M. Youssef

Abstract:

In this work, the usual Euler-Bernoulli nanobeam has been modeled in the context of the Lord-Shulman theory of thermoelasticity, which incorporates a non-Fourier heat conduction law. The nanobeam is subjected to a constant magnetic field and ramp-type thermal loading. The Laplace transform has been applied to the governing equations, and the solutions have been obtained by a direct approach. The inversions of the Laplace transform have been calculated numerically by using the Tzou approximation method. The solutions have been applied to a nanobeam made of silicon nitride. The distributions of the temperature increment, lateral deflection, strain, stress, and strain-energy density are represented in figures for different values of the magnetic field intensity and the ramp-time heat parameter. The magnetic field intensity and the ramp-time heat parameter have significant effects on all the studied functions, and they can be used as tuners to control the energy generated in the nanobeam.
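
As an illustration of the numerical inversion step described above, the sketch below implements the Riemann-sum approximation commonly attributed to Tzou (with the usual choice κt ≈ 4.7) and checks it on a transform with a known inverse; the truncation level N and the test transform are assumptions, and the paper's actual field variables are not reproduced here.

```python
import numpy as np

def tzou_invert(F, t, kappa_t=4.7, N=500):
    """Riemann-sum approximation of the inverse Laplace transform, the
    form commonly attributed to Tzou in the thermoelasticity literature.
    F is the transform (callable of a complex argument), t > 0 the time."""
    kappa = kappa_t / t
    n = np.arange(1, N + 1)
    terms = ((-1.0) ** n) * F(kappa + 1j * n * np.pi / t)
    return (np.exp(kappa * t) / t) * (0.5 * np.real(F(kappa))
                                      + np.real(terms.sum()))

# Sanity check on a transform with a known inverse:
# F(s) = 1 / (s + 1)  <-->  f(t) = exp(-t)
F = lambda s: 1.0 / (s + 1.0)
for t in (0.5, 1.0, 2.0):
    print(t, tzou_invert(F, t), np.exp(-t))
```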

Keywords: nanobeam, vibration, constant magnetic field, ramp-type thermal loading, non-Fourier heat conduction law

Procedia PDF Downloads 86
55 From Data Processing to Experimental Design and Back Again: A Parameter Identification Problem Based on FRAP Images

Authors: Stepan Papacek, Jiri Jablonsky, Radek Kana, Ctirad Matonoha, Stefan Kindermann

Abstract:

FRAP (Fluorescence Recovery After Photobleaching) is a widely used measurement technique for determining the mobility of fluorescent molecules within living cells. While the experimental setup and protocol for FRAP experiments are usually fixed, the data processing part is still under development. In this paper, we formulate and solve a data selection problem which enhances the processing of FRAP images. We introduce the concept of the irrelevant data set, i.e., the data which contribute almost nothing to reducing the confidence intervals of the estimated parameters and can thus be neglected. Based on sensitivity analysis, we both solve the problem of optimal data space selection and find specific conditions for optimizing an important experimental design factor, e.g., the radius of the bleach spot. Finally, a theorem stating that the integrated data approach is less precise than the full data case is proven; i.e., we claim that the data set represented by the FRAP recovery curve leads to larger confidence intervals than the spatio-temporal (full) data.
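
The sensitivity-based reasoning above can be sketched on a toy problem: assume a single-exponential recovery model F(t; k, A) = A(1 - e^(-kt)) as a stand-in for the full FRAP model, build finite-difference sensitivities, and compare the Gauss-Newton confidence-interval half-widths obtained from a rich data set with those from a sparse one. The model, parameter values, noise level, and sampling grids are all assumptions for illustration.

```python
import numpy as np

# Assumed toy recovery model (a stand-in for the full FRAP model):
# F(t; k, A) = A * (1 - exp(-k t))
def model(t, theta):
    k, A = theta
    return A * (1.0 - np.exp(-k * t))

def sensitivities(t, theta, h=1e-6):
    """Finite-difference sensitivity matrix S[i, j] = dF(t_i)/dtheta_j."""
    S = np.zeros((t.size, len(theta)))
    for j in range(len(theta)):
        up, dn = np.array(theta, float), np.array(theta, float)
        up[j] += h
        dn[j] -= h
        S[:, j] = (model(t, up) - model(t, dn)) / (2.0 * h)
    return S

theta_hat = (0.8, 1.0)          # assumed estimated parameters (k, A)
sigma = 0.02                    # assumed measurement noise level

# Compare a rich sampling (standing in for spatio-temporal data) with a
# sparse sampling (standing in for the integrated recovery curve).
for label, t in (("dense data ", np.linspace(0.1, 10.0, 400)),
                 ("sparse data", np.linspace(0.1, 10.0, 20))):
    S = sensitivities(t, theta_hat)
    cov = sigma ** 2 * np.linalg.inv(S.T @ S)    # Gauss-Newton covariance approx.
    half_widths = 1.96 * np.sqrt(np.diag(cov))   # ~95% confidence half-widths
    print(label, "CI half-widths (k, A):", half_widths)
```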

Keywords: FRAP, inverse problem, parameter identification, sensitivity analysis, optimal experimental design

Procedia PDF Downloads 250
54 A Theorem Related to Sample Moments and Two Types of Moment-Based Density Estimates

Authors: Serge B. Provost

Abstract:

Numerous statistical inference and modeling methodologies are based on sample moments rather than the actual observations. A result justifying the validity of this approach is introduced. More specifically, it will be established that, given the first n moments of a sample of size n, one can recover the original n sample points. This implies that a sample of size n and its first n associated moments contain precisely the same amount of information. However, it is efficient to make use of a limited number of initial moments, as most of the relevant distributional information is included in them. Two types of density estimation techniques that rely on such moments will be discussed. The first one expresses a density estimate as the product of a suitable base density and a polynomial adjustment whose coefficients are determined by equating the moments of the density estimate to the sample moments. The second one assumes that the derivative of the logarithm of a density function can be represented as a rational function. This gives rise to a system of linear equations involving sample moments; the density estimate is then obtained by solving a differential equation. Unlike kernel density estimation, these methodologies are ideally suited to model ‘big data’ as they only require a limited number of moments, irrespective of the sample size. What is more, they produce simple closed-form expressions that are amenable to algebraic manipulations. They also turn out to be more accurate, as will be shown in several illustrative examples.
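
A minimal sketch of the first technique (base density times polynomial adjustment), under assumed choices: gamma-distributed toy data, a normal base density fitted to the sample, and a degree-4 adjustment. Matching the first few moments reduces to a small linear system; note that such an estimate is not guaranteed to be nonnegative everywhere, which is one reason this is only an illustration.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

rng = np.random.default_rng(0)
sample = rng.gamma(shape=3.0, scale=1.0, size=500)    # assumed toy data

d = 4                                                  # adjustment degree
m = np.array([np.mean(sample ** j) for j in range(d + 1)])   # sample moments

# Base density: a normal fitted to the sample (one simple, common choice).
base = stats.norm(loc=sample.mean(), scale=sample.std(ddof=1))
def base_moment(n):
    val, _ = quad(lambda x: x ** n * base.pdf(x), -np.inf, np.inf)
    return val

# Matching the moments of g(x) * sum_k c_k x^k to the sample moments gives
# the linear system A c = m with A[j, k] = integral of x^(j+k) g(x) dx.
A = np.array([[base_moment(j + k) for k in range(d + 1)] for j in range(d + 1)])
c = np.linalg.solve(A, m)

def density_estimate(x):
    return base.pdf(x) * np.polynomial.polynomial.polyval(x, c)

xs = np.linspace(0.0, 12.0, 5)
print("estimate :", np.round(density_estimate(xs), 4))
print("true pdf :", np.round(stats.gamma(a=3.0).pdf(xs), 4))
```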

Keywords: density estimation, log-density, polynomial adjustments, sample moments

Procedia PDF Downloads 123
53 End-to-End Pyramid Based Method for Magnetic Resonance Imaging Reconstruction

Authors: Omer Cahana, Ofer Levi, Maya Herman

Abstract:

Magnetic Resonance Imaging (MRI) is a lengthy medical scan; its duration stems mainly from a long acquisition time, which in turn is dictated by the traditional sampling theorem defining a lower bound on the sampling rate. However, it is still possible to accelerate the scan by using a different approach, such as Compressed Sensing (CS) or Parallel Imaging (PI). These two complementary methods can be combined to achieve a faster scan with high-fidelity imaging. To achieve that, two conditions must be satisfied: i) the signal must be sparse under a known transform domain, and ii) the sampling method must be incoherent. In addition, a nonlinear reconstruction algorithm must be applied to recover the signal. While the rapid advances in Deep Learning (DL) have had tremendous success in various computer vision tasks, the field of MRI reconstruction is still in its early stages. In this paper, we present an end-to-end method for MRI reconstruction from k-space to image. Our method contains two parts. The first is sensitivity map estimation (SME), a small yet effective network that can easily be extended to a variable number of coils. The second is reconstruction, a top-down architecture with lateral connections developed for building high-level refinement at all scales. Our method holds the state of the art on the fastMRI benchmark, which is the largest, most diverse benchmark for MRI reconstruction.

Keywords: magnetic resonance imaging, image reconstruction, pyramid network, deep learning

Procedia PDF Downloads 53
52 F-VarNet: Fast Variational Network for MRI Reconstruction

Authors: Omer Cahana, Maya Herman, Ofer Levi

Abstract:

Magnetic resonance imaging (MRI) is a lengthy medical scan; its duration stems mainly from a long acquisition time, which is dictated by the traditional sampling theorem defining a lower bound on the sampling rate. However, it is still possible to accelerate the scan by using a different approach, such as compressed sensing (CS) or parallel imaging (PI). These two complementary methods can be combined to achieve a faster scan with high-fidelity imaging. In order to achieve that, two properties have to hold: i) the signal must be sparse under a known transform domain, and ii) the sampling method must be incoherent. In addition, a nonlinear reconstruction algorithm needs to be applied to recover the signal. While rapid advances in deep learning (DL) have demonstrated tremendous success in various computer vision tasks, the field of MRI reconstruction is still at an early stage. In this paper, we present an extension of the state-of-the-art model in MRI reconstruction, VarNet. We extend VarNet with dilated convolutions at different scales, which enlarges the receptive field to capture more contextual information. Moreover, we simplified the sensitivity map estimation (SME) module, which contains many layers unnecessary for this task. These improvements have shown significant decreases in computation cost as well as higher accuracy.
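
A tiny sketch of the dilated-convolution idea, assuming PyTorch; the channel counts, kernel sizes, and dilation rates are illustrative and are not the F-VarNet configuration.

```python
import torch
import torch.nn as nn

# Stacking 3x3 convolutions with increasing dilation enlarges the receptive
# field without adding parameters per layer. (Illustrative only.)
block = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1, dilation=1),
    nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=2, dilation=2),
    nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=4, dilation=4),
    nn.ReLU(),
)
x = torch.randn(1, 1, 320, 320)       # e.g. one coil-combined image
print(block(x).shape)                 # spatial size preserved by the padding
# Effective receptive field of the stack: 3 + 2*2 + 2*4 = 15 pixels per side.
```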

Keywords: MRI, deep learning, variational network, computer vision, compress sensing

Procedia PDF Downloads 109
51 A New Fuzzy Fractional Order Model of Transmission of Covid-19 With Quarantine Class

Authors: Asma Hanif, A. I. K. Butt, Shabir Ahmad, Rahim Ud Din, Mustafa Inc

Abstract:

This paper is devoted to a study of a fuzzy fractional mathematical model describing the transmission dynamics of the infectious disease Covid-19. The proposed dynamical model consists of susceptible, exposed, symptomatic, asymptomatic, quarantined, hospitalized, and recovered compartments. In this study, we deal with the fuzzy fractional model defined in Caputo’s sense. We show the positivity of the state variables, i.e., that all the state variables representing the different compartments of the model remain positive. Using the Gronwall inequality, we show that the solution of the model is bounded. Using the notion of the next-generation matrix, we find the basic reproduction number of the model. We demonstrate the local and global stability of the equilibrium point by using the approach of Castillo-Chavez and Lyapunov theory with the LaSalle invariance principle, respectively. We present results that establish the existence and uniqueness of the solution of the considered model through the fixed point theorems of Schauder and Banach. Using the fuzzy hybrid Laplace method, we obtain the approximate solution of the proposed model. The results are presented graphically via MATLAB-17.
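
The next-generation-matrix step can be illustrated on a much smaller stand-in model; the sketch below (a plain SEIR-type system with assumed parameters, not the seven-compartment fuzzy fractional model above) computes R₀ as the spectral radius of F V⁻¹.

```python
import numpy as np

# Assumed illustrative parameters: beta (transmission), sigma (incubation),
# gamma (recovery), mu (natural death).
beta, sigma, gamma, mu = 0.5, 0.2, 0.1, 0.01

# New-infection matrix F and transition matrix V, linearised at the
# disease-free equilibrium for the infected compartments (E, I).
F = np.array([[0.0, beta],
              [0.0, 0.0]])
V = np.array([[sigma + mu, 0.0],
              [-sigma, gamma + mu]])

K = F @ np.linalg.inv(V)                 # next-generation matrix
R0 = max(abs(np.linalg.eigvals(K)))      # spectral radius
print("R0 =", R0)
# Closed form for this toy model: beta * sigma / ((sigma + mu) * (gamma + mu))
print("check:", beta * sigma / ((sigma + mu) * (gamma + mu)))
```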

Keywords: Caputo fractional derivative, existence and uniqueness, gronwall inequality, Lyapunov theory

Procedia PDF Downloads 74
50 The Martingale Options Price Valuation for European Puts Using Stochastic Differential Equation Models

Authors: H. C. Chinwenyi, H. D. Ibrahim, F. A. Ahmed

Abstract:

In modern financial mathematics, valuing derivatives such as options is often a tedious task. This is simply because their fair and correct future prices are probabilistic. This paper examines three different Stochastic Differential Equation (SDE) models in finance: the Constant Elasticity of Variance (CEV) model, the Black-Karasinski model, and the Heston model. The martingale option price valuation formulas for these three models were obtained using the replicating portfolio method. Also, the numerical solution of the derived martingale option price valuation equations for the SDE models was carried out using the Monte Carlo method, implemented in MATLAB. Furthermore, results from numerical examples using published All-Share Index data from the Nigeria Stock Exchange (NSE) show the effect of an increase in the underlying asset value (stock price) on the value of the European put option for these models. From the results obtained, we see that an increase in the stock price yields a decrease in the value of the European put option price. Hence, this guides the option holder in making a sound decision about not exercising the right on the option.
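
A minimal sketch of the Monte Carlo valuation step, assuming a CEV-type dynamic under the risk-neutral measure, an Euler-Maruyama discretisation, and illustrative parameters rather than the NSE data used in the paper.

```python
import numpy as np

# Risk-neutral Monte Carlo valuation of a European put under a CEV-type SDE
#   dS = r S dt + sigma S^gamma dW   (all parameter values assumed).
rng = np.random.default_rng(42)

S0, K, r, sigma, gamma_, T = 100.0, 100.0, 0.05, 0.8, 0.7, 1.0
n_paths, n_steps = 200_000, 250
dt = T / n_steps

S = np.full(n_paths, S0)
for _ in range(n_steps):
    dW = rng.standard_normal(n_paths) * np.sqrt(dt)
    S = S + r * S * dt + sigma * np.maximum(S, 0.0) ** gamma_ * dW
    S = np.maximum(S, 0.0)                    # keep the price non-negative

payoff = np.maximum(K - S, 0.0)               # European put payoff
price = np.exp(-r * T) * payoff.mean()
stderr = np.exp(-r * T) * payoff.std(ddof=1) / np.sqrt(n_paths)
print(f"put price ~ {price:.4f} +/- {1.96 * stderr:.4f}")
```

Re-running the sketch with a larger initial price S0 reproduces, qualitatively, the finding above that the European put value decreases as the underlying asset value increases.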

Keywords: equivalent martingale measure, European put option, girsanov theorem, martingales, monte carlo method, option price valuation formula

Procedia PDF Downloads 103
49 Role of Additional Food Resources in an Ecosystem with Two Discrete Delays

Authors: Ankit Kumar, Balram Dubey

Abstract:

This study proposes a three-dimensional prey-predator model with additional food provided to predator individuals, incorporating a gestation delay in predators and a delay in supplying the additional food to predators. It is assumed that the interaction between prey and predator follows a Holling type-II functional response. We discuss the steady states and their local and global asymptotic behavior for the non-delayed system. The Hopf-bifurcation phenomenon with respect to different parameters has also been studied. We obtain a range of the predator's tendency factor towards the provided additional food in which periodic solutions occur in the system. We show that the oscillations can be controlled by increasing the tendency factor. Moreover, the existence of periodic solutions via Hopf-bifurcation is shown with respect to both delays. Our analysis shows that both delays play an important role in governing the dynamics of the system; they can change stable behavior into unstable behavior. The direction and stability of the Hopf-bifurcation are also investigated through normal form theory and the center manifold theorem. Lastly, some numerical simulations and graphical illustrations have been carried out to validate our analytical findings.

Keywords: additional food, gestation delay, Hopf-bifurcation, prey-predator

Procedia PDF Downloads 97
48 Investigating Safe Operation Condition for Iterative Learning Control under Load Disturbances Effect in Singular Values

Authors: Muhammad A. Alsubaie

Abstract:

Iterative learning control frameworks designed in a state-feedback structure often lack an investigation of load disturbance effects. The presented work discusses a previously designed controller, highlights the disturbance problem, and finds new conditions, using the singular value principle, that assure safe operation with error convergence and reference tracking under the influence of load disturbance. It is known that periodic disturbances can be represented by a delay model in a positive feedback loop acting on the system input. This model can be manipulated by isolating the delay model and finding a controller for the overall system around the delay model to remedy the periodic disturbances using the small gain theorem. The overall system is the basis for the control design and the load disturbance investigation. The major finding of this work is a load disturbance condition which clearly sets out safe operating conditions under the influence of load disturbances, such that the error tends to nearly zero as the system keeps operating trial after trial.

Keywords: iterative learning control, singular values, state feedback, load disturbance

Procedia PDF Downloads 139
47 An Empirical Study on Switching Activation Functions in Shallow and Deep Neural Networks

Authors: Apoorva Vinod, Archana Mathur, Snehanshu Saha

Abstract:

Though there exists a plethora of Activation Functions (AFs) used in single and multiple hidden layer Neural Networks (NN), their behavior has always raised curiosity, whether used in combination or singly. The popular AFs (Sigmoid, ReLU, and Tanh) have performed prominently well for shallow and deep architectures. Most of the time, AFs are used singly in multi-layered NN, and, to the best of our knowledge, their performance has never been studied and analyzed deeply when used in combination. In this manuscript, we experiment with multi-layered NN architectures (both shallow and deep architectures; Convolutional NN and VGG16) and investigate how well the network responds to using two different AFs (Sigmoid-Tanh, Tanh-ReLU, ReLU-Sigmoid) alternately, against a traditional, single-AF (Sigmoid-Sigmoid, Tanh-Tanh, ReLU-ReLU) combination. Our results show that using two different AFs, the network achieves better accuracy, substantially lower loss, and faster convergence on 4 computer vision (CV) and 15 non-CV (NCV) datasets. When using different AFs, not only was the accuracy greater by 6-7%, but we also accomplished convergence twice as fast. We present a case study to investigate the probability of networks suffering vanishing and exploding gradients when using two different AFs. Additionally, we theoretically showed that a composition of two or more AFs satisfies the Universal Approximation Theorem (UAT).
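
A small sketch of the alternating-activation idea, assuming PyTorch and a toy random-data task; the layer widths, depth, and training budget are illustrative, and the printed loss values carry no experimental meaning.

```python
import torch
import torch.nn as nn

def mlp(activations, width=64, in_dim=20, out_dim=2):
    """Stack hidden layers whose activation functions cycle through
    `activations`, e.g. (nn.Tanh, nn.ReLU) for an alternating Tanh-ReLU net."""
    layers, dim = [], in_dim
    for i in range(4):                     # four hidden layers for illustration
        layers += [nn.Linear(dim, width), activations[i % len(activations)]()]
        dim = width
    layers.append(nn.Linear(dim, out_dim))
    return nn.Sequential(*layers)

# Single-AF baseline vs. an alternating combination, as in the study above.
single = mlp((nn.Tanh,))                   # Tanh-Tanh-Tanh-Tanh
combo = mlp((nn.Tanh, nn.ReLU))            # Tanh-ReLU-Tanh-ReLU

x = torch.randn(128, 20)                   # assumed toy batch
y = torch.randint(0, 2, (128,))
loss_fn = nn.CrossEntropyLoss()
for name, net in (("single", single), ("alternating", combo)):
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(100):                   # a few steps only, for illustration
        opt.zero_grad()
        loss = loss_fn(net(x), y)
        loss.backward()
        opt.step()
    print(name, "final loss:", float(loss))
```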

Keywords: activation function, universal approximation function, neural networks, convergence

Procedia PDF Downloads 122
46 Reconfigurable Consensus Achievement of Multi Agent Systems Subject to Actuator Faults in a Leaderless Architecture

Authors: F. Amirarfaei, K. Khorasani

Abstract:

In this paper, reconfigurable consensus achievement of a team of agents with marginally stable linear dynamics and a single input channel is considered. The control algorithm is based on a first-order linear protocol. After the occurrence of a loss-of-effectiveness (LOE) fault in one of the actuators, the control gain is redesigned, using the imperfect information on actuator effectiveness supplied by the fault detection and identification module, so that consensus is still reached. The idea is based on modeling the change in effectiveness as a change in the Laplacian matrix. Then, as special cases of this class of systems, teams of single integrators as well as double integrators are considered, and their behavior subject to a LOE fault is examined. The well-known relative-measurements consensus protocol is applied to a leaderless team of single integrator as well as double integrator systems, and the Gersgorin disk theorem is employed to determine whether fault occurrence affects system stability and team consensus achievement. The analyses show that a loss-of-effectiveness fault in the actuator(s) of integrator systems affects neither system stability nor consensus achievement.
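
The Gersgorin-disk argument can be sketched on a small example, assuming a four-agent undirected graph and a simple row-scaling model of a loss-of-effectiveness fault (both assumptions are made for illustration only):

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)        # adjacency of a 4-agent team
L = np.diag(A.sum(axis=1)) - A                   # graph Laplacian

def gershgorin(M):
    """Gershgorin disks (center, radius) for each row of M."""
    centers = np.diag(M)
    radii = np.sum(np.abs(M), axis=1) - np.abs(centers)
    return list(zip(centers, radii))

print("healthy disks:", gershgorin(L))
print("healthy eigenvalues:", np.round(np.linalg.eigvals(L).real, 3))

# LOE fault: agent 2's actuator works at 40% effectiveness, modelled here
# as a scaling of the corresponding row of the Laplacian (assumed model).
rho = 0.4
L_fault = L.copy()
L_fault[2, :] *= rho
print("faulty disks:", gershgorin(L_fault))
print("faulty eigenvalues:", np.round(np.linalg.eigvals(L_fault).real, 3))
# All disks still lie in the closed right half-plane, consistent with the
# conclusion above that the LOE fault does not destroy stability or consensus.
```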

Keywords: multi-agent system, actuator fault, stability analysis, consensus achievement

Procedia PDF Downloads 304
45 Implicit Transaction Costs and the Fundamental Theorems of Asset Pricing

Authors: Erindi Allaj

Abstract:

This paper studies arbitrage pricing theory in financial markets with transaction costs. We extend the existing theory to include the more realistic possibility that the price at which the investors trade depends on the traded volume. The investors in the market always buy at the ask and sell at the bid price. Transaction costs are composed of two terms: one captures the implicit transaction costs and the other the price impact. Moreover, a new definition of a self-financing portfolio is obtained. The self-financing condition suggests that continuous trading is possible, but is restricted to predictable trading strategies which have left and right limits and finite quadratic variation. That is, predictable trading strategies of infinite variation and of finite quadratic variation are allowed in our setting. Within this framework, the existence of an equivalent probability measure is equivalent to the absence of arbitrage opportunities, so that the first fundamental theorem of asset pricing (FFTAP) holds. It is also proved that, when this probability measure is unique, any contingent claim in the market is hedgeable in an L2-sense. The price of any contingent claim is then equal to the risk-neutral price. To better understand how to apply the proposed theory, we provide an example with linear transaction costs.

Keywords: arbitrage pricing theory, transaction costs, fundamental theorems of arbitrage, financial markets

Procedia PDF Downloads 318
44 The Data-Driven Localized Wave Solution of the Fokas-Lenells Equation using PINN

Authors: Gautam Kumar Saharia, Sagardeep Talukdar, Riki Dutta, Sudipta Nandy

Abstract:

The physics-informed neural network (PINN) method opens up an approach for numerically solving nonlinear partial differential equations, leveraging the fast computation speed and high precision of modern computing systems. We construct the PINN based on the strong universal approximation theorem and apply the initial-boundary value data and residual collocation points to weakly impose the initial and boundary conditions on the neural network, and we choose the optimization algorithms adaptive moment estimation (ADAM) and the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm to optimize the learnable parameters of the neural network. Next, we improve the PINN with a weighted loss function to obtain both the bright and dark soliton solutions of the Fokas-Lenells equation (FLE). We find that the proposed scheme of adjustable weight coefficients in the PINN has a better convergence rate and generalizability than the basic PINN algorithm. We believe that the PINN approach to solving the partial differential equations appearing in nonlinear optics would be useful for studying various optical phenomena.
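
A minimal PINN sketch of the weighted composite loss described above, assuming PyTorch. The Fokas-Lenells residual is replaced here by a simple placeholder transport-type PDE purely to keep the example short; the network size, collocation sampling, and the weights w_data and w_pde are assumptions.

```python
import math
import torch
import torch.nn as nn

# Placeholder residual: u_t + u u_x = 0 (not the Fokas-Lenells equation).
net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))

def pde_residual(xt):
    xt = xt.clone().requires_grad_(True)
    u = net(xt)
    grads = torch.autograd.grad(u, xt, torch.ones_like(u), create_graph=True)[0]
    u_x, u_t = grads[:, :1], grads[:, 1:]
    return u_t + u * u_x

# Assumed toy data: initial condition u(x, 0) = -sin(pi x) on [-1, 1].
x0 = torch.rand(200, 1) * 2 - 1
data_xt = torch.cat([x0, torch.zeros_like(x0)], dim=1)
data_u = -torch.sin(math.pi * x0)
colloc = torch.rand(2000, 2) * torch.tensor([2.0, 1.0]) - torch.tensor([1.0, 0.0])

w_data, w_pde = 10.0, 1.0                      # adjustable loss weights
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(1000):
    opt.zero_grad()
    loss = (w_data * ((net(data_xt) - data_u) ** 2).mean()
            + w_pde * (pde_residual(colloc) ** 2).mean())
    loss.backward()
    opt.step()
print("final weighted loss:", float(loss))
```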

Keywords: deep learning, optical Soliton, neural network, partial differential equation

Procedia PDF Downloads 83
43 Globally Attractive Mild Solutions for Non-Local in Time Subdiffusion Equations of Neutral Type

Authors: Jorge Gonzalez Camus, Carlos Lizama

Abstract:

In this work, the existence of at least one globally attractive mild solution to the Cauchy problem is proved for a fractional evolution equation of neutral type involving the fractional derivative in the Caputo sense. An almost sectorial operator on a Banach space X and a kernel belonging to a large class appear in the equation, which covers many relevant cases from physics applications, in particular the important case of time-fractional evolution equations of neutral type. The main tools used in this work are the Hausdorff measure of noncompactness and fixed point theorems, specifically of Darbo type. Initially, the equation is posed as a Cauchy problem involving a fractional derivative in the Caputo sense. Then the equivalent integral version is formulated and, by defining a convenient functional using the analytic integral resolvent operator and verifying the hypotheses of the Darbo-type fixed point theorem, the existence of a mild solution to the initial problem is obtained. Furthermore, each mild solution is globally attractive, a desirable property for the asymptotic behavior of such solutions.

Keywords: attractive mild solutions, integral Volterra equations, neutral type equations, non-local in time equations

Procedia PDF Downloads 125
42 Formal Specification of Web Services Applications for Digital Reference Services of Library Information System

Authors: Magaji Zainab Musa, Nordin M. A. Rahman, Julaily Aida Jusoh

Abstract:

This paper discusses the formal specification of web services applications for digital reference services (WSDRS). A digital reference service involves a user requesting help from a reference librarian and the reference librarian responding to that request, all by electronic means. In most cases, users are not satisfied when using a digital reference service due to delays in the librarians' responses. Other causes may be no response at all, or the librarian giving an irrelevant solution to the problem submitted by the user. WSDRS is an informal model intended to reduce these problems of digital reference services in libraries. It uses web services technology to provide an efficient way of satisfying users' needs in the reference section of libraries. But an informal model is written in natural language, which is inconsistent and ambiguous and may cause difficulties for the developers of the system. In order to solve this problem, we decided to convert the informal specifications into formal specifications. This is expected to reduce the overall development time and cost. A formal specification can be used to provide an unambiguous and precise supplement to natural language descriptions. It can be rigorously validated and verified, leading to the early detection of specification errors. We use the Z language to develop the formal model and verify it with the Z/EVES theorem prover tool.

Keywords: formal, specifications, web services, digital reference services

Procedia PDF Downloads 344
41 Strict Stability of Fuzzy Differential Equations by Lyapunov Functions

Authors: Mustafa Bayram Gücen, Coşkun Yakar

Abstract:

In this study, we investigate the strict stability of fuzzy differential systems and compare the classical notion of strict stability for ordinary differential equations with the notion of strict stability for fuzzy differential systems. In addition, we present definitions of stability and strict stability of fuzzy differential equations, together with some theorems and comparison results. Strict stability is a distinct stability notion, and it gives information about the rate of decay of the solutions. Lyapunov's second method is a standard technique used in the study of the qualitative behavior of fuzzy differential systems, along with a comparison result that allows the prediction of the behavior of a fuzzy differential system when the behavior of the null solution of a fuzzy comparison system is known. This method is useful for investigating the strict stability of fuzzy systems. First of all, we present definitions and the necessary background material. Secondly, we discuss and compare the differences between the classical notion of stability and the recent notion of strict stability. We then give a comparison result in which the stability properties of the null solution of the comparison system imply the corresponding stability properties of the fuzzy differential system. Consequently, we give the strict stability results and a comparison theorem. We use Lyapunov's second method and prove a comparison result with scalar differential equations.

Keywords: fuzzy systems, fuzzy differential equations, fuzzy stability, strict stability

Procedia PDF Downloads 215
40 The Mathematics of Fractal Art: Using a Derived Cubic Method and the Julia Programming Language to Make Fractal Zoom Videos

Authors: Darsh N. Patel, Eric Olson

Abstract:

Fractals can be found everywhere, whether it be the shape of a leaf or a system of blood vessels. Fractals are used to help study and understand different physical and mathematical processes; however, their artistic nature is also beautiful simply to explore. This project explores fractals generated by a cubically convergent extension of Newton's method. With this iteration as a starting point, a complex plane spanning from -2 to 2 is created with a color wheel mapped onto it. Next, the polynomial from whose roots the fractal will be generated is established. From the Fundamental Theorem of Algebra, it is known that any polynomial has as many roots (counted by multiplicity) as its degree. When generating the fractals, each root receives its own color. The complex plane can then be colored to indicate the basins of attraction that converge to each root. From a computational point of view, this project's code identifies which points converge to which roots and then obtains fractal images. A zoom path into the fractal was implemented to easily visualize the self-similar structure. This path was obtained by selecting keyframes at different magnifications through which a path is then interpolated. Using parallel processing, many images were generated and condensed into a video. This project illustrates how practical techniques used for scientific visualization can also have an artistic side.
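
A sketch in Python (the project itself uses the Julia programming language) of the basin-of-attraction colouring, using Halley's iteration on p(z) = z^3 - 1 as one well-known cubically convergent variant of Newton's method; the grid resolution, iteration count, and colour map are assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

p = lambda z: z ** 3 - 1
dp = lambda z: 3 * z ** 2
d2p = lambda z: 6 * z
roots = np.array([1, -0.5 + 0.5j * np.sqrt(3), -0.5 - 0.5j * np.sqrt(3)])

n = 800
re, im = np.meshgrid(np.linspace(-2, 2, n), np.linspace(-2, 2, n))
z = re + 1j * im
for _ in range(40):                      # Halley iteration on the whole grid
    f, f1, f2 = p(z), dp(z), d2p(z)
    z = z - 2 * f * f1 / (2 * f1 ** 2 - f * f2)

# Colour each pixel by the root it converged to (its basin of attraction).
basin = np.argmin(np.abs(z[..., None] - roots[None, None, :]), axis=-1)
plt.imshow(basin, extent=(-2, 2, -2, 2), origin="lower", cmap="viridis")
plt.title("Basins of attraction, Halley's method for z^3 - 1")
plt.savefig("halley_fractal.png", dpi=150)
```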

Keywords: fractals, cubic method, Julia programming language, basin of attraction

Procedia PDF Downloads 219
39 Visualization of Wave Propagation in Monocoupled System with Effective Negative Stiffness, Effective Negative Mass, and Inertial Amplifier

Authors: Abhigna Bhatt, Arnab Banerjee

Abstract:

A periodic system with only a single coupling degree of freedom is called a monocoupled system. In monocoupled systems, mechanisms such as a mass-in-mass arrangement generate effective negative mass, masses connected with rigid links generate inertial amplification, and a spring-mass connected with a rigid link generates effective negative stiffness. In this paper, a representative unit cell is introduced that combines all three mechanisms. Further, the dynamic stiffness matrix of the unit cell is constructed, and the dispersion relation is obtained by applying the Bloch theorem. The frequency response function is also calculated for a finite length of periodic unit cells. Moreover, an input displacement signal is applied to the finite-length periodic structure, and the inverse Fourier transform is used to visualize the wave propagation in the time domain. This visualization explains the sudden attenuation in the metamaterial due to energy dissipation by an embedded resonator at the resonance frequency. The visualization of wave propagation is found to be necessary for understanding the physics behind the attenuation characteristics of the system.
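
As a much simpler stand-in for the combined unit cell above, the sketch below applies the Bloch ansatz to a plain monatomic mass-spring chain, where the dispersion relation can be written in closed form; the parameter values are assumed.

```python
import numpy as np

# Monatomic chain: m * u_n'' = k (u_{n+1} - 2 u_n + u_{n-1}).
# With the Bloch ansatz u_{n+1} = u_n * exp(i q a), the dispersion relation is
#   omega(q) = 2 * sqrt(k / m) * |sin(q a / 2)|.
m, k, a = 1.0, 100.0, 1.0                       # assumed unit-cell parameters
q = np.linspace(-np.pi / a, np.pi / a, 401)     # first Brillouin zone
omega = 2.0 * np.sqrt(k / m) * np.abs(np.sin(q * a / 2.0))
print("max frequency (band edge):", omega.max())   # = 2*sqrt(k/m) = 20
```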

Keywords: mono coupled system, negative effective mass, negative effective stiffness, inertial amplifier, fourier transform

Procedia PDF Downloads 79
38 Analytical Solutions for Tunnel Collapse Mechanisms in Circular Cross-Section Tunnels under Seepage and Seismic Forces

Authors: Zhenyu Yang, Qiunan Chen, Xiaocheng Huang

Abstract:

Reliable prediction of tunnel collapse remains a prominent challenge in the field of civil engineering. In this study, leveraging the nonlinear Hoek-Brown failure criterion and the upper-bound theorem, an analytical solution for the collapse surface of shallowly buried circular tunnels was derived, taking into account the coupled effects of surface loads and pore water pressures. Initially, surface loads and pore water pressures were introduced as external force factors, and the internal energy dissipation rate was equated to the rate of work of the external forces, yielding the objective function. Subsequently, the variational method was employed for optimization, and the outcomes were juxtaposed with previous research findings. Furthermore, we utilized the deduced equation set to systematically analyze the influence of various rock mass parameters on the collapse shape and extent. To validate our analytical solutions, a comparison with prior studies was executed. The corroboration underscored the efficacy of our proposed methodology, offering invaluable insights for collapse risk assessment in practical engineering applications.

Keywords: tunnel roof stability, analytical solution, hoek–brown failure criterion, limit analysis

Procedia PDF Downloads 52
37 The Data-Driven Localized Wave Solution of the Fokas-Lenells Equation Using Physics-Informed Neural Network

Authors: Gautam Kumar Saharia, Sagardeep Talukdar, Riki Dutta, Sudipta Nandy

Abstract:

The physics-informed neural network (PINN) method opens up an approach for numerically solving nonlinear partial differential equations, leveraging the fast computation speed and high precision of modern computing systems. We construct the PINN based on a strong universal approximation theorem and apply the initial-boundary value data and residual collocation points to weakly impose the initial and boundary conditions on the neural network, and we choose the optimization algorithms adaptive moment estimation (ADAM) and the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm to optimize the learnable parameters of the neural network. Next, we improve the PINN with a weighted loss function to obtain both the bright and dark soliton solutions of the Fokas-Lenells equation (FLE). We find that the proposed scheme of adjustable weight coefficients in the PINN has a better convergence rate and generalizability than the basic PINN algorithm. We believe that the PINN approach to solving the partial differential equations appearing in nonlinear optics would be useful in studying various optical phenomena.

Keywords: deep learning, optical soliton, physics informed neural network, partial differential equation

Procedia PDF Downloads 41
36 AS-Geo: Arbitrary-Sized Image Geolocalization with Learnable Geometric Enhancement Resizer

Authors: Huayuan Lu, Chunfang Yang, Ma Zhu, Baojun Qi, Yaqiong Qiao, Jiangqian Xu

Abstract:

Image geolocalization has great application prospects in fields such as autonomous driving and virtual/augmented reality. In practical application scenarios, the size of the image to be localized is not fixed, and it is impractical to train different networks for all possible sizes. When the image size does not match the input size of the descriptor extraction model, existing image geolocalization methods usually directly scale or crop the image in some common way. This results in the loss of information important to the geolocalization task, thus affecting the performance of the method. For example, excessive down-sampling can blur building contours, and inappropriate cropping can lose key semantic elements, resulting in incorrect geolocation results. To address this problem, this paper designs a learnable image resizer and proposes an arbitrary-sized image geolocalization method. (1) The designed learnable image resizer employs the self-attention mechanism to enhance the geometric features of the resized image. Firstly, it applies bilinear interpolation to the input image and its feature maps to obtain the initial resized image and the resized feature maps. Then, SKNet (selective kernel net) is used to approximate the best receptive field, thus preserving the geometric shapes of the original image, and SENet (squeeze-and-excitation net) is used to automatically select the feature maps with strong contour information, enhancing the geometric features. Finally, the enhanced geometric features are fused with the initial resized image to obtain the final resized image. (2) The proposed image geolocalization method embeds the above image resizer as a front layer of the descriptor extraction network. It not only enables the network to be compatible with arbitrary-sized input images but also enhances the geometric features that are crucial to the image geolocalization task. Moreover, a triplet attention mechanism is added after the first convolutional layer of the backbone network to optimize the utilization of the geometric elements extracted by that layer. Finally, the local features extracted by the backbone network are aggregated to form image descriptors for geolocalization. The proposed method was evaluated on several mainstream datasets, such as Pittsburgh30K, Tokyo24/7, and Places365. The results show that the proposed method has excellent size compatibility and compares favorably to recent mainstream geolocalization methods.

Keywords: image geolocalization, self-attention mechanism, image resizer, geometric feature

Procedia PDF Downloads 171
35 Nonlinear Evolution on Graphs

Authors: Benniche Omar

Abstract:

We are concerned with abstract fully nonlinear differential equations of the form y'(t) = Ay(t) + f(t, y(t)), where A is an m-dissipative operator (possibly multi-valued) defined on a subset D(A) of a Banach space X with values in X, and f is a given function defined on I×X with values in X. We consider a graph K in I×X. We recall that K is said to be viable with respect to the above abstract differential equation if for each initial datum in K there exists at least one trajectory starting from that initial datum and remaining in K at least for a short time. The viability problem has been studied by many authors using various techniques and frameworks. If K is closed, it is shown that a tangency condition, which is mainly linked to the dynamics, is crucial for viability. In the case when X is infinite-dimensional, compactness and convexity assumptions are needed. In this paper, we are concerned with the notion of near viability for a given graph K with respect to y'(t) = Ay(t) + f(t, y(t)). Roughly speaking, the graph K is said to be near viable with respect to y'(t) = Ay(t) + f(t, y(t)) if for each initial datum in K there exists at least one trajectory remaining arbitrarily close to K at least for a short time. It is interesting to note that near viability is equivalent to an appropriate tangency condition under mild assumptions on the dynamics. Adding natural convexity and compactness assumptions on the dynamics, we may recover (exact) viability. Here we investigate near viability for a graph K in I×X with respect to y'(t) = Ay(t) + f(t, y(t)), where A and f are as above. We emphasize that the t-dependence of the perturbation f leads us to introduce a new tangency concept. On the basis of tangency conditions expressed in terms of that concept, we formulate criteria for K to be near viable with respect to y'(t) = Ay(t) + f(t, y(t)). As an application, an abstract null-controllability theorem is given.

Keywords: abstract differential equation, graph, tangency condition, viability

Procedia PDF Downloads 115
34 Time Delayed Susceptible-Vaccinated-Infected-Recovered-Susceptible Epidemic Model along with Nonlinear Incidence and Nonlinear Treatment

Authors: Kanica Goel, Nilam

Abstract:

Infectious diseases are a leading cause of death worldwide and hence a great challenge for every nation. Thus, it becomes utmost essential to prevent and reduce the spread of infectious diseases among humans. Mathematical models help to better understand the transmission dynamics and spread of infections. For this purpose, in the present article, we propose a nonlinear time-delayed SVIRS (Susceptible-Vaccinated-Infected-Recovered-Susceptible) mathematical model with a nonlinear incidence rate and a nonlinear treatment rate. An analytical study of the model shows that it exhibits two types of equilibrium points, namely, the disease-free equilibrium and the endemic equilibrium. Further, for the long-term behavior of the model, stability is discussed with the help of the basic reproduction number R₀: we show that the disease-free equilibrium is locally asymptotically stable if R₀ is less than one and unstable if R₀ is greater than one, for any time lag τ≥0. Furthermore, when the basic reproduction number R₀ equals one, using center manifold theory and the Castillo-Chavez and Song theorem, we show that the model undergoes a transcritical bifurcation. Moreover, numerical simulations are carried out using MATLAB 2012b to illustrate the theoretical results.

Keywords: nonlinear incidence rate, nonlinear treatment rate, stability, time delayed SVIRS epidemic model

Procedia PDF Downloads 123
33 An Ensemble Learning Method for Applying Particle Swarm Optimization Algorithms to Systems Engineering Problems

Authors: Ken Hampshire, Thomas Mazzuchi, Shahram Sarkani

Abstract:

As a subset of metaheuristics, nature-inspired optimization algorithms such as particle swarm optimization (PSO) have shown promise both in solving intractable problems and in their extensibility to novel problem formulations, due to their general approach requiring few assumptions. Unfortunately, single instantiations of algorithms require detailed tuning of parameters and cannot be proven to be best suited to a particular problem on account of the “no free lunch” (NFL) theorem. Using these algorithms in real-world problems requires exquisite knowledge of the many techniques and is not conducive to reconciling the various approaches to given classes of problems. This research aims to present a unified view of PSO-based approaches from the perspective of relevant systems engineering problems, with the express purpose of then eliciting the best solution for any problem formulation in an ensemble-learning, bucket-of-models approach. The central hypothesis of the research is that extending the PSO algorithms found in the literature to real-world optimization problems requires a general ensemble-based method for all problem formulations but a specific implementation and solution for any instance. The main results are a problem-based literature survey and a general method to find more globally optimal solutions for any systems engineering optimization problem.
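
A minimal particle swarm optimisation sketch, assuming the standard inertia-plus-cognitive-plus-social velocity update and illustrative hyperparameters; it is a single instantiation of PSO, not the ensemble bucket-of-models method proposed above.

```python
import numpy as np

def pso(objective, dim, n_particles=40, iters=200, w=0.7, c1=1.5, c2=1.5,
        bounds=(-5.0, 5.0), seed=0):
    """Minimise `objective` with a basic particle swarm (assumed update rule)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))       # positions
    v = np.zeros_like(x)                              # velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(objective, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()            # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(objective, 1, x)
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, objective(g)

# Example: the Rastrigin test function, a common benchmark.
rastrigin = lambda z: 10 * z.size + np.sum(z ** 2 - 10 * np.cos(2 * np.pi * z))
best, val = pso(rastrigin, dim=5)
print("best value found:", val)
```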

Keywords: particle swarm optimization, nature-inspired optimization, metaheuristics, systems engineering, ensemble learning

Procedia PDF Downloads 53
32 Theoretical Study of Structural, Magnetic, and Magneto-Optical Properties of Ultrathin Films of Fe/Cu (001)

Authors: Mebarek Boukelkoul, Abdelhalim Haroun

Abstract:

By means of first-principles calculations, we have investigated the structural, magnetic, and magneto-optical properties of ultrathin Feₙ/Cu(001) films with n = 1, 2, 3. We adopted a relativistic approach based on density functional theory (DFT) with the local spin density approximation (LSDA). The electronic structure calculation is performed within the framework of the Spin-Polarized Relativistic (SPR) Linear Muffin-Tin Orbital (LMTO) method with the Atomic Sphere Approximation (ASA). In the variational procedure, the crystal wave function is expressed as a linear combination of the Bloch sums of the so-called relativistic muffin-tin orbitals centered on the atomic sites. The crystalline structure is calculated after an atomic relaxation process, optimizing the total energy with respect to the atomic interplane distance. A body-centered tetragonal (BCT) pseudomorphic crystalline structure with a tetragonality ratio c/a larger than unity is found. The magnetic behaviour is characterized by an enhanced magnetic moment and a ferromagnetic interplane coupling. The polar magneto-optical Kerr effect spectra are given over a photon energy range extending to 15 eV, and the microscopic origins of the most interesting features are interpreted in terms of interband transitions. Unlike thin layers, the anisotropy in the ultrathin films is characterized by a magnetization perpendicular to the film plane.

Keywords: ultrathin films, magnetism, magneto-optics, pseudomorphic structure

Procedia PDF Downloads 303
31 Quantifying Fatigue during Periods of Intensified Competition in Professional Ice Hockey Players: Magnitude of Fatigue in Selected Markers

Authors: Eoin Kirwan, Christopher Nulty, Declan Browne

Abstract:

The professional ice hockey season consists of approximately 60 regular season games, with periods of fixture congestion occurring several times in the average season. These periods of congestion provide limited time for recovery, exposing the athletes to the risk of competing whilst not fully recovered. Although a body of research is growing with respect to monitoring fatigue, particularly during periods of congested fixtures in team sports such as rugby and soccer, it has received little to no attention thus far in ice hockey athletes. Consequently, there is limited knowledge on monitoring tools that might effectively detect a fatigue response and on the magnitude of fatigue that can accumulate when recovery is limited by competitive fixtures. The benefit of quantifying and establishing fatigue status is the ability to optimise training and provide pertinent information on player health, injury risk, availability and readiness. Some commonly used methods to assess the fatigue and recovery status of athletes include perceived fatigue and wellbeing questionnaires, tests of muscular force, and ratings of perceived exertion (RPE). These measures are widely used in popular team sports such as soccer and rugby and show promise as assessments of fatigue and recovery status for ice hockey athletes. As part of a larger study, this study explored the magnitude of changes in adductor muscle strength after game play and throughout a period of fixture congestion, and examined the relationship of internal game load and perceived wellbeing with adductor muscle strength. Methods: 8 professional ice hockey players from a British Elite League club volunteered to participate (age = 29.3 ± 2.49 years, height = 186.15 ± 6.75 cm, body mass = 90.85 ± 8.64 kg). Prior to and after competitive games, each player performed trials of the adductor squeeze test at 0˚ hip flexion, with the lead investigator using hand-held dynamometry. Rating of perceived exertion was recorded for each game, and individual session RPE was calculated from total ice time data. After each game, players completed a 5-point questionnaire to assess perceived wellbeing. Data were collected from six competitive games, one practice, and 36 hours post the final game, over a 10-day period. Results: Pending final data collection in February. Conclusions: Pending final data collection in February.

Keywords: congested fixtures, fatigue monitoring, ice hockey, readiness

Procedia PDF Downloads 107
30 Anthropomorphism in the Primate Mind-Reading Debate: A Critique of Sober's Justification Argument

Authors: Boyun Lee

Abstract:

This study discusses whether the anthropomorphism that some scientists tend to use in cross-species comparisons can be justified epistemologically, especially in the primate mind-reading debate. Concretely, this study critically analyzes Elliott Sober's argument about the mind-reading hypothesis (MRH), an anthropomorphic hypothesis which states that nonhuman primates (e.g., chimpanzees) are mind-readers like humans. Although many scientists consider anthropomorphism an error and regard choosing an anthropomorphic hypothesis like MRH without definite evidence as invalid, Sober advocates that anthropomorphism is supported by cladistic parsimony, which suggests choosing the simplest hypothesis, the one postulating the minimum number of evolutionary changes, and that it can be justified epistemologically in the mind-reading debate. However, his argument has several problems. First, Reichenbach's theorem, which Sober uses in the process of showing that MRH has a higher likelihood than its competing hypothesis, the behavior-reading hypothesis (BRH), does not fit the context of inferring evolutionary relationships. Second, the phylogenetic tree Sober supports is only one of the possible scenarios of MRH, and even setting this problem aside, it is difficult to prove that the probability that nonhuman primate species and humans share mind-reading ability is higher than the probability of the alternative, considering how evolution occurs. Consequently, it seems hard to justify the anthropomorphism of MRH under Sober's argument. Some scientists and philosophers say that anthropomorphism sometimes helps in observing interesting phenomena or forming hypotheses in comparative biology. Nonetheless, we cannot conclude that it provides answers about why and how the interesting phenomena appear or which of the hypotheses is better, at least in the mind-reading debate, in its current state.

Keywords: anthropomorphism, cladistic parsimony, comparative biology, mind-reading debate

Procedia PDF Downloads 136
29 Chebyshev Collocation Method for Solving Heat Transfer Analysis for Squeezing Flow of Nanofluid in Parallel Disks

Authors: Mustapha Rilwan Adewale, Salau Ayobami Muhammed

Abstract:

This study focuses on the heat transfer analysis of magneto-hydrodynamic (MHD) squeezing flow between parallel disks, considering a viscous incompressible fluid. The upper disk exhibits both upward and downward motion, while the lower disk remains stationary but permeable. By employing similarity transformations, a system of nonlinear ordinary differential equations is derived to describe the flow behavior. To solve this system, a numerical approach, namely the Chebyshev collocation method, is utilized. The study investigates the influence of the flow parameters and compares the obtained results with the existing literature. The significance of this research lies in understanding the heat transfer characteristics of MHD squeezing flow, which has practical implications in various engineering and industrial applications. By employing the similarity transformations, the complex governing equations are simplified into a system of nonlinear ordinary differential equations, facilitating the analysis of the flow behavior. To obtain numerical solutions for the system, the Chebyshev collocation method is implemented. This approach provides accurate approximations for the nonlinear equations, enabling efficient computation of the heat transfer properties. The obtained results are compared with the existing literature, establishing the validity and consistency of the numerical approach. The study's major findings shed light on the influence of the flow parameters on the heat transfer characteristics of the squeezing flow. The analysis reveals the impact on the heat transfer rate between the disks of parameters such as magnetic field strength, disk motion amplitude, and fluid viscosity, expressed through the squeeze number (S), suction/injection parameter (A), Hartmann number (M), Prandtl number (Pr), modified Eckert number (Ec), and the dimensionless length (δ). These findings contribute to a comprehensive understanding of the system's behavior and provide insights for optimizing heat transfer processes in similar configurations. In conclusion, this study presents a thorough heat transfer analysis of magneto-hydrodynamic squeezing flow between parallel disks. The numerical solutions obtained through the Chebyshev collocation method demonstrate the feasibility and accuracy of the approach. The investigation of the flow parameters highlights their influence on heat transfer, contributing to the existing knowledge in this field. The agreement of the results with previous literature further strengthens the reliability of the findings. These outcomes have practical implications for engineering applications and pave the way for further research in related areas.
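
A minimal sketch of the Chebyshev collocation machinery, assuming NumPy: the classical differentiation matrix is built and used to solve a simple linear two-point boundary value problem. The actual MHD squeezing-flow equations are a nonlinear coupled system, so in practice the same ingredients would sit inside a Newton or quasi-linearisation loop; the toy problem and the value of N are assumptions.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and Gauss-Lobatto points on [-1, 1]
    (the classical Trefethen construction)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

# Toy linear boundary-value problem: u''(x) = exp(4 x), u(-1) = u(1) = 0.
N = 24
D, x = cheb(N)
D2 = D @ D
A = D2[1:-1, 1:-1]                       # impose the Dirichlet conditions
f = np.exp(4 * x[1:-1])
u = np.zeros(N + 1)
u[1:-1] = np.linalg.solve(A, f)

exact = (np.exp(4 * x) - np.sinh(4.0) * x - np.cosh(4.0)) / 16.0
print("max error:", np.max(np.abs(u - exact)))    # spectral accuracy
```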

Keywords: squeezing flow, magneto-hydrodynamics (MHD), Chebyshev collocation method (CCM), parallel manifolds, finite difference method (FDM)

Procedia PDF Downloads 41