Search results for: generalized Chebyshev polynomial

728 Extreme Value Modelling of Ghana Stock Exchange Indices

Authors: Kwabena Asare, Ezekiel N. N. Nortey, Felix O. Mettle

Abstract:

Modelling of extreme events has always been of interest in fields such as hydrology and meteorology. However, after the recent global financial crises, appropriate models for such rare events, which can lead to these crises, have become essential in the finance and risk management fields. This paper models the extreme values of the Ghana Stock Exchange All-Shares indices (2000-2010) by applying Extreme Value Theory (EVT) to fit a model to the tails of the daily stock returns data. A conditional approach to EVT was preferred, and hence an ARMA-GARCH model was fitted to the data to correct for the effects of autocorrelation and conditional heteroscedasticity present in the returns series before the EVT method was applied. The Peak Over Threshold (POT) approach of EVT, which fits a Generalized Pareto Distribution (GPD) to excesses above a selected threshold, was employed. Maximum likelihood estimates of the model parameters were obtained, and the model's goodness of fit was assessed graphically using Q-Q, P-P and density plots. The findings indicate that the GPD provides an adequate fit to the data of excesses. The size of extreme daily Ghanaian stock market movements was then computed using the Value at Risk (VaR) and Expected Shortfall (ES) risk measures at high quantiles, based on the fitted GPD model.
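
The POT workflow the abstract describes can be sketched in a few lines. The snippet below is an illustrative sketch only, not the authors' code: it uses synthetic standardized residuals in place of the ARMA-GARCH-filtered GSE returns, and the 95% threshold choice is an assumption.

```python
import numpy as np
from scipy.stats import genpareto

# Synthetic stand-in for ARMA-GARCH-filtered daily returns.
rng = np.random.default_rng(0)
residuals = rng.standard_t(df=4, size=2500)
losses = -residuals                               # work with losses so the upper tail matters

u = np.quantile(losses, 0.95)                     # threshold (illustrative choice)
excesses = losses[losses > u] - u
n, n_u = losses.size, excesses.size

# Fit the GPD to the excesses by maximum likelihood (location fixed at 0).
xi, _, beta = genpareto.fit(excesses, floc=0)

def var_es(q):
    """GPD-based Value at Risk and Expected Shortfall at quantile q."""
    var = u + (beta / xi) * (((n / n_u) * (1.0 - q)) ** (-xi) - 1.0)
    es = var / (1.0 - xi) + (beta - xi * u) / (1.0 - xi)
    return var, es

for q in (0.99, 0.995, 0.999):
    v, e = var_es(q)
    print(f"q={q}: VaR={v:.3f}, ES={e:.3f}")
```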

Keywords: extreme value theory, expected shortfall, generalized Pareto distribution, peak over threshold, value at risk

Procedia PDF Downloads 515
727 Detecting Earnings Management via Statistical and Neural Networks Techniques

Authors: Mohammad Namazi, Mohammad Sadeghzadeh Maharluie

Abstract:

Predicting earnings management is vital for capital market participants, financial analysts and managers. The aim of this research is to answer the following question: Is there a significant difference between the regression model and neural network models in predicting earnings management, and which one leads to a superior prediction? To approach this question, a Linear Regression (LR) model was compared with two neural networks: a Multi-Layer Perceptron (MLP) and a Generalized Regression Neural Network (GRNN). The population of this study includes 94 listed companies in the Tehran Stock Exchange (TSE) market from 2003 to 2011. After the results of all models were acquired, ANOVA was applied to test the hypotheses. In general, the summary of statistical results showed that the precision of the GRNN did not differ significantly from that of the MLP. In addition, the mean square errors of the MLP and GRNN differed significantly from that of the multivariable LR model. These findings support the notion of nonlinear behavior of earnings management. Therefore, it is more appropriate for capital market participants to analyze earnings management based upon neural network techniques rather than linear regression models.
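
As a rough illustration of the comparison described here, the sketch below fits an LR model, an MLP, and a GRNN-style predictor on synthetic data. The GRNN is not part of scikit-learn, so it is written here as the Gaussian-kernel weighted average it amounts to; all data and settings are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Hypothetical data: X = firm-level predictors, y = an earnings-management proxy.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))
y = np.tanh(X @ rng.normal(size=6)) + 0.1 * rng.normal(size=500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

def grnn_predict(X_train, y_train, X_test, sigma=0.5):
    """GRNN = Gaussian-kernel weighted average of the training targets."""
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

preds = {
    "LR": LinearRegression().fit(X_tr, y_tr).predict(X_te),
    "MLP": MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=1).fit(X_tr, y_tr).predict(X_te),
    "GRNN": grnn_predict(X_tr, y_tr, X_te),
}
for name, p in preds.items():
    print(name, "MSE:", round(mean_squared_error(y_te, p), 4))
```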

Keywords: earnings management, generalized linear regression, neural networks, multi-layer perceptron, Tehran Stock Exchange

Procedia PDF Downloads 398
726 Bayesian Analysis of Topp-Leone Generalized Exponential Distribution

Authors: Najrullah Khan, Athar Ali Khan

Abstract:

The Topp-Leone distribution was introduced by Topp and Leone in 1955. In this paper, an attempt has been made to fit the Topp-Leone generalized exponential (TPGE) distribution. A real survival data set is used for illustration. Implementation is done using R and JAGS, and appropriate illustrations are made. R and JAGS code is provided to implement the censoring mechanism using both optimization and simulation tools. The main aim of this paper is to describe and illustrate the Bayesian modelling approach to the analysis of survival data. Emphasis is placed on the modeling of data and the interpretation of the results. Crucial to this is an understanding of the nature of the incomplete or 'censored' data encountered. Analytic approximation and simulation tools are covered here, but most of the emphasis is on Markov chain Monte Carlo methods, including the independent Metropolis algorithm, which is currently the most popular technique. For analytic approximation, the trust-region method is found to perform best among the various optimization algorithms considered. The TPGE model is also used to analyze the lifetime data in the Bayesian paradigm. Results are evaluated on the above-mentioned real survival data set. The analytic approximation and simulation methods are implemented using several software packages. It is clear from our findings that simulation tools provide better results than those obtained by asymptotic approximation.
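
To make the simulation side of this approach concrete, the sketch below runs a small Metropolis sampler for right-censored survival data. It is a simplified stand-in only: a plain exponential lifetime model is used in place of the TPGE distribution, the sampler is random-walk (not independent) Metropolis, and the data, prior, and censoring time are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
true_rate = 0.1
t = rng.exponential(1.0 / true_rate, size=60)
c = np.full_like(t, 15.0)                      # administrative censoring time (illustrative)
obs = np.minimum(t, c)
event = (t <= c).astype(float)                 # 1 = observed failure, 0 = censored

def log_post(rate):
    if rate <= 0:
        return -np.inf
    # events contribute log f(t), censored points contribute log S(t);
    # weakly informative Gamma(0.01, 0.01) prior on the rate.
    ll = np.sum(event * np.log(rate) - rate * obs)
    lp = (0.01 - 1.0) * np.log(rate) - 0.01 * rate
    return ll + lp

# Random-walk Metropolis on log(rate); the log(prop/cur) term is the Jacobian of the log transform.
samples, cur = [], 0.05
cur_lp = log_post(cur)
for _ in range(20000):
    prop = cur * np.exp(0.2 * rng.normal())
    prop_lp = log_post(prop)
    if np.log(rng.uniform()) < (prop_lp + np.log(prop)) - (cur_lp + np.log(cur)):
        cur, cur_lp = prop, prop_lp
    samples.append(cur)

post = np.array(samples[5000:])
print("posterior mean rate:", post.mean(), "95% CI:", np.quantile(post, [0.025, 0.975]))
```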

Keywords: Bayesian inference, JAGS, Laplace approximation, LaplacesDemon, posterior, R software, simulation

Procedia PDF Downloads 502
725 Step Height Calibration Using Hamming Window: Band-Pass Filter

Authors: Dahi Ghareab Abdelsalam Ibrahim

Abstract:

Calibration of step heights with high accuracy is needed for many applications in industry. In general, the spectrum of a step height consists of three bands: a pass band, a transition band (roll-off), and a stop band. Abdelsalam used a convolution of the transfer functions of both Chebyshev type 2 and elliptic filters with the WFF of the Fresnel transform in the frequency domain to produce a steeper roll-off while removing ripples in the pass and stop bands. In this paper, we use a new method based on a Hamming-window band-pass filter for the calibration of step heights in terms of precise adjustment of the pass band, roll-off, and stop band. The method is applied to calibrate a nominal step height of 40 cm. The step height is first measured by asynchronous dual-wavelength phase-shift interferometry. The measured step height is then calibrated by simulating the Hamming-window band-pass filter. The spectrum of the band-pass filter is simulated at N = 881 and f0 = 0.24. We conclude that the proposed method can calibrate any step height by adjusting only two factors, N and f0.
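
A Hamming-window band-pass FIR filter with the two design factors mentioned above (the number of taps N and the centre frequency f0) can be sketched with SciPy as below. This is not the authors' implementation: frequencies are assumed to be normalized to the sampling rate (Nyquist = 0.5), and the bandwidth value is an illustrative assumption.

```python
import numpy as np
from scipy.signal import firwin, freqz

N, f0, bw = 881, 0.24, 0.02                   # taps, centre frequency, assumed bandwidth
taps = firwin(N, [f0 - bw / 2, f0 + bw / 2], window="hamming", pass_zero=False, fs=1.0)

w, h = freqz(taps, worN=4096)
freq = w / (2 * np.pi)                         # frequency axis in cycles/sample
idx = np.argmin(np.abs(freq - f0))
print("gain near f0 (dB):", 20 * np.log10(np.abs(h[idx])))
```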

Keywords: optical metrology, step heights, Hamming window, band-pass filter

Procedia PDF Downloads 56
724 A Generalized Framework for Adaptive Machine Learning Deployments in Algorithmic Trading

Authors: Robert Caulk

Abstract:

A generalized framework for adaptive machine learning deployments in algorithmic trading is introduced, tested, and released as open-source code. The presented software aims to test the hypothesis that recent data contains enough information to form a probabilistically favorable short-term price prediction. Further, the framework contains various adaptive machine learning techniques that are geared toward generating profit during strong trends and minimizing losses during trend changes. Results demonstrate that this adaptive machine learning approach is capable of capturing trends and generating profit. The presentation also discusses the importance of defining the parameter space associated with the dynamic training data set and using the parameter space to identify and remove outliers from prediction data points. Meanwhile, the generalized architecture enables common users to exploit the powerful machinery while focusing on high-level feature engineering and model testing. The presentation also highlights common strengths and weaknesses associated with the presented technique and presents a broad range of well-tested starting points for feature set construction, target setting, and statistical methods for enforcing risk management and maintaining probabilistically favorable entry and exit points. The presentation also describes the end-to-end data processing tools associated with FreqAI, including automatic data fetching, data aggregation, feature engineering, safe and robust data pre-processing, outlier detection, custom machine learning and statistical tools, data post-processing, adaptive training backtest emulation, and deployment of adaptive training in live environments. Finally, the generalized user interface is also discussed in the presentation. Feature engineering is simplified so that users can seed their feature sets with common indicator libraries (e.g., TA-Lib, pandas-ta). The user also feeds data expansion parameters to fill out a large feature set for the model, which can contain as many as 10,000+ features. The presentation describes the various object-oriented programming techniques employed to make FreqAI agnostic to third-party libraries and external data sources. In other words, the back-end is constructed in such a way that users can leverage a broad range of common regression libraries (CatBoost, LightGBM, scikit-learn, etc.) as well as common neural network libraries (TensorFlow, PyTorch) without worrying about the logistical complexities associated with data handling and API interactions. The presentation finishes by drawing conclusions about the most important parameters associated with a live deployment of the adaptive learning framework and provides a road map for future development in FreqAI.
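
The adaptive train/predict loop described here can be sketched generically as below. This is not FreqAI's actual API: the sliding-window retraining, the z-score outlier screen against the training parameter space, the regressor choice, and the synthetic candle-derived features are all illustrative assumptions.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import GradientBoostingRegressor

def rolling_predict(features, target, train_window=250, z_cut=3.0):
    """Retrain on a sliding window of recent data, skip out-of-distribution points, predict the next value."""
    preds = np.full(len(target), np.nan)
    for i in range(train_window, len(target)):
        X_tr, y_tr = features[i - train_window:i], target[i - train_window:i]
        scaler = StandardScaler().fit(X_tr)
        model = GradientBoostingRegressor(random_state=0).fit(scaler.transform(X_tr), y_tr)

        x = scaler.transform(features[i:i + 1])
        if np.abs(x).max() > z_cut:           # outlier w.r.t. the training parameter space: no prediction
            continue
        preds[i] = model.predict(x)[0]
    return preds

# Hypothetical usage with synthetic features and forward returns.
rng = np.random.default_rng(3)
feats = rng.normal(size=(400, 8))
fwd_ret = 0.3 * feats[:, 0] + 0.05 * rng.normal(size=400)
preds = rolling_predict(feats, fwd_ret)
valid = ~np.isnan(preds)
print("directional hit rate:", np.mean(np.sign(preds[valid]) == np.sign(fwd_ret[valid])))
```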

Keywords: machine learning, market trend detection, open-source, adaptive learning, parameter space exploration

Procedia PDF Downloads 64
723 Formulation and Test of a Model to explain the Complexity of Road Accident Events in South Africa

Authors: Dimakatso Machetele, Kowiyou Yessoufou

Abstract:

Whilst several studies have indicated that road accident events might be more complex than thought, we have a limited scientific understanding of this complexity in South Africa. The present project proposes and tests a more comprehensive metamodel that integrates multiple causality relationships among variables previously linked to road accidents. This was done by fitting a structural equation model (SEM) to data collected from various sources. The study also fitted a GARCH model (Generalized Auto-Regressive Conditional Heteroskedasticity) to predict the future of road accidents in the country. The analysis shows that the number of road accidents has been increasing since 1935. The road fatality rate follows a polynomial trend given by the equation y = -0.0114x² + 1.2378x - 2.2627 (R² = 0.76), with y = death rate and x = year. This trend results in an average death rate of 23.14 deaths per 100,000 people. Furthermore, the analysis shows that the number of crashes could be significantly explained by the total number of vehicles (P < 0.001), the number of registered vehicles (P < 0.001), the number of unregistered vehicles (P = 0.003) and the population of the country (P < 0.001). Contrary to expectation, the number of driver licenses issued and the total distance traveled by vehicles do not correlate significantly with the number of crashes (P > 0.05). Furthermore, the analysis reveals that the number of casualties could be linked significantly to the number of registered vehicles (P < 0.001) and the total distance traveled by vehicles (P = 0.03). As for the number of fatal crashes, the analysis reveals that the total number of vehicles (P < 0.001), the numbers of registered (P < 0.001) and unregistered vehicles (P < 0.001), the population of the country (P < 0.001) and the total distance traveled by vehicles (P < 0.001) correlate significantly with the number of fatal crashes. However, the number of casualties and, again, the number of driver licenses do not seem to determine the number of fatal crashes (P > 0.05). Finally, the number of crashes is predicted to be roughly constant over time at 617,253 accidents for the next 10 years, with the worst-case scenario suggesting that this number may reach 1,896,667. The number of casualties is also predicted to be roughly constant at 93,531 over time, although this number may reach 661,531 in the worst-case scenario. Although the number of fatal crashes may decrease over time, it is forecast to reach 11,241 fatal crashes within the next 10 years, with the worst-case estimate at 19,034 within the same period. Finally, the number of fatalities is also predicted to be roughly constant at 14,739 but may reach 172,784 in the worst-case scenario. Overall, the present study reveals the complexity of road accidents and allows us to propose several recommendations aimed at reducing the trend of road accidents, casualties, fatal crashes, and deaths in South Africa.
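
The quadratic fatality-rate trend reported above, y = -0.0114x² + 1.2378x - 2.2627 (R² = 0.76), can be evaluated directly as below. Treating x as years since the start of the series is an assumption about how the authors coded the time variable.

```python
import numpy as np

coeffs = [-0.0114, 1.2378, -2.2627]          # quadratic trend reported in the abstract
years_since_start = np.arange(10, 81, 10)
death_rate = np.polyval(coeffs, years_since_start)
for x, y in zip(years_since_start, death_rate):
    print(f"x = {x:3d} -> predicted fatality rate = {y:6.2f} per 100,000")
```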

Keywords: road accidents, South Africa, statistical modelling, trends

Procedia PDF Downloads 131
722 Optimization of Tundish Geometry for Minimizing Dead Volume Using OpenFOAM

Authors: Prateek Singh, Dilshad Ahmad

Abstract:

Growing demand for high-quality steel products has inspired researchers to investigate the unit operations involved in the manufacturing of these products (slabs, rods, sheets, etc.). One such operation is the tundish operation, in which a vessel (tundish) acts as a buffer of molten steel for the solidification operation in the mold. The tundish also plays a crucial role in the quality and cleanliness of the steel produced, besides merely acting as a reservoir for the mold. It facilitates the removal of dissolved oxygen (inclusions) from the molten steel, thus improving its cleanliness. Inclusion removal can be enhanced by increasing the residence time of molten steel in the tundish through the incorporation of flow modifiers like dams, weirs, turbo-pads, etc. These flow modifiers also help in reducing the dead or short-circuit zones within the tundish, which is significant for maintaining the thermal and chemical homogeneity of the molten steel. Thus, it becomes important to analyze the flow of molten steel in the tundish for different configurations of flow modifiers. In the present work, the effect of varying the positions and heights/depths of the dam and weir on the dead volume in the tundish is studied. Steady-state thermal and flow profiles of molten steel within the tundish are obtained using OpenFOAM. Subsequently, a Residence Time Distribution analysis is performed to obtain the percentage of dead volume in the tundish. A Design of Experiments method is then used to configure different tundish geometries for varying positions and heights/depths of the dam and weir, and the dead volume for each tundish design is obtained. A second-degree polynomial with two-way interactions of the independent variables (the positions and heights/depths of the dam and weir) is fitted to predict the dead volume in the tundish using a Multiple Linear Regression model. This polynomial is then used in an optimization framework to obtain the optimal tundish geometry for minimizing dead volume using Sequential Quadratic Programming optimization.
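
The surrogate-plus-optimization step described here (quadratic response surface fitted to the CFD results, then minimized with SQP) can be sketched as below. The design variables, data, and bounds are synthetic placeholders, not the authors' tundish data.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic stand-in: 4 normalized design variables (dam position/height, weir position/depth)
# vs. dead-volume fraction from the CFD runs.
rng = np.random.default_rng(4)
X = rng.uniform(0.0, 1.0, size=(40, 4))
y = 0.3 - 0.2 * X[:, 0] + 0.15 * X[:, 1] ** 2 + 0.1 * X[:, 2] * X[:, 3] + 0.01 * rng.normal(size=40)

def design_matrix(X):
    cols = [np.ones(len(X))]
    cols += [X[:, j] for j in range(4)]                                     # linear terms
    cols += [X[:, j] ** 2 for j in range(4)]                                # quadratic terms
    cols += [X[:, i] * X[:, j] for i in range(4) for j in range(i + 1, 4)]  # two-way interactions
    return np.column_stack(cols)

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)                 # MLR fit

def dead_volume(x):
    return (design_matrix(np.atleast_2d(x)) @ beta)[0]

res = minimize(dead_volume, x0=np.full(4, 0.5), method="SLSQP", bounds=[(0.0, 1.0)] * 4)
print("optimal (normalized) geometry:", res.x, "predicted dead-volume fraction:", res.fun)
```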

Keywords: design of experiments, multiple linear regression, OpenFOAM, residence time distribution, sequential quadratic programming optimization, steel, tundish

Procedia PDF Downloads 176
721 The Generalized Lemaitre-Tolman-Bondi Solutions in Modeling the Cosmological Black Holes

Authors: Elena M. Kopteva, Pavlina Jaluvkova, Zdenek Stuchlik

Abstract:

In spite of the numerous attempts to close the discussion about the influence of cosmological expansion on local gravitationally bound systems, this question arises in the literature again and again and remains far from its final resolution. One of the main problems here is that of obtaining a physically adequate model of a strongly gravitating object immersed in a non-static cosmological background. Such objects are usually called 'cosmological' black holes and are of great interest in a wide range of cosmological and astrophysical areas. In this work, a set of new exact solutions of the Einstein equations is derived for flat space, generalizing the known Lemaitre-Tolman-Bondi solution to the case of nonzero pressure. The solutions obtained are intended to describe a black hole immersed in a non-static cosmological background and make it possible to investigate open problems concerning the effects of cosmological expansion in gravitationally bound systems, structure formation in the early universe, black hole thermodynamics and other related problems. It is shown that each of the solutions obtained contains either a Reissner-Nordstrom or a Schwarzschild black hole in the central region of the space. It is demonstrated that the use of the mass function in solving the Einstein equations allows a clear physical interpretation of the resulting solutions, which is of much benefit to any concrete application.

Keywords: exact solutions of the Einstein equations, cosmological black holes, generalized Lemaitre-Tolman-Bondi solutions, nonzero pressure

Procedia PDF Downloads 394
720 Primes as Sums and Differences of Two Binomial Coefficients and Two Powersums

Authors: Benjamin Lee Warren

Abstract:

Many problems in additive number theory ask which primes are the sum of two elements from a given single-variable polynomial sequence, and most of them are unattackable at present. Here, we determine solutions to this problem for a few particular sequences (certain binomial coefficients and power sums) using only elementary algebra and some algebraic factoring methods (as well as Euclid's Lemma and Faulhaber's Formula). In particular, we show that there are finitely many primes that are sums of two of these types of elements. Several cases are fully illustrated, and bounds are presented for the cases not fully illustrated.

Keywords: binomial coefficients, power sums, primes, algebra

Procedia PDF Downloads 68
719 Metaheuristics to Solve Tasks Scheduling

Authors: Rachid Ziteuni, Selt Omar

Abstract:

In this paper, we propose a new polynomial-time metaheuristic (tabu search) for solving scheduling problems. This method allows us to solve the problem of scheduling n tasks on m identical parallel machines with unavailability periods. This problem is NP-complete in the strong sense, and finding an optimal solution appears unlikely. Note that all data in this problem are integer and deterministic. The performance criterion to optimize in this problem, which we denote Pm/N-c/∑wjCj, is the weighted sum of the completion times of the tasks.
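
A minimal tabu-search sketch for the weighted-completion-time objective is shown below. It is not the authors' algorithm: unavailability periods are omitted to keep the illustration short, the tabu tenure and instance data are arbitrary, and jobs on each machine are sequenced by the WSPT rule.

```python
import random

random.seed(0)
n, m = 12, 3
p = [random.randint(1, 10) for _ in range(n)]   # processing times
w = [random.randint(1, 5) for _ in range(n)]    # weights

def cost(assign):
    """Weighted sum of completion times; each machine runs its jobs in WSPT order."""
    total = 0.0
    for k in range(m):
        jobs = sorted((j for j in range(n) if assign[j] == k), key=lambda j: p[j] / w[j])
        t = 0
        for j in jobs:
            t += p[j]
            total += w[j] * t
    return total

assign = [j % m for j in range(n)]               # initial round-robin assignment
best, best_cost = assign[:], cost(assign)
tabu = {}
for it in range(200):
    candidates = []
    for j in range(n):
        for k in range(m):
            if k != assign[j] and tabu.get((j, k), -1) < it:
                trial = assign[:]
                trial[j] = k
                candidates.append((cost(trial), j, k, trial))
    c, j, k, trial = min(candidates)
    tabu[(j, assign[j])] = it + 7                # forbid moving job j back for 7 iterations
    assign = trial
    if c < best_cost:
        best, best_cost = trial[:], c
print("best weighted completion-time sum:", best_cost, "assignment:", best)
```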

Keywords: scheduling, parallel identical machines, unavailability periods, metaheuristic, tabu search

Procedia PDF Downloads 300
718 Joint Optimal Pricing and Lot-Sizing Decisions for an Advance Sales System under Stochastic Conditions

Authors: Maryam Ghoreishi, Christian Larsen

Abstract:

In this paper, we investigate the effect of stochastic inputs on the problem of joint optimal pricing and lot-sizing decisions, where the inventory cycle is divided into advance and spot sales periods. During the advance sales period, customers can make reservations, and customers with reservations can cancel their orders. During the spot sales period, customers receive their order as soon as it is placed, but they cannot make any reservation or cancellation during that period. We assume that the inter-arrival times during the advance and spot sales periods are exponentially distributed, where the arrival rate is a decreasing function of price. Moreover, we assume that the number of cancelled reservations is binomially distributed. In addition, we assume that the deterioration process follows an exponential distribution. We investigate two cases. First, we consider the two-state case, where we find the optimal price during the spot sales period and the optimal price during the advance sales period. Next, we develop a generalized case, where we extend the two-state case to allow dynamic prices during the spot sales period. We apply Markov decision theory in order to find the optimal solutions. In addition, for the generalized case, we apply the policy iteration algorithm in order to find the optimal prices, the optimal lot size and the maximum advance sales amount.

Keywords: inventory control, pricing, Markov decision theory, advance sales system

Procedia PDF Downloads 299
717 Complex Network Analysis of Seismicity and Applications to Short-Term Earthquake Forecasting

Authors: Kahlil Fredrick Cui, Marissa Pastor

Abstract:

Earthquakes are complex phenomena, exhibiting complex correlations in space, time, and magnitude. Recently, the concept of complex networks has been used to shed light on the statistical and dynamical characteristics of regional seismicity. In this work, we study the relationships and interactions of seismic regions in Chile, Japan, and the Philippines through weighted and directed complex network analysis. Geographical areas are digitized into cells of fixed dimensions, which in turn become the nodes of the network when an earthquake has occurred within them. Nodes are linked if a correlation exists between them, as determined and measured by a correlation metric. The networks are found to be scale-free, exhibiting power-law behavior in the distributions of their different centrality measures: the in- and out-degree and the in- and out-strength. Evidence is also found of preferential interaction between seismically active regions through their degree-degree correlations, suggesting that seismicity is dictated by the activity of a few active regions. The importance of a seismic region to the overall seismicity is measured using a generalized centrality metric, taken to be an indicator of its activity or passivity. The spatial distribution of earthquake activity indicates the areas where strong earthquakes have occurred in the past, while the passivity distribution points toward the likely locations where an earthquake would occur whenever another one happens elsewhere. Finally, we propose a method that projects the location of the next possible earthquake using the generalized centralities coupled with correlations calculated between the latest earthquakes and a geographical point in the future.
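
The weighted, directed network quantities mentioned above (in/out-degree and in/out-strength per cell) can be computed with networkx as sketched below. The toy edges stand in for correlation-based links between grid cells, and PageRank is used here only as a stand-in for the paper's generalized centrality metric.

```python
import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from([
    ("cellA", "cellB", 0.8), ("cellB", "cellC", 0.5),
    ("cellC", "cellA", 0.3), ("cellA", "cellC", 0.6),
])

in_degree = dict(G.in_degree())                     # number of incoming links
out_degree = dict(G.out_degree())
in_strength = dict(G.in_degree(weight="weight"))    # sum of incoming correlation weights
out_strength = dict(G.out_degree(weight="weight"))
centrality = nx.pagerank(G, weight="weight")        # illustrative centrality, not the paper's metric

for node in G:
    print(node, in_degree[node], out_degree[node],
          in_strength[node], out_strength[node], round(centrality[node], 3))
```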

Keywords: complex networks, correlations, earthquake, hazard assessment

Procedia PDF Downloads 183
716 Nonlocal Beam Models for Free Vibration Analysis of Double-Walled Carbon Nanotubes with Various End Supports

Authors: Babak Safaei, Ahmad Ghanbari, Arash Rahmani

Abstract:

In the present study, the free vibration characteristics of double-walled carbon nanotubes (DWCNTs) are investigated. The small-scale effects are taken into account using Eringen's nonlocal elasticity theory. The nonlocal elasticity equations are implemented into different classical beam theories, namely the Euler-Bernoulli beam theory (EBT), the Timoshenko beam theory (TBT), the Reddy beam theory (RBT), and the Levinson beam theory (LBT), to analyze the free vibrations of DWCNTs, in which each wall of the nanotubes is considered as an individual beam with van der Waals interaction forces. The generalized differential quadrature (GDQ) method is utilized to discretize the governing differential equations of each nonlocal beam model along with four commonly used boundary conditions. Molecular dynamics (MD) simulations are then performed for a series of armchair and zigzag DWCNTs with different aspect ratios and boundary conditions, the results of which are matched with those of the nonlocal beam models to extract the appropriate values of the nonlocal parameter corresponding to each type of chirality, nonlocal beam model and boundary condition. It is found that the present nonlocal beam models, with their proposed correct values of the nonlocal parameter, have good capability to predict the vibrational behavior of DWCNTs, especially for higher aspect ratios.

Keywords: double-walled carbon nanotubes, nonlocal continuum elasticity, free vibrations, molecular dynamics simulation, generalized differential quadrature method

Procedia PDF Downloads 264
715 Sharp Estimates of Oscillatory Singular Integrals with Rough Kernels

Authors: H. Al-Qassem, L. Cheng, Y. Pan

Abstract:

In this paper, we establish sharp bounds for oscillatory singular integrals with an arbitrary real polynomial phase P. Our kernels are allowed to be rough both on the unit sphere and in the radial direction. We show that the bounds grow no faster than log(deg(P)), which is optimal and was first obtained by Parissis and Papadimitrakis for kernels without any radial roughness. Our results substantially improve many previously known results. Among the key ingredients of our methods are a sharp L¹→L² estimate and extrapolation.

Keywords: oscillatory singular integral, rough kernel, singular integral, Orlicz spaces, block spaces, extrapolation, Lᵖ boundedness

Procedia PDF Downloads 433
714 Modeling of Turbulent Flow for Two-Dimensional Backward-Facing Step Flow

Authors: Alex Fedoseyev

Abstract:

This study investigates a simplified model based on the generalized hydrodynamic equations (GHE) for the simulation of turbulent flow over a two-dimensional backward-facing step (BFS) at Reynolds number Re = 132000. The GHE were derived from the generalized Boltzmann equation (GBE), which was obtained from first principles from the chain of Bogolubov kinetic equations and considers particles of finite dimensions. The GHE have additional terms, temporal and spatial fluctuations, compared to the Navier-Stokes equations (NSE). These terms have a timescale multiplier τ, and the GHE become the NSE when τ is zero. The nondimensional τ is a product of the Reynolds number and the squared length-scale ratio, τ = Re·(l/L)², where l is the apparent Kolmogorov length scale and L is a hydrodynamic length scale. BFS flow modeling results obtained by 2D calculations with the NSE alone cannot match the experimental data for Re > 450. One or two additional equations are required for a turbulence model to be added to the NSE, which typically has two to five parameters to be tuned for specific problems. It is shown that the GHE do not require an additional turbulence model, and the turbulent velocity results are in good agreement with the experimental results. A review of several studies on the simulation of flow over the BFS from 1980 to 2023 is provided. Most of these studies used different turbulence models when Re > 1000. In this study, the 2D turbulent flow over a BFS with height H = L/3 (where L is the channel height) at Reynolds number Re = 132000 was investigated using numerical solutions of the GHE (by a finite-element method) and compared to the solutions from the Navier-Stokes equations, the k-ε turbulence model, and experimental results. The comparison included the velocity profiles at X/L = 5.33 (near the end of the recirculation zone, available from the experiment), the recirculation zone length, and the velocity flow field. The mean velocity of the NSE was obtained by averaging the solution over the number of time steps. The solution with a standard k-ε model shows a velocity profile at X/L = 5.33 that has no backward flow. A standard k-ε model underpredicts the experimental recirculation zone length X/L = 7.0 ± 0.5 by a substantial amount of 20-25%, and a more sophisticated turbulence model is needed for this problem. The obtained data confirm that the GHE results are in good agreement with the experimental results for turbulent flow over a two-dimensional BFS. A turbulence model was not required in this case. The computations were stable. The solution time for the GHE is the same as or less than that for the NSE, and significantly less than that for the NSE with a turbulence model. The proposed approach was limited to 2D and only one Reynolds number. Further work will extend this approach to 3D flow and higher Re.

Keywords: backward-facing step, comparison with experimental data, generalized hydrodynamic equations, separation, reattachment, turbulent flow

Procedia PDF Downloads 28
713 Magnetic and Optical Properties of GaFeMnN

Authors: A. Abbad, H. A. Bentounes, W. Benstaali

Abstract:

The full-potential linearized augmented plane wave method (FP-LAPW) within the Generalized Gradient Approximation (GGA) is used to calculate the magnetic and optical properties of quaternary GaFeMnN. The results show that the compound becomes magnetic and half-metallic, and peaks appear at low frequencies in the optical properties.

Keywords: FP-LAPW, LSDA, magnetic moment, reflectivity

Procedia PDF Downloads 494
712 Determinants of the Shadow Economy with an Islamic Orientation: An Application to Organization of Islamic Cooperation and Non-Organization of Islamic Cooperation Countries

Authors: Shabeer Khan

Abstract:

The main objective of Islamic finance is to promote social justice through financial inclusion and the redistribution of economic resources between rich and poor. The approach of Islamic finance is comprehensive in nature and covers both the formal and informal sectors of the economy, first by reducing the gap between the two sectors, and second by using specific Islamic values to reallocate wealth between the formal and informal sectors. Applying the Generalized Method of Moments (GMM) to annual data spanning 1995-2015 for 141 countries, this study explores the determinants of the informal business sector in Organization of Islamic Cooperation (OIC) countries and then compares them with non-OIC countries. Economic freedom and institutional variables, as well as economic growth and money supply, are found to reduce the informal business sector in both OIC and non-OIC nations, while government expenditure is found to increase the informal business sector in both groups of nations. The informal business sector behaves broadly the same in both types of countries, but the majority-Muslim population in OIC economies creates the main difference between the two groups and justifies the potential role of Islamic finance in the informal business sector of OIC nations. The study suggests that institutional quality should be improved and an entrepreneur-friendly business environment provided. This study refines the main features of the informal business sector and discusses their implications for policy design and implementation, particularly in the context of Islamic finance's fight against poverty and inequality and for improving the living standards of informal-sector participants in OIC countries.

Keywords: Islamic finance, informal business sector, generalized method of moments (GMM), OIC

Procedia PDF Downloads 131
711 Flow over an Exponentially Stretching Sheet with Hall and Cross-Diffusion Effects

Authors: Srinivasacharya Darbhasayanam, Jagadeeshwar Pashikanti

Abstract:

This paper analyzes the Soret and Dufour effects on mixed convection flow, heat and mass transfer from an exponentially stretching surface in a viscous fluid with the Hall effect. The governing partial differential equations are transformed into ordinary differential equations using similarity transformations. The nonlinear coupled ordinary differential equations are reduced to a system of linear differential equations using the successive linearization method, and the resulting linear system is then solved using the Chebyshev pseudospectral method. The numerical results for the velocity components, temperature and concentration are presented graphically. The obtained results are compared with previously published results and are found to be in excellent agreement. It is observed from the present analysis that the primary and secondary velocities and the concentration increase, while the temperature decreases, with increasing values of the Soret parameter. An increase in the Dufour parameter increases both the primary and secondary velocities and the temperature, and decreases the concentration.
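
The Chebyshev pseudospectral machinery used here rests on the Chebyshev-Gauss-Lobatto differentiation matrix. The sketch below builds that matrix (following the standard "cheb" construction) and differentiates a test function to check spectral accuracy; it is illustrative only and not tied to the authors' boundary-layer equations.

```python
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix D and Gauss-Lobatto points x on [-1, 1]."""
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

D, x = cheb(24)
u = np.exp(x) * np.sin(5 * x)
du_exact = np.exp(x) * (np.sin(5 * x) + 5 * np.cos(5 * x))
print("max derivative error:", np.max(np.abs(D @ u - du_exact)))
```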

Keywords: exponentially stretching sheet, Hall current, heat and mass transfer, Soret and Dufour effects

Procedia PDF Downloads 185
710 Molecular Dynamics Simulation for Vibration Analysis at Nanocomposite Plates

Authors: Babak Safaei, A. M. Fattahi

Abstract:

Polymer/carbon nanotube nanocomposites have a wide range of promising applications due to their enhanced properties. In this work, a free vibration analysis of single-walled carbon nanotube-reinforced composite plates is conducted, in which the carbon nanotubes are embedded in an amorphous polyethylene. The rule of mixtures based on various plate models, namely the classical plate theory (CLPT), the first-order shear deformation theory (FSDT), and the higher-order shear deformation theory (HSDT), was employed to obtain the fundamental frequencies of the nanocomposite plates. The generalized differential quadrature (GDQ) method was used to discretize the governing differential equations along with the simply supported and clamped boundary conditions. The material properties of the nanocomposite plates were evaluated using molecular dynamics (MD) simulations corresponding to both short-(10,10) SWCNT and long-(10,10) SWCNT composites. The results obtained directly from the MD simulations were then fitted with those calculated by the rule of mixtures to extract appropriate values of the carbon nanotube efficiency parameters accounting for the scale-dependent material properties. Selected numerical results are presented to address the influences of nanotube volume fraction and edge supports on the fundamental frequency of carbon nanotube-reinforced composite plates corresponding to both long- and short-nanotube composites.

Keywords: nanocomposites, molecular dynamics simulation, free vibration, generalized differential quadrature (GDQ) method

Procedia PDF Downloads 303
709 Commercial Automobile Insurance: A Practical Approach of the Generalized Additive Model

Authors: Nicolas Plamondon, Stuart Atkinson, Shuzi Zhou

Abstract:

The insurance industry is usually not the first topic one has in mind when thinking about applications of data science. However, the use of data science in the finance and insurance industry is growing quickly for several reasons, including an abundance of reliable customer data and ferocious competition requiring more accurate pricing. Among the top use cases of data science, we find pricing optimization, customer segmentation, customer risk assessment, fraud detection, marketing, and triage analytics. The objective of this paper is to present an application of the generalized additive model (GAM) to a commercial automobile insurance product: individually rated commercial automobiles. These are vehicles used for commercial purposes, but for which there is not enough volume to price several vehicles at the same time. The GAM was selected as an improvement over the GLM for its ease of use and its wide range of applications. The model was trained using the largest split of the data to determine the model parameters. The remaining part of the data was used as testing data to verify the quality of the modeling activity. We used the Gini coefficient to evaluate the performance of the model. For long-term monitoring, commonly used metrics such as RMSE and MAE will be used. Another topic of interest in the insurance industry is the process of producing the model. We will discuss at a high level the interactions between the different teams within an insurance company that need to work together to produce a model and then monitor its performance over time. Moreover, we will discuss the regulations in place in the insurance industry. Finally, we will discuss the maintenance of the model, the fact that new data does not arrive constantly, and the fact that some metrics can take a long time to become meaningful.
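
An illustrative sketch of a GAM rating model and the Gini evaluation mentioned above is given below. It is not the authors' model: the data are synthetic, the rating variables (vehicle_age, annual_mileage) are hypothetical, and the smoothing parameters are arbitrary.

```python
import numpy as np
import pandas as pd
from statsmodels.gam.api import GLMGam, BSplines

rng = np.random.default_rng(5)
n = 1000
df = pd.DataFrame({
    "vehicle_age": rng.uniform(0, 20, n),
    "annual_mileage": rng.uniform(5, 100, n),
})
df["loss_cost"] = (50 + 8 * np.sqrt(df["annual_mileage"]) - 1.5 * df["vehicle_age"]
                   + rng.normal(0, 10, n))

# GAM with spline smoothers on both rating variables (intercept-only parametric part).
bs = BSplines(df[["vehicle_age", "annual_mileage"]], df=[8, 8], degree=[3, 3])
res = GLMGam.from_formula("loss_cost ~ 1", data=df, smoother=bs,
                          alpha=np.array([1.0, 1.0])).fit()

def normalized_gini(actual, pred):
    """Normalized Gini: how well predictions rank actual losses, 1 = perfect ranking."""
    def g(a, p):
        a = np.asarray(a, dtype=float)[np.argsort(p)[::-1]]
        cum = a.cumsum() / a.sum()
        return cum.sum() / len(a) - (len(a) + 1) / (2 * len(a))
    return g(actual, pred) / g(actual, actual)

print("train Gini:", round(normalized_gini(df["loss_cost"], res.fittedvalues), 3))
```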

Keywords: insurance, data science, modeling, monitoring, regulation, processes

Procedia PDF Downloads 54
708 A Generalised Propensity Score Analysis to Investigate the Influence of Agricultural Research Systems on Greenhouse Gas Emissions

Authors: Spada Alessia, Fiore Mariantonietta, Lamonaca Emilia, Contò Francesco

Abstract:

The bioeconomy offers the chance to face new global challenges and to advance the transition from a waste economy to an economy based on renewable resources and sustainable consumption. Air pollution is a grave issue among these green challenges, mainly caused by anthropogenic factors. The agricultural sector is a major contributor to global greenhouse gas (GHG) emissions due to the lack of efficient management of the resources involved and of research policies. In particular, the livestock sector contributes to emissions of GHGs, deforestation, and nutrient imbalances. More effective agricultural research systems and technologies are crucial in order to improve farm productivity but also to reduce GHG emissions. Using FAOSTAT data concerning the EU countries, the aim of this research is to evaluate the impact of ASTI R&D (Agricultural Science and Technology Indicators) on GHG emissions for EU countries in 2015 by means of generalized propensity score procedures, estimating a dose-response function while also considering a set of covariates. Expected results show that ASTI R&D influences GHG emissions across EU countries. The implications are crucial: reducing GHG emissions by means of R&D-based policies and, correspondingly, achieving eco-friendly management of the required resources by means of available green practices could play a crucial role in fair intra-generational outcomes.
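
The generalized propensity score / dose-response procedure implied here follows the usual three steps (model the continuous treatment, compute the GPS, regress the outcome on treatment and GPS, then average). The sketch below implements those steps on synthetic data; the variable names (R&D intensity T, emissions Y, covariates X) and functional forms are illustrative assumptions, not the authors' specification.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 200
X = sm.add_constant(rng.normal(size=(n, 3)))                          # covariates with intercept
T = X @ np.array([1.0, 0.5, -0.3, 0.2]) + rng.normal(size=n)          # continuous treatment (R&D)
Y = 5.0 - 0.8 * T + 0.05 * T ** 2 + X[:, 1] + rng.normal(size=n)      # outcome (emissions)

# Step 1: model the treatment given covariates and compute the GPS (normal density).
t_model = sm.OLS(T, X).fit()
sigma2 = t_model.scale
gps = np.exp(-(T - t_model.fittedvalues) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)

# Step 2: flexible outcome model in treatment and GPS.
D = np.column_stack([np.ones(n), T, T ** 2, gps, gps ** 2, T * gps])
y_model = sm.OLS(Y, D).fit()

# Step 3: average over the GPS at each treatment level -> dose-response curve.
for t in np.linspace(T.min(), T.max(), 5):
    gps_t = np.exp(-(t - t_model.fittedvalues) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
    Dt = np.column_stack([np.ones(n), np.full(n, t), np.full(n, t ** 2),
                          gps_t, gps_t ** 2, t * gps_t])
    print(f"dose {t:6.2f} -> expected emissions {y_model.predict(Dt).mean():6.2f}")
```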

Keywords: agricultural research systems, dose-response function, generalized propensity score, GHG emissions

Procedia PDF Downloads 256
707 Bayesian Locally Approach for Spatial Modeling of Visceral Leishmaniasis Infection in Northern and Central Tunisia

Authors: Kais Ben-Ahmed, Mhamed Ali-El-Aroui

Abstract:

This paper develops a Local Generalized Linear Spatial Model (LGLSM) to describe the spatial variation of Visceral Leishmaniasis (VL) infection risk in northern and central Tunisia. The response from each region is the number of affected children less than five years of age recorded from 1996 through 2006 in Tunisian pediatric departments, treated as Poisson county-level data. The model includes climatic factors, namely the average annual rainfall, extreme values of low temperatures in winter and high temperatures in summer to characterize the climate of each region according to a continentality index, the pluviometric quotient of Emberger (Q2) to characterize bioclimatic regions, and a component for residual extra-Poisson variation. The statistical results show a progressive increase in the number of affected children in regions with a high continentality index and low mean yearly rainfall. On the other hand, an increase in the pluviometric quotient of Emberger contributed to a significant increase in the VL incidence rate. When compared with the original GLSM, the Bayesian local model is an improvement and gives a better approximation of the Tunisian VL risk. For Bayesian inference, we use vague priors for all model parameters and a Markov chain Monte Carlo method.

Keywords: generalized linear spatial model, local model, extra-Poisson variation, continentality index, visceral leishmaniasis, Tunisia

Procedia PDF Downloads 373
706 Representation of the Solution of One Dynamical System on the Plane

Authors: Kushakov Kholmurodjon, Muhammadjonov Akbarshox

Abstract:

This paper is devoted to a system of second-order nonlinear differential equations with a special right-hand side, namely a linear part plus a third-order polynomial of a special form. It is shown that for some relations between the parameters, there is a second-order curve such that trajectories leaving the points of this curve remain on it. Thus, the curve is invariant with respect to the given system. Moreover, this system is invariant under a non-degenerate linear transformation of variables. The form of this curve, depending on the relations between the parameters and the eigenvalues of the matrix, is established. All solutions of this system of differential equations are given analytically.

Keywords: dynamic system, ellipse, hyperbola, Hess system, polar coordinate system

Procedia PDF Downloads 163
705 Automated Natural Hazard Zonation System with Internet-SMS Warning: Distributed GIS for Sustainable Societies Creating Schema and Interface for Mapping and Communication

Authors: Devanjan Bhattacharya, Jitka Komarkova

Abstract:

The research describes the implementation of a novel, stand-alone system for dynamic hazard warning. The system uses existing infrastructure already in place, such as mobile networks and a laptop/PC, together with a small installation package. The geospatial datasets are the maps of a region, which are likewise inexpensive. Hence there is little need for new investment, and the system reaches everyone with a mobile phone. A novel architecture for hazard assessment and warning is introduced, in which major ICT technologies are interfaced to give a unique WebGIS-based, dynamic, real-time geohazard warning communication system. This is a new architecture for integrating WebGIS with telecommunication technology. Existing technologies are interfaced in a novel architectural design to address a neglected domain through dynamically updatable WebGIS-based warning communication. The work presents this new architecture and addresses hazard warning techniques in a sustainable and user-friendly manner. The coupling of hazard zonation and hazard warning procedures into a single system is shown. A generalized architecture for handling a range of geohazards has been developed. Hence the developmental work presented here can be summarized as: the development of an internet-SMS-based automated geohazard warning communication system; the integration of a warning communication system with a hazard evaluation system; the interfacing of different open-source technologies in the design and development of a warning system; the modularization of different technologies in the development of a warning communication system; and automated data creation, transformation and dissemination over different interfaces. The architecture of the developed warning system has been functionally automated and generalized enough to be used for any hazard, and the setup requirements have been kept to a minimum.

Keywords: geospatial, web-based GIS, geohazard, warning system

Procedia PDF Downloads 372
704 A Bayesian Parameter Identification Method for Thermorheological Complex Materials

Authors: Michael Anton Kraus, Miriam Schuster, Geralt Siebert, Jens Schneider

Abstract:

Polymers have gained increasing interest as construction materials in civil engineering applications over the last years. Polymeric materials typically show time- and temperature-dependent material behavior, which is accounted for in the context of the theory of linear viscoelasticity. Within the context of this paper, the authors show that some polymeric interlayers for laminated glass cannot be considered thermorheologically simple, as they do not follow a single time-temperature superposition principle (TTSP); thus, a methodology for identifying thermorheologically complex constitutive behavior is needed. Dynamic mechanical thermal analysis (DMTA) tests in tensile and shear mode as well as differential scanning calorimetry (DSC) tests are carried out on the interlayer material ethylene-vinyl acetate (EVA). A novel Bayesian framework for the master-curving process as well as for the detection and parameter identification of the TTSPs, along with their associated Prony series, is derived and applied to the EVA material data. To the best of our knowledge, this is the first time an uncertainty quantification of the Prony series in a Bayesian context has been shown. Within this paper, we successfully apply the derived Bayesian methodology to the EVA material data to obtain meaningful master curves and TTSPs. Uncertainties occurring in this process can be well quantified. We found that EVA needs two TTSPs with two associated generalized Maxwell models. As the methodology is kept general, the derived framework could also be applied to other thermorheologically complex polymers for parameter identification purposes.
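
For readers unfamiliar with the generalized Maxwell model, the sketch below fits a Prony series G(t) = G_inf + Σ g_i·exp(-t/τ_i) to synthetic relaxation-modulus data, with the relaxation times fixed on a log grid so the coefficients follow from non-negative least squares. This is a simple frequentist stand-in for the Bayesian identification described above; all data values are made up.

```python
import numpy as np
from scipy.optimize import nnls

t = np.logspace(-2, 4, 60)                         # time [s]
G_true = 0.5 + 2.0 * np.exp(-t / 1.0) + 1.0 * np.exp(-t / 100.0)
G_data = G_true * (1 + 0.01 * np.random.default_rng(8).normal(size=t.size))

taus = np.logspace(-2, 4, 7)                       # fixed Prony relaxation times
A = np.column_stack([np.ones_like(t)] + [np.exp(-t / tau) for tau in taus])
coef, _ = nnls(A, G_data)                          # non-negative Prony coefficients

print("G_inf ≈", round(coef[0], 3))
for tau, g in zip(taus, coef[1:]):
    if g > 1e-6:
        print(f"tau = {tau:8.2f} s, g = {g:.3f}")
```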

Keywords: Bayesian parameter identification, generalized Maxwell model, linear viscoelasticity, thermorheological complexity

Procedia PDF Downloads 230
703 Initial Periodontal Therapy and Follow-up in a Periodontitis Patient: A Case Report

Authors: Yasir Karabacak

Abstract:

Objective: The aim of periodontal therapy is to control and eliminate inflammation in order to halt disease progression. Initial periodontal therapy (IPT), including scaling and root planing (SRP), can control periodontal disease in most cases of periodontitis; maintaining good oral hygiene by the patient is also fundamental. The aim of this case report is to present IPT and 3-month follow-up results in a patient with periodontitis. Materials and Methods: The IPT of a 63-year-old non-smoking male with generalized periodontitis is presented. The patient had no history of systemic disease. The intraoral examination revealed marked gingival inflammation as well as plaque accumulation and significant calculus deposits. On radiographic examination, severe bone loss was evident. The patient was diagnosed with generalized advanced periodontitis. Initial periodontal therapy, including oral hygiene instructions and quadrant-based SRP under local anesthesia, was performed using hand and ultrasonic instruments. No antibiotics were prescribed. The patient was recalled 4 weeks after IPT. Results: Favorable clinical improvement was obtained. Gingival inflammation was resolved significantly. A reduction of the mean probing depth from 2.4 mm at baseline to 1.9 mm was observed. The patient presented with a good standard of oral hygiene. The plaque score decreased from 54.0% at baseline to 17.0%. In addition, the percentage of sites with bleeding on probing decreased from 80.0% at baseline to 44.0%. The patient was scheduled for maintenance therapy every three months. Conclusion: The level of oral hygiene has a great impact on the periodontal treatment outcome and supports periodontal therapy.

Keywords: initial periodontal therapy, follow-up, periodontitis, case report

Procedia PDF Downloads 52
702 Evaluation of Newly Synthesized Steroid Derivatives Using In silico Molecular Descriptors and Chemometric Techniques

Authors: Milica Ž. Karadžić, Lidija R. Jevrić, Sanja Podunavac-Kuzmanović, Strahinja Z. Kovačević, Anamarija I. Mandić, Katarina Penov-Gaši, Andrea R. Nikolić, Aleksandar M. Oklješa

Abstract:

This study considered the selection of in silico molecular descriptors and models for the description of newly synthesized steroid derivatives and their characterization using chemometric techniques. Multiple linear regression (MLR) models were established and gave the best molecular descriptors for quantitative structure-retention relationship (QSRR) modeling of the retention of the investigated molecules. The MLR models were free of multicollinearity among the selected molecular descriptors according to the variance inflation factor (VIF) values. The molecular descriptors used were ranked using the generalized pair correlation method (GPCM). In this method, significant differences between independent variables can be detected even when their correlations with the dependent variable are almost equal. The generated MLR models were statistically validated and cross-validated, and the best models were kept. The models were ranked using the sum of ranking differences (SRD) method. With this method, the most consistent QSRR model can be found, and similarities or dissimilarities between the models can be observed. In this study, SRD was performed using the average values of the experimentally observed data as the gold standard. The chemometric analysis was conducted in order to characterize the newly synthesized steroid derivatives for further investigation regarding their potential biological activity and further synthesis. This article is based upon work from COST Action (CM1105), supported by COST (European Cooperation in Science and Technology).
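
The VIF-based multicollinearity check mentioned above can be sketched as follows. The descriptor names and values are synthetic placeholders; descriptors with VIF well above roughly 5-10 would usually be flagged before building MLR/QSRR models.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

rng = np.random.default_rng(7)
desc = pd.DataFrame({
    "logP": rng.normal(2.5, 0.8, 30),
    "polar_surface_area": rng.normal(60, 15, 30),
    "molar_refractivity": rng.normal(100, 10, 30),
})
X = add_constant(desc)
for i, name in enumerate(X.columns):
    if name != "const":
        print(name, round(variance_inflation_factor(X.values, i), 2))
```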

Keywords: generalized pair correlation method, molecular descriptors, regression analysis, steroids, sum of ranking differences

Procedia PDF Downloads 314
701 A Polynomial Relationship for Prediction of COD Removal Efficiency of Cyanide-Inhibited Wastewater in Aerobic Systems

Authors: Eze R. Onukwugha

Abstract:

The presence of cyanide in wastewater is known to inhibit the normal functioning of bioreactors, since it tends to poison the reactor microorganisms. Bench-scale models of activated sludge reactors with varying aspect ratios were operated for the treatment of cassava wastewater at several values of hydraulic retention time (HRT). The different values of HRT were achieved by using a peristaltic pump to vary the rate of introduction of the wastewater into the reactor. The main parameters monitored are the cyanide concentration and the respective COD values of the influent and effluent. These observed values were then used to derive a polynomial model for the prediction of treatment (COD removal) efficiency.
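
A polynomial relationship of the kind described can be fitted as in the sketch below. The HRT and removal-efficiency values are made up for demonstration only and the choice of a second-degree polynomial is an assumption.

```python
import numpy as np

hrt = np.array([4, 8, 12, 16, 20, 24], dtype=float)            # hydraulic retention time [h]
cod_removal = np.array([38, 55, 67, 74, 78, 80], dtype=float)   # COD removal efficiency [%]

coeffs = np.polyfit(hrt, cod_removal, deg=2)                    # second-degree polynomial fit
model = np.poly1d(coeffs)
print("fitted polynomial:", model)
print("predicted removal at HRT = 18 h:", round(model(18.0), 1), "%")
```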

Keywords: wastewater, aspect ratio, cyanide-inhibited wastewater, modeling

Procedia PDF Downloads 52
700 Direct CP Violation in Baryonic B-Hadron Decays

Authors: C. Q. Geng, Y. K. Hsiao

Abstract:

We study direct CP-violating asymmetries (CPAs) in the baryonic B decays B⁻ → pp̄M and the Λb decays Λb → pM and Λb → J/ψ pM, with M = π⁻, K⁻, ρ⁻, K*⁻, based on the generalized factorization method in the standard model (SM). In particular, we show that the CPAs in the vector modes B⁻ → pp̄K*⁻ and Λb → pK*⁻ can be as large as 20%. We also discuss the simplest purely baryonic decays Λb → pp̄n, pp̄Λ, Λp̄Λ, and ΛΛ̄Λ. We point out that some of these CPAs are promising candidates to be measured at current as well as future B facilities.

Keywords: CP violation, B decays, baryonic decays, Λb decays

Procedia PDF Downloads 233
699 Magnetic and Optical Properties of Quaternary GaFeMnN

Authors: B. Bouadjemi, S. Bentata, A. Abbad, W. Benstaali

Abstract:

The full-potential linearized augmented plane wave method (FP-LAPW) within the Generalized Gradient Approximation (GGA) is used to calculate the magnetic and optical properties of quaternary GaFeMnN. The results show that the compound becomes magnetic and half-metallic, and peaks appear at low frequencies in the optical properties.

Keywords: optical properties, DFT, spintronics, wave

Procedia PDF Downloads 524