Search results for: spiking neuron models
6574 The Martingale Options Price Valuation for European Puts Using Stochastic Differential Equation Models
Authors: H. C. Chinwenyi, H. D. Ibrahim, F. A. Ahmed
Abstract:
In modern financial mathematics, valuing derivatives such as options is often a tedious task, simply because their fair and correct future prices are probabilistic. This paper examines three different Stochastic Differential Equation (SDE) models in finance: the Constant Elasticity of Variance (CEV) model, the Black-Karasinski model, and the Heston model. The martingale option price valuation formulas for these three models were obtained using the replicating portfolio method. The derived martingale option price valuation equations for the SDE models were then solved numerically using the Monte Carlo method, implemented in MATLAB. Furthermore, results from numerical examples using published Nigerian Stock Exchange (NSE) All-Share Index data show the effect of an increase in the underlying asset value (stock price) on the value of the European put option for these models. From the results obtained, we see that an increase in the stock price yields a decrease in the European put option price. Hence, this guides the option holder in making a sound decision not to exercise the option.
Keywords: equivalent martingale measure, European put option, Girsanov theorem, martingales, Monte Carlo method, option price valuation formula
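As a hedged illustration of the valuation step described above, the following sketch prices a European put by Monte Carlo under the risk-neutral (equivalent martingale) measure, using geometric Brownian motion, the constant-volatility special case of the CEV model. All parameter values are assumed for illustration, and Python stands in for the paper's MATLAB implementation.

```python
import numpy as np

# Monte Carlo valuation of a European put under the risk-neutral measure.
# Minimal sketch: GBM dynamics (constant-volatility special case of CEV);
# all parameter values below are illustrative, not taken from the paper.
rng = np.random.default_rng(seed=42)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_paths = 100_000

Z = rng.standard_normal(n_paths)
# Terminal price simulated in one step (exact for GBM).
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)

# Discounted expected payoff under the martingale measure.
put_price = np.exp(-r * T) * np.mean(np.maximum(K - ST, 0.0))
print(f"European put price: {put_price:.4f}")

# Raising the initial stock price lowers the put value, as the paper observes.
for s0 in (90.0, 100.0, 110.0):
    ST = s0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
    print(s0, round(np.exp(-r * T) * np.mean(np.maximum(K - ST, 0.0)), 4))
```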
Procedia PDF Downloads 132
6573 The Hyperbolic Smoothing Approach for Automatic Calibration of Rainfall-Runoff Models
Authors: Adilson Elias Xavier, Otto Corrêa Rotunno Filho, Paulo Canedo De Magalhães
Abstract:
This paper addresses the issue of automatic parameter estimation in conceptual rainfall-runoff (CRR) models. Due to threshold structures commonly occurring in CRR models, the associated mathematical optimization problems have the significant characteristic of being strongly non-differentiable. To face this difficulty, the proposed resolution method adopts a smoothing strategy using a special class of C∞ differentiable functions. The final estimation solution is obtained by solving a sequence of differentiable subproblems which gradually approach the original conceptual problem. This technique, called the Hyperbolic Smoothing Method (HSM), makes it possible to apply the most powerful minimization algorithms and overcomes the main difficulties presented by the original CRR problem. A set of computational experiments is presented to illustrate both the reliability and the efficiency of the proposed approach.
Keywords: rainfall-runoff models, automatic calibration, hyperbolic smoothing method
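The core of the HSM can be illustrated with the standard hyperbolic smoothing function, a C∞ approximation of the non-differentiable max(x, 0) that tightens as the smoothing parameter τ shrinks. The sketch below is a minimal illustration of this idea, not the authors' calibration code.

```python
import numpy as np
import matplotlib.pyplot as plt

def hyperbolic_smoothing(x, tau):
    """C-infinity approximation of max(x, 0); converges to it as tau -> 0."""
    return 0.5 * (x + np.sqrt(x**2 + tau**2))

x = np.linspace(-2, 2, 400)
for tau in (1.0, 0.3, 0.05):
    plt.plot(x, hyperbolic_smoothing(x, tau), label=f"tau = {tau}")
plt.plot(x, np.maximum(x, 0.0), "k--", label="max(x, 0)")
plt.legend()
plt.title("Hyperbolic smoothing of a threshold nonlinearity")
plt.show()
```

Solving the calibration on a decreasing sequence of τ values yields the differentiable subproblems that gradually approach the original non-smooth problem.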
Procedia PDF Downloads 149
6572 Developing Location-Allocation Models in the Three-Echelon Supply Chain
Authors: Mehdi Seifbarghy, Zahra Mansouri
Abstract:
In this paper, several location-allocation models are developed in a multi-echelon supply chain including suppliers, manufacturers, distributors, and retailers. The objectives are maximizing demand coverage, minimizing the total distance of distributors from suppliers, minimizing facility establishment costs, and minimizing environmental effects. Since the given models are multi-objective in nature, we suggest a number of goal-based solution techniques, such as the L-P metric, goal programming, multi-choice goal programming, and goal attainment, to solve the problems.
Keywords: location, multi-echelon supply chain, covering, goal programming
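A minimal sketch of the kind of goal-based scalarization suggested above is given below, using weighted goal programming with deviation variables and scipy's linear programming solver; the two decision variables, targets, and weights are hypothetical placeholders, not the paper's supply-chain data.

```python
from scipy.optimize import linprog

# Weighted goal programming: penalize the shortfall of a coverage goal and
# the excess of a cost goal. Variable order: [x1, x2, n1, p1, n2, p2],
# where n/p are the negative/positive deviations from each goal.
c = [0, 0, 1.0, 0, 0, 0.5]            # minimize w1*n1 + w2*p2
A_eq = [
    [3, 2, 1, -1, 0, 0],              # coverage: 3*x1 + 2*x2 + n1 - p1 = 12
    [4, 3, 0, 0, 1, -1],              # cost:     4*x1 + 3*x2 + n2 - p2 = 20
]
b_eq = [12, 20]
bounds = [(0, 5), (0, 5)] + [(0, None)] * 4

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x1, x2, n1, p1, n2, p2 = res.x
print(f"x = ({x1:.2f}, {x2:.2f}); coverage shortfall {n1:.2f}, cost excess {p2:.2f}")
```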
Procedia PDF Downloads 559
6571 A Machine Learning Model for Dynamic Prediction of Chronic Kidney Disease Risk Using Laboratory Data, Non-Laboratory Data, and Metabolic Indices
Authors: Amadou Wurry Jallow, Adama N. S. Bah, Karamo Bah, Shih-Ye Wang, Kuo-Chung Chu, Chien-Yeh Hsu
Abstract:
Chronic kidney disease (CKD) is a major public health challenge with high prevalence, rising incidence, and serious adverse consequences. Developing effective risk prediction models is a cost-effective approach to predicting and preventing its complications. This study aimed to develop an accurate machine learning model that can dynamically identify individuals at risk of CKD using various kinds of diagnostic data, with or without laboratory data, at different follow-up points. Creatinine is a key component used to predict CKD. These models will enable affordable and effective screening for CKD even with incomplete patient data, such as the absence of creatinine testing. This retrospective cohort study included data on 19,429 adults provided by a private research institute and screening laboratory in Taiwan, gathered between 2001 and 2015. Univariate Cox proportional hazards regression analyses were performed to determine the variables with high prognostic value for predicting CKD. We then identified interacting variables and grouped them according to diagnostic data categories. Our models used three types of data gathered at three points in time: non-laboratory data, laboratory data, and metabolic indices. Next, we used subgroups of variables within each category to train two machine learning models (Random Forest and XGBoost). Our machine learning models can dynamically discriminate individuals at risk of developing CKD. All the models performed well using all three kinds of data, with or without laboratory data. Using only non-laboratory data (such as age, sex, body mass index (BMI), and waist circumference), both models predict chronic kidney disease as accurately as models using laboratory and metabolic indices data. Our machine learning models have demonstrated the use of different categories of diagnostic data for CKD prediction, with or without laboratory data. The machine learning models are simple to use and flexible because they work even with incomplete data and can be applied in any clinical setting, including settings where laboratory data is difficult to obtain.
Keywords: chronic kidney disease, glomerular filtration rate, creatinine, novel metabolic indices, machine learning, risk prediction
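A sketch of the non-laboratory-only branch of such a model is shown below, using scikit-learn's random forest on synthetic stand-in data, since the Taiwanese cohort is not public; the feature list mirrors the abstract, while the risk signal is an assumption made purely so the example runs.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.integers(20, 80, n),      # age (years)
    rng.integers(0, 2, n),        # sex (0/1)
    rng.normal(25, 4, n),         # body mass index
    rng.normal(85, 12, n),        # waist circumference (cm)
])
# Hypothetical CKD risk signal, for illustration only.
logit = 0.05 * (X[:, 0] - 50) + 0.08 * (X[:, 2] - 25)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("AUC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))
```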
Procedia PDF Downloads 105
6570 Intensive Use of Software in Teaching and Learning Calculus
Authors: Nodelman V.
Abstract:
Despite serious difficulties in the assimilation of the conceptual system of Calculus, software is used in the educational process only occasionally, and even then mainly for illustration purposes. There are several reasons: the non-trivial nature of the studied material; lack of skills in working with software; fear of losing time while working with software; the variety of software, with differing interfaces, syntax, and working methods; the need to find suitable models and become familiar with them; and incomplete compatibility of the available models with the content and teaching methods of the studied material. This paper proposes an active use of the developed non-commercial software VusuMatica, which removes these restrictions through broad support for the studied mathematical material (and not only Calculus), eliminating the need to select the right software; an emphasis on the unity of mathematics and its intra-subject and interdisciplinary relations; a user-friendly interface; the absence of special syntax for defining mathematical objects; ease of building models of the studied material and manipulating them; and unlimited flexibility of models thanks to the ability to redefine objects, which allows exploring object characteristics and considering examples and counterexamples of the concepts under study. The construction of models is based on an original approach to the analysis of the structure of the studied concepts. Thanks to the ease of construction, students are able not only to use ready-made models but also to create them on their own and explore the studied material with their help. The presentation includes examples of using VusuMatica in studying the concepts of limit and continuity of a function, its derivative, and integral.
Keywords: counterexamples, limitations and requirements, software, teaching and learning calculus, user-friendly interface and syntax
Procedia PDF Downloads 81
6569 Development and Validation of HPLC Method on Determination of Acesulfame-K in Jelly Drink Product
Authors: Candra Irawan, David Yudianto, Ahsanu Nadiyya, Dewi Anna Br Sitepu, Hanafi, Erna Styani
Abstract:
Jelly drinks are produced from a combination of natural and synthetic materials, including acesulfame potassium (acesulfame-K) as a synthetic sweetener. Acesulfame-K content in jelly drinks can be determined by High-Performance Liquid Chromatography (HPLC), but the method needed validation because of changes made to be more efficient and cheaper: the Carrez reagent addition step was skipped, and the ratio of the mixed mobile phase (potassium dihydrogen phosphate and acetonitrile) was changed from 75:25 to 90:10. This study was conducted to evaluate the performance of the method for determining acesulfame-K content in jelly drinks by HPLC. The method follows DIN EN ISO 12856 (1999), Foodstuffs: Determination of acesulfame-K, aspartame and saccharin. The correlation coefficient (r) from the linearity test was 0.9987 over the concentration range 5-100 mg/L. The detection limit was 0.9153 ppm, while the quantitation limit was 1.1932 ppm. Recovery on the accuracy test, for samples spiked at 100 mg/L, was 102-105%. Relative Standard Deviation (RSD) values for the precision and homogeneity tests were 2.815% and 4.978%, respectively. Meanwhile, the comparative and stability tests gave tstat (0.136) < ttable (2.101) and |µ1-µ2| (1.502) ≤ 0.3×CV Horwitz, and the ruggedness test gave tstat < ttable. It can be concluded that the HPLC method for the determination of acesulfame-K in jelly drink products is valid and can be used for analysis with good performance.
Keywords: acesulfame-K, jelly drink, HPLC, validation
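The linearity, LOD, and LOQ figures reported above can be reproduced in form (not in value) with a short calibration-curve computation; the sketch below uses the common ICH-style estimates LOD = 3.3σ/S and LOQ = 10σ/S, and the peak areas are invented for illustration.

```python
import numpy as np

conc = np.array([5, 10, 25, 50, 75, 100], dtype=float)        # standards (mg/L)
area = np.array([51, 103, 252, 498, 751, 1002], dtype=float)  # made-up peak areas

slope, intercept = np.polyfit(conc, area, 1)   # least-squares calibration line
pred = slope * conc + intercept
r = np.corrcoef(conc, area)[0, 1]              # linearity check
sigma = np.std(area - pred, ddof=2)            # residual standard deviation

print(f"r = {r:.4f}")
print(f"LOD = {3.3 * sigma / slope:.3f} mg/L, LOQ = {10 * sigma / slope:.3f} mg/L")
```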
Procedia PDF Downloads 129
6568 Nanoparticles on Biological Biomarkers Models: Paramecium tetraurelia and Helix aspersa
Authors: H. Djebar, L. Khene, M. Boucenna, M. R. Djebar, M. N. Khebbeb, M. Djekoun
Abstract:
Currently in toxicology, the use of alternative models makes it possible to understand the mechanisms of toxicity at different cellular levels. The objective of our research is to determine the effect of ZnO, TiO2, AlO2, and FeO2 nanoparticles (NPs) on the freshwater ciliate protist Paramecium sp. and on Helix aspersa. The results obtained show that NPs increased antioxidative enzyme activities, such as catalase and glutathione-S-transferase, and GSH levels. Also, cells treated with high concentrations of NPs showed a high level of MDA. In conclusion, observations of growth and enzymatic parameters suggest, on the one hand, that treatment with NPs provokes oxidative stress and, on the other, that the snail and the paramecium are excellent alternative models for ecotoxicological studies.
Keywords: NPs, GST, catalase, GSH, MDA, toxicity, snail and paramecium
Procedia PDF Downloads 281
6567 Analysis of Sound Loss from the Highway Traffic through Lightweight Insulating Concrete Walls and Artificial Neural Network Modeling of Sound Transmission
Authors: Mustafa Tosun, Kevser Dincer
Abstract:
In this study, an analysis was conducted on whether the lightweight-concrete walled structures used in four climatic regions of Turkey are also capable of insulating sound. As a new approach, first the walls' thermal insulation sufficiency was calculated, and then artificial neural network (ANN) modeling was applied to their cross-sections to check how they transmit sound. The ANN was trained and tested using the MATLAB toolbox on a personal computer. The ANN input parameters were the thickness of the lightweight concrete wall, the frequency, and the density of the lightweight concrete wall, while the transmitted sound was the output parameter. When the results of the TS analysis and those of the ANN modeling are evaluated together, this study finds that sound transmission loss increases at higher frequencies, higher wall densities, and larger wall cross-sections.
Keywords: artificial neural network, lightweight concrete, sound insulation, sound transmission loss
Procedia PDF Downloads 252
6566 A Large Language Model-Driven Method for Automated Building Energy Model Generation
Authors: Yake Zhang, Peng Xu
Abstract:
The development of building energy models (BEM) required for architectural design and analysis is a time-consuming and complex process, demanding a deep understanding and proficient use of simulation software. To streamline the generation of complex building energy models, this study proposes an automated method for generating them using a large language model and a BEM library, aimed at improving the efficiency of model generation. The method leverages a large language model to parse user-specified requirements for target building models, extracting key features such as building location, window-to-wall ratio, and thermal performance of the building envelope. The BEM library is used to retrieve energy models that match the target building's characteristics, serving as reference information for the large language model to enhance the accuracy and relevance of the generated model and allowing the creation of a building energy model that adapts to the user's modeling requirements. This study enables the automatic creation of building energy models from natural language inputs, reducing the professional expertise required for model development while significantly decreasing the time and complexity of manual configuration. In summary, this study provides an efficient and intelligent solution for building energy analysis and simulation, demonstrating the potential of large language models in the field of building simulation and performance modeling.
Keywords: artificial intelligence, building energy modelling, building simulation, large language model
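A heavily simplified sketch of the two-stage pipeline (LLM feature extraction, then library retrieval) follows; the llm_extract stub, the library entries, and the similarity score are hypothetical placeholders standing in for the paper's system.

```python
import json

def llm_extract(request: str) -> dict:
    # In the paper this is a large language model prompted to return
    # structured features; a canned response keeps the sketch runnable.
    return {"location": "Shanghai", "wwr": 0.40, "u_wall": 0.5}

# Hypothetical BEM library entries with a few matching features each.
bem_library = [
    {"location": "Shanghai", "wwr": 0.35, "u_wall": 0.6, "file": "office_a.idf"},
    {"location": "Beijing", "wwr": 0.50, "u_wall": 0.4, "file": "office_b.idf"},
]

def score(target: dict, entry: dict) -> float:
    # Crude similarity: reward a location match, penalize feature distance.
    s = 1.0 if entry["location"] == target["location"] else 0.0
    return s - abs(entry["wwr"] - target["wwr"]) - abs(entry["u_wall"] - target["u_wall"])

target = llm_extract("A Shanghai office, 40% glazing, well-insulated walls")
best = max(bem_library, key=lambda e: score(target, e))
print("reference model:", json.dumps(best))
```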
Procedia PDF Downloads 25
6565 A Novel Algorithm for Parsing IFC Models
Authors: Raninder Kaur Dhillon, Mayur Jethwa, Hardeep Singh Rai
Abstract:
Information technology has made pivotal progress across disparate disciplines, one of which is the AEC (Architecture, Engineering, and Construction) industry. CAD is a form of computer-aided building modeling that architects, engineers, and contractors use to create and view two- and three-dimensional models. The AEC industry also uses building information modeling (BIM), a newer computerized modeling system that can create four-dimensional models; this software can greatly increase productivity in the AEC industry. BIM models generate open-standard IFC (Industry Foundation Classes) files, which aim at interoperability for exchanging information throughout the project lifecycle among various disciplines. The methods developed in previous studies require either an IFC schema or an MVD, plus software applications such as an IFC model server or a BIM authoring tool, to extract a partial or complete IFC instance model. This paper proposes an efficient algorithm for extracting a partial or total model from an Industry Foundation Classes (IFC) instance model without an IFC schema or a complete IFC model view definition (MVD).
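One schema-free way to extract a partial instance model, in the spirit of the algorithm described, is to index the lines of the STEP physical file and take the transitive closure of #id references from a chosen root entity. The sketch below assumes, for simplicity, that each instance occupies a single line; it is an illustration, not the authors' algorithm.

```python
import re
from collections import deque

LINE = re.compile(r"#(\d+)\s*=\s*([A-Z0-9_]+)\s*\((.*)\);")
REF = re.compile(r"#(\d+)")

def parse_ifc(path):
    """Index every instance line: id -> (IFC type, referenced ids, raw line)."""
    instances = {}
    with open(path, encoding="utf-8", errors="replace") as f:
        for raw in f:
            m = LINE.match(raw.strip())
            if m:
                iid, ifc_type, args = int(m.group(1)), m.group(2), m.group(3)
                refs = [int(r) for r in REF.findall(args)]
                instances[iid] = (ifc_type, refs, raw.strip())
    return instances

def partial_model(instances, root_id):
    """Breadth-first closure over #id references starting from root_id."""
    keep, queue = set(), deque([root_id])
    while queue:
        iid = queue.popleft()
        if iid in keep or iid not in instances:
            continue
        keep.add(iid)
        queue.extend(instances[iid][1])
    return [instances[i][2] for i in sorted(keep)]

# Usage (hypothetical file name and root entity id):
# model = parse_ifc("building.ifc")
# for line in partial_model(model, 42):
#     print(line)
```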
Procedia PDF Downloads 300
6564 Forecasting Performance Comparison of Autoregressive Fractional Integrated Moving Average and Jordan Recurrent Neural Network Models on the Turbidity of Stream Flows
Authors: Daniel Fulus Fom, Gau Patrick Damulak
Abstract:
In this study, the Autoregressive Fractional Integrated Moving Average (ARFIMA) and Jordan Recurrent Neural Network (JRNN) models were employed to forecast the daily turbidity flow of White Clay Creek (WCC). The two methods were applied to the log-difference series of the daily turbidity flow series of WCC. The error measures employed to investigate the forecasting performance of the ARFIMA and JRNN models are the Root Mean Square Error (RMSE) and the Mean Absolute Error (MAE). The outcome of the investigation revealed that the forecasting performance of the JRNN technique is better than that of the ARFIMA technique in the mean square error sense. The results of the ARFIMA and JRNN models were obtained by simulating the models using MATLAB version 8.03. The significance of using the log-difference series rather than the difference series is that the log-difference transform better stabilizes the turbidity flow series for both the ARFIMA and JRNN models.
Keywords: autoregressive, mean absolute error, neural network, root mean square error
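The preprocessing and scoring described above are easy to make concrete: the sketch below applies the log-difference transform to a synthetic positive series and computes RMSE and MAE for a naive persistence forecast. It is an illustration of the pipeline, not the paper's MATLAB code.

```python
import numpy as np

rng = np.random.default_rng(1)
turbidity = np.exp(np.cumsum(rng.normal(0, 0.1, 500)) + 3.0)  # synthetic, positive

log_diff = np.diff(np.log(turbidity))   # the stabilized series the models fit

def rmse(obs, pred):
    return np.sqrt(np.mean((obs - pred) ** 2))

def mae(obs, pred):
    return np.mean(np.abs(obs - pred))

# One-step persistence forecast as a trivial baseline.
pred, obs = log_diff[:-1], log_diff[1:]
print(f"RMSE = {rmse(obs, pred):.4f}, MAE = {mae(obs, pred):.4f}")
```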
Procedia PDF Downloads 268
6563 Preliminary Conceptions of 3D Prototyping Model to Experimental Investigation in Hypersonic Shock Tunnels
Authors: Thiago Victor Cordeiro Marcos, Joao Felipe de Araujo Martos, Ronaldo de Lima Cardoso, David Romanelli Pinto, Paulo Gilberto de Paula Toro, Israel da Silveira Rego, Antonio Carlos de Oliveira
Abstract:
Currently, the use of 3D rapid prototyping, also known as 3D printing, is being investigated by universities around the world as an innovative, fast, flexible, and cheap technique for directly manufacturing plastic models that are lighter and have complex geometries, to be tested in hypersonic shock tunnels. Initially, the purpose is to integrate prototyped parts with metal models currently manufactured through conventional machining, and thereafter to replace them with completely prototyped models. The mechanical design of models to be tested in a hypersonic shock tunnel is based on conventional manufacturing processes and is therefore limited to standard forms and geometries. 3D rapid prototyping offers a range of options enabling innovative geometries and new approaches to model design. The conception and design of a prototyped model for a hypersonic shock tunnel should be rethought and adapted relative to conventional manufacturing processes, in order to fully exploit the creativity and flexibility allowed by 3D prototyping. The objective of this paper is to compare the conception and design of a 3D rapid prototyped model with that of a conventionally machined model, showing the advantages and disadvantages of each process and the benefits that 3D prototyping can bring to the manufacture of models to be tested in hypersonic shock tunnels.
Keywords: 3D printing, 3D prototyping, experimental research, hypersonic shock tunnel
Procedia PDF Downloads 469
6562 Neural Machine Translation for Low-Resource African Languages: Benchmarking State-of-the-Art Transformer for Wolof
Authors: Cheikh Bamba Dione, Alla Lo, Elhadji Mamadou Nguer, Siley O. Ba
Abstract:
In this paper, we propose two neural machine translation (NMT) systems (French-to-Wolof and Wolof-to-French) based on sequence-to-sequence with attention and transformer architectures. We trained our models on a parallel French-Wolof corpus of about 83k sentence pairs. Because of the low-resource setting, we experimented with advanced methods for handling data sparsity, including subword segmentation, back-translation, and the copied-corpus method. We evaluate the models using the BLEU score and find that the transformer outperforms the classic seq2seq model in all settings, in addition to being less sensitive to noise. In general, the best scores are achieved when training the models on word-level units. For subword-level models, using back-translation proves to be slightly beneficial in low-resource (WO) to high-resource (FR) translation for the transformer (but not for the seq2seq) models. A slight improvement can also be observed when injecting copied monolingual text in the target language. Moreover, combining the copied-method data with back-translation leads to a substantial improvement in translation quality.
Keywords: back-translation, low-resource language, neural machine translation, sequence-to-sequence, transformer, Wolof
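The two augmentation ideas, back-translation and the copied corpus, reduce to simple data manipulations; the sketch below illustrates them with a hypothetical stub standing in for a trained Wolof-to-French model.

```python
def translate_wo_fr(sentence: str) -> str:
    # Placeholder for a trained Wolof-to-French model; a stub keeps the
    # sketch runnable without any translation weights.
    return f"<french translation of: {sentence}>"

parallel = [("bonjour", "salaam")]              # (fr, wo) seed pairs
mono_wo = ["ndax mën nga ma jàppale?"]          # monolingual Wolof text

# Back-translation: machine-translate monolingual target-side text to the
# source side, creating synthetic (fr, wo) training pairs.
synthetic = [(translate_wo_fr(wo), wo) for wo in mono_wo]

# Copied corpus: pair target monolingual text with itself on the source side.
copied = [(wo, wo) for wo in mono_wo]

train_set = parallel + synthetic + copied
print(train_set)
```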
Procedia PDF Downloads 147
6561 The Influence of Contact Models on Discrete Element Modeling of the Ballast Layer Subjected to Cyclic Loading
Authors: Peyman Aela, Lu Zong, Guoqing Jing
Abstract:
Recently, there has been growing interest in the numerical modeling of ballasted railway tracks. A commonly used mechanistic modeling approach for ballast is the discrete element method (DEM). Up to now, the effects of the contact model on ballast particle behavior have not been precisely examined. In this regard, selecting the appropriate contact model is mainly associated with the particle characteristics and the loading condition. Since ballast is a cohesionless material, different contact models, including the linear spring, Hertz-Mindlin, and hysteretic models, can be used to calculate particle-particle or wall-particle contact forces. Moreover, the simulation of a dynamic test is vital to investigate the effect of damping parameters on ballast deformation. In this study, ballast box tests were simulated by DEM to examine the influence of different contact models on the mechanical behavior of the ballast layer under cyclic loading. This paper shows how the contact model can affect the deformation and damping of a ballast layer subjected to cyclic loading in a ballast box.
Keywords: ballast, contact model, cyclic loading, DEM
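The contrast between contact laws can be made concrete for the normal direction: the sketch below compares a linear spring with the nonlinear Hertz law F = (4/3)·E*·√(R*)·δ^1.5 for two identical spheres. The material values are rough, granite-like illustrations, not calibrated ballast parameters.

```python
import numpy as np

E, nu, R = 50e9, 0.25, 0.02         # Young's modulus (Pa), Poisson ratio, radius (m)
E_star = E / (2.0 * (1.0 - nu**2))  # effective modulus for two identical spheres
R_star = R / 2.0                    # effective radius for two identical spheres

delta = np.linspace(0.0, 1e-4, 101)            # particle overlap (m)
F_hertz = (4.0 / 3.0) * E_star * np.sqrt(R_star) * delta**1.5

k_lin = 1e8                                    # assumed linear stiffness (N/m)
F_linear = k_lin * delta

i = 50                                         # overlap of 50 micrometres
print(f"overlap {delta[i]*1e6:.0f} um: Hertz {F_hertz[i]:.1f} N, linear {F_linear[i]:.1f} N")
```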
Procedia PDF Downloads 196
6560 Interpretation of Ultrasonic Backscatter of Linear FM Chirp Pulses from Targets Having Frequency-Dependent Scattering
Authors: Stuart Bradley, Mathew Legg, Lilyan Panton
Abstract:
Ultrasonic remote sensing is a useful tool for assessing the interior structure of complex targets. For these methods, significantly enhanced spatial resolution is obtained if the pulse is coded, for example using a linearly changing frequency during the pulse duration. Such pulses have a time-dependent spectral structure. Interpretation of the backscatter from targets is, therefore, complicated if the scattering is frequency-dependent. While analytic models are well established for steady sinusoidal excitations applied to simple shapes such as spheres, such models do not generally exist for temporally evolving excitations. Therefore, models are developed in the current paper for handling such signals so that the properties of the targets can be quantitatively evaluated while maintaining very high spatial resolution. Laboratory measurements on simple shapes are used to confirm the validity of the models.
Keywords: linear FM chirp, time-dependent acoustic scattering, ultrasonic remote sensing, ultrasonic scattering
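The resolution benefit of coding the pulse can be illustrated by matched filtering (pulse compression): correlating the received signal against the transmitted chirp concentrates the echo energy into a narrow peak, from which the target delay is read off. The frequencies and timing below are assumed examples, not the authors' laboratory setup.

```python
import numpy as np
from scipy.signal import chirp

fs = 200_000                         # sample rate (Hz)
T = 5e-3                             # pulse length (s)
t = np.arange(0, T, 1.0 / fs)
tx = chirp(t, f0=20e3, t1=T, f1=60e3, method="linear")   # linear FM pulse

# Simulated echo: delayed, attenuated copy of the pulse in noise.
rng = np.random.default_rng(0)
rx = 0.05 * rng.standard_normal(4000)
delay = 1500
rx[delay:delay + tx.size] += 0.2 * tx

# Matched filter = cross-correlation with the transmitted pulse.
mf = np.correlate(rx, tx, mode="valid")
print("estimated delay (samples):", int(np.argmax(np.abs(mf))))  # ~1500
```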
Procedia PDF Downloads 316
6559 Aspects Concerning Flame Propagation of Various Fuels in Combustion Chamber of Four Valve Engines
Authors: Zoran Jovanovic, Zoran Masonicic, S. Dragutinovic, Z. Sakota
Abstract:
In this paper, results concerning the flame propagation of various fuels in a particular combustion chamber with four tilted valves are elucidated. Flame propagation was represented by the evolution of the spatial distribution of temperature in various cut-planes within the combustion chamber, while the flame front location was determined by means of zones with maximum temperature gradient. The results presented are only a small part of a broader ongoing research activity in the field of multidimensional modeling of reactive flows in combustion chambers with complicated geometries, encompassing various turbulence models, different fuels, and combustion models. For turbulence, two different models were applied: the standard k-ε model and the k-ξ-f model. In this paper, flame propagation results were analyzed and presented for two different hydrocarbon fuels, CH4 and C8H18. In the case of combustion, all differences ensuing from the different turbulence models, obvious for non-reactive flows, are annihilated entirely. Namely, the interplay between fluid flow pattern and flame propagation is entirely invariant with respect to the turbulence models and fuels applied, indicating that flame propagation through the unburned mixture of CH4 and C8H18 fuels is not chemically controlled.
Keywords: automotive flows, flame propagation, combustion modelling, CNG
Procedia PDF Downloads 292
6558 Decision Support: How Explainable A.I. Can Improve Transparency and Trust with Human Users
Authors: Devon Brown, Liu Chunmei
Abstract:
This paper will present an analysis, as part of the researcher's dissertation topic, focusing on the intersection of affective and analytical directed acyclic graphs (DAGs) in the context of Decision Support Systems (DSS). The researcher's work involves analyzing decision theory models, such as affective and Bayesian decision theory, and how they could be implemented under an affective computing framework using information fusion and human-centered design. Additionally, the researcher is beginning research on an Affective-Analytic Decision Framework (AADF) model for their dissertation and is looking to merge logic and analytic models with empathetic insights into affective DAGs. Data collection begins in Fall 2024; in preparation, this paper analyzes previous research in the area, introduces the AADF, and proposes conceptual models for consideration. For this paper, the research emphasis is placed on analyzing Bayesian networks and Markov models, which offer probabilistic techniques for decision-making under uncertainty. Ideally, including affect in analytic models will increase user trust in algorithms by incorporating emotional states and the user's experience, with the goal of developing emotionally intelligent A.I. systems that can start to navigate the complex fabric of human emotion during decision-making.
Keywords: decision support systems, explainable AI, HCAI techniques, affective-analytical decision framework
Procedia PDF Downloads 20
6557 Towards an Enhanced Compartmental Model for Profiling Malware Dynamics
Authors: Jessemyn Modiini, Timothy Lynar, Elena Sitnikova
Abstract:
We present a novel enhanced compartmental model for malware spread analysis in cyber security. This paper applies cyber security data features to epidemiological compartmental models to model the infectious potential of malware. Compartmental models are most efficient for calculating the infectious potential of a disease. In this paper, we discuss and profile epidemiologically relevant data features from a Domain Name System (DNS) dataset. We then apply these features, together with network traffic features, to epidemiological compartmental models. This paper demonstrates how epidemiological principles can be applied to the novel analysis of key cyber security behaviours and trends, and provides insight into threat modelling beyond that of kill-chain analysis. In applying deterministic compartmental models to a cyber security use case, the authors analyse their deficiencies and provide an enhanced stochastic model for cyber epidemiology. This enhanced compartmental model (the SUEICRN model) is contrasted with the traditional SEIR model to demonstrate its efficacy.
Keywords: cybersecurity, epidemiology, cyber epidemiology, malware
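For reference, the deterministic SEIR baseline that the enhanced model is contrasted with can be integrated in a few lines; the sketch below reinterprets the compartments for malware (hosts instead of people) with assumed illustrative rates.

```python
import numpy as np
from scipy.integrate import odeint

def seir(y, t, beta, sigma, gamma):
    """Classic deterministic SEIR right-hand side."""
    S, E, I, R = y
    N = S + E + I + R
    dS = -beta * S * I / N
    dE = beta * S * I / N - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    return [dS, dE, dI, dR]

N0 = 10_000                                # hosts on the network
y0 = [N0 - 10, 0, 10, 0]                   # ten initially infected hosts
t = np.linspace(0, 120, 500)               # days
beta, sigma, gamma = 0.4, 1 / 3, 1 / 7     # assumed contact/incubation/removal rates

S, E, I, R = odeint(seir, y0, t, args=(beta, sigma, gamma)).T
print(f"peak infected hosts: {I.max():.0f} on day {t[np.argmax(I)]:.0f}")
```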
Procedia PDF Downloads 107
6556 Determination of Direct Solar Radiation Using Atmospheric Physics Models
Authors: Pattra Pukdeekiat, Siriluk Ruangrungrote
Abstract:
This work set out to precisely determine direct solar radiation using atmospheric physics models, since accurate prediction of solar radiation is necessary and useful for solar energy applications, including atmospheric research. Models and techniques for calculating regional direct solar radiation are essential where instrumental measurements are unavailable. The investigation was mathematically governed by six astronomical parameters, i.e., declination (δ), hour angle (ω), solar time, solar zenith angle (θz), extraterrestrial radiation (Iso), and eccentricity (E0), along with two atmospheric parameters, i.e., air mass (mr) and dew point temperature, at Bangna meteorological station (13.67° N, 100.61° E) in Bangkok, Thailand. Five models for determining solar radiation under a clear-sky assumption were analyzed, accompanied by three statistical tests, Mean Bias Difference (MBD), Root Mean Square Difference (RMSD), and coefficient of determination (R2), in order to validate the accuracy of the obtainable results. The calculated direct solar radiation was in the range of 491-505 W/m2, with a relative percentage error of 8.41%, for winter, and 532-540 W/m2, with a relative percentage error of 4.89%, for summer 2014. Additionally, a dataset of seven continuous days representing each season was considered, with MBD, RMSD, and R2 of -0.08, 0.25, 0.86 and -0.14, 0.35, 3.29, respectively, corresponding to the Kumar model for winter and the CSR model for summer. In summary, the determination of direct solar radiation based on atmospheric models and empirical equations can advantageously provide immediate and reliable values of the solar components for any site in the region without the constraint of actual measurement.
Keywords: atmospheric physics models, astronomical parameters, atmospheric parameters, clear sky condition
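The astronomical parameters listed above follow standard textbook formulas (e.g., Cooper's declination and a solar constant of 1367 W/m2); the sketch below evaluates them for the Bangna station coordinates, with the day of year and solar time chosen as examples.

```python
import numpy as np

I_sc = 1367.0                     # solar constant (W/m^2)
lat = np.radians(13.67)           # Bangna station latitude

n = 15                            # day of year (15 January, a winter example)
solar_time = 12.0                 # solar noon

decl = np.radians(23.45) * np.sin(np.radians(360.0 * (284 + n) / 365.0))
omega = np.radians(15.0 * (solar_time - 12.0))            # hour angle
E0 = 1.0 + 0.033 * np.cos(np.radians(360.0 * n / 365.0))  # eccentricity factor

cos_zenith = (np.sin(decl) * np.sin(lat)
              + np.cos(decl) * np.cos(lat) * np.cos(omega))
I_extra = I_sc * E0 * cos_zenith   # extraterrestrial radiation, horizontal plane

print(f"declination = {np.degrees(decl):.2f} deg, cos(zenith) = {cos_zenith:.3f}")
print(f"extraterrestrial radiation = {I_extra:.0f} W/m^2")
```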
Procedia PDF Downloads 409
6555 Sensitivity Analysis of the ZF Model for ABC Multi-Criteria Inventory Classification
Authors: Makram Ben Jeddou
Abstract:
The ABC classification is widely used by managers for inventory control. The classical ABC classification is based on the Pareto principle and considers only the criterion of annual use value. Single-criterion classification is often insufficient for close inventory control. Multi-criteria inventory classification models have been proposed by researchers in order to take other important criteria into account. Among these models, we consider the ZF model and perform a sensitivity analysis on the composite score calculated for each item. This score, based on a normalized average of a good and a bad optimized index, can affect the ABC classification of items. We then focus on the weights assigned to each index and propose a classification compromise.
Keywords: ABC classification, multi criteria inventory classification models, ZF-model
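The composite-score step can be sketched as follows, assuming the good and bad optimized indices for each item have already been obtained (in the ZF model they come from small per-item optimizations); the index values and the λ = 0.5 weighting are illustrative.

```python
import numpy as np

good = np.array([0.92, 0.40, 0.75, 0.10, 0.66])   # hypothetical good indices
bad = np.array([0.88, 0.55, 0.60, 0.05, 0.70])    # hypothetical bad indices

def normalize(v):
    """Scale an index vector to [0, 1]."""
    return (v - v.min()) / (v.max() - v.min())

lam = 0.5
composite = lam * normalize(good) + (1.0 - lam) * normalize(bad)
ranking = np.argsort(composite)[::-1]              # candidate A items first

print("composite scores:", np.round(composite, 3))
print("ranking (best to worst):", ranking)
```

Varying λ and re-ranking shows how sensitive the A/B/C cut-offs are to the weights, which is the point of the sensitivity analysis.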
Procedia PDF Downloads 508
6554 Comparative Study and Parallel Implementation of Stochastic Models for Pricing of European Options Portfolios using Monte Carlo Methods
Authors: Vinayak Bassi, Rajpreet Singh
Abstract:
Over the years, with the emergence of sophisticated computers and algorithms, finance has been quantified using computational prowess. Asset valuation has been one of the key components of quantitative finance; in fact, it has become one of the embryonic steps in determining the risk related to a portfolio, the main goal of quantitative finance. This study draws a comparison between the valuation output generated by two stochastic dynamic models, namely the Black-Scholes model and Dupire's bi-dimensional model. Both of these models are formulated for computing the valuation function for a portfolio of European options using Monte Carlo simulation methods. Although Monte Carlo algorithms have a slower convergence rate than calculus-based simulation techniques (like FDM), they work quite effectively over high-dimensional dynamic models. A fidelity gap is analyzed between the static (historical) and stochastic inputs for a sample portfolio of underlying assets. In order to enhance the performance efficiency of the model, the study emphasized the use of variance reduction methods and customized random number generators to implement parallelization. An attempt has been made to further implement Dupire's model on a GPU to achieve higher computational performance. Furthermore, ideas are discussed around performance enhancement and bottleneck identification related to the implementation of option-pricing models on GPUs.
Keywords: Monte Carlo, stochastic models, computational finance, parallel programming, scientific computing
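Two of the performance ideas mentioned above, variance reduction and parallelization, can be sketched together: the code below prices the same European put with antithetic variates and a CPU process pool (the GPU port is not shown); all parameters are illustrative.

```python
import numpy as np
from multiprocessing import Pool

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0

def price_batch(args):
    seed, n = args
    rng = np.random.default_rng(seed)     # independent stream per worker
    Z = rng.standard_normal(n)
    Z = np.concatenate([Z, -Z])           # antithetic pair reduces variance
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
    return np.exp(-r * T) * np.mean(np.maximum(K - ST, 0.0))

if __name__ == "__main__":
    jobs = [(seed, 250_000) for seed in range(8)]
    with Pool(4) as pool:
        estimates = pool.map(price_batch, jobs)
    print(f"put price = {np.mean(estimates):.4f} +/- {np.std(estimates):.4f}")
```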
Procedia PDF Downloads 160
6553 Machine Learning Approach for Predicting Students’ Academic Performance and Study Strategies Based on Their Motivation
Authors: Fidelia A. Orji, Julita Vassileva
Abstract:
This research aims to develop machine learning models for predicting students' academic performance and study strategies, which could be generalized to all courses in higher education. Key learning attributes (intrinsic, extrinsic, autonomy, relatedness, competence, and self-esteem) used in building the models are chosen based on prior studies, which revealed that these attributes are essential in students' learning process. Previous studies revealed the individual effects of each of these attributes on students' learning progress. However, few studies have investigated the combined effect of the attributes in predicting student study strategy and academic performance to reduce the dropout rate. To bridge this gap, we used Scikit-learn in Python to build five machine learning models (Decision Tree, K-Nearest Neighbour, Random Forest, Linear/Logistic Regression, and Support Vector Machine) for both regression and classification tasks. The models were trained, evaluated, and tested for accuracy using data on 924 university dentistry students collected by Chilean authors through a quantitative research design. A comparative analysis of the models revealed that the tree-based models, such as the random forest (with a prediction accuracy of 94.9%) and the decision tree, show the best results compared to the linear, support vector, and k-nearest neighbour models. The models built in this research can be used to predict student performance and study strategy so that appropriate interventions can be implemented to improve student learning progress. Thus, incorporating strategies that could improve diverse student learning attributes in the design of online educational systems may increase the likelihood of students continuing with their learning tasks as required. Moreover, the results show that the attributes can be modelled together and used to adapt/personalize the learning process.
Keywords: classification models, learning strategy, predictive modeling, regression models, student academic performance, student motivation, supervised machine learning
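A minimal version of the five-model comparison is sketched below on synthetic stand-in data, since the dentistry cohort is not public; the model choices mirror the abstract, with cross-validated accuracy as the score.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in: 924 students, six motivation-related features.
X, y = make_classification(n_samples=924, n_features=6, n_informative=4,
                           random_state=0)

models = {
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "K-Nearest Neighbour": KNeighborsClassifier(),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Support Vector Machine": SVC(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name:22s} accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")
```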
Procedia PDF Downloads 128
6552 Comparison of Unit Hydrograph Models to Simulate Flood Events at the Field Scale
Authors: Imene Skhakhfa, Lahbaci Ouerdachi
Abstract:
To ensure the overall coherence of simulated results, it is necessary to develop a robust validation process. In many applications, it is no longer sufficient to calibrate and validate the model only against the hydrograph measured at the outlet; we also try to better simulate the functioning of the watershed in space. Therefore, performance is also assessed against other variables, such as water level measurements at intermediate stations or groundwater levels. As part of this work, we limit ourselves to modeling floods of short duration, for which the process of evapotranspiration is negligible. The main parameters identifying the models are related to the unit hydrograph (UH) method. Three different models were tested: Snyder, Clark, and SCS. These models differ in their mathematical structure and in the parameters to be calibrated, while the hydrological data, the initial water content and precipitation, are the same. The models are compared on the basis of their performance in terms of six objective criteria: three global criteria and three criteria representing volume, peak flow, and mean square error. The first type of criteria gives more weight to strong events, whereas the second considers all events to be of equal weight. The results show that the calibrated parameter values are interdependent and also highlight the problems associated with the simulation of low-flow events and intermittent precipitation.
Keywords: model calibration, intensity, runoff, hydrograph
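The mechanic shared by the compared models, convolving effective rainfall with a unit hydrograph, is sketched below using a simple SCS-style triangular UH; the time-to-peak, base time, and rainfall values are assumed examples.

```python
import numpy as np

dt = 1.0                              # time step (h)
tp, tb = 3.0, 8.0                     # time to peak and base time (h), assumed
t = np.arange(0.0, tb + dt, dt)

# Triangular unit hydrograph, normalized to unit volume.
uh = np.where(t <= tp, t / tp, np.clip((tb - t) / (tb - tp), 0.0, None))
uh /= uh.sum() * dt

rain = np.array([0, 2, 5, 3, 1, 0], dtype=float)   # effective rainfall (mm/h)
q = np.convolve(rain, uh) * dt                     # simulated outlet hydrograph

print("peak ordinate:", q.max().round(3), "at t =", q.argmax() * dt, "h")
```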
Procedia PDF Downloads 486
6551 High-Accuracy Satellite Image Analysis and Rapid DSM Extraction for Urban Environment Evaluations (Tripoli-Libya)
Authors: Abdunaser Abduelmula, Maria Luisa M. Bastos, José A. Gonçalves
Abstract:
The modeling of the earth's surface and the evaluation of urban environments with 3D models is an important research topic. New stereo capabilities of high-resolution optical satellite images, such as the tri-stereo mode of Pleiades, combined with new image matching algorithms, are now available and can be applied to urban area analysis. In addition, photogrammetry software packages have gained new, more efficient matching algorithms, such as SGM, as well as improved filters to deal with shadow areas, and can achieve denser and more precise results. This paper describes a comparison between 3D data extracted from tri-stereo and dual-stereo satellite images, combined with pixel-based matching and the Wallis filter. The aim was to improve the accuracy of 3D models, especially in urban areas, in order to assess whether satellite images are appropriate for a rapid evaluation of urban environments. The results showed that the 3D models achieved with Pleiades tri-stereo outperformed, both in terms of accuracy and detail, the result obtained from a GeoEye pair. The assessment was made against reference digital surface models derived from high-resolution aerial photography. This could mean that tri-stereo images can be successfully used for the proposed urban change analyses.
Keywords: 3D models, environment, matching, Pleiades
Procedia PDF Downloads 330
6550 Poisson Type Spherically Symmetric Spacetimes
Authors: Gonzalo García-Reyes
Abstract:
Conformastat spherically symmetric exact solutions of Einstein's field equations representing matter distributions made of both perfect and anisotropic fluids, constructed from given solutions of Poisson's equation of Newtonian gravity, are investigated. The approach is used in the construction of new relativistic models of thick spherical shells and three-component models of galaxies (bulge, disk, and dark matter halo), writing, in this case, the metric in cylindrical coordinates. In addition, the circular motion of test particles (rotation curves) along geodesics on the equatorial plane of the matter configurations and the stability of the orbits against radial perturbations are studied. The models constructed satisfy all the energy conditions.
Keywords: general relativity, exact solutions, spherical symmetry, galaxy, kinematics and dynamics, dark matter
Procedia PDF Downloads 87
6549 Size Effect on Shear Strength of Slender Reinforced Concrete Beams
Authors: Subhan Ahmad, Pradeep Bhargava, Ajay Chourasia
Abstract:
Shear failure in reinforced concrete beams without shear reinforcement leads to loss of property and life, since little or no warning occurs before failure, unlike in flexural failure. The shear strength of reinforced concrete beams decreases as their depth increases. This phenomenon is generally called the size effect. In this paper, a comparative analysis is performed to estimate the performance of shear strength models in capturing the size effect in reinforced concrete beams made with conventional concrete, self-compacting concrete, and recycled aggregate concrete. Four shear strength models that account for the size effect in shear are selected from the literature and applied to datasets of slender reinforced concrete beams. Beams prepared with conventional concrete, self-compacting concrete, and recycled aggregate concrete are considered for the analysis. Results showed that all four models captured the size effect in shear effectively and produced conservative estimates of the shear strength for beams made with normal-strength conventional concrete. These models yielded unconservative estimates for high-strength conventional concrete beams with larger effective depths (> 450 mm). The model of Bazant and Kim (1984) captured the size effect precisely and produced conservative estimates of the shear strength of self-compacting concrete beams at all effective depths. Also, the shear strength models considered in this study produced unconservative estimates of shear strength for recycled aggregate concrete beams at all effective depths.
Keywords: reinforced concrete beams, shear strength, prediction models, size effect
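The size-effect trend the paper tests can be illustrated with Bazant's general size-effect law, σN = B·ft/√(1 + d/d0), which the Bazant and Kim (1984) shear model specializes to beams; the constants B, ft, and d0 below are arbitrary illustration values, not fitted parameters.

```python
import numpy as np

def size_effect(d, B=2.0, f_t=3.0, d0=150.0):
    """Nominal strength (MPa) versus effective depth d (mm)."""
    return B * f_t / np.sqrt(1.0 + d / d0)

for d in (150, 300, 450, 900):
    print(f"d = {d:4d} mm -> nominal shear strength = {size_effect(d):.2f} MPa")
```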
Procedia PDF Downloads 161
6548 Is Brain Death Reversal Possible in Near Future: Intrathecal Sodium Nitroprusside (SNP) Superfusion in Brain Death Patients=The 10,000 Fold Effect
Authors: Vinod Kumar Tewari, Mazhar Husain, Hari Kishan Das Gupta
Abstract:
Background: Primary or secondary brain death is accompanied by vasospasm of the perforators in addition to tissue disruption, which further exaggerates the anoxic damage in the form of neuropraxia. Under normal conditions, the excitatory impulse propagates as anterograde neurotransmission (ANT), and at the synapse, glutamate activates NMDA receptors on the postsynaptic membrane. Nitric oxide (NO) is produced by nitric oxide synthase (NOS) in the postsynaptic dendrite or cell body and travels backwards across the chemical synapse to bind to the axon terminal of the presynaptic neuron, regulating ANT; this process is called retrograde neurotransmission (RNT). Thus, the primary function of NO is RNT, and the purpose of RNT is the regulation of chemical neurotransmission at the synapse. For this reason, RNT allows neural circuits to create feedback loops. Haem is the ligand-binding site of the NO receptor (sGC) at the presynaptic membrane, and its affinity for NO exceeds that for oxygen more than 10,000-fold (the 10,000-fold effect). In pathological conditions, ANT and normal synaptic activity, including RNT, are absent. NO donors like sodium nitroprusside (SNP) release NO by activating NOS in the postsynaptic area. The NO then travels backwards across the chemical synapse to bind to the haem of the NO receptor at the axon terminal of the presynaptic neuron, as in the normal condition, and acts as an impulse generator at the presynaptic membrane, thus bypassing normal ANT. Also, the arteriolar perforators have NOS on the adventitial side (outer border), on which SNP acts, causing the release of NO, which vasodilates the perforators, causing a gush of blood into the brain tissue and reversal of brain death. Objective: In brain death cases, we think only of the various transplantations, but this pilot study reverses some criteria of brain death by vasodilating the arteriolar perforators. The aim was to study the effect of intrathecal sodium nitroprusside (IT SNP) in cases of brain death, assessing: 1. retrograde transmission, via the hyperacute timing of reversal; 2. the arteriolar perforator vasodilatation caused by NO and the maintenance of brain death reversal. Methods: A 35-year-old male who became brain dead after a head injury and had not shown any sign of improvement after every maneuver for 6 hours underwent a single superfusion of SNP via the transoptic canal route to the quadrigeminal cistern, and a cisternal puncture with SNP for the fourth ventricle. Results: He showed spontaneous respiration (7 bouts), with TCD studies showing the start of pulsations in various branches of the common carotid arteries. Conclusions: In the future, SNP could be given via the transoptic canal route and into the fourth ventricle before a body is declared dead or released for transplantation; in a broader way, in the near future it may be possible to revert from brain death, or the criteria may have to be modified.
Keywords: brain death, intrathecal sodium nitroprusside, TCD studies, perforators, vasodilatations, retrograde transmission, 10,000-fold effect
Procedia PDF Downloads 401
6547 Determination of MDA by HPLC in Blood of Levofloxacin Treated Rats
Authors: D. S. Mohale, A. P. Dewani, A. S.tripathi, A. V. Chandewar
Abstract:
The present work demonstrates the applicability of high-performance liquid chromatography (HPLC) with UV-Vis detection for the quantification of malondialdehyde, as the malondialdehyde-thiobarbituric acid (MDA-TBA) complex, in vivo in rats. The HPLC method for MDA-TBA was run in isocratic mode on a reverse-phase C18 column (250 mm × 4.6 mm) at a flow rate of 1.0 mL min−1, followed by detection at 532 nm. The chromatographic conditions were optimized by varying the concentration and pH of the water phase, followed by changes in the percentage of organic phase; the optimal mobile phase consisted of a mixture of water (0.2% triethylamine, pH adjusted to 2.3 with ortho-phosphoric acid) and acetonitrile in an 80:20 (v/v) ratio. The retention time of the MDA-TBA complex was 3.7 min. The developed method was sensitive: the limits of detection and quantification (LOD and LOQ) for the MDA-TBA complex, estimated from the standard deviation and slope of the calibration curve, were 110 ng/ml and 363 ng/ml, respectively. Calibration studies were done by spiking MDA into rat plasma at concentrations ranging from 500 to 1000 ng/ml. The precision of the developed method, measured in terms of relative standard deviations for intra-day and inter-day studies, was 1.6-5.0% and 1.9-3.6%, respectively. The HPLC method was applied to monitor MDA levels in rats subjected to chronic treatment with levofloxacin (LEV) (5 mg/kg/day) for 21 days, and the results were compared with findings in control-group rats. The mean peak areas of both study groups were compared using an unpaired Student's t-test. The p-value was < 0.001, indicating significant results and suggesting increased MDA levels in rats subjected to the 21-day chronic LEV treatment.
Keywords: malondialdehyde-thiobarbituric acid complex, levofloxacin, HPLC, oxidative stress
Procedia PDF Downloads 334
6546 Modeling Pan Evaporation Using Intelligent Methods of ANN, LSSVM and Tree Model M5 (Case Study: Shahroud and Mayamey Stations)
Authors: Hamidreza Ghazvinian, Khosro Ghazvinian, Touba Khodaiean
Abstract:
The importance of evaporation estimation in water resources and agricultural studies is undeniable. Pan evaporation is used as an indicator to determine the evaporation of lakes and reservoirs around the world due to the ease of interpreting its data. In this research, intelligent models were investigated for estimating pan evaporation on a daily basis. Shahroud and Mayamey, two cities located in Semnan province in Iran, were considered as the study sites. Both cities have dry weather conditions with high evaporation potential. Eleven years of meteorological data from the synoptic stations of Shahroud and Mayamey were used. The intelligent models used in this study are Artificial Neural Network (ANN), Least Squares Support Vector Machine (LSSVM), and M5 tree models. The meteorological parameters of minimum and maximum air temperature (Tmin, Tmax), wind speed (WS), sunshine hours (SH), air pressure (PA), and relative humidity (RH) were selected as input data, and pan evaporation (EP) as the output. 70% of the data was used for training and 30% for testing. Models were evaluated with the coefficient of determination (R2), Root Mean Square Error (RMSE), and Mean Absolute Error (MAE). The results for the Shahroud and Mayamey stations showed that all three models perform reasonably well.
Keywords: pan evaporation, intelligent methods, Shahroud, Mayamey
Procedia PDF Downloads 74
6545 Multilevel Modeling of the Progression of HIV/AIDS Disease among Patients under HAART Treatment
Authors: Awol Seid Ebrie
Abstract:
HIV results in an incurable disease, AIDS. After a person is infected with the virus, the virus gradually destroys the infection-fighting cells called CD4 cells and makes the individual susceptible to opportunistic infections, which cause severe or fatal health problems. Several studies show that the CD4 cell count is the most determinant indicator of the effectiveness of treatment and the progression of the disease. The objective of this paper is to investigate the progression of the disease over time among patients under HAART treatment. Two main approaches to generalized multilevel ordinal models, namely the proportional odds model and the non-proportional odds model, have been applied to the HAART data. The multilevel part of both models includes random intercepts and random coefficients. In general, four models are explored in the analysis, and the models are then compared using the deviance information criterion (DIC). Of these models, the random-coefficients non-proportional odds model is selected as the best model for the HAART data, as it has the smallest DIC value. The selected model shows that the progression of the disease increases as the time under treatment increases. In addition, it reveals that gender, baseline clinical stage, and functional status of the patient have a significant association with the progression of the disease.
Keywords: nonproportional odds model, proportional odds model, random coefficients model, random intercepts model
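As a single-level point of departure, the proportional odds model can be fit with statsmodels' OrderedModel, as sketched below on synthetic placeholder data; the multilevel random-intercept and random-coefficient extensions described above require dedicated (typically Bayesian) tooling and are not shown.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({
    "months_on_haart": rng.integers(0, 60, n),   # synthetic follow-up time
    "male": rng.integers(0, 2, n),               # synthetic gender indicator
})
# Hypothetical latent severity: longer treatment shifts the disease stage.
latent = -0.03 * X["months_on_haart"] + 0.2 * X["male"] + rng.logistic(size=n)
stage = pd.cut(latent, bins=[-np.inf, -1, 0, 1, np.inf], labels=[1, 2, 3, 4])

model = OrderedModel(stage.astype(int), X, distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```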
Procedia PDF Downloads 421