Search results for: non-linear regression models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10077

8577 Measurement of CES Production Functions Considering Energy as an Input

Authors: Donglan Zha, Jiansong Si

Abstract:

Because of its flexibility, the CES production function attracts much interest in economic growth and programming models, as well as in macroeconomic and micro-macro models. This paper focuses on the development of, and estimation methods for, the CES production function with energy considered as an input. We leave for future research the relaxation of the assumption of constant returns to scale, the introduction of potential input factors, and the generalization of methods for finding the optimal nested form of multi-factor production functions.
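
As a concrete illustration of the kind of estimation the paper surveys, the sketch below fits a two-level nested capital-energy/labor CES function by non-linear least squares. The nesting order, parameter values, and data are hypothetical, not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def nested_ces(X, gamma, delta, theta, rho, rho_in):
    """Two-level nested CES: Y = gamma*(delta*KE^rho + (1-delta)*L^rho)^(1/rho),
    with an inner capital-energy aggregate KE = (theta*K^rho_in + (1-theta)*E^rho_in)^(1/rho_in)."""
    K, L, E = X
    KE = (theta * K**rho_in + (1 - theta) * E**rho_in) ** (1.0 / rho_in)
    return gamma * (delta * KE**rho + (1 - delta) * L**rho) ** (1.0 / rho)

rng = np.random.default_rng(0)
K, L, E = rng.uniform(5, 30, 50), rng.uniform(2, 15, 50), rng.uniform(1, 10, 50)
Y = nested_ces((K, L, E), 1.2, 0.6, 0.5, 0.4, 0.3) * rng.normal(1, 0.01, 50)  # synthetic output

popt, _ = curve_fit(nested_ces, (K, L, E), Y, p0=[1.0, 0.5, 0.5, 0.5, 0.5], maxfev=20000)
print("gamma, delta, theta, rho, rho_in =", np.round(popt, 3))
# Elasticity of substitution for the outer nest: sigma = 1 / (1 - rho)
print("outer elasticity of substitution:", round(1.0 / (1.0 - popt[3]), 3))
```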

Keywords: bias of technical change, CES production function, elasticity of substitution, energy input

Procedia PDF Downloads 269
8576 Ontologies for Social Media Digital Evidence

Authors: Edlira Kalemi, Sule Yildirim-Yayilgan

Abstract:

Online Social Networks (OSNs) are nowadays being used widely and intensively for crime investigation and prevention activities. As they provide a wealth of information, they are used by law enforcement and intelligence services. An extensive review of existing solutions and models for collecting intelligence from this source of information and using it to solve crimes is presented in this article. The main focus is on smart solutions and models where ontologies have been used as the main approach for representing criminal domain knowledge. A framework for a prototype ontology named SC-Ont is described, which defines the terms of the criminal domain ontology and the relations between them. The terms and relations were extracted during both this review and discussions carried out with domain experts. The development of SC-Ont is still ongoing work; in this paper, we report mainly on the motivation for using smart ontology models and the possible benefits of using them for solving crimes.

Keywords: criminal digital evidence, social media, ontologies, reasoning

Procedia PDF Downloads 378
8575 Groundwater Pollution Models for Hebron/Palestine

Authors: Hassan Jebreen

Abstract:

These models of a conservative pollutant in groundwater do not include representation of processes in soils and in the unsaturated zone, or biogeochemical processes in groundwater. These demonstration models can be used as the basis for more detailed simulations of the impacts of pollution sources at a local scale, but such studies should address processes related to specific pollutant species and should consider local hydrogeology in more detail, particularly in relation to possible impacts on shallow systems, which are likely to respond more quickly to changes in pollutant inputs. The results have demonstrated the interaction between groundwater flow fields and pollution sources in abstraction areas and help to emphasise that wadi development is one of the key elements of water resources planning. The quality of groundwater in the Hebron area indicates a gradual increase in chloride and nitrate with time. Since the aquifers in the Hebron district are highly vulnerable due to their karstic nature, continued disposal of untreated domestic and industrial wastewater into the wadi will lead to unacceptably poor drinking water quality, which may ultimately require expensive treatment if significant health problems are to be avoided. Improvements are required in wastewater treatment at the municipal and domestic levels, the latter requiring increased public awareness of the issues, as well as improved understanding of the hydrogeological behaviour of the aquifers.

Keywords: groundwater, models, pollutants, wadis, Hebron

Procedia PDF Downloads 421
8574 Modeling of Daily Global Solar Radiation Using ANN Techniques: A Case Study

Authors: Said Benkaciali, Mourad Haddadi, Abdallah Khellaf, Kacem Gairaa, Mawloud Guermoui

Abstract:

In this study, many experiments were carried out to assess the influence of the input parameters on the performance of the multilayer perceptron, which is one configuration of artificial neural networks. To estimate the daily global solar radiation on a horizontal surface, we developed several models using seven combinations of twelve meteorological and geographical input parameters collected from a radiometric station installed at Ghardaïa city (southern Algeria). To select the best combination providing good accuracy, six statistical indicators were evaluated, such as the root mean square error, mean absolute error, correlation coefficient, and determination coefficient. We noted that multilayer perceptron techniques have the best performance, except when the sunshine duration parameter is not included in the input variables. The maximum determination coefficient and correlation coefficient are equal to 98.20% and 99.11%, respectively. On the other hand, some empirical models were developed to compare their performances with those of the multilayer perceptron neural networks. The results obtained show that the neural network techniques give the best performance compared to the empirical models.
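
A minimal sketch of the workflow described above: an MLP regressor scored with the named statistical indicators, using synthetic stand-ins for the Ghardaïa data (the input variables and their relationship to radiation are invented for illustration).

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Invented inputs standing in for meteorological/geographical variables
# (e.g. sunshine duration, Tmax, Tmin, humidity, day of year)
X = rng.uniform(size=(500, 5))
H = 15 + 10 * X[:, 0] + 3 * X[:, 1] - 2 * X[:, 3] + rng.normal(0, 0.5, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, H, test_size=0.25, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
pred = mlp.fit(X_tr, y_tr).predict(X_te)

rmse = mean_squared_error(y_te, pred) ** 0.5     # root mean square error
mae = mean_absolute_error(y_te, pred)            # mean absolute error
r = np.corrcoef(y_te, pred)[0, 1]                # correlation coefficient
r2 = r2_score(y_te, pred)                        # determination coefficient
print(f"RMSE={rmse:.3f}  MAE={mae:.3f}  r={r:.3f}  R2={r2:.3f}")
```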

Keywords: empirical models, multilayer perceptron neural network, solar radiation, statistical formulas

Procedia PDF Downloads 329
8573 E-Consumers’ Attribute Non-Attendance Switching Behavior: Effect of Providing Information on Attributes

Authors: Leonard Maaya, Michel Meulders, Martina Vandebroek

Abstract:

Discrete Choice Experiments (DCE) are used to investigate how product attributes affect decision-makers’ choices. In DCEs, choice situations consisting of several alternatives are presented from which choice-makers select the preferred alternative. Standard multinomial logit models based on random utility theory can be used to estimate the utilities for the attributes. The overarching principle in these models is that respondents understand and use all the attributes when making choices. However, studies suggest that respondents sometimes ignore some attributes (commonly referred to as Attribute Non-Attendance/ANA). The choice modeling literature presents ANA as a static process, i.e., respondents’ ANA behavior does not change throughout the experiment. However, respondents may ignore attributes due to changing factors like availability of information on attributes, learning/fatigue in experiments, etc. We develop a dynamic mixture latent Markov model to model changes in ANA when information on attributes is provided. The model is illustrated on e-consumers’ webshop choices. The results indicate that the dynamic ANA model describes the behavioral changes better than modeling the impact of information using changes in parameters. Further, we find that providing information on attributes leads to an increase in the attendance probabilities for the investigated attributes.
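
The random-utility core of such models can be sketched as follows: multinomial logit choice probabilities, with an attendance mask that zeroes out ignored attributes to mimic ANA. All attribute values and part-worth utilities are hypothetical.

```python
import numpy as np

def mnl_probabilities(X, beta, attended):
    """Multinomial logit choice probabilities for one choice set. X holds one row
    of attribute levels per alternative; `attended` zeroes out the part-worths of
    ignored attributes, mimicking attribute non-attendance (ANA)."""
    v = X @ (beta * attended)            # deterministic utilities
    ev = np.exp(v - v.max())             # numerically stable softmax
    return ev / ev.sum()

X = np.array([[1.0, 0.5, 2.0],           # three alternatives, three attributes
              [0.5, 1.0, 1.0],
              [2.0, 0.2, 0.5]])
beta = np.array([0.8, -0.3, 0.4])        # hypothetical part-worth utilities

print("full attendance:   ", mnl_probabilities(X, beta, np.ones(3)))
print("attribute 3 ignored:", mnl_probabilities(X, beta, np.array([1.0, 1.0, 0.0])))
```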

Keywords: choice models, discrete choice experiments, dynamic models, e-commerce, statistical modeling

Procedia PDF Downloads 124
8572 Microstructural Characterization and Mechanical Properties of Al-2Mn-5Fe Ternary Eutectic Alloy

Authors: Emin Çadirli, Izzettin Yilmazer, Uğur Büyük, Hasan Kaya

Abstract:

Al-2Mn-5Fe eutectic alloy (wt.%) was prepared in a graphite crucible under a vacuum atmosphere. The samples were directionally solidified upward at a constant temperature gradient and four different growth rates by using the Bridgman method. The values of eutectic spacing were measured from longitudinal and transverse sections of the samples. The dependence of eutectic spacing on the growth rate was determined by using linear regression analysis. The microhardness and tensile strength of the studied alloy were also measured on directionally solidified samples. The dependence of the microhardness and tensile strength of the directionally solidified Al-2Mn-5Fe eutectic alloy on the growth rate was investigated, and the relationships between them were obtained experimentally by using regression analysis. The results obtained in the present work were compared with previous similar experimental results obtained for binary and ternary alloys.
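
The growth-rate dependence is typically expressed as a power law, lambda = k * V^n, which the regression fits in log-log space; the sketch below shows the idea with invented spacing measurements.

```python
import numpy as np

# Invented growth rates V (um/s) and measured eutectic spacings lambda (um)
V = np.array([8.3, 16.5, 41.6, 83.0])
lam = np.array([5.1, 3.8, 2.5, 1.9])

# A Jackson-Hunt-type relation lambda = k * V^n is linear in log-log space
n, log_k = np.polyfit(np.log(V), np.log(lam), 1)
print(f"lambda = {np.exp(log_k):.2f} * V^({n:.2f})")   # exponent expected near -0.5
```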

Keywords: eutectic alloy, microhardness, microstructure, tensile strength

Procedia PDF Downloads 464
8571 Mathematical Models for Drug Diffusion Through the Compartments of Blood and Tissue Medium

Authors: M. A. Khanday, Aasma Rafiq, Khalid Nazir

Abstract:

This paper is an attempt to establish mathematical models to understand the distribution of drug administration in the human body through oral and intravenous routes. Three models were formulated based on the diffusion process using Fick's principle and the law of mass action. The rate constants governing the law of mass action were used on the basis of the drug efficacy at different interfaces. The Laplace transform and eigenvalue methods were used to obtain the solution of the ordinary differential equations concerning the rate of change of concentration in the different compartments, viz. blood and tissue medium. The drug concentration in the different compartments was computed using numerical parameters, and the variation of drug concentration with respect to time was illustrated using MATLAB software. It has been observed from the results that the drug concentration decreases in the first compartment and gradually increases in the subsequent compartments.
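
A minimal sketch of the compartmental core of such models: a linear two-compartment (blood-tissue) system solved with the matrix exponential, which is equivalent to the eigenvalue method for a constant-coefficient ODE system. The rate constants below are hypothetical.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical first-order rate constants (1/h): blood -> tissue (k12),
# tissue -> blood (k21), and elimination from blood (k10)
k12, k21, k10 = 0.9, 0.3, 0.2
A = np.array([[-(k12 + k10), k21],
              [k12,          -k21]])

c0 = np.array([1.0, 0.0])                  # intravenous dose placed in blood
for t in (0.0, 1.0, 2.0, 5.0):
    c = expm(A * t) @ c0                   # matrix-exponential (eigenvalue) solution
    print(f"t={t:3.1f} h  blood={c[0]:.3f}  tissue={c[1]:.3f}")
```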

Keywords: Laplace transform, diffusion, eigenvalue method, mathematical model

Procedia PDF Downloads 321
8570 The Influence of Air Temperature Controls in Estimation of Air Temperature over Homogeneous Terrain

Authors: Fariza Yunus, Jasmee Jaafar, Zamalia Mahmud, Nurul Nisa’ Khairul Azmi, Nursalleh K. Chang

Abstract:

Variation of air temperature from one place to another is caused by air temperature controls. In general, the most important control of air temperature is elevation. Another significant independent variable in estimating air temperature is the location of meteorological stations. Distance to the coastline and land use type also contribute to significant variations in air temperature. On the other hand, over homogeneous terrain, direct interpolation of discrete points of air temperature works well to estimate air temperature values in un-sampled areas, since the estimation is based solely on the discrete points themselves. However, this study shows that air temperature controls also play significant roles in estimating air temperature over the homogeneous terrain of Peninsular Malaysia. An Inverse Distance Weighting (IDW) interpolation technique was adopted to generate continuous data of air temperature. This study compared two different datasets: observed mean monthly data of T, and the estimation error T–T’, where T’ is the estimated value from a multiple regression model. The multiple regression model considered eight independent variables, elevation, latitude, longitude, distance to coastline, and four land use types (water bodies, forest, agriculture, and built-up areas), to represent the role of air temperature controls. Cross-validation analysis was conducted to review the accuracy of the estimated values. Final results show that the estimation values of T–T’ produced lower errors for mean monthly air temperature over homogeneous terrain in Peninsular Malaysia.
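
The IDW step can be sketched in a few lines; the station coordinates and temperatures below are hypothetical.

```python
import numpy as np

def idw(xy_obs, z_obs, xy_query, power=2.0):
    """Inverse Distance Weighting: z = sum(w_i z_i) / sum(w_i) with w_i = d_i^-p."""
    d = np.linalg.norm(xy_obs - xy_query, axis=1)
    if d.min() < 1e-12:                  # query coincides with a station
        return float(z_obs[d.argmin()])
    w = d ** -power
    return float(np.sum(w * z_obs) / np.sum(w))

# Hypothetical station coordinates (km) and mean monthly air temperatures (deg C)
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
temps = np.array([27.1, 26.8, 27.5, 26.4])
print(idw(stations, temps, np.array([4.0, 6.0])))
```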

Keywords: air temperature control, interpolation analysis, Peninsular Malaysia, regression model, air temperature

Procedia PDF Downloads 365
8569 Deep Learning Approach for Chronic Kidney Disease Complications

Authors: Mario Isaza-Ruget, Claudia C. Colmenares-Mejia, Nancy Yomayusa, Camilo A. González, Andres Cely, Jossie Murcia

Abstract:

Quantification of the risks associated with the development of complications from chronic kidney disease (CKD) through accurate survival models can help with patient management. A retrospective cohort that included patients diagnosed with CKD from a primary care program and followed up between 2013 and 2018 was analyzed. Time-dependent and static covariates associated with demographic, clinical, and laboratory factors were included. Deep Learning (DL) survival analyses were developed for three CKD outcomes: CKD stage progression, >25% decrease in Estimated Glomerular Filtration Rate (eGFR), and Renal Replacement Therapy (RRT). Models were evaluated and compared with Random Survival Forest (RSF) based on the concordance index (C-index) metric. 2,143 patients were included. Two models were developed for each outcome; the Deep Neural Network (DNN) model reported C-index=0.9867 for CKD stage progression, C-index=0.9905 for reduction in eGFR, and C-index=0.9867 for RRT. Regarding the RSF model, C-index=0.6650 was reached for CKD stage progression, C-index=0.6759 for decreased eGFR, and C-index=0.8926 for RRT. DNN models applied in a survival analysis context with consideration of longitudinal covariates at the start of follow-up can predict renal stage progression, a significant decrease in eGFR, and RRT. The success of these survival models lies in the appropriate definition of survival times and the analysis of covariates, especially those that vary over time.
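
For reference, the C-index used to compare the DNN and RSF models can be computed directly from its definition; the follow-up times, event flags, and risk scores below are hypothetical.

```python
import numpy as np

def concordance_index(times, events, risk):
    """Harrell's C-index: among comparable pairs, the fraction where the subject
    who fails earlier also carries the higher predicted risk (ties count 0.5)."""
    concordant, comparable = 0.0, 0
    for i in range(len(times)):
        for j in range(len(times)):
            if events[i] == 1 and times[i] < times[j]:   # pair (i, j) is comparable
                comparable += 1
                concordant += 1.0 if risk[i] > risk[j] else 0.5 * (risk[i] == risk[j])
    return concordant / comparable

# Hypothetical follow-up times (months), event flags, and predicted risk scores
t = np.array([12, 30, 18, 48, 24])
e = np.array([1, 0, 1, 0, 1])
r = np.array([0.9, 0.2, 0.7, 0.1, 0.6])
print(concordance_index(t, e, r))     # 1.0 here: risks perfectly rank the failures
```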

Keywords: artificial intelligence, chronic kidney disease, deep neural networks, survival analysis

Procedia PDF Downloads 122
8568 Sorghum Grains Grading for Food, Feed, and Fuel Using NIR Spectroscopy

Authors: Irsa Ejaz, Siyang He, Wei Li, Naiyue Hu, Chaochen Tang, Songbo Li, Meng Li, Boubacar Diallo, Guanghui Xie, Kang Yu

Abstract:

Background: Near-infrared spectroscopy (NIR) is a non-destructive, fast, and low-cost method to measure the grain quality of different cereals. Previously reported NIR model calibrations using whole-grain spectra had moderate accuracy, and improved predictions are achievable by using spectra collected from flour samples rather than from whole grains. However, the feasibility of determining the critical biochemicals related to the classification of food, feed, and fuel products has not been adequately investigated. Objectives: To evaluate the feasibility of using NIRS and the influence of four sample types (whole grains, flours, hulled grain flours, and hull-less grain flours) on the prediction of chemical components, in order to improve the grain sorting efficiency for human food, animal feed, and biofuel. Methods: NIR was applied in this study to determine eight biochemicals in four types of sorghum samples: hulled grain flours, hull-less grain flours, whole grains, and grain flours. A total of 20 hybrids of sorghum grains were selected from two locations in China. From the NIR spectral and wet-chemically measured biochemical data, partial least squares regression (PLSR) was used to construct the prediction models. Results: The results showed that sorghum grain morphology and sample format affected the prediction of biochemicals. Using NIR data of grain flours generally improved the prediction compared with the use of NIR data of whole grains. Nevertheless, using the spectra of whole grains enabled comparable predictions, which is recommended when a non-destructive and rapid analysis is required. Compared with the hulled grain flours, hull-less grain flours allowed improved predictions for tannin, cellulose, and hemicellulose using NIR data. Conclusion: The established PLSR models could enable food, feed, and fuel producers to efficiently evaluate a large number of samples by predicting the required biochemical components in sorghum grains without destruction.
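
A minimal sketch of the PLSR calibration step, with random spectra standing in for the NIR measurements; the data and the component count are illustrative only.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
# Random spectra standing in for NIR scans: 80 samples x 200 wavelengths
spectra = rng.normal(size=(80, 200))
tannin = 0.8 * spectra[:, 40] + 0.5 * spectra[:, 120] + rng.normal(0, 0.1, 80)

pls = PLSRegression(n_components=8)                  # component count is illustrative
pred = cross_val_predict(pls, spectra, tannin, cv=5).ravel()
print("cross-validated R2:", round(r2_score(tannin, pred), 3))
```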

Keywords: FT-NIR, sorghum grains, biochemical composition, food, feed, fuel, PLSR

Procedia PDF Downloads 51
8567 The Data-Driven Localized Wave Solution of the Fokas-Lenells Equation Using Physics-Informed Neural Network

Authors: Gautam Kumar Saharia, Sagardeep Talukdar, Riki Dutta, Sudipta Nandy

Abstract:

The physics-informed neural network (PINN) method opens up an approach for numerically solving nonlinear partial differential equations, leveraging the fast calculation speed and high precision of modern computing systems. We construct the PINN based on the strong universal approximation theorem, apply the initial-boundary value data and residual collocation points to weakly impose the initial and boundary conditions on the neural network, and choose the optimization algorithms adaptive moment estimation (ADAM) and limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) to optimize the learnable parameters of the neural network. Next, we improve the PINN with a weighted loss function to obtain both the bright and dark soliton solutions of the Fokas-Lenells equation (FLE). We find that the proposed scheme with adjustable weight coefficients in the PINN has a better convergence rate and generalizability than the basic PINN algorithm. We believe that the PINN approach to solving the partial differential equations appearing in nonlinear optics would be useful in studying various optical phenomena.
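
A compact skeleton of a PINN with the weighted composite loss described above. This is structure only: the network size, collocation points, and weights are illustrative, and the actual Fokas-Lenells residual is abbreviated to a placeholder.

```python
import torch

# Small network u(x, t) -> (Re u, Im u)
net = torch.nn.Sequential(
    torch.nn.Linear(2, 50), torch.nn.Tanh(),
    torch.nn.Linear(50, 50), torch.nn.Tanh(),
    torch.nn.Linear(50, 2))

def pde_residual(xt):
    xt = xt.requires_grad_(True)
    u = net(xt)
    du = [torch.autograd.grad(u[:, k].sum(), xt, create_graph=True)[0] for k in (0, 1)]
    # ... the actual FLE residual would be assembled from u and its derivatives ...
    return du[0] + du[1]                    # placeholder standing in for the residual

xt_f = torch.rand(1000, 2)                  # residual collocation points
xt_0 = torch.rand(100, 2)                   # initial/boundary points
u_0 = torch.zeros(100, 2)                   # prescribed initial/boundary values

w_data, w_pde = 10.0, 1.0                   # adjustable weight coefficients
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):                    # ADAM stage; an L-BFGS stage would follow
    opt.zero_grad()
    loss = (w_data * torch.mean((net(xt_0) - u_0) ** 2)
            + w_pde * torch.mean(pde_residual(xt_f) ** 2))
    loss.backward()
    opt.step()
```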

Keywords: deep learning, optical soliton, physics informed neural network, partial differential equation

Procedia PDF Downloads 61
8566 The Relationship between Parenting Style, Nonattachment and Inferiority

Authors: Yu-Chien Huang, Shu-Chen Yang

Abstract:

Introduction: Parenting style, nonattachment, and inferiority are important topics in psychology, but related research on nonattachment is still lacking. Therefore, the purpose of this study was to explore the relationships between parenting style, nonattachment, and inferiority. Methods: We conducted a correlational study, and three instruments were utilized to collect data: a parenting style scale, a nonattachment scale, and an inferiority scale. Cronbach's α indicated good inter-item reliability, and the test-retest reliability showed good consistency. The data were analyzed using descriptive statistics, the Chi-square test, one-way ANOVA, Pearson’s correlation, and regression analysis. Results: A total of 200 participants were tested in this research. Inferiority had a positive correlation with authoritarian parenting style, while nonattachment had a negative correlation with both authoritarian parenting style and inferiority; thus, the hypothesis was supported. In the linear mediation models, nonattachment was found to partially mediate the relationship between authoritarian parenting style and inferiority. Conclusion: These findings imply that interventions aimed at enhancing nonattachment are a good strategy for reducing inferiority.
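
A sketch of the regression-based mediation logic (total, direct, and indirect paths) on simulated scores; the coefficients below are invented, not the study's estimates.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 200
style = rng.normal(size=n)                               # authoritarian parenting score
nonattach = -0.5 * style + rng.normal(size=n)            # mediator
inferiority = 0.4 * style - 0.3 * nonattach + rng.normal(size=n)

c = sm.OLS(inferiority, sm.add_constant(style)).fit().params[1]        # total effect
a = sm.OLS(nonattach, sm.add_constant(style)).fit().params[1]          # path a
fit_b = sm.OLS(inferiority,
               sm.add_constant(np.column_stack([style, nonattach]))).fit()
c_prime, b = fit_b.params[1], fit_b.params[2]                          # direct, path b
print(f"total={c:.3f}  direct={c_prime:.3f}  indirect(a*b)={a * b:.3f}")
```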

Keywords: inferiority, nonattachment, parenting style, psychology

Procedia PDF Downloads 115
8565 DNA Nano Wires: A Charge Transfer Approach

Authors: S. Behnia, S. Fathizadeh, A. Akhshani

Abstract:

In recent decades, there has been increasing interest in potential technological applications of DNA that are not directly related to its coding for functional proteins, i.e., the expression of genetic information. One of the most interesting applications of DNA is the construction of nanostructures of high complexity and the design of functional nanostructures in nanoelectronic devices, nanosensors, and nanocircuits. In this field, DNA is of fundamental interest to the development of DNA-based molecular technologies, as it possesses ideal structural and molecular recognition properties for use in self-assembling nanodevices with a definite molecular architecture. Also, the robust, one-dimensional flexible structure of DNA can be used to design electronic devices, serving as a wire, transistor, switch, or rectifier depending on its electronic properties. In order to understand the mechanism of charge transport along DNA sequences, numerous studies have been carried out. In this regard, the conductivity properties of the DNA molecule can be investigated in a simple but chemically specific approach that is intimately related to the Su-Schrieffer-Heeger (SSH) model. In the SSH model, the dependence of the non-diagonal matrix elements on intersite displacements is considered, so that the coupling between the charge and the lattice deformation along the helix is taken into account. This model is a tight-binding linear nanoscale chain originally established to describe conductivity phenomena in doped polyacetylene. It is based on the assumption of a classical harmonic interaction between sites, which is linearly coupled to a tight-binding Hamiltonian. In this work, the Hamiltonian and the corresponding equations of motion are nonlinear and have high sensitivity to initial conditions, so we have moved toward nonlinear dynamics and phase space analysis. Nonlinear dynamics and chaos theory, regardless of any approximation, could open new horizons for understanding the conductivity mechanism in DNA. For a detailed study, we have examined the current flowing in DNA and investigated the characteristic I-V diagram. As a result, it is shown that there are (quasi-)ohmic regions in the I-V diagram; on the other hand, regions with negative differential resistance (NDR) are also detectable.
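
The tight-binding backbone of such an SSH-type description can be sketched as follows, with the hopping modulated by hypothetical lattice displacements along the chain.

```python
import numpy as np

def ssh_hamiltonian(n_sites, t0=1.0, alpha=0.5, u=None):
    """Tight-binding SSH chain: the hopping t0 - alpha*(u[i+1] - u[i]) couples the
    charge to the lattice displacements u along the helix."""
    u = np.zeros(n_sites) if u is None else u
    H = np.zeros((n_sites, n_sites))
    for i in range(n_sites - 1):
        t = t0 - alpha * (u[i + 1] - u[i])
        H[i, i + 1] = H[i + 1, i] = -t
    return H

# Hypothetical static dimerized displacement pattern on a 40-site chain
u = 0.05 * (-1.0) ** np.arange(40)
levels = np.linalg.eigvalsh(ssh_hamiltonian(40, u=u))
print("gap around zero energy:", round(levels[20] - levels[19], 3))
```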

Keywords: DNA conductivity, Landauer resistance, negative differential resistance, chaos theory, mean Lyapunov exponent

Procedia PDF Downloads 411
8564 Machine Learning in Agriculture: A Brief Review

Authors: Aishi Kundu, Elhan Raza

Abstract:

"Necessity is the mother of invention" - Rapid increase in the global human population has directed the agricultural domain toward machine learning. The basic need of human beings is considered to be food which can be satisfied through farming. Farming is one of the major revenue generators for the Indian economy. Agriculture is not only considered a source of employment but also fulfils humans’ basic needs. So, agriculture is considered to be the source of employment and a pillar of the economy in developing countries like India. This paper provides a brief review of the progress made in implementing Machine Learning in the agricultural sector. Accurate predictions are necessary at the right time to boost production and to aid the timely and systematic distribution of agricultural commodities to make their availability in the market faster and more effective. This paper includes a thorough analysis of various machine learning algorithms applied in different aspects of agriculture (crop management, soil management, water management, yield tracking, livestock management, etc.).Due to climate changes, crop production is affected. Machine learning can analyse the changing patterns and come up with a suitable approach to minimize loss and maximize yield. Machine Learning algorithms/ models (regression, support vector machines, bayesian models, artificial neural networks, decision trees, etc.) are used in smart agriculture to analyze and predict specific outcomes which can be vital in increasing the productivity of the Agricultural Food Industry. It is to demonstrate vividly agricultural works under machine learning to sensor data. Machine Learning is the ongoing technology benefitting farmers to improve gains in agriculture and minimize losses. This paper discusses how the irrigation and farming management systems evolve in real-time efficiently. Artificial Intelligence (AI) enabled programs to emerge with rich apprehension for the support of farmers with an immense examination of data.

Keywords: machine learning, artificial intelligence, crop management, precision farming, smart farming, pre-harvesting, harvesting, post-harvesting

Procedia PDF Downloads 91
8563 Models of Environmental, Crack Propagation of Some Aluminium Alloys (7xxx)

Authors: H. A. Jawan

Abstract:

This review describes models of the environment-related crack propagation of aluminium alloys (7xxx) over the last few decades. Knowledge of the effects of different factors on the susceptibility to stress corrosion cracking (SCC) makes it possible to propose plausible mechanisms of crack advancement. A reliable mechanism of cracking, in turn, makes it possible to propose the optimum chemical composition and thermal treatment conditions, resulting in the microstructure most suitable for the real environmental conditions and stress state.

Keywords: microstructure, environmental, propagation, mechanism

Procedia PDF Downloads 408
8562 Regional Flood Frequency Analysis in Narmada Basin: A Case Study

Authors: Ankit Shah, R. K. Shrivastava

Abstract:

Floods and droughts are two main features of hydrology which affect human life. Floods are natural disasters which cause millions of rupees’ worth of damage each year in India and across the world, destroying both life and property. An accurate estimate of the flood damage potential is a key element of an effective, nationwide flood damage abatement program. Also, the increase in the demand for water due to population, industrial, and agricultural growth has made clear that, though a renewable resource, water cannot be taken for granted. We have to optimize the use of water according to circumstances and conditions, and we need to harness it, which can be done through the construction of hydraulic structures. For the safe and proper functioning of hydraulic structures, we need to predict the flood magnitude and its impact. Hydraulic structures play a key role in harnessing and optimizing flood water, which in turn results in the safe and maximum use of the water available. Hydraulic structures are mainly constructed at ungauged sites. There are two methods by which we can estimate floods, viz. the generation of unit hydrographs and flood frequency analysis; in this study, regional flood frequency analysis has been employed. There are many methods for regional flood frequency analysis, viz. the Index Flood Method, Natural Environment Research Council (NERC) methods, the Multiple Regression Method, etc. However, none of the methods can be considered universal for every situation and location. The Narmada basin is located in Central India and is drained by many tributaries, most of which are ungauged; therefore, it is very difficult to estimate floods on these tributaries and in the main river. In the present study, Artificial Neural Networks (ANNs) and the Multiple Regression Method are used to determine the regional flood frequency relationships from the annual peak flood data of 20 gauging sites in the Narmada basin. The homogeneity of the considered sites is determined by using the Index Flood Method. The flood relationships obtained by the two methods are compared with each other, and it is found that the ANN is more reliable than the Multiple Regression Method for the present study area.
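
A minimal sketch of the ANN-versus-multiple-regression comparison on synthetic catchment descriptors; the variables and the flood relationship are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
# Hypothetical descriptors for 20 catchments: area (km2), rainfall (mm), slope
X = rng.uniform([100, 600, 0.001], [5000, 1600, 0.02], size=(20, 3))
q = 0.05 * X[:, 0] ** 0.8 * (X[:, 1] / 1000) ** 1.2 * np.exp(rng.normal(0, 0.2, 20))

for name, model in [("multiple regression", LinearRegression()),
                    ("ANN (MLP)", MLPRegressor(hidden_layer_sizes=(8,),
                                               max_iter=20000, random_state=0))]:
    r2 = cross_val_score(model, np.log(X), np.log(q), cv=4, scoring="r2").mean()
    print(f"{name}: mean cross-validated R2 = {r2:.2f}")
```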

Keywords: artificial neural network, index flood method, multi layer perceptrons, multiple regression, Narmada basin, regional flood frequency

Procedia PDF Downloads 405
8561 Application of the Micropolar Beam Theory for the Construction of the Discrete-Continual Model of Carbon Nanotubes

Authors: Samvel H. Sargsyan

Abstract:

Together with the study of the electron-optical properties of nanostructures, and proceeding from experiment-based data, the study of the mechanical properties of nanostructures has become quite topical. For the study of the mechanical properties of fullerene, carbon nanotubes, graphene, and other nanostructures, one of the crucial issues is the construction of adequate mathematical models. Among all mathematical models of graphene or carbon nanotubes, the so-called discrete-continual model is especially important: it substitutes the interactions between atoms by elastic beams or springs. The present paper demonstrates the construction of the discrete-continual beam model for carbon nanotubes or graphene, where a micropolar beam model based on the theory of moment elasticity is accepted. With the account of the energy balance principle, the elastic moment constants for the beam model, expressed by the physical and geometrical parameters of the carbon nanotube or graphene, are determined. By switching from the discrete-continual beam model to the continual one, the models of a micropolar elastic cylindrical shell and a micropolar elastic plate are confirmed as continual models for the carbon nanotube and graphene, respectively.

Keywords: carbon nanotube, discrete-continual, elastic, graphene, micropolar, plate, shell

Procedia PDF Downloads 143
8560 New Approach for Load Modeling

Authors: Slim Chokri

Abstract:

Load forecasting is one of the central functions in power system operations. Electricity cannot be stored, which means that for an electric utility, an estimate of future demand is necessary for managing production and purchasing in an economically reasonable way. A majority of the recently reported approaches are based on neural networks; the attraction of these methods lies in the assumption that neural networks are able to learn properties of the load. However, the development of the methods is not finished, and the lack of comparative results on different model variations is a problem. This paper presents a new approach to predicting the daily peak load in Tunisia. The proposed method employs a computational intelligence scheme based on a fuzzy neural network (FNN) and support vector regression (SVR). The experimental results obtained indicate that our proposed FNN-SVR technique gives significantly better prediction accuracy compared to some classical techniques.
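
The SVR half of such a hybrid can be sketched on synthetic daily peak loads with lagged inputs; the FNN component and the Tunisian data are not reproduced here.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(4)
days = np.arange(365)
# Hypothetical daily peak load (MW) with annual and weekly cycles plus noise
load = (900 + 120 * np.sin(2 * np.pi * days / 365)
        + 60 * (days % 7 < 5) + rng.normal(0, 15, 365))

lags = 7                                          # the previous week predicts the next day
X = np.column_stack([load[i:len(load) - lags + i] for i in range(lags)])
y = load[lags:]

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=5.0))
model.fit(X[:300], y[:300])                       # train on the first 300 days
print("next-day peak forecast (MW):", round(model.predict(X[300:301])[0], 1))
```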

Keywords: neural network, load forecasting, fuzzy inference, machine learning, fuzzy modeling and rule extraction, support vector regression

Procedia PDF Downloads 423
8559 Using Linear Logistic Regression to Evaluation the Patient and System Delay and Effective Factors in Mortality of Patients with Acute Myocardial Infarction

Authors: Firouz Amani, Adalat Hoseinian, Sajjad Hakimian

Abstract:

Background: Mortality due to myocardial infarction (MI) often occurs during the first hours after the onset of symptoms, so receiving the necessary treatment by reaching the hospital in time can be effective in decreasing the mortality rate. The aim of this study was to investigate the impact of effective factors on the mortality of MI patients by using linear logistic regression. Materials and Methods: In this case-control study, all patients with acute MI who were referred to the Ardabil city hospital were studied. All patients who died were considered as the case group (n=27), and 27 matched surviving MI patients were selected as the control group. Data were collected for all patients in the two groups using the same checklist and then analyzed with SPSS version 24 software using statistical methods. We used the linear logistic regression model to determine the effective factors on the mortality of MI patients. Results: The mean age of patients in the case group was significantly higher than in the control group (75.1±11.7 vs. 63.1±11.6, p=0.001). The history of non-cardiac diseases in the case group, at 44.4%, was significantly higher than in the control group, at 7.4% (p=0.002). The number of performed PCIs in the case group, at 40.7%, was significantly lower than in the control group, at 74.1% (p=0.013). The time between hospital admission and PCI in the case group, at 110.9 min, was significantly higher than in the control group, at 56 min (p=0.001). The mean delay from onset of symptoms to hospital admission (patient delay) and the mean delay from hospital admission to treatment (system delay) were similar between the two groups. Using the logistic regression model, we found that the history of non-cardiac diseases (OR=283) and the number of performed PCIs (OR=24.5) had a significant impact on the mortality of MI patients compared to the other factors. Conclusion: The results of this study showed that, of all the studied factors, the number of performed PCIs, the history of non-cardiac illness, and the interval between onset of symptoms and PCI have a significant relation with the mortality of MI patients, while the other factors were not meaningful. Further studies with larger samples investigating other factors, such as smoking and weather, are recommended.
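
A sketch of how such odds ratios fall out of a fitted logistic model, on simulated case-control-style data; the coefficients and variables below are hypothetical, not the study's.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 54                                         # 27 cases + 27 controls, as in the study
age = rng.normal(69, 12, n)
no_pci = rng.integers(0, 2, n).astype(float)   # 1 = PCI not performed
p = 1 / (1 + np.exp(-(-14 + 0.18 * age + 1.5 * no_pci)))   # invented true model
died = (rng.uniform(size=n) < p).astype(float)

X = sm.add_constant(np.column_stack([age, no_pci]))
fit = sm.Logit(died, X).fit(disp=0)
print("odds ratios (age, no PCI):", np.round(np.exp(fit.params[1:]), 2))
```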

Keywords: acute MI, mortality, heart failure, arrhythmia

Procedia PDF Downloads 114
8558 Strengthening Evaluation of Steel Girder Bridge under Load Rating Analysis: Case Study

Authors: Qudama Albu-Jasim, Majdi Kanaan

Abstract:

A case study on the load rating and strengthening evaluation of a six-span steel girder bridge in the city of Colton, California, is investigated. To simulate the load rating and strengthening assessment of the Colton Overhead bridge, a three-dimensional finite element model was built in the CSiBridge program. The three-dimensional finite-element models of the bridge were established considering the nonlinear behavior of critical bridge components to determine the feasibility and strengthening capacity under load rating analysis. The bridge was evaluated according to the Caltrans Bridge Load Rating Manual, 1st edition, rating the superstructure using the Load and Resistance Factor Rating (LRFR) method. The analysis of the bridge was based on load rating to determine the largest loads that can be safely placed on the existing steel I-girder members and permitted to pass over the bridge. Through extensive numerical simulations, the bridge was identified to be deficient in flexural and shear capacities, and therefore strengthening to reduce the risk is needed. An in-depth parametric study was carried out to evaluate the sensitivity of the bridge’s load rating response to variations in its structural parameters. The parametric analysis showed that uncertainties associated with the steel’s yield strength, the superstructure’s weight, and the diaphragm configurations should be considered during the fragility analysis of the bridge system.
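
For orientation, a rating factor of the LRFR form RF = (C - g_DC*DC - g_DW*DW) / (g_LL*(LL+IM)) underlies such evaluations; the sketch below uses hypothetical girder demands and typical load factors as assumptions.

```python
def lrfr_rating_factor(capacity, dc, dw, ll_im,
                       gamma_dc=1.25, gamma_dw=1.50, gamma_ll=1.75):
    """RF = (C - g_DC*DC - g_DW*DW) / (g_LL*(LL + IM)); RF >= 1.0 indicates the
    member can safely carry the rating load. Load factors shown are typical
    inventory-level values, used here as assumptions."""
    return (capacity - gamma_dc * dc - gamma_dw * dw) / (gamma_ll * ll_im)

# Hypothetical flexural demands for one interior steel I-girder (kip-ft)
rf = lrfr_rating_factor(capacity=5200.0, dc=1400.0, dw=300.0, ll_im=1500.0)
print(f"rating factor: {rf:.2f}")       # > 1.0, so this girder would rate adequately
```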

Keywords: load rating, CSIBridge, strengthening, uncertainties, case study

Procedia PDF Downloads 201
8557 Pricing European Options under Jump Diffusion Models with Fast L-stable Padé Scheme

Authors: Salah Alrabeei, Mohammad Yousuf

Abstract:

The goal of option pricing theory is to help investors manage their money, enhance returns, and control their financial future by theoretically valuing their options. Modeling option pricing with Black-Scholes models with jumps guarantees that market movement is taken into account. However, such models can only be solved numerically, and not all numerical methods are efficient for them because the payoffs are non-smooth or have discontinuous derivatives at the exercise price. In this paper, the exponential time differencing (ETD) method is applied to solve the partial integro-differential equations arising in pricing European options under Merton’s and Kou’s jump-diffusion models. The Fast Fourier Transform (FFT) algorithm is used as a matrix-vector multiplication solver, which reduces the complexity from O(M²) to O(M log M). A partial fraction form of Padé schemes is used to overcome the complexity of inverting polynomials of matrices. These two tools guarantee efficient and accurate numerical solutions. We construct a parallel and easy-to-implement version of the numerical scheme. Numerical experiments are given to show how fast and accurate our scheme is.
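
The FFT-based matrix-vector product exploits the Toeplitz structure of the discretized jump integral; below is a generic sketch with an invented exponential kernel, not Merton's or Kou's density.

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_matvec(col, row, x):
    """Multiply an m-by-m Toeplitz matrix (first column col, first row row) by x
    in O(M log M): embed it in a 2m circulant and diagonalize with the FFT."""
    m = len(x)
    c = np.concatenate([col, [0.0], row[1:][::-1]])          # circulant first column
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(np.concatenate([x, np.zeros(m)])))
    return y[:m].real

# Invented exponentially decaying kernel standing in for a discretized jump integral
col, row = np.exp(-np.arange(4.0)), np.exp(-2.0 * np.arange(4.0))
x = np.arange(1.0, 5.0)
print(toeplitz_matvec(col, row, x))     # matches the dense O(M^2) product below
print(toeplitz(col, row) @ x)
```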

Keywords: integro-differential equations, L-stable methods, pricing European options, jump-diffusion model

Procedia PDF Downloads 136
8556 Islamic Equity Markets Response to Volatility of Bitcoin

Authors: Zakaria S. G. Hegazy, Walid M. A. Ahmed

Abstract:

This paper examines the dependence structure of Islamic stock markets on Bitcoin’s realized volatility components in bear, normal, and bull market periods. A quantile regression approach is employed, after adjusting raw returns with respect to a broad set of relevant global factors and accounting for structural breaks in the data. The results reveal that upside volatility tends to exert negative influences on Islamic developed-market returns more in bear than in bull market conditions, while downside volatility positively affects returns during bear and bull conditions. For emerging markets, we find that the upside (downside) component exerts lagged negative (positive) effects on returns in bear (all) market regimes. By and large, the dependence structures turn out to be asymmetric. Our evidence provides essential implications for investors.
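
A minimal sketch of the quantile-regression setup on simulated returns and volatility components; the data-generating coefficients are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 500
# Hypothetical adjusted index returns driven by Bitcoin's upside/downside realized volatility
rv_up, rv_down = rng.gamma(2.0, 0.01, n), rng.gamma(2.0, 0.01, n)
ret = 0.001 - 0.8 * rv_up + 0.5 * rv_down + rng.normal(0, 0.01, n)
df = pd.DataFrame({"ret": ret, "rv_up": rv_up, "rv_down": rv_down})

for q in (0.1, 0.5, 0.9):                       # bear, normal, and bull regimes
    fit = smf.quantreg("ret ~ rv_up + rv_down", df).fit(q=q)
    print(f"q={q}: up={fit.params['rv_up']:+.2f}  down={fit.params['rv_down']:+.2f}")
```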

Keywords: cryptocurrency markets, bitcoin, realized volatility measures, asymmetry, quantile regression

Procedia PDF Downloads 172
8555 Estimation of Dynamic Characteristics of a Middle-Rise Steel Reinforced Concrete Building Using Long-Term Earthquake Observation Records

Authors: Fumiya Sugino, Naohiro Nakamura, Yuji Miyazu

Abstract:

In the earthquake-resistant design of buildings, the evaluation of vibration characteristics is important. In recent years, due to the increase in super high-rise buildings, the evaluation of the response is important not only for the first mode but also for higher modes; however, knowledge of the vibration characteristics of buildings is mostly limited to the first mode, and knowledge of higher modes is still insufficient. In this paper, using earthquake observation records of an SRC building and applying a frequency filter to an ARX model, the characteristics of the first and second modes were studied. First, we studied the change of the eigen frequency and the damping ratio during the 3.11 earthquake. The eigen frequency gradually decreases from the time of earthquake occurrence and is almost stable after about 150 seconds have passed, at which time the decreasing rates of the 1st and 2nd eigen frequencies are both about 0.7. Although the damping ratio has a larger error than the eigen frequency, both the 1st and 2nd damping ratios are 3 to 5%. Also, there is a strong correlation between the 1st and 2nd eigen frequencies, with the regression line y=3.17x; for the damping ratio, the regression line is y=0.90x, so the 1st and 2nd damping ratios are approximately of the same degree. Next, we studied the eigen frequency and damping ratio from 1998 to the final year 2014, with all the considered earthquakes, including the 3.11 earthquake, connected in order of occurrence. The eigen frequency slowly declined from immediately after completion and tended to stabilize after several years, although it declined greatly after the 3.11 earthquake. The decreasing rates of both the 1st and 2nd eigen frequencies until about 7 years later are about 0.8. For the damping ratio, both the 1st and 2nd are about 1 to 6%; after the 3.11 earthquake, the 1st increases by about 1% and the 2nd by less than 1%. For the eigen frequency, there is a strong correlation between the 1st and 2nd, with the regression line y=3.17x; for the damping ratio, the regression line is y=1.01x, so the 1st and 2nd damping ratios are approximately of the same degree. Based on the above results, the changes in eigen frequency and damping ratio are summarized as follows. In the long-term study of the eigen frequency, both the 1st and 2nd gradually declined from immediately after completion, tended to stabilize after a few years, and declined further after the 3.11 earthquake; in addition, there is a strong correlation between the 1st and 2nd, and the declining time and the decreasing rate are of the same degree. In the long-term study of the damping ratio, both the 1st and 2nd are about 1 to 6%; after the 3.11 earthquake, the 1st increases by about 1% and the 2nd by less than 1%, and the 1st and 2nd are approximately of the same degree.
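
A sketch of how modal parameters can be extracted from an ARX fit: the AR polynomial's poles are mapped to an eigen frequency and damping ratio. The SDOF simulation below is a stand-in for the building records, and all values are invented.

```python
import numpy as np

def modal_from_arx(y, u, dt, na=2, nb=2):
    """Least-squares ARX(na, nb) fit y[k] = sum_i a_i y[k-i] + sum_j b_j u[k-j];
    the AR poles are then mapped to an eigen frequency (Hz) and damping ratio."""
    n = max(na, nb)
    rows = [np.concatenate([y[k - na:k][::-1], u[k - nb:k][::-1]])
            for k in range(n, len(y))]
    theta, *_ = np.linalg.lstsq(np.asarray(rows), y[n:], rcond=None)
    poles = np.roots(np.concatenate([[1.0], -theta[:na]]))
    s = np.log(poles[np.argmax(poles.imag)]) / dt      # continuous-time pole
    return abs(s) / (2 * np.pi), -s.real / abs(s)      # frequency, damping ratio

# Hypothetical SDOF stand-in for a building record: f = 2 Hz, damping 4%
dt, f, z = 0.01, 2.0, 0.04
wn, t = 2 * np.pi * f, np.arange(0.0, 60.0, dt)
wd = wn * np.sqrt(1 - z**2)
u = np.random.default_rng(6).normal(size=len(t))       # broadband excitation
h = np.exp(-z * wn * t) * np.sin(wd * t) * dt          # impulse response
y = np.convolve(u, h)[:len(t)]
print(modal_from_arx(y, u, dt))                        # approx (2.0, 0.04)
```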

Keywords: eigenfrequency, damping ratio, ARX model, earthquake observation records

Procedia PDF Downloads 206
8554 Modeling and Simulation Methods Using MATLAB/Simulink

Authors: Jamuna Konda, Umamaheswara Reddy Karumuri, Sriramya Muthugi, Varun Pishati, Ravi Shakya

Abstract:

This paper investigates the challenges involved in the mathematical modeling of plant simulation models, ensuring that the performance of the plant models is much closer to the real-time physical model. The paper includes the analysis performed and an investigation of different methods of modeling, design, and development of the plant model. Issues which impact the design time, model accuracy as a real-time model, and tool dependence are analyzed. The real-time hardware plant would be a combination of multiple physical models, so it is challenging to test the complete system with all possible test scenarios; there are possibilities of failure or damage to the system due to unwanted test execution in real time.

Keywords: model based design (MBD), MATLAB, Simulink, stateflow, plant model, real time model, real-time workshop (RTW), target language compiler (TLC)

Procedia PDF Downloads 334
8553 Application of Human Biomonitoring and Physiologically-Based Pharmacokinetic Modelling to Quantify Exposure to Selected Toxic Elements in Soil

Authors: Eric Dede, Marcus Tindall, John W. Cherrie, Steve Hankin, Christopher Collins

Abstract:

Current exposure models used in contaminated land risk assessment are highly conservative. Use of these models may lead to over-estimation of actual exposures, possibly resulting in negative financial implications due to unnecessary remediation. Thus, we are carrying out a study seeking to improve our understanding of human exposure to selected toxic elements in soil: arsenic (As), cadmium (Cd), chromium (Cr), nickel (Ni), and lead (Pb) resulting from allotment land use. The study employs biomonitoring and physiologically-based pharmacokinetic (PBPK) modelling to quantify human exposure to these elements. We recruited 37 allotment users (adults > 18 years old) in Scotland, UK, to participate in the study. Concentrations of the elements (and their bioaccessibility) were measured in allotment samples (soil and allotment produce). Records of the amount of produce consumed by the participants, together with participants’ biological samples (urine and blood), were collected for up to 12 consecutive months. Ethical approval was granted by the University of Reading Research Ethics Committee. PBPK models (coded in MATLAB) were used to estimate the distribution and accumulation of the elements in key body compartments, thus indicating the internal body burden. Simulating low element intake (based on estimated ‘doses’ from produce consumption records), predictive models suggested that detection of these elements in urine and blood was possible within a given period of time following exposure. This information was used in planning the biomonitoring and is currently being used in the interpretation of test results from biological samples. Evaluation of the models is being carried out using the biomonitoring data, by comparing model-predicted concentrations and measured biomarker concentrations. The PBPK models will be used to generate bioavailability values, which could be incorporated into contaminated land exposure models. Thus, the findings from this study will promote a more sustainable approach to contaminated land management.

Keywords: biomonitoring, exposure, PBPK modelling, toxic elements

Procedia PDF Downloads 310
8552 Mapping Man-Induced Soil Degradation in Armenia's High Mountain Pastures through Remote Sensing Methods: A Case Study

Authors: A. Saghatelyan, Sh. Asmaryan, G. Tepanosyan, V. Muradyan

Abstract:

One major concern for Armenia has been soil degradation, which has emerged as a result of the unsustainable management and use of grasslands and, in turn, largely impacts the environment, agriculture, and ultimately human health. Hence, the assessment of soil degradation is an essential and urgent objective, set out to measure its possible consequences and to develop a potential management strategy. Recently, remote sensing (RS) technologies have become an essential tool for assessing pasture degradation. This research was done with the intention of measuring the precision of the Linear Spectral Unmixing (LSU) and NDVI-SMA methods for estimating soil surface components related to degradation (fractional vegetation cover (FVC), bare soil fractions, and surface rock cover) and of determining the appropriateness of these methods for mapping man-induced soil degradation in high mountain pastures. Taking into consideration the spatially complex and heterogeneous biogeophysical structure of the studied site, we used high-resolution multispectral QuickBird imagery of a pasture site in one of Armenia’s rural communities, Nerkin Sasoonashen. The accuracy assessment was done by comparing the land cover abundance data derived through the RS methods with the ground truth land cover abundance data. A significant regression was established between the ground truth FVC estimate and both the NDVI-LSU and LSU-produced vegetation abundance data (R2=0.636 and R2=0.625, respectively). For bare soil fractions, linear regression produced a general coefficient of determination of R2=0.708. Because of the poor spectral resolution of the QuickBird imagery, LSU failed in the assessment of surface rock abundance (R2=0.015). It has been well documented by this particular research that a reduction in vegetation cover runs in parallel with an increase in man-induced soil degradation, whereas in the absence of man-induced soil degradation the bare soil fraction does not exceed a certain level. The outcomes show that the proposed method of man-induced soil degradation assessment through FVC, bare soil fractions, and field data adequately reflects the current status of soil degradation throughout the studied pasture site and may be employed as an alternative to more complicated models of soil degradation assessment.
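
The LSU step itself can be sketched as a constrained least-squares unmixing of each pixel against endmember spectra; the four-band spectra below are invented.

```python
import numpy as np
from scipy.optimize import nnls

# Invented endmember spectra (4 bands x 3 endmembers): vegetation, bare soil, rock
E = np.array([[0.05, 0.25, 0.30],
              [0.45, 0.30, 0.32],
              [0.30, 0.35, 0.33],
              [0.60, 0.40, 0.35]])
pixel = 0.5 * E[:, 0] + 0.4 * E[:, 1] + 0.1 * E[:, 2]   # mixed-pixel reflectance

frac, _ = nnls(E, pixel)             # non-negativity-constrained least squares
frac /= frac.sum()                   # renormalize to impose the sum-to-one constraint
print("FVC, bare-soil, rock fractions:", np.round(frac, 3))
```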

Keywords: Armenia, linear spectral unmixing, remote sensing, soil degradation

Procedia PDF Downloads 318
8551 Comparisons of Co-Seismic Gravity Changes between GRACE Observations and the Predictions from the Finite-Fault Models for the 2012 Mw = 8.6 Indian Ocean Earthquake Off-Sumatra

Authors: Armin Rahimi

Abstract:

The Gravity Recovery and Climate Experiment (GRACE) has been a very successful project in determining mass redistribution within the Earth system. Large deformations caused by earthquakes lie in the high-frequency band, but unfortunately GRACE is only capable of providing reliable estimates of gravitational changes in the low-to-medium frequency band. In this study, we computed the gravity changes after the 2012 Mw 8.6 Indian Ocean earthquake off-Sumatra using the GRACE Level-2 monthly spherical harmonic (SH) solutions released by the University of Texas Center for Space Research (UTCSR). Moreover, we calculated gravity changes using different fault models derived from teleseismic data. The model predictions showed non-negligible discrepancies in gravity changes. However, after removing high-frequency signals using 350 km Gaussian filtering, commensurate with the GRACE spatial resolution, the discrepancies vanished, the spatial patterns of the total gravity changes predicted from all slip models became similar at the spatial resolution attainable by GRACE observations, and the predicted gravity changes were consistent with the GRACE-detected gravity changes. Nevertheless, the fault models, which give different slip amplitudes, proportionally lead to different amplitudes in the predicted gravity changes.
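
The 350 km Gaussian filter is conventionally applied per spherical-harmonic degree; below is a sketch of Jekeli-type smoothing weights, written under the assumption that the standard recursion used in GRACE processing applies.

```python
import numpy as np

def gaussian_weights(l_max, radius_km, R_km=6371.0):
    """Per-degree weights W_l of the Gaussian averaging kernel (Jekeli-type
    recursion as commonly used in GRACE processing); radius is the half-width."""
    b = np.log(2.0) / (1.0 - np.cos(radius_km / R_km))
    W = np.zeros(l_max + 1)
    W[0] = 1.0
    W[1] = (1.0 + np.exp(-2.0 * b)) / (1.0 - np.exp(-2.0 * b)) - 1.0 / b
    for l in range(1, l_max):
        W[l + 1] = -(2 * l + 1) / b * W[l] + W[l - 1]
    return W

W = gaussian_weights(60, 350.0)
print("attenuation at degrees 20 and 60:", round(W[20], 3), round(W[60], 3))
# Smoothed coefficients: C_lm -> W[l] * C_lm and S_lm -> W[l] * S_lm
```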

Keywords: undersea earthquake, GRACE observation, gravity change, dislocation model, slip distribution

Procedia PDF Downloads 343
8550 On Consolidated Predictive Model of the Natural History of Breast Cancer Considering Primary Tumor and Primary Distant Metastases Growth

Authors: Ella Tyuryumina, Alexey Neznanov

Abstract:

Finding algorithms to predict the growth of tumors has piqued the interest of researchers ever since the early days of cancer research, and a number of studies have been carried out as an attempt to obtain reliable data on the natural history of breast cancer growth. Mathematical modeling can play a very important role in the prognosis of the tumor process of breast cancer. However, most mathematical models describe primary tumor growth and metastases growth separately. Consequently, we propose a mathematical growth model for the primary tumor and primary metastases which may help to improve the predicting accuracy of breast cancer progression, using an original mathematical model referred to as CoM-IV and corresponding software. We are interested in: 1) modelling the whole natural history of the primary tumor and primary metastases; 2) developing an adequate and precise CoM-IV which reflects the relations between PT and MTS; 3) analyzing the CoM-IV scope of application; 4) implementing the model as a software tool. The CoM-IV is based on the exponential tumor growth model, consists of a system of determinate nonlinear and linear equations, and corresponds to the TNM classification. It allows the calculation of different growth periods of the primary tumor and primary metastases: 1) the ‘non-visible period’ for the primary tumor; 2) the ‘non-visible period’ for primary metastases; 3) the ‘visible period’ for primary metastases. The new predictive tool: 1) is a solid foundation for future studies of breast cancer models; 2) does not require any expensive diagnostic tests; 3) is the first predictor which makes its forecast using only current patient data, while the others are based on additional statistical data. Thus, the CoM-IV model and predictive software: a) detect different growth periods of the primary tumor and primary metastases; b) forecast the period of primary metastases appearance; c) have higher average prediction accuracy than the other tools; d) can improve forecasts of the survival of BC and facilitate the optimization of diagnostic tests. The following are calculated by CoM-IV: the number of doublings for the ‘non-visible’ and ‘visible’ growth periods of primary metastases, and the tumor volume doubling time (days) for the ‘non-visible’ and ‘visible’ growth periods of primary metastases. The CoM-IV enables, for the first time, prediction of the whole natural history of primary tumor and primary metastases growth at each stage (pT1, pT2, pT3, pT4), relying only on primary tumor sizes. Summarizing: a) CoM-IV correctly describes primary tumor and primary distant metastases growth of stage IV (T1-4N0-3M1) disease, with (N1-3) or without (N0) regional metastases in lymph nodes; b) it facilitates the understanding of the period of appearance and manifestation of primary metastases.
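
The exponential-growth bookkeeping behind such 'non-visible period' estimates can be sketched as follows; the cell size, detection threshold, and doubling time below are illustrative assumptions, not the CoM-IV parameters.

```python
import numpy as np

def doublings(v_start, v_end):
    """Number of volume doublings between two volumes under exponential growth."""
    return np.log2(v_end / v_start)

def sphere_volume(d_mm):
    return (np.pi / 6.0) * d_mm**3

# Illustrative only: growth from a single 0.01 mm cell to a 10 mm (pT1-sized) tumor
n = doublings(sphere_volume(0.01), sphere_volume(10.0))
dt_days = 100.0                       # assumed tumor volume doubling time
print(f"{n:.0f} doublings, roughly {n * dt_days / 365:.1f} years of 'non-visible' growth")
```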

Keywords: breast cancer, exponential growth model, mathematical modelling, primary metastases, primary tumor, survival

Procedia PDF Downloads 326
8549 Impact on Cost of Equity of Accounting and Disclosures

Authors: Abhishek Ranga

Abstract:

The study examined the effect of accounting choice and level of disclosure on the firm’s implied cost of equity in the Indian environment. For the study, accounting choice was classified as aggressive or conservative depending upon the firm’s choice of accounting methods, accounting policies, and accounting estimates. Level of disclosure is the quantum of financial and non-financial information disclosed in the firm’s annual report, essentially in the notes to accounts section, the schedules forming part of the financial statements, and the Management Discussion and Analysis report. Regression models were developed with the cost of equity as the dependent variable and accounting choice and level of disclosure as independent variables, along with selected control variables. The cost of equity was measured using the Edwards-Bell-Ohlson (EBO) valuation model, accounting choice was measured using the Modified Jones Model (MJM), and level of disclosure was measured using a disclosure index essentially drawn from Botosan’s study. The results indicated a negative association between the implied cost of equity and conservative accounting choice, and also between the level of disclosure and the cost of equity.

Keywords: aggressive accounting choice, conservative accounting choice, disclosure, implied cost of equity

Procedia PDF Downloads 452
8548 A Demonstration of How to Employ and Interpret Binary IRT Models Using the New IRT Procedure in SAS 9.4

Authors: Ryan A. Black, Stacey A. McCaffrey

Abstract:

Over the past few decades, great strides have been made toward improving the science of measuring psychological constructs. Item Response Theory (IRT) has been the foundation upon which statistical models have been derived to increase both precision and accuracy in psychological measurement. These models are now widely used to develop and refine tests intended to measure an individual's level of academic achievement, aptitude, and intelligence. Recently, the field of clinical psychology has adopted IRT models to measure psychopathological phenomena such as depression, anxiety, and addiction. Because advances in IRT measurement models are being made so rapidly across various fields, it has become quite challenging for psychologists and other behavioral scientists to keep abreast of the most recent developments, much less learn how to employ them and decide which models are the most appropriate to use in their line of work. In the same vein, IRT measurement models vary greatly in complexity in several interrelated ways, including but not limited to the number of item-specific parameters estimated in a given model, the function which links the expected response and the predictor, response option formats, and dimensionality. As a result, inferior methods (a.k.a. Classical Test Theory methods) continue to be employed in efforts to measure psychological constructs, despite evidence showing that IRT methods yield more precise and accurate measurement. To increase the use of IRT methods, this study endeavors to provide a comprehensive overview of binary IRT models, that is, measurement models employed on test data consisting of binary response options (e.g., correct/incorrect, true/false, agree/disagree). Specifically, this study will cover binary IRT models from the most basic, the 1-parameter logistic (1-PL) model dating back over 50 years, up to the most recent and most complex 4-parameter logistic (4-PL) model. Binary IRT models will be defined mathematically, and the interpretation of each parameter will be provided. Next, all four binary IRT models will be employed on two sets of data: 1. simulated data of N=500,000 subjects who responded to four dichotomous items, and 2. a pilot analysis of real-world data collected from a sample of approximately 770 subjects who responded to four self-report dichotomous items pertaining to emotional consequences of alcohol use. The real-world data were based on responses collected on items administered to subjects as part of a scale-development study (NIDA Grant No. R44 DA023322). The IRT analyses conducted on both the simulated data and the real-world pilot data will provide a clear demonstration of how to construct, evaluate, and compare binary IRT measurement models. All analyses will be performed using the new IRT procedure in SAS 9.4. SAS code to generate simulated data and analyses will be available upon request to allow for replication of results.
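
For reference, the binary IRT family discussed here nests as follows, from 1-PL to 4-PL; this is a quick sketch of the item response function itself, not of the SAS procedure, and the item parameters are hypothetical.

```python
import numpy as np

def irt_prob(theta, a=1.0, b=0.0, c=0.0, d=1.0):
    """4-PL item response function P(correct) = c + (d - c)/(1 + exp(-a(theta - b))).
    d=1 recovers the 3-PL, c=0 and d=1 the 2-PL, and a common fixed slope the 1-PL."""
    return c + (d - c) / (1.0 + np.exp(-a * (theta - b)))

theta = np.array([-2.0, 0.0, 2.0])                     # person ability levels
print("1-PL:", irt_prob(theta, b=0.5))
print("2-PL:", irt_prob(theta, a=1.7, b=0.5))
print("3-PL:", irt_prob(theta, a=1.7, b=0.5, c=0.2))   # c = guessing floor
print("4-PL:", irt_prob(theta, a=1.7, b=0.5, c=0.2, d=0.95))  # d = slipping ceiling
```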

Keywords: instrument development, item response theory, latent trait theory, psychometrics

Procedia PDF Downloads 338