Search results for: simple exponential smoothing model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19414

19264 Numerical Investigation of Poling Vector Angle on Adaptive Sandwich Plate Deflection

Authors: Alireza Pouladkhan, Mohammad Yavari Foroushani, Ali Mortazavi

Abstract:

This paper presents a finite element model for a sandwich plate containing a piezoelectric core. A sandwich plate with a piezoelectric core is constructed using the shear mode of piezoelectric materials. The orientation of the poling vector has a significant effect on the deflection and stress induced in the piezo-actuated adaptive sandwich plate. In the present study, the influence of this factor is investigated for clamped-clamped-free-free and simple-simple-free-free square sandwich plates using the finite element method. The study uses ABAQUS (v.6.7) software to derive the finite element model of the sandwich plate. Using this model, the study quantifies the influence of the poling vector angle on the response of the smart structure and determines the maximum transverse displacement and the maximum induced stress.

Keywords: finite element method, sandwich plate, poling vector, piezoelectric materials, smart structure, electric enthalpy

Procedia PDF Downloads 233
19263 Design and Development of Real-Time Optimal Energy Management System for Hybrid Electric Vehicles

Authors: Masood Roohi, Amir Taghavipour

Abstract:

This paper describes a strategy to develop an energy management system (EMS) for a charge-sustaining power-split hybrid electric vehicle. This kind of hybrid electric vehicle (HEV) benefits from the advantages of both parallel and series architectures. However, managing the power flow between the battery and the engine optimally becomes relatively complicated. The strategy applied in this paper is based on a nonlinear model predictive control approach. First, an appropriate control-oriented model that was both sufficiently accurate and simple was derived. To make the controller usable in real time, the problem was solved off-line for a wide range of reference signals and initial conditions, and the computed manipulated variables were stored in look-up tables. Look-up tables require little memory, and the computational load decreases dramatically because the controller only needs a simple interpolation between table entries to find the required manipulated variables.
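
As an illustration of the off-line/on-line split described above, the following sketch grids a stand-in for the NMPC solution over two hypothetical state variables (battery state of charge and power demand) and replaces the on-line optimization with table interpolation; the grid axes, the placeholder policy, and all numbers are assumptions, not the authors' controller.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical state grid: battery state of charge and driver power demand.
soc_grid = np.linspace(0.3, 0.8, 26)       # state of charge [-]
p_dem_grid = np.linspace(0.0, 60.0, 61)    # power demand [kW]

def solve_nmpc_offline(soc, p_dem):
    """Placeholder for the expensive off-line NMPC solve; returns an
    'optimal' engine power for one grid point (dummy rule, not real NMPC)."""
    return np.clip(p_dem * (0.9 - soc), 0.0, 40.0)

# Fill the look-up table once, off-line.
table = np.array([[solve_nmpc_offline(s, p) for p in p_dem_grid]
                  for s in soc_grid])

# At run time the controller only interpolates, which is cheap and fast.
policy = RegularGridInterpolator((soc_grid, p_dem_grid), table)
engine_power = policy([[0.55, 23.0]])[0]
print(engine_power)
```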

Keywords: hybrid electric vehicles, energy management system, nonlinear model predictive control, real-time

Procedia PDF Downloads 352
19262 Generalized Additive Model for Estimating Propensity Score

Authors: Tahmidul Islam

Abstract:

The propensity score matching (PSM) technique has been widely used for estimating the causal effect of treatment in observational studies. A major step in implementing PSM is estimating the propensity score (PS). A logistic regression model with additive linear terms in the covariates is the technique most often used; logistic regression is also used with cubic splines to retain flexibility in the model. However, choosing the functional form of the logistic regression model has remained an open question, since the effectiveness of PSM depends on how accurately the PS is estimated. In many situations, the linearity assumption of linear logistic regression may not hold, and a non-linear relation between the logit and the covariates may be more appropriate. One can also estimate the PS using machine learning techniques such as random forests or neural networks for greater accuracy in non-linear situations. In this study, an attempt has been made to assess the efficacy of the generalized additive model (GAM) in various linear and non-linear settings and to compare its performance with that of the usual logistic regression. The GAM is a non-parametric technique in which the functional form of the covariates can be left unspecified, so a flexible regression model can be fitted. Various simple and complex treatment models were considered under several situations (small/large samples, low/high numbers of treatment units), and the methods were examined to see which leads to better covariate balance in the matched dataset. It is found that the logistic regression model is impressively robust against the inclusion of quadratic and interaction terms and reduces the mean difference between the treatment and control sets as efficiently as the GAM does. The GAM provided no significantly better covariate balance than logistic regression in either simple or complex models. The analysis also suggests that a larger proportion of controls than treatment units leads to better balance for both methods.
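
A minimal sketch of the two estimators being compared, on synthetic data with a deliberately non-linear assignment mechanism; it assumes the third-party pygam package, and the data, smooth terms, and settings are illustrative, not the study's.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from pygam import LogisticGAM, s

rng = np.random.default_rng(0)
n = 2000
x1, x2 = rng.normal(size=n), rng.normal(size=n)
# Non-linear true assignment mechanism on the logit scale.
logit = 0.5 * x1**2 - np.sin(2 * x2)
treated = rng.binomial(1, 1 / (1 + np.exp(-logit)))
X = np.column_stack([x1, x2])

# Linear-logit propensity scores.
ps_logit = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# GAM propensity scores: one smooth, unspecified function per covariate.
ps_gam = LogisticGAM(s(0) + s(1)).fit(X, treated).predict_proba(X)
```

Covariate balance after matching on each score would then be compared, for example via standardized mean differences in the matched dataset.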

Keywords: accuracy, covariate balances, generalized additive model, logistic regression, non-linearity, propensity score matching

Procedia PDF Downloads 367
19261 Prediction of Malawi Rainfall from Global Sea Surface Temperature Using a Simple Multiple Regression Model

Authors: Chisomo Patrick Kumbuyo, Katsuyuki Shimizu, Hiroshi Yasuda, Yoshinobu Kitamura

Abstract:

This study deals with predicting Malawi rainfall from global sea surface temperature (SST) using a simple multiple regression model. Monthly rainfall data from nine stations in Malawi, grouped into two zones on the basis of inter-station rainfall correlations, were used in the study. Zone 1 consisted of the Karonga and Nkhatabay stations, located in northern Malawi; Zone 2 consisted of Bolero, in northern Malawi; Kasungu, Dedza, and Salima, in central Malawi; and the Mangochi, Makoka, and Ngabu stations, in southern Malawi. Links between Malawi rainfall and SST based on statistical correlations were evaluated, and significant results were selected as predictors for the regression models. The predictors for the Zone 1 model were identified from the Atlantic, Indian, and Pacific oceans, while those for Zone 2 were identified from the Pacific Ocean. The correlations between predicted and observed rainfall values were satisfactory, with r = 0.81 and 0.54 for Zones 1 and 2, respectively (significant at better than the 99.99% level). The results of the models agree with other findings suggesting that SST anomalies in the Atlantic, Indian, and Pacific oceans influence the rainfall patterns of southern Africa.
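
The regression step can be pictured with a short sketch along the following lines, using synthetic monthly data; the three SST predictor series and all coefficients are placeholders, not the station data or the paper's fitted model.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_months = 240
# Hypothetical SST index series (e.g. one per ocean basin).
sst = rng.normal(size=(n_months, 3))
rainfall = 120 + sst @ np.array([15.0, -8.0, 10.0]) + rng.normal(0, 25, n_months)

X = sm.add_constant(sst)             # intercept plus SST predictors
model = sm.OLS(rainfall, X).fit()    # simple multiple regression
print(model.params, model.rsquared)  # coefficients and goodness of fit
```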

Keywords: Malawi rainfall, forecast model, predictors, SST

Procedia PDF Downloads 389
19260 Moving Beyond the Limits of Disability Inclusion: Using the Concept of Belonging Through Friendship to Improve the Outcome of the Social Model of Disability

Authors: Luke S. Carlos A. Thompson

Abstract:

The medical model of disability, though beneficial for the medical professional, is often exclusionary, restrictive, and dehumanizing when applied to the lived experience of disability. As a result, a critique of this model was constructed, called the social model of disability. Much of the language used to articulate the purpose behind the social model of disability can be summed up in the word inclusion. However, this essay asserts that inclusiveness is an incomplete aspiration. The social model, as it currently stands, does not aid in creating a society where those with impairments actually belong. Rather, the social model aids in lessening the visibility, or negative consequences, of difference. Therefore, the social model does not invite society to welcome those with physical and intellectual impairments. It simply aids society in ignoring the existence of impairment by removing explicit forms of exclusion. Rather than simple inclusion, then, this essay uses John Swinton's concept of friendship and Jean Vanier's understanding of belonging to better articulate the intended outcome of the social model: a society where everyone can belong.

Keywords: belong, community, differently-able, disability, exclusion, friendship, inclusion, normality

Procedia PDF Downloads 448
19259 Analytical and Numerical Investigation of Friction-Restricted Growth and Buckling of Elastic Fibers

Authors: Peter L. Varkonyi, Andras A. Sipos

Abstract:

The quasi-static growth of elastic fibers is studied in the presence of distributed contact with an immobile surface, subject to isotropic dry or viscous friction. Unlike classical problems of elastic stability modelled by autonomous dynamical systems with multiple time scales (a slowly varying bifurcation parameter and fast system dynamics), this problem can only be formulated as a non-autonomous system without time scale separation. It is found that the fibers initially converge to a trivial, straight configuration, which is later replaced by divergence reminiscent of buckling phenomena. In order to capture the loss of stability, a new definition of exponential stability against infinitesimal perturbations is developed for systems defined over finite time intervals. A semi-analytical method for determining the critical length, based on eigenvalue analysis, is proposed. The post-critical behavior of the fibers is studied numerically using variational methods. The emerging post-critical shapes and the asymptotic behavior as the length goes to infinity are identified for simple spatial distributions of growth. Comparison with physical experiments indicates reasonable accuracy of the theoretical model. Applications ranging from the modeling of plant root growth to the design of soft manipulators in robotics are briefly discussed.

Keywords: buckling, elastica, friction, growth

Procedia PDF Downloads 190
19258 Statistical Comparison of Ensemble Based Storm Surge Forecasting Models

Authors: Amin Salighehdar, Ziwen Ye, Mingzhe Liu, Ionut Florescu, Alan F. Blumberg

Abstract:

Storm surge is an abnormal water level caused by a storm, and its accurate prediction is a challenging problem. Researchers have developed various ensemble modeling techniques that combine several individual forecasts to produce an overall, presumably better, forecast. Some simple ensemble modeling techniques exist in the literature; for instance, model output statistics (MOS) and running mean-bias removal are widely used in the storm surge prediction domain. However, these methods have drawbacks: MOS, for example, is based on multiple linear regression and needs a long period of training data. To overcome the shortcomings of these simple methods, researchers have proposed more advanced ones. For instance, ENSURF (Ensemble SURge Forecast) is a multi-model application for sea level forecasting that creates a better forecast of sea level using a combination of several instances of Bayesian Model Averaging (BMA). An ensemble dressing method is based on identifying the best member forecast and using it for prediction. Our contribution in this paper can be summarized as follows. First, we investigate whether the ensemble models perform better than any single forecast; for this we need to identify the single best forecast, and we present a methodology based on a simple Bayesian selection method to select it. Second, we present several new and simple ways to construct ensemble models, using correlation and standard deviation as weights for combining different forecast models. Third, we use these ensembles to forecast storm surge levels and compare them with several existing models from the literature. We then investigate whether developing a complex ensemble model is indeed needed; to this end, we use a simple average (one of the simplest and most widely used ensemble models) as a benchmark. Predicting the peak surge level during a storm, as well as the precise time at which this peak occurs, is crucial, so we develop a statistical platform to compare the performance of the various ensemble methods. This statistical analysis is based on the root mean square error of the ensemble forecast during the testing period and on the magnitude and timing of the forecasted peak surge compared to the actual peak and its time. In this work, we analyze four hurricanes: hurricanes Irene and Lee in 2011, hurricane Sandy in 2012, and hurricane Joaquin in 2015. Since hurricane Irene developed at the end of August 2011 and hurricane Lee started just after Irene at the beginning of September 2011, we consider them a single contiguous hurricane event. The data set used for this study was generated by the New York Harbor Observing and Prediction System (NYHOPS). We find that even the simplest possible way of creating an ensemble produces results superior to any single forecast. We also show that the ensemble models we propose generally perform better than the simple average ensemble technique.
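
One of the weighting schemes mentioned above (correlation with observations over a training window) can be sketched as follows on synthetic data; the member forecasts, window lengths, and weighting details are assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(2)
obs = np.sin(np.linspace(0, 8, 200)) + rng.normal(0, 0.1, 200)
# Three synthetic member forecasts of increasing noisiness.
members = np.stack([obs + rng.normal(0, s, 200) for s in (0.1, 0.3, 0.6)])

train = slice(0, 150)   # training window; the remainder is the test period
corr = np.array([np.corrcoef(m[train], obs[train])[0, 1] for m in members])
weights = np.clip(corr, 0, None)
weights /= weights.sum()

ensemble = weights @ members         # correlation-weighted ensemble
simple_avg = members.mean(axis=0)    # simple-average benchmark

def rmse(forecast):
    return np.sqrt(np.mean((forecast[150:] - obs[150:]) ** 2))

print(rmse(ensemble), rmse(simple_avg))
```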

Keywords: Bayesian learning, ensemble model, statistical analysis, storm surge prediction

Procedia PDF Downloads 309
19257 Impact of Map Generalization in Spatial Analysis

Authors: Lin Li, P. G. R. N. I. Pussella

Abstract:

When representing spatial data and their attributes on different types of maps, scale plays a key role in the process of map generalization. The process consists of two main operators: selection and omission. Once data are selected, they undergo several geometric modification processes such as elimination, simplification, smoothing, exaggeration, displacement, aggregation, and size reduction. As a result of these operations at different levels of the data, geometric properties of the spatial features such as length, sinuosity, orientation, perimeter, and area are altered. This is worst when preparing small-scale maps, since the cartographer does not have enough space to represent all the features on the map. When GIS users want to analyze a set of spatial data, they typically retrieve a dataset and carry out the analysis without considering very important characteristics such as the scale, the purpose of the map, and the degree of generalization. Further, GIS users use and compare maps with different degrees of generalization, and sometimes they zoom beyond the scale of the source map, violating the basic cartographic rule that it is not suitable to create a larger-scale map from a smaller-scale map. The main objective of this study is to discuss the effect of map generalization on GIS analysis. Three digital maps at different scales (1:10,000, 1:50,000, and 1:250,000), prepared by the Survey Department of Sri Lanka, the national mapping agency of Sri Lanka, were used. Features common to all three maps were used, and an overlay analysis was repeated with different data combinations. Road, river, and land-use datasets were used for the study. A simple model for finding the best location for a wildlife park was used to identify the effects. The results show remarkable effects for the different degrees of generalization: different locations with different geometries were obtained as the outputs of the analysis. The study suggests that there should be reasonable methods to overcome this effect; as a solution, it is recommended to bring all the datasets to a common scale before doing the analysis.
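
The kind of overlay experiment described can be expressed in a few lines of geopandas; the file names, attribute values, and buffer distance below are placeholders for illustration only (both layers are assumed to share a projected CRS in meters).

```python
import geopandas as gpd

# Hypothetical layers digitized from the 1:10,000 source maps.
roads_10k = gpd.read_file("roads_1to10000.shp")
landuse_10k = gpd.read_file("landuse_1to10000.shp")

# Candidate area for the park: forested land use within 5 km of a road.
forest = landuse_10k[landuse_10k["class"] == "forest"]
near_roads = gpd.GeoDataFrame(geometry=roads_10k.buffer(5000),
                              crs=roads_10k.crs)
candidates_10k = gpd.overlay(forest, near_roads, how="intersection")
print(candidates_10k.area.sum())

# Repeating the same overlay with the 1:250,000 layers yields different
# geometries and areas; that difference is the generalization effect.
```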

Keywords: generalization, GIS, scales, spatial analysis

Procedia PDF Downloads 328
19256 Data-Driven Infrastructure Planning for Offshore Wind Farms

Authors: Isha Saxena, Behzad Kazemtabrizi, Matthias C. M. Troffaes, Christopher Crabtree

Abstract:

The calculations done at the beginning of a wind farm's life are rarely reliable, which makes it important to study the failure and repair rates of wind turbines under various conditions. The miscalculation happens because current models make the simplifying assumption that the failure/repair rate remains constant over time, which means that the reliability function is exponential in nature. This research aims to create a more accurate model using sensor data and a data-driven approach. Data cleaning and processing are done by comparing the power curve data of the wind turbines with SCADA data, which are then converted to time-to-repair and time-to-failure time series. Several different mathematical functions are fitted to the time-to-failure and time-to-repair data of the wind turbine components using maximum likelihood estimation and the posterior expectation method for Bayesian parameter estimation. Initial results indicate that the two-parameter Weibull function and the exponential function produce almost identical results. Further analysis is being done using complex-system analysis, considering the failures of each electrical and mechanical component of the wind turbine. The aim of this project is to perform a more accurate reliability analysis that can help engineers schedule maintenance and repairs to decrease turbine downtime.
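
The distribution-fitting step can be sketched with scipy on synthetic times to failure, as below; the sample, the fixed zero location, and the log-likelihood comparison are illustrative assumptions, not the project's pipeline.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
ttf = rng.weibull(1.4, size=300) * 1200.0   # hypothetical hours to failure

# Two-parameter Weibull: fix the location at 0 so only shape/scale are fit.
shape, loc, scale = stats.weibull_min.fit(ttf, floc=0)
ll_weibull = stats.weibull_min.logpdf(ttf, shape, loc, scale).sum()

# Exponential (constant-rate) alternative.
loc_e, scale_e = stats.expon.fit(ttf, floc=0)
ll_expon = stats.expon.logpdf(ttf, loc_e, scale_e).sum()

print(f"Weibull shape={shape:.2f}, logL={ll_weibull:.1f}")
print(f"Exponential logL={ll_expon:.1f}")
```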

Keywords: reliability, bayesian parameter inference, maximum likelihood estimation, weibull function, SCADA data

Procedia PDF Downloads 86
19255 The Spherical Geometric Model of Absorbed Particles: Application to the Electron Transport Study

Authors: A. Bentabet, A. Aydin, N. Fenineche

Abstract:

The mean penetration depth is of major importance in absorption transport phenomena. An analytical model of light-ion backscattering coefficients from solid targets was developed by Vicanek and Urbassek. In the present work, we present a deterministic mathematical expression for Z1/2. Moreover, to the best of our knowledge, only one analytical model exists for the electron or positron mean penetration depth in solid targets. In this work, we present a simple spherical geometric model of absorbed particles based on the continuous slowing down approximation (CSDA) scheme. In addition, we derive an analytical expression for the mean penetration depth by combining our model with the Vicanek and Urbassek theory. For this, we used the relativistic partial wave expansion method (RPWEM) and the optical dielectric model to calculate the elastic cross sections and the ranges, respectively. Good agreement was found with the experimental and theoretical data.

Keywords: Bentabet spherical geometric model, continuous slowing down approximation, stopping powers, ranges, mean penetration depth

Procedia PDF Downloads 641
19254 A Super-Efficiency Model for Evaluating Efficiency in the Presence of Time Lag Effect

Authors: Yanshuang Zhang, Byungho Jeong

Abstract:

In many cases, there is a time lag between the consumption of inputs and the production of outputs, and this time lag effect should be considered in evaluating the performance of organizations. Recently, a couple of DEA models were developed to consider the time lag effect in the efficiency evaluation of research activities. The multi-period input (MpI) and multi-period output (MpO) models are integrated models that calculate simple efficiency while considering the time lag effect. However, these models cannot discriminate among efficient DMUs because of the nature of the basic DEA model, in which efficiency scores are capped at 1; efficient DMUs cannot be distinguished because their efficiency scores are all the same. Thus, this paper suggests a super-efficiency model, based on the MpO model, for efficiency evaluation under consideration of the time lag effect. A case example using a long-term research project is given to compare the suggested model with the MpO model.
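
For readers unfamiliar with super-efficiency scoring, the sketch below computes a classic input-oriented, constant-returns super-efficiency (Andersen-Petersen) score with scipy by excluding the evaluated DMU from its own reference set; this is a simplified stand-in for the paper's MpO-based model, on made-up data.

```python
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0, 4.0, 5.0],   # inputs: rows = input types, cols = DMUs
              [1.0, 2.0, 1.5, 3.0]])
Y = np.array([[3.0, 5.0, 4.0, 6.0]])  # outputs

def super_efficiency(k):
    n = X.shape[1]
    others = [j for j in range(n) if j != k]  # evaluated DMU excluded
    c = np.r_[1.0, np.zeros(n - 1)]           # variables: theta, lambdas
    # Inputs: sum_j lam_j * x_ij - theta * x_ik <= 0
    A_in = np.c_[-X[:, k], X[:, others]]
    # Outputs: -sum_j lam_j * y_rj <= -y_rk
    A_out = np.c_[np.zeros((Y.shape[0], 1)), -Y[:, others]]
    res = linprog(c, A_ub=np.r_[A_in, A_out],
                  b_ub=np.r_[np.zeros(X.shape[0]), -Y[:, k]],
                  bounds=[(0, None)] * n)
    return res.fun

print([round(super_efficiency(k), 3) for k in range(4)])
```

Because the evaluated DMU is excluded from its own reference set, efficient DMUs can obtain scores above 1 and can therefore be ranked.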

Keywords: DEA, super-efficiency, time lag, multi-periods input

Procedia PDF Downloads 473
19253 Taylor’s Law and Relationship between Life Expectancy at Birth and Variance in Age at Death in Period Life Table

Authors: David A. Swanson, Lucky M. Tedrow

Abstract:

Taylor’s Law is a widely observed empirical pattern that relates variances to means in sets of non-negative measurements via an approximate power function, and it has found application to human mortality. This study adds to this research by showing that Taylor’s Law leads to a model that reasonably describes the relationship between life expectancy at birth (e0, which is also equal to the mean age at death in a life table) and the variance in age at death in seven World Bank regional life tables measured at two points in time, 1970 and 2000. Using as a benchmark a non-random sample of four Japanese female life tables covering the period from 1950 to 2004, the study finds that the simple linear model provides reasonably accurate estimates of the variance in age at death in a life table from e0, where the latter ranges from 60.9 to 85.59 years. Employing 2017 life tables from the Human Mortality Database, the simple linear model is used to provide estimates of the variance in age at death for six countries, three with high e0 values and three with lower e0 values. The paper provides a substantive interpretation of Taylor’s Law relative to e0 and concludes by arguing that reasonably accurate estimates of the variance in age at death in a period life table can be calculated using this approach. The approach can also be used where e0 itself is estimated rather than generated through the construction of a life table, a useful feature of the model.
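
A Taylor's-Law-style fit of this kind reduces to a regression on the log-log scale, as in the sketch below; the numbers are synthetic placeholders, not the World Bank or HMD values.

```python
import numpy as np

# Synthetic (e0, variance-in-age-at-death) pairs from several life tables.
e0 = np.array([60.9, 65.2, 70.4, 75.1, 80.3, 85.6])
var_death = np.array([410.0, 350.0, 290.0, 230.0, 180.0, 140.0])

# Taylor's Law: variance ~ a * mean^b, i.e. linear on the log-log scale.
b, log_a = np.polyfit(np.log(e0), np.log(var_death), 1)
predicted = np.exp(log_a) * e0**b
print(f"exponent b = {b:.2f}, "
      f"max abs error = {np.max(np.abs(predicted - var_death)):.1f}")
```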

Keywords: empirical pattern, mean age at death in a life table, mean age of a stationary population, stationary population

Procedia PDF Downloads 330
19252 Joint Simulation and Estimation for Geometallurgical Modeling of Crushing Consumption Energy in the Mineral Processing Plants

Authors: Farzaneh Khorram, Xavier Emery

Abstract:

This paper aims to create a crushing consumption energy (CCE) block model and to determine the blocks with the highest potential grinding energy consumption in the study area. For this purpose, joint estimation (co-kriging) and joint simulation (the turning bands and plurigaussian methods) are used to predict the CCE based on its correlation with the SAG power index (SPI), A×B, and the ball mill Bond work index (BWI). The analysis shows that turning bands co-simulation (TBCOSIM) and the plurigaussian method give more realistic results than co-kriging. This seems logical given the nature of geometallurgical data, the linearity of the kriging method, and the smoothing effect of kriging.

Keywords: plurigaussian, turning band, cokriging, geometallurgy

Procedia PDF Downloads 70
19251 Analysis and Prediction of COVID-19 by Using Recurrent LSTM Neural Network Model in Machine Learning

Authors: Grienggrai Rajchakit

Abstract:

Coronavirus was declared a pandemic worldwide by the WHO and spread all over the world within a few days. To control this spread, maintaining social distance and taking self-preventive measures are the best strategies for every citizen. Many researchers and scientists are continuing their research to find an effective vaccine. Machine learning models find that the coronavirus disease spreads in an exponential manner. To mitigate the consequences of this pandemic, efficient steps should be taken to analyze the disease. In this paper, a recurrent neural network model is chosen to predict the number of active cases in a particular state. To make this prediction, a COVID-19 database downloaded from the Kaggle website is analyzed by applying a recurrent LSTM neural network with univariate features to predict the number of active cases of patients suffering from the coronavirus. The downloaded database is divided into training and testing sets; the chosen neural network model is trained with the training dataset and tested with the testing dataset to predict the number of active cases in a particular state. Here, we have concentrated on the state of Andhra Pradesh.
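
A univariate LSTM forecaster of the kind described can be sketched as follows with Keras, using a synthetic series in place of the Kaggle active-case counts; the window length, network size, and scaling are illustrative assumptions.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Synthetic cumulative-count series standing in for daily active cases.
series = np.cumsum(np.random.default_rng(4).poisson(50, 300)).astype("float32")
series /= series.max()                      # simple scaling to [0, 1]

window = 14                                 # days used to predict the next day
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., None]                            # (samples, timesteps, 1 feature)

split = int(0.8 * len(X))                   # train/test division
model = Sequential([LSTM(32, input_shape=(window, 1)), Dense(1)])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:split], y[:split], epochs=20, verbose=0)
print(model.evaluate(X[split:], y[split:], verbose=0))   # test MSE
```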

Keywords: COVID-19, coronavirus, KAGGLE, LSTM neural network, machine learning

Procedia PDF Downloads 160
19250 Transition Pay vs. Liquidity Holdings: A Comparative Analysis on Consumption Smoothing using Bank Transaction Data

Authors: Nora Neuteboom

Abstract:

This study investigates household financial behavior during unemployment spells in the Netherlands, using high-frequency transaction data in an event study specification integrated with propensity score matching. In our specification, we contrast treated individuals, who underwent job loss, with non-treated individuals possessing comparable financial characteristics. The initial onset of unemployment triggers a substantial surge in income, primarily attributable to transition payments, but income swiftly drops post-unemployment, with unemployment benefits covering slightly over half of former salary earnings. Despite a re-employment rate of around one half within six months, the treatment group experiences a persistent average earnings reduction of approximately 600 EUR per month. Spending patterns fluctuate significantly, surging before unemployment due to transition payments and declining below the level of non-treated individuals post-unemployment, indicating challenges in fully smoothing consumption after job loss. Furthermore, our study disentangles the effects of transition payments and liquidity holdings on spending, revealing that transition payments exert a more pronounced and prolonged impact on consumption smoothing than liquidity holdings. Transition payments significantly stimulate spending, particularly in the PIN and iDEAL payment categories, in contrast to the much smaller relative spending impact of liquidity holdings.

Keywords: household consumption, transaction data, big data, propensity score matching

Procedia PDF Downloads 18
19249 Marginalized Two-Part Joint Models for Generalized Gamma Family of Distributions

Authors: Mohadeseh Shojaei Shahrokhabadi, Ding-Geng (Din) Chen

Abstract:

Positive continuous outcomes with a substantial number of zero values and incomplete longitudinal follow-up are quite common in medical cost data. To jointly model semi-continuous longitudinal cost data and survival data and to provide marginalized covariate effect estimates, a marginalized two-part joint model (MTJM) has been developed for outcome variables with lognormal distributions. In this paper, we propose MTJM models for outcome variables from the generalized gamma (GG) family of distributions. The GG distribution constitutes a general family that includes nearly all of the most frequently used distributions, such as the gamma, exponential, Weibull, and log-normal. In the proposed MTJM-GG model, the conditional mean from a conventional two-part model with a three-parameter GG distribution is parameterized to provide a marginal interpretation for the regression coefficients. In addition, MTJM-gamma and MTJM-Weibull are developed as special cases of MTJM-GG. To illustrate the applicability of the MTJM-GG, we applied the model to a set of real electronic health record data recently collected in Iran, and we provide SAS code for the application. The simulation results showed that when the outcome distribution is unknown or misspecified, which is usually the case with real data sets, the MTJM-GG consistently outperforms the other models. The GG family of distributions facilitates estimating a model with improved fit over the MTJM-gamma, standard Weibull, or log-normal alternatives.

Keywords: marginalized two-part model, zero-inflated, right-skewed, semi-continuous, generalized gamma

Procedia PDF Downloads 176
19248 Source Identification Model Based on Label Propagation and Graph Ordinary Differential Equations

Authors: Fuyuan Ma, Yuhan Wang, Junhe Zhang, Ying Wang

Abstract:

Identifying the sources of information dissemination is a pivotal task in the study of collective behaviors in networks, enabling us to discern and intercept the critical pathways through which information propagates from its origins. This allows the dissemination impact of the information to be controlled in its early stages. Numerous methods for source detection rely on pre-existing, underlying propagation models as prior knowledge. Current models that eschew prior knowledge attempt to harness label propagation algorithms to model the statistical characteristics of propagation states or employ graph neural networks (GNNs) for deep reverse modeling of the diffusion process. These approaches are either deficient in modeling the propagation patterns of information or are constrained by the over-smoothing problem inherent in GNNs, which limits the stacking of sufficient model depth to excavate global propagation patterns. Consequently, we introduce the ODESI model. Initially, the model employs a label propagation algorithm to delineate the distribution density of infected states within a graph structure and extends the representation of infected states from integers to state vectors, which serve as the initial states of nodes. Subsequently, the model constructs a deep architecture based on GNN-coupled ordinary differential equations (ODEs) to model the global propagation patterns of continuous propagation processes. Addressing the challenges associated with solving ODEs on graphs, we approximate the analytical solutions to reduce computational costs. Finally, we conduct simulation experiments on two real-world social network datasets, and the results affirm the efficacy of our proposed ODESI model in source identification tasks.

Keywords: source identification, ordinary differential equations, label propagation, complex networks

Procedia PDF Downloads 20
19247 Technical Evaluation of Upgrading a Simple Gas Turbine Fired by Diesel to a Combined Cycle Power Plant in the Kingdom of Saudi Arabia Using WinSim Design II Software

Authors: Salman Obaidoon, Mohamed Hassan, Omer Bakather

Abstract:

As environmental regulations tighten, the need for clean and inexpensive energy, generated from available raw materials with high efficiency and low emissions of toxic gases, is becoming pressing. This paper presents a study on modifying a diesel-fired gas turbine power plant located in Saudi Arabia in order to increase the efficiency and capacity of the station as well as decrease its rate of emissions. The studied power plant consists of 30 units with different capacities and a total net power of 1470 MW. The study was conducted on unit number 25 (GT-25), which produces 72.3 MW at 29.5% efficiency. First, the unit was modeled and simulated using WinSim Design II software, with actual unit data used to test the validity of the model. The net power and efficiency obtained from the software were 76.4 MW and 32.2%, respectively. A difference of about 6% was found between the simulated plant and the actual station, which indicates that the model is valid. After validation, the simple gas turbine power plant model was converted to a combined cycle power plant (CCPP). In this case, the exhaust gas released from the gas turbine was introduced into a heat recovery steam generator (HRSG) consisting of three heat exchangers: an economizer, an evaporator, and a superheater. Many scenarios were run with this proposed model in order to find the optimal operating conditions. The net power of the CCPP increased to 116.4 MW, while the overall efficiency of the unit reached 49.02%, with the same fuel consumption as the gas turbine power plant. The carbon dioxide emission rates of the two models were also compared: the CO₂ emission rate decreased from 15.94 kg/s to 9.22 kg/s in the combined cycle model, as a result of reducing the amount of diesel needed to produce 76.5 MW from 5.08 kg/s to 2.94 kg/s. The results indicate that the rate of carbon dioxide emissions decreased by 42.133% in the CCPP compared to the simple gas turbine power plant.
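
As a rough consistency check (not from the paper), the standard ideal combined cycle relation can reproduce the reported overall efficiency; the steam-cycle efficiency below is an assumed value chosen for illustration.

```python
# Ideal combined cycle: eta_cc = eta_gt + eta_st * (1 - eta_gt).
eta_gt = 0.322   # simulated gas-turbine efficiency from the study
eta_st = 0.248   # assumed steam-cycle (HRSG + steam turbine) efficiency
eta_cc = eta_gt + eta_st * (1 - eta_gt)
print(f"combined cycle efficiency ~ {eta_cc:.3f}")  # ~0.490, near the reported 49.02%
```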

Keywords: combined cycle power plant, efficiency, heat recovery steam generator, simulation, validation, WinSim design II software

Procedia PDF Downloads 275
19246 Target and Biomarker Identification Platform to Design New Drugs against Aging and Age-Related Diseases

Authors: Peter Fedichev

Abstract:

We studied fundamental aspects of aging to develop a mathematical model of the gene regulatory network. We show that aging manifests itself as an inherent instability of the gene network, leading to an exponential accumulation of regulatory errors with age. To validate our approach, we studied age-dependent omics data (transcriptomes, metabolomes, etc.) from different model organisms and humans. We built a computational platform based on our model to identify targets and biomarkers of aging for the design of new drugs against aging and age-related diseases. As biomarkers of aging, we chose the rate of aging and the biological age, since together they completely determine the state of the organism. Because the rate of aging changes rapidly in response to external stress, this kind of biomarker can be useful as a tool for quantitative efficacy assessment of drugs and their combinations, dose optimization, chronic toxicity estimation, personalized therapy selection, clinical endpoint assessment (within clinical research), and death risk assessment. Based on our model, we propose a method of target identification for further interventions against aging and age-related diseases. As a biotech company, we offer a complete pipeline for developing an anti-aging drug candidate.

Keywords: aging, longevity, biomarkers, senescence

Procedia PDF Downloads 274
19245 A Small Signal Model for Resonant Tunneling Diode

Authors: Rania M. Abdallah, Ahmed A. S. Dessouki, Moustafa H. Aly

Abstract:

This paper presents a new, simple small-signal model for a resonant tunnelling diode. The equivalent circuit elements of the resonant tunnelling diode were calculated, and good agreement was found between the calculated equivalent circuit elements and the measurement results.
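
For context, a textbook RTD small-signal equivalent circuit (a series resistance and inductance feeding a negative differential conductance in parallel with the junction capacitance) can be evaluated as below; the element values are illustrative assumptions, not the paper's extracted values.

```python
import numpy as np

Rs, Ls = 5.0, 30e-12   # series resistance [ohm] and inductance [H]
G, C = -4e-3, 2e-15    # negative differential conductance [S], capacitance [F]

f = np.logspace(9, 12, 500)                      # sweep 1 GHz to 1 THz
w = 2 * np.pi * f
Z = Rs + 1j * w * Ls + 1.0 / (G + 1j * w * C)    # total small-signal impedance

# Oscillation/gain is possible only where the total real part stays negative.
neg = f[Z.real < 0]
print(f"negative-resistance band up to ~{neg[-1]:.2e} Hz" if neg.size
      else "no negative-resistance band")
```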

Keywords: resonant tunnelling diode, small signal model, negative differential conductance, electronic engineering

Procedia PDF Downloads 443
19244 Estimation of Location and Scale Parameters of Extended Exponential Distribution Based on Record Statistics

Authors: E. Krishna

Abstract:

An extended form of the exponential distribution, obtained using the Marshall and Olkin method, is introduced. The location-scale family of these distributions is considered. For the location-scale-free family, exact expressions for the single and product moments of upper record statistics are derived. The mean, variance, and covariance of the record values are computed for various values of the shape parameter. Using these, the best linear unbiased estimators (BLUEs) of the location and scale parameters are derived, and the variances and covariances of the estimates are obtained. Confidence intervals for the location and scale parameters are constructed through Monte Carlo simulation. The best linear unbiased predictor (BLUP) of future records is also discussed.
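
The Monte Carlo machinery behind such confidence intervals can be pictured with the short sketch below, which simulates upper record values from a standard exponential sample; the sample sizes and the use of the plain exponential (rather than the Marshall-Olkin extended family) are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def upper_records(sample):
    """Return the successive upper record values in a sample."""
    records, current = [], -np.inf
    for x in sample:
        if x > current:
            records.append(x)
            current = x
    return records

# Distribution of the 3rd upper record value across replications.
third_records = []
for _ in range(10_000):
    recs = upper_records(rng.exponential(size=200))
    if len(recs) >= 3:
        third_records.append(recs[2])
print(np.percentile(third_records, [2.5, 97.5]))  # simple 95% interval
```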

Keywords: BLUE, BLUP, confidence interval, Marshall-Olkin distribution, Monte Carlo simulation, prediction of future records, record statistics

Procedia PDF Downloads 417
19243 Estimation of Rare and Clustered Population Mean Using Two Auxiliary Variables in Adaptive Cluster Sampling

Authors: Muhammad Nouman Qureshi, Muhammad Hanif

Abstract:

Adaptive cluster sampling (ACS) was developed specifically for the estimation of highly clumped populations and has been applied to a wide range of situations, such as rare and endangered animal species, unevenly distributed minerals, and HIV patients and drug users. In this paper, we propose a generalized semi-exponential estimator with two auxiliary variables under the framework of the ACS design. Expressions for the approximate bias and mean square error (MSE) of the proposed estimator are derived. Theoretical comparisons of the proposed estimator are made with existing estimators. A numerical study is conducted on real and artificial populations to demonstrate and compare the efficiencies of the proposed estimator. The results indicate that the proposed generalized semi-exponential estimator performs considerably better than all the adaptive and non-adaptive estimators considered in this paper.

Keywords: auxiliary information, adaptive cluster sampling, clustered populations, Hansen-Hurwitz estimation

Procedia PDF Downloads 238
19242 Effect of the Distance Between the Cold Surface and the Hot Surface on the Production of a Simple Solar Still

Authors: Hiba Akrout, Khaoula Hidouri, Béchir Chaouachi, Romdhane Ben Slama

Abstract:

A simple solar distiller was constructed in order to desalinate water via the solar distillation process, and an experimental study was conducted in June. The aim of this work is to study the effect of the distance between the cold condensing surface and the hot steam-generating surface in order to optimize the geometric characteristics of a simple solar still. To do this, we developed a mathematical model based on a system of thermal and mass balance equations. The system of equations was then solved with a program developed in MATLAB, which allowed us to evaluate the production of the system as a function of the distance separating the two surfaces. In addition, the model allowed us to determine the evolution of the humid air temperature inside the solar still as well as the humidity ratio profile over the day. Simulation results show that the solar distiller production, as well as the humid air temperature, is proportional to the global solar radiation. It was also found that the air humidity ratio inside the solar still follows an evolution similar to that of the solar radiation. Moreover, increasing the average height of the solar distiller at constant water depth reduces production; likewise, increasing the water depth at a fixed average height reduces production.

Keywords: distillation, solar energy, heat transfer, mass transfer, average height

Procedia PDF Downloads 143
19241 A Blind Three-Dimensional Meshes Watermarking Using the Interquartile Range

Authors: Emad E. Abdallah, Alaa E. Abdallah, Bajes Y. Alskarnah

Abstract:

We introduce a robust three-dimensional watermarking algorithm for copyright protection and indexing. The basic idea behind our technique is to measure the interquartile range, or spread, of the 3D model's vertices. The algorithm starts by converting all vertices to spherical coordinates and partitioning them into small groups. The proposed algorithm then slightly alters the interquartile range distribution of the small groups according to a predefined watermark. Experimental results on several 3D meshes demonstrate the perceptual invisibility and robustness of the proposed technique against the most common attacks, including compression, noise, smoothing, scaling, and rotation, as well as combinations of these attacks.
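
The first two stages (spherical conversion and per-group spread measurement) can be sketched as follows on random stand-in vertices; the azimuth-bin grouping rule and all constants are assumptions, and the actual embedding rule is only hinted at in a comment.

```python
import numpy as np

rng = np.random.default_rng(6)
vertices = rng.normal(size=(3000, 3))            # stand-in mesh vertices

center = vertices.mean(axis=0)
x, y, z = (vertices - center).T
r = np.sqrt(x**2 + y**2 + z**2)                  # radial component
theta = np.arccos(z / r)                         # polar angle
phi = np.arctan2(y, x)                           # azimuth

# Partition vertices into small groups by azimuth bin.
bins = np.linspace(-np.pi, np.pi, 33)
group = np.digitize(phi, bins)
iqr = [np.subtract(*np.percentile(r[group == g], [75, 25]))
       for g in np.unique(group)]
# Watermark embedding would nudge each group's radial IQR up or down
# depending on the corresponding watermark bit.
print(np.round(iqr[:5], 3))
```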

Keywords: watermarking, three-dimensional models, perceptual invisibility, interquartile range, 3D attacks

Procedia PDF Downloads 474
19240 Kou Jump Diffusion Model: An Application to the S&P 500, Nasdaq 100 and Russell 2000 Index Options

Authors: Wajih Abbassi, Zouhaier Ben Khelifa

Abstract:

The present research aims at the empirical validation of three option valuation models: the ad hoc Black-Scholes model as proposed by Berkowitz (2001), the constant elasticity of variance model of Cox and Ross (1976), and the Kou jump-diffusion model (2002). Our empirical analysis was conducted on a sample of 26,974 options written on three indexes (the S&P 500, the Nasdaq 100, and the Russell 2000) that were traded during 2007, just before the sub-prime crisis. We start by presenting the theoretical foundations of the models of interest. Then we use the trust-region-reflective algorithm to estimate the structural parameters of these models from cross-sections of option prices. The empirical analysis shows the superiority of the Kou jump-diffusion model. This superiority arises from the ability of this model to portray the behavior of market participants and to be closest to the true distribution that characterizes the evolution of these indexes. Indeed, the double-exponential jump distribution offers three interesting properties: the leptokurtic feature, the memoryless property, and consistency with the psychology of market participants. Numerous empirical studies have shown that markets tend to exhibit both overreaction and underreaction to good and bad news, respectively. Despite these advantages, there are not many empirical studies based on this model, partly because its probability distribution and option valuation formula are rather complicated. This paper is the first to use nonlinear curve fitting via the trust-region-reflective algorithm on option cross-sections to estimate the structural parameters of the Kou jump-diffusion model.
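
For intuition about the dynamics being calibrated, the sketch below simulates Kou (2002) paths, in which log-jumps follow an asymmetric double-exponential distribution; all parameter values are arbitrary illustrations, not the estimates from this paper.

```python
import numpy as np

rng = np.random.default_rng(7)
S0, mu, sigma = 100.0, 0.05, 0.2              # spot, drift, diffusion vol
lam, p_up, eta1, eta2 = 1.0, 0.4, 10.0, 5.0   # jump intensity, up-prob, tail rates
T, n_steps = 1.0, 252
dt = T / n_steps

S = np.full(10_000, S0)
for _ in range(n_steps):
    diff = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.normal(size=S.size)
    n_jumps = rng.poisson(lam * dt, size=S.size)   # almost always 0 or 1 per step
    up = rng.random(S.size) < p_up                 # jump direction
    # One double-exponential draw per step (fine for small lam*dt; several
    # jumps in one step would strictly require a sum of independent draws).
    jump = np.where(up, rng.exponential(1 / eta1, S.size),
                    -rng.exponential(1 / eta2, S.size)) * n_jumps
    S *= np.exp(diff + jump)

print(S.mean(), S.std())   # terminal-price moments under these parameters
```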

Keywords: jump-diffusion process, Kou model, Leptokurtic feature, trust-region-reflective algorithm, US index options

Procedia PDF Downloads 429
19239 Video Foreground Detection Based on Adaptive Mixture Gaussian Model for Video Surveillance Systems

Authors: M. A. Alavianmehr, A. Tashk, A. Sodagaran

Abstract:

Modeling the background and moving objects is a significant technique for video surveillance and other video processing applications. This paper presents a foreground detection algorithm, based on an adaptive Gaussian mixture model (GMM), that is robust against illumination changes and noise, and provides a novel and practical choice for intelligent video surveillance systems using static cameras. In previous methods, the image of still objects (the background image) is not significant; on the contrary, this method is based on forming a meticulous background image and exploiting it to separate moving objects from their background. The background image is specified either manually, by taking an image without vehicles, or detected in real time by forming an arithmetic or exponential average of successive images. The proposed scheme offers low image degradation, and the simulation results demonstrate a high degree of performance for the proposed method.
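
OpenCV ships a stock adaptive-GMM background subtractor (MOG2) that can stand in for the approach described; the video path and parameter values below are placeholders.

```python
import cv2

cap = cv2.VideoCapture("traffic.mp4")            # hypothetical input video
subtractor = cv2.createBackgroundSubtractorMOG2(history=500,
                                                varThreshold=16,
                                                detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)               # per-pixel adaptive GMM update
    foreground = cv2.bitwise_and(frame, frame, mask=mask)
    cv2.imshow("foreground", foreground)
    if cv2.waitKey(30) == 27:                    # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```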

Keywords: image processing, background models, video surveillance, foreground detection, Gaussian mixture model

Procedia PDF Downloads 516
19238 Stabilizing Effect of Magnetic Field in a Thermally Modulated Porous Layer

Authors: M. Meenasaranya, S. Saravanan

Abstract:

Nonlinear stability analysis is carried out to determine the effect of surface temperature modulation on an infinite horizontal porous layer heated from below. The layer is saturated by an electrically conducting, viscous, incompressible, Newtonian fluid. The Brinkman model is used for the momentum equation, and the Boussinesq approximation is invoked. The system is assumed to be bounded by rigid boundaries. The energy method is implemented to find the global exponential stability region of the considered system. The results are analysed for arbitrary values of the modulation frequency and amplitude. The existence of a subcritical instability region is confirmed by comparing the obtained result with the known linear result. The vertical magnetic field is found to stabilize the system.

Keywords: Brinkman model, energy method, magnetic field, surface temperature modulation

Procedia PDF Downloads 395
19237 Non-Linear Regression Modeling for Composite Distributions

Authors: Mostafa Aminzadeh, Min Deng

Abstract:

Modeling loss data is an important part of actuarial science. Actuaries use models to predict future losses and manage financial risk, which can be beneficial for marketing purposes. In the insurance industry, small claims happen frequently while large claims are rare. Traditional distributions such as the normal, exponential, and inverse Gaussian are not suitable for describing insurance data, which often show skewness and fat tails. Several authors have studied classical and Bayesian inference for the parameters of composite distributions such as the exponential-Pareto, Weibull-Pareto, and inverse gamma-Pareto. These models separate small to moderate losses from large losses using a threshold parameter. This research introduces a computational approach using a nonlinear regression model for loss data that relies on multiple predictors. Simulation studies were conducted to assess the accuracy of the proposed estimation method, and they confirmed that the method provides precise estimates of the regression parameters. It is important to note that this approach can be applied to a dataset only if goodness-of-fit tests confirm that the composite distribution under study fits the data well. To demonstrate the computations, a real data set from the insurance industry is analyzed. A Mathematica code uses Fisher scoring as the iteration method to obtain the maximum likelihood estimates (MLEs) of the regression parameters.
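
Fisher scoring itself is easy to illustrate outside Mathematica; the sketch below applies it to a simple exponential regression with a log link (for which the expected information reduces to X^T X), using synthetic data rather than the paper's composite-distribution model.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 0.5])
y = rng.exponential(np.exp(X @ beta_true))   # mean mu_i = exp(x_i . beta)

beta = np.zeros(2)
for _ in range(25):                      # Fisher scoring iterations
    mu = np.exp(X @ beta)
    score = X.T @ (y / mu - 1.0)         # gradient of the log-likelihood
    info = X.T @ X                       # expected Fisher information
    step = np.linalg.solve(info, score)  # scoring update direction
    beta += step
    if np.max(np.abs(step)) < 1e-10:
        break
print(beta)                              # should approach beta_true
```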

Keywords: maximum likelihood estimation, fisher scoring method, non-linear regression models, composite distributions

Procedia PDF Downloads 33
19236 Simulating the Dynamics of E-Waste Production from Mobile Phones: Model Development and Case Study of Rwanda

Authors: Rutebuka Evariste, Zhang Lixiao

Abstract:

Mobile phone sales and stocks have shown exponential growth globally in recent years, with the number of mobile phones produced each year surpassing one billion in 2007. This soaring growth means the related e-waste deserves sufficient attention regionally and globally, since 40% of its total weight is metallic, with 12 elements identified as highly hazardous and 12 as less harmful. Different studies and methods have been used to estimate the number of obsolete mobile phones, but none has developed a dynamic model or handled the discrepancies resulting from improper approaches and errors in the input data. The aim of this study was to develop a comprehensive dynamic system model for simulating e-waste production from mobile phones, regardless of country or region, and to overcome the previous errors. The logistic model method combined with the STELLA program was used to carry out this study. A simulation for Rwanda was then conducted and compared with other countries' results for model testing and validation. Rwanda had about 1.5 million obsolete mobile phones, amounting to 125 tons of waste, in 2014, with e-waste production peaking in 2017. By 2020, 4.17 million obsolete phones with 351.97 tons of waste are expected, along with an environmental impact intensity 21 times that of 2005. Thus, it is concluded through model testing and validation that the present dynamic model is competent and able to deal with mobile phone e-waste production, given that it has answered questions raised by previous studies from the Czech Republic, Iran, and China.
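
The logistic backbone of such a stock model can be sketched with a scipy fit on made-up yearly phone-stock data (the figures below are not the Rwandan series); obsolescence then follows the stock after an assumed service lifetime.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic stock curve: carrying capacity K, growth rate r, midpoint t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

years = np.arange(2000, 2015)
stock = np.array([0.02, 0.04, 0.07, 0.12, 0.2, 0.33, 0.52, 0.78,
                  1.05, 1.3, 1.5, 1.65, 1.75, 1.82, 1.86])  # millions, synthetic

(K, r, t0), _ = curve_fit(logistic, years, stock, p0=[2.0, 0.5, 2008])
print(f"carrying capacity ~ {K:.2f} M phones, midpoint ~ {t0:.1f}")

# Obsolete phones then lag the stock by an assumed service lifetime; e.g.
# with a 3-year lifetime, 2014 obsolescence would track the 2011 stock.
```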

Keywords: carrying capacity, dematerialization, logistic model, mobile phone, obsolescence, similarity, Stella, system dynamics

Procedia PDF Downloads 344
19235 A Comprehensive Finite Element Model for Incremental Launching of Bridges: Optimizing Construction and Design

Authors: Mohammad Bagher Anvari, Arman Shojaei

Abstract:

Incremental launching, a widely adopted bridge erection technique, offers numerous advantages for bridge designers. However, accurately simulating and modeling the dynamic behavior of the bridge during each step of the launching process proves tedious and time-consuming. The perpetual variation of internal forces within the deck during construction stages adds complexity, exacerbated further by other load cases such as support settlements and temperature effects. As a result, there is an urgent need for a reliable, simple, economical, and fast algorithmic solution for modeling bridge construction stages effectively. This paper presents a novel finite element (FE) model focused on the static behavior of bridges during the launching process, together with a simple method for normalizing all quantities in the problem. The new FE model overcomes the limitations of previous models, enabling the simulation of all stages of launching, which conventional models fail to achieve due to their underlying assumptions. By leveraging the results obtained from the new FE model, this study proposes solutions to improve the accuracy of conventional models, particularly for the initial stages of bridge construction, which have been neglected in previous research. The research highlights the critical role played by the first span of the bridge during the initial stages, a factor often overlooked in existing studies. To address this oversight, a new and simplified model termed the 'semi-infinite beam' model is developed. By utilizing this model alongside a simple optimization approach, optimal values for the launching nose specifications are derived. The practical applications of this study extend to optimizing the nose-deck system of incrementally launched bridges, providing valuable insights for practical use. In conclusion, this paper introduces a comprehensive finite element model for studying the static behavior of bridges during incremental launching, addresses limitations found in previous approaches, and determines optimal launching nose specifications through the developed model and optimization approach, benefiting both the construction industry and bridge designers.

Keywords: incremental launching, bridge construction, finite element model, optimization

Procedia PDF Downloads 102