Search results for: estimated model

17783 Housing Price Dynamics: Comparative Study of 1980-1999 and the New Millennium

Authors: Janne Engblom, Elias Oikarinen

Abstract:

The understanding of housing price dynamics is of importance to a great number of agents: portfolio investors, banks, real estate brokers and construction companies, as well as policy makers and households. A panel dataset follows a given sample of individuals over time and thus provides multiple observations on each individual in the sample. Panel data models include a variety of fixed and random effects models, which form a wide range of linear models. A special case of panel data models is dynamic in nature. A complication of a dynamic panel data model that includes the lagged dependent variable is the endogeneity bias of the estimates. Several approaches have been developed to account for this problem. In this paper, the panel models were estimated using the Common Correlated Effects (CCE) estimator for dynamic panel data, which also accounts for cross-sectional dependence caused by common structures of the economy. In the presence of cross-sectional dependence, standard OLS gives biased estimates. In this study, U.S. housing price dynamics were examined empirically using the dynamic CCE estimator, with the first difference of housing price as the dependent variable and the first differences of per capita income, interest rate, housing stock and lagged price, together with the deviation of housing prices from their long-run equilibrium level, as independent variables. These deviations were also estimated from the data. The aim of the analysis was to compare estimates between 1980-1999 and 2000-2012. Based on data on 50 U.S. cities over 1980-2012, differences in the short-run housing price dynamics estimates were mostly significant when the two time periods were compared. Significance tests of the differences were provided by a model containing interaction terms between the independent variables and a time dummy variable. Residual analysis showed very low cross-sectional correlation of the model residuals compared with the standard OLS approach, indicating a good fit of the CCE estimator model. Estimates of the dynamic panel data model were in line with the theory of housing price dynamics. The results also suggest that the dynamics of housing markets evolve over time.
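
To make the CCE idea concrete, the sketch below augments each city-level regression with cross-sectional averages of the dependent and independent variables and averages the resulting unit-level OLS slopes (a CCE mean-group estimator). It is a minimal illustration of the estimator's core mechanics, not the authors' exact specification; the simulated panel and variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, K = 50, 30, 3          # cities, periods, regressors (hypothetical sizes)

# Simulated panel: y[i, t] and X[i, t, :] stand in for first-differenced data.
X = rng.normal(size=(N, T, K))
common = rng.normal(size=T)                      # unobserved common factor
y = X @ np.array([0.5, -0.2, 0.1]) + 0.8 * common + rng.normal(scale=0.3, size=(N, T))

# Cross-sectional averages proxy the common factors (the CCE idea).
y_bar = y.mean(axis=0)                           # shape (T,)
X_bar = X.mean(axis=0)                           # shape (T, K)

betas = []
for i in range(N):
    # Unit-level regression augmented with the cross-sectional averages.
    Z = np.column_stack([np.ones(T), X[i], y_bar, X_bar])
    coef, *_ = np.linalg.lstsq(Z, y[i], rcond=None)
    betas.append(coef[1:1 + K])                  # keep slopes on own regressors

beta_cce_mg = np.mean(betas, axis=0)             # CCE mean-group estimate
print("CCE mean-group slope estimates:", beta_cce_mg)
```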

Keywords: dynamic model, panel data, cross-sectional dependence, interaction model

Procedia PDF Downloads 230
17782 Estimation of the Temperatures in an Asynchronous Machine Using Extended Kalman Filter

Authors: Yi Huang, Clemens Guehmann

Abstract:

In order to monitor the thermal behavior of an asynchronous machine with a squirrel cage rotor, a 9th-order extended Kalman filter (EKF) algorithm is implemented to estimate the temperatures of the stator windings, the rotor cage and the stator core. The state-space equations of the EKF are established based on the electrical, mechanical and simplified thermal models of an asynchronous machine. The asynchronous machine with the simplified thermal model in Dymola is compiled as a DymolaBlock, a physical model in MATLAB/Simulink. The coolant air temperature, three-phase voltages and currents are exported from the physical model and processed by the EKF estimator as inputs. Compared to the temperatures exported from the physical model of the machine, the three temperatures can be estimated quite accurately by the EKF estimator. The online EKF estimator is independent of the machine control algorithm and can work under any speed and load condition provided the stator current system is nonzero.
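
The abstract's estimator is a 9th-order machine-specific EKF; the generic predict/update cycle it relies on is sketched below. The state-transition and measurement functions, their Jacobians and the noise covariances are placeholders to be supplied by the machine's electrical, mechanical and thermal models.

```python
import numpy as np

def ekf_step(x, P, u, z, f, h, F_jac, H_jac, Q, R):
    """One extended Kalman filter predict/update step.

    x, P  : state estimate and covariance
    u, z  : control input and measurement
    f, h  : nonlinear state-transition and measurement functions
    F_jac, H_jac : their Jacobians evaluated at the current estimate
    Q, R  : process and measurement noise covariances
    """
    # Predict
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q

    # Update
    H = H_jac(x_pred)
    y = z - h(x_pred)                         # innovation
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```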

Keywords: asynchronous machine, extended Kalman filter, resistance, simulation, temperature estimation, thermal model

Procedia PDF Downloads 260
17781 Monte Carlo Simulation of Thyroid Phantom Imaging Using Geant4-GATE

Authors: Parimalah Velo, Ahmad Zakaria

Abstract:

Introduction: Monte Carlo simulations of preclinical imaging systems open opportunities for new research, ranging from hardware design to the discovery of new imaging applications. A simulation system that accurately models an imaging modality provides a platform for imaging developments that might be impractical in physical experiments due to expense, unnecessary radiation exposure and technological difficulties. The aim of the present study is to validate the Monte Carlo simulation of thyroid phantom imaging using Geant4-GATE for the Siemens e-cam single-head gamma camera. Upon validation of the gamma camera simulation model by comparing physical characteristics such as energy resolution, spatial resolution, sensitivity, and dead time, the GATE simulation of thyroid phantom imaging is carried out. Methods: A thyroid phantom is defined geometrically, comprising two lobes of 80 mm diameter, one hot spot, and three cold spots. This geometry accurately resembles the actual dimensions of the thyroid phantom. A planar image of 500k counts with a 128x128 matrix size was acquired using the simulation model and the actual experimental setup. Upon image acquisition, quantitative image analysis was performed by investigating the total number of counts in the image, the contrast of the image, the radioactivity distribution in the image and the dimension of the hot spot. The algorithm for each quantification is described in detail. The difference between estimated and actual values for both the simulation and the experimental setup is analyzed for the radioactivity distribution and the dimension of the hot spot. Results: The results show that the difference between the contrast level of the simulation image and the experimental image is within 2%. The difference in the total count between the simulation and the actual study is 0.4%. The results of activity estimation show that the relative difference between estimated and actual activity for the experimental and simulation studies is 4.62% and 3.03%, respectively. The deviation in the estimated diameter of the hot spot is similar for the simulation and experimental studies, at 0.5 pixel. In conclusion, the comparisons show good agreement between the simulation and experimental data.

Keywords: gamma camera, Geant4 application for tomographic emission (GATE), Monte Carlo, thyroid imaging

Procedia PDF Downloads 249
17780 Estimation of Fuel Cost Function Characteristics Using Cuckoo Search

Authors: M. R. Al-Rashidi, K. M. El-Naggar, M. F. Al-Hajri

Abstract:

The fuel cost function describes the electric power generation-cost relationship in thermal plants; hence, it sheds light on economic aspects of the power industry. Different models have been proposed to describe this relationship, with the quadratic function model being the most popular one. Parameters of the second-order fuel cost function are estimated in this paper using the cuckoo search algorithm. It is a new population-based meta-heuristic optimization technique that has been used in this study primarily as an accurate estimation tool. Its main features are flexibility, simplicity, and effectiveness when compared to other estimation techniques. The parameter estimation problem is formulated as an optimization one with the goal of minimizing the error associated with the estimated parameters. A case study is considered in this paper to illustrate the promising potential of cuckoo search as a valuable estimation and optimization technique.
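
A minimal sketch of the estimation scheme described above: cuckoo search with Lévy flights minimizes the squared error between measured generation costs and a quadratic cost function C(P) = a + bP + cP^2. The synthetic data, parameter bounds and algorithm settings are illustrative assumptions, not the paper's case study.

```python
import numpy as np
from math import gamma, pi, sin

rng = np.random.default_rng(1)

# Synthetic "measured" generation-cost data for C(P) = a + b*P + c*P^2
# (the true parameters below are assumptions used only to create the demo data).
a_true, b_true, c_true = 100.0, 10.0, 0.02
P = np.linspace(50, 400, 40)                       # MW
C_meas = a_true + b_true * P + c_true * P**2 + rng.normal(scale=5.0, size=P.size)

def sse(theta):
    a, b, c = theta
    return np.sum((C_meas - (a + b * P + c * P**2))**2)

def levy_step(dim, beta=1.5):
    """Levy-flight step via Mantegna's algorithm."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2**((beta - 1) / 2)))**(1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v)**(1 / beta)

# Cuckoo search: each nest is a candidate parameter vector [a, b, c].
n_nests, pa, n_iter = 25, 0.25, 2000
lower, upper = np.array([0.0, 0.0, 0.0]), np.array([500.0, 50.0, 0.1])
nests = rng.uniform(lower, upper, size=(n_nests, 3))
fitness = np.array([sse(n) for n in nests])

for _ in range(n_iter):
    best = nests[np.argmin(fitness)]
    # Generate new solutions by Levy flights around the current best.
    for i in range(n_nests):
        cand = np.clip(nests[i] + 0.01 * levy_step(3) * (nests[i] - best), lower, upper)
        f = sse(cand)
        if f < fitness[i]:
            nests[i], fitness[i] = cand, f
    # Abandon a fraction pa of the worst nests and rebuild them randomly.
    n_abandon = int(pa * n_nests)
    worst = np.argsort(fitness)[-n_abandon:]
    nests[worst] = rng.uniform(lower, upper, size=(n_abandon, 3))
    fitness[worst] = [sse(n) for n in nests[worst]]

print("Estimated [a, b, c]:", nests[np.argmin(fitness)])
```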

Keywords: cuckoo search, parameters estimation, fuel cost function, economic dispatch

Procedia PDF Downloads 549
17779 A Reinforcement Learning Approach for Evaluation of Real-Time Disaster Relief Demand and Network Condition

Authors: Ali Nadi, Ali Edrissi

Abstract:

Relief demand and transportation link availability are the essential information needed for every natural disaster operation. This information is not at hand once a disaster strikes. In related works, relief demand and network condition have been evaluated based on prediction methods. Nevertheless, predictions tend to be over- or underestimated due to uncertainties and may lead to a failed operation. Therefore, in this paper a stochastic programming model is proposed to evaluate real-time relief demand and network condition at the onset of a natural disaster. To address the time sensitivity of the emergency response, the proposed model uses reinforcement learning for optimization of the total relief assessment time. The proposed model is tested on a real-size network problem. The simulation results indicate that the proposed model performs well in the case of collecting real-time information.

Keywords: disaster management, real-time demand, reinforcement learning, relief demand

Procedia PDF Downloads 277
17778 Estimation of Solar Radiation Power Using Reference Evaluation of Solar Transmittance, 2 Bands Model: Case Study of Semarang, Central Java, Indonesia

Authors: Benedictus Asriparusa

Abstract:

Solar radiation is a green renewable energy source which has the potential to answer the energy needs of the period. Knowing how to estimate solar radiation power may be one part of an integrated solution for sustainable energy development. Unfortunately, for a fairly extensive area of Indonesia the availability of solar radiation data is still very low. Therefore, we need a method to estimate the strength of solar radiation accurately. In this study, the author used the Reference Evaluation of Solar Transmittance, 2 Bands (REST2) model. Validation of the REST2 model has been performed in Spain, India, Colorado, Saudi Arabia, and several other areas, but it is not widely used in Indonesia. The Indonesian study area is represented by Semarang, Central Java. Solar radiation values estimated using the REST2 model were then verified against field data, giving an average RMSE of 6.53%. Based on this value, it can be concluded that the REST2 model can be used to estimate solar radiation under clear-sky conditions in parts of Indonesia.

Keywords: estimation, solar radiation power, REST 2, solar transmittance

Procedia PDF Downloads 397
17777 Comparison of Wake Oscillator Models to Predict Vortex-Induced Vibration of Tall Chimneys

Authors: Saba Rahman, Arvind K. Jain, S. D. Bharti, T. K. Datta

Abstract:

The present study compares the semi-empirical wake-oscillator models that are used to predict vortex-induced vibration of structures. These models include those proposed by Facchinetti, by Farshidian and Dolatabadi, and by Skop and Griffin. They combine a wake oscillator model resembling the Van der Pol oscillator with a single-degree-of-freedom oscillation model. In order to use these models for estimating the top displacement of chimneys, only the first-mode vibration of the chimneys is considered. The modal equation of the chimney constitutes the single-degree-of-freedom (SDOF) model. The equations of the wake oscillator model and the SDOF model are solved simultaneously using an iterative procedure. The empirical parameters used in the wake-oscillator models are estimated using a newly developed approach, and the response is compared with experimental data, with which it appears comparable. For carrying out the iterative solution, the ODE solver of MATLAB is used. To carry out the comparative study, a tall concrete chimney of height 210 m has been chosen, with a base diameter of 28 m, a top diameter of 20 m, and a wall thickness of 0.3 m. The responses of the chimney are also determined using the linear model proposed by E. Simiu and the deterministic model given in the Eurocode. It is observed from the comparative study that the responses predicted by the Facchinetti model and the model proposed by Skop and Griffin are nearly the same, while the model proposed by Farshidian and Dolatabadi predicts a higher response. The linear model, which does not consider the aero-elastic phenomenon, provides a lower response than the non-linear models. Further, for large damping, the response predicted by the Eurocode compares relatively well with those of the non-linear models.
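
A minimal sketch of the coupled system described above: a Facchinetti-type Van der Pol wake oscillator driving, and driven by, a single-degree-of-freedom structural oscillator, integrated here with SciPy's ODE solver rather than MATLAB's. All parameter values are illustrative assumptions, not the chimney's identified parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Facchinetti-type coupled structure/wake-oscillator model (dimensionless,
# with illustrative parameter values).
zeta, omega_s = 0.005, 1.0        # structural damping ratio and natural frequency
omega_f = 1.0                     # vortex-shedding (wake) frequency, here at lock-in
eps, A, M = 0.3, 12.0, 0.05       # van der Pol parameter and coupling coefficients

def rhs(t, s):
    y, ydot, q, qdot = s
    qddot = -eps * omega_f * (q**2 - 1.0) * qdot - omega_f**2 * q  # wake oscillator
    yddot = -2 * zeta * omega_s * ydot - omega_s**2 * y + M * q    # structure forced by wake
    qddot += A * yddot                                             # acceleration coupling
    return [ydot, yddot, qdot, qddot]

sol = solve_ivp(rhs, (0.0, 500.0), [0.0, 0.0, 0.1, 0.0], max_step=0.05)
print("steady-state displacement amplitude ~", np.max(np.abs(sol.y[0][-2000:])))
```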

Keywords: chimney, deterministic model, van der pol, vortex-induced vibration

Procedia PDF Downloads 194
17776 Effect of Outliers in Assessing Significant Wave Heights Through a Time-Dependent GEV Model

Authors: F. Calderón-Vega, A. D. García-Soto, C. Mösso

Abstract:

Recorded significant wave heights sometimes exhibit large uncommon values (outliers) that can be associated with extreme phenomena such as hurricanes and cold fronts. In this study, some extremely large wave heights recorded by NOAA buoys (National Data Buoy Center, noaa.gov) are used to investigate their effect on the prediction of future wave heights associated with given return periods. Extreme waves are predicted through a time-dependent model based on the so-called generalized extreme value (GEV) distribution. It is found that the outliers do affect the estimated wave heights. It is concluded that a detailed inspection of outliers is warranted to determine whether they are real recorded values, since this will impact the definition of design wave heights for coastal protection purposes.
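
For orientation, the sketch below fits a stationary GEV distribution to annual-maximum wave heights with and without two artificial outliers and compares the resulting 100-year return levels; the paper's model is time-dependent, and the data used here are made up.

```python
import numpy as np
from scipy.stats import genextreme

# Illustrative annual-maximum significant wave heights (metres); values are
# invented, with two large "outliers" appended to mimic hurricane records.
hs_annual_max = np.array([4.1, 3.8, 4.5, 5.0, 4.2, 4.7, 3.9, 4.4, 4.8, 4.3,
                          4.6, 5.1, 4.0, 4.9, 4.4, 9.8, 10.5])

def gev_return_level(sample, return_period):
    c, loc, scale = genextreme.fit(sample)          # stationary GEV fit (MLE)
    return genextreme.ppf(1.0 - 1.0 / return_period, c, loc=loc, scale=scale)

for data, label in [(hs_annual_max, "with outliers"),
                    (hs_annual_max[:-2], "outliers removed")]:
    print(f"100-year Hs ({label}): {gev_return_level(data, 100):.2f} m")
```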

Keywords: GEV model, non-stationary, seasonality, outliers

Procedia PDF Downloads 167
17775 Frailty Models for Modeling Heterogeneity: Simulation Study and Application to Quebec Pension Plan

Authors: Souad Romdhane, Lotfi Belkacem

Abstract:

When referring to actuarial analysis of lifetimes, only models accounting for observable risk factors have commonly been developed. Within this context, the Cox proportional hazards model (CPH model) is commonly used to assess the effects of observable covariates, such as gender, age and smoking habits, on the hazard rates. These covariates may fail to fully account for the true lifetime interval. This may be due to the existence of another random variable (frailty) that is still being ignored. The aim of this paper is to examine the shared frailty issue in the Cox proportional hazards model by including two different parametric forms of frailty in the hazard function. Four estimation methods are used to fit them. The performance of the parameter estimates is assessed and compared between the classical Cox model and these frailty models through a real-life data set from the Quebec Pension Plan and then using a more general simulation study. This performance is investigated in terms of the bias of the point estimates and their empirical standard errors in both the fixed and random effect parts. Both the simulation and the real data set studies showed differences between the classical Cox model and the shared frailty model.

Keywords: life insurance-pension plan, survival analysis, risk factors, cox proportional hazards model, multivariate failure-time data, shared frailty, simulations study

Procedia PDF Downloads 332
17774 The Use of Performance Indicators for Evaluating Models of Drying Jackfruit (Artocarpus heterophyllus L.): Page, Midilli, and Lewis

Authors: D. S. C. Soares, D. G. Costa, J. T. S., A. K. S. Abud, T. P. Nunes, A. M. Oliveira Júnior

Abstract:

Mathematical models of drying are used for the purpose of understanding the drying process in order to determine important parameters for the design and operation of the dryer. The jackfruit is a fruit with high consumption in the Northeast and high perishability. It is necessary to apply techniques that preserve it for longer in order to spread it to regions with low consumption. This study aimed to analyse several mathematical models (Page, Lewis, and Midilli) to indicate the one that best fits the conditions of the convective drying process, using performance indicators associated with each model: the accuracy (Af) and noise (Bf) factors, the root mean square error (RMSE) and the standard error of prediction (%SEP). Jackfruit drying was carried out in a convective tray dryer at a temperature of 50°C for 9 hours. It was observed that the Midilli model was more accurate, with Af: 1.39, Bf: 1.33, RMSE: 0.01%, and %SEP: 5.34. However, the use of the Midilli model is not appropriate for process control purposes due to the need for four tuning parameters. With the performance indicators used in this paper, the Page model showed similar results with only two parameters. It is concluded that the best correlation between the experimental and estimated data is given by Page's model.
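
A minimal sketch of the model-fitting and performance-indicator step: the Lewis, Page and Midilli moisture-ratio equations are fitted to synthetic drying data by nonlinear least squares, and RMSE and %SEP are computed for each. The data, initial guesses and the %SEP definition shown are assumptions, not the study's experimental values.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative moisture-ratio data for a 9 h drying run (values are made up).
t = np.linspace(0, 9, 19)                                # hours
mr_obs = np.exp(-0.35 * t**1.1) + np.random.default_rng(2).normal(0, 0.01, t.size)

models = {
    "Lewis":   (lambda t, k:          np.exp(-k * t),               [0.3]),
    "Page":    (lambda t, k, n:       np.exp(-k * t**n),            [0.3, 1.0]),
    "Midilli": (lambda t, a, k, n, b: a * np.exp(-k * t**n) + b * t, [1.0, 0.3, 1.0, 0.0]),
}

for name, (f, p0) in models.items():
    popt, _ = curve_fit(f, t, mr_obs, p0=p0, maxfev=10000)
    mr_est = f(t, *popt)
    rmse = np.sqrt(np.mean((mr_obs - mr_est)**2))
    sep = 100.0 * rmse / np.mean(mr_obs)                 # standard error of prediction, %
    print(f"{name:8s} params={np.round(popt, 3)}  RMSE={rmse:.4f}  %SEP={sep:.2f}")
```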

Keywords: drying, models, jackfruit, biotechnology

Procedia PDF Downloads 351
17773 The Origins of Inflation in Tunisia

Authors: Narimen Rdhaounia, Mohamed Kouni

Abstract:

Our aim in this paper is to identify the origins of inflation in Tunisia over the period from 1988 to 2018. In order to estimate the model, an ARDL methodology is used. We also studied the effect of the informal economy on inflation. Indeed, we estimated the size of the informal economy in Tunisia based on the Gutmann method. The results showed that there are three main origins of inflation. The first origin is the fiscal policy adopted by Tunisia, particularly after the revolution. The second origin is the increase in monetary variables. Finally, the informal economy played an important role in inflation.

Keywords: inflation, consumer price index, informal, gutmann method, ARDL model

Procedia PDF Downloads 54
17772 Breast Cancer Incidence Estimation in Castilla-La Mancha (CLM) from Mortality and Survival Data

Authors: C. Romero, R. Ortega, P. Sánchez-Camacho, P. Aguilar, V. Segur, J. Ruiz, G. Gutiérrez

Abstract:

Introduction: Breast cancer is a leading cause of death in CLM (2.8% of all deaths in women and 13.8% of deaths from tumors in women). It is the tumor with the highest incidence in the CLM region, accounting for 26.1% of all tumors except non-melanoma skin cancer (Cancer Incidence in Five Continents, Volume X, IARC). Cancer registries are a good information source for estimating cancer incidence; however, the data are usually available with a lag, which makes their use difficult for health managers. By contrast, mortality and survival statistics have less delay. In order to serve resource planning and respond to this problem, a method is presented to estimate the incidence from mortality and survival data. Objectives: To estimate the incidence of breast cancer by age group in CLM in the period 1991-2013, and to compare the data obtained from the model with current incidence data. Sources: Annual number of women by single year of age (National Statistics Institute); annual number of deaths from all causes and from breast cancer (CLM Mortality Registry); breast cancer relative survival probability (EUROCARE, Spanish registries data). Methods: A Weibull parametric survival model is obtained from the EUROCARE data. From the survival model, together with the population and mortality data, the Mortality and Incidence Analysis MODel (MIAMOD) regression model is obtained to estimate the incidence of cancer by age (1991-2013). Results: The resulting model is Ix,t = Logit[const + age1*x + age2*x^2 + coh1*(t - x) + coh2*(t - x)^2], where Ix,t is the incidence at age x in the period (year) t, and the values of the parameter estimates are: const (constant term in the model) = -7.03; age1 = 3.31; age2 = -1.10; coh1 = 0.61; and coh2 = -0.12. It is estimated that 662 cases of breast cancer were diagnosed in CLM in 1991 (81.51 per 100,000 women). An estimated 1,152 cases (112.41 per 100,000 women) were diagnosed in 2013, representing an increase of 40.7% in the gross incidence rate (1.9% per year). The annual average increases in incidence by age were: 2.07% in women aged 25-44 years, 1.01% (45-54 years), 1.11% (55-64 years) and 1.24% (65-74 years). Cancer registries in Spain that send data to IARC reported an average annual incidence rate of 98.6 cases per 100,000 women for 2003-2007; our model obtains an incidence of 100.7 cases per 100,000 women. Conclusions: A sharp and steady increase in the incidence of breast cancer in the period 1991-2013 is observed. The increase was seen in all age groups considered, although it seems more pronounced in young women (25-44 years). With this method, a good estimate of the incidence can be obtained.
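
The fitted age-cohort polynomial reported above can be evaluated as follows; because the abstract does not state how MIAMOD scales the age and cohort covariates, the function below treats them as already-scaled inputs, and the returned numbers are illustrative only.

```python
import numpy as np

def incidence(x, t, const=-7.03, age1=3.31, age2=-1.10, coh1=0.61, coh2=-0.12):
    """Inverse-logit evaluation of the reported age-cohort polynomial.

    x is the (suitably scaled) age covariate and t - x the (scaled) cohort
    covariate; the abstract does not give the scaling MIAMOD applies, so the
    absolute rates returned here are illustrative only.
    """
    cohort = t - x
    lp = const + age1 * x + age2 * x**2 + coh1 * cohort + coh2 * cohort**2
    return 1.0 / (1.0 + np.exp(-lp))      # logistic link maps the predictor to a rate

# Example: incidence surface over a small grid of scaled age/period values.
ages = np.linspace(0.0, 1.5, 4)
periods = np.linspace(0.0, 2.0, 5)
rates = np.array([[incidence(x, t) for t in periods] for x in ages])
print(np.round(rates, 4))
```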

Keywords: breast cancer, incidence, cancer registries, castilla-la mancha

Procedia PDF Downloads 288
17771 Estimation of Chronic Kidney Disease Using Artificial Neural Network

Authors: Ilker Ali Ozkan

Abstract:

In this study, an artificial neural network model has been developed to estimate chronic kidney failure, which is a common disease. The patients' age, their blood and biochemical values, and 24 input variables consisting of various chronic diseases are used for the estimation process. The input data have been subjected to preprocessing because they contain both missing values and nominal values. The 147 patient records obtained from preprocessing have been divided into 70% training and 30% testing data. As a result of the study, the artificial neural network model with 25 neurons in the hidden layer was found to be the model with the lowest error value. Chronic kidney failure has been estimated with an accuracy of 99.3% using this artificial neural network model. The developed artificial neural network has been found successful for the estimation of chronic kidney failure using clinical data.
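
A minimal sketch of the described pipeline (imputation of missing values, a 70/30 split, and a single hidden layer with 25 neurons) using scikit-learn on stand-in data; the study's actual clinical inputs and preprocessing details are not reproduced.

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

# Stand-in data: 147 patients x 24 inputs, a synthetic binary outcome,
# and a few injected missing values to mimic the preprocessing problem.
X = rng.normal(size=(147, 24))
w = rng.normal(size=24)
y = (X @ w + rng.normal(scale=0.5, size=147) > 0).astype(int)
X[rng.random(X.shape) < 0.05] = np.nan

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = make_pipeline(
    SimpleImputer(strategy="mean"),              # handle missing values
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(25,), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```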

Keywords: estimation, artificial neural network, chronic kidney failure disease, disease diagnosis

Procedia PDF Downloads 415
17770 Robust Speed Sensorless Control to Estimated Error for PMa-SynRM

Authors: Kyoung-Jin Joo, In-Gun Kim, Hyun-Seok Hong, Dong-Woo Kang, Ju Lee

Abstract:

Recently, the permanent magnet-assisted synchronous reluctance motor (PMa-SynRM), which can be substituted for the induction motor, has been studied because of the need to develop premium high-efficiency motors for the minimum energy performance standard (MEPS). The PMa-SynRM requires speed and position information for motor speed and torque control. However, applying sensors brings problems such as a shortage of sensor mounting space and additional cost. Therefore, in this paper, speed-sensorless control based on a model reference adaptive system (MRAS) is introduced to eliminate the sensor. The sensorless method is constructed with a reference model as the standard and an adaptive model as the state observer. The proposed algorithm is verified by simulation.

Keywords: PMa-SynRM, sensorless control, robust estimation, MRAS method

Procedia PDF Downloads 372
17769 Downtime Modelling for the Post-Earthquake Building Assessment Phase

Authors: S. Khakurel, R. P. Dhakal, T. Z. Yeow

Abstract:

Downtime is one of the major sources (alongside damage and injury/death) of financial loss incurred by a structure in an earthquake. The length of downtime associated with a building after an earthquake varies depending on the time taken for the reaction (to the earthquake), decision (on the future course of action) and execution (of the decided course of action) phases. Post-earthquake assessment of buildings is a key step in the decision-making process to decide the appropriate safety placarding as well as to decide whether a damaged building is to be repaired or demolished. The aim of the present study is to develop a model to quantify downtime associated with the post-earthquake building-assessment phase in terms of two parameters: i) the duration of the different assessment phases; and ii) the probability of different colour tagging. Post-earthquake assessment of buildings includes three stages: Level 1 Rapid Assessment, a fast external inspection shortly after the earthquake; Level 2 Rapid Assessment, including a visit inside the building; and a Detailed Engineering Evaluation (if needed). In this study, the durations of all three assessment phases are first estimated from the total number of damaged buildings, the total number of available engineers and the average time needed for assessing each building. Then, the probability of different tag colours is computed from the 2010-11 Canterbury Earthquake Sequence database. Finally, a downtime model for the post-earthquake building inspection phase is proposed based on the estimated phase lengths and the probability of tag colours. This model is expected to be used for rapid estimation of seismic downtime within the Loss Optimisation Seismic Design (LOSD) framework.
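
A minimal sketch of the phase-length calculation described above: each stage's duration follows from the number of buildings to assess, the engineers available and the mean inspection time per building, alongside placeholder tag-colour probabilities. All numbers are illustrative assumptions, not values from the Canterbury data.

```python
# Phase-length estimate: duration = buildings * hours_per_building / (engineers * hours_per_day).
phases = {
    # name: (buildings to assess, engineers available, hours per building) -- assumed values
    "Level 1 Rapid Assessment":        (50000, 200, 0.5),
    "Level 2 Rapid Assessment":        (15000, 150, 2.0),
    "Detailed Engineering Evaluation": (3000,  80,  16.0),
}
HOURS_PER_DAY = 8.0

for name, (n_buildings, n_engineers, hrs_each) in phases.items():
    days = n_buildings * hrs_each / (n_engineers * HOURS_PER_DAY)
    print(f"{name}: ~{days:.1f} working days")

# Placeholder tag-colour probabilities of the kind derived from an
# earthquake-sequence assessment database.
tag_probability = {"green": 0.70, "yellow": 0.20, "red": 0.10}
```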

Keywords: assessment, downtime, LOSD, Loss Optimisation Seismic Design, phase length, tag color

Procedia PDF Downloads 158
17768 Effect of Drag Coefficient Models concerning Global Air-Sea Momentum Flux in Broad Wind Range including Extreme Wind Speeds

Authors: Takeshi Takemoto, Naoya Suzuki, Naohisa Takagaki, Satoru Komori, Masako Terui, George Truscott

Abstract:

The drag coefficient is an important parameter for correctly estimating the air-sea momentum flux. However, the parameterization of the drag coefficient has not been established due to the variation in the field data. Instead, a number of drag coefficient model formulae have been proposed, even though almost all of these models do not address the extreme wind speed range. With regard to such models, it is unclear how the drag coefficient changes in the extreme wind speed range as the wind speed increases. In this study, we investigated the effect of the drag coefficient models on the air-sea momentum flux in the extreme wind range on a global scale, comparing two different drag coefficient models. Interestingly, one model did not address the extreme wind speed range while the other model considered it. We found that the difference between the models in the annual global air-sea momentum flux was small, because the occurrence frequency of strong winds (wind speeds of 20 m/s or more) was approximately 1%. However, we also discovered that the difference between the models appeared in the middle latitudes, where the annual mean air-sea momentum flux is large and the occurrence frequency of strong wind is high. In addition, the estimated data showed that the difference between the models in the drag coefficient was large in the extreme wind speed range and that the largest difference reached 23% at wind speeds of 35 m/s or more. These results clearly show that the difference between the two models in the drag coefficient has a significant impact on the estimation of a regional air-sea momentum flux in an extreme wind speed range such as that seen in a tropical cyclone environment. Furthermore, we estimated the air-sea momentum flux using several kinds of drag coefficient models. We will also provide data from an observation tower and results from CFD (Computational Fluid Dynamics) simulations concerning the influence of wind flow at and around the site.

Keywords: air-sea interaction, drag coefficient, air-sea momentum flux, CFD (Computational Fluid Dynamics)

Procedia PDF Downloads 343
17767 3D Liver Segmentation from CT Images Using a Level Set Method Based on a Shape and Intensity Distribution Prior

Authors: Nuseiba M. Altarawneh, Suhuai Luo, Brian Regan, Guijin Tang

Abstract:

Liver segmentation from medical images poses more challenges than analogous segmentations of other organs. This contribution introduces a liver segmentation method from a series of computed tomography images. Overall, we present a novel method for segmenting the liver by coupling density matching with shape priors. Density matching signifies a tracking method that operates by maximizing the Bhattacharyya similarity measure between the photometric distribution from an estimated image region and a model photometric distribution. Density matching controls the direction of the evolution process and slows down the evolving contour in regions with weak edges. The shape prior improves the robustness of density matching and discourages the evolving contour from exceeding the liver's boundaries in regions with weak boundaries. The model is implemented using a modified distance regularized level set (DRLS) model. The experimental results show that the method achieves a satisfactory result. By comparing with the original DRLS model, it is evident that the proposed model is more effective in addressing the over-segmentation problem. Finally, we gauge the performance of our model against metrics comprising accuracy, sensitivity and specificity.

Keywords: Bhattacharyya distance, distance regularized level set (DRLS) model, liver segmentation, level set method

Procedia PDF Downloads 292
17766 Cybernetic Modeling of Growth Dynamics of Debaryomyces nepalensis NCYC 3413 and Xylitol Production in Batch Reactor

Authors: J. Sharon Mano Pappu, Sathyanarayana N. Gummadi

Abstract:

Growth of Debaryomyces nepalensis on mixed substrates in batch culture follows a diauxic pattern of completely utilizing glucose during the first exponential growth phase, followed by an intermediate lag phase and a second exponential growth phase consuming xylose. The present study deals with the development of a cybernetic mathematical model for the prediction of xylitol production and yield. Production of xylitol from xylose in batch fermentation is investigated in the presence of glucose as the co-substrate. Different ratios of glucose and xylose concentrations are assessed to study the impact of the mixed substrate on the production of xylitol in batch reactors. The parameters in the model equations were estimated from experimental observations using the integral method. The model equations were solved simultaneously by a numerical technique using MATLAB. The developed cybernetic model of xylose fermentation in the presence of a co-substrate can provide answers about how the ratio of glucose to xylose influences the yield and rate of production of xylitol. This model is expected to accurately predict the growth of the microorganism on the mixed substrate, the duration of the intermediate lag phase, the consumption of substrate, and the production of xylitol. The model, developed based on the cybernetic modelling framework, can be helpful for simulating the dynamic competition between the metabolic pathways.

Keywords: co-substrate, cybernetic model, diauxic growth, xylose, xylitol

Procedia PDF Downloads 302
17765 A New Nonlinear State-Space Model and Its Application

Authors: Abdullah Eqal Al Mazrooei

Abstract:

In this work, a new nonlinear model will be introduced. The model is in state-space form. The nonlinearity of this model is in the state equation, where the state vector is multiplied by itself. This technique makes our model generalize many famous models, such as the Lotka-Volterra model and the Lorenz model, which have many applications in real life. We will apply our new model to estimate the wind speed by using a new nonlinear estimator which is suitable for working with our model.
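
A minimal simulation sketch of a state-space model of this type, with the quadratic self-multiplication of the state written via the Kronecker product (as in the keywords); the matrices and noise levels are arbitrary illustrative choices, not the wind-speed application.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2                                           # state dimension (illustrative)

# x_{k+1} = A x_k + B (x_k kron x_k) + w_k ,   y_k = C x_k + v_k
# The quadratic (self-multiplied) state term is what makes the model nonlinear.
A = np.array([[0.9, 0.05],
              [0.0, 0.95]])
B = 0.01 * rng.normal(size=(n, n * n))          # acts on the Kronecker square of the state
C = np.array([[1.0, 0.0]])

x = np.array([1.0, 0.5])
states, outputs = [], []
for k in range(200):
    w = 0.01 * rng.normal(size=n)               # process noise
    v = 0.05 * rng.normal(size=1)               # measurement noise
    x = A @ x + B @ np.kron(x, x) + w
    states.append(x)
    outputs.append(C @ x + v)

print("final state:", states[-1], " last output:", outputs[-1])
```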

Keywords: nonlinear systems, state-space model, Kronecker product, nonlinear estimator

Procedia PDF Downloads 661
17764 Efficiency Measurement of Turkish Universities via the Stochastic Frontier Model

Authors: Yeliz Mert Kantar, İsmail Yeni̇lmez, Ibrahim Arik

Abstract:

In this study, the efficiency measurement of the top fifty Turkish universities has been conducted. The top fifty Turkish universities are listed by the Scientific and Technological Research Council of Turkey (TÜBİTAK) according to the Entrepreneur and Innovative University Index every year. The index has been calculated based on four components since 2018. The four components are scientific and technological research competency, intellectual property pool, cooperation and interaction, and economic and social contribution, and they consist of twenty-three sub-components. The 2021 list, announced in January 2022, is discussed in this study. The efficiency analysis has been carried out using the Stochastic Frontier Model. The statistical significance of the sub-components that make up the index with certain weights has been examined in terms of the efficiency measurement calculated through the Stochastic Frontier Model. The relationship between the efficiency ranking estimated based on the Stochastic Frontier Model and the Entrepreneur and Innovative University Index ranking is discussed in detail.

Keywords: efficiency, entrepreneur and innovative universities, turkish universities, stochastic frontier model, tübitak

Procedia PDF Downloads 65
17763 Long-Term Exposure Assessments for Cooking Workers Exposed to Polycyclic Aromatic Hydrocarbons and Aldehydes Containing in Cooking Fumes

Authors: Chun-Yu Chen, Kua-Rong Wu, Yu-Cheng Chen, Perng-Jy Tsai

Abstract:

Cooking fumes are known to contain polycyclic aromatic hydrocarbons (PAHs) and aldehydes, and some of them have been proven carcinogenic or possibly carcinogenic to humans. Considering their chronic health effects, long-term exposure data are required for assessing cooking workers' lifetime health risks. Previous exposure assessment studies, due to both time and cost constraints, were mostly based on cross-sectional data. Therefore, establishing long-term exposure data has become an important issue for conducting health risk assessments for cooking workers. An approach is proposed in this study. Here, the generation rates of both PAHs and aldehydes from a cooking process were determined by placing a sampling train exactly under the exhaust fan under both the total enclosure condition and the normal operating condition, respectively. Subtracting the concentration collected under the latter condition (representing the hood-collected concentration) from that of the former (representing the total emitted concentration), the fugitive emitted concentration was determined. The above data were further converted to generation rates based on the flow rates specified for the exhaust fan. The determinations of the above generation rates were conducted in a testing chamber with a selected cooking process (deep-frying chicken nuggets in 3 L of peanut oil at 200°C). The sampling train installed under the exhaust fan consisted of an IOM inhalable sampler with a glass fiber filter for collecting particle-phase PAHs, followed by an XAD-2 tube for gas-phase PAHs. The same train was also used to sample aldehydes, but installed with a filter pre-coated with DNPH and followed by a 2,4-DNPH cartridge for collecting particle-phase and gas-phase aldehydes, respectively. PAH and aldehyde samples were analyzed by GC/MS-MS (Agilent 7890B) and HPLC-UV (HITACHI L-7100), respectively. The obtained generation rates of both PAHs and aldehydes were applied to the near-field/far-field exposure model to estimate the exposures of cooks (the estimated near-field concentration) and helpers (the estimated far-field concentration). For validation purposes, both PAH and aldehyde samplings were conducted simultaneously using the same sampling train at both the near-field and far-field sites of the testing chamber. The sampling results, together with the use of a mixed-effect model, were used to calibrate the estimated near-field/far-field exposures. In the present study, the obtained emission rates were further converted to emission factors of both PAHs and aldehydes according to the amount of food oil consumed. Applying long-term food oil consumption records, the emission rates for both PAHs and aldehydes were determined, and the long-term exposure databanks for cooks (the estimated near-field concentration) and helpers (the estimated far-field concentration) were then established. The results show that the proposed approach was adequate for determining the generation rates of both PAHs and aldehydes under various fan exhaust flow rate conditions. The estimated near-field/far-field exposures, though significantly different from those obtained from the field, can be calibrated using the mixed-effect model. Finally, the established long-term databank could provide a useful basis for conducting long-term exposure assessments for cooking workers exposed to PAHs and aldehydes.
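
For orientation, a common steady-state form of the near-field/far-field (two-box) exposure model referred to above is sketched below: the far-field concentration is the emission rate over the general ventilation rate, and the near-field concentration adds the emission rate over the near-field/far-field air exchange rate. The numerical values are placeholders, not the study's measured rates.

```python
def nf_ff_steady_state(G_mg_min, Q_m3_min, beta_m3_min):
    """Return (near-field, far-field) steady-state concentrations in mg/m3.

    G is the contaminant generation rate, Q the room ventilation rate and
    beta the near-field/far-field air exchange rate (all assumed values).
    """
    c_ff = G_mg_min / Q_m3_min                       # far-field (helpers)
    c_nf = c_ff + G_mg_min / beta_m3_min             # near-field (cooks)
    return c_nf, c_ff

G = 0.8        # mg/min, emission rate of a PAH surrogate (assumed)
Q = 20.0       # m3/min, kitchen general ventilation (assumed)
beta = 5.0     # m3/min, near-field/far-field air exchange (assumed)

c_nf, c_ff = nf_ff_steady_state(G, Q, beta)
print(f"cook (near-field) ~ {c_nf:.3f} mg/m3, helper (far-field) ~ {c_ff:.3f} mg/m3")
```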

Keywords: aldehydes, cooking oil fumes, long-term exposure assessment, modeling, polycyclic aromatic hydrocarbons (PAHs)

Procedia PDF Downloads 115
17762 The Impact of Natural Resources on Financial Development: The Global Perspective

Authors: Remy Jonkam Oben

Abstract:

Using a time series approach, this study investigates how natural resources impact financial development from a global perspective over the 1980-2019 period. Some important determinants of financial development (economic growth, trade openness, population growth, and investment) have been added to the model as control variables. Unit root tests reveal that all the variables are integrated of order one. Johansen's cointegration test shows that the variables are in a long-run equilibrium relationship. The vector error correction model (VECM) estimates the coefficient of the error correction term (ECT), which suggests that the short-run values of natural resources, economic growth, trade openness, population growth, and investment contribute to financial development converging to its long-run equilibrium level at a 23.63% annual speed of adjustment. The estimated coefficients suggest that global natural resource rent has a statistically significant negative impact on global financial development in the long run (thereby validating the financial resource curse) but not in the short run. Causality test results imply that global natural resource rent and global financial development do not Granger-cause each other.

Keywords: financial development, natural resources, resource curse hypothesis, time series analysis, Granger causality, global perspective

Procedia PDF Downloads 122
17761 Magneto-Rheological Damper Based Semi-Active Robust H∞ Control of Civil Structures with Parametric Uncertainties

Authors: Vedat Senol, Gursoy Turan, Anders Helmersson, Vortechz Andersson

Abstract:

In developing a mathematical model of a real structure, the simulation results of the model may not match the real structural response. This is a general problem that arises during dynamic motion of the structure, and it may be modeled by means of parameter variations in the stiffness, damping, and mass matrices. These changes in parameters need to be estimated, and the mathematical model is updated to obtain higher control performance and robustness. In this study, a linear fractional transformation (LFT) is utilized for uncertainty modeling. Further, a general approach to the design of an H∞ control for a magneto-rheological damper (MRD) for vibration reduction in a building with mass, damping, and stiffness uncertainties is presented.

Keywords: uncertainty modeling, structural control, MR Damper, H∞, robust control

Procedia PDF Downloads 118
17760 Pressure-Detecting Method for Estimating Levitation Gap Height of Swirl Gripper

Authors: Kaige Shi, Chao Jiang, Xin Li

Abstract:

The swirl gripper is an electrically activated noncontact handling device that uses swirling airflow to generate a lifting force. This force can be used to pick up a workpiece placed underneath the swirl gripper without any contact. It is applicable, for example, in the semiconductor wafer production line, where contact must be avoided during the handling and moving of a workpiece to minimize damage. When a workpiece levitates underneath a swirl gripper, the gap height between them is crucial for safe handling. Therefore, in this paper, we propose a method to estimate the levitation gap height by detecting the pressure at two points. The method is based on a theoretical model of the swirl gripper and has been experimentally verified. Furthermore, the force between the gripper and the workpiece can also be estimated using the detected pressure. As a result, the nonlinear relationship between the force and the gap height can be linearized by adjusting the rotating speed of the fan in the swirl gripper according to the estimated force and gap height. The linearized relationship is expected to enhance the handling stability of the workpiece.

Keywords: swirl gripper, noncontact handling, levitation, gap height estimation

Procedia PDF Downloads 110
17759 Modern Imputation Technique for Missing Data in Linear Functional Relationship Model

Authors: Adilah Abdul Ghapor, Yong Zulina Zubairi, Rahmatullah Imon

Abstract:

The missing value problem is common in statistics and has been of interest for years. This article considers two modern techniques for handling missing data in the linear functional relationship model (LFRM), namely the Expectation-Maximization (EM) algorithm and the Expectation-Maximization with Bootstrapping (EMB) algorithm, using three performance indicators: the mean absolute error (MAE), the root mean square error (RMSE) and the estimated bias (EB). In this study, we applied these methods of imputing missing values in the LFRM. Results of the simulation study suggest that the EMB algorithm performs much better than the EM algorithm in both models. We also illustrate the applicability of the approach on a real data set.

Keywords: expectation-maximization, expectation-maximization with bootstrapping, linear functional relationship model, performance indicators

Procedia PDF Downloads 364
17758 Application of Artificial Neural Network in Initiating Cleaning Of Photovoltaic Solar Panels

Authors: Mohamed Mokhtar, Mostafa F. Shaaban

Abstract:

Among the challenges facing solar photovoltaic (PV) systems in the United Arab Emirates (UAE), dust accumulation on solar panels is considered the most severe problem facing the growth of solar power plants. The accumulation of dust on the solar panels significantly degrades the output from these panels. Hence, solar PV panels have to be cleaned manually or using costly automated cleaning methods. This paper focuses on initiating cleaning actions only when required, in order to reduce maintenance costs. The cleaning actions are triggered only when the dust level exceeds a threshold value. The amount of dust accumulated on the PV panels is estimated using an artificial neural network (ANN). Experiments are conducted to collect the required data, which are used in the training of the ANN model. This ANN model is then fed with the output power from the solar panels, the ambient temperature, and the solar irradiance, and thus it is able to estimate the amount of dust accumulated on the solar panels under these conditions. The model was tested on different case studies to confirm its accuracy.

Keywords: machine learning, dust, PV panels, renewable energy

Procedia PDF Downloads 117
17757 An Optimal Approach for Full-Detailed Friction Model Identification of Reaction Wheel

Authors: Ghasem Sharifi, Hamed Shahmohamadi Ousaloo, Milad Azimi, Mehran Mirshams

Abstract:

The ever-increasing use of satellites demands a search for increasingly accurate and reliable pointing systems. Reaction wheels are rotating devices commonly used for the attitude control of spacecraft, since they provide a wide range of torque magnitudes and high reliability. The numerical modeling of this device can significantly enhance the accuracy of satellite control in space. Modeling the wheel rotation in the presence of the various friction effects is one of the critical parts of this approach. This paper presents a Dynamic Model Control of a Reaction Wheel (DMCR) in the current control mode. In current mode, the required current is delivered to the coils in order to achieve the desired torque. In this research, all the friction parameters, such as the viscous and Coulomb coefficients, the motor coefficient, the resistance and the voltage constant, are identified. For model identification of the reaction wheel, numerous varying current commands are applied to the particular wheel to verify the estimated model. All the parameters of the DMCR are identified by the classical Levenberg-Marquardt (CLM) optimization method. The experimental results demonstrate that the developed model has appropriate precision and can be used in satellite control simulation.
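
A minimal sketch of friction-parameter identification by Levenberg-Marquardt in the spirit of the approach above: a first-order wheel model with viscous and Coulomb friction is simulated for a varying current command, and its parameters are recovered from a noisy speed response. The model structure, the numerical values and the use of SciPy are assumptions, not the paper's DMCR implementation.

```python
import numpy as np
from scipy.optimize import least_squares

# Wheel dynamics: J*dw/dt = kt*i - b*w - tc*sign(w), with (kt, b, tc) to identify.
J = 2.5e-3                                   # wheel inertia, kg*m^2 (assumed known)
dt = 0.01
t = np.arange(0.0, 20.0, dt)
i_cmd = 0.2 * np.sign(np.sin(0.5 * t))       # varying current command, A

def simulate_speed(params):
    kt, b, tc = params
    w = np.zeros_like(t)
    for k in range(1, t.size):
        torque = kt * i_cmd[k - 1] - b * w[k - 1] - tc * np.sign(w[k - 1])
        w[k] = w[k - 1] + dt * torque / J    # explicit Euler integration
    return w

true_params = np.array([0.03, 2.0e-4, 3.0e-3])     # kt [N*m/A], b [N*m*s], tc [N*m]
w_meas = simulate_speed(true_params) + np.random.default_rng(5).normal(0, 0.2, t.size)

res = least_squares(lambda p: simulate_speed(p) - w_meas,
                    x0=[0.01, 1.0e-4, 1.0e-3], method="lm")
print("identified [kt, b, tc]:", res.x)
```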

Keywords: experimental modeling, friction parameters, model identification, reaction wheel

Procedia PDF Downloads 207
17756 Estimate of Maximum Expected Intensity of One-Half-Wave Lines Dancing

Authors: A. Bekbaev, M. Dzhamanbaev, R. Abitaeva, A. Karbozova, G. Nabyeva

Abstract:

In this paper, the regression dependence of dancing intensity on wind speed and span length was established using the statistical data obtained from multi-year observations of line wire dancing accumulated by the power systems of Kazakhstan and the Russian Federation. The lower and upper limits of the equation parameters were estimated, as well as the adequacy of the regression model. The constructed model will be used in research on dancing phenomena for the development of methods and means of protection against dancing, and for zoning the territories according to line wire dancing intensity.

Keywords: power lines, line wire dancing, dancing intensity, regression equation, dancing area intensity

Procedia PDF Downloads 288
17755 Estimation of Transition and Emission Probabilities

Authors: Aakansha Gupta, Neha Vadnere, Tapasvi Soni, M. Anbarsi

Abstract:

Protein secondary structure prediction is one of the most important goals pursued by bioinformatics and theoretical chemistry; it is highly important in medicine and biotechnology. Some aspects of protein function and genome analysis can be predicted by secondary structure prediction. This is used to help annotate sequences, classify proteins, identify domains, and recognize functional motifs. In this paper, we represent protein secondary structure as a mathematical model. To extract and predict the protein secondary structure from the primary structure, we require a set of parameters. Any constants appearing in the model are specified by these parameters, which also provide a mechanism for efficient and accurate use of data. To estimate these model parameters, there are many algorithms, of which the most popular is the EM algorithm, also called the Expectation Maximization algorithm. These model parameters are estimated from protein datasets such as RS126 by using a Bayesian probabilistic method (the data set being categorical). This paper can then be extended to comparing the efficiency of the EM algorithm with other algorithms for estimating the model parameters, which will in turn lead to an efficient component for protein secondary structure prediction. Further, this paper provides scope for using these parameters to predict the secondary structure of proteins using machine learning techniques such as neural networks and fuzzy logic. The ultimate objective will be to obtain accuracy greater than previously achieved.
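
As a simple illustration of transition/emission estimation, the sketch below computes maximum-likelihood (counting) estimates from toy sequences whose states are labelled; the EM algorithm discussed above handles the unlabelled case, which this sketch does not implement. The states, symbols and data are made up.

```python
import numpy as np

states = ["H", "E", "C"]                      # helix, strand, coil
symbols = ["A", "G", "L", "V"]                # a tiny amino-acid alphabet

# Each toy training example pairs an amino-acid sequence with its state sequence.
data = [("AGLVA", "HHEEC"),
        ("VLGAA", "CCHHH"),
        ("GALLV", "EECCH")]

s_idx = {s: i for i, s in enumerate(states)}
o_idx = {o: i for i, o in enumerate(symbols)}

trans = np.zeros((len(states), len(states)))
emit = np.zeros((len(states), len(symbols)))

for seq, labels in data:
    for k, (aa, st) in enumerate(zip(seq, labels)):
        emit[s_idx[st], o_idx[aa]] += 1       # count emissions per state
        if k > 0:
            trans[s_idx[labels[k - 1]], s_idx[st]] += 1   # count state transitions

# Normalise counts (with add-one smoothing) into probability matrices.
trans = (trans + 1) / (trans + 1).sum(axis=1, keepdims=True)
emit = (emit + 1) / (emit + 1).sum(axis=1, keepdims=True)
print("transition probabilities:\n", np.round(trans, 3))
print("emission probabilities:\n", np.round(emit, 3))
```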

Keywords: model parameters, expectation maximization algorithm, protein secondary structure prediction, bioinformatics

Procedia PDF Downloads 444
17754 Estimation of Reservoir Capacity and Sediment Deposition Using Remote Sensing Data

Authors: Odai Ibrahim Mohammed Al Balasmeh, Tapas Karmaker, Richa Babbar

Abstract:

In this study, the reservoir capacity and sediment deposition were estimated using remote sensing data. The satellite images were synchronized with water level and storage capacity to find the change in sediment deposition due to soil erosion and transport by streamflow. The spread area of the water body was estimated using spectral indices, e.g., the normalized difference vegetation index (NDVI) and the normalized difference water index (NDWI). The 3D reservoir bathymetry was modeled by integrating water level, storage capacity, and area. From the models for different time spans, the change in reservoir storage capacity was estimated. Another reservoir with known water level, storage capacity, area, and sediment deposition was used to validate the estimation technique. The t-test was used to assess the agreement between the observed and estimated reservoir capacity and sediment deposition.
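
A minimal sketch of the water-area step: NDWI is computed per pixel from the green and near-infrared bands and thresholded to map open water, and the surface area follows from the pixel count. The random reflectance arrays, the zero threshold and the 30 m pixel size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
green = rng.uniform(0.02, 0.30, size=(500, 500))   # green-band reflectance (stand-in)
nir = rng.uniform(0.01, 0.40, size=(500, 500))     # near-infrared reflectance (stand-in)

ndwi = (green - nir) / (green + nir + 1e-9)        # avoid division by zero
water_mask = ndwi > 0.0                            # NDWI threshold for open water

pixel_area_m2 = 30.0 * 30.0                        # e.g. Landsat-like resolution
water_area_km2 = water_mask.sum() * pixel_area_m2 / 1e6
print(f"estimated water-surface area: {water_area_km2:.2f} km2")
```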

Keywords: satellite data, normalized difference vegetation index, NDVI, normalized difference water index, NDWI, reservoir capacity, sedimentation, t-test hypothesis

Procedia PDF Downloads 141