Search results for: model estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 17522

16682 Levy Model for Commodity Pricing

Authors: V. Benedico, C. Anacleto, A. Bearzi, L. Brice, V. Delahaye

Abstract:

The aim of the present paper is to construct an affordable and reliable commodity price model based on a recalculation of cost through time, which makes potential risks visible and thus supports more appropriate forecasting decisions. Attention is focused here on the Lévy model, which is more reliable and realistic than the classical Gaussian random model because it accounts for the abrupt jumps observed during sudden price variations. In an application to the energy trading sector, where it has never been used before, the equations of the Lévy model were written for electricity pricing in the European market. Parameters were set so as to predict and simulate the price and its evolution through time with remarkable accuracy. As the Lévy model predicts, the results show significant spikes that reach unconventional levels, contrary to the currently used Brownian model.
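
For readers who want to see the jump effect the abstract describes, the sketch below (not part of the paper) simulates a geometric Brownian motion path and a Merton-style jump-diffusion path, one simple member of the Lévy family, on the same Gaussian draws; all parameter values are illustrative assumptions rather than the study's calibrated electricity-market values.

```python
import numpy as np

def simulate_paths(s0, mu, sigma, lam, jump_sigma, T=1.0, n=252, seed=0):
    """One GBM path and one Merton jump-diffusion path on shared Brownian draws."""
    rng = np.random.default_rng(seed)
    dt = T / n
    dW = rng.normal(0.0, np.sqrt(dt), n)
    drift = (mu - 0.5 * sigma**2) * dt
    # Poisson jump arrivals; for small dt, at most one jump per step is typical
    jumps = rng.poisson(lam * dt, n) * rng.normal(0.0, jump_sigma, n)
    gbm = s0 * np.exp(np.cumsum(drift + sigma * dW))
    levy = s0 * np.exp(np.cumsum(drift + sigma * dW + jumps))
    return gbm, levy

# Illustrative parameters: ~12 jumps/year with log-size spikes of s.d. 0.4
gbm, levy = simulate_paths(s0=50.0, mu=0.05, sigma=0.3, lam=12.0, jump_sigma=0.4)
print(f"max GBM price: {gbm.max():.1f}  max jump-diffusion price: {levy.max():.1f}")
```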

Keywords: commodity pricing, Lévy Model, price spikes, electricity market

Procedia PDF Downloads 412
16681 Exploration and Evaluation of the Effect of Multiple Countermeasures on Road Safety

Authors: Atheer Al-Nuaimi, Harry Evdorides

Abstract:

Every day, many people die or are injured or disabled on roads around the world, which calls for more specific treatments of transportation safety issues. The International Road Assessment Programme (iRAP) model is one of the comprehensive road safety models; it accounts for many factors that affect road safety in a cost-effective way in low- and middle-income countries. In the iRAP model, road safety is divided into five star ratings, from 1 star (the lowest level) to 5 stars (the highest level). These ratings are based on a star rating score calculated by the iRAP methodology from road attributes, traffic volumes, and operating speeds. The outcome of the methodology is a set of treatments that can be used to improve road safety and reduce the number of fatalities and serious injuries (FSI). These countermeasures can be applied separately, as a single countermeasure, or combined as multiple countermeasures at a location. There is general agreement that the effectiveness of a countermeasure is subject to consistent losses when it is used in combination with other countermeasures; that is, the crash-reduction estimates of individual countermeasures cannot simply be added together. The iRAP methodology therefore applies multiple-countermeasure adjustment factors to predict the reduction in the effectiveness of road safety countermeasures when more than one countermeasure is selected. A multiple-countermeasure correction factor is computed for every 100-meter segment and for every crash type. A limitation of this approach, however, is a likely over-estimation of the predicted crash reduction. This study aims to adjust this correction factor by developing new models that calculate the effect of multiple countermeasures on the number of fatalities for a location or an entire road. Regression models were used to establish relationships between crash frequencies and the factors that affect their rates. Multiple linear regression, negative binomial regression, and Poisson regression techniques were used to develop models that address the effectiveness of multiple countermeasures. Analyses conducted with the R Project for Statistical Computing showed that a model developed with the negative binomial regression technique gives more reliable predictions of the number of fatalities after the implementation of multiple road safety countermeasures than the iRAP model. The results also showed that, because of overdispersion and standard-error issues, the negative binomial approach gives more precise results than the multiple linear and Poisson regression techniques.
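
As a concrete illustration of the modelling step, the sketch below fits a negative binomial regression to synthetic 100-meter-segment crash counts; the variable names and the simulated effect of adding countermeasures are assumptions for demonstration, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 500  # synthetic 100-meter road segments
df = pd.DataFrame({
    "log_aadt": np.log(rng.uniform(1e3, 2e4, n)),  # traffic volume (log AADT)
    "speed": rng.uniform(40, 100, n),              # operating speed, km/h
    "n_cm": rng.integers(0, 4, n),                 # countermeasures applied
})
# Overdispersed FSI counts: each added countermeasure scales risk by exp(-0.22)
lam = np.exp(-6.0 + 0.9 * df["log_aadt"] + 0.01 * df["speed"] - 0.22 * df["n_cm"])
df["fsi"] = rng.poisson(lam * rng.gamma(2.0, 0.5, n))  # gamma mixing -> overdispersion

nb = smf.negativebinomial("fsi ~ log_aadt + speed + n_cm", df).fit(disp=False)
factor = np.exp(nb.params["n_cm"])
print(f"estimated crash-reduction factor per added countermeasure: {factor:.3f}")
```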

Keywords: international road assessment program, negative binomial, road multiple countermeasures, road safety

Procedia PDF Downloads 224
16680 Economic Impacts of Sanctuary and Immigration and Customs Enforcement Policies: Inclusive and Exclusive Institutions

Authors: Alexander David Natanson

Abstract:

This paper focuses on the effect of sanctuary and Immigration and Customs Enforcement (ICE) policies on local economies. "Sanctuary cities" are municipal jurisdictions that limit their cooperation with the federal government's efforts to enforce immigration. Using county-level data from the American Community Survey and ICE data on economic indicators from 2006 to 2018, this study isolates the effects of local immigration policies on U.S. counties. This is accomplished by simultaneously studying the policies' effects in counties where immigrant families are persecuted via collaboration with ICE, in contrast to counties that provide protections. The analysis uses a difference-in-differences and two-way fixed-effects model. Results are robust to nearest-neighbor matching, to the random assignment of treatment, to estimations using different cutoffs for immigration policies, and to a regression discontinuity model comparing bordering counties with opposite policies. Results are also robust after restricting the data to single-year policy adoption, using the Sun and Abraham estimator, and with event-study estimation to address the staggered-treatment issue. In addition, the study reverses the estimation to understand what drives the choice of policy and to detect possible reverse-causality biases in the estimated policy impact on economic factors. The evidence demonstrates that providing protections to undocumented immigrants increases economic activity. The estimates show gains in per capita income of 3.1 to 7.2 percent, in median wages of 1.7 to 2.6 percent, and in GDP of 2.4 to 4.1 percent. Regarding labor, sanctuary counties saw total employment increase by 2.3 to 4 percent, while the unemployment rate declined by 12 to 17 percent. The data further show that ICE policies have no statistically significant effects on income, median wages, or GDP, but adverse effects on total employment, with declines of 1 to 2 percent, mostly in rural counties, and an increase in unemployment of around 7 percent in urban counties. In addition, results show a decline in the foreign-born population in ICE counties but no change in sanctuary counties. The study finds similar results for sanctuary counties when the data are separated by urban and rural status, educational attainment, gender, ethnic group, economic quintile, and number of business establishments. The takeaway from this study is that institutional inclusion underpins the dynamic nature of an economy: inclusion allows for economic expansion by extending fundamental freedoms to newcomers. Inclusive policies show positive effects on economic outcomes with no evident increase in population. To make sense of these results, the hypothesis and theoretical model propose that inclusive immigration policies condition the effect of immigration by reducing the uncertainties and constraints on immigrants' interaction in their communities, lowering the costs of fear of deportation and of constant criminalization, and allowing immigrants to optimize their human capital.
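
The core estimator is a two-way fixed-effects difference-in-differences regression; the sketch below reproduces that design on synthetic county-year data with a known treatment effect. All numbers are invented for illustration, and this simple TWFE specification stands in for the paper's fuller robustness battery.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n_counties, years = 200, range(2006, 2019)
sanctuary = rng.random(n_counties) < 0.3           # which counties ever adopt
adopt_year = rng.integers(2010, 2015, n_counties)  # staggered adoption dates

rows = []
for c in range(n_counties):
    county_fe = rng.normal(0, 0.10)
    for t in years:
        treated = int(sanctuary[c] and t >= adopt_year[c])
        y = 10 + county_fe + 0.02 * (t - 2006) + 0.05 * treated + rng.normal(0, 0.05)
        rows.append((c, t, treated, y))
df = pd.DataFrame(rows, columns=["county", "year", "treated", "log_income"])

# County and year dummies absorb level differences; errors clustered by county
twfe = smf.ols("log_income ~ treated + C(county) + C(year)", df).fit(
    cov_type="cluster", cov_kwds={"groups": df["county"]})
print(f"DiD estimate: {twfe.params['treated']:.3f} (true effect: 0.050)")
```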

Keywords: inclusive and exclusive institutions, post matching, fixed effects, time trend, regression discontinuity, difference-in-difference, randomization inference, Sun and Abraham estimator

Procedia PDF Downloads 66
16679 Loan Portfolio Quality and Bank Soundness in the ECCAS: An Empirical Evaluation of Cameroonian Banks

Authors: Andre Kadandji, Mouhamadou Fall, Francois Koum Ekalle

Abstract:

This paper aims to analyze bank soundness through the effects of loan portfolio impairment in the Cameroonian banking sector, using the Z-score. The approach is to test the effect of other CAMEL indicators and of macroeconomic indicators on the relationship between non-performing loans and the soundness of Cameroonian banks. We use dynamic panel data covering 13 banks over the period 2010-2013. The analysis provides model equations embedded in the panel data. For the estimation, we use the generalized method of moments to understand the effects of macroeconomic and CAMEL-type variables on the ability of Cameroonian banks to withstand a shock. We find that management quality and macroeconomic variables neutralize the effects of non-performing loans on bank soundness.
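
For reference, the Z-score used as the soundness measure is conventionally computed as (mean ROA + mean equity/assets) divided by the standard deviation of ROA; a minimal sketch on made-up bank-year data:

```python
import pandas as pd

# Toy panel (illustrative numbers): yearly ROA and equity-to-assets, 2010-2013
data = pd.DataFrame({
    "bank": ["A"] * 4 + ["B"] * 4,
    "roa":  [0.012, 0.010, 0.008, 0.011, 0.020, -0.005, 0.015, 0.002],
    "eta":  [0.09, 0.09, 0.08, 0.09, 0.11, 0.10, 0.10, 0.09],
})

def z_score(g):
    # Number of ROA standard deviations a loss must reach to exhaust equity:
    # a higher Z-score means a sounder bank (lower insolvency risk)
    return (g["roa"].mean() + g["eta"].mean()) / g["roa"].std(ddof=1)

print(data.groupby("bank")[["roa", "eta"]].apply(z_score).round(1))
```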

Keywords: loan portfolio, sound banking, Z-score, dynamic panel

Procedia PDF Downloads 275
16678 Finding DEA Targets Using Multi-Objective Programming

Authors: Farzad Sharifi, Raziyeh Shamsi

Abstract:

In this paper, we obtain the projection of inefficient units in data envelopment analysis (DEA) in the case of stochastic inputs and outputs, using the multi-objective programming (MOP) structure. In some problems the inputs might be stochastic while the outputs are deterministic, and vice versa. In such cases we propose a multi-objective DEA-R model, because in some situations (e.g., when unnecessary and irrational weights in the BCC model reduce the efficiency score) a DMU that the DEA-R model considers efficient is introduced as inefficient by the BCC model. In other cases, only the ratio of stochastic data may be available (e.g., the ratio of stochastic inputs to stochastic outputs). We therefore provide a multi-objective DEA model without explicit outputs and prove that the input-oriented MOP DEA-R model under constant returns to scale can be replaced by the MOP-DEA model without explicit outputs under variable returns to scale, and vice versa. Solving the proposed model with interactive methods yields a projection corresponding to the viewpoint of the DM and the analyst, which is nearer to reality and more practical. Finally, an application is provided.
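
For orientation, the sketch below solves the deterministic input-oriented CCR envelopment problem with a linear program; it illustrates how DEA efficiency scores and projections are obtained in the simplest case, not the stochastic multi-objective DEA-R formulation proposed in the paper. The data are invented.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: 5 DMUs, 2 inputs (rows of X), 1 output (rows of Y)
X = np.array([[4.0, 7.0, 8.0, 4.0, 2.0],
              [3.0, 3.0, 1.0, 2.0, 4.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0, 1.0]])
m, n = X.shape
s = Y.shape[0]

def ccr_efficiency(j0):
    """Input-oriented CCR efficiency of DMU j0 (envelopment form, CRS).
    Decision variables: [theta, lambda_1, ..., lambda_n]."""
    c = np.zeros(n + 1)
    c[0] = 1.0                                # minimize theta
    A_ub, b_ub = [], []
    for i in range(m):                        # sum(lam * x_i) <= theta * x_i,j0
        A_ub.append(np.r_[-X[i, j0], X[i]]); b_ub.append(0.0)
    for r in range(s):                        # sum(lam * y_r) >= y_r,j0
        A_ub.append(np.r_[0.0, -Y[r]]); b_ub.append(-Y[r, j0])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (n + 1))
    return res.fun

for j in range(n):
    print(f"DMU {j}: efficiency = {ccr_efficiency(j):.3f}")
```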

Keywords: DEA, MOLP, stochastic, DEA-R

Procedia PDF Downloads 384
16677 Performance and Limitations of Likelihood Based Information Criteria and Leave-One-Out Cross-Validation Approximation Methods

Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer

Abstract:

Model assessment, in the Bayesian context, involves evaluating goodness-of-fit and comparing several alternative candidate models for predictive accuracy and improvements. In posterior predictive checks, data simulated under the fitted model are compared with the actual data. Predictive model accuracy is estimated using information criteria such as the Akaike information criterion (AIC), the Bayesian information criterion (BIC), the deviance information criterion (DIC), and the Watanabe-Akaike information criterion (WAIC). The goal of an information criterion is to obtain an unbiased measure of out-of-sample prediction error. Since posterior checks use the data twice, once for model estimation and once for testing, a bias correction that penalises model complexity is incorporated in these criteria. Cross-validation (CV) is another method for examining out-of-sample prediction accuracy. Leave-one-out cross-validation (LOO-CV) is the most computationally expensive CV variant, as it fits as many models as there are observations. Importance sampling (IS), truncated importance sampling (TIS), and Pareto-smoothed importance sampling (PSIS) are generally used as approximations to exact LOO-CV; they reuse the existing MCMC results and thus avoid the expensive computation. The reciprocals of the predictive densities, calculated over the posterior draws for each observation, are treated as the raw importance weights. These are in turn used to calculate the approximate LOO-CV of the observation as a weighted average of posterior densities. In IS-LOO the raw weights are used directly; in contrast, the larger weights are replaced by their modified truncated weights in calculating TIS-LOO and PSIS-LOO. Although information criteria and LOO-CV are unable to reflect goodness-of-fit in an absolute sense, their differences can be used to measure the relative performance of the models of interest. However, the use of these measures is only valid under specific circumstances. This study developed 11 models using normal, log-normal, gamma, and Student's t distributions to improve PCR stutter prediction with forensic data. These models comprise four with profile-wide variances, four with locus-specific variances, and three two-component mixture models. The mean stutter ratio in each model is modeled as a locus-specific simple linear regression against a feature of the alleles under study known as the longest uninterrupted sequence (LUS). The use of AIC, BIC, DIC, and WAIC in model comparison has some practical limitations. Even though IS-LOO, TIS-LOO, and PSIS-LOO are considered approximations of exact LOO-CV, the study observed some drastic deviations in their results. However, there are some interesting relationships among the logarithms of the pointwise predictive densities (lppd) calculated under WAIC and under the LOO approximation methods. The estimated overall lppd is a relative measure that reflects the overall goodness-of-fit of the model. Parallel log-likelihood profiles were observed for models conditional on equal posterior variances in lppds. This study illustrates the limitations of the information criteria in practical model-comparison problems. In addition, the relationships among the LOO-CV approximation methods and WAIC, together with their limitations, are discussed. Finally, useful recommendations that may help in practical model comparisons with these methods are provided.
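
A minimal sketch of the computations described above, hand-rolled with NumPy on a fake posterior: WAIC from the pointwise log-likelihood matrix, and plain IS-LOO via the harmonic-mean identity on the reciprocal importance weights (TIS and PSIS would additionally truncate or smooth those weights before averaging).

```python
import numpy as np
from scipy.special import logsumexp

def waic_and_is_loo(loglik):
    """loglik: (S draws, N observations) matrix of pointwise log-likelihoods."""
    S, N = loglik.shape
    # lppd: log pointwise predictive density, averaged over posterior draws
    lppd = logsumexp(loglik, axis=0) - np.log(S)
    p_waic = loglik.var(axis=0, ddof=1)       # effective number of parameters
    waic = -2 * (lppd.sum() - p_waic.sum())
    # IS-LOO: raw weights are reciprocals of the predictive densities, so
    # log p(y_i | y_-i) ~= -(logsumexp(-loglik_i) - log S)
    loo = -(logsumexp(-loglik, axis=0) - np.log(S))
    return waic, -2 * loo.sum()

# Fake posterior: 2000 draws of a normal-mean model scored on 50 observations
rng = np.random.default_rng(1)
y = rng.normal(0, 1, 50)
mu_draws = rng.normal(y.mean(), 1 / np.sqrt(len(y)), 2000)
loglik = -0.5 * np.log(2 * np.pi) - 0.5 * (y[None, :] - mu_draws[:, None]) ** 2
waic, looic = waic_and_is_loo(loglik)
print(f"WAIC = {waic:.1f}, IS-LOO IC = {looic:.1f}")
```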

Keywords: cross-validation, importance sampling, information criteria, predictive accuracy

Procedia PDF Downloads 376
16676 Model Predictive Controller for Pasteurization Process

Authors: Tesfaye Alamirew Dessie

Abstract:

Our study focuses on developing a Model Predictive Controller (MPC) and evaluating it against a traditional PID controller for a pasteurization process. The dynamics of the pasteurization process were identified from the experimental data using system identification. The quality of several model architectures was evaluated using best fit with validation data, residual analysis, and stability analysis. The auto-regressive with exogenous input (ARX322) model fit the validation data of the pasteurization process by roughly 80.37 percent. The ARX322 model structure was then used to design the MPC and PID control techniques. After comparing controller performance on settling time, overshoot percentage, and stability analysis, it was found that the MPC controller outperforms PID on those parameters.
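
A sketch of the identification step: fitting an ARX(3, 2, 2) structure by ordinary least squares on a toy input-output record. The plant and noise below are invented; the paper's model was estimated from pasteurizer data.

```python
import numpy as np

def fit_arx(u, y, na=3, nb=2, nk=2):
    """Least-squares fit of ARX(na, nb, nk): A(q) y = B(q) u + e, i.e.
    y[t] + a1*y[t-1] + ... + a_na*y[t-na] = b1*u[t-nk] + ... ; returns (a, b)."""
    start = max(na, nb + nk - 1)
    rows, targets = [], []
    for t in range(start, len(y)):
        past_y = [-y[t - i] for i in range(1, na + 1)]  # autoregressive terms
        past_u = [u[t - nk - j] for j in range(nb)]     # delayed input terms
        rows.append(past_y + past_u)
        targets.append(y[t])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta[:na], theta[na:]

# Toy first-order process with a 2-step delay, excited by a random binary input
rng = np.random.default_rng(0)
u = rng.choice([0.0, 1.0], 400)
y = np.zeros(400)
for t in range(2, 400):
    y[t] = 0.9 * y[t - 1] + 0.5 * u[t - 2] + rng.normal(0, 0.01)

a, b = fit_arx(u, y)
print("a:", a.round(3), "(a1 ~ -0.9 in this sign convention)  b:", b.round(3))
```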

Keywords: MPC, PID, ARX, pasteurization

Procedia PDF Downloads 141
16675 Nonlinear Mathematical Model of the Rotor Motion in a Thin Hydrodynamic Gap

Authors: Jaroslav Krutil, Simona Fialová, František Pochylý

Abstract:

A nonlinear mathematical model of mutual fluid-structure interaction is presented in this work. The model is applicable to sealing gaps of general shape. An incompressible fluid and turbulent flow are assumed. The shaft carries a rotational and precession motion, and flow passes axially through the gap. The resulting additional mass, damping, and stiffness matrices may be used in the solution of the rotor dynamics. This mathematical model is expected to be used particularly in hydraulic machines. The control-volume method in ANSYS Fluent was used for the simulation. The obtained pressure and velocity fields are used in the mathematical model of the additional effects.

Keywords: nonlinear mathematical model, CFD modeling, hydrodynamic sealing gap, matrices of mass, stiffness, damping

Procedia PDF Downloads 516
16674 Estimation of Fragility Curves Using Proposed Ground Motion Selection and Scaling Procedure

Authors: Esra Zengin, Sinan Akkar

Abstract:

Reliable and accurate prediction of nonlinear structural response requires the specification of appropriate earthquake ground motions to be used in nonlinear time history analysis. Current research has mainly focused on the selection and manipulation of real earthquake records, which can be seen as the most critical step in the performance-based seismic design and assessment of structures. Utilizing amplitude-scaled ground motions that match the target spectra is a commonly used technique for estimating nonlinear structural response. Representative ground motion ensembles are selected to match a target spectrum such as a scenario-based spectrum derived from ground motion prediction equations, a Uniform Hazard Spectrum (UHS), a Conditional Mean Spectrum (CMS), or a Conditional Spectrum (CS). Different sets of criteria exist among the developed methodologies for selecting and scaling ground motions, all with the objective of obtaining a robust estimate of structural performance. This study presents a ground motion selection and scaling procedure that considers the spectral variability at the target demand together with the level of ground motion dispersion. The proposed methodology provides a set of ground motions whose response spectra match the target median and the corresponding variance within a specified period interval. A simple and efficient algorithm is used to assemble the ground motion sets. The scaling stage is based on minimizing the error between the scaled median and the target spectrum while the dispersion of the earthquake shaking is preserved along the period interval. The impact of spectral variability on the nonlinear response distribution is investigated at the level of inelastic single-degree-of-freedom systems. To see the effect of different selection and scaling methodologies on fragility curve estimates, the results are compared with those obtained by the CMS-based scaling methodology. The variability in fragility curves due to the consideration of dispersion in the ground motion selection process is also examined.
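
A simplified sketch of the selection-and-scaling idea: each record gets one amplitude scale factor (a shift in log space), and a greedy search builds a set whose log-mean and log-standard deviation track the target median and dispersion over the period range. The target shape, candidate spectra, and set size below are invented for illustration and do not reproduce the paper's algorithm in detail.

```python
import numpy as np

rng = np.random.default_rng(3)
periods = np.linspace(0.1, 2.0, 20)
# Invented target: log-median Sa(T) decaying with T, log-dispersion 0.6
target_mu = np.log(0.4 / periods**0.8)
target_sd = np.full_like(periods, 0.6)

# Candidate records: 500 synthetic log-spectra around a similar shape
cand = (target_mu + rng.normal(0, 0.8, (500, periods.size))
        + rng.normal(0, 0.3, (500, 1)))      # record-to-record level offsets

def greedy_select(cand, mu, sd, n_sel=11):
    """Greedily pick records so the set's log-mean and log-std track the targets."""
    # Amplitude scaling = shifting each log-spectrum to the target's mean level
    scaled = cand - (cand.mean(axis=1) - mu.mean())[:, None]
    chosen = []
    for _ in range(n_sel):
        errs = []
        for j in range(len(scaled)):
            if j in chosen:
                errs.append(np.inf)
                continue
            trial = scaled[chosen + [j]]
            e = np.sum((trial.mean(axis=0) - mu) ** 2)
            if len(trial) > 1:               # match dispersion once it is defined
                e += np.sum((trial.std(axis=0, ddof=1) - sd) ** 2)
            errs.append(e)
        chosen.append(int(np.argmin(errs)))
    return chosen

print("selected records:", greedy_select(cand, target_mu, target_sd))
```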

Keywords: ground motion selection, scaling, uncertainty, fragility curve

Procedia PDF Downloads 568
16673 Profit Efficiency and Technology Adoption of Boro Rice Production in Bangladesh

Authors: Fazlul Hoque, Tahmina Akter Joya, Asma Akter, Supawat Rungsuriyawiboon

Abstract:

Rice is the staple food in Bangladesh, and self-sufficiency in rice production therefore remains a major concern. However, Bangladesh is experiencing insufficient rice production due to high production costs and a low national average productivity of 2.848 ton/ha compared with other rice-growing countries. This study aims to find the profit efficiency, and the determinants of profit efficiency, of Boro rice cultivation in the Manikganj and Dhaka districts of Bangladesh. It also examines technology adoption and the effect of technology adoption on the profit efficiency of Boro rice cultivation. Data were collected from 300 Boro rice-growing households through face-to-face interviews using a structured questionnaire; Frontier version 4.1 and STATA 15 were employed to analyze the data according to the purpose of the study. Maximum likelihood estimates of the specified profit model showed that farmers' profit efficiency varied between 23% and 97%, with a mean of 76%, implying that 24% of profit is lost to a combination of technical and allocative inefficiencies in Boro rice cultivation in the study area. The inefficiency model revealed that the farmer's education level, farm size, seed variety, and training and extension services significantly influence profit inefficiency. The study also showed that the level of the technology adoption index affects profit efficiency. Technology adoption in Boro rice cultivation is influenced by the farmer's education level, farm size, and farm capital.

Keywords: farmer, maximum likelihood estimation, profit efficiency, rice

Procedia PDF Downloads 117
16672 Exploring the Possibility of Islamic Banking as a Viable Alternative to the Conventional Banking Model

Authors: Lavan Vickneson

Abstract:

In today’s modern economy, the conventional banking model is the primary banking system used around the world. A significant problem faced by the conventional banking model is the recurring nature of banking crises. History’s record of banking crises, ranging from the Great Depression to the 2008 subprime mortgage crisis, is testament to the fact that banking crises continue to strike despite the preventive measures in place, such as banks’ minimum capital requirements and deposit guarantee schemes. If banking crises continue to occur despite these preventive measures, it follows that there are inherent flaws in the conventional banking model itself. In light of this, a possible alternative is Islamic banking. To date, Islamic banking has been a niche market, predominantly serving Muslim investors. This paper explores the possibility of Islamic banking being more than just a niche market and playing a greater role in banking sectors around the world as a viable alternative to the conventional banking model.

Keywords: bank crises, conventional banking model, Islamic banking, niche market

Procedia PDF Downloads 257
16671 FPGA Based Vector Control of PM Motor Using Sliding Mode Observer

Authors: Hanan Mikhael Dawood, Afaneen Anwer Abood Al-Khazraji

Abstract:

The paper presents an investigation of a field-oriented control strategy for a Permanent Magnet Synchronous Motor (PMSM) based on hardware-in-the-loop (HIL) simulation over a wide speed range. Sensorless rotor position estimation using a sliding mode observer for the permanent magnet synchronous motor is illustrated, considering the effects of magnetic saturation between the d and q axes. The cross-saturation between the d and q axes has been calculated by finite-element analysis; the inductance measurement therefore accounts for saturation and cross-saturation, which are used to obtain suitable id-characteristics in the base and flux-weakening regions. Real-time matrix multiplication is implemented in a Field Programmable Gate Array (FPGA) using floating-point arithmetic; the Quartus-II environment is used to develop the FPGA designs, which are then downloaded into the development kit. A dSPACE DS1103 is utilized for Pulse Width Modulation (PWM) switching and for the controller. The hardware-in-the-loop results correspond to those from the Matlab simulation. Various dynamic conditions have been investigated.

Keywords: magnetic saturation, rotor position estimation, sliding mode observer, hardware in the loop (HIL)

Procedia PDF Downloads 508
16670 An E-Government Implementation Model for Peruvian State Companies Based on COBIT 5.0: Definition and Goals of the Model

Authors: M. Bruzza, M. Tupia, F. Rodríguez

Abstract:

As part of the regulatory compliance process and the streamlining of public administration, the Peruvian government has implemented the National E-Government Plan in all state institutions, with the aim of providing citizens with solid services based on the use of Information and Communications Technologies (ICT). As part of the regulations, the requisites to be met by public institutions have been issued. However, the lack of an implementation model was detected: one that can serve as a guide to such institutions in materializing the organizational and technological structures needed to provide the required digital services. This paper develops an implementation model of electronic government (e-government) for Peru’s state institutions, in compliance with current regulations and based on the COBIT 5.0 framework. Furthermore, the paper introduces phase 1 of this model: business and IT goals, the goals cascade, and the future process model.

Keywords: e-government, u-government, COBIT, implementation model

Procedia PDF Downloads 306
16669 Optimal Tetra-Allele Cross Designs Including Specific Combining Ability Effects

Authors: Mohd Harun, Cini Varghese, Eldho Varghese, Seema Jaggi

Abstract:

Hybridization crosses play a vital role in breeding experiments for evaluating the combining abilities of individual parental lines or crosses in the creation of lines with desirable qualities. There are various ways of obtaining progenies and studying the combining ability effects of the lines in a breeding programme; some of the most common methods are the diallel or two-way cross, the triallel or three-way cross, and the tetra-allele or four-way cross. These techniques help breeders improve quantitative traits of economic as well as nutritional importance in crops and animals. Among these methods, the tetra-allele cross provides extra information in terms of higher specific combining ability (sca) effects, and the hybrids thus produced exhibit individual as well as population buffering because of their broad genetic base. Most common commercial corn hybrids are either three-way or four-way cross hybrids. The tetra-allele cross has emerged as the most practical and acceptable scheme for the production of slaughter pigs with fast growth rate, good feed efficiency, and carcass quality, and tetra-allele crosses are mostly used to exploit heterosis in commercial silkworm production. Experimental designs involving tetra-allele crosses have been studied extensively in the literature, and design optimality has also been treated as a researchable issue. In practical situations, it is advisable to include sca effects in the model, as this information is needed by the breeder to improve economically and nutritionally important quantitative traits. Thus, a model that provides information on specific traits by utilizing sca effects along with general combining ability (gca) effects may help breeders deal with various stresses. In this paper, a model for experimental designs involving tetra-allele crosses that incorporates both gca and sca is defined. Optimality aspects of such designs are discussed with sca effects included in the model. Orthogonality conditions are derived for block designs, ensuring that contrasts among the gca effects can be estimated independently of the sca effects after eliminating nuisance factors. A user-friendly SAS macro and a web solution (webPTC) have been developed for the generation and analysis of such designs.

Keywords: general combining ability, optimality, specific combining ability, tetra-allele cross, webPTC

Procedia PDF Downloads 116
16668 Support Vector Machine Based Retinal Therapeutic for Glaucoma Using Machine Learning Algorithm

Authors: P. S. Jagadeesh Kumar, Mingmin Pan, Yang Yung, Tracy Lin Huan

Abstract:

Glaucoma is a group of visual maladies characterized by progressive optic nerve neuropathy, leading to increasing narrowing of the visual field and, eventually, loss of sight. In this paper, a novel support vector machine based retinal therapeutic for glaucoma using a machine learning algorithm is presented. The algorithm is built on a correlation clustering mode and visualizes its computations in multi-dimensional space. Support vector clustering turns out to be comparable to the scale-space approach, which investigates cluster organization by means of a kernel density estimation of the likelihood distribution, where cluster midpoints are identified by the local maxima of the density. The proposed approach achieved a 91% attainment rate on a data set consisting of 500 realistic images of healthy and glaucomatous retinas; the computational benefit of relying on the cluster overlapping system based on the machine learning algorithm therefore gives complete performance in glaucoma therapeutics.
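
Concretely, the classification stage can be set up as in the sketch below, which trains an RBF-kernel SVM on stand-in features; real inputs would be descriptors extracted from the retinal images, and the synthetic data here are an assumption for demonstration only.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical setup: binary labels (glaucoma vs. healthy) on precomputed features
X, y = make_classification(n_samples=500, n_features=30, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Standardize features, then fit an RBF-kernel support vector classifier
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2%}")
```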

Keywords: machine learning algorithm, correlation clustering mode, cluster overlapping system, glaucoma, kernel density estimation, retinal therapeutic

Procedia PDF Downloads 222
16667 Uncertainty Assessment in Building Energy Performance

Authors: Fally Titikpina, Abderafi Charki, Antoine Caucheteux, David Bigaud

Abstract:

The building sector is one of the largest energy consumers, accounting for about 40% of final energy consumption in the European Union. Ensuring building energy performance is a scientific, technological, and sociological matter. To assess a building's energy performance, the consumption predicted or estimated during the design stage is compared with the measured consumption when the building is operational. When evaluating this performance, many buildings show significant differences between the calculated and measured consumption. In order to assess the performance accurately and ensure the thermal efficiency of the building, it is necessary to evaluate the uncertainties involved not only in measurement but also those induced by the propagation of dynamic and static input data through the model being used. The evaluation of measurement uncertainty is based both on knowledge of the measurement process and on the input quantities that influence the measurement result. Measurement uncertainty can be evaluated within the framework of conventional statistics, as presented in the Guide to the Expression of Uncertainty in Measurement (GUM), as well as by Bayesian statistical theory (BST). Another choice is the use of numerical methods such as Monte Carlo simulation (MCS). In this paper, we propose to evaluate the uncertainty associated with the use of a simplified model for estimating the energy consumption of a given building. A detailed review and discussion of these three approaches (GUM, MCS, and BST) is given. An office building has been monitored, and multiple sensors have been mounted at candidate locations to obtain the required data. The monitored zone is composed of six offices and has an overall surface of 102 m². Temperature data, electricity and heating consumption, window opening, and occupancy rate are the features for our research work.
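
The contrast between GUM first-order propagation and Monte Carlo simulation can be shown on a deliberately simplified energy model; the model form, input means, and standard uncertainties below are assumptions for illustration, not the monitored building's values.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Hypothetical simplified model: heating energy E = U * A * (T_in - T_out) * hours
U   = rng.normal(1.2, 0.1, N)      # W/(m^2 K), heat-loss coefficient
A   = rng.normal(102.0, 1.0, N)    # m^2, monitored floor area
dT  = rng.normal(12.0, 0.5, N)     # K, mean indoor-outdoor difference
hrs = rng.normal(1800.0, 50.0, N)  # heating hours over the season

E = U * A * dT * hrs / 1000.0      # kWh, Monte Carlo sample of the output

# GUM first-order propagation: combine relative standard uncertainties
rel = np.sqrt((0.1/1.2)**2 + (1.0/102)**2 + (0.5/12)**2 + (50/1800)**2)
E_nom = 1.2 * 102 * 12 * 1800 / 1000
print(f"MC:  E = {E.mean():.0f} kWh, u(E) = {E.std():.0f} kWh")
print(f"GUM: E = {E_nom:.0f} kWh, u(E) ~ {E_nom * rel:.0f} kWh")
print(f"95% MC coverage interval: "
      f"[{np.percentile(E, 2.5):.0f}, {np.percentile(E, 97.5):.0f}] kWh")
```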

Keywords: building energy performance, uncertainty evaluation, GUM, Bayesian approach, Monte Carlo method

Procedia PDF Downloads 440
16666 A Regression Model for Residual-State Creep Failure

Authors: Deepak Raj Bhat, Ryuichi Yatabe

Abstract:

In this study, a residual-state creep failure model was developed based on the residual-state creep test results of clayey soils. To develop the proposed model, regression analyses were performed in R. The model results for failure time (tf) and critical displacement (δc) were compared with the experimental results and found to be in close agreement with each other. The proposed regression model for residual-state creep failure is expected to be useful for predicting the displacement of different clayey soils in the future.

Keywords: regression model, residual-state creep failure, displacement prediction, clayey soils

Procedia PDF Downloads 381
16665 Towards A New Maturity Model for Information System

Authors: Ossama Matrane

Abstract:

Information systems have become a strategic lever for enterprises: they contribute effectively to aligning business processes with enterprise strategy and are regarded as a source of increased productivity and effectiveness. Many organizations are therefore currently involved in implementing sustainable information systems, and a large number of studies have been conducted over the last decade to define the success factors of information systems. Many studies on maturity models have thus been carried out, some of which refer to the maturity model of an information system. In this article, we report on the development of a maturity model specifically designed for information systems. The model is built from three components derived from the Maturity Model for Information Security Management, the OPM3 project management maturity model, and the COBIT processes for IT governance. Our proposed model defines three maturity stages for building a strong information system that supports the objectives of the organization. It provides a very practical structure with which to assess and improve information system implementation.

Keywords: information system, maturity models, information security management, OPM3, IT governance

Procedia PDF Downloads 424
16664 A Model Architecture Transformation with Approach by Modeling: From UML to Multidimensional Schemas of Data Warehouses

Authors: Ouzayr Rabhi, Ibtissam Arrassen

Abstract:

To provide a complete analysis of the organization and to help decision-making, leaders need relevant data; data warehouses (DW) are designed to meet such needs. However, designing a DW is not trivial, and there is no formal method to derive a multidimensional schema from heterogeneous databases. In this article, we present a model-driven approach to the design of data warehouses. We describe a multidimensional meta-model and specify a set of transformations starting from a Unified Modeling Language (UML) metamodel. In this approach, the UML metamodel and the multidimensional one are both considered platform-independent models (PIM). The first meta-model is mapped into the second through transformation rules expressed in the Query/View/Transformation (QVT) language. The proposal is validated by applying our approach to generate the multidimensional schema of a Balanced Scorecard (BSC) DW. We are interested in the BSC perspectives, which are highly linked to the vision and strategies of an organization.

Keywords: data warehouse, meta-model, model-driven architecture, transformation, UML

Procedia PDF Downloads 139
16663 Location Choice of Firms in an Unequal Length Streets Model: Game Theory Approach as an Extension of the Spoke Model

Authors: Kiumars Shahbazi, Salah Salimian, Abdolrahim Hashemi Dizaj

Abstract:

Location is one of the key elements in the success and survival of industrial centers and has a great impact on reducing the cost of establishing and launching various economic activities. In this study, a streets-with-unequal-lengths model is used; it is the classic extension of the Spoke model, but with an unlimited number of streets of uneven lengths. The results showed that the Spoke model is a special case of the streets-with-unequal-lengths model. According to the results of this study, if the strategy of enterprises and firms is to select both price and location, there is no equilibrium in the game. Furthermore, increased street length leads to increased enterprise profit, and with an increased number of streets, enterprises choose locations far from the center (maximum differentiation) and their output decreases. Moreover, enterprise production tends toward zero as the number of streets goes to infinity, and the perfectly competitive outcome is achieved.

Keywords: locating, Nash equilibrium, streets with unequal length model

Procedia PDF Downloads 182
16662 Static Analysis Deployment Model for Code Quality on Research and Development Projects of Software Development

Authors: Jeong-Hyun Park, Young-Sik Park, Hyo-Teag Jung

Abstract:

This paper presents a static analysis deployment model for code quality in R&D projects of SW development. The proposed model includes the scope of R&D projects, an index for static analysis of source code, an operation model and execution process, and the environments and infrastructure for R&D projects of SW development. The static analysis results of a pilot project are presented as a case study based on the proposed deployment model and environment, together with strategic considerations for the successful operation of the proposed model. The proposed static analysis deployment model will be adapted and improved continuously to upgrade the quality of R&D projects and to increase customer satisfaction with developed source code and products.

Keywords: static analysis, code quality, coding rules, automation tool

Procedia PDF Downloads 498
16661 Observer-based Robust Diagnosis for Wind Turbine System

Authors: Sarah Odofin, Zhiwei Gao

Abstract:

Operations and maintenance of wind turbines have received much attention from researchers due to the rapid expansion of wind farms. This paper explores a novel fault diagnosis scheme that is designed and optimized to be very sensitive to faults and robust to disturbances. The faults considered are sensor faults, for which an augmented observer is used to enlarge the faults while remaining robust to disturbance. A qualitative model-based analysis is proposed for early fault diagnosis, to minimize the downtime mostly caused by component breakdown and to maximize productivity. Simulation results validate the models provided and demonstrate system performance using practical examples of fault types. The results demonstrate the effectiveness of the developed techniques, investigated in a Matlab/Simulink environment.
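
A minimal sketch of the augmented-observer idea on an invented two-state plant: the sensor fault is appended to the state vector, an observer gain is placed on the augmented pair, and the fault estimate is read directly off the extra state. The plant matrices, poles, and fault profile are assumptions for illustration, not the wind turbine model of the paper.

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical 2-state LTI plant with one measurement carrying an additive sensor fault
A = np.array([[0.0, 1.0], [-2.0, -0.8]])
C = np.array([[1.0, 0.0]])

# Augment the state with the (assumed slowly varying) sensor fault f: y = C x + f
A_aug = np.block([[A, np.zeros((2, 1))],
                  [np.zeros((1, 2)), np.zeros((1, 1))]])
C_aug = np.hstack([C, np.ones((1, 1))])

# Observer gain via pole placement on the augmented pair (duality: place on A^T, C^T)
L = place_poles(A_aug.T, C_aug.T, [-4.0, -5.0, -6.0]).gain_matrix.T

# Simulate: a +0.5 sensor fault appears at t = 5 s
dt, T = 0.001, 10.0
x = np.array([1.0, 0.0])      # true plant state
xh = np.zeros(3)              # observer state [x1, x2, f]
for k in range(int(T / dt)):
    t = k * dt
    f = 0.5 if t >= 5.0 else 0.0
    x = x + dt * (A @ x)
    y = C @ x + f                                      # faulty measurement
    xh = xh + dt * (A_aug @ xh + (L * (y - C_aug @ xh)).ravel())

print(f"estimated sensor fault after settling: {xh[2]:.3f} (true 0.5)")
```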

Keywords: wind turbine, condition monitoring, genetic algorithm, fault diagnosis, augmented observer, disturbance robustness, fault estimation, sensor monitoring

Procedia PDF Downloads 478
16660 Games behind Bars: A Longitudinal Study of Inmates' Pro-Social Preferences

Authors: Mario A. Maggioni, Domenico Rossignoli, Simona Beretta, Sara Balestri

Abstract:

The paper presents the results of a longitudinal randomized control trial implemented in 2016 in two state prisons in California (USA). The subjects were randomly assigned to a 10-month program (GRIP, Guiding Rage Into Power) aimed at undoing the destructive behavioral patterns that lead to criminal actions by raising the individual's 'mindfulness'. This study tests whether participation in this program (the treatment), which is based on strong relationships and mutual help, affects the pro-social behavior of participants, in particular with reference to trust and inequality aversion. The research protocol entailed the administration of two questionnaires, including a set of behavioral situations ('games') widely used in the relevant literature, to 80 inmates: 42 treated (enrolled in the program) and 38 controls. The first questionnaire was administered before treatment and randomization took place; the second, at the end of the program. The results of a difference-in-differences estimation procedure show that trust increases significantly among GRIP participants compared to the control group. The result is robust to alternative estimation techniques and to the inclusion of a set of covariates that further control for idiosyncratic characteristics of the prisoners.

Keywords: behavioral economics, difference in differences, longitudinal study, pro-social preferences

Procedia PDF Downloads 372
16659 The Spherical Geometric Model of Absorbed Particles: Application to the Electron Transport Study

Authors: A. Bentabet, A. Aydin, N. Fenineche

Abstract:

The mean penetration depth is of major importance in absorption transport phenomena. Analytical models of light-ion backscattering coefficients from solid targets have been developed by Vicanek and Urbassek. In the present work, we derive a mathematical expression (a deterministic model) for Z1/2. To the best of our knowledge, only one analytical model exists for the electron or positron mean penetration depth in solid targets. In this work, we present a simple spherical geometric model of absorbed particles based on the CSDA scheme. Furthermore, we derive an analytical expression for the mean penetration depth by combining our model with the Vicanek and Urbassek theory. For this, we used the Relativistic Partial Wave Expansion Method (RPWEM) and the optical dielectric model to calculate the elastic cross sections and the ranges, respectively. Good agreement was found with the experimental and theoretical data.
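
For orientation, under the continuous-slowing-down approximation the range is the integral of the reciprocal stopping power over energy, R = ∫ dE / S(E); the sketch below evaluates it for an invented power-law stopping power (the paper instead computes cross sections with RPWEM and ranges from the optical dielectric model).

```python
from scipy.integrate import quad

# Hypothetical power-law stopping power S(E) = k * E^0.8 (eV per angstrom),
# chosen only to make the CSDA integral concrete
k = 0.01
S = lambda E: k * E**0.8

def csda_range(E0, E_min=1.0):
    """Path length travelled while slowing from E0 down to E_min (CSDA)."""
    r, _ = quad(lambda E: 1.0 / S(E), E_min, E0)
    return r

for E0 in (100.0, 500.0, 1000.0):  # initial energies, eV
    print(f"E0 = {E0:6.0f} eV -> CSDA range ~ {csda_range(E0):8.1f} angstrom")
```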

Keywords: Bentabet spherical geometric model, continuous slowing down approximation, stopping powers, ranges, mean penetration depth

Procedia PDF Downloads 624
16658 Retina Registration for Biometrics Based on Characterization of Retinal Feature Points

Authors: Nougrara Zineb

Abstract:

The unique structure of the blood vessels in the retina has been used for biometric identification. The retinal blood vessel pattern is unique in each individual, and it is almost impossible to forge that pattern in a false individual. The advantages of retina biometrics include the high distinctiveness, universality, and stability over time of the blood vessel pattern. Once the creases have been extracted from the images, a registration stage is necessary, since the position of the retinal vessel structure can change between acquisitions due to movements of the eye. Image registration consists of the following steps: feature detection, feature matching, transform model estimation, and image resampling and transformation. In this paper, we present a registration algorithm based on the characterization of retinal feature points. For the experiments, retinal images from the DRIVE database were tested. The proposed methodology achieves good registration results in general.

Keywords: fovea, optic disc, registration, retinal images

Procedia PDF Downloads 249
16657 Expert Review on Conceptual Design Model of iTV Advertising towards Impulse Purchase

Authors: Azizah Che Omar

Abstract:

Various studies have proposed factors of impulse purchase in different advertising mediums such as websites, mobile, traditional retail stores, and traditional television. However, to the best of the researchers' knowledge, none of the impulse purchase models is dedicated to impulse purchase tendency for interactive TV (iTV) advertising. Therefore, a conceptual design model of interactive television advertising toward impulse purchase (iTVAdIP) was developed. The focus of this study is to evaluate the conceptual design model of iTVAdIP through expert review. The findings show that the majority of expert reviewers agreed that the iTVAdIP conceptual design model is applicable to the development of interactive television advertising and will increase the effectiveness of advertising. This study also presents the reviewed conceptual design model of iTVAdIP.

Keywords: impulse purchase, interactive television advertising, persuasive

Procedia PDF Downloads 338
16656 Evaluation of Expected Annual Loss Probabilities of RC Moment Resisting Frames

Authors: Saemee Jun, Dong-Hyeon Shin, Tae-Sang Ahn, Hyung-Joon Kim

Abstract:

Building loss estimation methodologies, which have advanced considerably in recent decades, are usually used to estimate the socio-economic impacts resulting from seismic structural damage. In accordance with these methods, this paper presents the evaluation of the annual loss probability of a reinforced concrete moment-resisting frame designed according to the Korean Building Code. The annual loss probability is defined by (1) a fragility curve obtained from a capacity spectrum method similar to the method adopted in HAZUS, and (2) a seismic hazard curve derived from annual frequencies of exceedance per peak ground acceleration. Seismic fragilities are computed to calculate the annual loss probability of a given structure using functions depending on structural capacity, seismic demand, structural response, and the probability of exceeding damage-state thresholds. This study carried out a nonlinear static analysis to obtain the capacity of an RC moment-resisting frame selected as a prototype building. The analysis results show that the probability of extensive structural damage in the prototype building is expected to be 0.004% in a year.
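
Numerically, once a lognormal fragility curve and a hazard curve are available, the annual probability of reaching the damage state is the integral of the fragility against the hazard-curve slope; a sketch with invented parameter values (not the prototype frame's fragility or the Korean hazard data):

```python
import numpy as np
from scipy.stats import lognorm

# Hypothetical fragility for the 'extensive damage' state: lognormal in PGA
theta, beta = 0.9, 0.5   # median capacity (g) and log-standard deviation
fragility = lognorm(s=beta, scale=theta).cdf

# Hypothetical hazard curve: annual frequency of exceedance lam(a) = k0 * a^-k1
k0, k1 = 1e-4, 2.5
pga = np.linspace(0.01, 3.0, 3000)
lam = k0 * pga**-k1

# Annual damage probability: P = integral of F(a) * |d lam / da| da
dlam_da = np.gradient(lam, pga)
p_annual = np.trapz(fragility(pga) * -dlam_da, pga)
print(f"annual probability of extensive damage ~ {p_annual:.3e}")
```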

Keywords: expected annual loss, loss estimation, RC structure, fragility analysis

Procedia PDF Downloads 386
16655 Presenting the Mathematical Model to Determine Retention in the Watersheds

Authors: S. Shamohammadi, L. Razavi

Abstract:

Based on the principal concepts of the SCS-CN model, this paper presents a new mathematical model for the computation of retention potential (S). In the mathematical model, not only are the precipitation-runoff concepts of the SCS-CN model precisely represented in mathematical form, but new concepts, called "maximum retention" and "total retention", are introduced, and the concepts of potential retention capacity, maximum retention, and total retention are separated from each other. In the proposed model, actual retention (F), maximum actual retention (Fmax), total retention (S), maximum retention (Smax), and potential retention (Sp) are clearly defined for the first time, so that Sp is not a variable but a function of the morphological characteristics of the watershed. Indeed, based on the mathematical relation of the conceptual curve of the SCS-CN model, the proposed model provides a new method for computing actual retention in a watershed and simply determines runoff from it. In the corresponding relations, in addition to precipitation (P), initial retention (Ia), cumulative actual retention (F), total retention (S), runoff (Q), antecedent moisture (M), and potential retention (Sp), we introduce Fmax and Fmin, referring to maximum and minimum actual retention, respectively, as well as ksh, a coefficient that depends on the morphological characteristics of the watershed. Advantages of the modified version over the original model include better precision, higher performance, easier calibration, and faster computation.
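
For context, the baseline SCS-CN relations the model builds on are Q = (P - Ia)² / (P - Ia + S), with S = 25400/CN - 254 (mm) and Ia = 0.2 S; a minimal sketch of that classical form (the paper's refinements separating F, Fmax, Smax, and Sp are not reproduced here):

```python
def scs_runoff(P, CN, ia_ratio=0.2):
    """Classic SCS-CN direct runoff (all depths in mm).
    S is the potential maximum retention, Ia the initial abstraction."""
    S = 25400.0 / CN - 254.0       # retention from the curve number
    Ia = ia_ratio * S
    if P <= Ia:
        return 0.0                 # all rainfall abstracted, no runoff
    return (P - Ia) ** 2 / (P - Ia + S)

for P in (20.0, 50.0, 100.0):
    print(f"P = {P:5.1f} mm, CN = 75 -> Q = {scs_runoff(P, 75):6.1f} mm")
```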

Keywords: model, mathematical, retention, watershed, SCS

Procedia PDF Downloads 437
16654 High Speed Motion Tracking with Magnetometer in Nonuniform Magnetic Field

Authors: Jeronimo Cox, Tomonari Furukawa

Abstract:

Magnetometers have become more popular in inertial measurement units (IMUs) for their ability to correct estimates using the earth's magnetic field. Accelerometer- and gyroscope-based packages fail with dead-reckoning errors accumulated over time. Localization with magnetometer-inclusive IMUs has become popular in robotic applications as a way to track the odometry of slower-speed robots. With high-speed motions, the error accumulates over smaller periods of time, making such motions difficult to track with an IMU. Tracking a high-speed motion is especially difficult with limited observability: visual obstruction leaves motion-tracking cameras unusable, and when motions are too dynamic for estimation techniques reliant on the observability of the gravity vector, the use of magnetometers is further justified. As available magnetometer calibration methods are limited by the assumption that the background magnetic field is uniform, estimation in nonuniform magnetic fields is problematic. Hard iron distortion is a distortion of the magnetic field by other objects that produce magnetic fields. This kind of distortion is often observed as the offset of the center of the data points from the origin when a magnetometer is rotated. The magnitude of hard iron distortion depends on proximity to the distortion sources. Soft iron distortion is more related to the scaling of the axes of the magnetometer sensors; hard iron distortion is the larger contributor to attitude estimation error with magnetometers. Indoor environments, or spaces inside ferrite-based structures such as building reinforcements or a vehicle, often cause distortions that vary with proximity. As positions correlate with areas of distortion, magnetometer localization methods include producing spatial maps of the magnetic field and collecting distortion signatures to better aid location tracking. The goal of this paper is to compare magnetometer methods that do not need pre-produced magnetic field maps, since mapping the magnetic field in some spaces can be costly and inefficient. Dynamic measurement fusion is used to track the motion of a multi-link system with IMUs. Conventional calibration by data collection of rotation at a static point, real-time estimation of calibration parameters at each time step, and the use of two magnetometers to determine local hard iron distortion are compared to confirm the robustness and accuracy of each technique. With opposite-facing magnetometers, hard iron distortion can be accounted for regardless of position, rather than assuming that hard iron distortion is constant under positional change. The measured motion is a repeatable planar motion of a two-link system connected by revolute joints; the links are translated on a moving base to impulse rotation of the links. The joints are equipped with absolute encoders, and the motion is recorded with cameras, to enable ground-truth comparison for each of the magnetometer methods. While the two-magnetometer method accounts for local hard iron distortion, the method fails where the magnetic field direction in space is inconsistent.
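
As a concrete reference point for the hard-iron discussion, conventional static calibration often reduces to fitting a sphere to raw magnetometer samples collected while rotating the sensor; the least-squares sketch below recovers an invented constant offset, which is exactly the assumption that breaks down in the nonuniform fields studied here.

```python
import numpy as np

def hard_iron_offset(m):
    """Least-squares sphere fit to raw magnetometer samples m (N x 3).
    ||m - c||^2 = r^2 is linear in (c, r^2 - ||c||^2), so solve A z = b."""
    A = np.hstack([2.0 * m, np.ones((len(m), 1))])
    b = np.sum(m**2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3]                      # the hard-iron offset c

# Synthetic rotation data: field magnitude 50 uT, hard-iron offset (12, -7, 30)
rng = np.random.default_rng(2)
dirs = rng.normal(size=(1000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
raw = 50.0 * dirs + np.array([12.0, -7.0, 30.0]) + rng.normal(0, 0.3, (1000, 3))
print("estimated hard-iron offset:", hard_iron_offset(raw).round(2))
```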

Keywords: motion tracking, sensor fusion, magnetometer, state estimation

Procedia PDF Downloads 63
16653 Estimation of Small Hydropower Potential Using Remote Sensing and GIS Techniques in Pakistan

Authors: Malik Abid Hussain Khokhar, Muhammad Naveed Tahir, Muhammad Amin

Abstract:

Energy demand has increased manifold due to increasing population, urban sprawl, and rapid socio-economic improvements. Low water capacity in dams for continuous hydropower generation, land cover, and land use are key parameters that constrain further energy production. The overall installed hydropower capacity of Pakistan is more than 35,000 MW, whereas Pakistan is producing up to 17,000 MW against a requirement of more than 22,000 MW, resulting in a shortfall of 5,000-7,000 MW. There is therefore a dire need to develop small hydropower to meet the upcoming requirements. In this regard, abundant rainfall and snow-fed, fast-flowing perennial tributaries and streams in the northern mountain regions of Pakistan offer gigantic hydropower potential throughout the year. Rivers flowing in Khyber Pakhtunkhwa (KP) province, Gilgit Baltistan (GB), and Azad Jammu & Kashmir (AJK) possess sufficient water availability for rapid energy growth. Against this backdrop, small hydropower plants are considered very suitable measures for a greener environment and a sustainable power option for the development of such regions. The aim of this study is to estimate potential sites for small hydropower plants and the stream distribution in the available basins of the study area. The proposed methodology selects the sites of maximum hydropower potential for hydroelectric generation using the well-established GIS-based hydrological run-off model SWAT on the basins of the Neelum, Kunhar, and Dor Rivers. For validation of the results, the NDWI will be computed to show water concentration in the study area while overlaid on a geospatially enhanced DEM. This study presents an analysis of basins, watersheds, stream links, and flow directions, with slope elevation, for the hydropower potential needed to meet the increasing demand for electricity by installing small hydropower stations. The study may later benefit adjacent regions in the further selection of sites for such small power plants.
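
After candidate sites are identified from the stream network, their potential is typically screened with the hydraulic power equation P = ρ g Q H η; a minimal sketch with invented site values (the discharge-head pairs are placeholders for what a DEM/stream-network analysis would produce):

```python
def hydro_power_kw(Q, H, eta=0.85, rho=1000.0, g=9.81):
    """Hydraulic power of a run-of-river site: P = rho * g * Q * H * eta (in kW)."""
    return rho * g * Q * H * eta / 1000.0

# Hypothetical candidate sites: (discharge in m^3/s, gross head in m)
sites = [(2.5, 35.0), (8.0, 12.0), (1.2, 80.0)]
for Q, H in sites:
    print(f"Q = {Q:4.1f} m3/s, H = {H:5.1f} m -> P ~ {hydro_power_kw(Q, H):7.0f} kW")
```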

Keywords: energy, stream network, basins, SWAT, evapotranspiration

Procedia PDF Downloads 205