Search results for: panel stochastic frontier models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7997

7397 Modeling the Demand for the Healthcare Services Using Data Analysis Techniques

Authors: Elizaveta S. Prokofyeva, Svetlana V. Maltseva, Roman D. Zaitsev

Abstract:

Rapidly evolving data analysis technologies play a large role in understanding the operation of modern healthcare systems and their characteristics. Nowadays, one of the key tasks in urban healthcare is to optimize resource allocation, so the application of data analysis in medical institutions to solve optimization problems determines the significance of this study. The purpose of this research was to establish the dependence between the indicators of the effectiveness of a medical institution and its resources. Hospital discharges by diagnosis, hospital days of in-patients and in-patient average length of stay were selected as the performance indicators and the demand of the medical facility. Hospital beds by type of care, medical technology (magnetic resonance tomography, gamma cameras, angiographic complexes and lithotripters) and physicians characterized the resource provision of medical institutions for the developed models. The data source for the research was the open database of the statistical service Eurostat. The choice of source is due to the fact that the database contains complete and open information necessary for research tasks in the field of public health; in addition, it has a user-friendly interface that allows analytical reports to be built quickly. The study covers 28 European countries for the period from 2007 to 2016. For all countries included in the study with the most accurate and complete data for the period under review, predictive models were developed based on historical panel data. An attempt to improve the quality and the interpretability of the models was made by cluster analysis of the investigated set of countries. The main idea was to assess the similarity of the joint behavior of the variables throughout the time period under consideration, in order to identify groups of similar countries and to construct separate regression models for them. Therefore, the original time series were used as the objects of clustering. The k-medoids clustering algorithm was used, with sampled objects serving as the centers of the obtained clusters, since determining a centroid when working with time series involves additional difficulties. The number of clusters was chosen using the silhouette coefficient. After the cluster analysis it was possible to significantly improve the predictive power of the models: for example, in one of the clusters the MAPE was only 0.82%, which makes it possible to conclude that this forecast is highly reliable in the short term. The predicted values obtained from the developed models have a relatively low level of error and can be used to make decisions on staffing the hospital with medical personnel. The research reveals strong dependencies between the demand for medical services and the modern medical equipment variables, which highlights the importance of the technological component for the successful development of a medical facility. Currently, data analysis has huge potential to significantly improve health services, and medical institutions that are the first to introduce these technologies will certainly have a competitive advantage.
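The clustering step described above can be sketched in a few lines of Python. This is a minimal illustration rather than the authors' code: it assumes each country's indicator history is z-scored into one row of X, runs a plain alternating k-medoids on a precomputed Euclidean distance matrix, and picks the number of clusters with the silhouette coefficient.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.metrics import silhouette_score

def k_medoids(D, k, seed=0, n_iter=100):
    """Plain alternating k-medoids on a precomputed distance matrix D."""
    rng = np.random.default_rng(seed)
    medoids = rng.choice(D.shape[0], size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)
        new = []
        for c in range(k):
            members = np.flatnonzero(labels == c)
            # the medoid minimises the total distance to its cluster members
            new.append(members[np.argmin(D[np.ix_(members, members)].sum(axis=1))])
        if set(new) == set(medoids):
            break
        medoids = np.array(new)
    return np.argmin(D[:, medoids], axis=1)

# X: one z-scored time series (2007-2016) per country, shape (n_countries, n_years)
X = np.random.default_rng(1).normal(size=(28, 10))   # placeholder data
D = squareform(pdist(X))                             # Euclidean distances
scores = {k: silhouette_score(D, k_medoids(D, k), metric="precomputed")
          for k in range(2, 8)}
best_k = max(scores, key=scores.get)                 # silhouette-chosen k
```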

Keywords: data analysis, demand modeling, healthcare, medical facilities

Procedia PDF Downloads 144
7396 Statistical Analysis for Overdispersed Medical Count Data

Authors: Y. N. Phang, E. F. Loh

Abstract:

Many researchers have suggested the use of zero inflated Poisson (ZIP) and zero inflated negative binomial (ZINB) models in modeling over-dispersed medical count data with extra variation caused by excess zeros and unobserved heterogeneity. These studies indicate that ZIP and ZINB consistently provide a better fit than the standard Poisson and negative binomial models for over-dispersed medical count data. In this study, we propose the use of zero inflated inverse trinomial (ZIIT), zero inflated Poisson inverse Gaussian (ZIPIG) and zero inflated strict arcsine (ZISA) models in modeling over-dispersed medical count data. These proposed models are not widely used by researchers, especially in the medical field. The results show that the three suggested models can serve as alternatives in modeling over-dispersed medical count data, as supported by their application to a real-life medical data set. The inverse trinomial, Poisson inverse Gaussian and strict arcsine distributions are discrete distributions with a cubic variance function of the mean. Therefore, ZIIT, ZIPIG and ZISA are able to accommodate data with excess zeros and very heavy tails, and they are recommended for modeling over-dispersed medical count data when ZIP and ZINB are inadequate.
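The zero-inflated mixture idea is easiest to see in the likelihood. The sketch below is a baseline illustration rather than the paper's ZIIT/ZIPIG/ZISA code: it fits the simplest member of the family, a ZIP model, by maximum likelihood with scipy; the same structure carries over when the Poisson count component is swapped for an inverse trinomial or Poisson inverse Gaussian distribution.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def zip_negloglik(params, y):
    """Negative log-likelihood of a zero-inflated Poisson model.
    params = (logit of zero-inflation probability, log of Poisson mean)."""
    pi = 1.0 / (1.0 + np.exp(-params[0]))
    lam = np.exp(params[1])
    y_pos = y[y > 0]
    ll = (y == 0).sum() * np.log(pi + (1 - pi) * np.exp(-lam))   # structural + sampling zeros
    ll += ((np.log(1 - pi) - lam) * len(y_pos)
           + (y_pos * np.log(lam) - gammaln(y_pos + 1)).sum())   # positive counts
    return -ll

y = np.random.default_rng(0).poisson(2.0, 500)
y[np.random.default_rng(1).random(500) < 0.3] = 0                # inject excess zeros
fit = minimize(zip_negloglik, x0=np.zeros(2), args=(y,), method="Nelder-Mead")
pi_hat, lam_hat = 1 / (1 + np.exp(-fit.x[0])), np.exp(fit.x[1])
```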

Keywords: zero inflated, inverse trinomial distribution, Poisson inverse Gaussian distribution, strict arcsine distribution, Pearson’s goodness of fit

Procedia PDF Downloads 544
7395 Environmental and Socioeconomic Determinants of Climate Change Resilience in Rural Nigeria: Empirical Evidence towards Resilience Building

Authors: Ignatius Madu

Abstract:

The study aims at assessing the environmental and socioeconomic determinants of climate change resilience in rural Nigeria. This is necessary because research and development efforts on building the climate change resilience of rural areas in developing countries are usually made without knowledge of the impacts of the inherent rural characteristics that determine household resilience capacities. This has, in many cases, led to costly mistakes, delayed responses, inaccurate outcomes, and other difficulties. Consequently, this assessment becomes crucial not only to policymakers and people living in risk-prone rural environments but also to fill the research gap. To achieve the aim, secondary data were obtained from the Annual Abstract of Statistics 2017, the LSMS-Integrated Surveys on Agriculture and General Household Survey Panel 2015/2016, and the National Agriculture Sample Survey (NASS), 2010/2011. Resilience was calculated by weighting and adding the adaptive, absorptive and anticipatory measures of household variables aggregated at state level, and then regressed against the rural environmental and socioeconomic characteristics influencing it. From the regression, the coefficients of the variables were used to compute the impacts of the variables using the Stochastic Regression of Impacts on Population, Affluence and Technology (STIRPAT) model. The results showed that the northern states generally have low resilience indices and are impacted less by the development indicators. The major determining factors are the percentage of non-poor, environmental protection, road transport development, landholding, agricultural input, population density, dependency ratio (inverse), household assets, education and maternal care. The paper concludes that any effort towards successful resilience building in rural areas of the country should first address these key factors that enhance rural development and wellbeing, since it is better to take action before shocks take place.
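The regression-plus-impacts step can be sketched as follows. This is an illustration, not the study's code: the variable names are assumptions, and the key point is that STIRPAT's multiplicative form is estimated log-log, so the fitted coefficients read directly as impact elasticities.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Illustrative state-level data; column names are assumptions, not the study's.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "resilience": rng.uniform(0.2, 0.9, 37),   # weighted adaptive/absorptive/anticipatory index
    "population_density": rng.uniform(50, 1500, 37),
    "pct_non_poor": rng.uniform(20, 90, 37),
    "road_density": rng.uniform(0.1, 2.0, 37),
})

# STIRPAT is multiplicative, I = a * P^b * A^c * T^d, so it is estimated log-log:
# each fitted coefficient is an elasticity, i.e. the % impact of that driver.
X = sm.add_constant(np.log(df[["population_density", "pct_non_poor", "road_density"]]))
model = sm.OLS(np.log(df["resilience"]), X).fit()
print(model.params)   # elasticities of resilience with respect to each determinant
```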

Keywords: climate change resilience, spatial impacts, STIRPAT model, Nigeria

Procedia PDF Downloads 150
7394 The Strengths and Limitations of the Statistical Modeling of Complex Social Phenomenon: Focusing on SEM, Path Analysis, or Multiple Regression Models

Authors: Jihye Jeon

Abstract:

This paper analyzes the conceptual frameworks of three statistical methods: multiple regression, path analysis, and structural equation modeling. When establishing a research model for the statistical modeling of complex social phenomena, it is important to know the strengths and limitations of these three approaches. This study explores the characteristics, strengths, and limitations of each type of model and suggests some strategies for accurately explaining or predicting the causal relationships among variables. In particular, common modeling mistakes in research on depression and mental health are discussed.

Keywords: multiple regression, path analysis, structural equation models, statistical modeling, social and psychological phenomenon

Procedia PDF Downloads 653
7393 The Effectiveness of Environmental Policy Instruments for Promoting Renewable Energy Consumption: Command-and-Control Policies versus Market-Based Policies

Authors: Mahmoud Hassan

Abstract:

Understanding the impact of market- and non-market-based environmental policy instruments on renewable energy consumption (REC) is crucial for the design and choice of policy packages. This study empirically investigates the effect of the environmental policy stringency (EPS) index and its components on REC in 27 OECD countries over the period from 1990 to 2015, and then uses the results to identify what an appropriate environmental policy mix should look like. Relying on the two-step system GMM estimator, we provide evidence that increasing environmental policy stringency as a whole promotes renewable energy consumption in these 27 developed economies. Moreover, policymakers are able, through market- and non-market-based environmental policy instruments, to increase the use of renewable energy. However, not all of these instruments are effective in achieving this goal. The results indicate that R&D subsidies and trading schemes have a positive and significant impact on REC, while taxes, feed-in tariffs and emission standards have no significant effect. Furthermore, R&D subsidies are more effective than trading schemes in stimulating the use of clean energy. These findings proved to be robust across the three alternative panel techniques used.
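The workhorse specification behind this estimator is a dynamic panel; a plausible form, reconstructed from the description rather than quoted from the paper, is:

```latex
REC_{it} = \alpha \, REC_{i,t-1} + \beta \, EPS_{it} + \gamma' X_{it} + \mu_i + \varepsilon_{it}
```

Because the lagged dependent variable is correlated with the country effect, OLS is biased here, which is what motivates the Blundell-Bond two-step system GMM with lagged differences and levels as instruments.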

Keywords: environmental policy stringency, renewable energy consumption, two-step system-GMM estimation, linear dynamic panel data model

Procedia PDF Downloads 181
7392 An Integration of Genetic Algorithm and Particle Swarm Optimization to Forecast Transport Energy Demand

Authors: N. R. Badurally Adam, S. R. Monebhurrun, M. Z. Dauhoo, A. Khoodaruth

Abstract:

Transport energy demand is vital for the economic growth of any country, and globalisation and improved standards of living play an important role in it. Recently, transport energy demand in Mauritius has increased significantly, leading to overuse of natural resources and thereby contributing to global warming. Forecasting transport energy demand is therefore important for controlling and managing it. In this paper, we develop a model to predict transport energy demand. The model is based on a system of five stochastic differential equations (SDEs) with five endogenous variables: fuel price, population, gross domestic product (GDP), number of vehicles and transport energy demand, and three exogenous parameters: crude birth rate, crude death rate and labour force. An interval of seven years is used to avoid distorting the results, since Mauritius is a developing country. Data available for Mauritius from 2003 to 2009 are used to obtain the values of the design variables by applying a genetic algorithm. The model is verified and validated for 2010 to 2012 by substituting the coefficient values obtained by the GA into the model and using particle swarm optimisation (PSO) to predict the values of the exogenous parameters. This model will help to control transport energy demand in Mauritius, which will in turn steer Mauritius towards being a pollution-free country and decrease its dependence on fossil fuels.
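The calibration loop can be illustrated compactly. The sketch below is a toy stand-in, not the authors' model: it fits three coefficients of one illustrative demand equation (an Euler discretisation with the noise term omitted) to made-up annual data, using scipy's differential evolution in place of the paper's genetic algorithm.

```python
import numpy as np
from scipy.optimize import differential_evolution

years = np.arange(2003, 2010)
demand_obs = np.array([310., 325., 338., 352., 370., 386., 401.])  # illustrative values

def simulate_demand(theta, d0=demand_obs[0]):
    """Euler scheme for one illustrative equation of the SDE system:
    dD = (a*GDP_growth + b*vehicle_growth - c*price_growth) * D dt (noise omitted)."""
    a, b, c = theta
    gdp_g, veh_g, price_g = 0.04, 0.05, 0.03     # assumed exogenous growth rates
    d = [d0]
    for _ in years[1:]:
        d.append(d[-1] * (1 + a * gdp_g + b * veh_g - c * price_g))
    return np.array(d)

def loss(theta):
    return np.mean((simulate_demand(theta) - demand_obs) ** 2)

# Differential evolution plays the role of the paper's genetic algorithm here.
result = differential_evolution(loss, bounds=[(0, 2)] * 3, seed=0)
print(result.x, result.fun)
```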

Keywords: genetic algorithm, modeling, particle swarm optimization, stochastic differential equations, transport energy demand

Procedia PDF Downloads 369
7391 Evaluation of Football Forecasting Models: 2021 Brazilian Championship Case Study

Authors: Flavio Cordeiro Fontanella, Asla Medeiros e Sá, Moacyr Alvim Horta Barbosa da Silva

Abstract:

In the present work, we analyse the performance of football results forecasting models. In order to do so, we collected data from eight different forecasting models during the 2021 Brazilian football season. First, we guide the analysis through visual representations of the data, designed to highlight the most prominent features and to enhance the interpretation of differences and similarities between the models; we propose using a 2-simplex triangle to investigate visual patterns in the results forecasting models. Next, we compute the expected points for every team playing in the championship and compare them to the final league standings, revealing interesting contrasts between actual and expected performances. Then, we evaluate the forecasts' accuracy using the Ranked Probability Score (RPS); the comparison between models accounts for small differences in scale that may become consistent over time. Finally, we observe that the Wisdom of Crowds principle can be appropriately applied in this context, leading to a discussion of how results forecasts are used in practice. This paper's primary goal is to encourage discussion of football forecasts' performance. We hope to accomplish this by presenting appropriate criteria and easy-to-understand visual representations that can point out the relevant factors of the subject.
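The RPS used for the accuracy evaluation is straightforward to implement; a minimal sketch for a three-way football outcome:

```python
import numpy as np

def rps(probs, outcome):
    """Ranked Probability Score for one match.
    probs: [P(home win), P(draw), P(away win)]; outcome: index 0, 1 or 2."""
    cum_p = np.cumsum(probs)
    cum_o = np.cumsum(np.eye(len(probs))[outcome])
    return np.sum((cum_p - cum_o) ** 2) / (len(probs) - 1)

print(rps([0.6, 0.25, 0.15], outcome=0))   # confident and right: small score
print(rps([0.6, 0.25, 0.15], outcome=2))   # confident and wrong: large score
```

Lower is better; because the score compares cumulative probabilities, it rewards placing mass near the actual outcome, so a draw forecast is penalised less than an away-win forecast when the match ends in a home win.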

Keywords: accuracy evaluation, Brazilian championship, football results forecasts, forecasting models, visual analysis

Procedia PDF Downloads 95
7390 Digital Twin for a Floating Solar Energy System with Experimental Data Mining and AI Modelling

Authors: Danlei Yang, Luofeng Huang

Abstract:

The integration of digital twin technology with renewable energy systems offers an innovative approach to predicting and optimising performance throughout the entire lifecycle. A digital twin is a continuously updated virtual replica of a real-world entity, synchronised with data from its physical counterpart and environment. Many digital twin companies today claim to have mature digital twin products, but their focus is primarily on equipment visualisation. However, the core of a digital twin should be its model, which can mirror, shadow, and thread with the real-world entity; this modelling core is still underdeveloped. For a floating solar energy system, a digital twin model can be defined in three aspects: (a) the physical floating solar energy system along with environmental factors such as solar irradiance and wave dynamics, (b) a digital model powered by artificial intelligence (AI) algorithms, and (c) the integration of real system data with the AI-driven model and a user interface. The experimental setup for the floating solar energy system is designed to replicate the real-ocean conditions of floating solar installations within a controlled laboratory environment. The system consists of a water tank that simulates an aquatic surface, where a floating catamaran structure supports a solar panel. The solar simulator is set up in three positions: one directly above and two inclined at a 45° angle in front of and behind the solar panel. This arrangement allows the simulation of different sun angles, such as sunrise, midday, and sunset. The solar simulator is positioned 400 mm away from the solar panel to maintain consistent solar irradiance on its surface. Stability for the floating structure is achieved through ropes attached to anchors at the bottom of the tank, which simulate the mooring systems used in real-world floating solar applications. The floating solar energy system's sensor setup includes various devices to monitor environmental and operational parameters. An irradiance sensor measures solar irradiance on the photovoltaic (PV) panel. Temperature sensors monitor ambient air and water temperatures, as well as the PV panel temperature. Wave gauges measure wave height, while load cells capture mooring force. Inclinometers and ultrasonic sensors record the heave and pitch amplitudes of the floating system's motions. An electronic load measures the voltage and current output from the solar panel. All sensors collect data simultaneously. Artificial neural network (ANN) algorithms are central to developing the digital model, which processes historical and real-time data, identifies patterns, and predicts the system's performance in real time. The data collected from the various sensors are partly used to train the digital model, with the remaining data reserved for validation and testing. The digital twin model combines the experimental setup with the ANN model, enabling monitoring, analysis, and prediction of the floating solar energy system's operation. The digital model mirrors the functionality of the physical setup, running in sync with the experiment to provide real-time insights and predictions. It provides useful industrial benefits, such as informing maintenance plans as well as design and control strategies for optimal energy efficiency. In the long term, this digital twin will help improve the overall solar energy yield whilst minimising operational costs and risks.
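The ANN core of the digital model can be illustrated in a few lines. The sketch below is a stand-in, not the project's code: synthetic signals mimic the logged sensor channels (irradiance, panel temperature, wave height, pitch) and a small scikit-learn MLP is trained to predict electrical power, with part of the data held out for validation and testing as described above.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 2000
# Synthetic stand-ins for the logged sensor channels described above.
irradiance = rng.uniform(100, 1000, n)                       # W/m2
panel_temp = 20 + 0.02 * irradiance + rng.normal(0, 1, n)    # degC
wave_height = rng.uniform(0, 0.15, n)                        # m
pitch = 5 * wave_height + rng.normal(0, 0.2, n)              # deg
power = 0.18 * irradiance * (1 - 0.004 * (panel_temp - 25)) + rng.normal(0, 5, n)

X = np.column_stack([irradiance, panel_temp, wave_height, pitch])
X_tr, X_te, y_tr, y_te = train_test_split(X, power, test_size=0.3, random_state=0)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                                   random_state=0))
model.fit(X_tr, y_tr)                                        # training portion of sensor data
print("R2 on held-out data:", model.score(X_te, y_te))       # validation/testing portion
```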

Keywords: digital twin, floating solar energy system, experiment setup, artificial intelligence

Procedia PDF Downloads 8
7389 Efficiency Improvement of Ternary Nanofluid Within a Solar Photovoltaic Unit Combined with Thermoelectric Considering Environmental Analysis

Authors: Mohsen Sheikholeslami, Zahra Khalili, Ladan Momayez

Abstract:

The impacts of environmental parameters and dust deposition on the efficiency of a solar panel are scrutinized in this article. To extract thermal output, a trapezoidal cooling channel incorporating a ternary nanofluid has been attached to the bottom of the panel. To produce the working fluid, water has been mixed with Fe₃O₄-TiO₂-GO nanoparticles. Also, an arrangement of fins has been considered to increase the cooling rate of the silicon layer. The existence of a thermoelectric layer above the cooling channel leads to higher electrical output. The effects of ambient temperature (Ta), wind speed (Vwind), and the inlet temperature (Tin) and velocity (Vin) of the ternary nanofluid on the performance of the PVT unit have been assessed. As Tin increases, electrical efficiency declines by about 3.63%. An increase in ambient temperature enhances thermal performance by about 33.46%. The PVT efficiency decreases by about 13.14% and 16.6% with increasing wind speed and dust deposition, respectively. CO₂ mitigation is reduced by about 15.49% in the presence of dust, while it increases by about 17.38% with rising ambient temperature.

Keywords: photovoltaic system, CO₂ mitigation, ternary nanofluid, thermoelectric generator, environmental parameters, trapezoidal cooling channel

Procedia PDF Downloads 91
7388 Design of an Instrumentation Setup and Data Acquisition System for a GAS Turbine Engine Using Suitable DAQ Software

Authors: Syed Nauman Bin Asghar Bukhari, Mohtashim Mansoor, Mohammad Nouman

Abstract:

An engine test-bed system is a fundamental tool for measuring the dynamic parameters, economic performance and reliability of an aircraft engine, and its automation and accuracy directly influence the precision of the acquired and analysed data. In this paper, we present the design of a digital data acquisition (DAQ) system for a vintage aircraft engine test bed that lacks the capability of displaying all the analysed parameters at one convenient location (one panel, one screen). Recording such measurements in the vintage test bed is not only time consuming but also prone to human error. Digitizing such a measurement system requires a DAQ system capable of recording these parameters and displaying them on a single-screen, single-panel monitor. The challenge in upgrading vintage systems arises from the need to build and integrate a digital measurement system from scratch with a minimal budget and minimal modifications to the existing vintage system. The proposed design not only displays all the key performance and maintenance parameters of the gas turbine engine for the operator as well as the quality inspector on separate screens but also records the data for further processing and archiving.
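At its simplest, the acquisition loop is a timed poll of every channel with archiving to disk. The sketch below is a generic illustration only; read_channels() and the channel names are hypothetical placeholders, since the actual driver calls depend on the installed DAQ hardware.

```python
import csv
import time

def read_channels():
    """Hypothetical placeholder: on a real rig this would wrap the DAQ
    driver call for each instrumented sensor channel."""
    return {"rpm": 0.0, "egt_c": 0.0, "oil_press_bar": 0.0, "thrust_n": 0.0}

with open("engine_run.csv", "w", newline="") as f:
    writer = None
    for _ in range(10):                     # e.g. 10 samples at 1 Hz
        sample = {"t": time.time(), **read_channels()}
        if writer is None:
            writer = csv.DictWriter(f, fieldnames=list(sample))
            writer.writeheader()
        writer.writerow(sample)             # archive for later processing
        time.sleep(1.0)
```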

Keywords: gas turbine engine, engine test cell, data acquisition, instrumentation

Procedia PDF Downloads 123
7387 Statistical Channel Modeling for Multiple-Input-Multiple-Output Communication System

Authors: M. I. Youssef, A. E. Emam, M. Abd Elghany

Abstract:

The performance of wireless communication systems is affected mainly by the environment of the associated channel, which is characterized by dynamic and unpredictable behavior. In this paper, different statistical earth-satellite channel models are studied, with emphasis on two main models: the Rice-lognormal model, chosen because it represents the environment including the shadowing and multipath components that affect the propagated signal along its path, and a three-state model that takes into account different fading conditions (clear area, moderate shadowing and heavy shadowing). The models are based on the AWGN, Rician, Rayleigh and log-normal distributions, whose probability density functions (PDFs) are presented. The transmission system bit error rate (BER), peak-to-average power ratio (PAPR) and channel capacity are measured and analyzed across the fading models. The simulations are implemented using MATLAB, and the results show the performance of the transmission system over the different channel models.
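The BER-versus-fading comparison is easy to reproduce in miniature. The sketch below, in Python rather than the authors' MATLAB, runs a Monte Carlo BER estimate for BPSK over a Rayleigh flat-fading channel with coherent detection; swapping the channel gain h for a Rician or lognormally shadowed variate gives the other models studied.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bits = 200_000
bits = rng.integers(0, 2, n_bits)
symbols = 2 * bits - 1                               # BPSK: 0 -> -1, 1 -> +1

for snr_db in range(0, 21, 5):
    sigma = np.sqrt(1 / (2 * 10 ** (snr_db / 10)))
    noise = sigma * (rng.normal(size=n_bits) + 1j * rng.normal(size=n_bits))
    # Rayleigh flat fading: complex Gaussian channel gain with unit average power
    h = (rng.normal(size=n_bits) + 1j * rng.normal(size=n_bits)) / np.sqrt(2)
    r = h * symbols + noise
    detected = (np.real(r * np.conj(h)) > 0).astype(int)   # coherent detection
    print(f"SNR {snr_db:2d} dB -> BER {np.mean(detected != bits):.4f}")
```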

Keywords: fading channels, MIMO communication, RNS scheme, statistical modeling

Procedia PDF Downloads 149
7386 Managing Diversity in MNCS: A Literature Review of Existing Strategic Models for Managing Diversity and a Roadmap to Transfer Them to the Subsidiaries

Authors: Debora Gottardello, Mireia Valverde Aparicio, Juan Llopis Taverner

Abstract:

Globalization has given rise to great diversity in the composition of people in organizations. Diversity management is therefore key to creating growth in today's competitive global marketplace. This work develops a literature review of the existing models for managing diversity, covering the period from 1980 until 2014. Furthermore, it identifies limitations in previous models. More specifically, the literature review reveals that there is a lack of information about how these models can be adapted from the headquarters to the subsidiaries. Therefore, the contribution of this paper is to suggest how the models should be adapted when they are directed to host countries. Our aim is to highlight the limitations of the developed models with regard to the translation of diversity management practices to the subsidiaries. Accordingly, a model that will enable MNCs to ensure a global strategy is suggested. Taking advantage of the potential of a culturally diverse work team should be among every international company's top aims. Executives from headquarters need to adopt different attitudes when transferring diversity practices to their subsidiaries. Further studies should reassess local practices of diversity management to find out how this universal management model is translated.

Keywords: culture diversity, diversity management, human resources management, MNCs, subsidiaries, workforce diversity

Procedia PDF Downloads 255
7385 Numerical Investigation of the Effect of Blast Pressure on Discrete Model in Shock Tube

Authors: Aldin Justin Sundararaj, Austin Lord Tennyson, Divya Jose, A. N. Subash

Abstract:

Blast waves are generated by the explosion of high energy materials. An explosion yielding a blast wave has the potential to cause severe damage to buildings and their personnel. In order to understand the physics of the effects of blast pressure on buildings, shock tube studies on generic configurations are carried out at various pressures on discrete models. The strength of the shock wave is systematically varied by using different driver gases and diaphragm thicknesses. The diaphragm material is aluminum. To simulate the effect of shock waves on discrete models, a shock tube was used. The generic models selected for this study are a suitably scaled cylinder, cone and cubical block. The experiments were carried out with a 2 mm diaphragm at burst pressures ranging from 28 to 31 bar. Numerical analysis was carried out on these discrete models. A 3D model of the shock tube with the different discrete models inside it was used for the CFD computation. It was found that the cone dissipated most of the shock pressure compared to the cylinder and the cubical block. The robustness and accuracy of the numerical model were validated against the analytical and experimental data.

Keywords: shock wave, blast wave, discrete models, shock tube

Procedia PDF Downloads 331
7384 Characterization of the Ignitability and Flame Regression Behaviour of Flame Retarded Natural Fibre Composite Panel

Authors: Timine Suoware, Sylvester Edelugo, Charles Amgbari

Abstract:

Natural fibre composites (NFC) are becoming very attractive, especially for automotive interior and non-structural building applications, because they are biodegradable, low cost, lightweight and environmentally friendly. NFC are known to release highly combustible products during exposure to heat, and this behaviour has raised concerns among end users. To improve their fire response, flame retardants (FR) such as aluminium tri-hydroxide (ATH) and ammonium polyphosphate (APP) are incorporated during processing to delay the start and spread of fire. In this paper, APP was modified with Gum Arabic powder (GAP) and synergized with carbon black (CB) to form new FR species. Four FR species at 0, 12, 15 and 18% loading ratio were added to oil palm fibre polyester composite (OPFC) panels as follows: OPFC12%APP-GAP, OPFC15%APP-GAP/CB, OPFC18%ATH/APP-GAP and OPFC18%ATH/APP-GAP/CB. The panels were produced using hand lay-up compression moulding and cured at room temperature. Specimens were cut from the panels and tested for ignition time (Tig), peak heat release rate (HRRp), average heat release rate (HRRavg), peak mass loss rate (MLRp), residual mass (Rm) and average smoke production rate (SPRavg) using a cone calorimeter, as well as for the available flame energy (ɸ) driving the flame using a radiant panel flame spread apparatus. The ignitability data obtained at 50 kW/m2 heat flux (HF) show that the hybrid FR modified with APP, that is OPFC18%ATH/APP-GAP, exhibited superior flame retardancy, with Tig = 20 s, HRRp = 86.6 kW/m2, HRRavg = 55.8 kW/m2, MLRp = 0.131 g/s, Rm = 54.6% and SPRavg = 0.05 m2/s, representing improvements of 17.6%, 67.4%, 62.8%, 50.9%, 565% and 62.5%, respectively, over the panels without FR (OPFC0%). In terms of flame spread, the lowest flame energy (ɸ) of 0.49 kW2/s3, for OPFC18%ATH/APP-GAP, caused early flame regression; this is far below the 39.6 kW2/s3 of the panels without FR (OPFC0%). It can be concluded that hybrid FR modified with APP could be useful in the automotive and building industries to delay the start and spread of fire.

Keywords: flame retardant, flame regression, oil palm fibre, composite panel

Procedia PDF Downloads 128
7383 Comparison of Different k-NN Models for Speed Prediction in an Urban Traffic Network

Authors: Seyoung Kim, Jeongmin Kim, Kwang Ryel Ryu

Abstract:

A database records average traffic speeds measured at five-minute intervals for all the links in the traffic network of a metropolitan city. Models that learn from these data to predict future traffic speeds would be beneficial for applications such as car navigation systems, but building a predictive model for every link becomes a nontrivial job if the number of links in a given network is huge. An advantage of adopting k-nearest neighbor (k-NN) as the predictive model is that it does not require any explicit model building. Instead, k-NN takes a long time to make a prediction because it needs to search for the k nearest neighbors in the database at prediction time. In this paper, we investigate how much we can speed up k-NN in making traffic speed predictions by reducing the amount of data to be searched, without a significant sacrifice of prediction accuracy. The rationale is that it may suffice to look only at recent data, because traffic patterns not only repeat daily or weekly but also change over time. In our experiments, we build several different k-NN models employing different sets of features, namely the current and past traffic speeds of the target link and of the neighboring links up- and downstream. The performances of these models are compared by measuring the average prediction accuracy and the average time taken to make a prediction using various amounts of data.
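The recent-window idea is easy to prototype. The sketch below is an illustration, not the paper's system: it builds lagged-speed features for a single link from synthetic five-minute data and compares k-NN predictions fitted on search windows of different sizes, with scikit-learn's KNeighborsRegressor standing in for the custom models.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
n = 50_000                                   # five-minute records for one link
speeds = 60 + 10 * np.sin(np.arange(n) * 2 * np.pi / 288) + rng.normal(0, 3, n)

lags = 6                                     # features: last 30 minutes of speeds
X = np.column_stack([speeds[i:n - lags + i] for i in range(lags)])
y = speeds[lags:]                            # target: speed 5 minutes ahead

def predict_latest(window):
    """Fit k-NN on only the most recent `window` rows, then predict one step."""
    knn = KNeighborsRegressor(n_neighbors=10)
    knn.fit(X[-window - 1:-1], y[-window - 1:-1])
    return knn.predict(X[-1:])[0]

for window in (2_000, 10_000, len(y) - 1):   # smaller window -> faster search
    print(window, round(predict_latest(window), 2))
```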

Keywords: big data, k-NN, machine learning, traffic speed prediction

Procedia PDF Downloads 363
7382 Sales-Based Dynamic Investment and Leverage Decisions: A Longitudinal Study

Authors: Rihab Belguith, Fathi Abid

Abstract:

The paper develops a system-based approach to investigate the dynamic adjustment of the debt structure and investment policies of Dow-Jones index firms. This approach enables the assessment of the relations among sales, debt and investment opportunities by considering the simultaneous effect of market environmental change and future growth opportunities. We integrate firm-specific sales variance into the model to capture industry conditions. Empirical results were obtained through a panel data set of firms from different sectors. The analysis supports that environmental change does not affect the different industries equally, since operating leverage, and hence the sensitivity to sales variance, differs among industries. Including adjusted firm-specific variance, we find that there is no monotonic relation between leverage, sales and investment. A firm may choose a low debt level in response to high sales variance, but high leverage to attenuate the negative relation between sales variance and the current level of investment. We further find that while the overall effect of debt maturity on leverage is unaffected by the level of growth opportunities, the shorter the maturity of debt, the smaller the direct effect of sales variance on investment.
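A stripped-down version of this kind of panel estimation can be sketched with the linearmodels package. The variable names and the single-equation fixed-effects specification are illustrative assumptions, not the paper's simultaneous system.

```python
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS

rng = np.random.default_rng(0)
firms, years = 30, 20
idx = pd.MultiIndex.from_product([range(firms), range(2000, 2000 + years)],
                                 names=["firm", "year"])
df = pd.DataFrame({
    "sales_var": rng.uniform(0.05, 0.5, firms * years),   # firm-specific sales variance
    "leverage": rng.uniform(0.1, 0.8, firms * years),
}, index=idx)
df["investment"] = (0.3 - 0.2 * df["sales_var"] - 0.1 * df["leverage"]
                    + rng.normal(0, 0.05, len(df)))

# Fixed-effects regression of investment on sales variance and leverage
res = PanelOLS.from_formula(
    "investment ~ 1 + sales_var + leverage + EntityEffects", data=df
).fit(cov_type="clustered", cluster_entity=True)
print(res.params)
```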

Keywords: dynamic panel, investment, leverage decision, sales uncertainty

Procedia PDF Downloads 243
7381 A Hybrid System of Hidden Markov Models and Recurrent Neural Networks for Learning Deterministic Finite State Automata

Authors: Pavan K. Rallabandi, Kailash C. Patidar

Abstract:

In this paper, we present an optimization technique, or learning algorithm, based on a hybrid architecture combining two of the most popular sequence recognition models: recurrent neural networks (RNNs) and hidden Markov models (HMMs). In order to improve sequence and pattern recognition/classification performance through a hybrid neural-symbolic approach, a gradient descent learning algorithm is developed using the real-time recurrent learning of an RNN to process the knowledge represented in trained HMMs. The developed hybrid algorithm is implemented on automata theory as a sample test bed, and its performance is demonstrated and evaluated on learning deterministic finite state automata.
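The HMM side of the hybrid can be made concrete with the forward algorithm, whose normalised output is exactly the kind of state-probability sequence that could be handed to an RNN trained by real-time recurrent learning. This is a generic sketch, not the authors' implementation; the transition and emission matrices are toy values.

```python
import numpy as np

def forward_posteriors(obs, pi, A, B):
    """Normalised HMM forward pass: returns P(state_t | obs_1..t) for each t.
    pi: initial state probs, A: transition matrix, B: emission matrix."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    out = [alpha]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()                 # keep it a probability vector
        out.append(alpha)
    return np.array(out)

# Two hidden states, binary alphabet (e.g. symbols of a candidate DFA's language)
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.2, 0.8]])
B = np.array([[0.9, 0.1], [0.3, 0.7]])
posteriors = forward_posteriors([0, 1, 1, 0, 1], pi, A, B)
print(posteriors)    # one row per symbol; usable as input features for an RNN
```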

Keywords: hybrid systems, hidden Markov models, recurrent neural networks, deterministic finite state automata

Procedia PDF Downloads 388
7380 Stochastic Matrices and Lp Norms for Ill-Conditioned Linear Systems

Authors: Riadh Zorgati, Thomas Triboulet

Abstract:

In quite diverse application areas such as astronomy, medical imaging, geophysics or nondestructive evaluation, many problems related to calibration, fitting or estimation of a large number of input parameters of a model from a small amount of noisy output data can be cast as inverse problems. Due to noisy data corruption, insufficient data and model errors, most inverse problems are ill-posed in the Hadamard sense, i.e. existence, uniqueness and stability of the solution are not guaranteed. A wide class of inverse problems in physics relates to the Fredholm equation of the first kind. The ill-posedness of such an inverse problem results, after discretization, in a very ill-conditioned linear system of equations; the condition number of the associated matrix can typically range from 10⁹ to 10¹⁸. This condition number acts as an amplifier of the uncertainties on the data during inversion and thus renders the inverse problem difficult to handle numerically. Similar problems appear in other areas, such as numerical optimization, where using interior point algorithms for solving linear programs leads to ill-conditioned systems of linear equations. Devising efficient solution approaches for such systems of equations is therefore of great practical interest. Efficient iterative algorithms are proposed for solving a system of linear equations. The approach is based on preconditioning the initial matrix of the system with an approximation of a generalized inverse, leading to a stochastic preconditioned matrix. This approach, valid for non-negative matrices, is first extended to Hermitian, semi-definite positive matrices and then generalized to any complex rectangular matrices. The main results obtained are as follows: 1) We are able to build a generalized inverse of any complex rectangular matrix which satisfies the convergence condition required in iterative algorithms for solving a system of linear equations. This completes the (short) list of generalized inverses having this property, after the Kaczmarz and Cimmino matrices. Theoretical results on both the characterization of the type of generalized inverse obtained and the convergence are derived. 2) Thanks to its properties, this matrix can be efficiently used in different solving schemes such as Richardson-Tanabe or preconditioned conjugate gradients. 3) By using Lp norms, we propose generalized Kaczmarz-type matrices. We also show how Cimmino's matrix can be considered as a particular case consisting in choosing the Euclidean norm in an asymmetrical structure. 4) Regarding numerical results obtained on some well-known pathological test cases (Hilbert, Nakasaka, …), some of the proposed algorithms are empirically shown to be more efficient on ill-conditioned problems and more robust to error propagation than the classical techniques we have tested (Gauss, Moore-Penrose inverse, minimum residual, conjugate gradients, Kaczmarz, Cimmino). We end on a very early prospective application of our approach based on stochastic matrices, aiming at computing some parameters (such as the extreme values, the mean, the variance, …) of the solution of a linear system prior to its resolution. Such an approach, if it were to be efficient, would be a source of information on the solution of a system of linear equations.
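For orientation, the classical Kaczmarz iteration mentioned above, one of the baselines the proposed stochastic preconditioning is compared against, fits in a few lines; this is the textbook method, not the authors' new algorithm.

```python
import numpy as np

def kaczmarz(A, b, sweeps=200):
    """Classical Kaczmarz iteration: successively project the iterate
    onto the hyperplane of each equation a_i . x = b_i."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            x += (b[i] - a @ x) / (a @ a) * a
    return x

# Mildly ill-conditioned test system: two nearly dependent rows
rng = np.random.default_rng(0)
A = rng.normal(size=(30, 30))
A[-1] = A[0] + 1e-6 * rng.normal(size=30)
x_true = rng.normal(size=30)
x_hat = kaczmarz(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true))   # error after the sweeps
```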

Keywords: conditioning, generalized inverse, linear system, norms, stochastic matrix

Procedia PDF Downloads 136
7379 Leverage Effect for Volatility with Generalized Laplace Error

Authors: Farrukh Javed, Krzysztof Podgórski

Abstract:

We propose a new model that accounts for the asymmetric response of volatility to positive ('good news') and negative ('bad news') shocks in economic time series, the so-called leverage effect. In the past, asymmetric powers of errors in conditionally heteroskedastic models have been used to capture this effect. Our model uses the gamma difference representation of the generalized Laplace distributions, which efficiently models the asymmetry. It has one additional natural parameter, the shape, which is used instead of the power in the asymmetric power models to capture the strength of a long-lasting effect of shocks. Some fundamental properties of the model are provided, including the formula for covariances and an explicit form for the conditional distribution of 'bad' and 'good' news processes given the past, a property that is important for the statistical fitting of the model. Relevant features of volatility models are illustrated using S&P 500 historical data.
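The leverage effect that the generalized Laplace model targets can be illustrated with a standard asymmetric baseline. The sketch below fits a GJR-GARCH(1,1) with the arch package; this is the conventional benchmark from the literature, not the gamma-difference model proposed in the paper, and the returns are simulated stand-ins for the S&P 500 series.

```python
import numpy as np
from arch import arch_model

# Simulated fat-tailed daily returns (in percent) as a stand-in for S&P 500 data
rng = np.random.default_rng(0)
returns = rng.standard_t(df=5, size=2500)

# GJR-GARCH(1,1): the 'o' term lets negative shocks raise volatility more than
# positive ones, the classical way of capturing the leverage effect.
model = arch_model(returns, vol="GARCH", p=1, o=1, q=1, dist="t")
res = model.fit(disp="off")
print(res.params)   # a positive gamma[1] estimate indicates a leverage effect
```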

Keywords: heavy tails, volatility clustering, generalized asymmetric laplace distribution, leverage effect, conditional heteroskedasticity, asymmetric power volatility, GARCH models

Procedia PDF Downloads 386
7378 System Identification of Timber Masonry Walls Using Shaking Table Test

Authors: Timir Baran Roy, Luis Guerreiro, Ashutosh Bagchi

Abstract:

Dynamic study is important for the design, repair and rehabilitation of structures. It has played an important role in the behavior characterization of structures such as bridges, dams and high-rise buildings. There has been substantial development in this area over the last few decades, especially in the field of dynamic identification techniques for structural systems. Frequency Domain Decomposition (FDD) and time domain decomposition are the most commonly used methods to identify modal parameters such as natural frequency, modal damping and mode shape. The focus of the present research is to study the dynamic characteristics of typical timber masonry walls commonly used in Portugal. For that purpose, multi-storey structural prototypes of such walls have been tested on a seismic shaking table at the National Laboratory for Civil Engineering, Portugal (LNEC). The output response, collected from the shaking table experiment of the prototype using accelerometers, has been processed. In the present work, signal processing of the output response based on the input response has been done in two ways: FDD and Stochastic Subspace Identification (SSI). In order to estimate the values of the modal parameters, algorithms for FDD are formulated and parametric functions for the SSI are computed. Finally, the values estimated by the two methods are compared to measure the accuracy of both techniques.
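The FDD step can be sketched directly: form the cross-spectral density matrix of the measured accelerations, take an SVD at each frequency line, and peak-pick the first singular value. A minimal illustration on synthetic two-channel data, not the LNEC processing code:

```python
import numpy as np
from scipy.signal import csd

def fdd_first_mode(acc, fs, nperseg=1024):
    """Peak-picking FDD: SVD of the cross-spectral matrix at each frequency.
    acc: array of shape (n_channels, n_samples) of measured accelerations."""
    n = acc.shape[0]
    f, _ = csd(acc[0], acc[0], fs=fs, nperseg=nperseg)
    G = np.empty((len(f), n, n), dtype=complex)
    for i in range(n):
        for j in range(n):
            _, G[:, i, j] = csd(acc[i], acc[j], fs=fs, nperseg=nperseg)
    s1 = np.array([np.linalg.svd(Gk, compute_uv=False)[0] for Gk in G])
    k = np.argmax(s1)                        # dominant singular-value peak
    U = np.linalg.svd(G[k])[0]
    return f[k], np.real(U[:, 0])            # natural frequency and mode shape

# Two noisy channels sharing a 3.2 Hz mode, as a stand-in for wall accelerations
fs = 200
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
mode = np.sin(2 * np.pi * 3.2 * t)
acc = np.vstack([1.0 * mode, 0.6 * mode]) + 0.3 * rng.normal(size=(2, len(t)))
print(fdd_first_mode(acc, fs))
```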

Keywords: frequency domain decomposition (FDD), modal parameters, signal processing, stochastic subspace identification (SSI), time domain decomposition

Procedia PDF Downloads 264
7377 Ground Motion Modeling Using the Least Absolute Shrinkage and Selection Operator

Authors: Yildiz Stella Dak, Jale Tezcan

Abstract:

Ground motion models that relate a strong motion parameter of interest to a set of predictive seismological variables describing the earthquake source, the propagation path of the seismic wave, and the local site conditions constitute a critical component of seismic hazard analyses. When a sufficient number of strong motion records are available, ground motion relations are developed using statistical analysis of the recorded ground motion data. In regions lacking a sufficient number of recordings, a synthetic database is developed using stochastic, theoretical or hybrid approaches. Regardless of the manner in which the database was developed, ground motion relations are developed using regression analysis. Development of a ground motion relation is a challenging process which inevitably requires the modeler to make subjective decisions regarding the inclusion criteria for the recordings, the functional form of the model and the set of seismological variables to be included in the model. Because these decisions are critically important to the validity and the applicability of the model, there is continuing interest in procedures that will facilitate the development of ground motion models. This paper proposes the use of the Least Absolute Shrinkage and Selection Operator (LASSO) in selecting the set of predictive seismological variables to be used in developing a ground motion relation. The LASSO can be described as a penalized regression technique with a built-in capability for variable selection. Similar to ridge regression, the LASSO is based on the idea of shrinking the regression coefficients to reduce the variance of the model. Unlike ridge regression, where the coefficients are shrunk but never set equal to zero, the LASSO sets some of the coefficients exactly to zero, effectively performing variable selection. Given a set of candidate input variables and the output variable of interest, LASSO allows ranking the input variables in terms of their relative importance, thereby facilitating the selection of the set of variables to be included in the model. Because the risk of overfitting increases as the ratio of the number of predictors to the number of recordings increases, selection of a compact set of variables is important in cases where a small number of recordings are available. In addition, identification of a small set of variables can improve the interpretability of the resulting model, especially when there is a large number of candidate predictors. A practical application of the proposed approach is presented, using more than 600 recordings from the Next Generation Attenuation (NGA) database, where the effect of a set of seismological predictors on the 5% damped maximum direction spectral acceleration is investigated. The candidate predictors considered are magnitude, Rrup and Vs30. Using LASSO, the relative importance of the candidate predictors has been ranked. Regression models with increasing levels of complexity were constructed using the best one, two, three, and four predictors, and the models' ability to explain the observed variance in the target variable has been compared. The bias-variance trade-off in the context of model selection is discussed.
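The variable-ranking step translates almost directly into scikit-learn. The sketch below is illustrative, with synthetic stand-ins for the NGA records: LassoCV picks the shrinkage level by cross-validation, and predictors whose coefficients are driven to zero drop out of the model, which is the ranking behaviour described above.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 600
# Illustrative stand-ins for the candidate predictors named above
mag = rng.uniform(4.5, 7.5, n)                 # moment magnitude
log_rrup = np.log(rng.uniform(1, 200, n))      # log rupture distance (km)
log_vs30 = np.log(rng.uniform(150, 1200, n))   # log site shear-wave velocity
X = np.column_stack([mag, log_rrup, log_vs30])
ln_sa = 1.2 * mag - 1.6 * log_rrup - 0.4 * log_vs30 + rng.normal(0, 0.5, n)

# Standardise so the L1 penalty treats all predictors on the same scale,
# then let cross-validation pick the shrinkage level.
lasso = LassoCV(cv=5).fit(StandardScaler().fit_transform(X), ln_sa)
for name, coef in zip(["M", "ln Rrup", "ln Vs30"], lasso.coef_):
    print(f"{name:8s} {coef:+.3f}")   # zeroed coefficients drop out of the model
```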

Keywords: ground motion modeling, least absolute shrinkage and selection operator, penalized regression, variable selection

Procedia PDF Downloads 330
7376 The Current Practices of Analysis of Reinforced Concrete Panels Subjected to Blast Loading

Authors: Palak J. Shukla, Atul K. Desai, Chentankumar D. Modhera

Abstract:

For any country in the world, it has become a priority to protect critical infrastructure from the looming risks of terrorism. In any infrastructure system, structural elements such as lower floors, exterior columns and walls are the key elements most susceptible to damage due to blast load. The present study revisits the state of the art in the design and analysis of reinforced concrete panels subjected to blast loading. Various aspects associated with blast loading on structures, i.e. estimation of the blast load, experimental work carried out previously, numerical simulation tools and the various material models, are considered in order to explore the practices currently adopted worldwide. Various parametric studies investigating the effect of reinforcement ratio, slab thickness, charge weight and standoff distance are also discussed. It was observed that, for the simulation of blast load, the CONWEP blast function or equivalent numerical equations were successfully employed by many researchers. The literature indicates that the research was carried out using experimental work and numerical simulation with well-known general-purpose finite element codes, e.g. LS-DYNA, ABAQUS and AUTODYN. Many researchers recommended using a concrete damage model to represent concrete, and a plastic kinematic material model to represent steel, under the action of blast loads for most numerical simulations. Most of the studies reveal that increasing the reinforcement ratio, slab thickness and standoff distance resulted in better blast resistance of the reinforced concrete panel. The study summarizes the various research results and presents the current state of knowledge for structures exposed to blast loading.
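For reference, the idealised overpressure history that CONWEP-type loadings build on is the Friedlander waveform; a minimal sketch with assumed peak pressure and positive-phase duration (not values from any study reviewed here):

```python
import numpy as np

def friedlander(t, p_peak, t_dur, b=1.0):
    """Idealised blast overpressure history (Friedlander waveform).
    p_peak: peak overpressure, t_dur: positive-phase duration, b: decay constant."""
    return np.where(t >= 0, p_peak * (1 - t / t_dur) * np.exp(-b * t / t_dur), 0.0)

t = np.linspace(0, 0.02, 400)                          # 20 ms window
p = friedlander(t, p_peak=500e3, t_dur=0.008)          # assumed 500 kPa peak, 8 ms phase
pos = t <= 0.008                                       # positive phase only
impulse = np.sum(0.5 * (p[pos][1:] + p[pos][:-1]) * np.diff(t[pos]))  # trapezoid rule
print(f"positive-phase impulse = {impulse:.1f} Pa*s")
```

Beyond t_dur the expression goes negative, reproducing the suction phase; the positive-phase impulse computed here is one of the key parameters used to scale panel damage.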

Keywords: blast phenomenon, experimental methods, material models, numerical methods

Procedia PDF Downloads 157
7375 The Growth Role of Natural Gas Consumption for Developing Countries

Authors: Tae Young Jin, Jin Soo Kim

Abstract:

Carbon emissions have emerged as a global concern. The Intergovernmental Panel on Climate Change (IPCC) publishes regular reports on greenhouse gas (GHG) emissions, and the United Nations Framework Convention on Climate Change (UNFCCC) has held a Conference of the Parties (COP) every year since 1995. In particular, COP21, held in December 2015, produced the Paris Agreement, which has stronger binding force than earlier COP outcomes; it entered into force on 4 November 2016 and is now legally binding. Participating countries have set their own Intended Nationally Determined Contributions (INDCs) and will try to achieve them. Thus, carbon emissions must be reduced, and the energy sector, particularly fossil fuels, is among those most responsible for them. This paper therefore examines the relationship between natural gas consumption and economic growth. To achieve this, we adopted a Cobb-Douglas production function that consists of natural gas consumption, economic growth, capital and labor, using dependent panel analysis. Data were preprocessed with Principal Component Analysis (PCA) to remove the cross-sectional dependency that can disturb panel results. After confirming the existence of a time-trended component in each variable, we moved to a cointegration test that accounts for cross-sectional dependency and structural breaks, to describe more realistically the behavior of volatile international indicators. The cointegration test indicates that there is a long-run equilibrium relationship between the selected variables. The long-run cointegrating vector and Granger causality test results show that while natural gas consumption can contribute to economic growth in the short run, it affects growth adversely in the long run. From these results, we draw the following policy implications. First, since natural gas has a positive economic effect only in the short run, policy makers in developing countries should plan a gradual switch of the major energy source from natural gas to sustainable energy sources. Second, the technology transfer and financing arrangements proposed at the COPs must be accelerated. Acknowledgement: This work was supported by the Energy Efficiency & Resources Core Technology Program of the Korea Institute of Energy Technology Evaluation and Planning (KETEP), granted financial resources from the Ministry of Trade, Industry & Energy, Republic of Korea (No. 20152510101880), and by the National Research Foundation of Korea Grant funded by the Korean Government (NRF-205S1A3A2046684).
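The production-function backbone of the analysis is log-linear. A plausible estimating equation, reconstructed from the description above rather than quoted from the paper, is:

```latex
\ln Y_{it} = \beta_0 + \beta_1 \ln NG_{it} + \beta_2 \ln K_{it} + \beta_3 \ln L_{it} + \mu_i + \varepsilon_{it}
```

where Y is output, NG natural gas consumption, K capital and L labor; the long-run cointegrating vector discussed above corresponds to the signs and magnitudes of these output elasticities.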

Keywords: developing countries, economic growth, natural gas consumption, panel data analysis

Procedia PDF Downloads 234
7374 Analyzing Business Model Choices and Sustainable Value Capturing: A Multiple Case Study of Sharing Economy Business Models

Authors: Minttu Laukkanen, Janne Huiskonen

Abstract:

This study investigates sharing economy business models as examples of sustainable business models. The aim is to contribute to the limited literature on the sharing economy in connection with sustainable business models by explaining how sharing economy business models capture value. Specifically, this research answers the following question: how do business model choices affect captured sustainable value? A multiple case study approach is applied. Twenty different successful sharing economy business models, focusing on consumer business and covering four main areas (accommodation, mobility, food and consumer goods), are selected for analysis. Secondary data available on company websites, previous research, reports and other public documents are used. All twenty cases are analyzed through a sharing economy business model framework and a sustainable value analysis framework using qualitative data analysis. This study presents general sharing economy business model value attributes and their specifications, i.e. sustainable value propositions for different stakeholders, and further explains the sustainability impacts of different sharing economy business models through captured and uncaptured value. In conclusion, this study shows how business model choices affect sustainable value capturing through eight business model attributes identified in the study. The paper contributes to research on sustainable business models and the sharing economy by examining how business model choices affect captured sustainable value. It highlights the importance of careful business model and sustainability impact analyses, including the triple bottom line, multiple stakeholders, captured and uncaptured value perspectives, and sustainability trade-offs. It is not self-evident that sharing economy business models advance sustainability, and business model choices do matter.

Keywords: sharing economy, sustainable business model innovation, sustainable value, value capturing

Procedia PDF Downloads 173
7373 Generic Hybrid Models for Two-Dimensional Ultrasonic Guided Wave Problems

Authors: Manoj Reghu, Prabhu Rajagopal, C. V. Krishnamurthy, Krishnan Balasubramaniam

Abstract:

A thorough understanding of guided ultrasonic wave behavior in structures is essential for the application of existing Non Destructive Evaluation (NDE) technologies, as well as for the development of new methods. However, the analysis of guided wave phenomena is challenging because of their complex dispersive and multimodal nature. Although numerical solution procedures have proven to be very useful in this regard, the increasing complexity of the features and defects to be considered, as well as the desire to improve the accuracy of inspection, often imposes a large computational cost. Hybrid models that combine numerical solutions for wave scattering with faster alternative methods for wave propagation have long been considered as a solution to this problem. However, such models usually require modification of the base code of the solution procedure. Here we aim to develop generic hybrid models that can be directly applied to any two different solution procedures. With this goal in mind, a numerical hybrid model and an analytical-numerical hybrid model have been developed. The concept and implementation of these hybrid models are discussed in this paper.

Keywords: guided ultrasonic waves, finite element method (FEM), hybrid model

Procedia PDF Downloads 465
7372 Computational Models for Accurate Estimation of Joint Forces

Authors: Ibrahim Elnour Abdelrahman Eltayeb

Abstract:

Computational modelling is a method used to investigate joint forces during a movement. It can achieve high accuracy in the joint forces via subject-specific models. However, the construction of subject-specific models remains time-consuming and expensive. The purpose of this paper was to identify what alterations can be made to generic computational models to obtain a better estimation of the joint forces, and to appraise the impact of these alterations on the accuracy of the estimated joint forces. Different strategies of alteration were identified: the joint model, the muscle model, and the optimisation problem. All these alterations affected joint contact force accuracy, showing the potential for improving model predictions without involving costly and time-consuming medical images.

Keywords: joint force, joint model, optimisation problem, validation

Procedia PDF Downloads 170
7371 Modelling Mode Choice Behaviour Using Cloud Theory

Authors: Leah Wright, Trevor Townsend

Abstract:

Mode choice models are crucial instruments in the analysis of travel behaviour. These models show the relationship between an individual's choice of transportation mode for a given O-D pair and the individual's socioeconomic characteristics, such as household size, income level, age and/or gender, and the features of the transportation system. The most popular functional forms of these models are based on utility-based choice theory, which addresses the uncertainty in the decision-making process with the use of an error term. However, with the development of artificial intelligence, many researchers have started to take a different approach to travel demand modelling; in recent times, researchers have looked at using neural networks, fuzzy logic and rough set theory to develop improved mode choice formulas. The concept of cloud theory has recently been introduced to model decision-making under uncertainty. Unlike the previously mentioned theories, cloud theory recognises a relationship between randomness and fuzziness, two of the most common types of uncertainty. This research aims to investigate the use of cloud theory in mode choice models, and this paper presents the conceptual framework of a mode choice model using cloud theory. Merging decision-making under uncertainty with mode choice models is state of the art. The cloud theory model is expected to address the issues and concerns with the nested logit model and to improve the design of mode choice models and their use in travel demand modelling.
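The core computational object of cloud theory is the forward normal cloud generator, which turns the three numerical characteristics (Ex, En, He) into "cloud drops" with membership degrees, jointly capturing randomness and fuzziness. A minimal sketch, with toy values for a mode's perceived travel time rather than estimates from any survey:

```python
import numpy as np

def normal_cloud(ex, en, he, n_drops=1000, seed=0):
    """Forward normal cloud generator: Ex (expectation), En (entropy),
    He (hyper-entropy) jointly model fuzziness and randomness."""
    rng = np.random.default_rng(seed)
    en_prime = rng.normal(en, he, n_drops)           # randomised entropy
    x = rng.normal(ex, np.abs(en_prime))             # cloud drops
    mu = np.exp(-(x - ex) ** 2 / (2 * en_prime ** 2))  # membership degrees
    return x, mu

# e.g. perceived in-vehicle travel time of a mode, in minutes
drops, membership = normal_cloud(ex=30, en=5, he=0.8)
print(drops[:5].round(1), membership[:5].round(2))
```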

Keywords: cloud theory, decision-making, mode choice models, travel behaviour, uncertainty

Procedia PDF Downloads 388
7370 Simulation of Channel Models for Device-to-Device Application of 5G Urban Microcell Scenario

Authors: H. Zormati, J. Chebil, J. Bel Hadj Tahar

Abstract:

Next generation wireless transmission technology (5G) is expected to operate in higher frequency bands, so the development and clarification of channel models for these bands is the most important issue in radio propagation research for 5G. Accordingly, multiple urban microcellular measurements have been carried out at 60 GHz. In this paper, the collected data are uniformly analyzed with a focus on path loss (PL); the objective is to compare the simulation results of several studied channel models in order to test the performance of each one.
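Path loss analysis of such measurements is commonly done with the close-in (CI) free-space reference model, PL(d) = PL(d0) + 10 n log10(d/d0) + X_sigma. The sketch below is illustrative only: it fits the path loss exponent n and shadowing spread to synthetic 60 GHz data by least squares, with the measured values taking the place of the synthetic ones in practice.

```python
import numpy as np

rng = np.random.default_rng(0)
d = rng.uniform(5, 100, 50)                                       # Tx-Rx distances (m)
pl_measured = 68 + 10 * 2.1 * np.log10(d) + rng.normal(0, 4, 50)  # synthetic PL (dB)

d0 = 1.0                                                          # 1 m reference
fspl_d0 = 20 * np.log10(d0) + 20 * np.log10(60e9) - 147.55        # FSPL at d0, ~68 dB
L = np.log10(d / d0)
# Least-squares fit of PL - FSPL(d0) = 10 * n * log10(d/d0)
n = np.sum((pl_measured - fspl_d0) * L) / (10 * np.sum(L ** 2))
sigma = np.std(pl_measured - fspl_d0 - 10 * n * L)                # shadowing spread
print(f"path loss exponent n = {n:.2f}, shadowing sigma = {sigma:.1f} dB")
```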

Keywords: 5G, channel model, 60 GHz channel, millimeter-wave, urban microcell

Procedia PDF Downloads 319
7369 Analyzing the Effects of Supply and Demand Shocks in the Spanish Economy

Authors: José M Martín-Moreno, Rafaela Pérez, Jesús Ruiz

Abstract:

In this paper we use a small open economy Dynamic Stochastic General Equilibrium (DSGE) model for the Spanish economy to search for a deeper characterization of the determinants of Spain's macroeconomic fluctuations throughout the period 1970-2008. In order to do this, we distinguish between tradable and non-tradable goods to take into account the fact that the share of non-tradable goods in this economy is one of the largest in the world. We estimate a DSGE model with supply and demand shocks (sectorial productivity, public spending, international real interest rate and preferences) using Kalman filter techniques. We find the following results. First, our variance decomposition analysis suggests that 1) the preference shock basically accounts for private consumption volatility, 2) the idiosyncratic productivity shock accounts for non-tradable output volatility, and 3) the sectorial productivity shock along with the international interest rate largely account for tradable output volatility. Second, the model closely replicates the time path observed in the data for the Spanish economy, and finally, the model captures the main cyclical qualitative features of this economy reasonably well.
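The estimation step rests on evaluating the likelihood of a linear state-space representation with the Kalman filter. A minimal, self-contained sketch on a single AR(1) state, not the paper's multi-shock system:

```python
import numpy as np

def kalman_loglik(y, T, Z, Q, H, a0, P0):
    """Log-likelihood of y_t = Z a_t + eps, a_t = T a_{t-1} + eta via the
    Kalman filter; the building block for maximum-likelihood estimation of
    a linearised DSGE model's shock parameters."""
    a, P, ll = a0, P0, 0.0
    for yt in y:
        a, P = T @ a, T @ P @ T.T + Q                 # predict
        v = yt - Z @ a                                # innovation
        F = Z @ P @ Z.T + H
        K = P @ Z.T @ np.linalg.inv(F)
        ll += -0.5 * (np.log(np.linalg.det(2 * np.pi * F))
                      + v @ np.linalg.inv(F) @ v)
        a, P = a + K @ v, P - K @ Z @ P               # update
    return ll

# One observed series driven by a single AR(1) state (e.g. a productivity shock)
rng = np.random.default_rng(0)
states = [0.0]
for _ in range(199):
    states.append(0.9 * states[-1] + rng.normal(0, 0.1))
y = np.array(states)[:, None] + rng.normal(0, 0.05, (200, 1))
print(kalman_loglik(y, T=np.array([[0.9]]), Z=np.array([[1.0]]),
                    Q=np.array([[0.01]]), H=np.array([[0.0025]]),
                    a0=np.zeros(1), P0=np.eye(1)))
```

Maximising this log-likelihood over the shock parameters (persistence and variances) is what the Kalman filter estimation referred to above amounts to.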

Keywords: business cycle, DSGE models, Kalman filter estimation, small open economy

Procedia PDF Downloads 416
7368 Contrasted Mean and Median Models in Egyptian Stock Markets

Authors: Mai A. Ibrahim, Mohammed El-Beltagy, Motaz Khorshid

Abstract:

Emerging market return distributions have shown significant departures from normality: they are characterized by fatter tails relative to the normal distribution and exhibit levels of skewness and kurtosis inconsistent with normality. Therefore, the classical Markowitz mean-variance framework is not applicable to emerging markets, since it assumes normally-distributed returns (with zero skewness and kurtosis) and a quadratic utility function. Moreover, mean-variance analysis can be used in cases of moderate non-normality, where it still provides a good approximation of expected utility, but it may be ineffective under large departures from normality. Higher moment models and median models have been suggested in the literature for asset allocation in this case. Higher moment models account for the insufficiency of describing a portfolio by only its first two moments, while the median has been introduced as a robust statistic that is less affected by outliers than the mean. Tail risk measures such as Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR) have been introduced instead of variance to capture the effect of risk. In this research, higher moment models including Mean-Variance-Skewness (MVS) and Mean-Variance-Skewness-Kurtosis (MVSK) are formulated as single-objective non-linear programming (NLP) problems, and median models including Median-Value-at-Risk (MedVaR) and Median-Mean Absolute Deviation (MedMAD) are formulated as single-objective mixed-integer linear programming (MILP) problems. The higher moment models and median models are compared to some benchmark portfolios and tested on real financial data from the main Egyptian index, EGX30. The results show that all the median models outperform the higher moment models, as they provide higher final wealth for the investor over the entire period of study. In addition, the results confirm the inapplicability of the classical Markowitz mean-variance framework to the Egyptian stock market, as it resulted in very low realized profits.
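The higher moment formulation is a small optimisation problem. The sketch below is an illustrative long-only Mean-Variance-Skewness allocation on synthetic fat-tailed returns, solved with scipy; the risk-aversion and skewness-preference weights are arbitrary assumptions, not the paper's calibration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
R = rng.standard_t(df=4, size=(500, 5)) * 0.01   # fat-tailed daily returns, 5 assets

mu = R.mean(axis=0)
cov = np.cov(R.T)

def neg_objective(w, lam=3.0, gam=1.0):
    """MVS utility: reward mean and portfolio skewness, penalise variance."""
    port = R @ w
    skew = np.mean((port - port.mean()) ** 3)
    return -(w @ mu - lam * (w @ cov @ w) + gam * skew)

cons = ({"type": "eq", "fun": lambda w: w.sum() - 1},)   # fully invested
bounds = [(0, 1)] * 5                                    # long-only
res = minimize(neg_objective, x0=np.full(5, 0.2), bounds=bounds, constraints=cons)
print(res.x.round(3))                                    # optimal weights
```

The median models mentioned above replace this smooth objective with order-statistic constraints, which is why they are cast as MILPs rather than NLPs.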

Keywords: Egyptian stock exchange, emerging markets, higher moment models, median models, mixed-integer linear programming, non-linear programming

Procedia PDF Downloads 315