Search results for: AR linear estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5045

4415 On the Construction of Lightweight Circulant Maximum Distance Separable Matrices

Authors: Qinyi Mei, Li-Ping Wang

Abstract:

MDS matrices are of great significance in the design of block ciphers and hash functions. In the present paper, we investigate the problem of constructing MDS matrices that are both lightweight and low-latency. We propose a new method of constructing lightweight MDS matrices using circulant matrices, which can be implemented efficiently in hardware. Furthermore, we provide circulant MDS matrices with as few bit XOR operations as possible for the classical dimensions 4 × 4 and 8 × 8 over the space of linear transformations of the finite field F₂⁴. In contrast to previous constructions of MDS matrices, our constructions achieve fewer XORs.

Keywords: linear diffusion layer, circulant matrix, lightweight, maximum distance separable (MDS) matrix

Procedia PDF Downloads 410
4414 Reliable Consensus Problem for Multi-Agent Systems with Sampled-Data

Authors: S. H. Lee, M. J. Park, O. M. Kwon

Abstract:

In this paper, reliable consensus of multi-agent systems with sampled-data is investigated. By using a suitable Lyapunov-Krasovskii functional and techniques such as the Wirtinger inequality, the Schur complement, and the Kronecker product, consensus criteria for these systems are obtained by solving a set of Linear Matrix Inequalities (LMIs). A numerical example is included to show the effectiveness of the proposed criteria.

Keywords: multi-agent, linear matrix inequalities (LMIs), Kronecker product, sampled-data, Lyapunov method

Procedia PDF Downloads 528
4413 Time of Week Intensity Estimation from Interval Censored Data with Application to Police Patrol Planning

Authors: Jiahao Tian, Michael D. Porter

Abstract:

Law enforcement agencies are tasked with crime prevention and crime reduction under limited resources. Having an accurate temporal estimate of the crime rate would be valuable for achieving such a goal. However, estimation is usually complicated by the interval-censored nature of crime data. We cast the problem of intensity estimation as a Poisson regression and use an EM algorithm to estimate the parameters. Two special penalties are added that provide smoothness over the time of day and day of the week. The approach presented here provides accurate intensity estimates and can also uncover day-of-week clusters that share the same intensity patterns. Anticipating where and when crimes might occur is a key element of successful policing strategies. However, this task is complicated by the presence of interval-censored data. Censored data refers to data for which the event time is only known to lie within an interval instead of being observed exactly. This type of data is prevalent in the field of criminology because of the absence of victims for certain types of crime. Despite its importance, research in the temporal analysis of crime has lagged behind the spatial component. Inspired by the success of solving crime-related problems with a statistical approach, we propose a statistical model for the temporal intensity estimation of crime with censored data. The model is built on Poisson regression and has special penalty terms added to the likelihood. An EM algorithm was derived to obtain maximum likelihood estimates, and the resulting model shows superior performance to the competing model. Our research is in line with the Smart Policing Initiative (SPI) proposed by the Bureau of Justice Assistance (BJA) as an effort to support law enforcement agencies in building evidence-based, data-driven law enforcement tactics. The goal is to identify strategic approaches that are effective in crime prevention and reduction.
In our case, we allow agencies to deploy their resources for a relatively short period of time to achieve the maximum level of crime reduction. By analyzing a particular area within cities where data are available, our proposed approach can provide not only an accurate estimate of intensities for the time unit considered but also a time-varying crime incidence pattern. Both will be helpful in allocating limited resources, either by improving the existing patrol plan in light of the discovered day-of-week clusters or by directing any extra resources that become available.
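The E-step/M-step scheme described in the abstract can be sketched on a toy grid of time bins (hypothetical events; the smoothness penalties over time-of-day and day-of-week, and the day-of-week clustering, are omitted for brevity):

```python
import numpy as np

# Sketch of EM for interval-censored Poisson intensity estimation.
# Each event is known only to lie inside an interval of bins.
def em_intensity(intervals, n_bins, exposure=1.0, n_iter=200):
    """intervals: (start_bin, end_bin) pairs, end exclusive; each pair is
    one event known only to lie somewhere inside its interval."""
    lam = np.full(n_bins, len(intervals) / (n_bins * exposure))
    for _ in range(n_iter):
        expected = np.zeros(n_bins)
        for s, e in intervals:
            w = lam[s:e]
            expected[s:e] += w / w.sum()   # E-step: fractional allocation
        lam = expected / exposure          # M-step: rate = expected count / exposure
    return lam

# two events observed exactly in bin 3, one censored over bins 0..5
rates = em_intensity([(3, 4), (3, 4), (0, 6)], n_bins=6)
```

The censored event's probability mass migrates toward bins that exactly observed events already support, which is the mechanism the penalized model exploits at the weekly scale.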

Keywords: cluster detection, EM algorithm, interval censoring, intensity estimation

Procedia PDF Downloads 66
4412 Sustainability of Green Supply Chain for a Steel Industry Using Mixed Linear Programming Model

Authors: Ameen Alawneh

Abstract:

The cost of material management across the supply chain is a major contributor to the overall cost of goods in many companies, in both the manufacturing and service sectors. This fact, combined with fierce competition, requires supply chains to be more efficient and cost effective. It also requires companies to improve the quality of their products and services, increase the effectiveness of supply chain operations, focus on customer needs, and reduce waste and costs across the supply chain. As a heavy industry, steel manufacturing companies in particular are nowadays required to be more environmentally conscious due to their contribution to the air, soil, and water pollution that results from emissions and wastes across their supply chains. Steel companies are increasingly looking for methods to cut the costs of their operations and provide extra value to their customers in order to stay competitive under the current low margins. In this research, we develop a green framework model for the sustainability of a steel company supply chain using mixed integer linear programming.
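A mixed integer model of this kind combines binary decisions (e.g. which facilities to operate) with continuous or integer flows under cost and emission constraints. The toy instance below uses invented numbers and is solved by brute force purely to make the model structure concrete; a real study would use a MILP solver:

```python
from itertools import product

# Toy green-supply-chain MILP (illustrative numbers, not the paper's):
# open facilities (binary y) and route integer tonnage x to them,
# minimizing fixed + transport cost subject to demand and an emissions cap.
FIXED = [40, 60]        # fixed opening cost per facility
COST = [5, 3]           # transport cost per ton
EMIT = [2, 1]           # emissions per ton
DEMAND, EMIT_CAP = 10, 16

best = None
for y1, y2 in product([0, 1], repeat=2):
    caps = [8 * y1, 8 * y2]            # each open facility handles <= 8 tons
    for x1 in range(caps[0] + 1):
        x2 = DEMAND - x1
        if not (0 <= x2 <= caps[1]):
            continue                   # demand must be met within capacity
        if EMIT[0] * x1 + EMIT[1] * x2 > EMIT_CAP:
            continue                   # green constraint: emissions cap
        cost = FIXED[0] * y1 + FIXED[1] * y2 + COST[0] * x1 + COST[1] * x2
        if best is None or cost < best[0]:
            best = (cost, (y1, y2, x1, x2))
```

Here both facilities must open (demand exceeds a single capacity), and the emissions cap forces some flow onto the costlier, cleaner route.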

Keywords: supply chain, mixed integer linear programming, heavy industry, water pollution

Procedia PDF Downloads 448
4411 Decomposition of Third-Order Discrete-Time Linear Time-Varying Systems into Its Second- and First-Order Pairs

Authors: Mohamed Hassan Abdullahi

Abstract:

Decomposition is used as a synthesis tool in several physical systems. It can also be used for tearing and restructuring, which are central to large-scale system analysis. On the other hand, the commutativity of series-connected systems has attracted the interest of researchers, and its advantages have been emphasized in the literature. This paper looks into the necessary conditions for decomposing any third-order discrete-time linear time-varying system into a commutative pair of first- and second-order systems. Additional requirements are derived in the case of nonzero initial conditions. MATLAB simulations are used to verify the findings. The work is unique and is being published for the first time. It is critical from the standpoints of synthesis and/or design, because many design techniques in engineering systems rely on tearing and reconstruction, the process of putting together simple components to create a finished product. Furthermore, it is demonstrated that, regarding sensitivity to initial conditions, some combinations may be better than others. The results of this work can be extended to the decomposition of fourth-order discrete-time linear time-varying systems into lower-order commutative pairs, either as two second-order commutative subsystems or as one first-order and one third-order commutative subsystem.

Keywords: commutativity, decomposition, discrete time-varying systems, systems

Procedia PDF Downloads 110
4410 Inverse Scattering of Two-Dimensional Objects Using an Enhancement Method

Authors: A.R. Eskandari, M.R. Eskandari

Abstract:

A 2D complete identification algorithm for dielectric and multiple objects immersed in air is presented. The employed technique consists of initially retrieving the shape and position of the scattering object using a linear sampling method and then determining the electric permittivity and conductivity of the scatterer using adjoint sensitivity analysis. This inversion algorithm results in high computational speed and efficiency, and it can be generalized for any scatterer structure. Also, this method is robust with respect to noise. The numerical results clearly show that this hybrid approach provides accurate reconstructions of various objects.

Keywords: inverse scattering, microwave imaging, two-dimensional objects, Linear Sampling Method (LSM)

Procedia PDF Downloads 387
4409 Effects of Wind Load on the Tank Structures with Various Shapes and Aspect Ratios

Authors: Doo Byong Bae, Jae Jun Yoo, Il Gyu Park, Choi Seowon, Oh Chang Kook

Abstract:

There are several wind load provisions for evaluating the wind response of tank structures, such as API, Eurocode, etc. The assessment of wind action applying these provisions is made by performing finite element analysis using both linear bifurcation analysis and geometrically nonlinear analysis. By comparing the pressure patterns obtained from the analysis with the results of wind tunnel tests, the most appropriate wind load criteria will be recommended.

Keywords: wind load, finite element analysis, linear bifurcation analysis, geometrically nonlinear analysis

Procedia PDF Downloads 637
4408 An Efficient Fundamental Matrix Estimation for Moving Object Detection

Authors: Yeongyu Choi, Ju H. Park, S. M. Lee, Ho-Youl Jung

Abstract:

In this paper, an improved method for estimating the fundamental matrix is proposed. The method is applied effectively to monocular-camera-based moving object detection. The method consists of corner point detection, motion estimation of moving objects, and fundamental matrix calculation. The corner points are obtained using the Harris corner detector, and the motions of moving objects are calculated with the pyramidal Lucas-Kanade optical flow algorithm. Through epipolar geometry analysis using RANSAC, the fundamental matrix is calculated. In this method, we have improved the performance of moving object detection by using two threshold values that determine whether a point is an inlier or an outlier. Through simulations, we compare the performances obtained by varying the two threshold values.
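The fundamental-matrix step at the core of this pipeline can be sketched with the classic eight-point algorithm (the Harris/optical-flow front end and the paper's two-threshold RANSAC loop would wrap this single step; in practice the points should also be Hartley-normalized before forming the design matrix, which is omitted here for brevity):

```python
import numpy as np

# Minimal eight-point estimate of the fundamental matrix F such that
# x2^T F x1 = 0 for matched homogeneous points x1, x2.
def eight_point(p1, p2):
    """p1, p2: (N, 2) arrays of matched image points, N >= 8."""
    A = np.column_stack([
        p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
        p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
        p1[:, 0], p1[:, 1], np.ones(len(p1)),
    ])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)            # least-squares null vector of A
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0.0                          # enforce the rank-2 constraint
    return U @ np.diag(S) @ Vt
```

A RANSAC wrapper would repeatedly fit F to random 8-point subsets and score matches by their epipolar residuals against the two thresholds.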

Keywords: corner detection, optical flow, epipolar geometry, RANSAC

Procedia PDF Downloads 408
4398 Development and Validation of First Derivative Method and Artificial Neural Network for Simultaneous Spectrophotometric Determination of Two Closely Related Antioxidant Nutraceuticals in Their Binary Mixture

Authors: Mohamed Korany, Azza Gazy, Essam Khamis, Marwa Adel, Miranda Fawzy

Abstract:

Background: Two new, simple and specific methods were developed and validated in accordance with ICH guidelines: first, a zero-crossing first-derivative technique and, second, a chemometric-assisted spectrophotometric artificial neural network (ANN). Both methods were used for the simultaneous estimation of two closely related antioxidant nutraceuticals, Coenzyme Q10 (Q), also known as ubidecarenone or ubiquinone-10, and Vitamin E (E), alpha-tocopherol acetate, in their pharmaceutical binary mixture. Results: For the first method, by applying the first derivative, Q and E were alternatively determined, each at the zero-crossing of the other. The D1 amplitudes of Q and E, at 285 nm and 235 nm respectively, were recorded and correlated to their concentrations. The calibration curve is linear over the concentration ranges of 10-60 and 5.6-70 μg mL⁻¹ for Q and E, respectively. For the second method, an ANN (as a multivariate calibration method) was developed and applied for the simultaneous determination of both analytes. A training set of 90 different synthetic mixtures containing Q and E, in wide concentration ranges of 0-100 µg/mL and 0-556 µg/mL respectively, was prepared in ethanol. The absorption spectra of the training set were recorded in the spectral region of 230–300 nm. A gradient-descent back-propagation ANN chemometric calibration was computed by relating the concentration set (x-block) to the corresponding absorption data (y-block). Another set of 45 synthetic mixtures of the two drugs, in a defined range, was used to validate the proposed network. Neither chemical separation, a preparation stage, nor mathematical graphical treatment was required. Conclusions: The proposed methods were successfully applied for the assay of Q and E in laboratory-prepared mixtures and a combined pharmaceutical tablet, with excellent recoveries.
The ANN method was superior to the derivative technique, as the former could determine both drugs under non-linear experimental conditions. It also offers rapidity, high accuracy, and savings in effort and cost, and it requires no specialist analyst for its application. Although the ANN technique needed a large training set, it is the method of choice for the routine analysis of Q and E tablets. No interference was observed from common pharmaceutical additives. The results of the two methods were compared with each other.
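The zero-crossing idea can be illustrated on synthetic Gaussian "spectra" (band positions and widths below are invented, not the measured Q/E spectra): at the wavelength where the derivative of pure E crosses zero, the derivative amplitude of a Q+E mixture depends on Q alone, which yields a linear calibration for Q.

```python
import numpy as np

# Zero-crossing first-derivative calibration on synthetic spectra.
wl = np.linspace(220.0, 320.0, 1001)                  # wavelength grid, nm
gauss = lambda center, width: np.exp(-((wl - center) / width) ** 2)
sQ, sE = gauss(275.0, 15.0), gauss(245.0, 12.0)       # unit-concentration spectra

dQ = np.gradient(sQ, wl)
i0 = int(sE.argmax())          # derivative of pure E crosses zero at its peak

def d1_amplitude(cQ, cE):
    """First-derivative amplitude of the mixture at E's zero-crossing:
    proportional to cQ, independent of cE."""
    mix = cQ * sQ + cE * sE
    return np.gradient(mix, wl)[i0]
```

Doubling cQ doubles the amplitude at i0 regardless of cE, which is exactly the linearity the method's calibration curves rely on.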

Keywords: coenzyme Q10, vitamin E, chemometry, quantitative analysis, first derivative spectrophotometry, artificial neural network

Procedia PDF Downloads 446
4406 Linear Complementarity Based Approach for Unilateral Frictional Contact between Wheel and Beam

Authors: Muskaan Sethi, Arnab Banerjee, Bappaditya Manna

Abstract:

The present paper aims to investigate a suitable contact formulation for a wheel rolling over a flexible beam. A Linear Complementarity Problem (LCP) based approach has been adopted to simulate the contact dynamics of a rigid wheel traversing a flexible, simply supported Euler-Bernoulli beam. The adopted methodology is suitable for incorporating the effect of the frictional force acting at the wheel-beam interface. Moreover, the possibility of a gap forming between the two bodies has also been considered. The present method is based on a unilateral contact assumption, which assumes that no penetration occurs when the two bodies come into contact. This assumption helps to predict the contact between wheels and beams in a more practical sense. The proposed methodology is validated against previously published results and is found to be in good agreement. Further, the method is applied to simulate the contact between wheels and beams for various railway configurations. Moreover, different parametric studies are conducted to examine the contact dynamics between the wheel and beam more thoroughly.

Keywords: contact dynamics, linear complementarity problem, railway dynamics, unilateral contact

Procedia PDF Downloads 101
4405 Conceptional Design of a Hyperloop Capsule with Linear Induction Propulsion System

Authors: Ahmed E. Hodaib, Samar F. Abdel Fattah

Abstract:

High-speed transportation is a growing concern. To develop high-speed rail and to increase high-speed efficiencies, the idea of the Hyperloop was introduced. The challenge is to overcome the difficulties of managing friction and air resistance, which become substantial as vehicles approach high speeds. In this paper, we present the methodologies behind the capsule design, which received a design concept innovation award at the SpaceX competition in January 2016. MATLAB scripts are written for the levitation and propulsion calculations and iterations. Computational Fluid Dynamics (CFD) is used to simulate the air flow around the capsule, considering the effects of the axial-flow air compressor and the levitation cushion on the air flow. The design procedure of a single-sided linear induction motor is analyzed in detail, and its geometric and magnetic parameters are determined. A structural design is introduced, and the Finite Element Method (FEM) is used to analyze the stresses in different parts. The configuration and arrangement of the components are illustrated. Moreover, comments on manufacturing are made.

Keywords: high-speed transportation, hyperloop, railways transportation, single-sided linear induction Motor (SLIM)

Procedia PDF Downloads 276
4404 Construction of Finite Woven Frames through Bounded Linear Operators

Authors: A. Bhandari, S. Mukherjee

Abstract:

Two frames in a Hilbert space are called woven or weaving if all possible merge combinations between them generate frames of the Hilbert space with uniform frame bounds. Weaving frames are powerful tools in wireless sensor networks which require distributed data processing. Considering the practical applications, this article deals with finite woven frames. We provide methods of constructing finite woven frames, in particular, bounded linear operators are used to construct woven frames from a given frame. Several examples are discussed. We also introduce the notion of woven frame sequences and characterize them through the concepts of gaps and angles between spaces.
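The weaving condition can be checked exhaustively for small finite frames. The sketch below uses two illustrative frames in R² (rows are frame vectors; these are not examples from the paper) and verifies that every mixed selection, taking the i-th vector from either frame, keeps a positive lower frame bound, uniformly over all selections:

```python
import numpy as np
from itertools import product

# Finite check of the weaving property for two small frames in R^2.
F = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])    # frame 1 (rows)
G = np.array([[2.0, 0.0], [0.0, 2.0], [1.0, -1.0]])   # frame 2 (rows)

def frame_bounds(rows):
    """Optimal frame bounds A, B from the singular values of the
    analysis matrix: A = s_min^2, B = s_max^2."""
    s = np.linalg.svd(rows, compute_uv=False)
    return s[-1] ** 2, s[0] ** 2

all_bounds = []
for pick in product([0, 1], repeat=len(F)):            # choose row i from F or G
    rows = np.array([F[i] if c == 0 else G[i] for i, c in enumerate(pick)])
    all_bounds.append(frame_bounds(rows))

A_min = min(a for a, _ in all_bounds)   # uniform lower weaving bound
```

A strictly positive `A_min` over all 2³ merge combinations certifies that these two frames are woven with uniform bounds.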

Keywords: frames, woven frames, gap, angle

Procedia PDF Downloads 193
4403 Convergence Analysis of Training Two-Hidden-Layer Partially Over-Parameterized ReLU Networks via Gradient Descent

Authors: Zhifeng Kong

Abstract:

Over-parameterized neural networks have attracted a great deal of attention in recent deep learning theory research, as they challenge the classic perspective that models with excessive parameters over-fit, and they have achieved empirical success in various settings. While a number of theoretical works have been presented to demystify the properties of such models, their convergence properties are still far from thoroughly understood. In this work, we study the convergence properties of training two-hidden-layer partially over-parameterized fully connected networks with the Rectified Linear Unit (ReLU) activation via gradient descent. To our knowledge, this is the first theoretical work to study the convergence properties of deep over-parameterized networks without the equally-wide-hidden-layer assumption and other unrealistic assumptions. We provide a probabilistic lower bound on the widths of the hidden layers and prove a linear convergence rate for gradient descent. We also conduct experiments on synthetic and real-world datasets to validate our theory.

Keywords: over-parameterization, rectified linear units ReLU, convergence, gradient descent, neural networks

Procedia PDF Downloads 142
4402 A Study of User Awareness and Attitudes Towards Civil-ID Authentication in Oman’s Electronic Services

Authors: Raya Al Khayari, Rasha Al Jassim, Muna Al Balushi, Fatma Al Moqbali, Said El Hajjar

Abstract:

This study utilizes linear regression analysis to investigate the correlation between user account passwords and the probability of civil ID exposure, offering statistical insights into civil ID security. The study employs multiple linear regression (MLR) analysis to further investigate the elements that influence consumers’ views of civil ID security. This aims to increase awareness and improve preventive measures. The results obtained from the MLR analysis provide a thorough comprehension and can guide specific educational and awareness campaigns aimed at promoting improved security procedures. In summary, the study’s results offer significant insights for improving existing security measures and developing more efficient tactics to reduce risks related to civil ID security in Oman. By identifying key factors that impact consumers’ perceptions, organizations can tailor their strategies to address vulnerabilities effectively. Additionally, the findings can inform policymakers on potential regulatory changes to enhance civil ID security in the country.
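A multiple linear regression of the kind the study employs can be sketched on made-up survey-like data (both the features, password length and reuse count, and the coefficients below are hypothetical, purely for illustration; they are not the study's variables or results):

```python
import numpy as np

# Minimal MLR fit by ordinary least squares on synthetic data.
rng = np.random.default_rng(1)
n = 200
pw_len = rng.integers(6, 20, n).astype(float)    # hypothetical predictor 1
reuse = rng.integers(0, 10, n).astype(float)     # hypothetical predictor 2
risk = 5.0 - 0.2 * pw_len + 0.6 * reuse + rng.normal(0.0, 0.1, n)

X = np.column_stack([np.ones(n), pw_len, reuse])       # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, risk, rcond=None)        # [intercept, b_len, b_reuse]
```

The recovered coefficients approximate the generating ones, which is the sense in which MLR quantifies how each factor shifts perceived risk.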

Keywords: civil-id disclosure, awareness, linear regression, multiple regression

Procedia PDF Downloads 57
4401 Cars Redistribution Optimization Problem in the Free-Float Car-Sharing

Authors: Amine Ait-Ouahmed, Didier Josselin, Fen Zhou

Abstract:

Free-float car-sharing is a one-way car-sharing service where cars are available anytime and anywhere in the streets, so that no dedicated stations are needed. This means that after driving a car, you can park it anywhere. This car-sharing system creates an imbalanced car distribution across cities, which can be regulated by staff agents through the redistribution of cars. In this paper, we aim to solve the joint car-reservation and agent-traveling problem so that the number of successful car reservations is maximized. Besides, we also aim to minimize the distance traveled by agents for car redistribution. To this end, we present a mixed integer linear programming formulation for the car-sharing problem.

Keywords: one-way car-sharing, vehicle redistribution, car reservation, linear programming

Procedia PDF Downloads 348
4400 In and Out-Of-Sample Performance of Non-Symmetric Models in International Price Differential Forecasting in a Commodity Country Framework

Authors: Nicola Rubino

Abstract:

This paper presents an analysis of the nominal exchange rate movements of a group of commodity-exporting countries relative to the US dollar. Using a series of unrestricted self-exciting threshold autoregressive (SETAR) models, we model and evaluate sixteen national CPI price differentials relative to the US dollar CPI. Out-of-sample forecast accuracy is evaluated through mean absolute error measures computed over 253 months of rolling-window forecasts, and the comparison is extended to three additional models, namely a logistic smooth transition regression (LSTAR), an additive nonlinear autoregressive model (AAR), and a simple linear neural network model (NNET). Our preliminary results confirm the presence of some form of TAR nonlinearity in the majority of the countries analyzed, with a relatively higher goodness of fit, with respect to the linear AR(1) benchmark, in five of the sixteen countries considered. Although no model appears to statistically prevail over the others, our final out-of-sample forecast exercise shows that SETAR models tend to have rather poor relative forecasting performance, especially when compared to alternative non-linear specifications. Finally, by analyzing the implied half-lives of the estimated coefficients, our results confirm the presence, in the spirit of arbitrage band adjustment, of band convergence with inner unit root behaviour in five of the sixteen countries analyzed.
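A bare-bones two-regime SETAR(1) fit, with the threshold found by grid search, can be sketched on a synthetic self-exciting series (delay 1, no intercepts; the paper's SETAR specifications and the rolling-window forecast comparison are much richer):

```python
import numpy as np

# Simulate a two-regime SETAR(1): strong persistence below the threshold,
# weak persistence above it.
rng = np.random.default_rng(2)
y = np.zeros(500)
for t in range(1, 500):
    phi = 0.9 if y[t - 1] <= 0.0 else 0.2      # true regime-dependent slope
    y[t] = phi * y[t - 1] + rng.normal(0, 0.5)

x, z = y[:-1], y[1:]                           # lagged regressor / target

def sse_at(tau):
    """Total SSE when each regime gets its own least-squares AR(1) slope."""
    total = 0.0
    for mask in (x <= tau, x > tau):
        if mask.sum() < 20:                    # require enough points per regime
            return np.inf
        phi_hat = (x[mask] @ z[mask]) / (x[mask] @ x[mask])
        total += np.sum((z[mask] - phi_hat * x[mask]) ** 2)
    return total

grid = np.quantile(x, np.linspace(0.15, 0.85, 71))   # candidate thresholds
tau_hat = min(grid, key=sse_at)
```

Splitting the sample at the fitted threshold always fits at least as well in-sample as a single linear AR(1); the paper's point is that this in-sample advantage need not survive out of sample.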

Keywords: transition regression model, real exchange rate, nonlinearities, price differentials, PPP, commodity points

Procedia PDF Downloads 278
4399 A Combined Error Control with Forward Euler Method for Dynamical Systems

Authors: R. Vigneswaran, S. Thilakanathan

Abstract:

Variable time-stepping algorithms for solving dynamical systems perform poorly in long-time computations that pass close to a fixed point. To overcome this difficulty, several authors have considered phase space error controls for the numerical simulation of dynamical systems. In one generalized phase space error control, a step-size selection scheme was proposed that allows the error control to be incorporated into the standard adaptive algorithm as an extra constraint at negligible extra computational cost. For this generalized error control, the forward Euler method applied to linear systems whose coefficient matrix has real, negative eigenvalues has already been analyzed. In this paper, that result is extended to linear systems whose coefficient matrix has complex eigenvalues with negative real parts. Some theoretical results are obtained, and numerical experiments are carried out to support them.
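The adaptive loop the paper builds on can be sketched with plain step-doubling error control around forward Euler, applied to a linear system whose eigenvalues are complex with negative real parts, i.e. the setting analyzed above (the paper's generalized phase-space error control adds an extra constraint to this kind of loop; only the standard local-error test is shown here):

```python
import numpy as np

# Step-doubling adaptive forward Euler for y' = A y.
A = np.array([[-0.1, 1.0],
              [-1.0, -0.1]])          # eigenvalues -0.1 +/- 1j

def integrate(y, t_end, h=0.1, tol=1e-4):
    t = 0.0
    while t < t_end:
        h = min(h, t_end - t)
        y1 = y + h * (A @ y)                   # one full Euler step
        ym = y + 0.5 * h * (A @ y)
        y2 = ym + 0.5 * h * (A @ ym)           # two half steps
        err = np.linalg.norm(y2 - y1)          # local error estimate
        if err <= tol:
            t, y = t + h, y2                   # accept the step
        # standard step-size update (safety factor 0.9, clamped growth)
        h *= 0.9 * min(2.0, max(0.2, np.sqrt(tol / (err + 1e-16))))
    return y

y_end = integrate(np.array([1.0, 0.0]), 5.0)
```

As the trajectory spirals toward the fixed point at the origin, an absolute error control like this one keeps taking ever larger steps, which is precisely the regime where phase-space error controls improve matters.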

Keywords: adaptivity, fixed point, long time simulations, stability, linear system

Procedia PDF Downloads 312
4398 A Linear Programming Approach to Assist Roster Construction Under a Salary Cap

Authors: Alex Contarino

Abstract:

Professional sports leagues often have a “free agency” period, during which teams may sign players with expiring contracts. To promote parity, many leagues operate under a salary cap that limits the amount teams can spend on players' salaries in a given year. Similarly, in fantasy sports leagues, salary cap drafts are a popular method for selecting players. In order to sign a free agent in either setting, teams must bid against one another to buy the player's services while ensuring the sum of their players' salaries stays below the salary cap. This paper models the bidding process for a free agent as a constrained optimization problem that can be solved using linear programming. The objective is to determine the largest bid that a team should offer the player, subject to the constraint that the value of signing the player must exceed the value of using the salary cap elsewhere. Iteratively solving this optimization problem for each available free agent provides teams with an effective framework for maximizing the talent on their rosters. The utility of this approach is demonstrated for team sport roster construction and fantasy sport drafts, using recent data sets from both settings.
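One illustrative reading of this bidding rule, with entirely hypothetical numbers: the maximum bid for a free agent equals the cap minus the cheapest spend that still secures a required amount of value from the rest of the market, where position i returns v[i] units of value per $1M spent, up to a spending limit u[i]:

```python
import numpy as np
from scipy.optimize import linprog

# Toy LP: cheapest way to buy V_MIN units of roster value elsewhere,
# which caps the rational bid for the free agent. (Figures are invented.)
CAP, V_MIN = 100.0, 30.0
v = np.array([1.2, 0.9, 0.7])       # value per $1M at each market position
u = np.array([15.0, 25.0, 40.0])    # max $1M spendable per position

# minimize total spend 1'x  subject to  v @ x >= V_MIN,  0 <= x <= u
res = linprog(c=np.ones(3), A_ub=-v.reshape(1, -1), b_ub=[-V_MIN],
              bounds=list(zip(np.zeros(3), u)))
max_bid = CAP - res.fun
```

The LP spends first where value per dollar is highest, and the leftover cap is the most the team should rationally offer the free agent.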

Keywords: linear programming, optimization, roster management, salary cap

Procedia PDF Downloads 111
4397 On the Algorithmic Iterative Solutions of Conjugate Gradient, Gauss-Seidel and Jacobi Methods for Solving Systems of Linear Equations

Authors: Hussaini Doko Ibrahim, Hamilton Cyprian Chinwenyi, Henrietta Nkem Ude

Abstract:

In this paper, efforts were made to examine and compare the algorithmic iterative solutions of the conjugate gradient method against other methods, such as the Gauss-Seidel and Jacobi approaches, for solving systems of linear equations of the form Ax=b, where A is a real n×n symmetric and positive definite matrix. We performed the algorithmic iterative steps and obtained analytical solutions of a typical 3×3 symmetric and positive definite matrix using the three methods described in this paper (Gauss-Seidel, Jacobi, and conjugate gradient), respectively. From the results obtained, we found that the conjugate gradient method converges to the exact solution in fewer iterative steps than the other two methods, which required many iterations and much time while only tending toward the exact solution.
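The comparison can be reproduced in miniature on an illustrative 3×3 SPD system (not the paper's example): conjugate gradient terminates within n = 3 steps in exact arithmetic, while Jacobi needs many more sweeps to reach the same residual tolerance.

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])   # symmetric positive definite
b = np.array([1.0, 2.0, 3.0])

def jacobi(A, b, tol=1e-10, max_iter=500):
    x, D = np.zeros_like(b), np.diag(A)
    for k in range(1, max_iter + 1):
        x = (b - (A @ x - D * x)) / D          # x_new = D^-1 (b - (A - D) x)
        if np.linalg.norm(b - A @ x) < tol:
            return x, k
    return x, max_iter

def conjugate_gradient(A, b, tol=1e-10):
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    for k in range(1, len(b) + 1):             # exact in <= n steps
        alpha = (r @ r) / (p @ A @ p)
        x = x + alpha * p
        r_new = r - alpha * (A @ p)
        if np.linalg.norm(r_new) < tol:
            return x, k
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x, len(b)

x_j, k_j = jacobi(A, b)
x_cg, k_cg = conjugate_gradient(A, b)
```

For this matrix, the Jacobi iteration matrix has spectral radius 1/2, so the error only halves per sweep, whereas CG finishes in at most three steps.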

Keywords: conjugate gradient, linear equations, symmetric and positive definite matrix, Gauss-Seidel, Jacobi, algorithm

Procedia PDF Downloads 149
4396 Cycle Number Estimation Method on Fatigue Crack Initiation Using Voronoi Tessellation and the Tanaka Mura Model

Authors: Mohammad Ridzwan Bin Abd Rahim, Siegfried Schmauder, Yupiter HP Manurung, Peter Binkele, Meor Iqram B. Meor Ahmad, Kiarash Dogahe

Abstract:

This paper deals with short crack initiation in the material P91 under cyclic loading at two different temperatures, concluding with the estimation of the short-crack-initiation Wöhler (S/N) curve. An artificial but representative model microstructure was generated using Voronoi tessellation and the Finite Element Method, and the non-uniform stress distribution was calculated accordingly. The number of cycles needed for crack initiation is estimated on the basis of the stress distribution in the model by applying the physically based Tanaka-Mura model. Initial results show that the number of cycles to crack initiation is strongly correlated with temperature.

Keywords: short crack initiation, P91, Wöhler curve, Voronoi tessellation, Tanaka-Mura model

Procedia PDF Downloads 101
4395 Residual Lifetime Estimation for Weibull Distribution by Fusing Expert Judgements and Censored Data

Authors: Xiang Jia, Zhijun Cheng

Abstract:

The residual lifetime of a product is the operating time between the current time and the time point at which failure happens. Residual lifetime estimation is rather important in reliability analysis. To predict the residual lifetime, it is necessary to assume or verify a particular distribution that the lifetime of the product follows, and the two-parameter Weibull distribution is frequently adopted to describe lifetimes in reliability engineering. Due to time constraints and cost reduction, a life testing experiment is usually terminated before all the units have failed, so censored data is usually collected. In addition, other information can also be obtained for reliability analysis; expert judgements are considered, as it is common that experts can provide useful information concerning the reliability. Therefore, in this paper the residual lifetime is estimated for the Weibull distribution by fusing the censored data and expert judgements. First, closed-form expressions for the point estimate and confidence interval of the residual lifetime under the Weibull distribution are presented. Next, the expert judgements are regarded as prior information, and a way to determine the prior distribution of the Weibull parameters is developed. For completeness, both the case of a single expert judgement and the case of multiple expert judgements are addressed. Further, the posterior distribution of the Weibull parameters is derived. Considering that it is difficult to derive the posterior distribution of the residual lifetime, a sample-based method is proposed to generate posterior samples of the Weibull parameters based on the Markov Chain Monte Carlo (MCMC) method, and these samples are used to obtain the Bayes estimate and credible interval for the residual lifetime. Finally, an illustrative example is discussed to show the application. It demonstrates that the proposed method is rather simple, satisfactory, and robust.
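Once Weibull parameters are in hand (however they are estimated), the residual-lifetime quantities reduce to conditional reliability. A small sketch using numerical integration (the Bayesian fusion of expert judgement and censored data itself is not reproduced here):

```python
import numpy as np

def weibull_reliability(t, shape, scale):
    """Two-parameter Weibull survival function R(t)."""
    return np.exp(-((t / scale) ** shape))

def mean_residual_life(t0, shape, scale, horizon=50.0, n=200_001):
    """MRL(t0) = ( integral_{t0}^inf R(u) du ) / R(t0), with the upper
    limit truncated at a wide multiple of the scale and the integral
    done by the trapezoidal rule."""
    u = np.linspace(t0, t0 + horizon * scale, n)
    r = weibull_reliability(u, shape, scale)
    integral = np.sum((r[1:] + r[:-1]) * 0.5 * np.diff(u))
    return integral / weibull_reliability(t0, shape, scale)
```

Two sanity checks: with shape 1 the Weibull is exponential and memoryless, so the MRL equals the scale at any age; with shape above 1 the MRL at age zero is just the Weibull mean.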

Keywords: expert judgements, information fusion, residual lifetime, Weibull distribution

Procedia PDF Downloads 142
4394 Effective Dose and Size Specific Dose Estimation with and without Tube Current Modulation for Thoracic Computed Tomography Examinations: A Phantom Study

Authors: S. Gharbi, S. Labidi, M. Mars, M. Chelli, F. Ladeb

Abstract:

The purpose of this study is to reduce the radiation dose for chest CT examinations by adding Tube Current Modulation (TCM) to a standard CT protocol. A scan of an anthropomorphic male Alderson phantom was performed on a 128-slice scanner. The effective dose (ED) in both scans, with and without mAs modulation, was estimated by multiplying the Dose Length Product (DLP) by a conversion factor. Results were compared to those measured with the CT-Expo software. The size-specific dose estimate (SSDE) values were obtained by multiplying the volume CT dose index (CTDIvol) by a conversion factor related to the phantom's effective diameter. Objective assessment of image quality was performed with Signal to Noise Ratio (SNR) measurements in the phantom. SPSS software was used for data analysis. Results showed that with CARE Dose 4D included, ED was lowered by 48.35% and 51.51% according to DLP and CT-Expo, respectively. In addition, ED ranges between 7.01 mSv and 6.6 mSv for the standard protocol, while it ranges between 3.62 mSv and 3.2 mSv with TCM. Similar results were found for SSDE: the dose was 16.25 mGy without TCM and 48.8% lower with TCM. The SNR values calculated were significantly different (p=0.03<0.05); the highest was measured on images acquired with TCM and reconstructed with filtered back projection (FBP). In conclusion, this study demonstrates the potential of the TCM technique to reduce SSDE and ED while conserving image quality, in line with diagnostic reference levels for thoracic CT examinations.
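The two dose estimates above reduce to simple conversions. The chest DLP-to-ED coefficient below (k = 0.014 mSv per mGy·cm) is a commonly tabulated adult-chest value, assumed here for illustration rather than taken from the study:

```python
# Dose conversions for CT dosimetry (illustrative coefficient values).
K_CHEST = 0.014                         # mSv per (mGy*cm), adult chest, assumed

def effective_dose(dlp_mgy_cm, k=K_CHEST):
    """ED (mSv) = DLP (mGy*cm) x region-specific conversion factor k."""
    return dlp_mgy_cm * k

def ssde(ctdi_vol_mgy, size_factor):
    """SSDE (mGy) = CTDIvol (mGy) x size conversion factor f, where f is
    looked up from the patient's (or phantom's) effective diameter."""
    return ctdi_vol_mgy * size_factor

ed_standard = effective_dose(500.0)     # e.g. a DLP of 500 mGy*cm
```

With this k, a DLP of about 500 mGy·cm maps to roughly 7 mSv, the same order as the standard-protocol ED reported above.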

Keywords: anthropomorphic phantom, computed tomography, CT-expo, radiation dose

Procedia PDF Downloads 221
4393 Seismic Response Mitigation of Structures Using Base Isolation System Considering Uncertain Parameters

Authors: Rama Debbarma

Abstract:

The present study deals with the performance of a linear base isolation system in mitigating the seismic response of structures characterized by random system parameters. This involves optimization of the tuning ratio and damping properties of the base isolation system considering uncertain system parameters. The efficiency of a base isolator may be reduced if it is not tuned to the vibrating mode it is designed to suppress, due to the unavoidable presence of uncertainty in the system parameters. With the aid of matrix perturbation theory and a first-order Taylor series expansion, the total probability concept is used to evaluate the unconditional response of the primary structures considering random system parameters. For this, the conditional second-order information on the response quantities is obtained in a random vibration framework using a state space formulation. Subsequently, the maximum unconditional root mean square displacement of the primary structures is used as the objective function to obtain the optimum damping parameters. A numerical study is performed to elucidate the effect of parameter uncertainties on the optimization of the linear base isolator parameters and on system performance.

Keywords: linear base isolator, earthquake, optimization, uncertain parameters

Procedia PDF Downloads 433
4392 Frequency Identification of Wiener-Hammerstein Systems

Authors: Brouri Adil, Giri Fouad

Abstract:

The problem of identifying Wiener-Hammerstein systems is addressed in the presence of two linear subsystems whose structures are totally unknown. Presently, the nonlinear element is allowed to be noninvertible. The identification problem is dealt with by developing a two-stage frequency identification method: a set of points of the nonlinearity is estimated first, and then the frequency gains of the two linear subsystems are determined at a number of frequencies. The method involves Fourier series decomposition and only requires periodic excitation signals. All involved estimators are shown to be consistent.
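The frequency-gain step of such a method can be sketched as follows: with a periodic excitation, the complex gain at a frequency is the ratio of the output and input Fourier coefficients at that frequency. The first-order plant simulated below is an illustrative assumption, not the paper's system:

```python
import numpy as np

# Sketch of frequency-gain estimation from periodic input/output records,
# in the spirit of the Fourier-series approach described above.

def frequency_gain(u, y, f, fs):
    """Complex gain at frequency f from records spanning whole periods."""
    t = np.arange(len(u)) / fs
    e = np.exp(-2j * np.pi * f * t)
    U = np.mean(u * e)               # Fourier coefficient of the input
    Y = np.mean(y * e)               # Fourier coefficient of the output
    return Y / U

fs, f, n = 1000.0, 5.0, 2000         # 2 s of data = 10 periods of 5 Hz
t = np.arange(n) / fs
u = np.sin(2 * np.pi * f * t)

# Steady-state output of an assumed plant G(s) = 1 / (1 + s/w0):
w0 = 2 * np.pi * 10.0
G = 1.0 / (1.0 + 2j * np.pi * f / w0)
y = np.abs(G) * np.sin(2 * np.pi * f * t + np.angle(G))

Ghat = frequency_gain(u, y, f, fs)
print(abs(Ghat), np.angle(Ghat))
```

Averaging over an integer number of periods makes the estimate exact for noise-free data; with noise, it is consistent as the record length grows.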

Keywords: Wiener-Hammerstein systems, Fourier series expansions, frequency identification, automation science

Procedia PDF Downloads 536
4391 Design Flood Estimation in Satluj Basin-Challenges for Sunni Dam Hydro Electric Project, Himachal Pradesh-India

Authors: Navneet Kalia, Lalit Mohan Verma, Vinay Guleria

Abstract:

Introduction: Design flood studies are essential for the effective planning and functioning of water resource projects. Design flood estimation for the Sunni Dam Hydro Electric Project, located in the State of Himachal Pradesh, India, on the river Satluj, was a big challenge given that the river flows through the Himalayan region from Tibet to India and has a large catchment area of varying topography, climate, and vegetation. No discharge data was available for the part of the river in Tibet, whereas for India it was available only at Khab, Rampur, and Luhri. Estimation of the design flood using standard methods was therefore not possible. This challenge was met using two different approaches for the upper (snow-fed) and lower (rain-fed) catchments: a flood frequency approach and a hydro-meteorological approach. i) For the catchment up to the Khab gauging site (sub-catchment C1), the flood frequency approach was used. Around 90% of the catchment area (46,300 sq km) up to Khab is snow-fed and lies above 4200 m. Since the area is predominantly snow-fed, the 1-in-10,000-year return period flood estimated by flood frequency analysis at Khab was taken as the Probable Maximum Flood (PMF). The flood peaks were taken from daily observed discharges at Khab, increased by 10% to make them instantaneous. The design flood of 4184 cumec thus obtained was considered the PMF at Khab. ii) For the catchment between Khab and the Sunni Dam (sub-catchment C2), the hydro-meteorological approach was used. This method is based upon the catchment response to the rainfall pattern (Probable Maximum Precipitation, PMP) observed over a particular catchment area. The design flood computation mainly involves the estimation of a design storm hyetograph and the derivation of the catchment response function. A unit hydrograph is assumed to represent the response of the entire catchment area to a unit rainfall.
The main advantage of the hydro-meteorological approach is that it gives a complete flood hydrograph, which allows a realistic determination of its moderation effect while passing through a reservoir or a river reach. These studies were carried out to derive the PMF for the catchment area between Khab and the Sunni Dam site using 1-day and 2-day PMP values of 232 and 416 cm, respectively. The PMF so obtained was 12920.60 cumec. Final Result: As the catchment area up to the Sunni Dam was divided into two sub-catchments, the flood hydrograph for catchment C1 was routed through the connecting channel reach (river Satluj) using the Muskingum method, and the design flood was computed by adding the routed flood ordinates to the flood ordinates of catchment C2. A total design flood (i.e., 2-day PMF) with a peak of 15473 cumec was obtained. Conclusion: Even though several factors are relevant when deciding the method to be used for design flood estimation, data availability and the purpose of the study are the most important. Since, in general, one cannot wait for hydrological data of adequate quality and quantity to become available, flood estimation has to be done using whatever data exist, and the appropriate method is to be selected depending upon the type of data available for a particular catchment.
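The Muskingum routing step used to carry the C1 hydrograph down to the dam site can be sketched with the standard recurrence O2 = C0·I2 + C1·I1 + C2·O1. The K, X values and the inflow hydrograph below are illustrative assumptions, not the study's data:

```python
# Sketch of Muskingum channel routing. K, X and the inflow hydrograph
# are illustrative assumptions, not values from the study.

def muskingum_route(inflow, K, X, dt):
    """Route an inflow hydrograph through a reach (K and dt in hours)."""
    denom = K - K * X + 0.5 * dt
    c0 = (0.5 * dt - K * X) / denom
    c1 = (0.5 * dt + K * X) / denom
    c2 = (K - K * X - 0.5 * dt) / denom   # c0 + c1 + c2 == 1
    out = [inflow[0]]                     # assume initial steady state
    for i in range(1, len(inflow)):
        out.append(c0 * inflow[i] + c1 * inflow[i - 1] + c2 * out[-1])
    return out

inflow = [100, 300, 680, 500, 400, 310, 230, 160, 110, 100]  # cumec
routed = muskingum_route(inflow, K=12.0, X=0.2, dt=6.0)
print([round(q, 1) for q in routed])
```

The routed hydrograph shows the expected attenuation and lag of the peak, which is the moderation effect the abstract refers to.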

Keywords: design flood, design storm, flood frequency, PMF, PMP, unit hydrograph

Procedia PDF Downloads 326
4390 The Effectiveness of Environmental Policy Instruments for Promoting Renewable Energy Consumption: Command-and-Control Policies versus Market-Based Policies

Authors: Mahmoud Hassan

Abstract:

Understanding the impact of market- and non-market-based environmental policy instruments on renewable energy consumption (REC) is crucial for the design and choice of policy packages. This study empirically investigates the effect of the environmental policy stringency index (EPS) and its components on REC in 27 OECD countries over the period from 1990 to 2015, and then uses the results to identify what an appropriate environmental policy mix should look like. Relying on the two-step system GMM estimator, we provide evidence that increasing environmental policy stringency as a whole promotes renewable energy consumption in these 27 developed economies. Moreover, policymakers are able, through market- and non-market-based environmental policy instruments, to increase the use of renewable energy. However, not all of these instruments are effective in achieving this goal. The results indicate that R&D subsidies and trading schemes have a positive and significant impact on REC, while taxes, feed-in tariffs, and emission standards do not have a significant effect. Furthermore, R&D subsidies are more effective than trading schemes in stimulating the use of clean energy. These findings proved robust across the three alternative panel techniques used.

Keywords: environmental policy stringency, renewable energy consumption, two-step system-GMM estimation, linear dynamic panel data model

Procedia PDF Downloads 180
4389 Active Linear Quadratic Gaussian Secondary Suspension Control of Flexible Bodied Railway Vehicle

Authors: Kaushalendra K. Khadanga, Lee Hee Hyol

Abstract:

Passenger comfort is paramount in the design of suspension systems for high-speed cars. To analyze the effect of vibration on vehicle ride quality, a vertical model of a six-degree-of-freedom railway passenger vehicle, with front and rear suspension, is built. It includes car-body flexible effects and vertical rigid modes. A second-order linear shaping filter is constructed to shape Gaussian white noise into random rail excitation, and the temporal correlation between the front and rear wheels is given by a second-order Padé approximation. The complete track and vehicle model is then assembled, and an active secondary suspension system based on the Linear Quadratic Gaussian (LQG) optimal control method is designed. The results show that the LQG control method reduces the vertical acceleration, pitching acceleration, and vertical bending vibration of the car body compared to the passive system.
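The optimal-gain computation at the heart of an LQG design can be sketched on a reduced two-degree-of-freedom (car body plus bogie) vertical model; the paper's 6-DOF flexible-body model is larger, but the Riccati step is the same. All parameter values and weights below are illustrative assumptions, and only the state-feedback (LQR) half of the LQG controller is shown:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Assumed 2-DOF vertical model: car body (mc) on an active secondary
# suspension above a bogie (mb) on a passive primary suspension.
mc, mb = 4.0e4, 5.0e3          # masses (kg), assumed
ks, kp = 4.0e5, 2.0e6          # secondary / primary stiffness (N/m)
cs, cp = 3.0e4, 2.0e4          # secondary / primary damping (N*s/m)

# States: [x_c, x_b, v_c, v_b]; control u = active secondary force.
A = np.array([
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
    [-ks/mc,        ks/mc,  -cs/mc,        cs/mc],
    [ ks/mb, -(ks+kp)/mb,    cs/mb, -(cs+cp)/mb],
])
B = np.array([[0.0], [0.0], [1.0/mc], [-1.0/mb]])

Q = np.diag([1e6, 1e4, 1e5, 1e3])   # penalise car-body motion most
R = np.array([[1e-6]])              # control-effort weight

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)     # optimal state-feedback gain
eig = np.linalg.eigvals(A - B @ K)
print("closed loop stable:", bool(np.all(eig.real < 0)))
```

In a full LQG design, the state would be reconstructed by a Kalman filter driven by the shaping-filter noise model rather than fed back directly.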

Keywords: active suspension, bending vibration, railway vehicle, vibration control

Procedia PDF Downloads 260
4388 Volume Estimation of Trees: An Exploratory Study on Rosewood Logging Within Forest Transition and Savannah Ecological Zones of Ghana

Authors: Albert Kwabena Osei Konadu

Abstract:

One of the endemic forest species of the savannah transition zones listed in Appendix II of the Convention on International Trade in Endangered Species (CITES) is rosewood, also known as Pterocarpus erinaceus or Krayie. Its economic viability has made it increasingly popular and in high demand. Ghana's forest resource management regime for these ecozones focuses mainly on conservation and very little on resource utilization. Consequently, commercial logging management standards are at a teething stage and not fully developed, leading to a deficiency in the monitoring of logging operations and the quantification of harvested tree volumes. The Tree Information Form (TIF), a volume estimation and tracking regime, has proven to be an effective sustainable management tool for regulating timber resource extraction in the high forest zones of the country. This work aims to generate a TIF that can track and capture the requisite parameters to accurately estimate the volume of rosewood harvested within the forest savannah transition zones. Tree information forms were created for three scenarios: individual billets, stacked billets, and conveying vessels. The study was limited by the use of the regulator-assigned volume as a benchmark and was also fraught with potential volume measurement error in the stacked-billet scenario due to the existence of spaces within packed billets. These TIFs were field-tested to deduce the most viable option for tracking and estimating harvested volumes of rosewood using the Smalian and cubic volume estimation formulae. Overall, four districts were covered, with the individual-billet, stacked-billet and conveying-vessel scenarios registering mean volumes of 25.83 m3, 45.08 m3 and 32.6 m3, respectively. These volumes were validated by benchmarking against the assigned volumes of the Forestry Commission of Ghana and the known standard volumes of conveying vessels.
The results indicated an underestimation of extracted volumes under the quota regime, a situation that could lead to unintended overexploitation of the species. The research revealed that the conveying-vessel route is the most viable volume estimation and tracking regime for the sustainable management of the Pterocarpus erinaceus species, as it provided a more practical volume estimate and data extraction protocol.
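Smalian's formula, one of the two formulae named above, estimates a log's volume from its two end cross-sections: V = L · (A_top + A_bottom) / 2. A minimal sketch for the individual-billet scenario, with illustrative billet dimensions rather than field data from the study:

```python
import math

# Smalian's formula: V = L * (A_top + A_bottom) / 2.
# Billet dimensions below are illustrative, not field data.

def smalian_volume(d_top_m, d_bottom_m, length_m):
    """Volume (m^3) of one billet from its two end diameters (m)."""
    a_top = math.pi * (d_top_m / 2.0) ** 2
    a_bottom = math.pi * (d_bottom_m / 2.0) ** 2
    return length_m * (a_top + a_bottom) / 2.0

# An individual-billet TIF sums billet by billet:
billets = [(0.28, 0.32, 2.0), (0.25, 0.30, 2.2), (0.30, 0.34, 1.8)]
total = sum(smalian_volume(*b) for b in billets)
print(f"total volume ~ {total:.3f} m^3")
```

The stacked-billet scenario, by contrast, measures the stack's outer dimensions, which is where the air-space error noted above comes in.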

Keywords: cubic volume formula, Smalian volume formula, Pterocarpus erinaceus, tree information form, forest transition and savannah zones, harvested tree volume

Procedia PDF Downloads 44
4387 Applications of Analytical Probabilistic Approach in Urban Stormwater Modeling in New Zealand

Authors: Asaad Y. Shamseldin

Abstract:

The analytical probabilistic approach is an innovative approach to urban stormwater modeling. It can provide information about the long-term performance of a stormwater management facility without being computationally very demanding. This paper explores the application of the analytical probabilistic approach in New Zealand. It presents the results of a case study aimed at developing an objective way of identifying what constitutes a rainfall storm event and estimating the corresponding statistical properties of storms, using two selected automatic rainfall stations located in the Auckland region of New Zealand. Storm identification and the estimation of the storm statistical properties are regarded as the first step in the development of analytical probabilistic models. The paper provides a recommendation on the definition of the storm inter-event time to be used in conjunction with the analytical probabilistic approach.
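The storm-identification step can be sketched as follows: a dry spell at least as long as the chosen inter-event time ends one storm and starts the next. The hourly record and inter-event time below are illustrative assumptions, not the Auckland station data:

```python
# Sketch of storm-event identification using a minimum inter-event time
# (IETD): dry gaps shorter than the IETD remain inside the same storm.
# The record and IETD below are illustrative assumptions.

def split_storms(rain_mm, ietd_steps):
    """Split an hourly rainfall series into storms separated by at least
    `ietd_steps` consecutive dry (zero-rainfall) intervals."""
    def trim(storm):                     # drop trailing dry intervals
        while storm and storm[-1] == 0:
            storm.pop()
        return storm

    storms, current, dry = [], [], 0
    for r in rain_mm:
        if r > 0:
            current.append(r)
            dry = 0
        elif current:                    # dry step inside a candidate storm
            dry += 1
            if dry >= ietd_steps:
                storms.append(trim(current))
                current, dry = [], 0
            else:
                current.append(0)
    if current:
        storms.append(trim(current))
    return storms

record = [0, 2, 5, 0, 1, 0, 0, 0, 0, 3, 4, 0]   # hourly depths (mm)
storms = split_storms(record, ietd_steps=4)
print(len(storms), [sum(s) for s in storms])     # 2 storms
```

From the resulting storm list, the statistical properties the abstract mentions (storm depth, duration, and inter-event time) can be tabulated directly.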

Keywords: hydrology, rainfall storm, storm inter-event time, New Zealand, stormwater management

Procedia PDF Downloads 344
4386 Analysis of the Statistical Characterization of Significant Wave Data Exceedances for Designing Offshore Structures

Authors: Rui Teixeira, Alan O’Connor, Maria Nogal

Abstract:

The statistical theory of extreme events is a topic of growing interest in all fields of science and engineering. The economic and environmental changes currently experienced by the world have emphasized the importance of dealing with extreme occurrences with improved accuracy. When it comes to the design of offshore structures, particularly offshore wind turbines, efficiently characterizing extreme events is of major relevance. Extreme events are commonly characterized by extreme value theory. As an alternative, accurate modeling of the tails of statistical distributions and characterization of low-occurrence events can be achieved with the Peak-Over-Threshold (POT) methodology. The POT methodology allows for a more refined fit of the statistical distribution by truncating the data at a predefined minimum threshold u. To approximate the tail of the empirical statistical distribution mathematically, the Generalised Pareto distribution is widely used, although for exceedances of significant wave height data (H_s) the two-parameter Weibull and the Exponential distribution, the latter a special case of the Generalised Pareto, are frequently used as alternatives. The Generalised Pareto, despite the practical cases where it is applied, is not universally recognized as the adequate model for exceedances over a given threshold u; references that treat it as a secondary option for significant wave data can be found in the literature. In this framework, the current study tackles the question of which statistical models best characterize exceedances of wave data. Comparisons of the Generalised Pareto, the two-parameter Weibull and the Exponential distribution are presented for different values of the threshold u.
Real wave data obtained from four buoys along the Irish coast were used in the comparative analysis. Results show that the application of statistical distributions to characterize significant wave data needs to be addressed carefully: in each particular case, one of the statistical models mentioned fits the data better than the others, and different results are obtained depending on the value of the threshold u. Other variables of the fit, such as the number of points and the estimation of the model parameters, are analyzed and the respective conclusions drawn. Some guidelines on the application of the POT method are presented. Modeling the tail of the distributions proves to be, for the present case, a highly non-linear task and, given its growing importance, should be addressed carefully for an efficient estimation of very-low-occurrence events.
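The threshold comparison described above can be sketched by fitting the three candidate distributions to exceedances over a threshold u and comparing their log-likelihoods. The synthetic data below stand in for the buoy records, which are not reproduced here:

```python
import numpy as np
from scipy.stats import genpareto, expon, weibull_min

# Sketch of a POT fit comparison on synthetic significant-wave-height
# data; the real analysis uses buoy records from the Irish coast.

rng = np.random.default_rng(0)
hs = rng.weibull(1.6, size=5000) * 2.0       # synthetic H_s (m)

u = np.quantile(hs, 0.95)                    # threshold choice
exc = hs[hs > u] - u                         # exceedances over u

fits = {
    "GPD": (genpareto, genpareto.fit(exc, floc=0)),
    "Exponential": (expon, expon.fit(exc, floc=0)),
    "Weibull": (weibull_min, weibull_min.fit(exc, floc=0)),
}
for name, (dist, params) in fits.items():
    ll = np.sum(dist.logpdf(exc, *params))
    print(f"{name:12s} log-likelihood = {ll:.1f}")
```

Repeating the fit over a range of thresholds u (and tracking the number of exceedance points) reproduces the kind of sensitivity analysis the abstract reports.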

Keywords: extreme events, offshore structures, peak-over-threshold, significant wave data

Procedia PDF Downloads 272