Search results for: likelihood estimation method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8701


8461 Kalman Filter Gain Elimination in Linear Estimation

Authors: Nicholas D. Assimakis

Abstract:

In linear estimation, the traditional Kalman filter uses the Kalman filter gain to produce the estimate and prediction of the n-dimensional state vector from the m-dimensional measurement vector. Computing the Kalman filter gain requires the inversion of an m x m matrix in every iteration. In this paper, a variation of the Kalman filter that eliminates the Kalman filter gain is proposed. In the time varying case, eliminating the Kalman filter gain requires the inversion of an n x n matrix and of an m x m matrix in every iteration. In the time invariant case, it requires the inversion of an n x n matrix in every iteration. The proposed Kalman filter gain elimination algorithm may be faster than the conventional Kalman filter, depending on the model dimensions.

Keywords: Discrete time, linear estimation, Kalman filter, Kalman filter gain.
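
The trade-off described above can be illustrated with a generic discrete-time Kalman measurement update written in two algebraically equivalent forms: the conventional update, which inverts the m x m innovation covariance through the Kalman gain, and a gain-free information-form update that inverts n x n matrices instead (R can be inverted once offline in the time-invariant case). This is a minimal sketch with illustrative matrices, not the author's exact algorithm:

```python
import numpy as np

# Illustrative dimensions and matrices (hypothetical values)
n, m = 4, 2
rng = np.random.default_rng(0)
H = rng.standard_normal((m, n))      # measurement matrix
R = 0.10 * np.eye(m)                 # measurement noise covariance
R_inv = np.linalg.inv(R)             # in the time-invariant case this is done once, offline

def update_with_gain(x_pred, P_pred, z):
    """Conventional measurement update: inverts the m x m innovation covariance."""
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman filter gain
    return x_pred + K @ (z - H @ x_pred), (np.eye(n) - K @ H) @ P_pred

def update_without_gain(x_pred, P_pred, z):
    """Gain-free (information-form) update: inverts n x n matrices instead."""
    P_pred_inv = np.linalg.inv(P_pred)           # n x n inversion
    P = np.linalg.inv(P_pred_inv + H.T @ R_inv @ H)
    return P @ (P_pred_inv @ x_pred + H.T @ R_inv @ z), P

# Both forms give the same posterior for the same prediction and measurement.
x_pred, P_pred, z = np.zeros(n), np.eye(n), rng.standard_normal(m)
xa, Pa = update_with_gain(x_pred, P_pred, z)
xb, Pb = update_without_gain(x_pred, P_pred, z)
assert np.allclose(xa, xb) and np.allclose(Pa, Pb)
```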

8460 Complex Wavelet Transform Based Image Denoising and Zooming Under the LMMSE Framework

Authors: T. P. Athira, Gibin Chacko George

Abstract:

This paper proposes a dual tree complex wavelet transform (DT-CWT) based directional interpolation scheme for noisy images. The problems of denoising and interpolation are modelled as the estimation of the noiseless and missing samples under the same optimal-estimation framework. Initially, the DT-CWT is used to decompose an input low-resolution (LR) noisy image into low- and high-frequency subbands. The high-frequency subband images are interpolated by linear minimum mean square error estimation (LMMSE) based interpolation, which preserves the edges of the interpolated images. For each noisy LR image sample, multiple estimates are computed along different directions and then fused to obtain a more accurate denoised LR image. The estimation parameters calculated in the denoising process can be readily reused to interpolate the missing samples. The inverse DT-CWT is applied to the denoised input and the interpolated high-frequency subband images to obtain the high-resolution image. Compared with conventional schemes that perform denoising and interpolation in tandem, the proposed DT-CWT based noisy image interpolation method reduces many noise-caused interpolation artifacts and preserves the image edge structures well. The visual and quantitative results show that the proposed technique outperforms many existing denoising and interpolation methods.

Keywords: Dual-tree complex wavelet transform (DT-CWT), denoising, interpolation, optimal estimation, super resolution.
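
The LMMSE step at the core of the scheme can be illustrated in its simplest scalar form: for a noisy observation y = x + v with known signal and noise variances, the linear estimator that minimises the mean square error shrinks the observation toward the signal mean. A minimal sketch with made-up statistics, not the paper's directional-fusion implementation in the wavelet domain:

```python
import numpy as np

def lmmse_scalar(y, mu_x, var_x, var_n):
    """LMMSE estimate of x from y = x + v, with v zero-mean noise of variance var_n."""
    gain = var_x / (var_x + var_n)
    return mu_x + gain * (y - mu_x)

rng = np.random.default_rng(1)
x = rng.normal(0.0, 2.0, size=10_000)        # "clean" samples, variance 4
y = x + rng.normal(0.0, 1.0, size=x.size)    # noisy observations, noise variance 1
x_hat = lmmse_scalar(y, mu_x=0.0, var_x=4.0, var_n=1.0)

print("MSE of noisy samples :", np.mean((y - x) ** 2))      # ~1.0
print("MSE of LMMSE estimate:", np.mean((x_hat - x) ** 2))  # ~0.8 = var_x*var_n/(var_x+var_n)
```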

8459 A Robust Frequency Offset Estimation Scheme for OFDM System with Cyclic Delay Diversity

Authors: Won-Jae Shin, Young-Hwan You

Abstract:

Cyclic delay diversity (CDD) is a simple technique to intentionally increase the frequency selectivity of channels for orthogonal frequency division multiplexing (OFDM). This paper proposes a residual carrier frequency offset (RFO) estimation scheme for an OFDM-based broadcasting system using CDD. To improve the RFO estimation, this paper addresses a scheme for choosing the amount of cyclic delay and the pilot pattern used to estimate the RFO. Computer simulations show that the proposed estimator benefits from a properly chosen delay parameter and performs robustly.

Keywords: OFDM, cyclic delay diversity, FM system, synchronization
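
A common way to estimate a residual CFO of this kind is to correlate the pilot subcarriers of two consecutive OFDM symbols, since the RFO rotates every subcarrier by the same extra phase from one symbol to the next. The sketch below uses that generic pilot-correlation estimator with illustrative FFT size, cyclic prefix length and pilot positions; it is not the CDD-specific delay/pilot design proposed in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
N, Ng = 64, 16                   # FFT size and cyclic prefix length (illustrative)
pilots = np.arange(0, N, 8)      # assumed pilot subcarrier positions
eps_true = 0.03                  # residual CFO, in units of the subcarrier spacing

def ofdm_symbol(freq_data):
    """IFFT plus cyclic prefix."""
    x = np.fft.ifft(freq_data)
    return np.concatenate([x[-Ng:], x])

# Two consecutive OFDM symbols: QPSK on data tones, identical pilots.
X = np.exp(1j * np.pi / 2 * rng.integers(0, 4, size=(2, N)))
X[:, pilots] = 1.0
tx = np.concatenate([ofdm_symbol(X[0]), ofdm_symbol(X[1])])

# Channel: residual CFO plus additive noise (flat channel for simplicity).
n_idx = np.arange(tx.size)
rx = tx * np.exp(1j * 2 * np.pi * eps_true * n_idx / N)
rx += 0.01 * (rng.standard_normal(tx.size) + 1j * rng.standard_normal(tx.size)) / np.sqrt(2)

# Demodulate both symbols and correlate their pilot tones; the RFO appears as a common phase.
Y = np.array([np.fft.fft(rx[l * (N + Ng) + Ng : l * (N + Ng) + Ng + N]) for l in range(2)])
corr = np.sum(Y[1, pilots] * np.conj(Y[0, pilots]))
eps_hat = np.angle(corr) * N / (2 * np.pi * (N + Ng))
print(f"true RFO = {eps_true}, estimated RFO = {eps_hat:.4f}")
```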

8458 Robust Adaptive Observer Design for Lipschitz Class of Nonlinear Systems

Authors: M. Pourgholi, V.J.Majd

Abstract:

This paper addresses the parameter and state estimation problem in the presence of observer gain perturbations and bounded input disturbances for Lipschitz systems that are linear in the unknown parameters and nonlinear in the states. A new nonlinear adaptive resilient observer is designed, and its stability conditions are derived using the Lyapunov technique. The observer gain is obtained systematically via a linear matrix inequality approach. A numerical example is provided in which the nonlinear terms depend on unmeasured states. Simulation results are presented to show the effectiveness of the proposed method.

Keywords: Adaptive observer, linear matrix inequality, nonlinear systems, nonlinear observer, resilient observer, robust estimation.
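
A bare-bones version of the state-estimation part can be sketched as a Luenberger-type observer for a Lipschitz nonlinear system whose nonlinearity depends on an unmeasured state; the parameter adaptation and the resilience to gain perturbations treated in the paper are omitted, and the gain below is picked by hand rather than derived from an LMI:

```python
import numpy as np

# Illustrative Lipschitz system: x_dot = A x + phi(x) + B u,  y = C x
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([0.0, 1.0])
C = np.array([1.0, 0.0])
L = np.array([8.0, 15.0])            # observer gain (hand-picked; the paper obtains it via LMIs)

def phi(x):
    """Lipschitz nonlinearity depending on the unmeasured state x[1]."""
    return np.array([0.0, 0.1 * np.sin(x[1])])

dt, steps = 1e-3, 10_000
x, xh = np.array([1.0, -1.0]), np.zeros(2)       # true state and observer state
for k in range(steps):
    u = np.sin(0.5 * k * dt)                     # excitation input
    y = C @ x                                    # measured output
    x = x + dt * (A @ x + phi(x) + B * u)                          # plant (Euler step)
    xh = xh + dt * (A @ xh + phi(xh) + B * u + L * (y - C @ xh))   # observer correction
print("final state estimation error:", np.linalg.norm(x - xh))
```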

8457 Rapid Determination of Biochemical Oxygen Demand

Authors: Mayur Milan Kale, Indu Mehrotra

Abstract:

Biochemical Oxygen Demand (BOD) is a measure of the oxygen used in bacteria-mediated oxidation of organic substances in water and wastewater. Theoretically, an infinite time is required for complete biochemical oxidation of organic matter, but the measurement is made over a 5-day test period at 20 °C or a 3-day period at 27 °C, with or without dilution. Researchers have worked to further reduce the measurement time. The objective of this paper is to review advances in BOD measurement aimed primarily at minimizing the time and overcoming the measurement difficulties. The literature on four such techniques is surveyed, namely BOD-BART™, biosensors, the ferricyanide-mediated approach, and the luminous bacteria immobilized-chip method. The basic principle, method of determination, data validation, and the advantages and disadvantages of each method are presented. In the BOD-BART™ method, the time lag for the system to change from an oxidative to a reductive state is calculated. Biosensors couple a biological sensing element with a transducer that produces a signal proportional to the analyte concentration; since each microbial species has its metabolic deficiencies, co-immobilization of bacteria in a sol-gel biosensor increases the range of substrates. In the ferricyanide-mediated approach, ferricyanide is used as the electron acceptor instead of oxygen. In the luminous bacterial cells immobilized-chip method, bacterial bioluminescence governed by the lux genes is observed, and the physiological response is measured and correlated with BOD. There remains scope to further probe the rapid estimation of BOD.

Keywords: BOD, Four methods, Rapid estimation
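
The conventional 5-day value these rapid techniques are calibrated against follows first-order BOD kinetics, BODt = L0 (1 - e^(-k t)); a small worked example with an assumed ultimate BOD, rate constant and the commonly used temperature-correction factor:

```python
import math

def bod_t(L0, k, t):
    """First-order BOD exerted after t days (L0 = ultimate BOD, mg/L; k = rate constant, 1/day)."""
    return L0 * (1.0 - math.exp(-k * t))

L0, k20 = 300.0, 0.23          # assumed ultimate carbonaceous BOD and rate constant at 20 degC
k27 = k20 * 1.047 ** (27 - 20) # common Arrhenius-type temperature correction (theta = 1.047)

print("BOD5 at 20 degC:", round(bod_t(L0, k20, 5), 1), "mg/L")
print("BOD3 at 27 degC:", round(bod_t(L0, k27, 3), 1), "mg/L")
```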

8456 Estimation of Uncertainty of Thermal Conductivity Measurement with Single Laboratory Validation Approach

Authors: Saowaluck Ukrisdawithid

Abstract:

The thermal conductivity of thermal insulation materials is measured with a Heat Flow Meter (HFM) apparatus. The uncertainty components are complex and difficult to evaluate in routine measurement by a modelling approach. In this study, the uncertainty of thermal conductivity measurement was estimated by the single laboratory validation approach. The within-laboratory reproducibility was 1.1%. The standard uncertainty of the method and laboratory bias, evaluated using the SRM 1453 expanded polystyrene board, was dominant at 1.4%; however, no significant bias was found. For sample measurement, the sources of uncertainty were repeatability, sample density and the thermal conductivity resolution of the HFM. From this approach to sample measurements, the combined uncertainty was calculated. In summary, the thermal conductivity of the sample, polystyrene foam, was reported as 0.03367 W/m·K ± 3.5% (k = 2) at a mean temperature of 23.5 °C. The single laboratory validation approach is a simple key for a routine testing laboratory to estimate the uncertainty of thermal conductivity measurement by HFM in accordance with ISO/IEC 17025:2017 requirements. The results are meaningful for laboratory competence improvement, quality control of products, and conformity assessment.

Keywords: Single laboratory validation approach, within-laboratory reproducibility, method and laboratory bias, certified reference material.
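
The combined uncertainty in such a budget is normally obtained by root-sum-of-squares of the relative standard uncertainty components and then multiplied by a coverage factor k = 2 for the expanded uncertainty; a sketch using the two components quoted above plus hypothetical sample-related components (not the laboratory's actual budget):

```python
import math

# Relative standard uncertainty components, in percent.
# The first two values are quoted in the abstract; the rest are assumed for illustration.
components = {
    "within-laboratory reproducibility": 1.1,
    "method and laboratory bias (SRM 1453)": 1.4,
    "sample repeatability": 0.5,
    "sample density": 0.4,
    "HFM resolution": 0.2,
}

u_combined = math.sqrt(sum(u ** 2 for u in components.values()))
U_expanded = 2.0 * u_combined          # coverage factor k = 2 (~95 % confidence)

k_value = 0.03367                      # measured thermal conductivity, W/m.K
print(f"combined standard uncertainty: {u_combined:.2f} %")
print(f"expanded uncertainty (k = 2) : {U_expanded:.2f} % -> +/- {k_value * U_expanded / 100:.5f} W/m.K")
```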

8455 Efficiency of Different GLR Test-statistics for Spatial Signal Detection

Authors: Olesya Bolkhovskaya, Alexander Maltsev

Abstract:

In this work, the characteristics of spatial signal detection from an antenna array are investigated for various sample cases. Cases with different amounts of prior information about the received signal and the background noise are considered. Only the spatial difference between signal and noise is used. Performance characteristics and detection curves are presented. All test-statistics are obtained on the basis of the generalized likelihood ratio (GLR). The obtained results are valid for both short and long samples.

Keywords: GLR test-statistic, detection task, generalized likelihood ratio, antenna array, detection curves, performance characteristics.

8454 Functional Decomposition Based Effort Estimation Model for Software-Intensive Systems

Authors: Nermin Sökmen

Abstract:

An effort estimation model is needed for software-intensive projects that consist of hardware, embedded software or some combination of the two, as well as high-level software solutions. This paper first focuses on functional decomposition techniques to measure the functional complexity of a computer system and investigates its impact on system development effort. It then examines the effects of technical difficulty and design team capability factors in order to construct the best effort estimation model. Using a traditional regression analysis technique, the study develops a system development effort estimation model that takes functional complexity, technical difficulty and design team capability as input parameters. Finally, the assumptions of the model are tested.

Keywords: Functional complexity, functional decomposition, development effort, technical difficulty, design team capability, regression analysis.
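
A model of the kind described, with effort regressed on functional complexity, technical difficulty and design team capability, can be fitted by ordinary least squares; a minimal sketch on synthetic project data with made-up scales and coefficients:

```python
import numpy as np

rng = np.random.default_rng(3)
n_projects = 40

# Synthetic project attributes on made-up scales
functional_complexity = rng.uniform(50, 500, n_projects)   # e.g. number of functional elements
technical_difficulty  = rng.uniform(1, 5, n_projects)      # ordinal rating
team_capability       = rng.uniform(1, 5, n_projects)      # ordinal rating

# Synthetic "true" effort in person-hours, plus noise (coefficients are invented)
effort = (20.0 + 8.0 * functional_complexity
          + 150.0 * technical_difficulty
          - 120.0 * team_capability
          + rng.normal(0.0, 80.0, n_projects))

X = np.column_stack([np.ones(n_projects), functional_complexity,
                     technical_difficulty, team_capability])
coef, *_ = np.linalg.lstsq(X, effort, rcond=None)           # ordinary least squares fit
print("fitted intercept and coefficients:", np.round(coef, 2))
print("predicted effort for the first project:", round(float(X[0] @ coef), 1))
```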

8453 Human Pose Estimation using Active Shape Models

Authors: Changhyuk Jang, Keechul Jung

Abstract:

Human pose estimation can be performed using Active Shape Models. Existing applications of Active Shape Models to human-body research, such as human detection, primarily use the silhouette of the human body. Such a technique cannot accurately estimate poses involving the two arms and legs, because the silhouette represents the body shape only as an outline. To solve this problem, we applied a stick-figure ("skeleton") human body model, which can accommodate various human poses. To obtain an effective estimation result, we applied background subtraction and a deformed matching algorithm of the primary Active Shape Models in the fitting process. The model was built from 600 human body images and has 17 landmark points that indicate body junctions and key features of human pose. The maximum number of iterations in the fitting process was 30, and the execution time was less than 0.03 s.

Keywords: Active shape models, skeleton, pose estimation.

8452 Estimation of the Mean of the Selected Population

Authors: Kalu Ram Meena, Aditi Kar Gangopadhyay, Satrajit Mandal

Abstract:

Two normal populations with different means and the same known variance are considered. The population with the smaller sample mean is selected. Various estimators are constructed for the mean of the selected normal population. Finally, they are compared with respect to bias and MSE risk by Monte Carlo simulation, and their performance is analysed with the help of graphs.

Keywords: Estimation after selection, Brewster-Zidek technique.
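
The reason special estimators are needed after selection can be reproduced in a few lines of Monte Carlo: the sample mean of the selected (smaller-mean) population is negatively biased for that population's true mean. A sketch with illustrative parameters, not the Brewster-Zidek construction itself:

```python
import numpy as np

rng = np.random.default_rng(4)
mu1, mu2, sigma, n = 0.0, 0.5, 1.0, 10     # illustrative means, common known std dev, sample size
trials = 100_000

x1 = rng.normal(mu1, sigma, size=(trials, n)).mean(axis=1)   # sample means of population 1
x2 = rng.normal(mu2, sigma, size=(trials, n)).mean(axis=1)   # sample means of population 2

selected_estimate = np.minimum(x1, x2)                 # naive estimator: smaller sample mean
selected_true_mu = np.where(x1 <= x2, mu1, mu2)        # true mean of the selected population

bias = np.mean(selected_estimate - selected_true_mu)
mse = np.mean((selected_estimate - selected_true_mu) ** 2)
print(f"bias of the naive estimator: {bias:.4f},  MSE: {mse:.4f}")
```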

8451 Quantitative Genetics Researches on Milk Protein Systems of Romanian Grey Steppe Breed

Authors: V. Maciuc, Şt. Creangă, I. Gîlcă, V. Ujică

Abstract:

The paper is part of a complex research project on the Romanian Grey Steppe, a breed unique in terms of biological and cultural-historical importance, on the verge of extinction, which has been included in a programme for the preservation of genetic resources in Romania. The study of the genetic polymorphism of protein fractions, especially kappa-casein, and of the relations between these lactoprotein genotypes and some quantitative and qualitative features of milk yield represents a current theme and a novelty for this breed. For the estimation of the genetic parameters we used the REML (Restricted Maximum Likelihood) method. The main lactoprotein in milk, kappa-casein (K-cz), characterized in the literature as a trait with a high degree of hereditary transmission, behaves as such in the nucleus under study, a value also confirmed by the heritability coefficient (h2 = 0.57). Medium values were found for milk and fat quantity (h2 = 0.26 and 0.29), while the fat and protein percentages of milk show a strong hereditary influence (h2 = 0.71 and 0.63). Correlations between kappa-casein and milk quantity are negative and strong. Between kappa-casein and other qualitative features of milk (fat content 0.58-0.67 and protein content 0.77-0.87) there are positive and very strong correlations. At the same time, between kappa-casein and β-casein (β-cz) and β-lactoglobulin (β-lg), respectively, the correlations are positive with high values (0.37-0.45), indicating the same causes and determining factors for the two groups of traits.

Keywords: breed, genetic preservation, lactoproteins, Romanian Grey Steppe

8450 An Approach to Measure Snow Depth of Winter Accumulation at Basin Scale Using Satellite Data

Authors: M. Geetha Priya, D. Krishnaveni

Abstract:

Snow depth estimation and monitoring studies have been carried out for decades using empirical relationships or extrapolation of point measurements made in the field. With the development of advanced satellite-based remote sensing techniques, a modified approach is proposed in the present study to estimate winter accumulated snow depth at basin scale. Snow depth is assessed by differencing Digital Elevation Models (DEMs) generated at the beginning and end of the winter season for the region of interest (Himalayan and polar regions), accounting for winter accumulation (solid precipitation). The proposed approach is based on the existing geodetic method used for glacier mass balance estimation. Considering satellite datasets acquired strictly at the beginning and end of the winter season, it is possible to estimate the change in depth or thickness of the snow accumulated during the winter, since it takes one year for snow to be transformed into firn (snow that has survived one summer, or one-year-old snow).

Keywords: Digital elevation model, snow depth, geodetic method, snow cover.
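
The core of the geodetic approach is an element-wise difference of two co-registered DEMs; a minimal sketch on hypothetical elevation tiles (real use would also involve DEM co-registration, outlier filtering and snow-density considerations):

```python
import numpy as np

# Hypothetical co-registered DEM tiles (elevations in metres) for a small test window
rng = np.random.default_rng(5)
dem_pre_winter = 4000 + 50 * rng.random((4, 4))             # acquisition at the start of winter
snow_accumulation = np.abs(rng.normal(1.5, 0.5, (4, 4)))    # "true" winter snow depth (synthetic)
dem_post_winter = dem_pre_winter + snow_accumulation        # acquisition at the end of winter

snow_depth = dem_post_winter - dem_pre_winter               # geodetic differencing
snow_depth = np.clip(snow_depth, 0, None)                   # negative differences treated as no snow

print("mean snow depth over the window: %.2f m" % snow_depth.mean())
```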

8449 Development of Cooling Load Demand Program for Building in Malaysia

Authors: Zamri Noranai, Dayang Siti Zainab Abang Bujang, Rosli Asmawi, Hamidon Salleh, Mohammad Zainal Md Yusof

Abstract:

Air conditioning is mainly used to provide human comfort. It is used most often in countries where daily temperatures are high. Scientifically, air conditioning is defined as a process of controlling the moisture, cooling, heating and cleaning of air. Without proper estimation of the cooling load, a large amount of energy is wasted because the air conditioning system is not matched to the heat gains from the surroundings: either the room is too big and the air conditioner must use more energy to cool it, or the air conditioner is too small for the room. This study develops a program to calculate the cooling load. The program makes cooling load estimation straightforward and allows hourly and yearly estimates to be compared. The software developed in earlier studies was not user-friendly, which can be a problem for individuals without proper knowledge of cooling load calculation; easy access and user-friendliness should be the main design objectives. This program allows the cooling load to be estimated by any user, rather than by rule of thumb. Several limitations of the case study are assessed to ensure it meets Malaysian building specifications. Finally, validation is performed by comparing manual calculations with the developed program.

Keywords: Building, Energy and Cooling Load
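
The sensible heat-gain terms such a program sums are of the familiar Q = U x A x dT form for conduction through the building envelope, plus internal gains; a much-simplified sketch with assumed U-values, areas and gains (solar and latent loads omitted):

```python
# Much-simplified sensible cooling load estimate (W) for a single room.
# All U-values, areas and gains below are assumed for illustration only.
envelope = [
    # (element,  U [W/m2.K],  area [m2])
    ("wall",     2.0,         30.0),
    ("window",   5.8,          6.0),
    ("roof",     1.5,         20.0),
]
t_out, t_in = 33.0, 24.0            # outdoor / indoor design temperatures, degC
occupants, q_person = 4, 75.0       # sensible gain per person, W
equipment_gain = 600.0              # lights and appliances, W

conduction = sum(U * A * (t_out - t_in) for _, U, A in envelope)   # Q = U * A * dT per element
internal = occupants * q_person + equipment_gain
total = conduction + internal
print(f"conduction gain: {conduction:.0f} W, internal gain: {internal:.0f} W, total: {total:.0f} W")
```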

8448 Groundwater Seepage Estimation into Amirkabir Tunnel Using Analytical Methods and DEM and SGR Method

Authors: Hadi Farhadian, Homayoon Katibeh

Abstract:

In this paper, groundwater seepage into the Amirkabir tunnel has been estimated using analytical and numerical methods for 14 different sections of the tunnel. The Site Groundwater Rating (SGR) method has also been applied for qualitative and quantitative classification of the tunnel sections. The results of the above methods were compared. The study shows reasonable agreement among all methods except for two sections of the tunnel, where there are significant discrepancies between the numerical and analytical results, mainly originating from the model geometry and high overburden. SGR and the analytical and numerical calculations confirm a high concentration of seepage inflow in fault zones. The maximum seepage flow into the tunnel has been estimated at 0.425 lit/sec/m using the analytical method and 0.628 lit/sec/m using the numerical method, occurring in the crushed zone. Based on the SGR method, six of the 14 sections along the Amirkabir tunnel axis fall in the "No Risk" class, which is supported by analytical and numerical seepage values of less than 0.04 lit/sec/m.

Keywords: Water Seepage, Amirkabir Tunnel, Analytical Method, DEM, SGR.
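
One widely used analytical expression for steady-state inflow per unit length of a deep circular tunnel is the Goodman-type formula Q = 2*pi*k*h / ln(2h/r); the sketch below evaluates it with assumed values rather than the Amirkabir tunnel parameters, and the paper's numerical (DEM) and SGR steps are not reproduced:

```python
import math

def tunnel_inflow(k, h, r):
    """Steady-state groundwater inflow per metre of tunnel (m3/s/m), Goodman-type formula.
    k: hydraulic conductivity (m/s), h: water head above the tunnel axis (m), r: tunnel radius (m)."""
    return 2.0 * math.pi * k * h / math.log(2.0 * h / r)

# Assumed values for illustration only
k, h, r = 5e-7, 150.0, 3.0
q = tunnel_inflow(k, h, r)
print(f"estimated inflow: {q * 1000:.3f} lit/sec per metre of tunnel")
```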

8447 An Integrated Software Architecture for Bandwidth Adaptive Video Streaming

Authors: T. Arsan

Abstract:

Video streaming over lossy IP networks is a very important issue, due to the heterogeneous structure of networks. The infrastructure of the Internet exhibits variable bandwidths, delays, congestion and time-varying packet losses. Because of these variable attributes, video streaming applications should not only have good end-to-end transport performance but also a robust rate control and, furthermore, a multipath rate allocation mechanism. To provide video streaming service quality, additional components such as bandwidth estimation and an adaptive rate controller should be taken into consideration. This paper gives an overview of the video streaming concept and bandwidth estimation tools, and then introduces special architectures for bandwidth adaptive video streaming. A bandwidth estimation algorithm (pathChirp), optimized rate controllers and a multipath rate allocation algorithm are considered as an all-in-one solution for the video streaming problem. This solution is directed and optimized by a decision center designed to obtain the maximum quality at the receiving side.

Keywords: Adaptive Video Streaming, Bandwidth Estimation, QoS, Software Architecture.

8446 Implementation of SU-MIMO and MU-MIMO GTD System under Imperfect CSI Knowledge

Authors: Parit Kanjanavirojkul, Kiatwarakorn Keeratishananond, Prapun Suksompong

Abstract:

We study the performance of a compressed beamforming-weight feedback technique in a generalized triangular decomposition (GTD) based MIMO system. GTD is a beamforming technique that enjoys QoS flexibility. The technique, however, performs at its optimum only when full knowledge of the channel state information (CSI) is available at the transmitter, which is impossible in a real system with channel estimation errors and limited feedback. We suggest a way to implement quantized beamforming-weight feedback, which can significantly reduce the feedback data, in a GTD-based MIMO system and investigate the performance of the system. Interestingly, we found that compressed beamforming-weight feedback does not degrade the BER performance of the system at low input power, while channel estimation error and quantization do. For comparison, GTD is more sensitive to compression and quantization, while SVD is more sensitive to channel estimation error. We also explore the performance of a GTD-based MU-MIMO system, and find that the BER performance starts to degrade significantly at around -20 dB channel estimation error.

Keywords: MIMO, MU-MIMO, GTD, Imperfect CSI.

8445 Optimization Parameters of Rotary Positioner Controller using CDM

Authors: Meemongkol A., Tipsuwanporn V., Numsomran A.

Abstract:

The authors present the optimization of the rotary positioner controller parameters in the hard disk drive servo track writing process using the coefficient diagram method (CDM). Because the parameters of the PI positioning control system estimated by the expected ratio method cannot effectively meet the required response specification, we suggest the coefficient diagram method for defining the controller parameters under the system requirements. The simulation results show that the proposed method solves the tuning problem of the rotary positioner controller and satisfies the performance specification of the control system. Furthermore, it is very convenient, allowing fast adjustment of the damping ratio as well as a high-speed response.

Keywords: Optimization Parameters, Rotary Positioner, CDM

8444 Angles of Arrival Estimation with Unitary Partial Propagator

Authors: Youssef Khmou, Said Safi

Abstract:

In this paper, we investigate the effect of a real-valued transformation of the spectral matrix of the received data on the Angles Of Arrival estimation problem. A unitary transformation of the Partial Propagator (UPP) for narrowband sources is proposed and applied to a Uniform Linear Array (ULA).

Monte Carlo simulations demonstrate the performance of the UPP spectrum in comparison with the Forward Backward Partial Propagator (FBPP) and the Unitary Propagator (UP). The results show that when some of the sources are fully correlated and closer than the Rayleigh angular resolution limit of the broadside array, the UPP method outperforms the FBPP in both spatial resolution and complexity.

Keywords: DOA, Uniform Linear Array, Narrowband, Propagator, Real valued transformation, Subspace, Unitary Operator.

8443 Kalman Filter Design in Structural Identification with Unknown Excitation

Authors: Z. Masoumi, B. Moaveni

Abstract:

This article concerns the first step of structural health monitoring: identifying the structural system in the presence of unknown input. In structural system identification, the identification of structural parameters such as stiffness and damping is considered. In this study, the design of a Kalman filter (KF) for structural systems with unknown excitation is presented. External excitations, such as earthquakes, wind or any other forces, are not measured or not available. The strength of this filter is its ability to estimate the state variables of the system in the presence of unknown input. The least squares estimation (LSE) method with unknown input is also studied, and parameter estimates are obtained. Finally, the advantages and drawbacks of both methods are examined using two examples.

Keywords: Structural health monitoring, Kalman filter, Least square estimation, structural system identification.

8442 Channel Estimation for Orthogonal Frequency Division Multiplexing Systems over Doubly Selective Channels Based on the DCS-DCSOMP Algorithm

Authors: Linyu Wang, Furui Huo, Jianhong Xiang

Abstract:

The Doppler shift generated by high-speed movement and multipath effects in the channel are the main reasons for the generation of a time-frequency doubly-selective (DS) channel. There is severe inter-carrier interference (ICI) in the DS channel. Channel estimation for an orthogonal frequency division multiplexing (OFDM) system over a DS channel is very difficult. The simultaneous orthogonal matching pursuit (SOMP) algorithm under distributed compressive sensing theory (DCS-SOMP) has been used in channel estimation for OFDM systems over DS channels. However, the reconstruction accuracy of the DCS-SOMP algorithm is not high enough in the low Signal-to-Noise Ratio (SNR) stage. To solve this problem, in this paper, we propose an improved DCS-SOMP algorithm based on the inner product difference comparison operation (DCS-DCSOMP). The reconstruction accuracy is improved by increasing the number of candidate indexes and designing the comparison conditions of inner product difference. We combine the DCS-DCSOMP algorithm with the basis expansion model (BEM) to reduce the complexity of channel estimation. Simulation results show the effectiveness of the proposed algorithm and its advantages over other algorithms.

Keywords: OFDM, doubly selective, channel estimation, compressed sensing
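
The greedy pursuit underlying SOMP-type recovery is easiest to see in plain orthogonal matching pursuit, which estimates a sparse channel vector by repeatedly selecting the dictionary atom most correlated with the residual and re-solving a least-squares problem on the chosen support. The sketch below is standard OMP on a single measurement vector, not the proposed DCS-DCSOMP with its inner-product difference comparison or the BEM parameterisation:

```python
import numpy as np

def omp(A, y, sparsity):
    """Basic orthogonal matching pursuit: solve y ~= A h with h having `sparsity` nonzeros."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        correlations = np.abs(A.conj().T @ residual)
        correlations[support] = 0                       # do not reselect chosen atoms
        support.append(int(np.argmax(correlations)))    # atom most correlated with the residual
        As = A[:, support]
        coef, *_ = np.linalg.lstsq(As, y, rcond=None)   # least squares on the current support
        residual = y - As @ coef
    h_hat = np.zeros(A.shape[1], dtype=A.dtype)
    h_hat[support] = coef
    return h_hat

rng = np.random.default_rng(6)
N_taps, M, K = 64, 24, 3                                # channel length, measurements, sparsity
h = np.zeros(N_taps, dtype=complex)
h[rng.choice(N_taps, K, replace=False)] = rng.standard_normal(K) + 1j * rng.standard_normal(K)
A = (rng.standard_normal((M, N_taps)) + 1j * rng.standard_normal((M, N_taps))) / np.sqrt(2 * M)
y = A @ h + 0.01 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
h_hat = omp(A, y, K)
print("recovered support:", np.nonzero(np.abs(h_hat) > 1e-6)[0], " true support:", np.nonzero(h)[0])
```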

8441 An Optimized Method for 3D Magnetic Navigation of Nanoparticles inside Human Arteries

Authors: Evangelos G. Karvelas, Christos Liosis, Andreas Theodorakakos, Theodoros E. Karakasidis

Abstract:

In the present work, a numerical method is presented for estimating the gradient magnetic fields appropriate for optimally driving particles into a desired area inside the human body. The proposed method combines Computational Fluid Dynamics (CFD), the Discrete Element Method (DEM) and the Covariance Matrix Adaptation (CMA) evolution strategy for the magnetic navigation of nanoparticles. It is based on an iterative procedure that intends to eliminate the deviation of the nanoparticles from a desired path. Hence, the gradient magnetic field is constantly adjusted so that the particles follow the desired trajectory as closely as possible. Using the proposed method, it becomes clear that the particle diameter is a crucial parameter for efficient navigation; increasing the particle diameter decreases the deviation from the desired path. Moreover, the navigation method can guide nanoparticles into the desired areas with an efficiency of approximately 99%.

Keywords: CFD, CMA evolution strategy, DEM, magnetic navigation, spherical particles.

8440 Navigation Patterns Mining Approach based on Expectation Maximization Algorithm

Authors: Norwati Mustapha, Manijeh Jalali, Abolghasem Bozorgniya, Mehrdad Jalali

Abstract:

Web usage mining algorithms have been widely utilized for modeling user web navigation behavior. In this study we advance a model for mining users' navigation patterns. The model builds a user model based on the expectation-maximization (EM) algorithm. The EM algorithm is used in statistics for finding maximum likelihood estimates of parameters in probabilistic models that depend on unobserved latent variables. The experimental results show that, as the number of clusters decreases, the log likelihood converges toward lower values, and the probability of the largest cluster decreases as the number of clusters increases in each treatment.

Keywords: Web Usage Mining, Expectation maximization, navigation pattern mining.
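
The EM iteration referred to here alternates a responsibility (E) step with a parameter re-estimation (M) step, and the data log-likelihood is non-decreasing across iterations; a minimal one-dimensional Gaussian-mixture sketch on synthetic data (session features extracted from web logs would take the place of the synthetic samples):

```python
import numpy as np

rng = np.random.default_rng(7)
data = np.concatenate([rng.normal(0, 1, 300), rng.normal(5, 1.5, 200)])   # synthetic "sessions"

def em_gmm_1d(x, k=2, iters=50):
    n = x.size
    mu = rng.choice(x, k)                  # initial means picked from the data
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)                # mixing weights
    for _ in range(iters):
        # E-step: responsibility of each cluster for each point
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances from the responsibilities
        nk = resp.sum(axis=0)
        w, mu = nk / n, (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    return w, mu, var, np.log(dens.sum(axis=1)).sum()      # final log-likelihood

w, mu, var, ll = em_gmm_1d(data)
print("weights:", np.round(w, 2), " means:", np.round(mu, 2), " log-likelihood:", round(ll, 1))
```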

8439 Building Information Modeling-Based Approach for Automatic Quantity Take-off and Cost Estimation

Authors: Lo Kar Yin, Law Ka Mei

Abstract:

Architectural, engineering, construction and operations (AECO) industry practitioners have adapted well to the dynamic construction market through the fundamental training of their disciplines. Further prompted by the pandemic since 2019, great steps have been taken in the virtual environment, and the best possible collaboration is sought with project teams without boundaries. With the adoption of a Building Information Modeling-based approach and qualitative analysis, this paper reviews the quantity take-off (QTO) and cost estimation process through modeling techniques, in liaison with suppliers, fabricators, subcontractors, contractors, designers, consultants and service providers in the construction industry value chain, for automatic project cost budgeting, project cost control and cost evaluation of design options for in-situ reinforced-concrete construction and Modular Integrated Construction (MiC) at the design stage, and for variation of works and cash flow/spending analysis at the construction stage as far as practicable, with a view to sharing the findings for enhancing mutual trust and co-operation among AECO industry practitioners. The aim is to foster development through a common prototype of the design-and-build project delivery method under NEC4 Engineering and Construction Contract (ECC) Options A and C.

Keywords: Building Information Modeling, cost estimation, quantity take-off, modeling techniques.

8438 Density Estimation using Generalized Linear Model and a Linear Combination of Gaussians

Authors: Aly Farag, Ayman El-Baz, Refaat Mohamed

Abstract:

In this paper we present a novel approach for density estimation. The proposed approach uses the logistic regression model to obtain an initial density estimate for the given empirical density. Since the empirical data do not exactly follow the logistic regression model, there will be a deviation, positive and/or negative, between the empirical density and the density estimated by the logistic regression model. We model this deviation with a linear combination of Gaussians (LCG) with positive and negative components, and use the expectation maximization (EM) algorithm to estimate the parameters of the LCG. Experiments on real images demonstrate the accuracy of our approach.

Keywords: Logistic regression model, Expectation maximization, Segmentation.

8437 Factors of Effective Business Software Systems Development and Enhancement Projects Work Effort Estimation

Authors: Beata Czarnacka-Chrobot

Abstract:

The majority of Business Software Systems (BSS) Development and Enhancement Projects (D&EP) fail to meet the criteria of effectiveness, which leads to considerable financial losses. One of the fundamental reasons for such projects' exceptionally low success rate is improperly derived estimates of their costs and time. In the case of BSS D&EP these attributes are determined by the work effort; meanwhile, reliable and objective effort estimation still appears to be a great challenge for software engineering. This paper therefore presents the most important synthetic conclusions from the author's own studies concerning the main factors of effective BSS D&EP work effort estimation. Thanks to rational investment decisions made on the basis of reliable and objective criteria, it is possible to reduce losses caused not only by abandoned projects but also by large overruns of the time and costs of BSS D&EP execution.

Keywords: Benchmarking data, business software systems development and enhancement projects, effort estimation, software engineering economics, software functional size measurement.

8436 A Linear Use Case Based Software Cost Estimation Model

Authors: Hasan.O. Farahneh, Ayman A. Issa

Abstract:

Software development is moving towards agility, with use cases and scenarios being used for requirements stories. Estimates of software costs are becoming even more important than before, as the effect of delays is much larger in the successive short-release context of agile development. This paper therefore reports on the development of a new linear use-case-based software cost estimation model, applicable in the very early stages of software development and based on a simple metric. Evaluation showed that the accuracy of estimates varies between 43% and 55% of the actual effort of historical test projects. These results outperformed those of well-known models when applied in the same context. Further work is being carried out to improve the performance of the proposed model by considering the effect of non-functional requirements.

Keywords: Metrics, Software Cost Estimation, Use Cases

8435 Generalized Maximal Ratio Combining as a Supra-optimal Receiver Diversity Scheme

Authors: Jean-Pierre Dubois, Rania Minkara, Rafic Ayoubi

Abstract:

Maximal Ratio Combining (MRC) is considered the most complex combining technique, as it requires channel coefficient estimation. It results in the lowest bit error rate (BER) compared to all other combining techniques. However, the BER starts to deteriorate as errors are introduced into the channel coefficient estimates. A novel combining technique, termed Generalized Maximal Ratio Combining (GMRC) with a polynomial kernel, yields a BER identical to that of MRC with perfect channel estimation and a lower BER in the presence of channel estimation errors. We show that GMRC outperforms the optimal MRC scheme in general and introduce it to the scientific community as a new "supra-optimal" algorithm. Since diversity combining is especially effective in small femto- and pico-cells, internet-associated wireless peripheral systems stand to benefit most from GMRC. As a result, many spin-off applications can be made to IP-based 4th generation networks.

Keywords: Bit error rate, femto-internet cells, generalized maximal ratio combining, signal-to-scattering noise ratio.
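
For reference, the baseline that GMRC is measured against, maximal ratio combining with perfect channel knowledge, weights each diversity branch by the conjugate of its channel coefficient before the decision; a short BPSK-over-Rayleigh sketch (the GMRC polynomial kernel itself is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(8)
n_bits, n_branches, snr_db = 200_000, 2, 10
snr = 10 ** (snr_db / 10)

bits = rng.integers(0, 2, n_bits)
s = 2.0 * bits - 1.0                                        # BPSK symbols
h = (rng.standard_normal((n_branches, n_bits)) +
     1j * rng.standard_normal((n_branches, n_bits))) / np.sqrt(2)       # Rayleigh branch gains
noise = (rng.standard_normal((n_branches, n_bits)) +
         1j * rng.standard_normal((n_branches, n_bits))) / np.sqrt(2 * snr)
r = h * s + noise

# MRC: weight each branch by conj(h) and sum before the decision
combined = np.sum(np.conj(h) * r, axis=0)
bits_hat = (combined.real > 0).astype(int)
print("MRC BER at %d dB with %d branches: %.2e" % (snr_db, n_branches, np.mean(bits_hat != bits)))
```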

8434 An Efficient Collocation Method for Solving the Variable-Order Time-Fractional Partial Differential Equations Arising from the Physical Phenomenon

Authors: Haniye Dehestani, Yadollah Ordokhani

Abstract:

In this work, we present an efficient approach, based on Legendre and Laguerre polynomials, for solving variable-order time-fractional partial differential equations. First, we introduce the pseudo-operational matrices of integer-order and variable fractional-order integration using properties of the Riemann-Liouville fractional integral. These are then applied, together with the collocation method and Legendre-Laguerre functions, to solve variable-order time-fractional partial differential equations. An error estimate is also presented. Finally, we investigate numerical examples arising in physics to demonstrate the accuracy of the present method. Comparison of the results obtained by the present method with the exact solution and with other methods reveals that the method is very effective.

Keywords: Collocation method, fractional partial differential equations, Legendre-Laguerre functions, pseudo-operational matrix of integration.

8433 Testing the Accuracy of ML-ANN for Harmonic Estimation in Balanced Industrial Distribution Power System

Authors: Wael M. El-Mamlouk, Metwally A. El-Sharkawy, Hossam. E. Mostafa

Abstract:

In this paper, we analyze and test a scheme for estimating the electrical fundamental frequency signals from the harmonic load current and voltage signals. The scheme is based on two different Multi-Layer Artificial Neural Networks (ML-ANN), one for the current and the other for the voltage. The study also analyzes and tests the effect of choosing the optimum artificial neural network sizes, which determine the quality and accuracy of the estimated fundamental frequency signals. The Simulink toolbox of MATLAB was used for the simulation of the test system and the testing of the neural networks.

Keywords: Harmonics, Neural Networks, Modeling, Simulation, Active filters, electric Networks.

8432 Assessment of the Energy Balance Method in the Case of Masonry Domes

Authors: M. M. Sadeghi, S. Vahdani

Abstract:

Masonry dome structures were widely used for covering large spans in the past. The seismic assessment of these historical structures is very complicated due to the nonlinear behavior of the material, their rigidity, and their particular stability configuration. An assessment method based on the energy balance concept, as well as the standard pushover analysis, is used to evaluate the effectiveness of these methods for masonry dome structures. The Soltanieh dome is used as an example to which the two methods are applied. The performance points, obtained by superimposing the capacity and demand curves in Acceleration Displacement Response Spectra (ADRS) and energy coordinates, are compared with nonlinear time history analysis as the reference result. The results show good agreement between the dynamic analysis and the energy balance method, but the standard pushover method does not provide an acceptable estimate.

Keywords: Energy balance method, pushover analysis, time history analysis, masonry dome.
