Search results for: optimal estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2577

2007 Optimal Model Order Selection for Transient Error Autoregressive Moving Average (TERA) MRI Reconstruction Method

Authors: Abiodun M. Aibinu, Athaur Rahman Najeeb, Momoh J. E. Salami, Amir A. Shafie

Abstract:

An alternative approach to the use of the Discrete Fourier Transform (DFT) for Magnetic Resonance Imaging (MRI) reconstruction is the parametric modeling technique. This method is suitable for problems in which the image can be modeled by explicit known source functions with a few adjustable parameters. Despite the success reported in the use of modeling as an alternative MRI reconstruction technique, two important problems remain challenges to its applicability: estimation of the model order and determination of the model coefficients. In this paper, five suggested methods of evaluating the model order are assessed: the Final Prediction Error (FPE), Akaike Information Criterion (AIC), Residual Variance (RV), Minimum Description Length (MDL) and Hannan and Quinn (HNQ) criteria. These criteria were evaluated on MRI data sets using the Transient Error Reconstruction Algorithm (TERA), and the result for each criterion was compared with that obtained by a fixed-order technique using three measures of similarity. The results show that MDL gives the highest measure of similarity to the fixed-order technique.
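
As a hedged illustration of how such order-selection criteria behave, the sketch below fits AR models (a special case of ARMA) of increasing order by least squares and scores each order with FPE, AIC and MDL. The criterion formulas are the standard textbook ones; the synthetic signal, fitting procedure and function names are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: AR order selection with FPE / AIC / MDL (assumed setup).
import numpy as np

def fit_ar_residual_variance(x, p):
    """Fit an AR(p) model by least squares and return the residual variance."""
    N = len(x)
    # Regression matrix of lagged samples: column k holds x[t-k-1].
    X = np.column_stack([x[p - k - 1:N - k - 1] for k in range(p)])
    y = x[p:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coeffs
    return np.mean(resid ** 2)

def order_criteria(x, max_order=20):
    """Evaluate FPE, AIC and MDL for AR orders 1..max_order."""
    N = len(x)
    scores = {}
    for p in range(1, max_order + 1):
        v = fit_ar_residual_variance(x, p)
        fpe = v * (N + p) / (N - p)          # Akaike's Final Prediction Error
        aic = N * np.log(v) + 2 * p          # Akaike Information Criterion
        mdl = N * np.log(v) + p * np.log(N)  # Minimum Description Length
        scores[p] = (fpe, aic, mdl)
    return scores

rng = np.random.default_rng(0)
# Synthetic AR(4) test signal (stands in for a row of k-space data).
x = rng.standard_normal(2048)
for n in range(4, len(x)):
    x[n] += 0.6 * x[n-1] - 0.3 * x[n-2] + 0.2 * x[n-3] - 0.1 * x[n-4]

scores = order_criteria(x)
best_mdl = min(scores, key=lambda p: scores[p][2])
print("Order chosen by MDL:", best_mdl)
```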

Keywords: Autoregressive Moving Average (ARMA), Magnetic Resonance Imaging (MRI), Parametric modeling, Transient Error.

2006 The Results of the Fetal Weight Estimation of the Infants Delivered in the Delivery Room at Dan Khun Thot Hospital by Johnson's Method

Authors: Nareelux Suwannobol, Jintana Tapin, Khuanchanok Narachan

Abstract:

The objective of this study was to determine the accuracy of fetal weight estimation by Johnson's method and to compare it with actual birth weight. The sample group was 126 infants delivered in Dan Khun Thot Hospital from January to March 2012. Fetal weight was estimated by measuring fundal height according to Johnson's method. The information was collected from historical delivery records and analyzed using frequency, percentage, mean, and standard deviation; the difference was then analyzed by a paired t-test. The average actual birth weight was 3,093.57 ± 391.03 g (mean ± SD), while the average fetal weight estimated by Johnson's method was 3,455 ± 454.55 g, which was higher than the average actual birth weight; the mean difference was 384.09 g. When the infants were classified according to birth weight, actual birth weight was less than the estimated fetal weight for the low birth weight (<2,500 g) and appropriate birth weight (2,500-3,999 g) groups, but greater than the estimated fetal weight for the high birth weight (>4,000 g) group. The smallest differences between actual and estimated fetal weight were found in the high birth weight (>4,000 g), appropriate birth weight (2,500-3,999 g) and low birth weight (<2,500 g) groups, respectively. The rate of estimates within 10% of actual birth weight was 35.7%. The difference between estimated and actual birth weight was statistically significant (p < .001). Johnson's method can provide an initial fetal weight estimate before passing to special examinations, which may be excessively costly. A variety of methods should be employed to estimate fetal weight more precisely, which will help plan care for the mother's and infant's safety.
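
For concreteness, here is a minimal sketch of Johnson's formula as it is commonly stated: EFW (g) = (fundal height in cm - n) x 155, with n depending on the station of the vertex. The station rule (n = 12 above the ischial spines, n = 11 when engaged) and the example numbers are assumptions of this sketch, not values from the study.

```python
# Minimal sketch of Johnson's formula (commonly stated form; the station
# rule n=12/n=11 is an assumption here, not taken from the paper).
def johnson_efw(fundal_height_cm: float, engaged: bool) -> float:
    n = 11 if engaged else 12
    return (fundal_height_cm - n) * 155.0

def percent_error(estimated: float, actual: float) -> float:
    return 100.0 * (estimated - actual) / actual

# Hypothetical example: 34 cm fundal height, head not engaged.
efw = johnson_efw(34, engaged=False)
print(f"EFW = {efw:.0f} g, error vs a 3200 g newborn = "
      f"{percent_error(efw, 3200):+.1f}%")
```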

Keywords: Johnson's method, Fetal weight estimate, Delivery Room, Student nurse.

2005 Explicit Solution of an Investment Plan for a DC Pension Scheme with Voluntary Contributions and Return Clause under Logarithm Utility

Authors: Promise A. Azor, Avievie Igodo, Esabai M. Ase

Abstract:

This paper merges the return of premium clause and voluntary contributions to investigate retirees' investment plans in a defined contribution (DC) pension scheme with a portfolio comprising a risk-free asset and a risky asset whose price process is described by geometric Brownian motion (GBM). The paper considers additional voluntary contributions paid by members, the charge on balance levied by pension fund administrators, and the mortality risk of members of the scheme during the accumulation period by introducing a return of premium clause. To achieve this, the Weibull mortality force function is used to establish the mortality rate of members during the accumulation phase. Furthermore, an optimization problem in the form of the Hamilton-Jacobi-Bellman (HJB) equation is obtained using the dynamic programming approach. The Legendre transformation method is then used to transform the HJB equation, a nonlinear partial differential equation, into a linear partial differential equation, and the resultant equation is solved for the value function and the optimal distribution plan under a logarithm utility function. Finally, numerical simulations of the impact of some important parameters on the optimal distribution plan were obtained, and it was observed that the optimal distribution plan is inversely proportional to the initial fund size, the predetermined interest rate, additional voluntary contributions, the charge on balance and the instantaneous volatility.
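
A minimal simulation sketch of the asset dynamics the paper assumes (not its closed-form HJB/Legendre solution): a fund split between a risk-free asset and a GBM risky asset, accumulating a constant contribution rate. All parameter values below are illustrative assumptions.

```python
# Minimal sketch: Euler simulation of a fund with fraction pi in a GBM
# risky asset, the rest at rate r, plus contribution rate c (assumed values).
import numpy as np

def simulate_fund(x0=100.0, pi=0.4, r=0.03, mu=0.08, sigma=0.2,
                  c=10.0, T=20.0, steps=2000, seed=1):
    """Simulate dX = [r X + pi X (mu - r) + c] dt + pi X sigma dW."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    x = x0
    for _ in range(steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        x += (r * x + pi * x * (mu - r) + c) * dt + pi * x * sigma * dW
    return x

terminal = [simulate_fund(seed=s) for s in range(200)]
print(f"mean terminal fund: {np.mean(terminal):.1f}")
```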

Keywords: Legendre transform, logarithm utility, optimal distribution plan, return clause of premium, charge on balance, Weibull mortality function.

2004 Evaluating Accuracy of Foetal Weight Estimation by Clinicians in Christian Medical College Hospital, India and Its Correlation to Actual Birth Weight: A Clinical Audit

Authors: Aarati Susan Mathew, Radhika Narendra Patel, Jiji Mathew

Abstract:

A retrospective study was conducted at Christian Medical College (CMC) Teaching Hospital, Vellore, India on 14th August 2014 to assess the accuracy of clinically estimated foetal weight upon labour admission. Estimating foetal weight is a crucial factor in assessing maternal and foetal complications during and after labour. Medical notes of ninety-eight postnatal women who fulfilled the inclusion criteria were studied to evaluate the correlation between the Estimated Foetal Weight (EFW) recorded on admission and the actual birth weight (ABW) of the newborn after delivery. Data concerning maternal and foetal demographics were also noted. Accuracy was determined by the absolute percentage error and the proportion of estimates within 10% of ABW. Actual birth weights ranged from 950-4080 g. A strong positive correlation between EFW and ABW (r=0.904) was noted. Term deliveries (≥40 weeks) in the normal weight range (2500-4000 g) had a 59.5% estimation accuracy (n=74), compared with an estimation accuracy of 0% for pre-term deliveries (<40 weeks, n=2). Among term deliveries, macrosomic babies (>4000 g) were underestimated by 25% (n=3) and low birthweight (LBW) babies were overestimated by 12.7% (n=9). Registrars who estimated foetal weight were accurate for babies within the normal weight range; however, prediction of the weight of macrosomic and LBW foetuses needs to improve. We have suggested the use of an amended version of Johnson's formula for the Indian population and a re-audit once it is implemented.
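
A minimal sketch of the audit's accuracy measures (absolute percentage error, the proportion of estimates within 10% of ABW, and Pearson's correlation), computed here on hypothetical numbers, not the study data.

```python
# Minimal sketch of the accuracy metrics on made-up EFW/ABW pairs.
import numpy as np

efw = np.array([3100, 2800, 3500, 2400, 4100], dtype=float)  # estimated (g)
abw = np.array([3010, 2950, 3620, 2300, 3890], dtype=float)  # actual (g)

ape = 100.0 * np.abs(efw - abw) / abw        # absolute percentage error
within_10pct = np.mean(ape <= 10.0)          # proportion within 10% of ABW
r = np.corrcoef(efw, abw)[0, 1]              # Pearson correlation

print(f"mean APE = {ape.mean():.1f}%, within 10% = {within_10pct:.0%}, r = {r:.3f}")
```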

Keywords: Clinical palpation, estimated foetal weight, pregnancy, India, Johnson’s formula.

2003 A Fast Adaptive Tomlinson-Harashima Precoder for Indoor Wireless Communications

Authors: M. Naresh Kumar, Abhijit Mitra, C. Ardil

Abstract:

A fast adaptive Tomlinson-Harashima (T-H) precoder structure is presented for indoor wireless communications, where the channel may vary due to rotation and small movements of the mobile terminal. A frequency-selective slow fading channel which is time-invariant over a frame is assumed. In this adaptive T-H precoder, the feedback coefficients are updated at the end of every uplink frame using a system identification technique for channel estimation, in contrast to the conventional T-H precoding concept, where the channel is estimated at the start of the uplink frame via the Wiener solution. The conventional T-H precoder assumes the channel is time-invariant over both the uplink and downlink frames. By assuming the channel is time-invariant over only one frame instead of two, the proposed adaptive T-H precoder yields better performance than the conventional T-H precoder when the channel varies in the uplink after the training sequence is received.
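
A minimal sketch of the two building blocks the abstract combines: a modulo-reduced T-H feedback filter, and an LMS system identifier of the kind that could refresh the feedback taps at the end of an uplink frame. The constellation, channel taps and step size are assumptions, not the paper's configuration.

```python
# Minimal sketch: T-H precoding with modulo-2M reduction + LMS channel ID.
import numpy as np

M = 4  # 4-PAM: symbols in {-3, -1, 1, 3}; modulo interval is [-M, M)

def mod2M(v):
    return ((v + M) % (2 * M)) - M

def th_precode(symbols, b):
    """x[k] = mod_2M( a[k] - sum_i b[i] * x[k-i-1] )."""
    x = np.zeros(len(symbols))
    for k, a in enumerate(symbols):
        past = x[max(0, k - len(b)):k][::-1]      # x[k-1], x[k-2], ...
        x[k] = mod2M(a - np.dot(b[:len(past)], past))
    return x

def lms_identify(tx, rx, taps=3, mu=0.01):
    """Estimate channel taps from a known training sequence by LMS."""
    h = np.zeros(taps)
    for k in range(taps, len(tx)):
        u = tx[k - taps + 1:k + 1][::-1]          # tx[k], tx[k-1], ...
        e = rx[k] - np.dot(h, u)
        h += mu * e * u
    return h

rng = np.random.default_rng(2)
train = rng.choice([-3, -1, 1, 3], size=500).astype(float)
h_true = np.array([1.0, 0.5, -0.2])               # assumed channel
rx = np.convolve(train, h_true)[:len(train)] + 0.01 * rng.standard_normal(len(train))
h_est = lms_identify(train, rx)
b = h_est[1:] / h_est[0]                          # post-cursor taps feed the precoder
x = th_precode(rng.choice([-3, -1, 1, 3], size=100).astype(float), b)
print("estimated channel:", np.round(h_est, 3))
```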

Keywords: Tomlinson-Harashima precoder, Adaptive channel estimation, Indoor wireless communication, Bit error rate.

2002 Optimal Facility Layout Problem Solution Using Genetic Algorithm

Authors: Maricar G. Misola, Bryan B. Navarro

Abstract:

The Facility Layout Problem (FLP) is one of the essential problems in many types of manufacturing and service sectors. It is an optimization problem in which the main objective is to obtain an efficient location, arrangement and ordering of the facilities. The literature contains numerous facility layout studies that use meta-heuristic approaches to achieve an optimal facility layout design. This paper presents a genetic algorithm to solve the facility layout problem by minimizing the total cost function. The performance of the proposed approach was verified and compared using problems from the literature.
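
A minimal sketch of a permutation-encoded genetic algorithm for a QAP-style layout, where the cost is the sum of flow x distance under an assignment of facilities to locations. The operators (order crossover, swap mutation) and all parameters are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch: GA for a QAP-style facility layout (assumed instance).
import numpy as np

rng = np.random.default_rng(3)
n = 8
flow = rng.integers(0, 10, (n, n)); flow = (flow + flow.T) // 2
dist = rng.integers(1, 10, (n, n)); dist = (dist + dist.T) // 2
np.fill_diagonal(flow, 0); np.fill_diagonal(dist, 0)

def cost(perm):
    # perm[i] = location assigned to facility i; material handling cost.
    return sum(flow[i, j] * dist[perm[i], perm[j]]
               for i in range(n) for j in range(n))

def crossover(p1, p2):
    """Order crossover (OX): copy a slice of p1, fill the rest from p2."""
    a, b = sorted(rng.choice(n, 2, replace=False))
    child = [-1] * n
    child[a:b] = p1[a:b]
    rest = [g for g in p2 if g not in child]
    for i in range(n):
        if child[i] == -1:
            child[i] = rest.pop(0)
    return np.array(child)

pop = [rng.permutation(n) for _ in range(40)]
for gen in range(200):
    pop.sort(key=cost)
    elite = pop[:10]                          # keep the best layouts
    children = []
    for _ in range(30):
        p1, p2 = [elite[rng.integers(10)] for _ in range(2)]
        c = crossover(p1, p2)
        if rng.random() < 0.2:                # swap mutation
            i, j = rng.choice(n, 2, replace=False)
            c[i], c[j] = c[j], c[i]
        children.append(c)
    pop = elite + children
print("best layout cost:", cost(min(pop, key=cost)))
```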

Keywords: Facility Layout Problem, Genetic Algorithm, Material Handling Cost, Meta-heuristic Approach.

2001 An Estimation Method of Stress Distribution for Beam Structures Using Terrestrial Laser Scanning

Authors: Sang Wook Park, Jun Su Park, Byung Kwan Oh, Yousok Kim, Hyo Seon Park

Abstract:

This study suggests an estimation method of stress distribution for beam structures based on TLS (Terrestrial Laser Scanning). The main components of the method are the creation of lattices from the raw TLS data to satisfy a suitable condition, and the application of CSSI (Cubic Smoothing Spline Interpolation) for estimating the stress distribution. Estimation of the stress distribution of a structural member, or of the whole structure, is one of the important factors in the safety evaluation of a structure. Existing sensors, including the electric strain gauge (ESG) and the Linear Variable Differential Transformer (LVDT), are contact-type sensors that must be installed on the structural members, and they have various limitations, such as the need for separate space where network cables are installed and the difficulty of access for sensor installation in real buildings. To overcome these problems inherent in contact-type sensors, the TLS form of LiDAR (light detection and ranging), which can measure the displacement of a target over a long range without the influence of the surrounding environment and can also capture the whole shape of a structure, has been applied to the field of structural health monitoring. An important characteristic of TLS measurement is the formation of point clouds, which contain many points with local coordinates. Point clouds are not linearly distributed but scattered; interpolation is therefore essential for their analysis. Through the formation of averaged lattices and CSSI on the raw data, a method which can estimate the displacement of a simple beam was developed. The method can be extended to calculate the strain and, finally, to estimate the stress distribution of a structural member. To verify the validity of the method, a loading test on a simple beam was conducted and measured by TLS. A comparison of the estimated stress with the reference stress confirms the validity of the method.
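
A minimal sketch of the spline step under stated assumptions: a cubic smoothing spline is fitted to scattered deflection samples (standing in for the averaged TLS lattices), and its second derivative is used as the curvature, giving the bending stress via sigma ~ E * c * y''. The beam properties, deflected shape and noise level are illustrative.

```python
# Minimal sketch: smoothing spline on noisy deflections -> curvature -> stress.
import numpy as np
from scipy.interpolate import UnivariateSpline

E = 200e9      # Young's modulus (Pa), assumed (steel)
c = 0.1        # distance from neutral axis to extreme fibre (m), assumed
L = 4.0        # span (m)

rng = np.random.default_rng(4)
x = np.sort(rng.uniform(0, L, 200))               # scattered sample positions
w0 = 5e-3
y_true = w0 * np.sin(np.pi * x / L)               # assumed deflected shape
y_meas = y_true + rng.normal(0, 1e-4, x.size)     # TLS-like measurement noise

spline = UnivariateSpline(x, y_meas, k=3, s=len(x) * 1e-8)
kappa = spline.derivative(2)(x)                   # curvature ~ y''
sigma = E * c * kappa                             # bending stress (Pa)
print(f"max |stress| ~ {np.abs(sigma).max()/1e6:.1f} MPa")
```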

Keywords: Structural health monitoring, terrestrial laser scanning, estimation of stress distribution, coordinate transformation, cubic smoothing spline interpolation.

2000 An Optimal Algorithm for Finding (r, Q) Policy in a Price-Dependent Order Quantity Inventory System with Soft Budget Constraint

Authors: S. Hamid Mirmohammadi, Shahrazad Tamjidzad

Abstract:

This paper is concerned with a single-item continuous review inventory system in which demand is stochastic and discrete. The budget consumed for purchasing the ordered items is not restricted, but extra cost is incurred when it exceeds a specific value. The unit purchasing price depends on the quantity ordered under an all-units discount cost structure. In many actual systems, the budget, as a resource occupied by the purchased items, is limited, and the system confronts resource shortage by incurring additional costs. Thus, considering resource shortage costs as part of the system costs, especially when the amount of resource occupied by the purchased items is influenced by quantity discounts, is well motivated by practical concerns. In this paper, an optimization problem is formulated for finding the optimal (r, Q) policy when the system is influenced by a budget limitation and discount pricing simultaneously. Properties of the cost function are investigated, and an algorithm based on a one-dimensional search procedure is proposed for finding an optimal (r, Q) policy which minimizes the expected system costs.

Keywords: (r, Q) policy, Stochastic demand, backorders, limited resource, quantity discounts.

1999 Optimal Voltage and Frequency Control of a Microgrid Using the Harmony Search Algorithm

Authors: Hossein Abbasi

Abstract:

Stability is as important a topic in planning and managing energy in microgrids as it is in conventional power systems, and voltage and frequency stability is one of the issues most recently studied in microgrids. The objectives of this paper are the modelling and design of the components and optimal controllers for voltage and frequency control of an AC/DC hybrid microgrid under different disturbances. Since PI controllers have the advantages of a simple structure and easy implementation, they are designed and modeled in this paper. The harmony search (HS) algorithm is used to optimize the controllers' parameters. According to the results achieved, the PI controllers perform well in voltage and frequency control of the microgrid.
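
A minimal sketch of the harmony search loop as applied to tuning two PI gains. The objective below is a stand-in quadratic error index rather than the paper's microgrid voltage/frequency criterion, and the HMCR/PAR/bandwidth values are typical textbook assumptions.

```python
# Minimal sketch: harmony search tuning (Kp, Ki) on a stand-in objective.
import numpy as np

def objective(g):
    kp, ki = g
    return (kp - 2.0) ** 2 + (ki - 0.5) ** 2   # hypothetical error index

rng = np.random.default_rng(5)
low, high = np.array([0.0, 0.0]), np.array([10.0, 5.0])
hms, hmcr, par, bw = 20, 0.9, 0.3, 0.1         # assumed HS parameters

memory = rng.uniform(low, high, (hms, 2))      # harmony memory
for _ in range(2000):
    new = np.empty(2)
    for d in range(2):
        if rng.random() < hmcr:                # pick from memory...
            new[d] = memory[rng.integers(hms), d]
            if rng.random() < par:             # ...with pitch adjustment
                new[d] += bw * rng.uniform(-1, 1)
        else:                                  # or sample at random
            new[d] = rng.uniform(low[d], high[d])
    new = np.clip(new, low, high)
    worst = np.argmax([objective(m) for m in memory])
    if objective(new) < objective(memory[worst]):
        memory[worst] = new                    # replace the worst harmony

best = min(memory, key=objective)
print(f"tuned gains: Kp={best[0]:.3f}, Ki={best[1]:.3f}")
```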

Keywords: Frequency control, HS algorithm, microgrid, PI controller, voltage control.

1998 Estimation of Attenuation and Phase Delay in Driving Voltage Waveform of an Ultra-High-Speed Image Sensor by Dimensional Analysis

Authors: V. T. S. Dao, T. G. Etoh, C. Vo Le, H. D. Nguyen, K. Takehara, T. Akino, K. Nishi

Abstract:

We present an explicit expression to estimate driving voltage attenuation through an RC network representation of an ultra-high-speed image sensor. The Elmore delay metric for a fundamental RC chain is employed as the first-order approximation. By applying dimensional analysis to SPICE simulation data, we found a simple expression that significantly improves the accuracy of the approximation; the estimation error of the resultant expression for uniform RC networks is less than 2%. Similarly, another simple closed-form model to estimate the 50% delay through fundamental RC networks is derived with sufficient accuracy. The framework of this analysis can be extended to address delay or attenuation issues of other VLSI structures.
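
A minimal sketch of the first-order Elmore approximation the paper starts from: for an RC chain, the delay at node i is tau_i = sum over k of R(path shared with k) * C_k. The per-segment values are illustrative assumptions; the closed form for the uniform chain is printed for comparison.

```python
# Minimal sketch: Elmore delay of a uniform RC ladder (assumed segment values).
import numpy as np

def elmore_delays(R_seg, C_seg):
    """Elmore delay at every node of an RC chain with per-segment R and C."""
    R_cum = np.cumsum(R_seg)          # resistance from driver to node k
    # tau_i = sum_k min(R_cum[i], R_cum[k]) * C[k]  (shared-path resistance)
    return np.array([np.sum(np.minimum(R_cum[i], R_cum) * C_seg)
                     for i in range(len(R_seg))])

N = 100
R, C = 10.0, 1e-15                    # per-segment values (ohm, farad), assumed
tau = elmore_delays(np.full(N, R), np.full(N, C))
print(f"Elmore delay at far end: {tau[-1]*1e12:.3f} ps")
# Closed form for the uniform chain: R*C*N*(N+1)/2
print(f"closed form:             {R*C*N*(N+1)/2*1e12:.3f} ps")
```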

Keywords: Dimensional Analysis, Elmore model, RC network, Signal Attenuation, Ultra-High-Speed Image Sensor.

1997 Optimal Design of Two-Channel Recursive Parallelogram Quadrature Mirror Filter Banks

Authors: Ju-Hong Lee, Yi-Lin Shieh

Abstract:

This paper deals with the optimal design of two-channel recursive parallelogram quadrature mirror filter (PQMF) banks. The analysis and synthesis filters of the PQMF bank are composed of two-dimensional (2-D) recursive digital all-pass filters (DAFs) with a nonsymmetric half-plane (NSHP) support region. The design problem can be facilitated by using the 2-D doubly complementary half-band (DC-HB) property possessed by the analysis and synthesis filters. For finding the coefficients of the 2-D recursive NSHP DAFs, we formulate the design problem as an optimization problem that can be solved by a weighted least-squares (WLS) algorithm in the minimax (L∞) optimal sense. The designed 2-D recursive PQMF bank achieves perfect magnitude response and possesses a satisfactory phase response without requiring an extra phase equalizer. Simulation results are provided for illustration and comparison.

Keywords: Parallelogram Quadrature Mirror Filter Bank, Doubly Complementary Filter, Nonsymmetric Half-Plane Filter, Weighted Least Squares Algorithm, Digital All-Pass Filter.

1996 Human Motion Capture: New Innovations in the Field of Computer Vision

Authors: Najm Alotaibi

Abstract:

Human motion capture has become one of the major areas of interest in the field of computer vision. Some of the major application areas that have been rapidly evolving include advanced human interfaces, virtual reality and security/surveillance systems. This study provides a brief overview of the techniques and applications used for markerless human motion capture, which deals with analyzing human motion in the form of mathematical formulations. The major contribution of this research is that it classifies the computer-vision-based techniques of human motion capture into a taxonomy and then breaks it down into four systematically different categories: tracking, initialization, pose estimation and recognition. Detailed descriptions and relationships are given for the tracking and pose estimation techniques, and the subcategories of each process are further described. The various hypotheses used by researchers in this domain are surveyed and the evolution of these techniques is explained. The survey concludes that most researchers have focused on using mathematical body models for markerless motion capture.

Keywords: Human Motion Capture, Computer Vision, Vision based, Tracking.

1995 Optimal Operation of a Photovoltaic Induction Motor Drive Water Pumping System

Authors: Nelson K. Lujara

Abstract:

The performance characteristics of a photovoltaic induction motor drive water pumping system, with and without a maximum power tracker, are analyzed and presented. The analysis is carried out by determining and assessing the critical loss components in the system using computer aided design (CAD) tools for optimal operation of the system. The results can be used to formulate a well-calibrated computer aided design package for photovoltaic water pumping systems based on the induction motor drive, and allow the design engineer to pre-determine the flow rate and efficiency of the system to suit a particular application.

Keywords: Photovoltaic, water pumping, losses, induction motor.

1994 A Self-Adaptive Genetic-Based Algorithm for the Identification and Elimination of Bad Data

Authors: A. A. Hossam-Eldin, E. N. Abdallah, M. S. El-Nozahy

Abstract:

The identification and elimination of bad measurements is one of the basic functions of a robust state estimator, as bad data corrupt the results of state estimation by the popular weighted least squares method. However, this is a difficult problem to handle, especially when dealing with multiple errors of the interacting, conforming type. In this paper, a self-adaptive genetic-based algorithm is proposed. The algorithm utilizes the results of the classical linearized normal residuals approach to tune the genetic operators; thus, instead of making a randomized search throughout the whole search space, the search is directed and the optimum solution is obtained at very early stages (a maximum of 5 generations). The algorithm utilizes accumulating databases of already computed cases to reduce the computational burden to a minimum. Tests are conducted on the standard IEEE test systems, and the test results are very promising.

Keywords: Bad Data, Genetic Algorithms, Linearized Normal residuals, Observability, Power System State Estimation.

1993 Optimization of Communication Protocols by Stochastic Delay Mechanisms

Authors: J. Levendovszky, I. Koncz, P. Boros

Abstract:

The paper is concerned with developing stochastic delay mechanisms for efficient multicast protocols and for smooth mobile handover processes which are capable of preserving a given Quality of Service (QoS). In both applications, the participating entities (receiver nodes or subscribers) sample a stochastic timer and generate load after a random delay. In this way, the load on the networking resources is evenly distributed, which helps to maintain QoS communication. The optimal timer distributions have been sought in different p.d.f. families (e.g. exponential, power law and radial basis function) and the optimal parameters have been found in a recursive manner. Detailed simulations have demonstrated the improvement in performance in both the multicast and the mobile handover applications.

Keywords: Multicast communication, stochastic delay mechanisms.

1992 Estimation Model for Concrete Slump Recovery by Using Superplasticizer

Authors: Chaiyakrit Raoupatham, Ram Hari Dhakal, Chalermchai Wanichlamlert

Abstract:

This paper introduces a solution for concrete slump recovery using a type-F chemical admixture (superplasticizer, naphthalene base), in order to solve the problem of unusable concrete that has lost its slump, especially in tropical countries with faster slump loss rates. On the other hand, randomly adding superplasticizer to concrete can cause it to segregate. Therefore, this paper also develops an estimation model to calculate the second dose of superplasticizer needed for slump recovery. The fresh properties of ordinary Portland cement concrete, with volumetric ratios of paste to void between aggregate (paste content) of 1.1-1.3, water-cement ratios of 0.30 to 0.67 and initial superplasticizer (naphthalene base) doses of 0.25%-1.6%, were tested for initial slump, and for slump loss every 30 minutes over one and a half hours, by the slump cone test. Concretes with slump loss ranging from 10% to 90% were re-dosed and successfully recovered back to their initial slump, with the slump after re-dosing measured by the slump cone test. From the results, it is concluded that slump loss is slower for mixes with a high initial dose of superplasticizer, because the added superplasticizer disturbs cement hydration. The required second dose of superplasticizer is affected by two major parameters, the water-cement ratio and the paste content: a lower water-cement ratio or paste content increases the required second dose. The second dose of superplasticizer also increases as the solid content of the system increases, whether the solids come from cement particles or aggregate. The data were analyzed to form an equation used to estimate the second dose of superplasticizer required to recover the slump to its original value.

Keywords: Estimation model, second superplasticizer dosage, slump loss, slump recovery.

1991 A New Approach of Fuzzy Methods for Evaluating Hydrological Data

Authors: Nasser Shamskia, Seyyed Habib Rahmati, Hassan Haleh, Seyyedeh Hoda Rahmati

Abstract:

The main design criteria for most hydraulic structures are essentially based on runoff or water discharge; two important such measures are runoff and return period. These measures are mostly calculated or estimated from stochastic data. Another feature of hydrological data is its impreciseness. Therefore, in order to deal with uncertainty and impreciseness, a new fuzzy method of evaluating hydrological measures is developed based on Buckley's estimation method. The method introduces triangular fuzzy numbers for the different measures, in which both the uncertainty and the impreciseness concepts are considered. Besides, since another important consideration in most hydrological studies is the comparison of a measure across different months or years, a new fuzzy ranking method consistent with the special form of the proposed fuzzy numbers is also developed. Finally, to illustrate the methods more explicitly, the two algorithms are tested on a simple example and a real case study.

Keywords: Fuzzy Discharge, Fuzzy estimation, Fuzzy ranking method, Hydrological data.

1990 A Case Study of Limited Dynamic Voltage Frequency Scaling in Low-Power Processors

Authors: Hwan Su Jung, Ahn Jun Gil, Jong Tae Kim

Abstract:

Power management techniques are necessary to save power in microprocessors. By changing the frequency and/or operating voltage of the processor, DVFS can control power consumption. In this paper, we perform a case study to find optimal power-state transitions for DVFS. We propose an equation to find the optimal ratio between executions of the states while taking into account the deadline of the processing time and the power-state transition delay overhead. The experiment is performed on a Cortex-M4 processor, and an average 6.5% power saving is observed when DVFS is applied under the deadline condition.
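
A minimal sketch of the underlying trade-off under stated assumptions (it is not the paper's equation): run just enough cycles in the fast power state to meet the deadline, including one transition overhead, and the remainder in the slow state, which here is assumed to cost less energy per cycle. The Cortex-M4-like numbers are illustrative.

```python
# Minimal sketch: split cycles between fast/slow states under a deadline.
def optimal_split(W, f_hi, f_lo, D, t_sw):
    """Fraction of cycles to run at f_hi so total time (incl. one
    transition) exactly meets deadline D; 0 if f_lo alone suffices."""
    t_slow_only = W / f_lo
    if t_slow_only <= D:
        return 0.0                       # no transition needed
    alpha = (t_slow_only - (D - t_sw)) / (W / f_lo - W / f_hi)
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("deadline infeasible even at f_hi")
    return alpha

def energy(alpha, W, f_hi, f_lo, P_hi, P_lo):
    # Transition energy is neglected in this sketch.
    return P_hi * alpha * W / f_hi + P_lo * (1 - alpha) * W / f_lo

W, f_hi, f_lo = 80e6, 168e6, 48e6        # cycles, Hz (assumed)
P_hi, P_lo = 0.090, 0.020                # W (assumed; slow state cheaper/cycle)
D, t_sw = 1.0, 0.5e-3                    # deadline and transition delay (s)
a = optimal_split(W, f_hi, f_lo, D, t_sw)
print(f"run {a:.1%} of cycles fast; energy = "
      f"{energy(a, W, f_hi, f_lo, P_hi, P_lo)*1e3:.1f} mJ")
```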

Keywords: Deadline, Dynamic Voltage Frequency Scaling, Power State Transition.

1989 Stochastic Model Predictive Control for Linear Discrete-Time Systems with Random Dither Quantization

Authors: Tomoaki Hashimoto

Abstract:

Recently, feedback control systems using random dither quantizers have been proposed for linear discrete-time systems. However, the constraints imposed on state and control variables have not yet been taken into account for the design of feedback control systems with random dither quantization. Model predictive control is a kind of optimal feedback control in which control performance over a finite future is optimized with a performance index that has a moving initial and terminal time. An important advantage of model predictive control is its ability to handle constraints imposed on state and control variables. Based on the model predictive control approach, the objective of this paper is to present a control method that satisfies probabilistic state constraints for linear discrete-time feedback control systems with random dither quantization. In other words, this paper provides a method for solving the optimal control problems subject to probabilistic state constraints for linear discrete-time feedback control systems with random dither quantization.
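
A minimal sketch of the random dither quantizer such systems build on: uniform dither over one quantization step is added before rounding, which renders the quantization error zero-mean and statistically independent of the input (the standard dithered-quantizer property). The step size and the toy closed loop are assumptions.

```python
# Minimal sketch: random dither quantization in a toy feedback loop.
import numpy as np

rng = np.random.default_rng(6)

def dither_quantize(u, delta):
    # Uniform dither over one step, then round to the quantizer grid.
    d = rng.uniform(-delta / 2, delta / 2)
    return delta * np.round((u + d) / delta)

# Toy closed loop: x+ = a x + b q(u), u = -k x  (a - b k = 0.5, stable)
a, b, k, delta = 1.1, 1.0, 0.6, 0.05
x = 1.0
for t in range(50):
    u = -k * x
    x = a * x + b * dither_quantize(u, delta)
print(f"state after 50 steps: {x:.4f}")
```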

Keywords: Optimal control, stochastic systems, discrete-time systems, probabilistic constraints, random dither quantization.

1988 Application of UAS in Forest Firefighting for Detecting Ignitions and 3D Fuel Volume Estimation

Authors: Artur Krukowski, Emmanouela Vogiatzaki

Abstract:

The article presents results from the AF3 project “Advanced Forest Fire Fighting” focused on Unmanned Aircraft Systems (UAS)-based 3D surveillance and 3D area mapping using high-resolution photogrammetric methods from multispectral imaging, also taking advantage of the 3D scanning techniques from the SCAN4RECO project. We also present a proprietary embedded sensor system used for the detection of fire ignitions in the forest using near-infrared based scanner with weight and form factors allowing it to be easily deployed on standard commercial micro-UAVs, such as DJI Inspire or Mavic. Results from real-life pilot trials in Greece, Spain, and Israel demonstrated added-value in the use of UAS for precise and reliable detection of forest fires, as well as high-resolution 3D aerial modeling for accurate quantification of human resources and equipment required for firefighting.

Keywords: Forest wildfires, fuel volume estimation, 3D modeling, UAV, surveillance, firefighting, ignition detectors.

1987 Application Procedure for Optimized Placement of Buckling Restrained Braces in Reinforced Concrete Building Structures

Authors: S. A. Faizi, S. Yoshitomi

Abstract:

The optimal design procedure for buckling restrained braces (BRBs) in reinforced concrete (RC) building structures can provide the distribution of horizontal BRB stiffness at each story which minimizes the story drift response of the structure under the constraint of a specified total BRB stiffness. In this paper, a simple rule is proposed to convert the continuous horizontal stiffness of BRBs into BRB sectional sizes available from a standardized section list, assuming a realistic structural design stage.

Keywords: Buckling restrained brace, building engineering, optimal damper placement, structural engineering.

1986 Acceleration-Based Motion Model for Visual SLAM

Authors: Daohong Yang, Xiang Zhang, Wanting Zhou, Lei Li

Abstract:

Visual Simultaneous Localization and Mapping (VSLAM) is a technology that gathers information about the surrounding environment to ascertain its own position and create a map. It is widely used in computer vision, robotics, and various other fields. Many visual SLAM systems, such as ORB-SLAM3, utilize a constant velocity motion model. This model facilitates the determination of the initial pose of the current frame, thereby enhancing the efficiency and precision of feature matching. However, the constant velocity assumption is often violated in practice. This can result in a significant deviation between the obtained initial pose and the true value, leading to errors in the nonlinear optimization results. Therefore, this paper proposes a motion model based on acceleration that can be applied to most SLAM systems. To describe the camera pose acceleration more accurately, we separate the pose transformation matrix into its rotation matrix and translation vector components, with the rotation matrix represented by a rotation vector. We assume that, over a short period, the changes in rotational angular velocity and in the translation vector remain constant, and the initial pose of the current frame is estimated on this basis. In addition, the error of the constant velocity model is analyzed theoretically. Finally, we apply the proposed approach to the ORB-SLAM3 system and evaluate two sequences from the TUM datasets. The results show that our method gives a more accurate initial pose estimate, improving the accuracy of the ORB-SLAM3 system by 6.61% and 6.46% on the two test sequences, respectively.
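
A minimal sketch of a constant-acceleration pose prediction in the spirit described: the pose is split into a rotation (handled as a rotation vector) and a translation, and the change of angular velocity and of translation rate over the last two frames is extrapolated one frame ahead. The frame data and interval are illustrative assumptions.

```python
# Minimal sketch: constant-acceleration pose extrapolation from 3 poses.
import numpy as np
from scipy.spatial.transform import Rotation

def predict_pose(R_list, t_list, dt):
    """Given the last three poses (R, t) at uniform spacing dt, predict the next."""
    R0, R1, R2 = R_list
    t0, t1, t2 = t_list
    # Angular velocities over the two most recent intervals (rotation vectors).
    w1 = (R0.inv() * R1).as_rotvec() / dt
    w2 = (R1.inv() * R2).as_rotvec() / dt
    w3 = 2 * w2 - w1                  # constant angular acceleration
    v1, v2 = (t1 - t0) / dt, (t2 - t1) / dt
    v3 = 2 * v2 - v1                  # constant linear acceleration
    R3 = R2 * Rotation.from_rotvec(w3 * dt)
    t3 = t2 + v3 * dt
    return R3, t3

dt = 1.0 / 30.0                       # assumed camera frame interval
Rs = [Rotation.from_euler("z", ang, degrees=True) for ang in (0.0, 1.0, 2.2)]
ts = [np.array(p, dtype=float) for p in ([0, 0, 0], [0.01, 0, 0], [0.022, 0, 0])]
R_pred, t_pred = predict_pose(Rs, ts, dt)
print("predicted yaw (deg):", R_pred.as_euler("zyx", degrees=True)[0])
print("predicted position:", t_pred)
```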

Keywords: Error estimation, constant acceleration motion model, pose estimation, visual SLAM.

1985 Performance of an Electrocoagulation Process in Treating Direct Dye: Batch and Continuous Upflow Processes

Authors: C. Phalakornkule, S. Polgumhang, W. Tongdaung

Abstract:

This study presents an investigation of electrochemical variables and an application of the optimal parameters to the operation of a continuous upflow electrocoagulation reactor for dye removal. Direct Red 23, which is azo-based, was used as a representative of direct dyes. First, a batch mode was employed to optimize the design parameters: electrode type, electrode distance, current density and electrocoagulation time. The optimal parameters were found to be an iron anode, a distance between electrodes of 8 mm and a current density of 30 A·m⁻² with a contact time of 5 min. The performance of the continuous upflow reactor with these parameters was satisfactory, with >95% color removal and energy consumption in the order of 0.6-0.7 kWh·m⁻³.

Keywords: Decolorization, Direct Dye, Electrocoagulation, Textile Wastewater, Upflow Reactor.

1984 A Search Algorithm for Solving the Economic Lot Scheduling Problem with Reworks under the Basic Period Approach

Authors: Yu-Jen Chang, Shih-Chieh Chen, Yu-Wei Kuo

Abstract:

In this study, we are interested in the economic lot scheduling problem (ELSP) that considers manufacturing of serviceable products and remanufacturing of reworked products. We formulate a mathematical model for the ELSP with reworks using the basic period approach. To solve this problem, we propose a search algorithm to find the cyclic multiplier k_i of each product, such that product i is cyclically produced every k_i basic periods. This research also uses two heuristics to search for the optimal production sequence of all lots and the optimal time length of the basic period so as to minimize the average total cost. A numerical example shows the effectiveness of our approach.

Keywords: Economic lot, reworks, inventory, basic period.

1983 Perspectives of Renewable Energy in the 21st Century in India: Statistics and Estimation

Authors: Manoj Kumar, Rajesh Kumar

Abstract:

With its favourable geographical conditions, the Indian subcontinent is well suited to flourishing renewable energy. An increasing dependence on coal and other conventional sources is driving the world into pollution and resource depletion. This paper presents statistics of energy consumption and energy generation in the Indian subcontinent, which show that increasing energy demand is surpassing energy generation. With the growth in demand for energy, the usage of coal has increased, since the major portion of energy production in India comes from thermal power plants. The increased usage of thermal power plants causes pollution and depletion of reserves; hence, a paradigm shift to renewable sources is inevitable. In this work, the capacity and potential of renewable sources in India are analyzed, and, based on this analysis, the future potential of these sources is estimated.

Keywords: Energy consumption and generation, depletion of reserves, pollution, estimation, renewable sources.

1982 Optimization of the Input Layer Structure for Feed-Forward NARX Neural Networks

Authors: Zongyan Li, Matt Best

Abstract:

This paper presents an optimization method for reducing the number of input channels and the complexity of a feed-forward NARX neural network (NN) without compromising the accuracy of the NN model. Utilizing the correlation analysis method, the most significant regressors are selected to form the input layer of the NN structure. An application to vehicle dynamic model identification is also presented to demonstrate the optimization technique, and the optimal input layer structure and the optimal number of neurons for the neural network are investigated.
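
A minimal sketch of one simple variant of the correlation analysis the abstract mentions: candidate lagged inputs and outputs of a toy system are ranked by absolute correlation with the target, and the top few are kept as the NARX input layer. The toy system and the cut-off are assumptions, not the paper's exact procedure.

```python
# Minimal sketch: correlation-based selection of NARX regressors.
import numpy as np

rng = np.random.default_rng(7)
N = 2000
u = rng.standard_normal(N)
y = np.zeros(N)
for k in range(3, N):                          # assumed toy NARX system
    y[k] = 0.7 * y[k-1] - 0.2 * y[k-2] + 0.5 * u[k-1] + 0.1 * u[k-3] \
           + 0.05 * rng.standard_normal()

max_lag = 5
names, cols = [], []
for lag in range(1, max_lag + 1):
    for sig, label in ((y, "y"), (u, "u")):
        names.append(f"{label}(k-{lag})")
        cols.append(sig[max_lag - lag:N - lag])  # aligned lagged column
target = y[max_lag:]

corr = [abs(np.corrcoef(c, target)[0, 1]) for c in cols]
ranked = sorted(zip(names, corr), key=lambda p: -p[1])
print("top regressors:", [f"{n} ({c:.2f})" for n, c in ranked[:4]])
```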

Keywords: Correlation analysis, F-ratio, Levenberg-Marquardt, MSE, NARX, neural network, optimisation.

1981 An Efficient Adaptive Thresholding Technique for Wavelet Based Image Denoising

Authors: D. Gnanadurai, V. Sadasivam

Abstract:

This framework describes a computationally more efficient and adaptive threshold estimation method for image denoising in the wavelet domain, based on Generalized Gaussian Distribution (GGD) modeling of subband coefficients. In the proposed method, the threshold estimate is chosen by analysing statistical parameters of the wavelet subband coefficients, such as the standard deviation, arithmetic mean and geometric mean. The noisy image is first decomposed into many levels to obtain different frequency bands, and then soft thresholding is used to remove the noisy coefficients, with the optimum threshold value fixed by the proposed method. Experimental results on several test images show that this method yields significantly superior image quality and a better Peak Signal to Noise Ratio (PSNR). To demonstrate its efficiency in image denoising, the method is compared with various denoising methods such as the Wiener filter, the average filter, VisuShrink and BayesShrink.
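
A minimal sketch of subband-adaptive soft thresholding in the BayesShrink style that the paper benchmarks against, shown in 1-D with PyWavelets for brevity: the noise sigma is estimated from the median of the finest detail band, each detail band gets the threshold sigma_n^2 / sigma_x, and the signal is reconstructed after soft shrinkage. The wavelet, level and test signal are assumptions.

```python
# Minimal sketch: BayesShrink-style soft thresholding in 1-D with PyWavelets.
import numpy as np
import pywt

rng = np.random.default_rng(8)
t = np.linspace(0, 1, 2048)
clean = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sign(np.sin(2 * np.pi * 2 * t))
noisy = clean + 0.2 * rng.standard_normal(t.size)

coeffs = pywt.wavedec(noisy, "db8", level=5)
sigma_n = np.median(np.abs(coeffs[-1])) / 0.6745   # noise estimate (finest band)

den = [coeffs[0]]                                  # keep the approximation band
for d in coeffs[1:]:
    sigma_x = np.sqrt(max(np.var(d) - sigma_n**2, 1e-12))
    thr = sigma_n**2 / sigma_x                     # BayesShrink threshold
    den.append(pywt.threshold(d, thr, mode="soft"))

recon = pywt.waverec(den, "db8")[:t.size]
psnr = 10 * np.log10(np.max(clean)**2 / np.mean((recon - clean)**2))
print(f"PSNR after denoising: {psnr:.1f} dB")
```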

Keywords: Wavelet Transform, Gaussian Noise, Image Denoising, Filter Banks and Thresholding.

1980 Stackelberg Security Game for Optimizing Security of Federated Internet of Things Platform Instances

Authors: Violeta Damjanovic-Behrendt

Abstract:

This paper presents an approach for optimal cyber security decisions to protect instances of a federated Internet of Things (IoT) platform in the cloud. The presented solution implements the repeated Stackelberg Security Game (SSG) and a model called the Stochastic Human behaviour model with AttRactiveness and Probability weighting (SHARP). SHARP employs the Subjective Utility Quantal Response (SUQR) for formulating a subjective utility function, which is based on the evaluations of alternative solutions during decision-making. We augment the repeated SSG (including SHARP and SUQR) with a reinforcement learning algorithm called Naïve Q-Learning, which belongs to the category of active and model-free Machine Learning (ML) techniques in which the agent (either the defender or the attacker) attempts to find an optimal security solution. In this way, we combine game theory (GT) and ML algorithms for discovering optimal cyber security policies. The proposed security optimization components will be validated in a collaborative cloud platform that is based on the Industrial Internet Reference Architecture (IIRA) and its recently published security model.

Keywords: Security, internet of things, cloud computing, Stackelberg security game, machine learning, Naïve Q-learning.

1979 Optimization of Artificial Ageing Time and Temperature on Evaluation of Hardness and Resistivity of Al-Si-Mg (Cu or/& Ni) Alloys

Authors: A. Hossain, A. S. W. Kurny

Abstract:

The factors necessary to obtain an optimal heat treatment that influences the hardness and resistivity of Al-6Si-0.5Mg casting alloys with Cu or/and Ni additions were investigated. The alloys were homogenised (24 h at 500°C), solutionized (2 h at 540°C) and artificially aged at various times and temperatures: isochronally for 60 minutes at temperatures up to 400°C, and isothermally at 150, 175, 200, 225, 250 and 300°C for periods in the range of 15 to 360 minutes. The hardness and electrical resistivity of the alloys were measured for the various artificial ageing times and temperatures. The isochronal ageing treatment showed that hardness reaches a maximum at 225°C, and the isothermal treatment showed a hardness maximum after 60 minutes at 225°C. The optimal heat treatment therefore consists of 60 minutes of ageing at 225°C.

Keywords: Ageing, Al-Si-Mg alloy, hardness, resistivity.

1978 A Study of Adaptive Fault Detection Method for GNSS Applications

Authors: Je Young Lee, Hee Sung Kim, Kwang Ho Choi, Joonhoo Lim, Sebum Chun, Hyung Keun Lee

Abstract:

The purpose of this study is to develop an efficient fault detection method for Global Navigation Satellite System (GNSS) applications based on adaptive noise covariance estimation. Owing to their dependence on radio frequency signals, GNSS measurements are dominated by systematic errors in the receiver's operating environment. In the proposed method, the pseudorange and carrier-phase measurement noise covariances are obtained at the time propagations and measurement updates of Carrier-Smoothed Code (CSC) filtering, respectively. The test statistics for fault detection are generated from the estimated measurement noise covariances. To evaluate the fault detection capability, intentional faults were added to field-collected measurements. The experimental results show that the proposed method is efficient in detecting unhealthy measurements and improves GNSS positioning accuracy in the presence of faults.
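
A minimal sketch of the Carrier-Smoothed Code (Hatch) filter the method is built around, with a simple residual-based test statistic and an injected step fault; the noise levels, smoothing window and threshold are assumptions, not the paper's adaptive covariance scheme.

```python
# Minimal sketch: Hatch filter + residual test on a faulty pseudorange.
import numpy as np

def hatch_filter(code, phase, M=100):
    """Carrier-smoothed code with smoothing window M (in epochs)."""
    s = np.empty_like(code)
    s[0] = code[0]
    for k in range(1, len(code)):
        s[k] = code[k] / M + (M - 1) / M * (s[k-1] + phase[k] - phase[k-1])
    return s

rng = np.random.default_rng(9)
n = 600
true_range = 2.0e7 + 800.0 * np.arange(n)          # metres, assumed geometry
code = true_range + 1.0 * rng.standard_normal(n)   # ~1 m code noise
phase = true_range + 0.01 * rng.standard_normal(n) # ~1 cm phase noise
code[400:] += 8.0                                  # injected step fault

smoothed = hatch_filter(code, phase)
resid = code - smoothed                            # code-minus-smoothed residual
sigma = np.std(resid[:300])                        # noise scale from clean span
alarms = np.where(np.abs(resid) > 5 * sigma)[0]    # simple 5-sigma test
print("first alarm epoch:", alarms[0] if alarms.size else "none")
```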

Keywords: Adaptive estimation, fault detection, GNSS, residual.
