Search results for: likelihood estimation method
8305 Operational Risk – Scenario Analysis
Authors: Milan Rippel, Petr Teply
Abstract:
This paper focuses on operational risk measurement techniques and on economic capital estimation methods. A data sample of operational losses provided by an anonymous Central European bank is analyzed using several approaches: the Loss Distribution Approach and the scenario analysis method. Custom plausible loss events defined in a particular scenario are merged with the original data sample, and their impact on capital estimates and on the financial institution is evaluated. Two main questions are addressed: what is the most appropriate statistical method to measure and model the operational loss data distribution, and what is the impact of hypothetical plausible events on the financial institution? The g&h distribution was evaluated to be the most suitable one for operational risk modeling. The method based on combining historical loss event modeling with scenario analysis provides reasonable capital estimates and allows the impact of extreme events on banking operations to be measured.
Keywords: operational risk, scenario analysis, economic capital, loss distribution approach, extreme value theory, stress testing
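As a rough illustration of the Loss Distribution Approach described above, the sketch below compounds a Poisson loss frequency with a fitted severity distribution and reads the economic capital off the 99.9% quantile of the simulated aggregate annual loss. It is only a sketch under stated assumptions: a lognormal severity is substituted for the paper's g&h distribution, and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical parameters: in practice, frequency and severity are
# fitted to the bank's historical operational loss sample.
lam = 25.0             # Poisson mean: expected number of losses per year
mu, sigma = 10.0, 2.0  # lognormal severity parameters (log scale)

n_years = 100_000      # number of simulated years

# Compound Poisson simulation: draw a loss count per year,
# then sum that many severity draws.
counts = rng.poisson(lam, size=n_years)
annual_losses = np.array([
    rng.lognormal(mu, sigma, size=n).sum() for n in counts
])

# Economic capital is conventionally read off the 99.9% quantile (VaR)
# of the aggregate annual loss distribution.
var_999 = np.quantile(annual_losses, 0.999)
print(f"Expected annual loss: {annual_losses.mean():,.0f}")
print(f"99.9% VaR (capital estimate): {var_999:,.0f}")
```

Scenario analysis in the spirit of the paper would append the custom plausible loss amounts to the historical sample before refitting, then compare the two capital estimates.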
8304 Industrial Wastewater Sludge Treatment in Chongqing, China
Authors: Victor Emery David Jr, Jiang Wenchao, Yasinta John, Md. Sahadat Hossain
Abstract:
Sludge originates from the process of wastewater treatment. It is the byproduct of wastewater treatment, containing concentrated heavy metals, poorly biodegradable trace organic compounds, and potentially pathogenic organisms (viruses, bacteria, etc.), which are usually difficult to treat or dispose of. China, like other countries, is no stranger to the challenges posed by increasing wastewater. Treatment and disposal of sludge have been a problem for most cities in China, and the problem has been exacerbated by a lack of technology, funding, and other factors. Methods suitable for the local climatic conditions are still unavailable for modern cities in China. Against this background, this paper describes the methods used for the treatment and disposal of industrial sludge and suggests a suitable method for Chongqing, China. The research found that the highest treatment rate of sludge in Chongqing was 10.08%, and that the industrial waste piping system is not separated from the domestic system. Considering the proliferation of industry and urbanization, sludge production in Chongqing is likely to increase, and if the sludge produced is not properly managed, this may lead to adverse health and environmental effects. Disposal costs and methods for Chongqing were also analyzed. The research showed that incineration is the most expensive method of sludge disposal in China and Chongqing, so subsequent research considered alternatives such as composting. Composting represents a relatively cheap waste disposal method given the vast population and the current technological and economic conditions of Chongqing, and of China at large.
Keywords: Sludge, disposal of sludge, treatment, industrial sludge, Chongqing, wastewater.
8303 Artificial Neural Network Model Based Setup Period Estimation for Polymer Cutting
Authors: Zsolt János Viharos, Krisztián Balázs Kis, Imre Paniti, Gábor Belső, Péter Németh, János Farkas
Abstract:
The paper presents the results and industrial applications of production setup period estimation based on industrial data inherited from the field of polymer cutting. The literature on polymer cutting is very limited in terms of the number of publications. The first polymer cutting machine has been known since the second half of the 20th century; however, producing polymer parts with this technology is still a challenging research topic. The products of the participating industrial partner must meet high technical requirements, as they are used in the medical, measurement instrumentation, and painting industries. Typically, 20% of these parts are new work, which means that almost the entire product portfolio is replaced every five years in their low-series manufacturing environment. Consequently, a flexible production system is required, in which estimating the lengths of the frequent setup periods is one of the key success factors. In the investigation, several (input) parameters were studied and grouped to create an adequate training information set for an artificial neural network as a basis for estimating the individual setup periods. The first group collects product information such as the product name and number of items. The second group contains material data like material type and colour. The third group collects surface quality and tolerance information, including the finest surface and the tightest (or narrowest) tolerance. The fourth group contains setup data like machine type and work shift. One source of these parameters is the Manufacturing Execution System (MES), but some data were also collected from Computer Aided Design (CAD) drawings. The number of applied tools is one of the key factors on which the industrial partner's estimations were previously based. The artificial neural network model was trained on several thousand real industrial records. The mean estimation accuracy of the setup period lengths was improved by 30%, and at the same time the deviation of the prognosis was improved by 50%. Furthermore, the influence of the mentioned parameter groups with respect to the manufacturing order was also investigated. The paper also highlights the manufacturing introduction experiences and further improvements of the proposed methods, both on the shop floor and in quotation preparation. Every week more than 100 real industrial setup events occur and the related data are collected.
Keywords: Artificial neural network, low series manufacturing, polymer cutting, setup period estimation.
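A minimal sketch of how such an estimator could be assembled is shown below, assuming scikit-learn and entirely hypothetical feature columns that mirror the four parameter groups above; it is not the authors' network or data.

```python
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical records mirroring the four parameter groups: product
# (name, items), material (type, colour), quality (surface, tolerance),
# setup (machine, shift). Real data would come from the MES and CAD.
X = np.array([
    # name, items, material, colour, surface, tolerance, machine, shift
    [0, 50, 1, 2, 0.8, 0.05, 0, 0],
    [1, 10, 0, 1, 1.6, 0.10, 1, 1],
    [2, 200, 2, 0, 0.4, 0.02, 0, 2],
], dtype=float)
y = np.array([35.0, 20.0, 55.0])  # setup period lengths in minutes

categorical = [0, 2, 3, 6, 7]     # one-hot encoded codes
numeric = [1, 4, 5]               # scaled to zero mean, unit variance

model = Pipeline([
    ("prep", ColumnTransformer([
        ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
        ("num", StandardScaler(), numeric),
    ])),
    ("ann", MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                         random_state=0)),
])
model.fit(X, y)
print(model.predict(X[:1]))  # estimated setup period for an order
```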
8302 Development and Validation of an HPLC Method for 6-Gingerol and 6-Shogaol in Joint Pain Relief Gel Containing Ginger (Zingiber officinale)
Authors: Tanwarat Kajsongkram, Saowalux Rotamporn, Sirinat Limbunruang, Sirinan Thubthimthed
Abstract:
A High Performance Liquid Chromatography (HPLC) method was developed and validated for the simultaneous estimation of 6-gingerol (6G) and 6-shogaol (6S) in a joint pain relief gel containing ginger extract. Chromatographic separation was achieved using a C18 column (150 x 4.6 mm i.d., 5 μm, Luna) and a mobile phase containing acetonitrile and water (gradient elution). The flow rate was 1.0 ml/min and the absorbance was monitored at 282 nm. The proposed method was validated in terms of analytical parameters such as specificity, accuracy, precision, linearity, range, limit of detection (LOD), and limit of quantification (LOQ), determined according to the International Conference on Harmonization (ICH) guidelines. Linearity was obtained over the ranges 20-60 μg/ml for 6G and 6-18 μg/ml for 6S. Good linearity was observed over these ranges, with linear regression equations Y = 11016x - 23778 for 6G and Y = 19276x - 19604 for 6S (x is the concentration of the analytes in μg/ml and Y is the peak area). The correlation coefficient was 0.9994 for both markers. The LOD and LOQ were 0.8567 and 2.8555 μg/ml for 6G, and 0.3672 and 1.2238 μg/ml for 6S, respectively. The recoveries for 6G and 6S were 91.57 to 102.36% and 84.73 to 92.85%, respectively, across all three spiked levels. The RSD values from repeated extractions were 3.43% for 6G and 3.09% for 6S. The validation of the developed method for precision, accuracy, specificity, linearity, and range was thus performed with well-accepted results.
Keywords: Ginger, 6-gingerol, HPLC, 6-shogaol.
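For illustration, the ICH calibration-curve formulas behind the LOD and LOQ figures quoted above (LOD = 3.3σ/S and LOQ = 10σ/S, with σ the residual standard deviation and S the slope) can be computed as in the sketch below; the calibration points are hypothetical stand-ins, not the paper's raw data.

```python
import numpy as np

# Hypothetical calibration points for 6-gingerol over the reported
# 20-60 ug/ml range (peak areas invented for illustration).
conc = np.array([20.0, 30.0, 40.0, 50.0, 60.0])      # ug/ml
area = np.array([196_500, 306_700, 417_000, 527_000, 637_200])

# Ordinary least squares fit: area = slope * conc + intercept
slope, intercept = np.polyfit(conc, area, 1)
residuals = area - (slope * conc + intercept)
sigma = residuals.std(ddof=2)   # residual standard deviation

# ICH Q2 calibration-curve formulas:
lod = 3.3 * sigma / slope
loq = 10.0 * sigma / slope
r = np.corrcoef(conc, area)[0, 1]
print(f"r^2 = {r**2:.4f}, LOD = {lod:.3f} ug/ml, LOQ = {loq:.3f} ug/ml")
```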
8301 Quality Estimation of Video Transmitted over an Additive WGN Channel Based on Digital Watermarking and Wavelet Transform
Authors: Mohamed S. El-Mahallawy, Attalah Hashad, Hazem Hassan Ali, Heba Sami Zaky
Abstract:
This paper presents an evaluation of a wavelet-based digital watermarking technique for estimating the quality of video sequences transmitted over an Additive White Gaussian Noise (AWGN) channel in terms of a classical objective metric, the Peak Signal-to-Noise Ratio (PSNR), without requiring the original video. In this method, a watermark is embedded into the Discrete Wavelet Transform (DWT) domain of the original video frames using a quantization method. The degradation of the extracted watermark can be used to estimate the video quality in terms of PSNR with good accuracy. We calculated the PSNR for video frames contaminated with AWGN and compared the values with those estimated using the watermarking-DWT based approach. The calculated and estimated quality measures of the video frames are highly correlated, suggesting that this method can provide a good quality measure for video frames transmitted over an AWGN channel without the need for the original video.
Keywords: AWGN, DWT, PSNR, watermarking, video quality.
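As a point of reference, the full-reference PSNR that the blind watermark-based estimate is calibrated against is computed as below; this is only the classical metric, not the paper's watermark embedding or extraction scheme.

```python
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray,
         peak: float = 255.0) -> float:
    """Classical PSNR in dB between a reference and a distorted frame."""
    diff = reference.astype(np.float64) - distorted.astype(np.float64)
    mse = np.mean(diff ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Simulate AWGN contamination of a frame and measure PSNR directly; the
# paper's contribution is estimating this value blindly from the
# degradation of an embedded DWT-domain watermark instead.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(480, 640)).astype(np.float64)
noisy = frame + rng.normal(0.0, 10.0, size=frame.shape)  # AWGN, sigma = 10
print(f"PSNR = {psnr(frame, noisy):.2f} dB")
```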
8300 A Communication Signal Recognition Algorithm Based on Holder Coefficient Characteristics
Authors: Hui Zhang, Ye Tian, Fang Ye, Ziming Guo
Abstract:
Communication signal modulation recognition is one of the key technologies in the field of modern information warfare. At present, automatic modulation recognition methods fall into two major categories: maximum likelihood hypothesis testing based on decision theory, and statistical pattern recognition based on feature extraction. The statistical pattern recognition approach, which comprises feature extraction and classifier design, is now the most commonly used. As the electromagnetic environment of communications grows increasingly complex, how to effectively extract the features of various signals at low signal-to-noise ratio (SNR) is a hot topic for scholars in various countries. To solve this problem, this paper proposes a feature extraction algorithm for communication signals based on an improved Holder cloud feature, and an extreme learning machine (ELM) is used to classify the extracted features, addressing the real-time requirements of modern warfare. The algorithm extracts the digital features of the improved cloud model without deterministic information in a low-SNR environment and uses the improved cloud model to obtain more stable Holder cloud features, improving the performance of the algorithm. The algorithm addresses the difficulty that a simple feature extraction algorithm based on the Holder coefficient has in recognizing signals at low SNR, and it achieves better recognition accuracy. Simulation results show that the approach retains a good classification result at low SNR: even at an SNR of -15 dB, the recognition accuracy still reaches 76%.
Keywords: Communication signal, feature extraction, Holder coefficient, improved cloud model.
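The baseline Holder coefficient feature that the paper improves upon can be sketched as follows; the choice of rectangular and triangular reference sequences is an assumption typical of this feature family, not taken from the paper.

```python
import numpy as np

def holder_coefficient(f: np.ndarray, g: np.ndarray, p: float = 2.0) -> float:
    """Holder coefficient of two non-negative sequences.

    By Holder's inequality the value lies in [0, 1]; p and its
    conjugate q satisfy 1/p + 1/q = 1.
    """
    q = p / (p - 1.0)
    num = np.sum(f * g)
    den = np.sum(f ** p) ** (1.0 / p) * np.sum(g ** q) ** (1.0 / q)
    return float(num / den)

# Hedged illustration: compare a signal's amplitude spectrum against
# rectangular and triangular reference sequences (assumed shapes).
rng = np.random.default_rng(1)
signal = np.abs(np.fft.rfft(rng.normal(size=1024)))
n = signal.size
rect = np.ones(n)
tri = 1.0 - np.abs(np.linspace(-1.0, 1.0, n))
features = [holder_coefficient(signal, rect), holder_coefficient(signal, tri)]
print(features)  # two-dimensional feature vector fed to the classifier
```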
8299 Improving Flash Flood Forecasting with a Bayesian Probabilistic Approach: A Case Study on the Posina Basin in Italy
Authors: Zviad Ghadua, Biswa Bhattacharya
Abstract:
The Flash Flood Guidance (FFG) provides the rainfall amount of a given duration necessary to cause flooding. The approach is based on the development of rainfall-runoff curves, which help to find the rainfall amount that would cause flooding. An alternative approach, mostly tested on Italian Alpine catchments, is based on determining threshold discharges from past events and on checking whether an oncoming flood will exceed critical discharge thresholds found beforehand. Both approaches suffer from large uncertainties in forecasting flash floods because, due to the simplistic approach followed, the same rainfall amount may or may not cause flooding. This uncertainty raises the question of whether a probabilistic model is preferable to a deterministic one in forecasting flash floods. We propose the use of a Bayesian probabilistic approach to flash flood forecasting. A prior probability of flooding is derived from historical data. Additional information, such as the antecedent moisture condition (AMC) and the rainfall amount over any rainfall threshold, is used to compute the likelihood of observing these conditions given that a flash flood has occurred. Finally, the posterior probability of flooding is computed from the prior probability and the likelihood. The variation of the computed posterior probability with rainfall amount and AMC demonstrates the suitability of the approach for decision making in an uncertain environment. The methodology has been applied to the Posina basin in Italy. From the promising results obtained, we conclude that the Bayesian approach provides more realistic flash flood forecasts than the FFG.
Keywords: Flash flood, Bayesian, flash flood guidance, FFG, forecasting, Posina.
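The posterior computation at the heart of this approach is a single application of Bayes' rule; the sketch below uses invented, illustrative probabilities rather than the Posina calibration.

```python
# Hypothetical, illustrative numbers -- not the Posina calibration.
p_flood = 0.08                 # prior P(flood) from the historical record

# Likelihoods of observing (rainfall above threshold, wet AMC) given
# each outcome, which the study estimates from past events.
p_obs_given_flood = 0.55       # P(observations | flood)
p_obs_given_no_flood = 0.10    # P(observations | no flood)

# Bayes' rule: posterior = prior * likelihood / evidence
evidence = (p_obs_given_flood * p_flood
            + p_obs_given_no_flood * (1.0 - p_flood))
posterior = p_obs_given_flood * p_flood / evidence
print(f"P(flood | observations) = {posterior:.3f}")
```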
8298 Equity Risk Premiums and Risk Free Rates in Modelling and Prediction of Financial Markets
Authors: Mohammad Ghavami, Reza S. Dilmaghani
Abstract:
This paper presents an adaptive framework for modelling financial markets using equity risk premiums, risk-free rates and volatilities. The recorded economic factors are initially used to train four adaptive filters over a limited period of time in the past. Once the systems are trained, the adjusted coefficients are used for modelling and prediction of an important financial market index. Two different approaches, based on the least mean squares (LMS) and recursive least squares (RLS) algorithms, are investigated. A performance analysis of each method in terms of the mean squared error (MSE) is presented and the results are discussed. Computer simulations carried out on recorded data show MSEs of 4% and 3.4% for next-month prediction using the LMS and RLS adaptive algorithms, respectively. For twelve-month prediction, the RLS method shows better trend estimation than the LMS algorithm.
Keywords: Prediction of financial markets, adaptive methods, MSE, LSE.
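A minimal LMS predictor of the kind compared in the paper is sketched below, with synthetic series standing in for the recorded economic factors and market index; the filter order and step size are arbitrary choices, not the paper's settings.

```python
import numpy as np

def lms_predict(x: np.ndarray, d: np.ndarray, order: int = 4,
                mu: float = 0.05) -> np.ndarray:
    """One-step-ahead LMS prediction of d[n] from the last `order`
    samples of x; returns the prediction error sequence."""
    w = np.zeros(order)
    err = np.zeros(len(d))
    for n in range(order, len(d)):
        u = x[n - order:n][::-1]   # most recent samples first
        y = w @ u                  # filter output (prediction)
        err[n] = d[n] - y          # prediction error
        w += mu * err[n] * u       # LMS weight update
    return err

# Synthetic stand-ins: an economic factor driving a market index proxy.
rng = np.random.default_rng(7)
factor = rng.normal(size=500)
index = np.zeros(500)
index[2:] = 0.6 * factor[1:-1] - 0.3 * factor[:-2]
index += rng.normal(scale=0.1, size=500)

e = lms_predict(factor, index)
print(f"normalized MSE: {np.mean(e[100:] ** 2) / np.var(index):.3f}")
```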
8297 Identification, Prediction and Detection of the Process Fault in a Cement Rotary Kiln by Locally Linear Neuro-Fuzzy Technique
Authors: Masoud Sadeghian, Alireza Fatehi
Abstract:
In this paper, we use a nonlinear system identification method to predict and detect process faults in a cement rotary kiln. After selecting proper inputs and outputs, an input-output model is identified for the plant. To capture the various operating points of the kiln, a Locally Linear Neuro-Fuzzy (LLNF) model is used, trained by the LOLIMOT algorithm, an incremental tree-structure algorithm. Using this method, we obtained three distinct models: one for the normal condition of the kiln, with a 15-minute prediction horizon, and two for the faulty situations, with 7-minute prediction horizons. Finally, we detect these faults in validation data. Data collected from the White Saveh Cement Company are used in this study.
Keywords: Cement rotary kiln, fault detection, delay estimation method, locally linear neuro-fuzzy model, LOLIMOT.
8296 Iterative Solutions to Some Linear Matrix Equations
Authors: Jiashang Jiang, Hao Liu, Yongxin Yuan
Abstract:
In this paper, gradient-based iterative algorithms are presented to solve the following four types of linear matrix equations: (a) AXB = F; (b) AXB = F, CXD = G; (c) AXB = F s.t. X = X^T; (d) AXB + CYD = F, where X and Y are unknown matrices and A, B, C, D, F, G are given constant matrices. It is proved that if the equation considered has a solution, then the unique minimum-norm solution can be obtained by choosing a special kind of initial matrix. Numerical results show that the proposed method is reliable and attractive.
Keywords: Matrix equation, iterative algorithm, parameter estimation, minimum norm solution.
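For the simplest case (a), a gradient-based iteration takes the form X_{k+1} = X_k + μ Aᵀ(F − A X_k B)Bᵀ. The sketch below is a numpy rendering of that update under the assumption that the step size is chosen from the spectral norms of A and B, with the zero initial matrix yielding the minimum-norm solution for a consistent system; it illustrates the family of algorithms, not the paper's exact variants.

```python
import numpy as np

def gradient_solve_axb(A, B, F, iters=5000):
    """Gradient-based iteration for AXB = F.

    Update: X <- X + mu * A^T (F - A X B) B^T, with mu below the
    convergence bound 2 / (lmax(A^T A) * lmax(B B^T)). Starting from
    X = 0 yields the minimum-norm solution when one exists.
    """
    mu = 1.0 / (np.linalg.norm(A, 2) ** 2 * np.linalg.norm(B, 2) ** 2)
    X = np.zeros((A.shape[1], B.shape[0]))
    for _ in range(iters):
        R = F - A @ X @ B           # residual
        X = X + mu * A.T @ R @ B.T  # gradient step
    return X

# Consistency check on a random solvable instance.
rng = np.random.default_rng(3)
A = rng.normal(size=(4, 3))
B = rng.normal(size=(5, 4))
F = A @ rng.normal(size=(3, 5)) @ B   # guaranteed consistent
X = gradient_solve_axb(A, B, F)
print(np.linalg.norm(A @ X @ B - F))  # should be near zero
```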
8295 Friction Estimation and Compensation for Steering Angle Control for Highly Automated Driving
Authors: Marcus Walter, Norbert Nitzsche, Dirk Odenthal, Steffen M¨uller
Abstract:
This contribution presents a friction estimator for industrial purposes which identifies Coulomb friction in a steering system. The estimator needs only a few, usually known, steering system parameters. Friction occurs in almost every mechanical system and has a negative influence on high-precision position control. This is demonstrated on a steering angle controller for highly automated driving. In this steering system, the friction induces limit cycles which cause oscillating vehicle movement when the vehicle follows a given reference trajectory. When the friction is compensated with the introduced estimator, the limit cycles can be suppressed. This is demonstrated by measurements in a series-production vehicle.
Keywords: Friction estimation, friction compensation, steering system, lateral vehicle guidance.
8294 Application of Build-up and Wash-off Models for an East-Australian Catchment
Authors: Iqbal Hossain, Monzur Alam Imteaz, Mohammed Iqbal Hossain
Abstract:
Estimation of stormwater pollutants is a prerequisite for the protection and improvement of the aquatic environment and for appropriate management options. Stormwater quality is usually predicted through water quality modelling, but the accuracy of the prediction depends on the proper estimation of model parameters. This paper presents the estimation of model parameters for a catchment water quality model developed for the continuous simulation of stormwater pollutants from a catchment to the catchment outlet. The model is capable of simulating the accumulation and transportation of the stormwater pollutants suspended solids (SS), total nitrogen (TN) and total phosphorus (TP) from a particular catchment. Rainfall and water quality data were collected for the Hotham Creek Catchment (HTCC), Gold Coast, Australia. Runoff calculations from the developed model were compared with the discharges calculated by the widely used hydrological models WBNM and DRAINS. Based on the measured water quality data, the model's water quality parameters were calibrated for the above-mentioned catchment. The calibrated parameters are expected to be helpful for best management practices (BMPs) in the region. Sensitivity analyses of the estimated parameters were performed to assess the impact of the model parameters on the overall model estimates of runoff water quality.
Keywords: Calibration, model parameters, suspended solids, total nitrogen, total phosphorus.
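The classical exponential build-up and wash-off pair underlying models of this kind can be sketched as follows; the rate constants here are hypothetical, not the calibrated Hotham Creek values.

```python
import numpy as np

B_MAX = 120.0   # maximum pollutant build-up on the catchment (kg)
K_B = 0.4       # build-up rate constant (1/day)
K_W = 0.18      # wash-off coefficient (1/mm of rainfall)

def buildup(dry_days: float) -> float:
    """Pollutant mass accumulated over an antecedent dry period,
    approaching B_MAX exponentially."""
    return B_MAX * (1.0 - np.exp(-K_B * dry_days))

def washoff(initial_mass: float, rainfall_mm: np.ndarray) -> np.ndarray:
    """Mass washed off in each time step of a rainfall series,
    depleting the surface store exponentially."""
    mass = initial_mass
    out = np.zeros_like(rainfall_mm)
    for i, r in enumerate(rainfall_mm):
        washed = mass * (1.0 - np.exp(-K_W * r))
        out[i] = washed
        mass -= washed
    return out

storm = np.array([0.0, 2.5, 6.0, 4.0, 1.0])   # rainfall per step (mm)
loads = washoff(buildup(dry_days=7.0), storm)
print(loads, loads.sum())
```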
8293 VFAST TCP: A delay-based enhanced version of FAST TCP
Authors: Salem Belhaj, Moncef Tagina
Abstract:
This paper describes a delay-based end-to-end (e2e) congestion control algorithm, called Very FAST TCP (VFAST), which is an enhanced version of FAST TCP. The main idea behind this enhancement is to smoothly estimate the Round-Trip Time (RTT) using a nonlinear filter, which eliminates throughput and queue oscillation when the RTT fluctuates. An evaluation of the suggested scheme through simulation is presented, comparing the VFAST prototype with FAST in terms of throughput, queue behavior, fairness, stability, RTT and adaptivity to network changes. The simulation results indicate that the suggested protocol offers better performance than FAST TCP in terms of RTT estimation and throughput.
Keywords: FAST TCP, RTT, delay estimation, delay-based congestion control, high speed TCP, large bandwidth-delay product.
8292 A Comparison of the Sum of Squares in Linear and Partial Linear Regression Models
Authors: Dursun Aydın
Abstract:
In this paper, the linear regression model is estimated by the ordinary least squares method, and the partially linear regression model is estimated by the penalized least squares method using a smoothing spline. The differences and similarities between the sums of squares of the linear regression and partial linear regression (semi-parametric regression) models are then investigated. It is shown that the sums of squares in linear regression reduce to the sums of squares in partial linear regression models. Furthermore, we indicate that the various sums of squares in linear regression correspond to different deviance statements in partial linear regression. In addition, the coefficient of determination derived for the linear regression model generalizes readily to the coefficient of determination of the partial linear regression model. To this end, two different applications are made: a simulated and a real data set are considered to support the claims made here. In this way, the study is supported by a simulation and a real data example.
Keywords: Partial linear regression model, linear regression model, residuals, deviance, smoothing spline.
8291 A 3D Approach for Extraction of the Coronary Artery and Quantification of the Stenosis
Authors: Mahdi Mazinani, S. D. Qanadli, Rahil Hosseini, Tim Ellis, Jamshid Dehmeshki
Abstract:
Segmentation and quantification of stenosis are important tasks in assessing coronary artery disease. One of the main challenges is measuring the real diameter of curved vessels. Moreover, uncertainty in the segmentation of different tissues in the narrow vessel is an important issue that affects accuracy. This paper proposes an algorithm to extract coronary arteries and measure the degree of stenosis. A Markovian fuzzy clustering method is applied to model the uncertainty arising from the partial volume effect. The algorithm comprises segmentation, centreline extraction, estimation of the plane orthogonal to the centreline, and measurement of the degree of stenosis. To evaluate accuracy and reproducibility, the approach was applied to a vascular phantom and the results were compared with the real diameter. The results for 10 patient datasets were visually judged by a qualified radiologist. The results reveal the superiority of the proposed method over the conventional thresholding method (CTM) on both datasets.
Keywords: 3D coronary artery tree extraction, segmentation, quantification, fuzzy clustering, Markov random field.
8290 Video Matting based on Background Estimation
Authors: J.-H. Moon, D.-O Kim, R.-H. Park
Abstract:
This paper presents a video matting method which extracts the foreground and alpha matte from a video sequence. The objective of video matting is to find the foreground and composite it with a background different from the one in the original image. By finding the motion vectors (MVs) using a sliced block matching algorithm (SBMA), we can extract moving regions from the video sequence under the assumption that the foreground is moving and the background is stationary. In practice, foreground areas are not moving through all frames in an image sequence, so we accumulate moving regions through the image sequence. The boundaries of moving regions are found by the Canny edge detector, and the foreground region is separated in each frame of the sequence. The remaining regions are defined as background regions. The backgrounds extracted in each frame are combined and reframed as an integrated single background. Based on the estimated background, we compute the frame difference (FD) of each frame. Regions with an FD larger than a threshold are defined as foreground regions, the boundaries of foreground regions are defined as unknown regions, and the rest are defined as background. The segmentation information that classifies an image into foreground, background, and unknown regions is called a trimap. The matting process can extract an alpha matte in the unknown region using pixel information from the foreground and background regions, and estimate the values of foreground and background pixels in unknown regions. The proposed video matting approach is adaptive and convenient for extracting a foreground automatically and compositing it with a background different from the original one.
Keywords: Background estimation, object segmentation, block matching algorithm, video matting.
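The frame-difference classification into trimap regions can be sketched as follows, assuming a single grayscale frame, an already-estimated background, and hypothetical threshold values.

```python
import numpy as np

def trimap_from_background(frame: np.ndarray, background: np.ndarray,
                           fg_thresh: float = 30.0,
                           band: float = 10.0) -> np.ndarray:
    """Classify pixels by frame difference against an estimated
    background: 2 = foreground, 1 = unknown (boundary band),
    0 = background."""
    fd = np.abs(frame.astype(np.float64) - background.astype(np.float64))
    trimap = np.zeros(frame.shape, dtype=np.uint8)
    trimap[fd > fg_thresh - band] = 1   # uncertain band first
    trimap[fd > fg_thresh] = 2          # then confident foreground
    return trimap

# Toy frames standing in for an integrated background and a current frame.
rng = np.random.default_rng(5)
bg = rng.integers(0, 256, size=(120, 160)).astype(np.float64)
fr = bg.copy()
fr[40:80, 60:100] += 60.0               # a "moving object" region
print(np.bincount(trimap_from_background(fr, bg).ravel(), minlength=3))
```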
8289 Comparison of Two-Phase Critical Flow Models for Estimation of Leak Flow Rate through Cracks
Authors: Tadashi Watanabe, Jinya Katsuyama, Akihiro Mano
Abstract:
The estimation of leak flow rates through narrow cracks in structures is important for nuclear reactor safety, since the leak flow can be detected before a loss-of-coolant accident occurs. The two-phase critical leak flow rates are calculated using a system analysis code, and two representative non-homogeneous critical flow models, the Henry-Fauske model and the Ransom-Trapp model, are compared. The pressure decrease and vapor generation in the crack, and the leak flow rates, are found to be larger for the Henry-Fauske model. It is shown that the leak flow rates are not affected by the structural temperature, but are strongly affected by the roughness of the crack surface.
Keywords: Crack, critical flow, leak, roughness.
8288 The Cost and Benefit on the Investment in Safety and Health of the Enterprises in Thailand
Authors: Charawee Butbumrung
Abstract:
The purpose of this study is to evaluate the monetary worthiness of investment in safety and the usefulness of risk estimation as a tool employed by the production section of an electronics factory. The study uses the case of accidents occurring in production areas. Data were collected from interviews with six production safety coordinators and from the relevant sections. The study presents the ratio of benefits to operating costs for the investment. The results show that investment in safety measures is worthwhile. In addition, organizations must be able to analyze the causes of accidents to realize the benefits of investing in protective work processes. They also need to quickly provide manuals for staff to learn how to protect themselves from accidents and how to use all of the safety equipment.
Keywords: Cost and benefit, enterprises in Thailand, investment in safety and health, risk estimation.
8287 Application of Data Mining Tools to Predict Completion Time of a Project
Authors: Seyed Hossein Iranmanesh, Zahra Mokhtari
Abstract:
Estimating the time and cost of work completion in a project, and following them up during execution, contributes to the success or failure of a project and is very important for the project management team. Delivering on time and within budget requires managing and controlling projects well. To deal with the complex task of controlling and modifying the baseline project schedule during execution, earned value management systems have been set up and are widely used to measure and communicate the real physical progress of a project; however, they often fail to predict the total duration of the project. In this paper, data mining techniques are used to predict the total project duration in terms of the time estimate at completion, EAC(t). For this purpose, we used a project with 90 activities, updated day by day. Regular indexes from the literature and the Earned Duration Method were then applied to calculate the time estimate at completion; these were used as input data for prediction, and the major parameters among them were identified using Clem software. With data mining, the parameters affecting EAC(t) and the relationships between them can be extracted, which is very useful for managing a project with minimal delay risk. As we state, this can be a simple, safe and applicable method for predicting the completion time of a project during execution.
Keywords: Data mining techniques, Earned Duration Method, earned value, estimate at completion.
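As one concrete way to compute a time estimate at completion, the earned-schedule formulation EAC(t) = PD / SPI(t) can be sketched as below; this standard formulation is used for illustration only, it is not necessarily the paper's exact index set, and all the project numbers are hypothetical.

```python
import numpy as np

def earned_schedule(pv: np.ndarray, ev_now: float) -> float:
    """Earned schedule ES: the time at which the planned value curve
    equalled the current earned value, with linear interpolation."""
    c = int(np.searchsorted(pv, ev_now, side="right")) - 1
    if c >= len(pv) - 1:
        return float(len(pv) - 1)
    return c + (ev_now - pv[c]) / (pv[c + 1] - pv[c])

# Hypothetical cumulative planned value per day and current status.
pv = np.array([0.0, 10, 22, 36, 52, 70, 90, 100])  # planned % complete
planned_duration = len(pv) - 1                     # 7 days
actual_time = 5.0                                  # days elapsed
ev_now = 52.0                                      # % actually earned

es = earned_schedule(pv, ev_now)
spi_t = es / actual_time              # time-based schedule performance
eac_t = planned_duration / spi_t      # time estimate at completion
print(f"ES = {es:.2f} d, SPI(t) = {spi_t:.2f}, EAC(t) = {eac_t:.2f} d")
```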
8286 Estimation of Production Function in Fishery on the Coasts of Caspian Sea
Authors: Komeil Jahanifar, Zahra Abedi, Yaghob Zeraatkish
Abstract:
This research was conducted, for the first time, on the southeastern coasts of the Caspian Sea in order to evaluate the performance of osteichthyes (bony fish) cooperatives through a production (catch) function. Using an indirect valuation method, the contributory factors in the catch were identified and inserted into the function as independent variables. The research examined the performance of 25 osteichthyes catching cooperatives involved in fishing in the Miankale wildlife refuge region during the 2009 utilization year. The contributory factors in the catch were divided into economic, ecological and biological factors. In the function, the catch rate of the cooperative was inserted as the dependent variable, and fourteen partial variables, grouped into nine general variables, served as independent variables. After estimating the function, seven variables were found significant at the 99 percent confidence level. The estimation results indicated that human resources (the number of fishermen) had the greatest positive effect on the catch rate, with an influence coefficient of 1.7, while weather conditions had the greatest negative effect, with an influence coefficient of -2.07. Moreover, factors such as members' shares, experience and fisherman training, and fishing effort played main roles in the catch rate, with influence coefficients of 0.81, 0.5 and 0.21, respectively.
Keywords: Production function, coefficient, variable, osteichthyes, Caspian Sea.
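Influence coefficients of this kind are typically the elasticities of a log-linear (Cobb-Douglas style) production function estimated by ordinary least squares. The sketch below illustrates this on synthetic data generated with the reported elasticities of 1.7 (fishermen) and 0.21 (effort); it is not the study's actual estimation or data.

```python
import numpy as np

# Synthetic stand-in data for 25 cooperatives.
rng = np.random.default_rng(11)
n = 25
fishermen = rng.uniform(10, 60, n)     # human resources
effort = rng.uniform(50, 300, n)       # fishing effort (e.g. days at sea)
catch = 2.0 * fishermen**1.7 * effort**0.21 * rng.lognormal(0, 0.1, n)

# OLS on logs: ln(catch) = b0 + b1 ln(fishermen) + b2 ln(effort),
# so b1 and b2 are the influence coefficients (elasticities).
X = np.column_stack([np.ones(n), np.log(fishermen), np.log(effort)])
y = np.log(catch)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)   # should recover roughly [ln 2, 1.7, 0.21]
```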
8285 Applying Sequential Pattern Mining to Generate Block for Scheduling Problems
Authors: Meng-Hui Chen, Chen-Yu Kao, Chia-Yu Hsu, Pei-Chann Chang
Abstract:
The main idea of this paper is to use sequential pattern mining to find information that is helpful for finding high-performance solutions. Combined, this information is defined as blocks. Using the blocks to generate artificial chromosomes (ACs) can improve the structure of solutions. Estimation of Distribution Algorithms (EDAs) are adapted to solve the combinatorial problems. Although many of these approaches are advantageous for this application, only some of them are used to enhance the efficiency of the application. Generating ACs from the mined patterns within EDAs can increase diversity. According to the experimental results, the proposed algorithm performs better in solving permutation flow-shop problems.
Keywords: Combinatorial problems, Sequential Pattern Mining, Estimation of Distribution Algorithms, Artificial Chromosomes.
8284 Inverse Dynamic Active Ground Motion Acceleration Inputs Estimation of the Retaining Structure
Authors: Ming-Hui Lee, Iau-Teh Wang
Abstract:
An innovative fuzzy estimator is used to estimate the ground motion acceleration of a retaining structure in this study. The Kalman filter without the input term and the fuzzy weighting recursive least squares estimator are the two main components of this method. The innovation vector produced by the Kalman filter is applied to the fuzzy weighting recursive least squares estimator to estimate the acceleration input over time. The excellent performance of this estimator is demonstrated by comparing different weighting functions, distinct levels of the measurement noise covariance, and different initial process noise covariances. The applicability and precision of the proposed method are verified by comparing the actual values with those obtained by numerical simulation.
Keywords: Earthquake, fuzzy estimator, Kalman filter, recursive least squares estimator.
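The recursive least squares core of such an estimator is sketched below with a fixed exponential forgetting factor; the paper's contribution replaces this fixed weighting with a fuzzy one, which is not reproduced here, and the toy data are hypothetical.

```python
import numpy as np

def rls(phi_seq, d_seq, n_params, lam=0.98, delta=1e3):
    """Exponentially weighted recursive least squares.

    Standard RLS core only; the paper substitutes a fuzzy weighting
    for the fixed forgetting factor lam.
    """
    theta = np.zeros(n_params)
    P = delta * np.eye(n_params)
    for phi, d in zip(phi_seq, d_seq):
        k = P @ phi / (lam + phi @ P @ phi)    # gain vector
        theta = theta + k * (d - phi @ theta)  # parameter update
        P = (P - np.outer(k, phi @ P)) / lam   # covariance update
    return theta

# Toy identification: d = [1.5, -0.7] . phi + noise
rng = np.random.default_rng(2)
phi_seq = rng.normal(size=(400, 2))
d_seq = phi_seq @ np.array([1.5, -0.7]) + rng.normal(scale=0.05, size=400)
print(rls(phi_seq, d_seq, n_params=2))   # approximately [1.5, -0.7]
```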
8283 Instant Location Detection of Objects Moving at High Speed in C-OTDR Monitoring Systems
Authors: Andrey V. Timofeev
Abstract:
A practical, efficient approach is suggested for estimating the instantaneous bounds of high-speed objects in C-OTDR monitoring systems. For super-dynamic objects (trains, cars), it is difficult to obtain an adequate estimate of the instantaneous object localization because of estimation lag. In other words, reliably estimating the coordinates of a monitored object requires taking some time to collect observations with the C-OTDR system, and only once the required sample volume has been collected can the final decision be issued. But this is contrary to the requirements of many real applications; for example, in rail traffic management systems we need the localization data of dynamic objects in real time. The way to solve this problem is to use a set of statistically independent parameters of the C-OTDR signals to obtain the most reliable solution in real time. Parameters of this type can be called 'signaling parameters' (SPs). Several SPs carry information about the instantaneous localization of dynamic objects in each C-OTDR channel. The problem is that some of these parameters are very sensitive to the dynamics of seismoacoustic emission sources but are unstable, while, as a rule, very stable SPs are insensitive. This report describes a method for co-processing SPs which is designed to obtain the most effective localization estimates of dynamic objects within the C-OTDR monitoring system framework.
Keywords: C-OTDR-system, co-processing of signaling parameters, high-speed objects localization, multichannel monitoring systems.
8282 Non-Parametric, Unconditional Quantile Estimation of Efficiency in Microfinance Institutions
Authors: Komlan Sedzro
Abstract:
We apply the non-parametric, unconditional, hyperbolic order-α quantile estimator to appraise the relative efficiency of microfinance institutions (MFIs) in Africa in terms of outreach. Our purpose is to verify whether these institutions, which must constantly strike a compromise between their social role and financial sustainability, are operationally efficient. Using data on African MFIs extracted from the Microfinance Information eXchange (MIX) database and covering the period 2004 to 2006, we find that the more efficient MFIs are also the most profitable. This result is in line with the view that social performance is not in contradiction with the pursuit of excellent financial performance. Our results also show that MFIs that are large in terms of assets, and those charging the highest fees, are not necessarily the most efficient.
Keywords: Data envelopment analysis, microfinance institutions, quantile estimation of efficiency, social and financial performance.
8281 An Identification Method of Geological Boundary Using Elastic Waves
Authors: Masamitsu Chikaraishi, Mutsuto Kawahara
Abstract:
This paper focuses on a technique for identifying the geological boundary of the ground strata in front of a tunnel excavation site using the first-order adjoint method based on optimal control theory. The geological boundary is defined as the boundary between layers of different elastic moduli. In tunnel excavation, it is important to assess the ground situation ahead of the cutting face beforehand: excavating into weak strata or fault fracture zones may extend the construction work and cause human suffering. A theory for numerically determining the geological boundary of the ground is investigated, employing excavation blasts and their vibration waves as observation references. According to optimal control theory, the performance function, described by the square sum of the residuals between computed and observed velocities, is minimized, and the boundary layer is determined by this minimization. The elastic analysis governed by the Navier equation is carried out, treating the ground as an elastic body with linear viscous damping. To identify the boundary, the gradient of the performance function with respect to the geological boundary is calculated using the adjoint equation, and the weighted gradient method is applied as the minimization algorithm. To solve the governing and adjoint equations, the Galerkin finite element method and the average acceleration method are employed for the spatial and temporal discretizations, respectively. Based on the method presented in this paper, the boundaries of three different strata can be identified. For the numerical studies, the Suemune tunnel excavation site is employed. First, the blasting force is identified in order to improve the accuracy of the analysis; the geological boundary is then identified. With this identification procedure, numerical analysis results that closely correspond with the observation data were obtained.
Keywords: Parameter identification, finite element method, average acceleration method, first-order adjoint equation method, weighted gradient method, geological boundary, Navier equation, optimal control theory.
8280 Remarks Regarding Queuing Model and Packet Loss Probability for the Traffic with Self-Similar Characteristics
Authors: Mihails Kulikovs, Ernests Petersons
Abstract:
Network management techniques have long been of interest to the networking research community. The queue size plays a critical role in network performance. An adequate queue size maintains Quality of Service (QoS) requirements within limited network capacity for as many users as possible. Appropriate estimation of the queuing model parameters is crucial both for initial size estimation and during resource allocation. An accurate resource allocation model for the management system increases network utilization. The present paper demonstrates the results of empirical observations of memory allocation for packet-based services.
Keywords: Queuing system, packet loss probability, Measurement-Based Admission Control (MBAC), performance evaluation, Quality of Service (QoS).
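As a baseline for the queue-sizing question raised above, the classical finite-buffer Markovian loss formula can be sketched as below; note that self-similar traffic, the subject of the paper, typically suffers far higher loss than this M/M/1/K baseline predicts, which is precisely why the measurement-based approach matters.

```python
def mm1k_loss_probability(rho: float, k: int) -> float:
    """Blocking (packet loss) probability of an M/M/1/K queue.

    P_loss = (1 - rho) * rho**K / (1 - rho**(K + 1)) for rho != 1,
    and 1 / (K + 1) when rho == 1.
    """
    if abs(rho - 1.0) < 1e-12:
        return 1.0 / (k + 1)
    return (1.0 - rho) * rho**k / (1.0 - rho**(k + 1))

# Buffer sizing: smallest K meeting a 1e-6 loss target at 80% load.
target, rho = 1e-6, 0.8
k = 1
while mm1k_loss_probability(rho, k) > target:
    k += 1
print(k, mm1k_loss_probability(rho, k))
```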
8279 ANFIS Modeling of the Surface Roughness in Grinding Process
Authors: H. Baseri, G. Alinejad
Abstract:
The objective of this study is to design an adaptive neuro-fuzzy inference system (ANFIS) for the estimation of surface roughness in the grinding process. The data used were generated from experimental observations when the wheel was dressed using a rotary diamond disc dresser. The input parameters of the model are the dressing speed ratio, dressing depth and dresser cross-feed rate, and the output parameter is surface roughness. In the experimental procedure, the grinding conditions are held constant and only the dressing conditions are varied. A comparison of the predicted values and the experimental data indicates that the ANFIS model performs better than the back-propagation neural network (BPNN) model presented by the authors in previous work for the estimation of surface roughness.
Keywords: Grinding, ANFIS, neural network, disc dressing.
8278 Wind Energy Resources Assessment and Micrositing in Different Areas of Libya: The Case Study in Darnah
Authors: F. Ahwide, Y. Bouker, K. Hatem
Abstract:
This paper presents a long-term wind data analysis in terms of annual and diurnal variations in different areas of Libya. Wind speed and direction data, recorded every ten minutes over a period of at least two years, are used in the analysis. The 'WindPRO' software and an Excel workbook were used for the wind statistics and energy calculations. At Darnah, the average speeds at 10 m, 20 m and 40 m are 6.57 m/s, 7.18 m/s and 8.09 m/s, respectively. The highest wind speeds are observed at SSW, followed by the S, WNW and NW sectors, while the lowest wind speeds are observed between the N and E sectors. The most frequent wind directions are NW and NNW; hence, wind turbines can be installed facing these directions. The most powerful sector is NW (31.3% of total expected wind energy), followed by SSW (17.9%), NNW (11.5%) and WNW (8.2%).
In the Excel workbook, the annual energy yield at the positions of the Derna, Al-Maqrun, Tarhuna and Al-Asaaba meteorological masts was estimated, considering a generic 1.65 MW wind turbine (mtORRES TWT 82-1.65 MW) at the mast position. Three other turbines were also tested, with a reduction of 18% in net AEP. At 80 m, the estimated energy yield for Derna, Al-Maqrun, Tarhuna and Al-Asaaba is 6.78 GWh (3390 equivalent hours), 5.80 GWh (2900 equivalent hours), 4.91 GWh (2454 equivalent hours) and 5.08 GWh (2541 equivalent hours), respectively. These appear to be fair values in the context of a possible wind energy project in these areas, taking 2400 equivalent hours as an approximate limit for considering a wind farm economically profitable. Furthermore, the annual energy yield at the positions of the Misalatha, Azizyah and Goterria meteorological masts was estimated, considering a generic 2 MW wind turbine. We found that at 80 m the estimated energy yield is 3.12 GWh (1557 equivalent hours), 4.47 GWh (2235 equivalent hours) and 4.07 GWh (2033 equivalent hours), respectively.
These seem very poor values in the context of a possible wind energy project in these areas, again taking 2400 equivalent hours as the approximate limit for a wind farm to be economically profitable. In any case, more data and a detailed wind farm study would be necessary to draw conclusions.
Keywords: Wind turbines, wind data, energy yield, micrositing.
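The "equivalent hours" figures quoted above are simply the annual energy yield divided by the turbine's rated power. The sketch below reproduces a few of them, assuming a 2 MW rating (the rating under which the quoted Derna figure of 3390 h for 6.78 GWh is reproduced).

```python
# Equivalent (full-load) hours = annual energy yield / rated power.
RATED_POWER_MW = 2.0             # assumed rating, see lead-in note
HOURS_PER_YEAR = 8760.0
ECONOMIC_LIMIT_H = 2400.0        # approximate profitability threshold

def equivalent_hours(aep_gwh: float, rated_mw: float) -> float:
    """Full-load hours implied by an annual energy production."""
    return aep_gwh * 1000.0 / rated_mw   # GWh -> MWh, divided by MW

for site, aep in [("Derna", 6.78), ("Misalatha", 3.12), ("Azizyah", 4.47)]:
    h = equivalent_hours(aep, RATED_POWER_MW)
    cf = h / HOURS_PER_YEAR              # the corresponding capacity factor
    verdict = "viable" if h >= ECONOMIC_LIMIT_H else "marginal"
    print(f"{site}: {h:.0f} h, capacity factor {cf:.2f} ({verdict})")
```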
8277 Artificial Neural Networks Application to Improve Shunt Active Power Filter
Authors: Rachid.Dehini, Abdesselam.Bassou, Brahim.Ferdi
Abstract:
Active Power Filters (APFs) are today the most widely used systems for eliminating harmonics, compensating power factor and correcting unbalance problems in industrial power plants. We propose to improve the performance of conventional APFs by using artificial neural networks (ANNs) for harmonics estimation. This new method combines the strategy for extracting the three-phase reference currents for active power filters with a DC link voltage control method. The learning capability of ANNs is used to adaptively choose the power system parameters, both to compute the reference currents and to recharge the capacitor to the value requested by the VDC voltage, in order to ensure a suitable transit of power to supply the inverter. To investigate the performance of this identification method, the study was carried out by simulation with the MATLAB Simulink Power System Toolbox. The simulation results for the new shunt active power filter (SAPF) identification technique, compared with similar methods, are found quite satisfactory, assuring good filtering characteristics and high system stability.
Keywords: Artificial Neural Networks (ANN), p-q theory, SAPF, harmonics, total harmonic distortion.
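The reference-current strategies mentioned above usually start from the instantaneous p-q theory quantities; a sketch of that computation is given below, under one common sign convention and with a toy three-phase signal, rather than the paper's ANN-based estimator.

```python
import numpy as np

# Power-invariant Clarke transform from phases a, b, c to alpha-beta.
CLARKE = np.sqrt(2.0 / 3.0) * np.array([
    [1.0, -0.5, -0.5],
    [0.0, np.sqrt(3.0) / 2.0, -np.sqrt(3.0) / 2.0],
])

def instantaneous_pq(v_abc: np.ndarray, i_abc: np.ndarray):
    """Instantaneous real power p and imaginary power q from
    three-phase voltages and currents (rows: samples, cols: a, b, c)."""
    v = v_abc @ CLARKE.T   # alpha-beta voltages
    i = i_abc @ CLARKE.T   # alpha-beta currents
    p = v[:, 0] * i[:, 0] + v[:, 1] * i[:, 1]
    q = v[:, 1] * i[:, 0] - v[:, 0] * i[:, 1]
    return p, q

t = np.linspace(0.0, 0.04, 800)   # two 50 Hz cycles
w = 2.0 * np.pi * 50.0
v_abc = np.stack([np.sin(w * t + ph)
                  for ph in (0, -2 * np.pi / 3, 2 * np.pi / 3)], axis=1)
# Load current: lagging fundamental plus a 5th-harmonic component.
i_abc = np.stack([np.sin(w * t + ph - 0.3) + 0.2 * np.sin(5 * (w * t + ph))
                  for ph in (0, -2 * np.pi / 3, 2 * np.pi / 3)], axis=1)
p, q = instantaneous_pq(v_abc, i_abc)
print(p.mean(), q.mean())  # DC parts; the oscillating parts mark harmonics
```

In a SAPF control loop, the oscillating components of p and q would be isolated and inverted through the Clarke transform to form the compensating reference currents.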
8276 Forecasting Models for Steel Demand Uncertainty Using Bayesian Methods
Authors: Watcharin Sangma, Onsiri Chanmuang, Pitsanu Tongkhow
Abstract:
A forecasting model for steel demand uncertainty in Thailand is proposed. It consists of trend, autocorrelation, and outlier components in a hierarchical Bayesian framework. The proposed model uses a cumulative Weibull distribution function, latent first-order autocorrelation, and binary selection to account for trend, time-varying autocorrelation, and outliers, respectively. Gibbs sampling Markov chain Monte Carlo (MCMC) is used for parameter estimation. The proposed model is applied to steel demand index data in Thailand. The root mean square error (RMSE), mean absolute percentage error (MAPE), and mean absolute error (MAE) criteria are used for model comparison. The study reveals that the proposed model is more appropriate than the exponential smoothing method.
Keywords: Forecasting model, Steel demand uncertainty, Hierarchical Bayesian framework, Exponential smoothing method.