Search results for: uncertainty estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1369

139 Towards an Enhanced Stochastic Simulation Model for Risk Analysis in Highway Construction

Authors: Anshu Manik, William G. Buttlar, Kasthurirangan Gopalakrishnan

Abstract:

Over the years, there has been a growing trend towards quality-based specifications in highway construction. In many Quality Control/Quality Assurance (QC/QA) specifications, the contractor is primarily responsible for quality control of the process, whereas the highway agency is responsible for acceptance testing of the product. A cooperative investigation was conducted in Illinois over several years to develop a prototype End-Result Specification (ERS) for asphalt pavement construction. The final characteristics of the product are stipulated in the ERS, and the contractor is given considerable freedom in achieving those characteristics. The risk for the contractor or agency depends on how the acceptance limits and processes are specified. Stochastic simulation models are very useful in estimating and analyzing payment risk in ERS systems, and they form an integral part of Illinois's prototype ERS system. This paper describes the development of an innovative methodology to estimate the variability components in in-situ density, air voids, and asphalt content data from ERS projects. The information gained from this is crucial in simulating ERS projects for the estimation and analysis of payment risks associated with asphalt pavement construction. However, these methods require at least two parties to conduct tests on all the split samples obtained according to the sampling scheme prescribed in the present ERS implemented in Illinois.
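
As a rough illustration of such a stochastic payment-risk simulation, the sketch below draws lot-level quality data from assumed process and testing variance components and applies a hypothetical pay-factor rule; none of the numbers come from the Illinois ERS.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical variability components for air voids (%): process mean,
# between-lot and testing standard deviations (illustrative values only).
target, sigma_process, sigma_test = 4.0, 0.6, 0.3
n_lots, n_tests_per_lot = 10_000, 4

# Simulate lot means and noisy acceptance tests.
lot_true = rng.normal(target, sigma_process, n_lots)
tests = rng.normal(lot_true[:, None], sigma_test, (n_lots, n_tests_per_lot))
lot_mean = tests.mean(axis=1)

# Hypothetical pay-factor rule: full pay within +/-1.0 of target,
# reduced pay out to +/-2.0, rejection beyond that.
dev = np.abs(lot_mean - target)
pay = np.where(dev <= 1.0, 1.00, np.where(dev <= 2.0, 0.85, 0.0))

print(f"mean pay factor: {pay.mean():.3f}")
print(f"P(reduced pay) : {(pay < 1.0).mean():.3f}")
```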

Keywords: Asphalt Pavement, Risk Analysis, Stochastic Simulation, QC/QA.

138 Jamun Juice Extraction Using Commercial Enzymes and Optimization of the Treatment with the Help of Physicochemical, Nutritional and Sensory Properties

Authors: Payel Ghosh, Rama Chandra Pradhan, Sabyasachi Mishra

Abstract:

Jamun (Syzygium cuminii L.) is an important indigenous minor fruit with high medicinal value. Jamun cultivation is unorganized, and a huge quantity of the fruit is lost every year. The perishable nature of the fruit makes its postharvest management further difficult. Due to the strong cell wall structure of pectin-protein bonds and hard seeds, extraction of juice becomes difficult. Enzymatic treatment has been used commercially to improve juice quality and yield. The objective of the study was to optimize the best treatment method for juice extraction. Enzymes (pectinase and tannase) from different strains were used, and for each enzyme the best result was obtained using response surface methodology. Optimization was done on the basis of physicochemical properties, nutritional properties, sensory quality, and cost estimation. According to the quality aspects, cost analysis, and sensory evaluation, the optimal enzymatic treatment was obtained with pectinase from an Aspergillus aculeatus strain. The optimum condition for the treatment was 44 °C for 80 minutes at a concentration of 0.05% (w/w). Under these conditions, a yield of 75% was obtained, with turbidity of 32.21 NTU, clarity of 74.39 %T, polyphenol content of 115.31 mg GAE/g, and protein content of 102.43 mg/g, with a significant difference in overall acceptability.

Keywords: Jamun, enzymatic treatment, physicochemical property, sensory analysis, optimization.

137 Gas Detection via Machine Learning

Authors: Walaa Khalaf, Calogero Pace, Manlio Gaudioso

Abstract:

We present an Electronic Nose (ENose) aimed at identifying the presence of one of two gases, possibly detecting the presence of a mixture of the two. Estimation of the concentrations of the components is also performed for a volatile organic compound (VOC) constituted by methanol and acetone, for the ranges 40-400 and 22-220 ppm (parts per million), respectively. Our system contains 8 sensors, 5 of them being gas sensors (of the TGS class from FIGARO USA, INC., whose sensing element is a tin dioxide (SnO2) semiconductor), the remaining being a temperature sensor (LM35 from National Semiconductor Corporation), a humidity sensor (HIH-3610 from Honeywell), and a pressure sensor (XFAM from Fujikura Ltd.). Our integrated hardware-software system uses machine learning principles and least squares regression to first identify a new gas sample, or a mixture, and then to estimate the concentrations. In particular, we adopt a training model using the Support Vector Machine (SVM) approach with a linear kernel to teach the system how to discriminate among different gases, and then apply another training model, using least squares regression, to predict the concentrations. The experimental results demonstrate that the proposed multiclassification and regression scheme is effective in the identification of the tested VOCs of methanol and acetone, with 96.61% correctness. The concentration prediction is obtained with 0.979 and 0.964 correlation coefficients for the predicted versus real concentrations of methanol and acetone, respectively.
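
A minimal sketch of this two-stage identify-then-quantify scheme, assuming synthetic five-sensor responses in place of the TGS array data (scikit-learn's SVC and LinearRegression stand in for the paper's SVM and least squares models):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Synthetic 5-sensor responses for two gases; response grows with concentration.
def sample(gas, conc):
    gain = np.array([1.0, 0.8, 0.5, 0.3, 0.2]) if gas == 0 else \
           np.array([0.2, 0.4, 0.6, 0.9, 1.1])
    return gain * conc + rng.normal(0, 2.0, 5)

conc = rng.uniform(40, 400, 200)
gas = rng.integers(0, 2, 200)
X = np.array([sample(g, c) for g, c in zip(gas, conc)])

# Stage 1: linear-kernel SVM discriminates the gases.
clf = SVC(kernel="linear").fit(X, gas)

# Stage 2: per-gas least squares regression predicts the concentration.
regs = {g: LinearRegression().fit(X[gas == g], conc[gas == g]) for g in (0, 1)}

x_new = sample(1, 150).reshape(1, -1)
g_hat = int(clf.predict(x_new)[0])
print(g_hat, float(regs[g_hat].predict(x_new)[0]))
```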

Keywords: Electronic nose, Least squares regression, Mixture of gases, Support Vector Machine.

136 Studying the Effects of Economic and Financial Development as well as Institutional Quality on Environmental Destruction in the Upper-Middle Income Countries

Authors: Morteza Raei Dehaghi, Seyed Mohammad Mirhashemi

Abstract:

The current study explored the effect of economic development, financial development, and institutional quality on environmental destruction in upper-middle income countries during the period 1999-2011. The dependent variable is the logarithm of carbon dioxide emissions, which can be considered an index of the destruction or quality of the environment given its effects on the environment. Financial development and institutional development variables as well as some control variables were considered. In order to study cross-sectional correlation among the countries under study, the Pesaran and Frees tests were used. Since the results of both tests show cross-sectional correlation among the countries under study, the seemingly unrelated regression method was utilized for model estimation. The results disclosed that the environmental Kuznets curve hypothesis is confirmed in upper-middle income countries and also that financial development and institutional quality have a significant effect on environmental quality. The results of this study can be considered by policy makers in countries of different income groups seeking growth accompanied by improved environmental quality.
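
The environmental Kuznets curve test amounts to regressing log emissions on log income and its square and checking the signs of the coefficients. A minimal sketch on synthetic data, using pooled OLS as a stand-in for the paper's seemingly unrelated regression:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Synthetic panel: log CO2 emissions vs. log GDP per capita with an
# inverted-U (EKC) shape plus financial-development and institutions terms.
n = 300
log_gdp = rng.uniform(8, 11, n)
fin_dev = rng.uniform(0, 1, n)
inst_q = rng.uniform(-1, 1, n)
log_co2 = 2 + 1.5 * log_gdp - 0.08 * log_gdp**2 \
          + 0.3 * fin_dev - 0.2 * inst_q + rng.normal(0, 0.1, n)

X = sm.add_constant(np.column_stack([log_gdp, log_gdp**2, fin_dev, inst_q]))
res = sm.OLS(log_co2, X).fit()

# EKC holds if the linear term is positive and the squared term negative.
b1, b2 = res.params[1], res.params[2]
print(res.params.round(3), "turning point at log GDP =", round(-b1 / (2 * b2), 2))
```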

Keywords: Economic Development, Environmental Destruction, Financial Development, Institutional Development, Seemingly Unrelated Regression.

135 Active Segment Selection Method in EEG Classification Using Fractal Features

Authors: Samira Vafaye Eslahi

Abstract:

A BCI (Brain Computer Interface) is a communication machine that translates brain messages into computer commands. With the help of computer programs, these machines can recognize the tasks that are imagined. Feature extraction is an important stage of the process in EEG classification that can affect the accuracy and the computation time of processing the signals. In this study we process the signal in three steps: active segment selection, fractal feature extraction, and classification. One of the great challenges in BCI applications is to improve classification accuracy and computation time together. In this paper, we have used Student's two-sample t-statistics on continuous wavelet transforms for active segment selection to reduce the computation time. In the next stage, the features are extracted using well-known fractal dimension estimators of the signal, namely the Katz and Higuchi methods. In the classification stage we used the ANFIS (Adaptive Neuro-Fuzzy Inference System) classifier, FKNN (Fuzzy K-Nearest Neighbors), LDA (Linear Discriminant Analysis), and SVM (Support Vector Machines). We found that the active segment selection method reduces the computation time, and that fractal dimension features with ANFIS analysis on selected active segments are the best among the investigated methods in EEG classification.
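
For concreteness, one common formulation of the Katz fractal dimension, one of the two fractal features used above, is sketched below on synthetic signals (the exact variant used in the paper may differ):

```python
import numpy as np

def katz_fd(x):
    """Katz fractal dimension of a 1-D signal (waveform treated as a planar curve)."""
    x = np.asarray(x, dtype=float)
    n = len(x) - 1                           # number of steps
    dists = np.sqrt(1 + np.diff(x) ** 2)     # unit spacing on the time axis
    L = dists.sum()                          # total curve length
    d = np.max(np.sqrt(np.arange(1, len(x)) ** 2 + (x[1:] - x[0]) ** 2))
    return np.log10(n) / (np.log10(n) + np.log10(d / L))

rng = np.random.default_rng(3)
print(katz_fd(np.sin(np.linspace(0, 8 * np.pi, 512))))   # smooth -> lower FD
print(katz_fd(rng.normal(size=512)))                     # noisy  -> higher FD
```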

Keywords: EEG, Student's t-statistics, BCI, Fractal Features, ANFIS, FKNN.

134 Concept for Knowledge out of Sri Lankan Non-State Sector: Performances of Higher Educational Institutes and Successes of Its Sector

Authors: S. Jeyarajan

Abstract:

A concept of knowledge was discovered through a study of successful competition among Sri Lankan non-state higher educational institutes. The concept emerged from knowledge management practices collected from reputed literature sources, such as Emerald Insight, and from the non-state higher education sector itself. A test was conducted to reveal the existence of these practices, and the reasons behind them, in Sri Lankan non-state higher education institutes. The absence of such studies, together with uncertainty about the number of participants available for data collection in the Sri Lankan context, motivated the selection of a qualitative research method, which used attributes of the Delphi method to manage that uncertainty. Data were collected under a dramaturgical method, which contributes to efficient use of the Delphi method, and were gathered systematically through purposive and modified snowball sampling techniques. Grounded theory was selected as the data analysis technique and was conducted in intermixed discourses to manage the different perspectives in the data. The analysis revealed agreement between the resulting grounded theories and the findings of a foreign study, even though the present study was conducted as qualitative research whereas the foreign study was quantitative; the present study thus widens the discovery made in the foreign study. Further, having discovered the reasons behind the existence of these practices, the present results offer a concept of knowledge from the Sri Lankan non-state sector for managing higher educational institutes successfully.

Keywords: Integration of snowball sampling into purposive sampling, Delphi method in qualitative research, grounded theory development in intermixed discourses of analysis, knowledge management for success of higher educational institutes.

133 Effect of Progressive Type-I Right Censoring on Bayesian Statistical Inference of Simple Step–Stress Acceleration Life Testing Plan under Weibull Life Distribution

Authors: Saleem Z. Ramadan

Abstract:

This paper discusses the effects of using progressive Type-I right censoring on the design of simple step-stress accelerated life testing using a Bayesian approach for Weibull life products under the assumption of the cumulative exposure model. The optimization criterion used in this paper is to minimize the expected pre-posterior variance of the Pth percentile time of failures. The model variables are the stress changing time and the stress value for the first step. A comparison between conventional and progressive Type-I right censoring is provided. The results show that progressive Type-I right censoring reduces the cost of testing at the expense of test precision when the sample size is small. Moreover, the results show that using strong priors or a large sample size reduces the sensitivity of the test precision to the censoring proportion. Hence, progressive Type-I right censoring is recommended in these cases, as it reduces the cost of the test without greatly affecting its precision. The results also show that the use of direct or indirect priors affects the precision of the test.
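
A minimal sketch of the cumulative exposure model for a simple (two-step) step-stress test with Weibull life (common shape, a different scale per stress step), including the Pth percentile time whose pre-posterior variance the design minimizes; all parameter values are illustrative:

```python
import numpy as np

def step_stress_cdf(t, beta, eta1, eta2, tau):
    """Weibull CDF under the cumulative exposure model for a simple
    (two-step) step-stress test: common shape beta, step scales eta1, eta2,
    stress change at time tau."""
    t = np.asarray(t, dtype=float)
    u = np.where(t <= tau, t / eta1, (t - tau) / eta2 + tau / eta1)
    return 1.0 - np.exp(-u ** beta)

def percentile(p, beta, eta1, eta2, tau):
    """Pth percentile time of failure under the cumulative exposure model."""
    u = (-np.log(1.0 - p)) ** (1.0 / beta)    # solves u^beta = -ln(1-p)
    t1 = eta1 * u                             # candidate within step 1
    return t1 if t1 <= tau else tau + eta2 * (u - tau / eta1)

print(percentile(0.10, beta=1.5, eta1=500.0, eta2=200.0, tau=100.0))
```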

Keywords: Reliability, Accelerated life testing, Cumulative exposure model, Bayesian estimation, Progressive Type-I censoring, Weibull distribution.

132 Featured based Segmentation of Color Textured Images using GLCM and Markov Random Field Model

Authors: Dipti Patra, Mridula J

Abstract:

In this paper, we propose a new image segmentation approach for colour textured images. The proposed method consists of two stages. In the first stage, textural features using the gray level co-occurrence matrix (GLCM) are computed for regions of interest (ROI) considered for each class, where the ROI acts as ground truth for the classes. The Ohta model (I1, I2, I3) is the colour model used for segmentation. The statistical mean feature at a certain inter-pixel distance (IPD) of the I2 component was found to be the optimal textural feature for further segmentation. In the second stage, the feature matrix obtained is assumed to be the degraded version of the image labels, and a Markov Random Field (MRF) model is used to model the unknown image labels. The labels are estimated through the maximum a posteriori (MAP) estimation criterion using the ICM algorithm. The performance of the proposed approach is compared with that of existing schemes: JSEG, and another scheme which uses GLCM and MRF in the RGB colour space. The proposed method is found to outperform the existing ones in terms of segmentation accuracy with an acceptable rate of convergence. The results are validated with synthetic and real textured images.
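
A minimal sketch of the first-stage feature computation, assuming scikit-image for the GLCM (graycomatrix is the spelling used since version 0.19) and a random image in place of real textured data; the mean feature is read off the normalized co-occurrence matrix at a chosen IPD:

```python
import numpy as np
from skimage.feature import graycomatrix

def ohta_i2(rgb):
    """I2 component of the Ohta colour model: (R - B) / 2, shifted to [0, 255]."""
    r, b = rgb[..., 0].astype(float), rgb[..., 2].astype(float)
    return np.clip((r - b) / 2 + 128, 0, 255).astype(np.uint8)

def glcm_mean(gray, ipd):
    """Mean textural feature from the GLCM at a given inter-pixel distance."""
    p = graycomatrix(gray, distances=[ipd], angles=[0],
                     levels=256, symmetric=True, normed=True)[:, :, 0, 0]
    i = np.arange(256)
    return float((i[:, None] * p).sum())   # E[i] under the joint distribution

rng = np.random.default_rng(4)
rgb = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
print(glcm_mean(ohta_i2(rgb), ipd=2))
```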

Keywords: Texture Image Segmentation, Gray Level Co-occurrence Matrix, Markov Random Field Model, Ohta colour space, ICM algorithm.

131 MPPT Operation for PV Grid-connected System using RBFNN and Fuzzy Classification

Authors: A. Chaouachi, R. M. Kamel, K. Nagasaka

Abstract:

This paper presents a novel methodology for Maximum Power Point Tracking (MPPT) of a grid-connected 20 kW Photovoltaic (PV) system using a neuro-fuzzy network. The proposed method predicts the reference PV voltage guaranteeing optimal power transfer between the PV generator and the main utility grid. The neuro-fuzzy network is composed of a fuzzy rule-based classifier and three Radial Basis Function Neural Networks (RBFNN). The inputs of the network (irradiance and temperature) are classified before they are fed into the appropriate RBFNN for either training or estimation, while the output is the reference voltage. The main advantage of the proposed methodology, compared to a conventional single-neural-network approach, is its distinct generalization ability with respect to the nonlinear and dynamic behavior of a PV generator. In fact, the neuro-fuzzy network is a neural-network-based multi-model machine learning scheme that defines a set of local models emulating the complex and nonlinear behavior of a PV generator under a wide range of operating conditions. Simulation results under several rapid irradiance variations show that the proposed MPPT method achieves the highest efficiency compared to a conventional single neural network.
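
A minimal sketch of a single Gaussian RBFNN (k-means centers, least squares output weights) trained on a toy irradiance/temperature-to-reference-voltage mapping; the fuzzy classifier that routes inputs to one of the three RBFNNs is omitted, and the target law is invented for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

class RBFNN:
    """Minimal Gaussian RBF network: centers from k-means, linear output weights."""
    def __init__(self, n_centers=10, sigma=1.0):
        self.n_centers, self.sigma = n_centers, sigma

    def _phi(self, X):
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * self.sigma ** 2))

    def fit(self, X, y):
        self.centers = KMeans(self.n_centers, n_init=10,
                              random_state=0).fit(X).cluster_centers_
        self.w, *_ = np.linalg.lstsq(self._phi(X), y, rcond=None)
        return self

    def predict(self, X):
        return self._phi(X) @ self.w

# Inputs: irradiance (W/m^2) and module temperature (C); target: reference voltage.
rng = np.random.default_rng(5)
G, T = rng.uniform(100, 1000, 500), rng.uniform(10, 60, 500)
v_ref = 30 - 0.08 * (T - 25) + 2.0 * np.log(G / 1000 + 1e-6)  # toy PV-like law
X = np.column_stack([G, T])
net = RBFNN(n_centers=15, sigma=200.0).fit(X, v_ref)
print(net.predict(np.array([[800.0, 35.0]])))
```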

Keywords: MPPT, neuro-fuzzy, RBFN, grid-connected, photovoltaic.

130 Voltage Stability Margin-Based Approach for Placement of Distributed Generators in Power Systems

Authors: Oludamilare Bode Adewuyi, Yanxia Sun, Isaiah Gbadegesin Adebayo

Abstract:

Voltage stability analysis is crucial to the reliable and economic operation of power systems. The power systems of developing nations are more susceptible to failures due to continuously increasing load demand, which is not matched by increased generation or efficient transmission infrastructure. Thus, most power systems are heavily stressed, and the planning of extra generation from distributed generation sources needs to be done efficiently so as to ensure the security of the power system. In this paper, the performance of a relatively different approach using a line voltage stability margin indicator, which has proven to have better accuracy, is presented and compared with a conventional line voltage stability index for distributed generator (DG) siting using the Nigerian 28-bus system. The Critical Boundary Index (CBI) for voltage stability margin estimation was deployed to identify suitable locations for DG placement, and its performance was compared with DG placement using the Novel Line Stability Index (NLSI) approach. From the simulation results, CBI and NLSI agreed greatly on suitable locations for DG on the test system; while CBI identified bus 18 as the most suitable at system overload, NLSI identified bus 8 as the most suitable. Considering the effect of DG placement at the selected buses on the voltage magnitude profile, the results show that the DG placed on bus 18, identified by CBI, improved the performance of the power system more.

Keywords: Voltage stability analysis, voltage collapse, voltage stability index, distributed generation.

129 Stochastic Subspace Modelling of Turbulence

Authors: M. T. Sichani, B. J. Pedersen, S. R. K. Nielsen

Abstract:

Turbulence of the incoming wind field is of paramount importance to the dynamic response of civil engineering structures. Hence, reliable stochastic models of the turbulence should be available from which time series can be generated for dynamic response and structural safety analysis. In this paper, an empirical cross-spectral density function for the along-wind turbulence component over the wind field area is taken as the starting point. The spectrum is spatially discretized in terms of a Hermitian cross-spectral density matrix for the turbulence state vector, which turns out not to be positive definite. Since the succeeding state space and ARMA modelling of the turbulence rely on the positive definiteness of the cross-spectral density matrix, the problem of the non-positive definiteness of such matrices is first addressed and suitable treatments are proposed. From the adjusted positive definite cross-spectral density matrix, a frequency response matrix is constructed which determines the turbulence vector as a linear filtration of Gaussian white noise. Finally, an accurate state space modelling method is proposed which allows selection of an appropriate model order and estimation of a state space model for the vector turbulence process incorporating its phase spectrum in one stage, and its results are compared with a conventional ARMA modelling method.
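
One simple treatment of the non-positive-definiteness problem is to clip the negative eigenvalues of the Hermitian cross-spectral density matrix, as sketched below; the paper proposes its own adjustments, and this is only one standard option:

```python
import numpy as np

def nearest_psd(S, eps=0.0):
    """Clip negative eigenvalues of a Hermitian matrix to make it positive
    semi-definite (one simple treatment; not the paper's specific method)."""
    w, V = np.linalg.eigh((S + S.conj().T) / 2)   # enforce Hermitian symmetry
    return (V * np.maximum(w, eps)) @ V.conj().T

# Hermitian but indefinite example.
S = np.array([[2.0, 1.5j], [-1.5j, 1.0]])
print(np.linalg.eigvalsh(S))               # one negative eigenvalue
print(np.linalg.eigvalsh(nearest_psd(S)))  # clipped to >= 0
```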

Keywords: Turbulence, wind turbine, complex coherence, state space modelling, ARMA modelling.

128 Fuzzy Ideology based Long Term Load Forecasting

Authors: Jagadish H. Pujar

Abstract:

Load forecasting plays a paramount role in the operation and management of power systems. Accurate estimation of future power demands for various lead times facilitates the task of generating power reliably and economically. The forecasting of future loads for a relatively large lead time (months to a few years) is studied here, i.e., long-term load forecasting. Among the various techniques used in load forecasting, artificial intelligence techniques provide greater accuracy than conventional techniques. Fuzzy logic, a very robust artificial intelligence technique, is used in this paper to forecast load on a long-term basis. The paper gives a general algorithm to forecast long-term load. The algorithm is an extension of a short-term load forecasting method to long-term load forecasting, and it concentrates not only on the forecast values of load but also on the errors incorporated into the forecast. Hence, by correcting the errors in the forecast, forecasts with very high accuracy have been achieved. The algorithm is demonstrated with the help of data collected for the residential sector (LT2 (a) type load: domestic consumers). Load is determined for three consecutive years (April 2006 to March 2009) in order to demonstrate the efficiency of the algorithm, and forecasts are made for the next two years (April 2009 to March 2011).

Keywords: Fuzzy Logic Control (FLC), Data Dependent Factors (DDF), Model Dependent Factors (MDF), Statistical Error (SE), Short Term Load Forecasting (STLF), Miscellaneous Error (ME).

127 Laser Data Based Automatic Generation of Lane-Level Road Map for Intelligent Vehicles

Authors: Zehai Yu, Hui Zhu, Linglong Lin, Huawei Liang, Biao Yu, Weixin Huang

Abstract:

With the development of intelligent vehicle systems, a high-precision road map is increasingly needed in many aspects. Automatic lane line extraction and modeling are the most essential steps for the generation of a precise lane-level road map. In this paper, an automatic lane-level road map generation system is proposed. To extract the road markings on the ground, the multi-region Otsu thresholding method is applied, which calculates the intensity value of the laser data that maximizes the variance between background and road markings. The extracted road marking points are then projected onto a raster image and clustered using a two-stage clustering algorithm. Lane lines are subsequently recognized from these clusters by the shape features of their minimum bounding rectangles. To ensure the storage efficiency of the map, the lane lines are approximated by cubic polynomial curves using a Bayesian estimation approach. The proposed lane-level road map generation system has been tested under urban and expressway conditions in Hefei, China. The experimental results show that our method achieves excellent extraction and clustering performance, and the fitted lines reach a high position accuracy with an error of less than 10 cm.
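
A minimal sketch of two of these steps on synthetic data: Otsu's threshold computed from an intensity histogram (the paper applies it per region), and a cubic lane-line fit via Bayesian linear regression, with scikit-learn's BayesianRidge standing in for the paper's Bayesian estimation approach:

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

def otsu_threshold(values, bins=256):
    """Otsu's threshold: maximize between-class variance of a 1-D intensity set."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    w0 = np.cumsum(p)                      # class probabilities
    mu = np.cumsum(p * edges[:-1])
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    return edges[np.nanargmax(sigma_b)]

rng = np.random.default_rng(6)
# Synthetic LiDAR intensities: dark asphalt background + bright lane markings.
intens = np.concatenate([rng.normal(40, 8, 5000), rng.normal(180, 10, 300)])
print("threshold:", round(otsu_threshold(intens), 1))

# Cubic lane-line fit via Bayesian linear regression on polynomial features.
x = np.linspace(0, 50, 200)                       # longitudinal distance (m)
y = 0.002 * x**3 - 0.1 * x**2 + 0.5 * x + rng.normal(0, 0.05, x.size)
X = np.vander(x, 4)                               # columns [x^3, x^2, x, 1]
fit = BayesianRidge(fit_intercept=False).fit(X, y)
print("cubic coefficients:", fit.coef_.round(4))
```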

Keywords: Curve fitting, lane-level road map, line recognition, multi-thresholding, two-stage clustering.

126 Quantitative Genetics Researches on Milk Protein Systems of Romanian Grey Steppe Breed

Authors: V. Maciuc, Şt. Creangă, I. Gîlcă, V. Ujică

Abstract:

This paper is part of a complex research project on the Romanian Grey Steppe, a breed unique in terms of biological and cultural-historical importance, on the verge of extinction, which has been included in a programme for the preservation of genetic resources in Romania. The study of the genetic polymorphism of protein fractions, especially kappa-casein, and the relations of these lactoprotein genotypes with some quantitative and qualitative features of milk yield represents a current theme and a novelty for this breed. In the estimation of the genetic parameters we used the R.E.M.L. (Restricted Maximum Likelihood) method. The main lactoprotein in milk, kappa-casein (K-cz), characterized in the specialized literature as a feature with a high degree of hereditary transmission, behaves as such in the nucleus under study, a value also confirmed by the heritability coefficient (h2 = 0.57). We note the medium values for milk and fat quantity (h2 = 0.26 and 0.29) and the high hereditary influence of the fat and protein percentages of milk (h2 = 0.71 and 0.63). Correlations between kappa-casein and milk quantity are negative and strong. Between kappa-casein and other qualitative features of milk (fat content, 0.58-0.67; protein content, 0.77-0.87), there are positive and very strong correlations. At the same time, between kappa-casein and β-casein (β-cz) and β-lactoglobulin (β-lg), respectively, the correlations are positive with high values (0.37-0.45), indicating the same causes and determining factors for the two groups of features.

Keywords: breed, genetic preservation, lactoproteins, Romanian Grey Steppe

125 Estimation of Thermal Conductivity of Nanofluids Using MD-Stochastic Simulation Based Approach

Authors: Sujoy Das, M. M. Ghosh

Abstract:

The thermal conductivity of a fluid can be significantly enhanced by dispersing nano-sized particles in it, and the resultant fluid is termed a "nanofluid". A theoretical model for estimating the thermal conductivity of a nanofluid has been proposed here. It is based on the mechanism that evenly dispersed nanoparticles within a nanofluid undergo Brownian motion, in the course of which the nanoparticles repeatedly collide with the heat source. During each collision a rapid heat transfer occurs owing to the solid-solid contact. Molecular dynamics (MD) simulation of the collision of nanoparticles with the heat source has shown that there is a pulse-like pickup of heat by the nanoparticles within 20-100 ps, the extent of which depends not only on the thermal conductivity of the nanoparticles, but also on the elastic and other physical properties of the nanoparticle. After the collision, the nanoparticles undergo Brownian motion in the base fluid and release the excess heat to the surrounding base fluid within 2-10 ms. The Brownian motion and associated temperature variation of the nanoparticles have been modeled by stochastic analysis. Repeated occurrence of these events by the suspended nanoparticles significantly contributes to the characteristic thermal conductivity of the nanofluids, which has been estimated by the present model for an ethylene glycol based nanofluid containing Cu nanoparticles of size ranging from 8 to 20 nm with a Gaussian size distribution. The prediction of the present model shows reasonable agreement with the experimental data available in the literature.
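
A crude sketch of the two-timescale mechanism, with invented parameters: pulse-like heat pickup at Poisson-distributed collisions with the heat source, followed by exponential release to the base fluid (this illustrates the stochastic picture only, not the paper's MD-derived model):

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative parameters (not the paper's MD-derived values).
dt = 1e-5            # time step (s)
steps = 200_000
tau_rel = 5e-3       # heat-release time constant to the base fluid (s), ~2-10 ms
dT_pickup = 1.0      # normalized excess temperature gained per collision
collision_rate = 50  # mean collisions with the heat source per second

excess_T = 0.0
history = np.empty(steps)
for k in range(steps):
    # Pulse-like heat pickup on (Poisson-distributed) collisions...
    if rng.random() < collision_rate * dt:
        excess_T += dT_pickup
    # ...followed by exponential release to the surrounding base fluid.
    excess_T -= excess_T * dt / tau_rel
    history[k] = excess_T

print("time-averaged excess temperature:", history.mean().round(3))
```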

Keywords: Brownian dynamics, Molecular dynamics, Nanofluid, Thermal conductivity.

124 Improvement of the Q-System Using the Rock Engineering System: A Case Study of Water Conveyor Tunnel of Azad Dam

Authors: S. Golmohammadi, M. Noorian Bidgoli

Abstract:

Because the status and mechanical parameters of discontinuities in the rock mass are included in the calculations, various methods of rock engineering classification are often used as a starting point for the design of different types of structures. The Q-system is one of the most frequently used methods for stability analysis and determination of support systems of underground structures in rock, including tunnels. In this method, six main parameters of the rock mass are required: the Rock Quality Designation (RQD), joint set number (Jn), joint roughness number (Jr), joint alteration number (Ja), joint water parameter (Jw), and Stress Reduction Factor (SRF). In this regard, in order to achieve a reasonable and optimal design, identifying the parameters that most affect the stability of such structures is one of the most important goals and most necessary actions in rock engineering. It is therefore necessary to study the relationships between the parameters of a system, how they interact with each other and, ultimately, with the whole system. In this research, an attempt has been made to determine the most effective (key) parameters among the six rock mass parameters of the Q-system using the Rock Engineering System (RES) method, in order to improve the relationships between the parameters in the calculation of the Q value. The RES method determines the degree of cause and effect of a system's parameters by constructing an interaction matrix. In this research, the geomechanical data collected from the water conveyor tunnel of Azad Dam were used to construct the interaction matrix of the Q-system. For this purpose, instead of using conventional coding methods, which are always accompanied by defects such as uncertainty, the Q-system interaction matrix is coded using a technique that is, in effect, a statistical analysis of the data, determining the correlation coefficients between the parameters. In this way, the effect of each parameter on the system is evaluated with greater certainty. The results of this study show that the constructed interaction matrix provides a reasonable estimate of the effective parameters in the Q-system. Among the six parameters, SRF and Jr have the maximum and minimum impact on the system, respectively, while RQD and Jw are the most and least affected by the system, respectively. Therefore, by developing this method, a more accurate rock mass classification relation can be obtained by weighting the required parameters in the Q-system.
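
A minimal sketch of coding an RES interaction matrix from correlation coefficients and ranking the parameters by cause (C) and effect (E) sums, using random stand-in data in place of the Azad Dam measurements:

```python
import numpy as np

params = ["RQD", "Jn", "Jr", "Ja", "Jw", "SRF"]
rng = np.random.default_rng(8)

# Stand-in field data: rows are tunnel sections, columns the six Q-parameters.
data = rng.normal(size=(60, 6))

# Code the interaction matrix with absolute correlation coefficients,
# as a statistical alternative to conventional expert coding.
M = np.abs(np.corrcoef(data, rowvar=False))
np.fill_diagonal(M, 0.0)

cause = M.sum(axis=1)    # C_i: influence of parameter i on the system (row sum)
effect = M.sum(axis=0)   # E_i: influence of the system on parameter i (column sum)
weight = (cause + effect) / (cause + effect).sum()   # normalized parameter weights

for name, c, e, w in zip(params, cause, effect, weight):
    print(f"{name:4s}  C={c:.2f}  E={e:.2f}  weight={w:.3f}")
```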

Keywords: Q-system, Rock Engineering System, statistical analysis, rock mass, tunnel.

123 Experimental Investigation on Freeze-Concentration Process Desalting for Highly Saline Brines

Authors: H. Al-Jabli

Abstract:

The aim of this paper was to evaluate the freeze-melting process for the disposal of highly saline brines by confirming the performance of the treatment system. A laboratory bench-scale freezing technique test unit was designed, constructed, and tested at the Doha Research Plant (DRP) in Kuwait. The principal unit operations considered for the laboratory study are ice crystallization, separation, washing, and melting. The applied process is characterized as "secondary-refrigerant indirect freezing", which utilizes the normal freezing concept. Highly saline brine, with an average TDS of 250,000 ppm, was used as the feed water. Brines from Kuwait desalination plants were used in the experimental study to measure the performance of the proposed treatment system. The experimental analysis shows that the freeze-melting process is capable of dropping the TDS of the feed water from 249,482 ppm to 56,880 ppm over a two-phase course, while the overall recovery, salt passage, and salt rejection results are 31.11%, 19.05%, and 80.95%, respectively. Therefore, the freeze-melting process is encouraging for the proposed application, as the results confirm the capability of the process to reduce a major amount of the dissolved salts of highly saline brine with a reasonable recovery. This process might be competitive with other brine disposal processes.

Keywords: High saline brine, freeze-melting process, ice crystallization, brine disposal process.

122 Robot Operating System-Based SLAM for a Gazebo-Simulated Turtlebot2 in 2d Indoor Environment with Cartographer Algorithm

Authors: Wilayat Ali, Li Sheng, Waleed Ahmed

Abstract:

The ability of a robot to simultaneously build a map of the environment and localize itself with respect to that environment is the most important capability of mobile robots. Many algorithms can be utilized to build up the SLAM process, and SLAM is a developing area in robotics research. The Robot Operating System (ROS) is one of the frameworks which provide multiple algorithm nodes to work with and provide a transmission layer to robots. Among the algorithms in extensive use are Hector SLAM, Gmapping, and Cartographer SLAM. This paper describes a ROS-based Simultaneous Localization and Mapping (SLAM) library, Google Cartographer, which is an open-source algorithm. The algorithm was applied to create a map using laser and pose data from a 2D LiDAR placed on a mobile robot. The robot model uses the Gazebo package and is visualized in RViz. Our research work's primary goal is to obtain a map through the Cartographer SLAM algorithm in a static indoor environment. Our research shows that, for indoor environments, Cartographer is an applicable algorithm to generate 2D maps with a LiDAR placed on a mobile robot, because it uses both odometry and pose estimation. The algorithm has been evaluated, and the constructed maps are compared against those of the SLAM algorithms provided with the TurtleBot2 in the static indoor environment.

Keywords: SLAM, ROS, navigation, localization and mapping, Gazebo, Rviz, Turtlebot2, SLAM algorithms, 2d Indoor environment, Cartographer.

121 Open-Loop Vector Control of Induction Motor with Space Vector Pulse Width Modulation Technique

Authors: Karchung, S. Ruangsinchaiwanich

Abstract:

This paper presents an open-loop vector control method for an induction motor with the space vector pulse width modulation (SVPWM) technique. Normally, closed-loop speed control is preferred and is believed to be more accurate. However, it requires a position sensor to track the rotor position, which is not desirable for certain workspace applications. This paper exhibits the performance of a three-phase induction motor with the simplest control algorithm, without the use of a position sensor or an estimation block to estimate the rotor position for sensorless control. The motor stator currents are measured and transformed to the synchronously rotating (d-q-axis) frame by means of the Clarke and Park transformations. The actual control happens in this frame, where the measured currents are compared with the reference currents. The error signal is fed to a conventional PI controller, and the corrected d-q voltage is generated. The controller outputs are transformed back to three-phase voltages and fed to the SVPWM block, which generates the PWM signal for the voltage source inverter. The open-loop vector control model along with the SVPWM algorithm is modeled in MATLAB/Simulink and is experimentally validated on a TMS320F28335 DSP board.
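
A minimal sketch of the Clarke and Park transformations used to move the measured stator currents into the rotating d-q frame (amplitude-invariant form; balanced sinusoidal currents map to constant d-q values):

```python
import numpy as np

def clarke(ia, ib, ic):
    """Amplitude-invariant Clarke transform: three-phase -> stationary alpha-beta."""
    alpha = (2 / 3) * (ia - 0.5 * ib - 0.5 * ic)
    beta = (ib - ic) / np.sqrt(3)
    return alpha, beta

def park(alpha, beta, theta):
    """Park transform: stationary alpha-beta -> synchronously rotating d-q frame."""
    d = alpha * np.cos(theta) + beta * np.sin(theta)
    q = -alpha * np.sin(theta) + beta * np.cos(theta)
    return d, q

# Balanced sinusoidal currents map to (nearly) constant d-q values.
t = np.linspace(0, 0.04, 1000)
w = 2 * np.pi * 50                         # 50 Hz supply
ia = np.cos(w * t)
ib = np.cos(w * t - 2 * np.pi / 3)
ic = np.cos(w * t + 2 * np.pi / 3)
d, q = park(*clarke(ia, ib, ic), theta=w * t)
print(d.mean().round(3), q.mean().round(3))   # -> 1.0, 0.0
```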

Keywords: Electric drive, induction motor, open-loop vector control, space vector pulse width modulation technique.

120 Estimation of the Minimum Floor Length Downstream Regulators under Different Flow Scenarios

Authors: Bakhiet, Shenouda, Gamal Abouzeid Abdel-Rahim, Norihiro Izumi

Abstract:

The correct design of a regulator structure requires complete prediction of the ultimate dimensions of the scour hole profile formed downstream of the solid apron. Scour downstream of regulators is studied either on solid aprons, by means of the velocity distribution, or on movable beds, by studying the topography of the scour hole formed downstream. In this paper, a new technique was developed to study the scour hole downstream of regulators on movable beds. The study was divided into two categories: the first is to find the sum of the length of the rigid apron behind the gates and the length of the scour hole formed downstream of it, while the second is to find the minimum length of rigid apron behind the gates that prevents erosion downstream of it. The study covers free and submerged hydraulic jump conditions in both symmetrical and asymmetrical under-gated regulation. From the comparison between the studied categories, we found that the minimum length of rigid apron to prevent scour (Ls) is greater than the sum of the lengths of the rigid apron and the scour hole formed behind it (L+Xs). On the other hand, the scour hole dimensions in the case of a submerged hydraulic jump are always greater than in the free case, and the scour hole dimensions in asymmetrical operation are greater than in symmetrical operation.

Keywords: Movable bed, Regulators, Scour, Symmetrical and asymmetrical operation

119 Mobile Robot Control by Von Neumann Computer

Authors: E. V. Larkin, T. A. Akimenko, A. V. Bogomolov, A. N. Privalov

Abstract:

The digital control system for mobile robot (MR) control is considered. It is shown that sequential interpretation of control algorithm operators, unfolding in physical time, leads to time delays between inputting data from sensors and outputting data to actuators. Another destabilizing control factor is the presence of backlash in the joints between an actuator and its executive unit. A complex model of the control system, which takes into account the dynamics of the MR, the dynamics of the digital controller, and backlash in the actuators, is worked out. The digital controller model is divided into two parts: the first part describes the control law embedded in the controller in the form of a control program that realizes a polling procedure when organizing transactions to sensors and actuators. The second part of the model describes the time delays that occur in the Von Neumann-type controller when processing data. To estimate the time intervals, the algorithm is represented in the form of an ergodic semi-Markov process. For an ergodic semi-Markov process of common form, a method is proposed for estimating the wandering time from one arbitrary state to another. An example shows how the backlash and time delays affect the quality characteristics of the functioning of the MR control system.
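
For the embedded-chain view of a semi-Markov process, the mean "wandering" time from each state to a target state solves a small linear system; a minimal sketch with invented transition probabilities and mean transition times (the paper's method for the general case may differ):

```python
import numpy as np

def wandering_time(P, T, target):
    """Mean first-passage ('wandering') time to `target` in a semi-Markov
    process with embedded transition matrix P and mean transition times T[i, j].
    Solves m_i = sum_j P[i, j] * (T[i, j] + m_j), with m_target = 0."""
    n = P.shape[0]
    idx = [i for i in range(n) if i != target]
    A = np.eye(len(idx)) - P[np.ix_(idx, idx)]
    b = (P * T).sum(axis=1)[idx]          # expected one-step duration from i
    m = np.linalg.solve(A, b)
    out = np.zeros(n)
    out[idx] = m
    return out

P = np.array([[0.0, 0.7, 0.3],
              [0.5, 0.0, 0.5],
              [0.4, 0.6, 0.0]])
T = np.array([[0.0, 2.0, 4.0],            # mean sojourn/transfer times
              [1.0, 0.0, 3.0],
              [2.0, 5.0, 0.0]])
print(wandering_time(P, T, target=2))     # mean times to reach state 2
```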

Keywords: Mobile robot, backlash, control algorithm, Von Neumann controller, semi-Markov process, time delay.

118 Brain Image Segmentation Using Conditional Random Field Based On Modified Artificial Bee Colony Optimization Algorithm

Authors: B. Thiagarajan, R. Bremananth

Abstract:

A tumor is an uncontrolled growth of tissue in any part of the body. Tumors are of different types and have different characteristics and treatments. A brain tumor is inherently serious and life-threatening because of its character in the limited space of the intracranial cavity (the space formed inside the skull). Locating the tumor within an MR (magnetic resonance) image of the brain is an integral part of the treatment of brain tumors. This segmentation task requires classification of each voxel as either tumor or non-tumor, based on the description of the voxel under consideration. Many studies in the medical field use Markov Random Fields (MRF) in the segmentation of MR images. Even though the segmentation process is better, computing the probabilities and estimating the parameters is difficult. In order to overcome these issues, a Conditional Random Field (CRF) is used in this paper for segmentation, along with a modified artificial bee colony optimization and a modified fuzzy possibilistic c-means (MFPCM) algorithm. This work mainly focuses on reducing the computational complexity found in existing methods and aims at higher accuracy. The efficiency of this work is evaluated using parameters such as region non-uniformity, correlation, and computation time. The experimental results are compared with existing methods such as MRF with an improved Genetic Algorithm (GA) and the MRF-Artificial Bee Colony (MRF-ABC) algorithm.

Keywords: Conditional random field, Magnetic resonance, Markov random field, Modified artificial bee colony.

117 Bee Parameter Determination via Weighted Centriod Modified Simplex and Constrained Response Surface Optimisation Methods

Authors: P. Luangpaiboon

Abstract:

Various forms of intelligence and inspiration have been adopted into the iterative searching processes called meta-heuristics. They intelligently perform exploration and exploitation in the solution domain space, aiming to efficiently seek near-optimal solutions. In this work, the bee algorithm, inspired by the natural foraging behaviour of honey bees, was adapted to find near-optimal solutions to a transportation management problem, dynamic multi-zone dispatching. This problem allows for uncertain and changing customers' demand. In striving to remain competitive, a transportation system should therefore be flexible in order to cope with changes in customers' demand in terms of inbound and outbound goods and technological innovations. To maintain a higher service level at lower management cost via the minimal imbalance scenario, rearrangement penalties for the areas in each zone, including time periods, are also included. However, the performance of the algorithm depends on appropriate parameter settings, which need to be determined and analysed before its implementation. The BEE parameters are determined through the linear constrained response surface optimisation method (LCRSOM) and the weighted centroid modified simplex method (WCMSM). Experimental results were analysed in terms of the best solutions found so far, the mean and standard deviation of the imbalance values, and the convergence of the solutions obtained. It was found that the results obtained from the LCRSOM were better than those using the WCMSM; however, the average execution time of an experimental run using the LCRSOM was longer than with the WCMSM. Finally, a recommendation of proper level settings of the BEE parameters for some selected problem sizes is given as a guideline for future applications.

Keywords: Meta-heuristic, Bee Algorithm, Dynamic Multi-Zone Dispatching, Linear Constrained Response Surface Optimisation Method, Weighted Centroid Modified Simplex Method

116 General Regression Neural Network and Back Propagation Neural Network Modeling for Predicting Radial Overcut in EDM: A Comparative Study

Authors: Raja Das, M. K. Pradhan

Abstract:

This paper presents a comparative study of two neural network models, the General Regression Neural Network (GRNN) and the Back Propagation Neural Network (BPNN), used to estimate the radial overcut produced during Electrical Discharge Machining (EDM). Four input parameters have been employed: discharge current (Ip), pulse-on time (Ton), duty fraction (Tau), and discharge voltage (V). Artificial intelligence techniques have recently emerged as effective tools that can replace time-consuming procedures in various scientific and engineering applications, particularly in the prediction and estimation of complex and nonlinear processes. Both networks are trained, and the prediction results are tested on the unseen validation set of the experiment and analysed. The performance of both networks is found to be in good agreement, with an average percentage error of less than 11%, and the correlation coefficient obtained on the validation data set for both GRNN and BPNN is more than 91%. However, the GRNN is much faster to train than the BPNN and is often more accurate: it features fast learning that does not require an iterative procedure and has a highly parallel structure. On the other hand, the GRNN requires more memory to store the model and is slower than multilayer perceptron networks at classifying new cases.
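
A GRNN is essentially a one-pass kernel-weighted average (the Nadaraya-Watson estimator), which is why it trains without iteration; a minimal sketch on synthetic EDM-like data with the four inputs normalized to [0, 1] (the response law is invented):

```python
import numpy as np

def grnn_predict(X_train, y_train, X_new, sigma=0.5):
    """General Regression Neural Network: a one-pass kernel-weighted average
    (Nadaraya-Watson estimator) -- no iterative training required."""
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))            # pattern-layer activations
    return (w @ y_train) / w.sum(axis=1)          # summation/output layers

rng = np.random.default_rng(9)
# Inputs: discharge current Ip, pulse-on time Ton, duty fraction Tau, voltage V.
X = rng.uniform(0, 1, (120, 4))
overcut = 0.05 + 0.3 * X[:, 0] + 0.15 * X[:, 1] ** 2 + rng.normal(0, 0.01, 120)

X_test = rng.uniform(0, 1, (5, 4))
print(grnn_predict(X, overcut, X_test, sigma=0.3).round(4))
```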

Keywords: Electrical-discharge machining, General Regression Neural Network, Back-propagation Neural Network, Radial Overcut.

115 Stature Prediction Model Based On Hand Anthropometry

Authors: Arunesh Chandra, Pankaj Chandna, Surinder Deswal, Rajesh Kumar Mishra, Rajender Kumar

Abstract:

The arm length, hand length, hand breadth, and middle finger length of 1540 right-handed industrial workers of Haryana state were used to assess the relationship between the upper limb dimensions and stature. Initially, the data were analyzed using basic univariate analysis and independent t-tests; then simple and multiple linear regression models were used to estimate stature, using SPSS (version 17). There was a positive correlation between the upper limb measurements (hand length, hand breadth, arm length, and middle finger length) and stature (p < 0.01), which was highest for hand length. The accuracy of stature prediction ranged from ±54.897 mm to ±58.307 mm. The use of multiple regression equations gave better results than simple regression equations. This study provides new forensic standards for stature estimation from the upper limb measurements of male industrial workers of Haryana (India). The results of this research indicate that stature can be determined accurately from hand dimensions when only the upper limb is available, for reasons such as explosions, train or plane crashes, or mutilated bodies. The regression formulae derived in this study will be useful to anatomists, archaeologists, anthropologists, design engineers, and forensic scientists for fairly accurate prediction of stature using regression equations.
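
A minimal sketch of the simple-versus-multiple regression comparison on synthetic stand-in measurements (the actual Haryana coefficients and standard errors are in the paper, not here):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(10)

# Synthetic stand-ins for the measured dimensions (mm).
n = 1540
hand_len = rng.normal(190, 10, n)
hand_br = rng.normal(85, 5, n)
arm_len = rng.normal(560, 25, n)
mid_fing = rng.normal(80, 5, n)
stature = 600 + 4.5 * hand_len + 1.2 * arm_len + rng.normal(0, 40, n)

# Simple regression on hand length alone vs. multiple regression on all four.
simple = LinearRegression().fit(hand_len.reshape(-1, 1), stature)
multi = LinearRegression().fit(np.column_stack([hand_len, hand_br,
                                                arm_len, mid_fing]), stature)
for name, m, X in [("simple", simple, hand_len.reshape(-1, 1)),
                   ("multiple", multi, np.column_stack([hand_len, hand_br,
                                                        arm_len, mid_fing]))]:
    resid = stature - m.predict(X)
    print(f"{name:8s} SEE = {resid.std(ddof=X.shape[1] + 1):.1f} mm")
```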

Keywords: Anthropometric dimensions, Forensic identification, Industrial workers, Stature prediction.

114 Investigating the Regulation System of the Synchronous Motor Excitation Mode Serving as a Reactive Power Source

Authors: Baghdasaryan Marinka, Ulikyan Azatuhi

Abstract:

The efficient usage of the compensation abilities of the electrical-drive synchronous motors used in production processes can substantially improve the technical and economic indices of the process. Reducing the flows of reactive electrical energy through the compensation of reactive power allows a significant reduction of the load losses of power in the electrical networks. As a result of analyzing the scientific works devoted to the issues of regulating the excitation of synchronous motors, the need for a comprehensive investigation and estimation of the excitation mode has been substantiated. By means of the obtained transfer functions, the transient processes of the excitation mode have been studied in the Simulink environment of the software package MATLAB. From the Nyquist plot and the transient response, the necessity of developing a Proportional-Integral-Derivative (PID) regulator has been justified. The transient processes of the system with the PID regulator have been investigated, and the amplitude-phase characteristics of the system have been estimated. The analysis of the obtained results shows that the regulation indices of the developed system have been improved. The developed system can be successfully applied for regulating the excitation voltage of synchronous motors of different power ratings, operating with a changing load, ensuring a power factor close to 1.
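
A minimal sketch of a discrete PID regulator acting on a first-order plant, a crude stand-in for the excitation-voltage loop; the gains and plant constant are illustrative, not the paper's identified values:

```python
import numpy as np

def simulate_pid(kp, ki, kd, setpoint=1.0, dt=0.01, steps=1000):
    """Discrete PID regulator driving a first-order plant (a crude stand-in
    for the excitation-voltage loop of a synchronous motor)."""
    y, integral, prev_err = 0.0, 0.0, setpoint
    out = np.empty(steps)
    for k in range(steps):
        err = setpoint - y
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv
        prev_err = err
        y += dt * (-y + u) / 0.2          # plant: 0.2 * dy/dt = -y + u
        out[k] = y
    return out

resp = simulate_pid(kp=2.0, ki=5.0, kd=0.05)
print("final value:", resp[-1].round(3), "overshoot:", (resp.max() - 1).round(3))
```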

Keywords: Transient process, synchronous motor, excitation mode, regulator, reactive power.

113 Adsorption and Electrochemical Regeneration for Industrial Wastewater Treatment

Authors: H. M. Mohammad, A. Martin, N. Brown, N. Hodson, P. Hill, E. Roberts

Abstract:

Graphite intercalation compound (GIC) has been demonstrated to be a useful, low-capacity, and rapid adsorbent for the removal of organic micropollutants from water. The high electrical conductivity and low capacity of the material make it well suited to electrochemical regeneration. Following electrochemical regeneration, the equilibrium loading under similar conditions is reported to exceed that achieved by the fresh adsorbent; this behavior is reported in terms of the regeneration efficiency being greater than 100%. In this work, surface analysis techniques are employed to investigate the material in three states: 'fresh', 'loaded', and 'regenerated'. 'Fresh' GIC is shown to exhibit a hydrogen- and oxygen-rich surface layer approximately 150 nm thick. 'Loaded' GIC shows a similar but slightly thicker surface layer (approximately 370 nm thick) and significant enhancement of the hydrogen and oxygen abundance extending beyond 600 nm from the surface. 'Regenerated' GIC shows an oxygen-rich layer, slightly thicker than in the fresh case at approximately 220 nm, while showing much lower hydrogen enrichment at the surface. The results demonstrate that while the electrochemical regeneration effectively removes the phenol model pollutant, it also oxidizes the exposed carbon surface. These results may have a significant impact on the estimation of adsorbent life.

Keywords: Graphite, adsorbent, electrochemical, regeneration, phenol.

112 A Concept to Assess the Economic Importance of the On-Site Activities of ETICS

Authors: V. Sulakatko, F. U. Vogdt, I. Lill

Abstract:

Construction technology and on-site construction activities have a direct influence on the life cycle costs of energy-efficiently renovated apartment buildings. The systematic inadequacies of the External Thermal Insulation Composite System (ETICS) which occur during the construction phase increase the risk for all stakeholders, reduce mechanical durability, and increase the life cycle costs of the building. The economic effect of these shortcomings can be minimised if the risk of the most significant on-site activities is recognised. The objective of the presented ETICS economic assessment concept is to evaluate the economic influence of on-site shortcomings and reveal their significance for the foreseeable future repair costs. The model assembles repair techniques, discusses their direct cost calculation methods, argues for the proper usage of net present value over the life cycle of the building, and proposes a simulation tool to evaluate the risk of on-site activities. As the technique depends on the selected real interest rate, a sensitivity analysis is anticipated to determine the validity of the recommendations. After verification of the model on sample buildings by the industry, it is expected to increase the economic rationality of resource allocation and reduce high-risk systematic shortcomings during the construction process of ETICS.
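
A minimal sketch of discounting a hypothetical repair scenario with a real interest rate, including the sensitivity of the net present value to that rate (all figures are invented):

```python
def npv_repairs(repairs, real_rate):
    """Net present value of foreseeable repair costs over the life cycle,
    discounted with a real interest rate (illustrative figures only)."""
    return sum(cost / (1 + real_rate) ** year for year, cost in repairs)

# Hypothetical ETICS repair scenario: (year, direct repair cost in EUR).
scenario = [(5, 2_000), (12, 8_500), (25, 15_000)]

# Sensitivity of the result to the selected real interest rate.
for r in (0.01, 0.03, 0.05):
    print(f"real rate {r:.0%}: NPV = {npv_repairs(scenario, r):,.0f} EUR")
```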

Keywords: Activity-based cost estimating, Cost estimation, ETICS, Life cycle costing.

111 The Applications of Quantum Mechanics Simulation for Solvent Selection in Chemicals Separation

Authors: Attapong T., Hong-Ming Ku, Nakarin M., Narin L., Alisa L, Jirut W.

Abstract:

Quantum mechanics simulation was applied to calculate the interaction force between two molecules at the atomic level. A simple extractive distillation system is ternary, consisting of two close-boiling components (A, the lower-boiling component, and B, the higher-boiling component) and a solvent (S). The quantum mechanics simulation was used to calculate the intermolecular forces (interaction forces) between the close-boiling components and the solvent, i.e., the A-S and B-S interactions. The requirement for a promising extractive distillation solvent is that the solvent (S) form a stronger intermolecular force with only one of the two components (A or B). In this study, aromatic-aromatic, aromatic-cycloparaffin, and paraffin-diolefin systems were selected as demonstrations of solvent selection. This study defines a new term used for screening solvents, called the relative interaction force, which is calculated from the quantum mechanics simulation. The results showed that the relative interaction force gives good agreement with the literature data (relative volatilities from experiment); the reasons are discussed. Finally, this study suggests that quantum mechanics results can improve relative volatility estimation for screening solvents, leading to reduced time and money consumption.

Keywords: Extractive distillation, Interaction force, Quantum mechanics, Relative volatility, Solvent extraction.

110 Multistage Data Envelopment Analysis Model for Malmquist Productivity Index Using Grey's System Theory to Evaluate Performance of Electric Power Supply Chain in Iran

Authors: Mesbaholdin Salami, Farzad Movahedi Sobhani, Mohammad Sadegh Ghazizadeh

Abstract:

Evaluation of organizational performance is among the most important measures that help organizations and entities continuously improve their efficiency. Organizations can use existing data and the results from the comparison of the units under investigation to obtain an estimation of their performance. The Malmquist Productivity Index (MPI) is an important index in the evaluation of overall productivity, which considers technological developments and technical efficiency at the same time. This article proposes a model based on the multistage MPI that accommodates limited data (Grey's theory). This model can evaluate the performance of units using limited and uncertain data in a multistage process. It was applied by the electricity market manager to Iran's electric power supply chain (EPSC), which contains uncertain data, to evaluate the performance of its actors. The results of solving the model showed an improvement in the accuracy of the estimated future performance of the units under investigation when using Grey's system theory. This model can be used in any case study in which the MPI is used and the data are limited or uncertain.
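
The single-period building block of the MPI is a DEA efficiency score; a minimal sketch of the input-oriented CCR envelopment LP on toy data, assuming SciPy's HiGHS linear programming solver (the Malmquist index then combines four such scores across two periods, and the grey extension for uncertain data is omitted):

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of DMU j0 (envelopment form):
    min theta  s.t.  X @ lam <= theta * x0,  Y @ lam >= y0,  lam >= 0."""
    m, n = X.shape            # m inputs, n DMUs
    s = Y.shape[0]            # s outputs
    # Decision vector: [theta, lam_1 .. lam_n]
    c = np.r_[1.0, np.zeros(n)]
    A_ub = np.block([[-X[:, [j0]], X],          # X lam - theta x0 <= 0
                     [np.zeros((s, 1)), -Y]])   # -Y lam <= -y0
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

# Toy data: 2 inputs x 5 DMUs, 1 output x 5 DMUs.
X = np.array([[4.0, 7.0, 8.0, 4.0, 2.0],
              [3.0, 3.0, 1.0, 2.0, 4.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0, 1.0]])
print([round(ccr_efficiency(X, Y, j), 3) for j in range(5)])
```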

Keywords: Malmquist Index, Grey's Theory, Charnes Cooper & Rhodes (CCR) Model, network data envelopment analysis, Iran electricity power chain.
