Search results for: automatic parameter calibration

1304 An Experimental Multi-Agent Robot System for Operating in Hazardous Environments

Authors: Y. J. Huang, J. D. Yu, B. W. Hong, C. H. Tai, T. C. Kuo

Abstract:

In this paper, a multi-agent robot system is presented. The system consists of four robots. The developed robots can automatically enter and patrol a harmful environment, such as a building contaminated with a virus or a factory with a hazardous gas leak. Furthermore, every robot is able to perform obstacle avoidance and search for victims. Several operation modes are provided: remote control, obstacle avoidance, automatic searching, and so on.

Keywords: autonomous robot, field programmable gate array, obstacle avoidance, ultrasonic sensor, wireless communication.

1303 Predictive Semi-Empirical NOx Model for Diesel Engine

Authors: Saurabh Sharma, Yong Sun, Bruce Vernham

Abstract:

Accurate prediction of NOx emission is a continuing challenge in the field of diesel engine-out emission modeling. Performing experiments for every condition and scenario costs a significant amount of money and man-hours; therefore, a model-based development strategy has been implemented to address this issue. NOx formation depends strongly on the burned-gas temperature and the O2 concentration inside the cylinder. Current empirical models are developed by calibrating parameters that represent the engine operating conditions against the measured NOx, which limits the prediction of purely empirical models to the region in which they were calibrated. An alternative solution is presented in this paper, which focuses on the utilization of in-cylinder combustion parameters to form a predictive semi-empirical NOx model. The outcome of this work is a fast and predictive NOx model built from physical parameters and empirical correlations. The model is developed from steady-state data collected over the entire operating region of the engine and from a predictive combustion model developed in Gamma Technologies (GT)-Power using the Direct Injection (DI)-Pulse combustion object. In this approach, the temperatures of both the burned and unburned zones are considered during the combustion period, i.e. from Intake Valve Closing (IVC) to Exhaust Valve Opening (EVO). The oxygen consumed in the burned zone and the trapped fuel mass are also considered while developing the reported model. Several statistical methods are used to construct the model, including individual and ensemble machine learning methods. A detailed validation of the model on multiple diesel engines is reported in this work. A substantial number of cases are tested for different engine configurations over a large span of speed and load points. Sweeps of operating conditions such as Exhaust Gas Recirculation (EGR), injection timing and Variable Valve Timing (VVT) are also considered for the validation. The model shows very good predictability and robustness at both sea-level and altitude conditions with different ambient conditions. Its advantages, such as high accuracy and robustness at different operating conditions, low computational time and the lower number of data points required for calibration, establish a platform where the model-based approach can be used for the engine calibration and development process. Moreover, this work aims to establish a framework for future model development for other targets such as soot, Combustion Noise Level (CNL), NO2/NOx ratio, etc.
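
As a sketch of the statistical modeling step, the snippet below fits an ensemble regressor (gradient boosting) to hypothetical in-cylinder features; the feature names, synthetic data and choice of learner are illustrative assumptions, not the authors' actual inputs or model.

```python
# Minimal sketch of an ensemble-based semi-empirical NOx model.
# Feature names (burned-zone temperature, O2 fraction, trapped fuel mass, speed)
# are illustrative placeholders, not the study's actual inputs.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(1800, 2600, n),   # burned-zone temperature [K]
    rng.uniform(0.05, 0.20, n),   # O2 mass fraction in burned zone [-]
    rng.uniform(10, 80, n),       # trapped fuel mass [mg]
    rng.uniform(1000, 4000, n),   # engine speed [rpm]
])
# Synthetic target standing in for measured steady-state NOx [ppm]
y = 0.5 * X[:, 0] + 2000 * X[:, 1] - 3 * X[:, 2] + rng.normal(0, 30, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out points:", round(r2_score(y_te, model.predict(X_te)), 3))
```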

Keywords: Diesel engine, machine learning, NOx emission, semi-empirical.

1302 A Hybrid Particle Swarm Optimization-Nelder-Mead Algorithm (PSO-NM) for Nelson-Siegel-Svensson Calibration

Authors: Sofia Ayouche, Rachid Ellaia, Rajae Aboulaich

Abstract:

Today, insurers may use the yield curve as an indicator for evaluating the profit or performance of their portfolios; therefore, they model it with a class of models able to fit and forecast the future term structure of interest rates: the Nelson-Siegel-Svensson (NSS) model. Unfortunately, many authors have reported considerable difficulty in calibrating the model, because the optimization problem is not convex and has multiple local optima. In this context, we implement a hybrid Particle Swarm Optimization and Nelder-Mead algorithm in order to minimize, by the least squares method, the difference between the zero-coupon curve and the NSS curve.
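
The calibration problem can be stated compactly: choose the six NSS parameters (beta0, beta1, beta2, beta3, lambda1, lambda2) that minimize the sum of squared differences between the NSS curve and the observed zero-coupon rates. The sketch below shows the NSS function and a plain Nelder-Mead fit on illustrative data; the PSO stage of the hybrid algorithm, which would supply the starting point, is not implemented here.

```python
# Nelson-Siegel-Svensson zero-coupon curve and a least-squares fit.
# The Nelder-Mead call illustrates only the local-search half of the hybrid
# PSO-NM scheme; the PSO stage would provide its starting point.
import numpy as np
from scipy.optimize import minimize

def nss(tau, b0, b1, b2, b3, l1, l2):
    t1, t2 = tau / l1, tau / l2
    f1 = (1 - np.exp(-t1)) / t1
    f2 = f1 - np.exp(-t1)
    f3 = (1 - np.exp(-t2)) / t2 - np.exp(-t2)
    return b0 + b1 * f1 + b2 * f2 + b3 * f3

def sse(params, tau, observed):
    return np.sum((nss(tau, *params) - observed) ** 2)

tau = np.array([0.25, 0.5, 1, 2, 5, 10, 20, 30])                 # maturities [years]
observed = np.array([1.1, 1.3, 1.6, 2.0, 2.6, 3.1, 3.4, 3.5])    # illustrative zero rates [%]
x0 = [3.0, -2.0, 1.0, 1.0, 1.5, 8.0]                             # would come from PSO
fit = minimize(sse, x0, args=(tau, observed), method="Nelder-Mead")
print(np.round(fit.x, 4))
```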

Keywords: Optimization, zero-coupon curve, Nelson-Siegel-Svensson, Particle Swarm Optimization, Nelder-Mead Algorithm.

1301 Calibration of the Radial Installation Limit Error of the Accelerometer in the Gravity Gradient Instrument

Authors: Danni Cong, Meiping Wu, Xiaofeng He, Junxiang Lian, Juliang Cao, Shaokun Cai, Hao Qin

Abstract:

The gravity gradient instrument (GGI) is the core of the gravity gradiometer, so structural errors in the sensor have a great impact on the measurement results. In order not to compromise the target measurement accuracy, a limit error is required for the installation of the accelerometer. In this paper, based on the established measurement-principle model, the radial installation limit error is calibrated; it is taken as an example to provide a method for calculating the other installation limit errors while ensuring the accuracy of the measurement result. This method provides the idea for deriving the limit errors of the geometric structure of the sensor, laying the foundation for the mechanical precision design and the physical design.

Keywords: Gravity gradient sensor, radial installation limit error, accelerometer, uniaxial rotational modulation.

1300 Ultra-Precise Hybrid Lens Distortion Correction

Authors: Christian Bräuer-Burchardt, Peter Kühmstedt, Gunther Notni

Abstract:

A new hybrid method to realise high-precision distortion determination for optical ultra-precision 3D measurement systems based on stereo cameras using active light projection is introduced. It consists of two phases: the basic distortion determination and the refinement. The refinement phase uses a plane surface and projected fringe patterns as calibration tools to determine simultaneously the distortion of both cameras within an iterative procedure. The new technique may be performed with the device in the "ready for measurement" state, which avoids errors introduced by a later adjustment. A considerable reduction of distortion errors is achieved, leading to considerable improvements in the accuracy of 3D measurements, especially in the precise measurement of smooth surfaces.

Keywords: 3D Surface Measurement, Fringe Projection, Lens Distortion, Stereo.

1299 A Novel Non-Uniformity Correction Algorithm Based On Non-Linear Fit

Authors: Yang Weiping, Zhang Zhilong, Zhang Yan, Chen Zengping

Abstract:

Infrared focal plane array (IRFPA) sensors, owing to their high sensitivity, high frame rate and simple structure, have become the most widely used detectors in military applications. However, they suffer from a common problem called fixed pattern noise (FPN), which severely degrades image quality and limits infrared imaging applications. It is therefore necessary to perform non-uniformity correction (NUC) on IR images. Non-uniformity correction algorithms fall into two main categories, calibration-based and scene-based, and both have shortcomings; hence a novel non-uniformity correction algorithm based on non-linear fitting is proposed, which combines the advantages of the two approaches. Experimental results show that the proposed algorithm achieves a good NUC effect with a lower non-uniformity ratio.
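
The two-point correction mentioned in the keywords is the standard calibration-based baseline: each pixel's gain and offset are estimated from two uniform reference frames. A minimal sketch with illustrative data follows; it shows the baseline only, not the proposed non-linear-fit algorithm.

```python
# Two-point non-uniformity correction: per-pixel gain/offset from two
# uniform (blackbody) reference frames at low and high radiance.
import numpy as np

def two_point_nuc(raw, ref_low, ref_high, target_low, target_high):
    gain = (target_high - target_low) / (ref_high - ref_low)
    offset = target_low - gain * ref_low
    return gain * raw + offset

rng = np.random.default_rng(0)
gain_true = rng.normal(1.0, 0.05, (4, 4))   # unknown per-pixel gain
off_true = rng.normal(0.0, 5.0, (4, 4))     # unknown per-pixel offset
ref_low = gain_true * 100 + off_true        # response to low-radiance source
ref_high = gain_true * 200 + off_true       # response to high-radiance source
raw = gain_true * 150 + off_true            # scene frame
print(two_point_nuc(raw, ref_low, ref_high, 100.0, 200.0))  # ~uniform 150
```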

Keywords: Non-uniformity correction, non-linear fit, two-point correction, temporal Kalman filter.

1298 Six Sigma-Based Optimization of Shrinkage Accuracy in Injection Molding Processes

Authors: Sky Chou, Joseph C. Chen

Abstract:

This paper focuses on using six sigma methodologies to reach the desired shrinkage of a manufactured high-density polyethylene (HDPE) part produced by an injection molding machine. It presents a case study where the correct shrinkage is required to reduce or eliminate defects and to improve the process capability indices Cp and Cpk for an injection molding process. To improve this process and keep the product within specifications, the six sigma define, measure, analyze, improve, and control (DMAIC) approach was implemented in this study. The six sigma approach was paired with the Taguchi methodology to identify the optimized processing parameters that keep the shrinkage rate within the specifications set by the customer. An L9 orthogonal array was applied in the Taguchi experimental design, with four controllable factors and one non-controllable/noise factor. The four controllable factors are the cooling time, melt temperature, holding time, and metering stroke. The noise factor is the difference between material brand 1 and material brand 2. After the confirmation run was completed, measurements verified that the new parameter settings are optimal. With the new settings, the process capability index improved dramatically. The purpose of this study is to show that the six sigma and Taguchi methodologies can be used efficiently to determine the important factors that improve the process capability index of the injection molding process.
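
The capability indices referred to are Cp = (USL - LSL) / (6*sigma) and Cpk = min(USL - mu, mu - LSL) / (3*sigma). A minimal sketch with illustrative shrinkage measurements and specification limits (not the study's data):

```python
# Process capability indices for a measured characteristic (e.g. shrinkage rate).
# Specification limits and measurements below are illustrative only.
import numpy as np

def cp_cpk(samples, lsl, usl):
    mu, sigma = np.mean(samples), np.std(samples, ddof=1)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

shrinkage = np.array([1.52, 1.49, 1.51, 1.50, 1.48, 1.53, 1.50, 1.49])  # [%]
print(cp_cpk(shrinkage, lsl=1.40, usl=1.60))
```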

Keywords: Injection molding, shrinkage, six sigma, Taguchi parameter design.

1297 The Use of Artificial Neural Network in Option Pricing: The Case of S and P 100 Index Options

Authors: Zeynep İltüzer Samur, Gül Tekin Temur

Abstract:

Due to the increasing and varying risks that economic units face, derivative instruments have gained substantial importance, and trading volumes of derivatives have reached very significant levels. In parallel with these high trading volumes, researchers have developed many different models, some parametric, some nonparametric. In this study, the aim is to analyse the success of artificial neural networks in pricing options, using S&P 100 index options data. Previous studies generally cover data on European-type call options; this study includes not only European call options but also American call and put options and European put options. Three data sets are used to build three different ANN models. One includes only data directly observable from the economic environment, i.e. strike price, spot price, interest rate, maturity and type of contract. The others include an extra input that is not observable data but a parameter, i.e. volatility. With these detailed data, the performance of ANN along the put/call, American/European and moneyness dimensions is analyzed, and whether the contribution of volatility as an input improves prediction performance is examined. The most striking results revealed by the study are that ANN shows better performance when pricing call options than put options, and that the use of the volatility parameter as an input does not improve the performance.
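
One of the three ANN configurations described (observable inputs plus the volatility parameter) can be sketched with a small multilayer perceptron; the data below are synthetic placeholders, not S&P 100 option quotes, and the crude price formula exists only to give the network something to fit.

```python
# ANN option-pricing sketch: inputs are strike, spot, rate, maturity,
# call/put flag and (optionally) volatility; target is the option price.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n = 2000
spot = rng.uniform(80, 120, n)
strike = rng.uniform(80, 120, n)
rate = rng.uniform(0.01, 0.05, n)
maturity = rng.uniform(0.1, 1.0, n)
is_call = rng.integers(0, 2, n)
vol = rng.uniform(0.1, 0.4, n)
intrinsic = np.where(is_call == 1, np.maximum(spot - strike, 0), np.maximum(strike - spot, 0))
price = intrinsic + spot * vol * np.sqrt(maturity) * 0.4   # crude placeholder target

X = np.column_stack([strike, spot, rate, maturity, is_call, vol])
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0))
model.fit(X, price)
print("Training R^2:", round(model.score(X, price), 3))
```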

Keywords: Option Pricing, Neural Network, S&P 100 Index, American/European options

1296 A Study on the Effect of Valve Timing on the Combustion and Emission Characteristics for a 4-cylinder PCCI Diesel Engine

Authors: Joonsup Han, Jaehyeon Lee, Hyungmin Kim, Kihyung Lee

Abstract:

PCCI engines can reduce NOx and PM emissions simultaneously without sacrificing thermal efficiency, but the low combustion temperature resulting from early fuel injection, and ignition occurring prior to TDC, can cause higher THC and CO emissions and higher fuel consumption. In conclusion, it was found that PCCI combustion achieved by the 2-stage injection strategy with optimized calibration factors (e.g. EGR rate, injection pressure, swirl ratio, intake pressure, injection timing) can reduce NOx and PM emissions simultaneously. This research is expected to provide valuable information conducive to the development of an innovative combustion engine that can fulfill upcoming stringent emission standards.

Keywords: Atkinson cycle, Diesel Engine, LIVC (Late intake valve closing), PCCI (premixed charge compression ignition)

1295 Flow Discharge Determination in Straight Compound Channels Using ANNs

Authors: A. Zahiri, A. A. Dehghani

Abstract:

Although many researchers have studied the flow hydraulics of compound channels, determining their flow rating curves still poses many complicated problems. Many different methods have been presented for these channels, but extending them to all types of compound channels with different geometrical and hydraulic conditions is certainly difficult. In this study, with the aid of nearly 400 laboratory and field data sets of geometry and flow rating curves from 30 different straight compound sections, and using artificial neural networks (ANNs), flow discharge in compound channels was estimated. Thirteen dimensionless input variables, including relative depth, relative roughness, relative width, aspect ratio, bed slope, main channel side slopes, flood plain side slopes and berm inclination, together with one output variable (flow discharge), have been used in the ANNs. Comparison of the ANN model with the traditional divided channel method (DCM) shows the high accuracy of the ANN model results. Sensitivity analysis showed that relative depth, with a 47.6 percent contribution, is the most effective input parameter for flow discharge prediction. Relative width and relative roughness have 19.3 and 12.2 percent importance, respectively. On the other hand, the shape parameter, main channel side slopes and flood plain side slopes, with 2.1, 3.8 and 3.8 percent contributions, have the least importance.
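
The divided channel method (DCM) used as the benchmark applies Manning's equation separately to each subsection and sums the resulting discharges, Q = sum_i (1/n_i) A_i R_i^(2/3) S^(1/2). A minimal sketch with an illustrative three-subsection geometry:

```python
# Divided channel method (DCM): total discharge as the sum of Manning
# discharges computed independently for each subsection.
def dcm_discharge(subsections, slope):
    """subsections: list of (area [m^2], wetted_perimeter [m], manning_n)."""
    q = 0.0
    for area, perimeter, n in subsections:
        radius = area / perimeter          # hydraulic radius R
        q += (1.0 / n) * area * radius ** (2.0 / 3.0) * slope ** 0.5
    return q

# Main channel plus two flood plains (illustrative geometry).
sections = [(12.0, 8.0, 0.025), (6.0, 10.0, 0.035), (6.0, 10.0, 0.035)]
print(round(dcm_discharge(sections, slope=0.001), 2), "m^3/s")
```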

Keywords: ANN model, compound channels, divided channel method (DCM), flow rating curve

1294 The Analysis of Defects Prediction in Injection Molding

Authors: Mehdi Moayyedian, Kazem Abhary, Romeo Marian

Abstract:

This paper presents an evaluation of a plastic defect in injection molding, known as the short shot defect, before it occurs in the process. The aim of this paper is to evaluate the different parameters which affect the possibility of the short shot defect. The analysis of short shot possibility is conducted via SolidWorks Plastics and the Taguchi method to determine the most significant parameters. The Finite Element Method (FEM) is employed to analyze two circular flat polypropylene plates of 1 mm thickness. Filling time, part cooling time, pressure holding time, melt temperature and gate type are chosen as the process and geometric parameters, respectively. A methodology is presented herein to predict the possibility of short-shot occurrence. The analysis determined that melt temperature is the most influential parameter affecting the possibility of the short shot defect, with a contribution of 74.25%, followed by filling time with a contribution of 22% and gate type with a contribution of 3.69%. The optimum levels of each parameter leading to a reduction in the possibility of short shot were also determined: gate type at level 1, filling time at level 3 and melt temperature at level 3. In summary, the most significant parameters affecting the possibility of short shot are melt temperature, filling time, and gate type.
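
For a smaller-the-better response such as short-shot possibility, Taguchi analysis computes the signal-to-noise ratio S/N = -10*log10(mean(y^2)) for each run and compares mean S/N across factor levels. A minimal sketch with a hypothetical three-factor L9 assignment and invented responses, not the study's measured values:

```python
# Taguchi smaller-the-better S/N ratio and mean factor effects for an L9 design.
# Factor levels and responses are illustrative, not the study's measurements.
import numpy as np

L9 = np.array([  # columns: filling time, melt temperature, gate type (levels 1-3)
    [1, 1, 1], [1, 2, 2], [1, 3, 3],
    [2, 1, 2], [2, 2, 3], [2, 3, 1],
    [3, 1, 3], [3, 2, 1], [3, 3, 2],
])
y = np.array([0.82, 0.55, 0.30, 0.78, 0.48, 0.35, 0.70, 0.45, 0.25])  # short-shot possibility

sn = -10 * np.log10(y ** 2)          # smaller-the-better S/N ratio per run
for j, name in enumerate(["filling time", "melt temperature", "gate type"]):
    means = [sn[L9[:, j] == level].mean() for level in (1, 2, 3)]
    print(name, "mean S/N per level:", np.round(means, 2))
```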

Keywords: Injection molding, plastic defects, short shot, Taguchi method.

1293 Surface Modification of Titanium Alloy with Laser Treatment

Authors: Nassier A. Nassir, Robert Birch, D. Rico Sierra, S. P. Edwardson, G. Dearden, Zhongwei Guan

Abstract:

The effect of laser surface treatment parameters on the residual strength of a titanium alloy has been investigated. The influence of laser surface treatment on the bonding strength between titanium and poly-ether-ketone-ketone (PEKK) surfaces was also evaluated and compared with that of titanium foils without surface treatment, in order to optimize the laser parameters. Material characterization using an optical microscope was carried out to study the microstructure and to measure the mean roughness of the titanium surface. The results showed that surface roughness depends significantly on the laser power, increasing as the laser power increases. Moreover, the tensile tests showed no significant drop in tensile strength for the treated samples compared to the virgin ones. In order to optimize the laser parameters as well as the corresponding surface roughness, single-lap shear tests were conducted on pairs of laser-treated titanium strips. The results showed that the bonding shear strength between the titanium alloy and the PEKK film increased with surface roughness up to a specific limit; beyond this point, the laser parameters had no significant effect on the bonding strength. This evidence suggests that very high laser power is not necessary when treating the titanium surface to achieve good bonding strength between the titanium alloy and the PEKK film.

Keywords: Bonding strength, laser surface treatment, PEKK, titanium alloy.

1292 Enhanced Weighted Centroid Localization Algorithm for Indoor Environments

Authors: I. Nižetić Kosović, T. Jagušt

Abstract:

Lately, with the increasing number of location-based applications, the demand for highly accurate and reliable indoor localization has become urgent. This is a challenging problem due to measurement variance, which is a consequence of factors such as obstacles, equipment properties and environmental changes in the complex nature of indoor environments. In this paper we propose a low-cost, custom-setup infrastructure solution and a localization algorithm based on the Weighted Centroid Localization (WCL) method. Localization accuracy is increased by several enhancements: calibration of the RSSI values obtained from the wireless nodes, repeated RSSI measurements to exclude deviating values from the position estimation, and consideration of the orientation of the device with respect to the wireless nodes. We conducted several experiments to evaluate the proposed algorithm. A high accuracy of ~1 m was achieved.
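
In weighted centroid localization the position estimate is the weighted mean of the known anchor (wireless node) positions, with weights derived from RSSI, for example via a log-distance path-loss model. A minimal sketch with hypothetical anchor positions, path-loss parameters and averaged RSSI readings:

```python
# Weighted centroid localization: position estimate as an RSSI-weighted
# average of known anchor-node positions (log-distance path-loss assumed).
import numpy as np

def rssi_to_distance(rssi, rssi_1m=-40.0, path_loss_exp=2.5):
    """Invert a log-distance path-loss model (parameters are illustrative)."""
    return 10 ** ((rssi_1m - rssi) / (10 * path_loss_exp))

def wcl(anchors, rssi, g=1.0):
    d = rssi_to_distance(np.asarray(rssi))
    w = 1.0 / d ** g                       # closer anchors weigh more
    return (w[:, None] * np.asarray(anchors)).sum(axis=0) / w.sum()

anchors = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (0.0, 10.0)]  # node positions [m]
rssi = [-52.0, -61.0, -68.0, -63.0]                             # averaged, calibrated RSSI [dBm]
print(np.round(wcl(anchors, rssi), 2))
```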

Keywords: Indoor environment, received signal strength indicator, weighted centroid localization, wireless localization.

1291 Quality Parameters of Offset Printing Wastewater

Authors: Kiurski S. Jelena, Kecić S. Vesna, Aksentijević M. Snežana

Abstract:

Samples of tap water and wastewater were collected in three offset printing facilities in Novi Sad, Serbia. Ten physicochemical parameters were analyzed in all collected samples: pH, conductivity, m-alkalinity, p-alkalinity, acidity, carbonate concentration, hydrogen carbonate concentration, active oxygen content, chloride concentration and total alkali content. All measurements were conducted using standard analytical and instrumental methods. Comparing the obtained results for tap water and wastewater, a clear quality difference was noticeable, since all physicochemical parameters were significantly higher in the wastewater samples. The study also involves the application of simple linear regression analysis to the obtained dataset. Using the software package ORIGIN 5, the pH value was correlated with the other physicochemical parameters. Based on the obtained Pearson correlation coefficients, a strong negative correlation between chloride concentration and pH (r = -0.943), as well as between acidity and pH (r = -0.855), was determined. In addition, a statistically significant relationship with pH was obtained only for acidity and chloride concentration, since the values of the F parameter (247.634 and 182.536) were higher than Fcritical (5.59). In this way, the statistical analysis highlighted the most influential parameters of water contamination in offset printing, namely acidity and chloride concentration. The results showed that the variable dependence could be represented by the general regression model y = a0 + a1x + k, which was further matched by the corresponding graphical regressions.
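
The regression step described above, fitting pH against a single physicochemical parameter and reporting Pearson's r, the F statistic and the p value, can be reproduced with standard tools. The sketch below uses illustrative numbers, not the measured wastewater data.

```python
# Simple linear regression of pH against one water-quality parameter,
# reporting Pearson's r and the regression F statistic. Values are illustrative.
import numpy as np
from scipy import stats

chloride = np.array([120.0, 150.0, 180.0, 210.0, 260.0, 300.0, 340.0, 400.0])  # [mg/L]
ph = np.array([7.9, 7.7, 7.6, 7.4, 7.2, 7.0, 6.9, 6.6])

res = stats.linregress(chloride, ph)          # fits ph = intercept + slope * chloride
f_stat = res.rvalue ** 2 / (1 - res.rvalue ** 2) * (len(ph) - 2)   # F = t^2 for one predictor
print(f"r = {res.rvalue:.3f}, slope = {res.slope:.4f}, F = {f_stat:.1f}, p = {res.pvalue:.4f}")
```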

Keywords: Pollution, printing industry, simple linear regression analysis, wastewater.

1290 Principal Component Analysis-Ranking as a Variable Selection Method for the Simultaneous Spectrophotometric Determination of Phenol, Resorcinol and Catechol in Real Samples

Authors: Nahid Ghasemi, Mohammad Goodarzi, Morteza Khosravi

Abstract:

The simultaneous determination of the multicomponent mixture of phenol, resorcinol and catechol with a chemometric technique, a PC-ranking artificial neural network (PC-ranking-ANN) algorithm, is reported in this study. Based on the data correlation coefficient method, 3 representative PCs are selected from the scores of the original UV spectral data (35 PCs) as the input patterns for the ANN to build a neural network model. The results were obtained after 8000 iterations. The RMSEP for phenol, resorcinol and catechol with PC-ranking-ANN were 0.6680, 0.0766 and 0.1033, respectively. The calibration ranges were 0.50-21.0, 0.50-15.1 and 0.50-20.0 μg ml-1 for phenol, resorcinol and catechol, respectively. The proposed method was successfully applied to the determination of phenol, resorcinol and catechol in synthetic and water samples.
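
The PC-ranking idea (compute PCA scores of the spectra, rank components by their correlation with the analyte concentration, feed the top-ranked scores to an ANN) and the RMSEP figure of merit can be sketched as follows; the spectra below are synthetic stand-ins for the UV data.

```python
# PCA-ranking sketch: rank PC scores by |correlation| with the target
# concentration, feed the top-ranked scores to an ANN, report RMSEP.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
n_samples, n_wavelengths = 60, 35
conc = rng.uniform(0.5, 21.0, n_samples)                      # e.g. phenol [ug/mL]
spectra = np.outer(conc, rng.random(n_wavelengths)) + rng.normal(0, 0.05, (n_samples, n_wavelengths))

scores = PCA(n_components=10).fit_transform(spectra)
corr = [abs(np.corrcoef(scores[:, i], conc)[0, 1]) for i in range(scores.shape[1])]
top = np.argsort(corr)[::-1][:3]                              # 3 best-ranked PCs
X = scores[:, top]

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X[:45], conc[:45])
rmsep = np.sqrt(np.mean((model.predict(X[45:]) - conc[45:]) ** 2))
print("selected PCs:", top, " RMSEP:", round(rmsep, 3))
```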

Keywords: Phenol, Resorcinol, Catechol, Principal component ranking, Artificial Neural Network, Chemometrics.

1289 The Effects of Soil Parameters on Efficiency of Essential Oil from Zingiber zerumbet (L.) Smith in Thailand

Authors: Worakrit Worananthakij, Kamonchanok Doungtadum, Nattagan Mingkwan, Supatsorn Chupong

Abstract:

Natural products from herbs have been used in different aspects of life as a result of their various biological activities. Generally, plant growth and the production of secondary compounds largely depend on environmental conditions. To better understand this correlation, a study of biological activity together with soil parameters is necessary. This research aims to study the soil parameters which affect the efficiency of the antioxidant activity of essential oils extracted from Zingiber zerumbet in three areas of Thailand: Min Buri district, Bangkok province; Muang district, Chiang Mai province; and Kaeng Sanam Nang district, Nakhon Ratchasima province. The soil samples from each area were collected and analyzed in the laboratory. The essential oil of Z. zerumbet from each province was extracted by hydrodistillation and tested for antioxidant activity by the DPPH (2,2-diphenyl-1-picrylhydrazyl radical) assay. The results showed that soil parameters such as pH, nitrogen, potassium and phosphorus contents and cation exchange capacity of the soil specimens from Nakhon Ratchasima province were the highest (P<0.05) (6.10 ± 0.03, 0.15 ± 0.04 percent total nitrogen, 16.67 ± 0.46 mg/L, 3.35 ± 0.65 mg/kg and 12.87 ± 0.11 cmol/kg, respectively). In addition, the IC50 (inhibition concentration of antioxidant at 50%) of the Z. zerumbet essential oil collected from Nakhon Ratchasima showed the highest value (P<0.05) (1,400 µg/mL). In conclusion, soil parameters are an important factor in the efficiency of the essential oil extracted from Z. zerumbet.
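
The IC50 quoted for the DPPH assay is the essential-oil concentration at which radical scavenging reaches 50%, commonly read off by interpolating the measured dose-inhibition curve. A minimal sketch with illustrative data points, not the study's measurements:

```python
# IC50 from a DPPH dose-response curve by linear interpolation of the
# inhibition values bracketing 50%. Data points are illustrative.
import numpy as np

conc = np.array([200, 400, 800, 1200, 1600, 2000], dtype=float)   # essential oil [ug/mL]
inhibition = np.array([12.0, 24.0, 41.0, 48.0, 57.0, 66.0])       # % DPPH scavenging

ic50 = np.interp(50.0, inhibition, conc)   # inhibition must increase monotonically
print(f"IC50 ~ {ic50:.0f} ug/mL")
```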

Keywords: Antioxidant, essential oil, herb, soil parameter, Zingiber zerumbet.

1288 Assessment Power and Frequency Oscillation Damping Using POD Controller and Proposed FOD Controller

Authors: Yahya Naderi, Tohid Rahimi, Babak Yousefi, Seyed Hossein Hosseini

Abstract:

Today's modern interconnected power system is highly complex in nature, and reliability and security are among the most important requirements during its operation. Power and frequency oscillation damping mechanisms improve reliability. Because the power system stabilizer (PSS) responds slowly to major faults such as a three-phase short circuit, FACTS devices, which can control network conditions very quickly, are becoming popular. However, the capability of FACTS devices during a major fault can only be seen when nonlinear models of the FACTS devices and of the power system equipment are applied. To realize this aim, a model of a multi-machine power system with a FACTS controller is developed in MATLAB/Simulink using the SimPowerSystems (SPS) blockset. Among FACTS devices, the static synchronous series compensator (SSSC), which can rapidly change its reactance characteristic from inductive to capacitive, is an effective power flow controller. The controller parameters can be tuned using different methods, but the capability of the Genetic Algorithm (GA) makes it well suited to the parameter tuning process. In this paper, a POD controller is first used for power oscillation damping; however, in this configuration the frequency oscillation is not properly damped. Therefore, a FOD controller tuned using GA is applied, which damps out the frequency oscillation properly while maintaining suitable power oscillation damping.

Keywords: Power oscillation damping (POD), frequency oscillation damping (FOD), Static synchronous series compensator (SSSC), Genetic Algorithm (GA).

1287 Atrial Fibrillation Analysis Based on Blind Source Separation in 12-lead ECG

Authors: Pei-Chann Chang, Jui-Chien Hsieh, Jyun-Jie Lin, Feng-Ming Yeh

Abstract:

Atrial fibrillation is the most common sustained arrhythmia encountered by clinicians. Because the atrial activity waveform of atrial fibrillation is not readily visible to the human observer, it is necessary to develop an automatic diagnosis system. The 12-lead ECG is now available in hospitals and is appropriate for using Independent Component Analysis (ICA) to estimate the atrial activity (AA) period. In this research, we also adopt a second-order blind identification approach to transform the sources extracted by ICA into a more precise signal, and we then use a frequency-domain algorithm to perform the classification. In experiments on clinical data, we obtained significant results.
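
The source-separation stage (ICA applied to the 12-lead ECG, followed by a frequency-domain analysis of the extracted atrial source) can be illustrated with scikit-learn's FastICA on synthetic signals; the second-order blind identification (SOBI) refinement used in the paper is not included in this sketch.

```python
# Blind source separation of a multichannel ECG-like mixture with FastICA,
# then a simple frequency-domain check of each extracted source.
# Synthetic signals stand in for real 12-lead recordings.
import numpy as np
from sklearn.decomposition import FastICA

fs = 250.0                                      # sampling rate [Hz]
t = np.arange(0, 10, 1 / fs)
atrial = np.sin(2 * np.pi * 6.0 * t)            # ~6 Hz fibrillatory wave (illustrative)
ventricular = np.sign(np.sin(2 * np.pi * 1.2 * t)) * np.exp(-5 * (t % (1 / 1.2)))

rng = np.random.default_rng(3)
leads = np.column_stack([atrial, ventricular]) @ rng.random((12, 2)).T  # 12 pseudo-leads
leads += rng.normal(0, 0.01, leads.shape)                               # sensor noise

sources = FastICA(n_components=2, random_state=0).fit_transform(leads)
for i in range(2):
    spectrum = np.abs(np.fft.rfft(sources[:, i]))
    peak_hz = np.fft.rfftfreq(len(t), 1 / fs)[np.argmax(spectrum[1:]) + 1]
    print(f"source {i}: dominant frequency ~ {peak_hz:.1f} Hz")
```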

Keywords: 12-Lead ECG, Atrial Fibrillation, Blind Source Separation, Kurtosis

1286 Transformation Method CIM to PIM: From Business Processes Models Defined in BPMN to Use Case and Class Models Defined in UML

Authors: Y. Rhazali, Y. Hadi, A. Mouloudi

Abstract:

This paper proposes a method for the automatic transformation of the CIM level to the PIM level following the MDA approach. Our proposal is based on creating a good CIM level through well-defined rules, allowing us to obtain rich models that contain the relevant information needed to facilitate the transformation to the PIM level. We then define an appropriate PIM level through various UML diagrams. Next, we propose a set of rules to move from CIM to PIM. Our method follows the MDA approach by considering the business dimension at the CIM level through the use of BPMN, the OMG standard for business process modeling, and by using UML at the PIM level, as advocated by MDA.

Keywords: Model transformation, MDA, CIM, PIM.

1285 Unsteady Laminar Boundary Layer Forced Flow in the Region of the Stagnation Point on a Stretching Flat Sheet

Authors: A. T. Eswara

Abstract:

This paper analyses the unsteady, two-dimensional stagnation point flow of an incompressible viscous fluid over a flat sheet when the flow is started impulsively from rest and, at the same time, the sheet is suddenly stretched in its own plane with a velocity proportional to the distance from the stagnation point. The partial differential equations governing the laminar boundary layer forced convection flow are non-dimensionalised using semi-similar transformations and then solved numerically using an implicit finite-difference scheme known as the Keller-box method. Results pertaining to the flow and heat transfer characteristics are computed for all dimensionless times, uniformly valid in the whole spatial region, without any numerical difficulties. Analytical solutions are also obtained for both small and large times, representing respectively the initial unsteady and the final steady state flow and heat transfer. Numerical results indicate that the velocity ratio parameter has a significant effect on the skin friction and the heat transfer rate at the surface. Furthermore, it is shown that there is a smooth transition from the initial unsteady state flow (small time solution) to the final steady state (large time solution).

Keywords: Forced flow, Keller-box method, Stagnation point, Stretching flat sheet, Unsteady laminar boundary layer, Velocity ratio parameter.

1284 Algorithm for Bleeding Determination Based On Object Recognition and Local Color Features in Capsule Endoscopy

Authors: Yong-Gyu Lee, Jin Hee Park, Youngdae Seo, Gilwon Yoon

Abstract:

Automatic determination of blood in dim or noisy capsule endoscopic images is difficult due to the low S/N ratio, and analysis of these images may be inaccurate because of the influence of external disturbances. Therefore, we propose detection methods that do not depend only on color bands. In locating bleeding regions, the identification of object outlines in the frame and the features of their local colors are taken into consideration. The results showed that the capability of detecting bleeding was much improved.

Keywords: Endoscopy, object recognition, bleeding, image processing, RGB.

1283 IKEv1 and IKEv2: A Quantitative Analyses

Authors: H.Soussi, M.Hussain, H.Afifi, D.Seret

Abstract:

Key management is a vital component in any modern security protocol. Due to scalability and practical implementation considerations, automatic key management seems a natural choice in significantly large virtual private networks (VPNs). In this context the IETF Internet Key Exchange (IKE) is the most promising protocol under permanent review. We have made a humble effort to pinpoint the net gain of IKEv2 over IKEv1 due to recent modifications in its original structure, along with a brief overview of the salient improvements between the two versions. We have used the U.S. National Institute of Standards and Technology (NIST) VPN simulator to obtain comparisons of important performance metrics.

Keywords: Quantitative Analyses, IKEv1, IKEv2, NIST.

1282 Schema and Data Migration of a Relational Database RDB to the Extensible Markup Language XML

Authors: Alae El Alami, Mohamed Bahaj

Abstract:

This article discusses the migration of an RDB to XML documents (schema and data) based on metadata and semantic enrichment, which takes the flattened relational form of the RDB and enriches it with the object concept. The integration and exploitation of the object concept in XML uses a syntax that allows the conformity of the XML document to be verified during its creation. The information extracted from the RDB is therefore analyzed and filtered in order to fit the structure of the XML files and the associated object model; the elements implemented in the XML document through SQL queries are built dynamically. A prototype was implemented to realize automatic migration, proving the effectiveness of this particular approach.
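
The data-migration step can be illustrated with a small script that reads relational rows and emits an XML document whose element names mirror the table and column names. This is a minimal standard-library sketch, not the authors' prototype, and it omits the semantic-enrichment and DTD-validation stages.

```python
# Minimal RDB-to-XML sketch: read rows from a relational table and emit
# an XML document; element names mirror table and column names.
import sqlite3
import xml.etree.ElementTree as ET

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, dept TEXT)")
conn.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                 [(1, "Alice", "R&D"), (2, "Bob", "Sales")])

root = ET.Element("employees")
cursor = conn.execute("SELECT id, name, dept FROM employee")
columns = [c[0] for c in cursor.description]
for row in cursor:
    element = ET.SubElement(root, "employee")
    for col, value in zip(columns, row):
        ET.SubElement(element, col).text = str(value)

print(ET.tostring(root, encoding="unicode"))
```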

Keywords: RDB, XML, DTD, semantic enrichment.

1281 The Incorporation of In in GaAsN as a Means of N Fraction Calibration

Authors: H. Hashim, B. F. Usher

Abstract:

InGaAsN and GaAsN epitaxial layers with similar nitrogen compositions in a sample were successfully grown on a GaAs (001) substrate by solid source molecular beam epitaxy. An electron cyclotron resonance nitrogen plasma source was used to generate atomic nitrogen during the growth of the nitride layers. The indium composition was changed from sample to sample to give compressively and tensilely strained InGaAsN layers. Layer characteristics were assessed by high-resolution X-ray diffraction to determine the relationship between the lattice constant of the GaAs1-yNy layer and the In fraction x. The objective was to determine the In fraction x in an InxGa1-xAs1-yNy epitaxial layer that exactly cancels the strain present in a GaAs1-yNy epitaxial layer with the same nitrogen content when grown on a GaAs substrate.
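
The strain-cancellation condition behind this calibration can be sketched with Vegard's law: the quaternary lattice constant is interpolated linearly between the GaAs, InAs, GaN and InN binaries and set equal to that of the GaAs substrate. The sketch below uses nominal zincblende lattice constants from the literature and gives only an illustrative estimate, not the authors' XRD-based calibration.

```python
# Vegard's-law sketch of the In fraction x that cancels the strain of a
# given N fraction y in In(x)Ga(1-x)As(1-y)N(y) grown on GaAs.
# Lattice constants are nominal zincblende values [Angstrom]; treat as approximate.
A_GAAS, A_INAS, A_GAN, A_INN = 5.653, 6.058, 4.50, 4.98

def lattice_constant(x, y):
    """Linear (Vegard) interpolation over the four binary endpoints."""
    return ((1 - x) * (1 - y) * A_GAAS + x * (1 - y) * A_INAS
            + (1 - x) * y * A_GAN + x * y * A_INN)

def strain_cancelling_x(y):
    """Solve lattice_constant(x, y) == A_GAAS for x (the condition is linear in x)."""
    return y * (A_GAAS - A_GAN) / ((1 - y) * (A_INAS - A_GAAS) + y * (A_INN - A_GAN))

for y in (0.01, 0.02, 0.03):
    x = strain_cancelling_x(y)
    print(f"y = {y:.2f}: x ~ {x:.3f}  (check: a = {lattice_constant(x, y):.4f} Angstrom)")
```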

Keywords: Indium, molecular beam epitaxy, nitrogen, strain cancellation.

1280 Non-Local Behavior of a Mixed-Mode Crack in a Functionally Graded Piezoelectric Medium

Authors: Nidhal Jamia, Sami El-Borgi

Abstract:

In this paper, the problem of a mixed-mode crack embedded in an infinite medium made of a functionally graded piezoelectric material (FGPM), with crack surfaces subjected to electro-mechanical loadings, is investigated. Eringen's non-local theory of elasticity is adopted to formulate the governing electro-elastic equations. The properties of the piezoelectric material are assumed to vary exponentially along a plane perpendicular to the crack. Using the Fourier transform, three integral equations are obtained in which the unknown variables are the jumps of the mechanical displacements and the electric potentials across the crack surfaces. To solve the integral equations, the unknowns are directly expanded as a series of Jacobi polynomials, and the resulting equations are solved using the Schmidt method. In contrast to the classical solutions based on the local theory, it is found that no mechanical stress or electric displacement singularities are present at the crack tips when the non-local theory is employed to investigate the problem. A direct benefit is the ability to use the calculated maximum stress as a fracture criterion. The primary objective of this study is to investigate the effects of the crack length, the material gradient parameter describing the FGPM, and the lattice parameter on the mechanical stress and electric displacement fields near the crack tips.

Keywords: Functionally graded piezoelectric material, mixed-mode crack, non-local theory, Schmidt method.

1279 Using Textual Pre-Processing and Text Mining to Create Semantic Links

Authors: Ricardo Avila, Gabriel Lopes, Vania Vidal, Jose Macedo

Abstract:

This article offers an approach to the automatic discovery of semantic concepts and links in the domain of Oil Exploration and Production (E&P). Machine learning methods combined with textual pre-processing techniques were used to detect local patterns in texts and, thus, to generate new concepts and new semantic links. Even using the more specific vocabularies of the oil domain, our approach achieved satisfactory results, suggesting that the proposal can be applied in other domains and languages, requiring only minor adjustments.

Keywords: Semantic links, data mining, linked data, SKOS.

1278 Non-contact Gaze Tracking with Head Movement Adaptation based on Single Camera

Authors: Ying Huang, Zhiliang Wang, An Ping

Abstract:

With advances in computer vision, non-contact gaze tracking systems are becoming much easier to operate and more comfortable to use; the technique proposed in this paper is specially designed to achieve these goals. For convenience of operation, the proposal aims at a system with a simple configuration, composed of a fixed wide-angle camera and dual infrared illuminators. Then, in order to enhance the usability of this single-camera system, a self-adjusting method called the Real-time gaze Tracking Algorithm with head movement Compensation (RTAC) is developed for estimating the gaze direction under natural head movement while simplifying the calibration procedure. According to the actual evaluations, an average accuracy of about 1° is achieved over a field of 20×15×15 cm3.

Keywords: computer vision, gaze tracking, human-computer interaction.

1277 A Commercial Building Plug Load Management System That Uses Internet of Things Technology to Automatically Identify Plugged-In Devices and Their Locations

Authors: Amy LeBar, Kim L. Trenbath, Bennett Doherty, William Livingood

Abstract:

Plug and process loads (PPLs) account for a large portion of U.S. commercial building energy use. There is a huge potential to reduce whole building consumption by targeting PPLs for energy savings measures or implementing some form of plug load management (PLM). Despite this potential, there has yet to be a widely adopted commercial PLM technology. This paper describes the Automatic Type and Location Identification System (ATLIS), a PLM system framework with automatic and dynamic load detection (ADLD). ADLD gives PLM systems the ability to automatically identify devices as they are plugged into the outlets of a building. The ATLIS framework takes advantage of smart, connected devices to identify device locations in a building, meter and control their power, and communicate this information to a central database. ATLIS includes five primary capabilities: location identification, communication, control, energy metering, and data storage. A laboratory proof of concept (PoC) demonstrated all but the energy metering capability, and these capabilities were validated using a series of system tests. The PoC was able to identify when a device was plugged into an outlet and the location of the device in the building. When a device was moved, the PoC’s dashboard and database were automatically updated with the new location. The PoC implemented controls to devices from the system dashboard so that devices maintained correct schedules regardless of where they were plugged in within the building. ATLIS’s primary technology application is improved PLM, but other applications include asset management, energy audits, and interoperability for grid-interactive efficient buildings. An ATLIS-based system could also be used to direct power to critical devices, such as ventilators, during a brownout or blackout. Such a framework is an opportunity to make PLM more widespread and reduce the amount of energy consumed by PPLs in current and future commercial buildings.

Keywords: commercial buildings, grid-interactive efficient buildings, miscellaneous electric loads, plug loads, plug load management

1276 Face Recognition: A Literature Review

Authors: A. S. Tolba, A.H. El-Baz, A.A. El-Harby

Abstract:

The task of face recognition has been actively researched in recent years. This paper provides an up-to-date review of major human face recognition research. We first present an overview of face recognition and its applications. Then, a literature review of the most recent face recognition techniques is presented. Description and limitations of face databases which are used to test the performance of these face recognition algorithms are given. A brief summary of the face recognition vendor test (FRVT) 2002, a large scale evaluation of automatic face recognition technology, and its conclusions are also given. Finally, we give a summary of the research results.

Keywords: Combined classifiers, face recognition, graph matching, neural networks.

1275 Wireless Body Area Network’s Mitigation Method Using Equalization

Authors: Savita Sindhu, Shruti Vashist

Abstract:

A wireless body area sensor network (WBASN) is composed of a central node and heterogeneous sensors that supervise the physiological signals and functions of the human body. This overwhelming area has stimulated new research and calibration processes, especially regarding WBASN attainment and fidelity. In the era of mobility and overlapping WBASNs, system performance degrades considerably because of unstable signal integrity. Hence, it is mandatory to include mitigation techniques in the design to avoid interference. Various mitigation methods are available, e.g. diversity techniques, equalization, the Viterbi decoder, etc. This paper presents an equalization mitigation scheme in WBASNs to improve signal integrity. Eye diagrams are also given to represent the accuracy of the signal. A maximum number of symbols is taken to authenticate the signal, which in turn results in accuracy and increases the overall performance of the system.
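
The equalization scheme can be illustrated with a standard least-mean-squares (LMS) adaptive equalizer trained on a known symbol sequence; the sketch below uses an illustrative multipath channel and is not the paper's specific setup (the keywords also mention RLS as an alternative adaptation rule).

```python
# LMS adaptive equalizer sketch: a training sequence passed through a
# dispersive channel is equalized by iteratively updating tap weights.
import numpy as np

rng = np.random.default_rng(4)
symbols = rng.choice([-1.0, 1.0], size=2000)          # BPSK training symbols
channel = np.array([0.9, 0.4, 0.2])                   # illustrative multipath channel
received = np.convolve(symbols, channel, mode="full")[: len(symbols)]
received += rng.normal(0, 0.05, len(received))        # additive noise

n_taps, mu = 7, 0.01
w = np.zeros(n_taps)
for k in range(n_taps - 1, len(symbols)):
    x = received[k - n_taps + 1: k + 1][::-1]         # received[k], received[k-1], ...
    e = symbols[k] - w @ x                            # error vs. desired symbol
    w += mu * e * x                                   # LMS weight update
print("final equalizer taps:", np.round(w, 3))
```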

Keywords: Wireless body area network, equalizer, RLS, LMS.
