Search results for: participatory error correction process
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6628


6268 Maximum Norm Analysis of a Nonmatching Grids Method for Nonlinear Elliptic Boundary Value Problem −Δu = f(u)

Authors: Abida Harbi

Abstract:

We provide a maximum norm analysis of a finite element Schwarz alternating method for a nonlinear elliptic boundary value problem of the form -Δu = f(u) on two overlapping subdomains with nonmatching grids. We consider a domain that is the union of two overlapping subdomains, each with its own independently generated grid. Because the two meshes are mutually independent on the overlap region, a triangle belonging to one triangulation does not necessarily belong to the other. Under a Lipschitz assumption on the nonlinearity, we establish, on each subdomain, an optimal L∞ error estimate between the discrete Schwarz sequence and the exact solution of the boundary value problem.

Keywords: Error estimates, Finite elements, Nonlinear PDEs, Schwarz method.

6267 Forecasting Malaria Cases in Bujumbura

Authors: Hermenegilde Nkurunziza, Albrecht Gebhardt, Juergen Pilz

Abstract:

The focus of this work is to assess which method allows better forecasting of malaria cases in Bujumbura (Burundi) when taking into account the association between climatic factors and the disease. For the period 1996-2007, real monthly data on both malaria epidemiology and climate in Bujumbura are described and analyzed. We propose a hierarchical approach to achieve our objective. We first fit a Generalized Additive Model to malaria cases to obtain an accurate predictor, which is then used to predict future observations. Various well-known forecasting methods are compared, leading to different results. Based on the in-sample mean absolute percentage error (MAPE), the exponential smoothing state space model with multiplicative error and seasonality performed best.
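
As a sketch of the model-comparison step described above, the snippet below fits two Holt-Winters exponential smoothing variants to a hypothetical monthly case series and ranks them by in-sample MAPE. The synthetic data, the 12-month seasonality, and the use of statsmodels' ExponentialSmoothing as a stand-in for the full ETS state space family are assumptions for illustration only.

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Hypothetical monthly case counts (12 years x 12 months), for illustration only.
rng = np.random.default_rng(0)
months = np.arange(144)
cases = 500 + 200 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 40, 144)
cases = np.clip(cases, 1, None)  # keep counts strictly positive for multiplicative seasonality

def in_sample_mape(actual, fitted):
    """Mean absolute percentage error of the in-sample (one-step) fitted values."""
    actual, fitted = np.asarray(actual), np.asarray(fitted)
    return 100.0 * np.mean(np.abs((actual - fitted) / actual))

candidates = {
    "additive seasonality": ExponentialSmoothing(cases, trend="add", seasonal="add", seasonal_periods=12),
    "multiplicative seasonality": ExponentialSmoothing(cases, trend="add", seasonal="mul", seasonal_periods=12),
}

for name, model in candidates.items():
    fit = model.fit()
    print(f"{name}: in-sample MAPE = {in_sample_mape(cases, fit.fittedvalues):.2f}%")
```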

Keywords: Burundi, Forecasting, Malaria, Regression model, State space model.

6266 Comparison of the Distillation Curve Obtained Experimentally with the Curve Extrapolated by a Commercial Simulator

Authors: Lívia B. Meirelles, Erika C. A. N. Chrisman, Flávia B. de Andrade, Lilian C. M. de Oliveira

Abstract:

True Boiling Point (TBP) distillation is one of the most common experimental techniques for the determination of petroleum properties. The curve provides information about the performance of petroleum in terms of its cuts, but the experiment takes several days to perform. Simulation techniques can determine these properties faster, using software that calculates the distillation curve when only limited information about the crude oil is known. In order to evaluate the accuracy of distillation curve prediction, eight points of the TBP curve and the specific gravity curve (348 K and 523 K) were inserted into the HYSYS Oil Manager, and the extended curve was evaluated up to 748 K. The methods were able to predict the curve with errors of 0.6%-9.2% (Software vs. ASTM) and 0.2%-5.1% (Software vs. Spaltrohr).

Keywords: Distillation curve, petroleum distillation, simulation, true boiling point curve.

6265 A New Divide and Conquer Software Process Model

Authors: Hina Gull, Farooque Azam, Wasi Haider Butt, Sardar Zafar Iqbal

Abstract:

A software system goes through a number of stages during its life, and a software process model gives a standard format for planning, organizing, and running a project. This article presents a new software development process model, named the "Divide and Conquer Process Model", based on the idea of first dividing the work into simpler parts and then assembling those parts to complete the whole. The article begins with the background of different software process models and the problems in these models. This is followed by the new divide and conquer process model, an explanation of its different stages, and, at the end, a discussion of its advantages over other models.

Keywords: Process Model, Waterfall, divide and conquer, Requirements.

6264 Numerical Simulation of the Flowing of Ice Slurry in Seawater Pipe of Polar Ships

Authors: Li Xu, Huanbao Jiang, Zhenfei Huang, Lailai Zhang

Abstract:

In recent years, with global warming, the sea-ice extent of the Arctic has undergone an evident decrease, and the Arctic channel has attracted the attention of the shipping industry. Ice crystals present in the seawater of the Arctic channel enter the seawater system of the ship with the seawater and have been found to block the seawater pipe. In serious cases, cooler paralysis, auxiliary machine faults, and even paralysis of the ship power system may occur. In order to reduce the effect of high temperature on auxiliary equipment, the seawater system uses external ice-water in the cooling cycle, so the ice-water mixture must remain flowing and the distribution of ice crystals in the seawater pipe must be determined. As the ice slurry system is a solid-liquid two-phase system, the flow process of the ice-water mixture is very complex and diverse. In this paper, the flow of ice slurry in the seawater pipe is simulated with fluid dynamics simulation software based on the k-ε turbulence model. As the ice packing fraction is a key factor affecting the distribution of ice crystals, its influence on the flow of ice slurry is analyzed. The simulation results show that when the ice packing fraction is relatively large, the distribution of ice crystals in the flowing seawater is uneven, which increases the possibility of blocking. This provides a scientific basis for forecasting the formation of ice blockages in the seawater piping system and has important significance for the reliable operation of polar ships in the future.

Keywords: Ice slurry, seawater pipe, ice packing fraction, numerical simulation.

6263 Design and Characterization of a CMOS Process Sensor Utilizing Vth Extractor Circuit

Authors: Rohana Musa, Yuzman Yusoff, Chia Chieu Yin, Hanif Che Lah

Abstract:

This paper presents the design and characterization of a low-power Complementary Metal Oxide Semiconductor (CMOS) process sensor. The design is targeted for implementation in Silterra's 180 nm CMOS process technology. The proposed process sensor employs a threshold voltage (Vth) extractor architecture for detection of variations in the fabrication process. The process sensor generates output voltages in the range of 401 mV (fast-fast corner) to 443 mV (slow-slow corner) at nominal conditions. The power dissipation of this process sensor is 6.3 µW at a supply voltage of 1.8 V, and the silicon area is 190 µm × 60 µm. Preliminary results from the fabricated process sensor indicate close agreement between measured and simulated results.

Keywords: CMOS Process sensor, Process, Voltage and Temperature (PVT) sensor, threshold extractor circuit, Vth extractor circuit.

6262 Joint Design of MIMO Relay Networks Based on MMSE Criterion

Authors: Seungwon Choi, Seungri Jin, Ayoung Heo, Jung-Hyun Park, Dong-Jo Park

Abstract:

This paper deals with wireless relay communication systems in which multiple sources transmit information to the destination node with the help of multiple relays. We consider a signal forwarding technique based on the minimum mean-square error (MMSE) approach with multiple antennas at each relay. A source-relay-destination joint design strategy is proposed with power constraints at the destination and source nodes. Simulation results confirm that the proposed joint design method improves the average MSE performance compared with that of conventional MMSE relaying schemes.
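
The abstract does not reproduce the design equations, but the classic linear MMSE (Wiener) receive filter that such schemes build on is W = (H^H H + σ²I)^{-1} H^H. The sketch below applies it to a random MIMO link; the dimensions, noise level, and QPSK symbols are illustrative assumptions, not the authors' joint source-relay-destination design.

```python
import numpy as np

rng = np.random.default_rng(1)
n_tx, n_rx, n_sym = 4, 4, 1000
sigma2 = 0.1  # noise variance (assumed)

# Random flat-fading MIMO channel and QPSK symbols (illustrative only).
H = (rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
bits = rng.integers(0, 2, (2, n_tx, n_sym))
s = ((2 * bits[0] - 1) + 1j * (2 * bits[1] - 1)) / np.sqrt(2)
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal((n_rx, n_sym)) + 1j * rng.standard_normal((n_rx, n_sym)))
y = H @ s + noise

# Linear MMSE receive filter: W = (H^H H + sigma^2 I)^-1 H^H
W = np.linalg.solve(H.conj().T @ H + sigma2 * np.eye(n_tx), H.conj().T)
s_hat = W @ y

mse = np.mean(np.abs(s_hat - s) ** 2)
print(f"average MSE of the MMSE estimate: {mse:.4f}")
```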

Keywords: Minimum mean square error (MMSE), multiple relay, MIMO.

6261 Robust UKF Insensitive to Measurement Faults for Pico Satellite Attitude Estimation

Authors: Halil Ersin Soken, Chingiz Hajiyev

Abstract:

Under normal operating conditions of a pico satellite, the conventional Unscented Kalman Filter (UKF) gives sufficiently good estimation results. However, if the measurements are not reliable because of a malfunction in the estimation system, the UKF gives inaccurate results and diverges over time. This study introduces Robust Unscented Kalman Filter (RUKF) algorithms with filter gain correction for the case of measurement malfunctions. Through variables defined as measurement noise scale factors, faulty measurements are taken into consideration with a small weight, and the estimates are corrected without affecting the characteristics of the accurate ones. Two different RUKF algorithms, one with a single scale factor and one with multiple scale factors, are proposed and applied to the attitude estimation process of a pico satellite. The results of these algorithms are compared for different types of measurement faults in different estimation scenarios, and recommendations about their application are given.
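
The gain-correction idea, inflating the measurement noise covariance by a scale factor when the innovation is statistically too large, can be illustrated on an ordinary linear Kalman update. The chi-square threshold and the numbers below are assumptions; the paper applies the idea inside an unscented filter.

```python
import numpy as np

def robust_update(x, P, z, H, R, chi2_threshold=9.0):
    """Kalman measurement update with a single measurement-noise scale factor.

    If the normalized innovation exceeds a chi-square threshold, R is inflated
    so the (possibly faulty) measurement is weighted down instead of corrupting
    the estimate. Illustrative sketch, not the RUKF of the paper.
    """
    nu = z - H @ x                      # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    d2 = float(nu.T @ np.linalg.solve(S, nu))
    scale = max(1.0, d2 / chi2_threshold)   # measurement noise scale factor
    S_scaled = H @ P @ H.T + scale * R
    K = P @ H.T @ np.linalg.inv(S_scaled)   # corrected filter gain
    x_new = x + K @ nu
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new, scale

# Tiny example: 2-state system, one healthy and one faulty measurement.
x = np.zeros(2); P = np.eye(2); H = np.array([[1.0, 0.0]]); R = np.array([[0.04]])
for z in (np.array([0.1]), np.array([5.0])):      # the second value mimics a sensor fault
    x, P, s = robust_update(x, P, z, H, R)
    print(f"z = {z[0]:4.1f}  scale factor = {s:6.2f}  state estimate = {x}")
```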

Keywords: Attitude algorithms, Kalman filters, robust estimation.

6260 A New Image Psychovisual Coding Quality Measurement Based on Region of Interest

Authors: M. Nahid, A. Bajit, A. Tamtaoui, E. H. Bouyakhf

Abstract:

To model the human visual system (HVS) in the region of interest, we propose a new objective metric adapted to wavelet foveation-based image compression quality measurement, which exploits a foveation filter implemented in the DWT domain and based on the point and region of fixation of the human eye. This model is then used to predict the visible differences between an original and a compressed image with respect to this region and yields an adapted, local error measure by removing all peripheral errors. The technique, which we call foveation wavelet visible difference prediction (FWVDP), is demonstrated on a number of noisy images, all of which have the same local peak signal-to-noise ratio (PSNR) but visibly different errors. We show that the FWVDP reliably predicts the fixation areas of interest where the error is masked, due to high image contrast, and the areas where the error is visible, due to low image contrast. The paper also suggests ways in which the FWVDP can be used to determine a visually optimal quantization strategy for foveation-based wavelet coefficients and to produce a quantitative local measure of image quality.
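
A minimal sketch of the foveation-weighting idea, assuming a spatial-domain Gaussian falloff around the fixation point rather than the DWT-domain filter used by the FWVDP: peripheral errors are discounted, so two distortions with similar global error can receive very different foveated scores.

```python
import numpy as np

def foveated_psnr(original, compressed, fixation, sigma=40.0, peak=255.0):
    """Foveation-weighted PSNR: squared errors are weighted by a Gaussian falloff
    around the fixation point so peripheral errors are discounted. Simplified
    spatial-domain stand-in for the wavelet-domain foveation filter of the abstract."""
    h, w = original.shape
    yy, xx = np.mgrid[0:h, 0:w]
    fy, fx = fixation
    weight = np.exp(-((yy - fy) ** 2 + (xx - fx) ** 2) / (2.0 * sigma ** 2))
    sq_err = (original.astype(float) - compressed.astype(float)) ** 2
    weighted_mse = (weight * sq_err).sum() / weight.sum()
    return 10.0 * np.log10(peak ** 2 / weighted_mse)

# Hypothetical 128x128 image with noise concentrated either at the centre or the border.
rng = np.random.default_rng(2)
img = rng.integers(0, 256, (128, 128)).astype(float)
centre_noise = img.copy(); centre_noise[48:80, 48:80] += rng.normal(0, 20, (32, 32))
border_noise = img.copy(); border_noise[:16, :] += rng.normal(0, 20, (16, 128))

for name, distorted in (("noise at fixation", centre_noise), ("noise in periphery", border_noise)):
    print(f"{name}: foveated PSNR = {foveated_psnr(img, distorted, (64, 64)):.2f} dB")
```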

Keywords: Human Visual System, Image Quality, Image Compression, foveation wavelet, region of interest (ROI).

6259 Star-Hexagon Transformer Supported UPQC

Authors: Yash Pal, A. Swarup, Bhim Singh

Abstract:

A new topology of unified power quality conditioner (UPQC) is proposed for power quality (PQ) improvement in a three-phase four-wire (3P-4W) distribution system. For neutral current mitigation, a star-hexagon transformer is connected in shunt near the load along with a three-leg voltage source inverter (VSI) based UPQC. For mitigation of the source neutral current, the use of passive elements is advantageous over active compensation due to ruggedness and less complex control. In addition, by connecting a star-hexagon transformer for neutral current mitigation, the overall rating of the UPQC is reduced. The performance of the proposed 3P-4W UPQC topology is evaluated for power-factor correction, load balancing, neutral current mitigation, and mitigation of voltage and current harmonics. A simple control algorithm based on the Unit Vector Template (UVT) technique is used as the control strategy of the UPQC for mitigation of different PQ problems. In this control scheme, the current/voltage control is applied to the fundamental supply currents/voltages instead of the fast-changing APF currents/voltages, thereby reducing the computational delay. Moreover, no extra control is required for neutral source current compensation; hence the number of current sensors is reduced. The performance of the proposed UPQC topology is analyzed through simulation results using MATLAB software with its Simulink and Power System Blockset toolboxes.
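
A minimal sketch of the Unit Vector Template idea referred to above: in-phase unit templates are derived from the supply voltages and scaled by the desired current magnitude to form reference source currents. The supply values and desired magnitude are illustrative assumptions; the full UPQC control (PLL, DC-bus regulation, series reference generation) is not reproduced.

```python
import numpy as np

def unit_vector_templates(va, vb, vc):
    """In-phase unit vector templates u_k = v_k / Vm, where Vm is the amplitude
    of the (assumed balanced, filtered) supply voltage."""
    vm = np.sqrt(2.0 / 3.0 * (va ** 2 + vb ** 2 + vc ** 2))   # instantaneous amplitude
    return va / vm, vb / vm, vc / vm

# One fundamental cycle of an ideal 50 Hz, 230 V (rms) supply - illustrative values.
t = np.linspace(0, 0.02, 400, endpoint=False)
w = 2 * np.pi * 50
va = 230 * np.sqrt(2) * np.sin(w * t)
vb = 230 * np.sqrt(2) * np.sin(w * t - 2 * np.pi / 3)
vc = 230 * np.sqrt(2) * np.sin(w * t + 2 * np.pi / 3)

ua, ub, uc = unit_vector_templates(va, vb, vc)

# Reference source currents: templates scaled by the desired peak current
# (in a real UPQC this magnitude comes from the DC-bus voltage controller).
i_peak = 10.0
ia_ref, ib_ref, ic_ref = i_peak * ua, i_peak * ub, i_peak * uc
print(f"template amplitude check (should be ~1): {np.max(np.abs(ua)):.3f}")
print(f"reference current peak: {np.max(ia_ref):.2f} A")
```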

Keywords: Power-factor correction, Load balancing, UPQC, Voltage and Current harmonics, Neutral current mitigation, Star-hexagon transformer.

6258 Stochastic Resonance in Nonlinear Signal Detection

Authors: Youguo Wang, Lenan Wu

Abstract:

Stochastic resonance (SR) is a phenomenon whereby signal transmission or signal processing through certain nonlinear systems can be improved by adding noise. This paper discusses SR in nonlinear signal detection by a simple test statistic, which can be computed from multiple noisy data in a binary decision problem based on a maximum a posteriori probability criterion. The performance of detection is assessed by the probability of detection error P_er. When the input signal is a subthreshold signal, we establish that a benefit from noise can be gained for different noise types and further confirm that subthreshold SR exists in nonlinear signal detection. The efficacy of SR is significantly improved, and the minimum of P_er can approach zero as the sample number increases. These results show the robustness of SR in signal detection and extend the applicability of SR in signal processing.
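
The sketch below gives a Monte Carlo estimate of the detection error probability for a simple 1-bit threshold detector with a subthreshold signal, showing the characteristic SR behaviour of P_er first falling and then rising as noise is added. The detector, threshold, and sample sizes are assumptions, not the exact MAP statistic of the paper.

```python
import math
import numpy as np

rng = np.random.default_rng(3)

def p_error(signal_amp, threshold, noise_std, n_samples, n_trials=3000):
    """Monte Carlo estimate of the detection error probability P_er for a
    1-bit threshold detector: each noisy sample is quantized by the hard
    threshold and the crossing count is compared with the midpoint between
    its expected values under 'signal absent' and 'signal present'.
    Illustrative detector, not the exact MAP statistic of the paper."""
    tail = lambda mean: 0.5 * math.erfc((threshold - mean) / (noise_std * math.sqrt(2)))
    k_mid = 0.5 * n_samples * (tail(0.0) + tail(signal_amp))   # decision level
    errors = 0
    for _ in range(n_trials):
        present = rng.integers(0, 2)              # equiprobable hypotheses
        x = present * signal_amp + rng.normal(0.0, noise_std, n_samples)
        decide = int(np.sum(x > threshold) > k_mid)
        errors += int(decide != present)
    return errors / n_trials

# Subthreshold signal (amplitude below the threshold): P_er first drops as noise
# is added, which is the stochastic-resonance effect, then rises again.
amp, theta, n = 0.5, 1.0, 64
for sigma in (0.1, 0.3, 0.5, 1.0, 2.0):
    print(f"noise std {sigma:>4}: P_er ~ {p_error(amp, theta, sigma, n):.3f}")
```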

Keywords: Probability of detection error, signal detection, stochastic resonance.

6257 Automatic Generation Control of an Interconnected Power System with Capacitive Energy Storage

Authors: Rajesh Joseph Abraham, D. Das, Amit Patra

Abstract:

This paper is concerned with the application of small-rating Capacitive Energy Storage (CES) units for the improvement of Automatic Generation Control of a multi-unit, multi-area power system. Generation Rate Constraints are also considered in the investigations. The Integral Squared Error technique is used to obtain the optimal integral gain settings by minimizing a quadratic performance index. Simulation studies reveal that with CES units, the deviations in area frequencies and inter-area tie-power are considerably improved in terms of peak deviations and settling time compared to those obtained without CES units.
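
A sketch of the ISE-based gain selection, assuming a crude single-area frequency-deviation model (inertia, damping, one turbine lag) rather than the multi-area system with CES and Generation Rate Constraints studied in the paper: the quadratic index is accumulated for each candidate integral gain and the minimiser is picked.

```python
import numpy as np

def ise_for_gain(ki, t_end=60.0, dt=0.005):
    """Integral of squared frequency error for a crude single-area AGC loop
    (inertia + damping, a first-order turbine lag, and an integral controller).
    All parameter values are illustrative; this is not the multi-area
    model with Capacitive Energy Storage used in the paper."""
    M, D, Tt, dPL = 10.0, 1.0, 0.5, 0.01   # inertia, damping, turbine lag, load step
    f = pm = pc = 0.0
    ise = 0.0
    for _ in range(int(t_end / dt)):
        f_dot = (pm - dPL - D * f) / M     # frequency deviation dynamics
        pm_dot = (pc - pm) / Tt            # turbine/governor lag
        pc_dot = -ki * f                   # integral control action
        f, pm, pc = f + f_dot * dt, pm + pm_dot * dt, pc + pc_dot * dt
        ise += f * f * dt                  # quadratic performance index
    return ise

gains = np.round(np.arange(0.1, 2.01, 0.1), 2)
scores = [ise_for_gain(k) for k in gains]
best = gains[int(np.argmin(scores))]
print(f"integral gain minimising the ISE index: Ki = {best:.1f}")
```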

Keywords: Automatic Generation Control, Capacitive Energy Storage, Integral Squared Error.

6256 Determination of Cd, Zn, K, pH, TNV, Organic Material and Electrical Conductivity (EC) Distribution in Agricultural Soils using Geostatistics and GIS (Case Study: South-Western Natanz, Iran)

Authors: Abbas Hani, Seyed Ali Hoseini Abari

Abstract:

Soil chemical and physical properties play important roles in the environment, agricultural sustainability, and human health. The objective of this research is to determine the spatial distribution patterns of Cd, Zn, K, pH, TNV, organic material, and electrical conductivity (EC) in agricultural soils of the Natanz region in Esfehan province. In this study, geostatistical and non-geostatistical methods were used for prediction of the spatial distribution of these parameters. 64 composite soil samples were taken at 0-20 cm depth. The study area is located in the south of the Natanz agricultural lands, with an area of 21660 hectares. The spatial distribution of Cd, Zn, K, pH, TNV, organic material, and electrical conductivity (EC) was determined using geostatistics and a geographic information system. Results showed that the Cd, pH, TNV, and K data had normal distributions, while the Zn, OC, and EC data did not. Kriging, Inverse Distance Weighting (IDW), Local Polynomial Interpolation (LPI), and Radial Basis Functions (RBF) methods were used for interpolation. Trend analysis showed that organic carbon had no trend in the north-south and east-west directions, while K and TNV had second-degree trends. Several error measures were used, including the mean absolute error (MAE), mean squared error (MSE), and mean biased error (MBE). Ordinary kriging (exponential model), LPI, RBF, and IDW were chosen as the best methods for interpolating the soil parameters. Prediction maps by disjunctive kriging showed an intensive shortage of organic matter over the whole study area and a shortage of K over more than 63.4 percent of the study area.
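
A sketch of one of the interpolators (IDW) and of the three error measures used to rank the methods, evaluated by leave-one-out cross-validation. The 64 sample locations and values are fabricated for illustration; the kriging, LPI, and RBF variants are not reproduced.

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    """Inverse Distance Weighting interpolation (one of the methods listed above)."""
    d = np.linalg.norm(xy_known[None, :, :] - xy_query[:, None, :], axis=2)
    d = np.maximum(d, 1e-12)               # avoid division by zero at sample points
    w = 1.0 / d ** power
    return (w * z_known[None, :]).sum(axis=1) / w.sum(axis=1)

def loo_errors(xy, z, power=2.0):
    """Leave-one-out cross-validation errors: MAE, MSE and MBE."""
    pred = np.array([
        idw(np.delete(xy, i, axis=0), np.delete(z, i), xy[i:i + 1], power)[0]
        for i in range(len(z))
    ])
    e = pred - z
    return np.mean(np.abs(e)), np.mean(e ** 2), np.mean(e)

# 64 hypothetical sample locations (km) with a synthetic soil property.
rng = np.random.default_rng(4)
xy = rng.uniform(0, 15, (64, 2))
z = 300 + 20 * xy[:, 0] - 10 * xy[:, 1] + rng.normal(0, 15, 64)

mae, mse, mbe = loo_errors(xy, z)
print(f"IDW leave-one-out: MAE = {mae:.1f}, MSE = {mse:.1f}, MBE = {mbe:.1f}")
```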

Keywords: Electrical conductivity, Geostatistics, Geographical Information System, TNV

6255 An Adaptive Least-squares Mixed Finite Element Method for Pseudo-parabolic Integro-differential Equations

Authors: Zilong Feng, Hong Li, Yang Liu, Siriguleng He

Abstract:

In this article, an adaptive least-squares mixed finite element method is studied for pseudo-parabolic integro-differential equations. The solvability of the least-squares mixed weak formulation and of the mixed finite element scheme is proved. An a posteriori error estimator is constructed based on the least-squares functional, and a posteriori error estimates are obtained.

Keywords: Pseudo-parabolic integro-differential equation, least squares mixed finite element method, adaptive method, a posteriori error estimates.

6254 Improvement of the Parallel Compressor Model in Dealing with Unequal Outlet Pressure Distribution

Authors: Kewei Xu, Jens Friedrich, Kevin Dwinger, Wei Fan, Xijin Zhang

Abstract:

The Parallel Compressor Model (PCM) is a simplified approach to predicting compressor performance with inlet distortions. In PCM calculations, it is assumed that the sub-compressors' outlet static pressure is uniform, which simplifies the PCM calculation procedure. However, if the compressor's outlet duct is not long and straight, this assumption frequently induces errors ranging from 10% to 15%. This paper provides a revised PCM calculation method that can correct the error. The revised method employs the energy, momentum, and continuity equations to acquire the needed parameters and replace the equal static pressure assumption. Based on the revised method, PCM is applied to two compression systems with different blade types. Predictions of their performance under non-uniform inlet conditions are obtained with the revised calculation method and are used to evaluate the method's efficiency. Validating the results against experimental data, it is found that, although small deviations occur, the calculated results agree well with the experimental data, with errors ranging from 0.1% to 3%. This shows that the revised PCM calculation method possesses great advantages in predicting the performance of a distorted compressor with a limited exhaust duct.

Keywords: Parallel Compressor Model (PCM), Revised Calculation Method, Inlet Distortion, Outlet Unequal Pressure Distribution.

6253 A Quantitative Approach to Strategic Design of Component-Based Business Process Models

Authors: Eakong Atiptamvaree, Twittie Senivongse

Abstract:

A new paradigm for software design and development models software by its business process, translates the model into a process execution language, and has it run by a supporting execution engine. This process-oriented paradigm promotes modeling of software by less technical users or business analysts, as well as rapid development. Since business process models may be shared by different organizations, and sometimes even by different business domains, it is interesting to apply a technique used in traditional software component technology to design reusable business processes. This paper discusses an approach that applies a technique for software component fabrication to the design of process-oriented software units, called process components. These process components result from decomposing a business process of a particular application domain into subprocesses, with the aim that the process components can be reused in different process-based software models. The approach is quantitative because the quality of a process component design is measured from technical features of the process components. The approach is also strategic because the measured quality is evaluated against business-oriented component management goals. A software tool has been developed to measure how good a process component design is, according to the required managerial goals, and to compare it with other designs. We also discuss how we benefit from reusable process components.

Keywords: Business process model, process component, component management goals, measurement

6252 Measuring Process Component Design on Achieving Managerial Goals

Authors: Eakong Atiptamvaree, Twittie Senivongse

Abstract:

Process-oriented software development is a new software development paradigm in which software design is modeled by a business process, which is in turn translated into a process execution language for execution. The building blocks of this paradigm are software units that are composed together to work according to the flow of the business process. This new paradigm still exhibits the characteristics of applications built with traditional software component technology. This paper discusses an approach that applies a traditional technique for software component fabrication to the design of process-oriented software units, called process components. These process components result from decomposing a business process of a particular application domain into subprocesses, and these process components can be reused to design the business processes of other application domains. The decomposition considers five managerial goals, namely cost effectiveness, ease of assembly, customization, reusability, and maintainability. The paper presents how to design or decompose process components from a business process model and how to measure technical features of the design that would affect the managerial goals. A comparison of the measurement values of different designs can tell which process component design is more appropriate for the managerial goals that have been set. The proposed approach can be applied in a Web Services environment, which accommodates process-oriented software development.

Keywords: Business Process Model, Managerial Goals, Process Component.

6251 Heat Stress Monitor by Using Low-Cost Temperature and Humidity Sensors

Authors: Kiattisak Batsungnoen, Thanatchai Kulworawanichpong

Abstract:

The aim of this study is to develop a cost-effective WBGT heat stress monitor that provides precise heat stress measurement. The proposed device employs the SHT15 and DS18B20 temperature and humidity sensors together with an ATmega328 microcontroller. The developed heat stress monitor was calibrated and adjusted against standard temperature and humidity sensors in the laboratory. The results show that the mean percentage error and standard deviation of the globe temperature measurement were 2.33 and 2.71, respectively, while those of the dry bulb temperature were 0.94 and 1.02, those of the wet bulb temperature were 0.79 and 0.48, and those of the relative humidity sensor were 4.46 and 1.60. The device is relatively low-cost and its measurement error is acceptable.
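
For reference, the sketch below computes the WBGT index from the measured temperatures using the standard ISO 7243 weightings, together with the percentage-error figure used above to compare the low-cost device against a reference instrument; the sample readings are invented.

```python
def wbgt(t_nwb, t_g, t_db=None):
    """WBGT index (ISO 7243 weightings): natural wet bulb, globe and dry bulb
    temperatures in degrees Celsius. With t_db the outdoor formula is used,
    otherwise the indoor one."""
    if t_db is None:
        return 0.7 * t_nwb + 0.3 * t_g            # indoors / no solar load
    return 0.7 * t_nwb + 0.2 * t_g + 0.1 * t_db   # outdoors with solar load

def percentage_error(measured, reference):
    """Percentage error of the low-cost device against a reference instrument."""
    return 100.0 * abs(measured - reference) / reference

# Invented sample readings (deg C) from the low-cost device and a reference monitor.
device = {"t_nwb": 24.8, "t_g": 38.2, "t_db": 32.1}
reference = {"t_nwb": 25.0, "t_g": 37.5, "t_db": 31.9}

w_dev, w_ref = wbgt(**device), wbgt(**reference)
print(f"device WBGT = {w_dev:.2f} C, reference WBGT = {w_ref:.2f} C, "
      f"error = {percentage_error(w_dev, w_ref):.2f}%")
```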

Keywords: Heat stress monitor, WBGT, Temperature and Humidity Sensors.

6250 Selection of Rayleigh Damping Coefficients for Seismic Response Analysis of Soil Layers

Authors: Huai-Feng Wang, Meng-Lin Lou, Ru-Lin Zhang

Abstract:

One good method for seismic response analysis is direct time integration, which widely adopts Rayleigh damping. An approach is presented for selecting the Rayleigh damping coefficients to be used in seismic analyses so as to produce a response consistent with the modal damping response. In the presented approach, an expression for the error in peak response, obtained through the complete quadratic combination method, is set up as a function of the Rayleigh damping coefficients, and the coefficients are then produced by minimizing this error. Two finite element models of soil layers, excited by 28 seismic waves, were used to demonstrate the feasibility and validity of the approach.
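
For context, Rayleigh damping assumes C = αM + βK, so the modal damping ratio is ξ(ω) = α/(2ω) + βω/2. The common two-frequency fit below solves for α and β so that two chosen modes match a target ratio; it is shown only as a baseline, since the paper instead selects the coefficients by minimizing a CQC-based error in peak response. The frequencies and damping ratio are illustrative.

```python
import numpy as np

def rayleigh_coefficients(omega_i, omega_j, xi_i, xi_j):
    """Solve xi(omega) = alpha/(2*omega) + beta*omega/2 at two circular
    frequencies for the Rayleigh coefficients alpha and beta."""
    A = np.array([[1.0 / (2.0 * omega_i), omega_i / 2.0],
                  [1.0 / (2.0 * omega_j), omega_j / 2.0]])
    alpha, beta = np.linalg.solve(A, np.array([xi_i, xi_j]))
    return alpha, beta

# Illustrative soil-layer modes: 1.5 Hz and 6.0 Hz, 5% target modal damping.
w1, w2 = 2 * np.pi * 1.5, 2 * np.pi * 6.0
alpha, beta = rayleigh_coefficients(w1, w2, 0.05, 0.05)
print(f"alpha = {alpha:.4f} 1/s, beta = {beta:.6f} s")

# The resulting damping ratio across frequency shows the familiar over/under-damping
# away from the two anchor frequencies, which motivates the error-minimising choice.
for f in (0.5, 1.5, 3.0, 6.0, 12.0):
    w = 2 * np.pi * f
    print(f"  {f:4.1f} Hz: xi = {alpha / (2 * w) + beta * w / 2:.3f}")
```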

Keywords: Rayleigh damping, modal damping, damping coefficients, seismic response analysis.

6249 Predicting Oil Content of Fresh Palm Fruit Using Transmission-Mode Ultrasonic Technique

Authors: Sutthawee Suwannarat, Thanate Khaorapapong, Mitchai Chongcheawchamnan

Abstract:

In this paper, an ultrasonic technique is proposed to predict the oil content of a fresh palm fruit. This is accomplished by measuring the attenuation in ultrasonic transmission mode. Several palm fruit samples with oil content known from Soxhlet extraction (ISO 9001:2008) were tested with our ultrasonic measurement. Amplitude attenuation data were collected for all palm samples. Feedforward Neural Networks (FNNs) are applied to predict the oil content of the samples. The Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) of the FNN model for predicting the oil content percentage are 7.6186 and 5.2287, respectively, with a correlation coefficient (R) of 0.9193.
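
A sketch of the prediction-and-scoring step, assuming fabricated attenuation features and scikit-learn's MLPRegressor as the feedforward network; the reported RMSE, MAE, and R values of the paper come from the authors' own measurements, not from this toy data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Fabricated data: ultrasonic attenuation features vs. oil content (%) by Soxhlet.
rng = np.random.default_rng(5)
attenuation = rng.uniform(5, 40, (120, 3))                 # e.g. attenuation at 3 frequencies
oil = 60 - 0.8 * attenuation[:, 0] + 0.3 * attenuation[:, 1] + rng.normal(0, 3, 120)

X_train, X_test, y_train, y_test = train_test_split(attenuation, oil, test_size=0.25, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)

rmse = mean_squared_error(y_test, pred) ** 0.5
mae = mean_absolute_error(y_test, pred)
r = np.corrcoef(y_test, pred)[0, 1]
print(f"RMSE = {rmse:.3f}, MAE = {mae:.3f}, R = {r:.3f}")
```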

Keywords: Non-destructive, ultrasonic testing, oil content, fresh palm fruit, neural network.

6248 A C1-Conforming Finite Element Method for Nonlinear Fourth-Order Hyperbolic Equation

Authors: Yang Liu, Hong Li, Siriguleng He, Wei Gao, Zhichao Fang

Abstract:

In this paper, the C1-conforming finite element method is analyzed for a class of nonlinear fourth-order hyperbolic partial differential equation. Some a priori bounds are derived using Lyapunov functional, and existence, uniqueness and regularity for the weak solutions are proved. Optimal error estimates are derived for both semidiscrete and fully discrete schemes.

Keywords: Nonlinear fourth-order hyperbolic equation, Lyapunov functional, existence, uniqueness and regularity, conforming finite element method, optimal error estimates.

6247 From Electroencephalogram to Epileptic Seizures Detection by Using Artificial Neural Networks

Authors: Gaetano Zazzaro, Angelo Martone, Roberto V. Montaquila, Luigi Pavone

Abstract:

Seizures are the main factor affecting the quality of life of epileptic patients. The diagnosis of epilepsy, and hence the identification of the epileptogenic zone, is commonly made using continuous Electroencephalogram (EEG) signal monitoring. Seizure identification on EEG signals is done manually by epileptologists, and this process is usually very long and error-prone. The aim of this paper is to describe an automated method able to detect seizures in EEG signals, using the knowledge discovery in databases process and data mining methods and algorithms, which can support physicians during the seizure detection process. Our detection method is based on an Artificial Neural Network classifier, trained with the multilayer perceptron algorithm, and on a software application, called Training Builder, that has been developed for the massive extraction of features from EEG signals. This tool covers all the data preparation steps, ranging from signal processing to data analysis techniques, including the sliding window paradigm, dimensionality reduction algorithms, information theory, and feature selection measures. The final model shows excellent performance, reaching an accuracy of over 99% during tests on data from a single patient retrieved from a publicly available EEG dataset.
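
A minimal sketch of the sliding-window feature extraction and ANN classification pipeline described above, using synthetic signals, a handful of simple per-window features, and scikit-learn's MLPClassifier; the window length, features, and data are assumptions and do not reproduce the Training Builder tool.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

FS = 256            # sampling rate (Hz), assumed
WIN = 2 * FS        # 2-second sliding window

def window_features(segment):
    """A few simple per-window features (mean, std, line length, spectral energy)."""
    return [segment.mean(), segment.std(),
            np.sum(np.abs(np.diff(segment))),
            np.sum(np.abs(np.fft.rfft(segment)) ** 2) / len(segment)]

def extract(signal, label):
    feats = [window_features(signal[s:s + WIN]) for s in range(0, len(signal) - WIN + 1, WIN)]
    return np.array(feats), np.full(len(feats), label)

# Synthetic "EEG": background activity vs. higher-amplitude rhythmic (seizure-like) segments.
rng = np.random.default_rng(6)
t = np.arange(60 * FS) / FS
background = rng.normal(0, 1, t.size)
seizure = 3 * np.sin(2 * np.pi * 5 * t) + rng.normal(0, 1, t.size)

X0, y0 = extract(background, 0)
X1, y1 = extract(seizure, 1)
X, y = np.vstack([X0, X1]), np.concatenate([y0, y1])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
print(f"window-level accuracy: {accuracy_score(y_te, clf.predict(X_te)):.3f}")
```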

Keywords: Artificial Neural Network, Data Mining, Electroencephalogram, Epilepsy, Feature Extraction, Seizure Detection, Signal Processing.

6246 Fast Document Segmentation Using Contour and X-Y Cut Technique

Authors: Boontee Kruatrachue, Narongchai Moongfangklang, Kritawan Siriboon

Abstract:

This paper describes a fast and efficient method for page segmentation of documents containing non-rectangular blocks. The segmentation is based on an edge-following algorithm using a small window of 16 by 32 pixels. The segmentation is very fast since only the border pixels of a paragraph are used, without scanning the whole page. Still, the segmentation may contain errors if the space between blocks is smaller than the window used in edge following. Consequently, this paper reduces this error by first identifying the missed segmentation points using direction information from edge following and then applying an X-Y cut at the missed segmentation points to separate the connected columns. The advantage of the proposed method is the fast identification of missed segmentation points. This methodology is faster, with less overhead, than other algorithms that need to access many more pixels of a document.
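
A compact recursive X-Y cut, the technique invoked above at the missed segmentation points: the binary page is split at the widest whitespace gap of its horizontal or vertical projection profile until no gap wider than a threshold remains. The gap threshold and toy page are assumptions; the contour/edge-following stage is not reproduced.

```python
import numpy as np

def xy_cut(page, min_gap=10, top=0, left=0, blocks=None):
    """Recursive X-Y cut: split a binary image (1 = ink) at horizontal and
    vertical whitespace gaps wider than min_gap; leaves are returned as
    (row, col, height, width) blocks."""
    if blocks is None:
        blocks = []
    rows = page.sum(axis=1)     # horizontal projection profile
    cols = page.sum(axis=0)     # vertical projection profile

    def largest_gap(profile):
        best_start, best_len, start = -1, 0, None
        for i, v in enumerate(profile):
            if v == 0:
                start = i if start is None else start
            elif start is not None:
                if i - start > best_len:
                    best_start, best_len = start, i - start
                start = None
        return (best_start, best_len) if best_len >= min_gap else None

    for axis, profile in ((0, rows), (1, cols)):
        gap = largest_gap(profile)
        if gap:
            s, length = gap
            cut = s + length // 2
            if axis == 0:
                xy_cut(page[:cut], min_gap, top, left, blocks)
                xy_cut(page[cut:], min_gap, top + cut, left, blocks)
            else:
                xy_cut(page[:, :cut], min_gap, top, left, blocks)
                xy_cut(page[:, cut:], min_gap, top, left + cut, blocks)
            return blocks
    if page.any():
        blocks.append((top, left, page.shape[0], page.shape[1]))
    return blocks

# Toy page: two "columns" of ink separated by a wide vertical gap.
page = np.zeros((100, 120), dtype=int)
page[10:90, 5:50] = 1
page[10:90, 70:115] = 1
print(xy_cut(page))
```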

Keywords: Contour Direction Technique, Missed Segmentation Points, Page Segmentation, Recursive X-Y Cut Technique

6245 Effects of Canned Cycles and Cutting Parameters on Hole Quality in Cryogenic Drilling of Aluminum 6061-T6

Authors: M. N. Islam, B. Boswell, Y. R. Ginting

Abstract:

The influence of canned cycles and cutting parameters on hole quality in cryogenic drilling has been investigated experimentally and analytically. A three-level, three-parameter experiment was conducted using the design-of-experiments methodology. The three levels of the independent input parameters were as follows: for canned cycles, a chip-breaking canned cycle (G73), a spot drilling canned cycle (G81), and a deep hole canned cycle (G83); for feed rate, 0.2, 0.3, and 0.4 mm/rev; and for cutting speed, 60, 75, and 100 m/min. The selected work and tool materials were aluminum 6061-T6 and high-speed steel (HSS), respectively. For cryogenic cooling, liquid nitrogen (LN2) was applied externally. The measured output parameters were three widely used quality characteristics of drilled holes: diameter error, circularity, and surface roughness. Pareto ANOVA was applied to analyze the results. The findings revealed that the canned cycle has a significant effect on diameter error (contribution ratio 44.09%) and small effects on circularity and surface finish (contribution ratios 7.25% and 6.60%, respectively). The best results for dimensional accuracy and surface roughness were achieved by G81. G73 produced the best circularity results but was the worst level for dimensional accuracy.
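
A sketch of how Pareto ANOVA contribution ratios such as those quoted above can be computed: each factor's between-level sum of squares is expressed as a percentage of the total sum of squares of the response. The orthogonal-array layout and responses below are fabricated, not the measured hole-quality data of the study.

```python
import numpy as np

def pareto_anova(levels, response):
    """Contribution ratio (%) of each factor: between-level sum of squares
    divided by the total sum of squares of the response."""
    response = np.asarray(response, dtype=float)
    ss_total = np.sum((response - response.mean()) ** 2)
    ratios = {}
    for factor, lv in levels.items():
        lv = np.asarray(lv)
        ss = sum(np.sum(lv == v) * (response[lv == v].mean() - response.mean()) ** 2
                 for v in np.unique(lv))
        ratios[factor] = 100.0 * ss / ss_total
    return ratios

# Fabricated L9-style design: three factors at three levels, response = diameter error (um).
levels = {
    "canned cycle": ["G73", "G73", "G73", "G81", "G81", "G81", "G83", "G83", "G83"],
    "feed (mm/rev)": [0.2, 0.3, 0.4, 0.2, 0.3, 0.4, 0.2, 0.3, 0.4],
    "speed (m/min)": [60, 75, 100, 75, 100, 60, 100, 60, 75],
}
response = [14, 15, 17, 9, 10, 11, 12, 13, 15]

for factor, ratio in pareto_anova(levels, response).items():
    print(f"{factor:>14}: contribution ratio = {ratio:.1f}%")
```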

Keywords: Circularity, diameter error, drilling canned cycle, Pareto ANOVA, surface roughness.

6244 A Control Strategy Based on UTT and ISCT for 3P4W UPQC

Authors: Yash Pal, A. Swarup, Bhim Singh

Abstract:

This paper presents a novel control strategy for a three-phase four-wire Unified Power Quality Conditioner (UPQC) for improvement in power quality. The UPQC is realized by integration of series and shunt active power filters (APFs) sharing a common DC bus capacitor. The shunt APF is realized using a three-phase, four-leg voltage source inverter (VSI), and the series APF is realized using a three-phase, three-leg VSI. A control technique based on the unit vector template technique (UTT) is used to obtain the reference signals for the series APF, while the instantaneous sequence component theory (ISCT) is used for control of the shunt APF. The performance of the implemented control algorithm is evaluated in terms of power-factor correction, load balancing, neutral source current mitigation, and mitigation of voltage and current harmonics, voltage sag, and swell in a three-phase four-wire distribution system for different combinations of linear and non-linear loads. In the proposed UPQC control scheme, the current/voltage control is applied to the fundamental supply currents/voltages instead of the fast-changing APF currents/voltages, thereby reducing the computational delay and the number of required sensors. MATLAB/Simulink based simulations, which support the functionality of the UPQC, are presented.

Keywords: Power Quality, UPQC, Harmonics, Load Balancing, Power Factor Correction, voltage harmonic mitigation, current harmonic mitigation, voltage sag, swell

6243 Using the Monte Carlo Simulation to Predict the Assembly Yield

Authors: C. Chahin, M. C. Hsu, Y. H. Lin, C. Y. Huang

Abstract:

Electronics products that achieve high levels of integrated communications, computing, and entertainment, with multimedia features in small, stylish, and robust new form factors, are winning in the marketplace. Because of the high costs an industry may incur, and because high yield is directly proportional to high profits, IC (Integrated Circuit) manufacturers struggle to maximize yield; but today's customers demand miniaturization, low costs, high performance, and excellent reliability, making yield maximization a never-ending pursuit of an enhanced assembly process. With factors such as minimum tolerances and tighter parameter variations, a systematic approach is needed in order to predict the assembly process. In order to evaluate the quality of upcoming circuits, yield models are used, which not only predict manufacturing costs but also provide vital information to ease the process of correction when yields fall below expectations. For an IC manufacturer to obtain higher assembly yields, all factors such as boards, placement, components, the material from which the components are made, and processes must be taken into consideration. Effective placement yield depends heavily on machine accuracy and on the vision system, which needs the ability to recognize the features on the board and component in order to place the device accurately on the pads and bumps of the PCB. There are currently two methods for accurate positioning: using the edge of the package and using solder ball locations, also called footprints. The only assumption that a yield model makes is that all boards and devices are completely functional. This paper focuses on the Monte Carlo method, which consists of a class of computational algorithms that depend on repeated random sampling to compute their results. This method is utilized to simulate the placement and assembly processes within a production line.
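
As a toy illustration of the Monte Carlo idea described above, the sketch below draws random placement offsets (machine accuracy combined with vision error) and counts the fraction of boards on which every component lands within the pad tolerance, which is the estimated placement yield. All distributions and tolerances are invented.

```python
import numpy as np

def placement_yield(n_boards=20_000, components_per_board=100,
                    machine_sigma=0.020, vision_sigma=0.010, tolerance=0.075):
    """Monte Carlo estimate of assembly yield: a board passes if every component's
    placement offset (mm) stays within the pad tolerance in both X and Y.
    Distributions and tolerance values are illustrative only."""
    rng = np.random.default_rng(7)
    sigma = np.hypot(machine_sigma, vision_sigma)          # combined placement error
    offsets = rng.normal(0.0, sigma, (n_boards, components_per_board, 2))
    component_ok = np.all(np.abs(offsets) <= tolerance, axis=2)
    board_ok = component_ok.all(axis=1)
    return board_ok.mean()

print(f"estimated assembly yield: {placement_yield():.4%}")
```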

Keywords: Monte Carlo simulation, placement yield, PCB characterization, electronics assembly

6242 Packet Losses Interpretation in Mobile Internet

Authors: Hossam el-ddin Mostafa, Pavel Čičak

Abstract:

Mobile users with laptops need efficient access to, for example, their home personal data or the Internet from any place in the world, regardless of their location or point of attachment, especially while roaming outside the home subnet. An efficient interpretation of the packet loss problem encountered during this roaming is central to all aspects of this work. The main previous works considered in conjunction with the problem under study, such as BER-systems, Amigos, and the ns-2 implementation, are reviewed and discussed. Their drawbacks and limitations, namely stopping at monitoring and not providing an actual solution for eliminating or even restricting these losses, are pointed out. In addition, we present the framework around which we built a Triple-R sequence as a cost-effective solution to eliminate the packet losses and bridge the gap between subnets, an area that until now has been largely neglected. The results show that, in addition to the high bit error rate of wireless mobile networks, it is mainly the low efficiency of the Mobile IP registration procedure that directly causes these packet losses. Furthermore, the packet loss interpretation resulted in an illustrated triangle of the registration process. This triangle should be further researched and analyzed in our future work.

Keywords: Amigos, BER-systems, ns-2 implementation, packet losses, registration process, roaming.

6241 Order Reduction by Least-Squares Methods about General Point 'a'

Authors:

Abstract:

The concept of order reduction by least-squares moment matching and generalized least-squares methods has been extended about a general point 'a' to obtain reduced-order models for linear, time-invariant dynamic systems. Some heuristic criteria have been employed for selecting the linear shift point 'a', based upon the means (arithmetic, harmonic, and geometric) of the real parts of the poles of the high-order system. It is shown that the resultant model depends critically on the choice of the linear shift point 'a'. The validity of the criteria is illustrated by solving a numerical example, and the results are compared with other existing techniques.
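
A sketch of the heuristic shift-point selection mentioned above: 'a' is taken as the arithmetic, harmonic, or geometric mean of the magnitudes of the real parts of the high-order system's poles. The example denominator is invented, and the least-squares moment-matching reduction itself is not reproduced.

```python
import numpy as np

def shift_point_candidates(poles):
    """Candidate linear shift points 'a' from the real parts of the poles,
    following the arithmetic/harmonic/geometric-mean heuristics."""
    r = np.abs(np.real(poles))
    r = r[r > 0]                         # ignore poles on the imaginary axis
    return {
        "arithmetic mean": r.mean(),
        "harmonic mean": len(r) / np.sum(1.0 / r),
        "geometric mean": float(np.exp(np.mean(np.log(r)))),
    }

# Poles of an invented 4th-order stable system: (s+1)(s+2)(s+5-2j)(s+5+2j).
denominator = np.poly([-1.0, -2.0, -5.0 + 2j, -5.0 - 2j])
poles = np.roots(denominator)
for name, a in shift_point_candidates(poles).items():
    print(f"{name}: a = {a:.3f}")
```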

Keywords: Integral square error, Least-squares, Markov parameters, Moment matching, Order reduction.

6240 Optimization of Electromagnetic Interference Measurement by Convolutional Neural Network

Authors: Hussam Elias, Ninovic Perez, Holger Hirsch

Abstract:

With the ever-increasing use of equipment, devices, or, more generally, any electrical or electronic system, the chance of electromagnetic incompatibility incidents has considerably increased, which demands more attention to the possible risks of these technologies. Therefore, complying with certain electromagnetic compatibility (EMC) rules and not exceeding an acceptable level of radiated emissions are of utmost importance for the diffusion of electronic products. In this paper, a purpose-developed measurement tool and a convolutional neural network were used to propose a method that reduces the time required to carry out the final measurement phase of electromagnetic interference (EMI) measurement according to the standard EN 55032, by predicting the radiated emission and determining the antenna height that yields the maximum radiation value.
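
A minimal sketch of the kind of convolutional regressor the abstract describes, assuming PyTorch, a 256-point pre-scan emission trace as input, and two regression targets (maximum radiated emission and the corresponding antenna height); the architecture and the synthetic training pairs are illustrative assumptions, not the authors' network or data.

```python
import torch
from torch import nn

# Minimal 1D CNN that maps a swept-frequency emission trace (pre-scan) to two
# outputs: the predicted maximum radiated emission and the antenna height at
# which it occurs. Architecture, input length and training data are assumptions.
class EmissionCNN(nn.Module):
    def __init__(self, n_points=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(8, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(16 * (n_points // 16), 32),
                                  nn.ReLU(), nn.Linear(32, 2))

    def forward(self, x):                 # x: (batch, 1, n_points)
        return self.head(self.features(x))

# Synthetic training pairs stand in for measured pre-scan traces and final-scan labels.
x = torch.randn(64, 1, 256)
y = torch.stack([x.amax(dim=(1, 2)), 1.0 + 3.0 * torch.rand(64)], dim=1)

model = EmissionCNN()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()                     # mean absolute error, the metric named in the keywords

for epoch in range(200):
    optimiser.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimiser.step()
print(f"final training MAE: {loss.item():.3f}")
```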

Keywords: Antenna height, Convolutional Neural Network, Electromagnetic Compatibility, Mean Absolute Error, position error.

6239 The Multi-objective Optimization for the SLS Process Parameters Based on Analytic Hierarchy Process

Authors: Yang Laixia, Deng Jun, Li Dichen, Bai Yang

Abstract:

The forming process parameters of Selective Laser Sintering (SLS) directly affect forming efficiency and forming quality; therefore, determining reasonable process parameters is particularly important. In this paper, the weight of each forming quality and efficiency objective is first calculated with the Analytic Hierarchy Process, and the value of each objective is then measured by orthogonal experiments. Finally, the weighted sums of the objectives are compared across the parameter groups to obtain the optimal molding process parameters.
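
A sketch of the two steps described above: AHP weights are obtained as the normalized principal eigenvector of a pairwise comparison matrix (with Saaty's consistency check), and the parameter group with the best weighted sum of objective scores is selected. The comparison matrix, objectives, and orthogonal-test scores are invented.

```python
import numpy as np

def ahp_weights(pairwise):
    """Weights = normalised principal eigenvector of the pairwise comparison
    matrix; also returns Saaty's consistency ratio (random index 0.58 for n = 3)."""
    pairwise = np.asarray(pairwise, dtype=float)
    eigvals, eigvecs = np.linalg.eig(pairwise)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    n = len(pairwise)
    ci = (eigvals[k].real - n) / (n - 1)
    return w, ci / 0.58

# Invented pairwise comparisons for three objectives: density, strength, build time.
A = [[1, 3, 5],
     [1/3, 1, 2],
     [1/5, 1/2, 1]]
weights, cr = ahp_weights(A)
print("weights:", np.round(weights, 3), " consistency ratio:", round(cr, 3))

# Orthogonal-test scores (rows = parameter groups, columns = objectives, invented,
# already normalised so that larger is better); pick the group with the best weighted sum.
scores = np.array([[0.70, 0.60, 0.80],
                   [0.85, 0.55, 0.65],
                   [0.60, 0.90, 0.70]])
best = int(np.argmax(scores @ weights))
print("best parameter group:", best + 1)
```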

Keywords: Analytic Hierarchy Process, Multi-objective optimization, Orthogonal test, Selective Laser Sintering
