Search results for: normalized normal constraint method

8871 Supplier Selection by Bi-Objectives Mixed Integer Program Approach

Authors: K.-H. Yang

Abstract:

In the past, many excellent studies were conducted on topics related to supplier selection. Because the factors considered in supplier selection are complex and difficult to quantify, most researchers address supplier selection with qualitative approaches. Compared to qualitative approaches, quantitative approaches have seen less application in the real world. This study applies a quantitative approach to a supplier selection problem that considers operation cost and delivery reliability. Using these factors, it applies the normalized normal constraint method to solve the bi-objective mixed integer program of the supplier selection problem.

Keywords: Bi-objectives MIP, normalized normal constraint method, supplier selection, quantitative approach.
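
Since the normalized normal constraint method recurs throughout these results, a minimal sketch may help: the toy below normalizes both objectives with their anchor (single-objective) solutions, walks evenly spaced points along the utopia line, and imposes the normal-line inequality before each single-objective solve. The two-variable LP, its costs, and its constraints are invented for illustration; the paper's supplier-selection MIP (with integer variables) is not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

c1 = np.array([1.0, 0.0])           # objective 1 (e.g., cost proxy)
c2 = np.array([0.0, 1.0])           # objective 2 (e.g., unreliability proxy)
A_base = np.array([[-1.0, -1.0]])   # x1 + x2 >= 1 couples the objectives
b_base = np.array([-1.0])
bounds = [(0, 1), (0, 1)]

def solve(c, A_extra=None, b_extra=None):
    A = A_base if A_extra is None else np.vstack([A_base, A_extra])
    b = b_base if b_extra is None else np.hstack([b_base, b_extra])
    return linprog(c, A_ub=A, b_ub=b, bounds=bounds).x

f = lambda x: np.array([c1 @ x, c2 @ x])
mu1, mu2 = f(solve(c1)), f(solve(c2))   # anchor points of each objective
utopia = np.minimum(mu1, mu2)
span = np.maximum(mu1, mu2) - utopia    # scaling for normalized objectives
nbar1, nbar2 = (mu1 - utopia) / span, (mu2 - utopia) / span
N = nbar2 - nbar1                       # utopia-line direction
w = N / span                            # expresses N . fbar(x) in raw f
rows = np.vstack([c1, c2])              # f(x) = rows @ x

pareto = []
for alpha in np.linspace(0, 1, 11):     # evenly spaced utopia-line points
    Xp = alpha * nbar1 + (1 - alpha) * nbar2
    A_extra = (w @ rows)[None, :]       # normal constraint N.(fbar - Xp) <= 0
    b_extra = np.array([N @ Xp + w @ utopia])
    pareto.append(f(solve(c2, A_extra, b_extra)))
print(np.round(pareto, 3))              # evenly spread Pareto points
```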

8870 Discovery of Sequential Patterns Based On Constraint Patterns

Authors: Shigeaki Sakurai, Youichi Kitahata, Ryohei Orihara

Abstract:

This paper proposes a method that discovers sequential patterns corresponding to a user's interests from sequential data. The method expresses the interests as constraint patterns, which can define relationships among the attributes of the items composing the data. The method recursively decomposes the constraint patterns into constraint subpatterns and evaluates these subpatterns in order to efficiently discover sequential patterns satisfying the constraint patterns. The paper also applies the method to sequential data composed of stock price indexes and verifies its effectiveness by comparing it with a method that does not use constraint patterns.

Keywords: Sequential pattern mining, Constraint pattern, Attribute constraint, Stock price indexes
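
As a hedged illustration of constraint patterns (not the authors' recursive decomposition), the sketch below expresses a pattern as a sequence of attribute predicates and counts the fraction of sequences containing a matching subsequence; the toy stock-move data and the predicate form are assumptions.

```python
from typing import Callable, Sequence

Item = dict                                   # an item = attribute values
Pattern = Sequence[Callable[[Item], bool]]    # one predicate per position

def matches(seq: Sequence[Item], pattern: Pattern) -> bool:
    """True if the predicates hold on some subsequence of seq, in order."""
    i = 0
    for item in seq:
        if i < len(pattern) and pattern[i](item):
            i += 1
    return i == len(pattern)

def support(db, pattern):
    return sum(matches(seq, pattern) for seq in db) / len(db)

# toy stock-index sequences: each item has a 'move' attribute
db = [
    [{"move": "up"}, {"move": "up"}, {"move": "down"}],
    [{"move": "down"}, {"move": "up"}, {"move": "up"}],
    [{"move": "down"}, {"move": "down"}, {"move": "up"}],
]
# constraint pattern: "two rises, then a fall"
pat = [lambda it: it["move"] == "up",
       lambda it: it["move"] == "up",
       lambda it: it["move"] == "down"]
print(support(db, pat))   # 1/3 of the sequences satisfy the pattern
```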

8869 Subband Adaptive Filter Exploiting Sparsity of System

Authors: Young-Seok Choi

Abstract:

This paper presents a normalized subband adaptive filtering (NSAF) algorithm to cope with the sparsity of an underlying system in the context of compressive sensing. By regularizing a weighted l1-norm of the filter tap estimate in the cost function of the NSAF and applying a subgradient analysis, the update recursion of the l1-norm constraint NSAF is derived. Considering two distinct weighted l1-norm regularization cases, two versions of the l1-norm constraint NSAF are presented. Simulation results clearly indicate the superior performance of the proposed l1-norm constraint NSAFs compared with the classical NSAF.

Keywords: Subband adaptive filtering, sparsity constraint, weighted l1-norm.
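
The subband machinery of the NSAF is beyond a short sketch, but the sparsity mechanism can be illustrated on its fullband analogue: an NLMS recursion with a reweighted-l1 (zero-attracting) subgradient term. The step sizes and attractor weights below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 64
h = np.zeros(M); h[[5, 20, 44]] = [1.0, -0.5, 0.3]   # sparse unknown system

mu, eps, rho, delta = 0.5, 1e-3, 5e-4, 0.01          # illustrative constants
w = np.zeros(M)
x_buf = np.zeros(M)

for n in range(5000):
    x_buf = np.roll(x_buf, 1); x_buf[0] = rng.standard_normal()
    d = h @ x_buf + 0.01 * rng.standard_normal()     # noisy desired signal
    e = d - w @ x_buf
    w += mu * e * x_buf / (x_buf @ x_buf + eps)      # NLMS term
    w -= rho * np.sign(w) / (1.0 + np.abs(w) / delta)  # reweighted l1 attractor

print("misalignment (dB):",
      round(10 * np.log10(np.sum((h - w) ** 2) / np.sum(h ** 2)), 1))
```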

8868 Bi-Directional Evolutionary Topology Optimization Based on Critical Fatigue Constraint

Authors: Khodamorad Nabaki, Jianhu Shen, Xiaodong Huang

Abstract:

This paper develops a method for treating the critical fatigue stress as a constraint in the bi-directional evolutionary structural optimization (BESO) method. Our aim is to reach an optimal design in which high-cycle fatigue failure does not occur within a specified lifetime. The critical fatigue stress is calculated based on the modified Goodman criterion and used as a stress constraint in our topology optimization problem. Since fatigue generally does not occur under compressive stresses, we use a p-norm stress measure that takes the highest tensile principal stress at each point to calculate the sensitivity numbers. The BESO method has been extended to minimize the volume of an object subject to the critical fatigue stress constraint. The optimization results are compared with those from the compliance minimization problem, which clearly shows the merits of our newly developed approach.

Keywords: Topology optimization, BESO method, p-norm, fatigue constraint.
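
As a hedged post-processing sketch of the stress measure described above: the largest tensile principal stress per element is aggregated with a p-norm and compared against a modified-Goodman allowable amplitude. The finite element analysis and the BESO update loop are omitted, and all material constants are invented for illustration.

```python
import numpy as np

def max_tensile_principal(sx, sy, txy):
    """Largest principal stress (plane stress), clipped at zero (tension only)."""
    c = 0.5 * (sx + sy)
    r = np.sqrt((0.5 * (sx - sy)) ** 2 + txy ** 2)
    return np.maximum(c + r, 0.0)

def goodman_allowable(sigma_mean, s_e=200.0, s_u=600.0):
    """Modified Goodman: allowable alternating stress for a given mean stress."""
    return s_e * (1.0 - np.clip(sigma_mean, 0.0, s_u) / s_u)

def pnorm_measure(sigma_a, sigma_allow, p=8):
    """Smooth maximum of the stress ratios; constraint satisfied if <= 1."""
    return (np.mean((sigma_a / sigma_allow) ** p)) ** (1.0 / p)

# toy per-element stress state at the alternating-load extreme (MPa)
sx = np.array([120.0, 80.0, -50.0])
sy = np.array([30.0, 90.0, -40.0])
txy = np.array([20.0, 10.0, 5.0])
sigma_a = max_tensile_principal(sx, sy, txy)        # tensile-only measure
allow = goodman_allowable(np.array([50.0, 60.0, 0.0]))
print("p-norm fatigue measure:", round(pnorm_measure(sigma_a, allow), 3))
```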

8867 A Kernel Based Rejection Method for Supervised Classification

Authors: Abdenour Bounsiar, Edith Grall, Pierre Beauseroy

Abstract:

In this paper we are interested in classification problems with a performance constraint on error probability. In such problems, if the constraint cannot be satisfied, a rejection option is introduced. For binary labelled classification, a number of SVM-based methods with a rejection option have been proposed over the past few years. All of these methods use two thresholds on the SVM output. However, in previous work we have shown on synthetic data that using thresholds on the output of the optimal SVM may lead to poor results for classification tasks with a performance constraint. In this paper a new method for supervised classification with a rejection option is proposed. It consists of two classifiers jointly optimized to minimize the rejection probability subject to a given constraint on the error rate. The method uses a new kernel-based linear learning machine that we have recently presented; this learning machine is characterized by its simplicity and high training speed, which makes the simultaneous optimization of the two classifiers computationally reasonable. The proposed classification method with a rejection option is compared to an SVM-based rejection method from the recent literature. Experiments show the superiority of the proposed method.

Keywords: Rejection, Chow's rule, error-reject tradeoff, Support Vector Machine.
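
A hedged sketch of the error-reject tradeoff the abstract refers to: choose two thresholds on a classifier score so as to minimize the rejection rate subject to an error-rate constraint. This replaces the paper's joint optimization of two kernel machines with a simple grid search over one score, purely for illustration.

```python
import numpy as np

def tune_reject_thresholds(scores, labels, max_error=0.02, grid=51):
    """labels in {0,1}; predict 1 if s >= hi, 0 if s <= lo, reject otherwise."""
    qs = np.quantile(scores, np.linspace(0, 1, grid))
    best = None
    for lo in qs:
        for hi in qs[qs >= lo]:
            accept = (scores >= hi) | (scores <= lo)
            if not accept.any():
                continue
            pred = (scores >= hi).astype(int)
            err = np.mean(pred[accept] != labels[accept])
            rej = 1.0 - accept.mean()
            if err <= max_error and (best is None or rej < best[0]):
                best = (rej, lo, hi)
    return best   # (rejection rate, lower threshold, upper threshold)

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 2000)
s = rng.normal(loc=y.astype(float), scale=0.6)   # overlapping class scores
print(tune_reject_thresholds(s, y))
```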

8866 New Data Reuse Adaptive Filters with Noise Constraint

Authors: Young-Seok Choi

Abstract:

We present a new framework for data-reusing (DR) adaptive algorithms that incorporates a constraint on the noise, referred to as a noise constraint. The motivation behind this work is that statistical knowledge of the channel noise can help improve the convergence performance of an adaptive filter in identifying a noisy linear finite impulse response (FIR) channel. By incorporating the noise constraint into the cost function of the DR adaptive algorithms, the noise-constrained DR (NC-DR) adaptive algorithms are derived. Experimental results clearly indicate their superior performance over the conventional DR algorithms.

Keywords: Adaptive filter, data-reusing, least-mean square (LMS), affine projection (AP), noise constraint.
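
One published form of the noise-constrained LMS drives a Lagrange-multiplier-like state with e^2 minus the known noise variance and uses it to scale the step size; the sketch below follows that pattern with illustrative constants and omits the data-reusing (multiple updates per sample) iterations, so it should be read as a rough analogue rather than the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
M, var_v = 32, 1e-3
h = rng.standard_normal(M) / np.sqrt(M)        # unknown FIR channel

mu0, beta, gamma = 0.01, 0.02, 1.0             # illustrative constants
w, lam = np.zeros(M), 0.0
x_buf = np.zeros(M)

for n in range(20000):
    x_buf = np.roll(x_buf, 1); x_buf[0] = rng.standard_normal()
    d = h @ x_buf + np.sqrt(var_v) * rng.standard_normal()
    e = d - w @ x_buf
    lam = (1 - beta) * lam + beta * (e * e - var_v)   # noise-constraint state
    w += mu0 * (1 + gamma * lam) * e * x_buf          # scaled LMS update

print("misalignment (dB):",
      round(10 * np.log10(np.sum((h - w) ** 2) / np.sum(h ** 2)), 1))
```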

8865 3D Model Retrieval based on Normal Vector Interpolation Method

Authors: Ami Kim, Oubong Gwun, Juwhan Song

Abstract:

In this paper, we propose the distribution of mesh normal vector directions as a feature descriptor of a 3D model. Normal vectors capture the overall shape of a model well. The distribution of normal vectors is sampled in proportion to each polygon's area, so that surface regions with smaller area contribute less to the feature descriptor, which enhances retrieval performance. ANMRR analysis indicates an improvement of approximately 12.4%-34.7% over the existing method.

Keywords: Interpolated Normal Vector, Feature Descriptor, 3D Model Retrieval.
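
A minimal sketch of an area-weighted normal-direction histogram as a shape descriptor, in the spirit of the abstract: facet normals are binned on the sphere with weights proportional to facet area. The bin counts are arbitrary choices, and the paper's interpolation step is not reproduced.

```python
import numpy as np

def normal_histogram(vertices, faces, n_theta=8, n_phi=16):
    v, f = np.asarray(vertices, float), np.asarray(faces)
    cross = np.cross(v[f[:, 1]] - v[f[:, 0]], v[f[:, 2]] - v[f[:, 0]])
    area = 0.5 * np.linalg.norm(cross, axis=1)
    n = cross / np.linalg.norm(cross, axis=1, keepdims=True)
    theta = np.arccos(np.clip(n[:, 2], -1, 1))            # polar angle
    phi = np.mod(np.arctan2(n[:, 1], n[:, 0]), 2 * np.pi)
    hist, _, _ = np.histogram2d(theta, phi,
                                bins=[n_theta, n_phi],
                                range=[[0, np.pi], [0, 2 * np.pi]],
                                weights=area)             # area-weighted bins
    return (hist / hist.sum()).ravel()                    # normalized descriptor

# toy tetrahedron
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
tris = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
print(normal_histogram(verts, tris).shape)   # (128,) feature vector
```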

8864 Some New Bounds for a Real Power of the Normalized Laplacian Eigenvalues

Authors: Ayşe Dilek Maden

Abstract:

For a given simple connected graph, we present some new bounds, via a new approach, for a special topological index given by the sum of a real power of the non-zero normalized Laplacian eigenvalues. This approach has the advantage not only of deriving old and new bounds on this topic but also of suggesting how some previous results in similar areas can be developed.

Keywords: Degree Kirchhoff index, normalized Laplacian eigenvalue, spanning tree.
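
A short numerical illustration of the quantity the bounds concern, the sum of a real power alpha of the non-zero normalized Laplacian eigenvalues, on a toy path graph; alpha = -1 relates the sum to the degree Kirchhoff index through the factor 2m.

```python
import numpy as np

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # adjacency of the path P4
d = A.sum(axis=1)
Dih = np.diag(1.0 / np.sqrt(d))
L = np.eye(len(A)) - Dih @ A @ Dih          # normalized Laplacian

lam = np.linalg.eigvalsh(L)
nonzero = lam[lam > 1e-10]                  # drop the zero eigenvalue
alpha = -1.0
s = np.sum(nonzero ** alpha)                # the bounded quantity, alpha = -1
print("sum of eigenvalue powers:", round(s, 4))
print("degree Kirchhoff index 2m*s:", round(A.sum() * s, 4))
```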

8863 Quick Spatial Assessment of Drought Information Derived from MODIS Imagery Using Amplitude Analysis

Authors: Meng-Lung Lin, Qiubing Wang, Fujun Sun, Tzu-How Chu, Yi-Shiang Shiu

Abstract:

The normalized difference vegetation index (NDVI) and normalized difference moisture index (NDMI) derived from the moderate resolution imaging spectroradiometer (MODIS) have been widely used to identify spatial information on drought conditions. The relationship between NDVI and NDMI was analyzed using Pearson correlation analysis and showed a strong positive relationship. The drought indices detected drought conditions and identified the spatial extent of drought. A comparison between a normal year and a drought year demonstrates that amplitude analysis, which considers both vegetation and moisture conditions, is an effective method to identify drought conditions. We propose that amplitude analysis is useful for quick spatial assessment of drought information at a regional scale.

Keywords: NDVI, NDMI, Drought, remote sensing, spatial assessment.
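
A minimal sketch of the two indices, computed from reflectance bands in the MODIS band roles (red, NIR, SWIR); the paper's amplitude analysis is reduced here to the Pearson correlation step it reports, and the raster data are random stand-ins.

```python
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red + 1e-12)

def ndmi(nir, swir):
    return (nir - swir) / (nir + swir + 1e-12)

rng = np.random.default_rng(3)
red = rng.uniform(0.02, 0.15, (100, 100))    # stand-in reflectance rasters
nir = rng.uniform(0.20, 0.50, (100, 100))
swir = rng.uniform(0.10, 0.40, (100, 100))

v, m = ndvi(nir, red), ndmi(nir, swir)
r = np.corrcoef(v.ravel(), m.ravel())[0, 1]  # Pearson correlation of indices
print("NDVI/NDMI correlation:", round(r, 3))
```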

8862 Data-Reusing Adaptive Filtering Algorithms with Adaptive Error Constraint

Authors: Young-Seok Choi

Abstract:

We present a family of data-reusing and affine projection algorithms. For identification of a noisy linear finite impulse response channel, partial knowledge of the channel, especially of the noise, can be used to improve the performance of the adaptive filter. Motivated by this fact, the proposed scheme incorporates an estimate of the noise statistics. A constraint, called the adaptive noise constraint, estimates the unknown noise power. By imposing this constraint on the cost function of the data-reusing and affine projection algorithms, a new cost function based on the adaptive noise constraint and a Lagrange multiplier is defined. Minimizing this cost function leads to the adaptive noise constrained (ANC) data-reusing and affine projection algorithms. Experimental results comparing the proposed schemes to standard data-reusing and affine projection algorithms clearly indicate their superior performance.

Keywords: Data-reusing, affine projection algorithm, error constraint, system identification.

8861 Constraint Active Contour Model with Application to Automated Three-Dimensional Airway Wall Segmentation

Authors: Kuo-Lung Lor, Chi-Hsuan Tsou, Yeun-Chung Chang, Chung-Ming Chen

Abstract:

For evaluating the severity of chronic obstructive pulmonary disease (COPD), one is interested in inspecting the airway wall thickening due to inflammation. Although airway segmentation is well developed for high-quality reconstruction, airway wall segmentation remains a challenging task. When tackling this problem as a multi-surface segmentation, the interrelation between surfaces needs to be considered. We propose a new method for three-dimensional airway wall segmentation using a spring-structured active contour model. The method incorporates the gravitational field of the image and the repelling force field of the inner lumen as soft constraints, and the geometric spring structure of the active contour as a hard constraint, to approximate a three-dimensional coupled surface ready for thickness measurements. The results show that the topology constraints of the coupled surfaces are preserved. In conclusion, our springy, soft-tissue-like structure ensures a globally optimal solution and avoids the shortcomings that follow from an inevitably improper inner-surface constraint.

Keywords: Active contour model, airway wall, COPD, geometric spring structure

8860 Rating and Generating Sudoku Puzzles Based On Constraint Satisfaction Problems

Authors: Bahare Fatemi, Seyed Mehran Kazemi, Nazanin Mehrasa

Abstract:

Sudoku is a logic-based combinatorial puzzle game that people of all ages enjoy playing. The challenging and addictive nature of this game has made it ubiquitous. Most magazines, newspapers, puzzle books, etc. publish many Sudoku puzzles every day. These puzzles often come in different levels of difficulty so that everyone, from beginner to expert, can play and enjoy the game. Generating puzzles with different levels of difficulty is a major concern of Sudoku designers. There are several works in the literature that propose ways of generating puzzles with a desired level of difficulty. In this paper, we propose a method based on constraint satisfaction problems to evaluate the difficulty of Sudoku puzzles, and then a hill climbing method to generate puzzles with different levels of difficulty. Whereas other methods are usually capable of generating puzzles with only a few difficulty levels, our method can generate puzzles with an arbitrary number of difficulty levels. We test our method by generating puzzles of different difficulty and having a group of 15 people solve all the puzzles while we record the time they spend on each puzzle.

Keywords: Constraint satisfaction problem, generating Sudoku puzzles, hill climbing.
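
As a hedged sketch of rating difficulty by CSP search effort (one plausible proxy, not necessarily the paper's rating function): count the backtracks a most-constrained-variable solver needs. A hill-climbing generator could score candidate puzzles with this rating; the generator itself is omitted.

```python
def candidates(grid, r, c):
    used = set(grid[r]) | {grid[i][c] for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    used |= {grid[br + i][bc + j] for i in range(3) for j in range(3)}
    return [v for v in range(1, 10) if v not in used]

def solve(grid, stats):
    best = None                      # most-constrained empty cell
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                cands = candidates(grid, r, c)
                if best is None or len(cands) < len(best[2]):
                    best = (r, c, cands)
    if best is None:
        return True                  # no empty cell: solved
    r, c, cands = best
    for v in cands:
        grid[r][c] = v
        if solve(grid, stats):
            return True
        grid[r][c] = 0
        stats["backtracks"] += 1     # search effort = difficulty proxy
    return False

def rate(puzzle):
    grid, stats = [row[:] for row in puzzle], {"backtracks": 0}
    solve(grid, stats)
    return stats["backtracks"]

puzzle = [[5,3,0,0,7,0,0,0,0],[6,0,0,1,9,5,0,0,0],[0,9,8,0,0,0,0,6,0],
          [8,0,0,0,6,0,0,0,3],[4,0,0,8,0,3,0,0,1],[7,0,0,0,2,0,0,0,6],
          [0,6,0,0,0,0,2,8,0],[0,0,0,4,1,9,0,0,5],[0,0,0,0,8,0,0,7,9]]
print("difficulty rating (backtracks):", rate(puzzle))
```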

8859 Robust Image Registration Based on an Adaptive Normalized Mutual Information Metric

Authors: Huda Algharib, Amal Algharib, Hanan Algharib, Ali Mohammad Alqudah

Abstract:

Image registration is an important topic for many imaging systems and computer vision applications. Standard image registration techniques, such as mutual information and normalized mutual information based methods, have limited performance because they do not consider spatial information or the relationships between neighbouring pixels or voxels. In addition, the amount of image noise may significantly affect registration accuracy. Therefore, this paper proposes an efficient method that explicitly considers the relationships between adjacent pixels: the gradient information of the reference and scene images is extracted first, and the cosine similarity of the extracted gradient information is then computed and used to improve the accuracy of the standard normalized mutual information measure. Our experimental results on different data types (CT, MRI and thermal images) show that the proposed method outperforms a number of image registration techniques in terms of accuracy.

Keywords: Image registration, mutual information, image gradients, image transformations.
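
A hedged sketch of the measure family discussed above: NMI from a joint histogram, combined with the mean cosine similarity of the image gradients. How the paper actually fuses the two terms is not specified here; the product below is an illustrative choice.

```python
import numpy as np

def nmi(a, b, bins=32):
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = h / h.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    H = lambda p: -(p[p > 0] * np.log(p[p > 0])).sum()   # Shannon entropy
    return (H(px) + H(py)) / H(pxy)                      # NMI in [1, 2]

def gradient_cosine(a, b):
    gax, gay = np.gradient(a)
    gbx, gby = np.gradient(b)
    num = gax * gbx + gay * gby
    den = np.hypot(gax, gay) * np.hypot(gbx, gby) + 1e-12
    return np.mean(np.abs(num) / den)    # mean |cos| between gradient fields

def similarity(a, b):
    return nmi(a, b) * gradient_cosine(a, b)   # illustrative fusion

rng = np.random.default_rng(4)
img = rng.random((64, 64))
print(similarity(img, img))                 # self-similarity is maximal
print(similarity(img, rng.random((64, 64))))
```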

8858 Two Area Power Systems Economic Dispatch Problem Solving Considering Transmission Capacity Constraints

Authors: M. Zarei, A. Roozegar, R. Kazemzadeh, J.M. Kauffmann

Abstract:

This paper describes an efficient and practical method for the economic dispatch problem in one- and two-area electrical power systems, considering the capacity constraint of the tie transmission line. The direct search method (DSM) is used with equality and inequality constraints on the production units and with any kind of fuel cost function. With this method, it is possible to handle several inequality constraints without difficulty, even for complex cost functions or when the derivative of the cost function is unavailable. To minimize the total number of search iterations, multi-level convergence is incorporated in the DSM. An enhanced direct search method (EDSM) for the two-area power system is investigated, and an initial step size that yields fewer iterations and hence less computation time is presented. The effect of the tie-line capacity between areas on the economic dispatch problem and on the total generation cost is studied; line compensation and combined active and reactive power dispatch are proposed to overcome the high generation costs for this multi-area system.

Keywords: Economic dispatch, Power System Operation, Direct Search Method, Transmission Capacity Constraint.
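
A hedged, one-area sketch of a derivative-free dispatch in the DSM spirit: compass search over all but one unit, with the last unit absorbing the load balance and the step halved when no move improves the cost (a crude stand-in for multi-level convergence). Cost coefficients and limits are invented.

```python
import numpy as np

a = np.array([0.008, 0.012, 0.010])   # quadratic fuel-cost coefficients
b = np.array([7.0, 6.5, 7.2])
pmin = np.array([50.0, 40.0, 30.0])
pmax = np.array([300.0, 250.0, 200.0])
demand = 450.0

def cost(p):
    if np.any(p < pmin) or np.any(p > pmax):
        return np.inf                 # infeasible point rejected outright
    return np.sum(a * p ** 2 + b * p)

def dispatch(step=50.0, tol=1e-3):
    x = (pmin[:-1] + pmax[:-1]) / 2   # free variables: first n-1 units
    full = lambda x: np.append(x, demand - x.sum())   # slack unit balances
    while step > tol:
        improved = False
        for i in range(len(x)):
            for s in (+step, -step):
                y = x.copy(); y[i] += s
                if cost(full(y)) < cost(full(x)):
                    x, improved = y, True
        if not improved:
            step /= 2                 # no improving move: shrink the step
    return full(x)

p = dispatch()
print("dispatch:", np.round(p, 1), "cost:", round(cost(p), 1))
```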

8857 Blueprinting of a Normalized Supply Chain Processes: Results in Implementing Normalized Software Systems

Authors: Bassam Istanbouli

Abstract:

With technology evolving every day and with the increase in global competition, industries are always under pressure to be the best. They need to provide good-quality products at competitive prices, when and how the customer wants them. In order to achieve this level of service, products and their respective supply chain processes need to be flexible and evolvable; otherwise, changes will be extremely expensive, slow, and subject to many combinatorial effects. Those combinatorial effects impact the whole organizational structure, from the management, financial, documentation and logistics perspectives, and especially from the perspective of the Enterprise Resource Planning (ERP) information system. By applying the normalized systems concept/theory to segments of the supply chain, we believe these effects can be minimized, especially at the time of launching an organization-wide global software project. The purpose of this paper is to point out that if an organization wants to develop software from scratch or implement an existing ERP system for its business needs, and if its business processes are normalized and modular, then this will most probably yield a normalized and modular software system that can be easily modified when the business evolves. Another important goal of this paper is to increase awareness regarding the design of business processes in a software implementation project: if the blueprints created are normalized, then the software developers and configurators will use those modular blueprints to map them into modular software. This paper only prepares the ground for further studies; the above concept will be supported by going through the steps of developing, configuring and/or implementing a software system for an organization using two methods: the software development lifecycle method (SDLC) and the Accelerated SAP implementation method (ASAP). Both methods start with the customer requirements, then blueprint the business processes, and finally map those processes into a software system. Since those requirements and processes are the starting point of the implementation, normalizing those processes should result in normalized software.

Keywords: Blueprint, ERP, SDLC, Modular.

8856 Variable Regularization Parameter Normalized Least Mean Square Adaptive Filter

Authors: Young-Seok Choi

Abstract:

We present a normalized LMS (NLMS) algorithm with robust regularization. Unlike the conventional NLMS with a fixed regularization parameter, the proposed approach dynamically updates the regularization parameter. By exploiting a gradient descent direction, we derive a computationally efficient and robust update scheme for the regularization parameter. In simulations, we demonstrate that the proposed algorithm outperforms conventional NLMS algorithms in terms of convergence rate and misadjustment error.

Keywords: Regularization, normalized LMS, system identification, robustness.

8855 Application of a Fracture-Mechanics Approach to Gas Pipelines

Authors: Ľubomír Gajdoš, Martin Šperl

Abstract:

This study offers a new, simple method for assessing an axial part-through crack in a pipe wall. The method utilizes simple approximate expressions for determining the fracture parameters K and J, and employs these parameters to determine the critical dimensions of a crack on the basis of equality between the J-integral and the J-based fracture toughness of the pipe steel. The crack-tip constraint is taken into account by the so-called plastic constraint factor C, by which the uniaxial yield stress in the J-integral equation is multiplied. The predictions of the fracture condition are verified by burst tests on test pipes.

Keywords: Axial crack, Fracture-mechanics, J integral, Pipeline wall.

8854 Comparing Abused and Normal Male Students in Tehran Guidance Schools: Emphasizing the Co-Dependency of Their Mothers

Authors: Mohamad Saleh Sangin Ostadi, Esmail Safari, Somayeh Akbari, Kaveh Qaderi Bagajan

Abstract:

The aim of this study is to compare abused and normal male students in Tehran guidance schools, with emphasis on the co-dependency of their mothers. The study is based on a survey and comparison (ex post facto) method, with multi-stage cluster sampling. Accordingly, we sampled from guidance schools in Tehran: 12 schools covering the first, second and third levels, representing high, medium and low economic and social conditions. Three classes from every school and 20 students from each class were then randomly selected. Using the CTQ, abused and normal students were separated: 670 children were identified as normal and 50 as abused. Then, 50 children were randomly selected from the normal group and compared with the abused group. Using the Spann-Fischer Co-dependency Scale, we compared the mothers of the abused and normal students. The results showed that the mothers of the abused children have a higher average co-dependency than the mothers of the normal children.

Keywords: Co-dependency, child abuse, abused children, parental psychological health.

8853 A Preliminary X-Ray Study on Human-Hair Microstructures for a Health-State Indicator

Authors: Phannee Saengkaew, Weerasak Ussawawongaraya, Sasiphan Khaweerat, Supagorn Rugmai, Sirisart Ouajai, Jiraporn Luengviriya, Sakuntam Sanorpim, Manop Tirarattanasompot, Somboon Rhianphumikarakit

Abstract:

We present a preliminary x-ray study of human-hair microstructures as a health-state indicator, in particular for cancer. As an uncomplicated and low-cost x-ray technique, the human-hair microstructure was analyzed by wide-angle x-ray diffraction (XRD) and small-angle x-ray scattering (SAXS). The XRD measurements exhibited reflections at d-spacings of 28 Å, 9.4 Å and 4.4 Å, corresponding to the periodic distance of the protein matrix of the hair macrofibrils and to the diameter and repeat spacing of the polypeptide alpha helices of the protofibrils of the hair microfibrils, respectively. Compared with the normal cases, the unhealthy cases, including the breast- and ovarian-cancer cases, showed higher normalized ratios of the 9.4 Å and 4.4 Å diffraction peaks. This likely results from altered distributions of the microstructures caused by molecular alteration. As an elemental analysis by x-ray fluorescence (XRF), the normalized quantitative ratios of zinc (Zn)/calcium (Ca) and iron (Fe)/calcium (Ca) were determined. Analogously, both the Zn/Ca and Fe/Ca ratios of the unhealthy cases were higher than those of the normal cases. Combining the structural analysis by XRD with the elemental analysis by XRF showed that the modified fibrous microstructures of the hair samples are related to their altered elemental compositions. Therefore, these microstructural and elemental analyses of hair samples could support the diagnosis of cancer and genetic diseases, and such a method could lower the risk of these diseases through early diagnosis. However, a high-intensity x-ray source, a high-resolution x-ray detector, and more hair samples are needed to develop this x-ray technique further, and its efficiency could be enhanced by including skin and fingernail samples alongside the human-hair analysis.

Keywords: Human-hair analysis, XRD, SAXS, breast cancer, health-state indicator

8852 Impulse Response Shortening for Discrete Multitone Transceivers using Convex Optimization Approach

Authors: Ejaz Khan, Conor Heneghan

Abstract:

In this paper we propose a new criterion for solving the problem of channel shortening in multi-carrier systems. In a discrete multitone receiver, a time-domain equalizer (TEQ) reduces intersymbol interference (ISI) by shortening the effective duration of the channel impulse response. The minimum mean square error (MMSE) method for TEQ design does not give satisfactory results. In [1], a new criterion is introduced for partially equalizing severe ISI channels to reduce the cyclic prefix overhead of the discrete multitone transceiver (DMT), assuming a fixed transmission bandwidth. Due to a specific constraint in their method (a unit-norm constraint on the target impulse response (TIR)), the freedom to choose the optimum TIR vector is reduced. Better results can be obtained by avoiding the unit-norm constraint on the TIR. In this paper we change the cost function proposed in [1] to that of maximizing a determinant subject to a linear matrix inequality (LMI) and a quadratic constraint, and we solve the resulting optimization problem. The usefulness of the proposed method is shown with the help of simulations.

Keywords: Equalizer, target impulse response, convex optimization, matrix inequality.
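
A hedged CVXPY template for the optimization class named above, maximizing a log-determinant subject to an LMI and a quadratic constraint; the mapping from the TEQ/TIR design to the matrices below is problem-specific and not reproduced, so this only shows the solver pattern. Requires the cvxpy package.

```python
import cvxpy as cp
import numpy as np

n = 4
rng = np.random.default_rng(5)
A = [np.eye(n)] + [np.diag(rng.uniform(0.5, 2.0, n)) for _ in range(2)]

P = cp.Variable((n, n), PSD=True)   # matrix variable (placeholder role)
t = cp.Variable(3, nonneg=True)     # auxiliary weights (placeholder role)

constraints = [
    P << sum(t[i] * A[i] for i in range(3)),   # LMI coupling P and t
    cp.sum_squares(t) <= 1.0,                  # quadratic constraint
]
prob = cp.Problem(cp.Maximize(cp.log_det(P)), constraints)
prob.solve(solver=cp.SCS)
print("optimal log det:", prob.value)
```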

8851 Graph-based High Level Motion Segmentation using Normalized Cuts

Authors: Sungju Yun, Anjin Park, Keechul Jung

Abstract:

Motion capture devices have been utilized in producing several types of content, such as movies and video games. However, since motion capture devices are expensive and inconvenient to use, motions segmented from captured data are recycled and synthesized for use in other content; these motions have generally been segmented manually by content producers. Therefore, automatic motion segmentation has recently attracted much attention. Previous approaches are divided into on-line and off-line: on-line approaches segment motions based on similarities between neighboring frames, while off-line approaches segment motions by capturing the global characteristics in a feature space. In this paper, we propose a graph-based high-level motion segmentation method. Since high-level motions consist of several frames repeated within temporal distances, we consider all similarities among all frames within the temporal distance. This is achieved by constructing a graph in which each vertex represents a frame and the edges between frames are weighted by their similarity. The normalized cuts algorithm is then used to partition the constructed graph into several sub-graphs by globally finding minimum cuts. In the experiments, the proposed method showed better performance than a PCA-based method on-line and a GMM-based method off-line, as the proposed method globally segments motions from a graph built on similarities between neighboring frames as well as similarities among all frames within the temporal distance.

Keywords: Capture Devices, High-Level Motion, Motion Segmentation, Normalized Cuts
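
A hedged sketch of the graph construction described above: frames are vertices, edges within a temporal distance are weighted by feature similarity, and a normalized-cut style partition is obtained with spectral clustering on the precomputed affinity (scikit-learn's spectral embedding of the normalized Laplacian stands in for the authors' normalized cuts implementation). The pose features are random stand-ins for motion capture data.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(6)
# toy "pose features": three repeated motion segments of 40 frames each
centers = rng.normal(size=(3, 10))
frames = np.vstack([c + 0.1 * rng.normal(size=(40, 10)) for c in centers])

T, n = 25, len(frames)                  # temporal distance, frame count
W = np.zeros((n, n))
for i in range(n):
    for j in range(max(0, i - T), min(n, i + T + 1)):
        d2 = np.sum((frames[i] - frames[j]) ** 2)
        W[i, j] = np.exp(-d2 / 2.0)     # similarity within temporal window

labels = SpectralClustering(n_clusters=3, affinity="precomputed",
                            random_state=0).fit_predict(W)
print(labels)   # ideally contiguous runs of one label per motion segment
```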

8850 Extended Constraint Mask Based One-Bit Transform for Low-Complexity Fast Motion Estimation

Authors: Oğuzhan Urhan

Abstract:

In this paper, an improved motion estimation (ME) approach based on the weighted constrained one-bit transform is proposed for block-based ME employed in video encoders. Binary ME approaches utilize a low bit-depth representation of the original image frames together with a Boolean exclusive-OR based, hardware-efficient matching criterion to decrease the computational burden of the ME stage. The weighted constrained one-bit transform (WC-1BT) based approach improves the performance of conventional C-1BT based ME by employing a 2-bit depth constraint mask instead of a 1-bit depth mask. In this work, the range of the constraint mask is further extended to increase the ME performance of the WC-1BT approach. Experiments reveal that the proposed method provides better ME accuracy compared with similar existing ME methods in the literature.

Keywords: Fast motion estimation, low-complexity motion estimation, video coding.
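
A hedged sketch of plain one-bit-transform block matching: frames become bit-planes by comparison against a low-pass filtered version, and candidates are scored by the XOR-based number of non-matching points. The multi-band filter kernel and the 2-bit constraint mask of C-1BT/WC-1BT are not reproduced.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def one_bit_transform(frame):
    """Bit-plane: pixel exceeds its local low-pass average."""
    return frame > uniform_filter(frame.astype(float), size=17)

def best_match(cur_bt, ref_bt, y, x, bs=16, sr=7):
    """Full search minimizing the number of non-matching points (XOR count)."""
    block = cur_bt[y:y + bs, x:x + bs]
    best = (None, np.inf)
    for dy in range(-sr, sr + 1):
        for dx in range(-sr, sr + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy <= ref_bt.shape[0] - bs and 0 <= xx <= ref_bt.shape[1] - bs:
                nnmp = np.count_nonzero(block ^ ref_bt[yy:yy + bs, xx:xx + bs])
                if nnmp < best[1]:
                    best = ((dy, dx), nnmp)
    return best

rng = np.random.default_rng(7)
ref = rng.integers(0, 256, (64, 64)).astype(np.uint8)
cur = np.roll(ref, shift=(2, -3), axis=(0, 1))        # known global motion
mv, score = best_match(one_bit_transform(cur), one_bit_transform(ref), 24, 24)
print("estimated motion vector:", mv)                 # expect (-2, 3)
```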

8849 A Bi-Objective Model to Address Simultaneous Formulation of Project Scheduling and Material Ordering

Authors: Babak H. Tabrizi, Seyed Farid Ghaderi

Abstract:

Concurrent planning of project scheduling and material ordering has been increasingly addressed in recent decades as an approach to improving project execution costs. We therefore consider this problem here, aiming to maximize the quality robustness of schedules in addition to minimizing the relevant costs. A bi-objective mathematical model is developed to formulate the problem. Moreover, it is possible to utilize all-unit discounts for material purchasing. The problem is then solved by the E-constraint method, and the Pareto front is obtained for a variety of robustness values. Finally, the applicability and efficiency of the proposed model are tested on different numerical instances.

Keywords: E-constraint method, material ordering, project management, project scheduling.
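
A minimal sketch of the E-constraint (epsilon-constraint) scheme on a toy bi-objective LP: keep objective 1, move objective 2 into a bound f2(x) <= eps, and sweep eps to trace the Pareto front. The scheduling and material-ordering model itself is not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

c1 = np.array([2.0, 1.0])     # cost-like objective (minimize)
c2 = np.array([-1.0, -2.0])   # negated robustness (minimize = maximize it)
A_ub = np.array([[1.0, 1.0]]) # shared resource budget
b_ub = np.array([10.0])
bounds = [(0, 8), (0, 8)]

front = []
for eps in np.linspace(-20, 0, 9):        # sweep the bound on f2
    A = np.vstack([A_ub, c2])             # add the constraint f2(x) <= eps
    b = np.hstack([b_ub, eps])
    res = linprog(c1, A_ub=A, b_ub=b, bounds=bounds)
    if res.success:                       # infeasible eps values are skipped
        front.append((c1 @ res.x, c2 @ res.x))
print(np.round(front, 2))                 # (f1, f2) pairs along the tradeoff
```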

8848 Generator Capability Curve Constraint for PSO Based Optimal Power Flow

Authors: Mat Syai'in, Adi Soeprijanto, Takashi Hiyama

Abstract:

An optimal power flow (OPF) based on particle swarm optimization (PSO) was developed with a more realistic generator security constraint, using the capability curve instead of only the Pmin/Pmax and Qmin/Qmax limits. A neural network (NN) was used in designing the digital capability curve and the security check algorithm. The algorithm is very simple and flexible, especially for representing the nonlinear generation operating limit near the steady-state stability limit and the under-excitation operating area. To avoid locally optimal power flow solutions, the particle swarm optimization was initialized with a sufficiently widespread population. The objective function used in the optimization process is the electric production cost, which is dominated by the fuel cost. The proposed method was applied to the Java-Bali 500 kV power system, which contains 7 generators and 20 buses. The simulation results show that the combination of generator power outputs resulting from the proposed method was more economical than the result obtained using the conventional constraints, but operated at a more marginal operating point.

Keywords: Optimal Power Flow, Generator Capability Curve, Particle Swarm Optimization, Neural Network
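
A hedged PSO skeleton with a capability-curve feasibility penalty: the power flow solve, the NN-designed capability curve, and the Java-Bali data are all omitted; the circular MVA check below is a simple stand-in for a real capability curve, and every constant is illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
NG = 3
a, b = np.array([0.010, 0.015, 0.012]), np.array([8.0, 7.0, 7.5])
demand, s_rating = 500.0, np.array([300.0, 280.0, 260.0])

def capability_violation(P, Q):
    # stand-in: require sqrt(P^2 + Q^2) <= MVA rating for each generator
    return np.maximum(np.hypot(P, Q) - s_rating, 0.0).sum()

def fitness(x):
    P, Q = x[:NG], x[NG:]
    cost = np.sum(a * P ** 2 + b * P)
    balance = abs(P.sum() - demand)              # lossless balance stand-in
    return cost + 1e3 * balance + 1e3 * capability_violation(P, Q)

n, dim, w, cg1, cg2 = 40, 2 * NG, 0.7, 1.5, 1.5
pos = rng.uniform(0, 300, (n, dim))              # widespread initial swarm
vel = np.zeros((n, dim))
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
g = pbest[pbest_f.argmin()].copy()

for _ in range(300):
    r1, r2 = rng.random((2, n, dim))
    vel = w * vel + cg1 * r1 * (pbest - pos) + cg2 * r2 * (g - pos)
    pos = np.clip(pos + vel, 0.0, 350.0)
    f = np.array([fitness(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    g = pbest[pbest_f.argmin()].copy()

print("P:", np.round(g[:NG], 1), "cost:", round(fitness(g), 1))
```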

8847 A Normalization-based Robust Watermarking Scheme Using Zernike Moments

Authors: Say Wei Foo, Qi Dong

Abstract:

Digital watermarking has become an important technique for copyright protection, but its robustness against attacks remains a major problem. In this paper, we propose a normalization-based robust image watermarking scheme. In the proposed scheme, the original host image is first normalized to a standard form. The Zernike transform is then applied to the normalized image to calculate the Zernike moments. Dither modulation is adopted to quantize the magnitudes of the Zernike moments according to the watermark bit stream. The watermark extraction method is blind. Security analysis and false-alarm analysis are then performed. The quality degradation of the watermarked image caused by the embedded watermark is visually transparent. Experimental results show that the proposed scheme has very high robustness against various image processing operations and geometric attacks.

Keywords: Image watermarking, Image normalization, Zernike moments, Robustness.
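
A hedged sketch of the dither-modulation (QIM) step alone: each moment magnitude is quantized with one of two dithered quantizers according to its watermark bit, and extraction picks the nearer quantizer. Zernike moment computation and image normalization are assumed to happen elsewhere; the step size is an assumption.

```python
import numpy as np

DELTA = 0.05                          # quantization step (illustrative)

def embed(mags, bits, delta=DELTA):
    d = np.asarray(bits) * delta / 2  # dither: 0 for bit 0, delta/2 for bit 1
    return np.round((mags - d) / delta) * delta + d

def extract(mags, delta=DELTA):
    bits = []
    for m in mags:                    # pick the quantizer that lands closer
        dists = [abs(m - (np.round((m - bb * delta / 2) / delta) * delta
                          + bb * delta / 2)) for bb in (0, 1)]
        bits.append(int(np.argmin(dists)))
    return bits

rng = np.random.default_rng(9)
moments = rng.uniform(0.1, 1.0, 16)   # stand-in for Zernike magnitudes
bits = rng.integers(0, 2, 16)
marked = embed(moments, bits)
noisy = marked + rng.normal(0, 0.003, 16)   # mild attack
print("recovered OK:", np.array_equal(extract(noisy), bits))
```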

8846 An Improved Total Variation Regularization Method for Denoising Magnetocardiography

Authors: Yanping Liao, Congcong He, Ruigang Zhao

Abstract:

The application of magnetocardiography signals to detect cardiac electrical function is a technology developed in recent years. The magnetocardiography signal is detected with superconducting quantum interference devices (SQUIDs) and has considerable advantages over electrocardiography (ECG). Extracting the magnetocardiography (MCG) signal, which is buried in noise, is difficult, and this is a critical issue to be resolved in cardiac monitoring systems and MCG applications. In order to remove the severe background noise, a total variation (TV) regularization method is proposed to denoise the MCG signal. The approach transforms the denoising problem into a minimization problem, and the majorization-minimization algorithm is applied to solve it iteratively. However, the traditional TV regularization method tends to cause staircase effects and lacks constraint adaptability. In this paper, an improved TV regularization method for denoising the MCG signal is proposed to improve the denoising precision. The improvement consists of three parts. First, high-order TV is applied to reduce the staircase effect, with the corresponding second-derivative matrix substituted for the first-order one. Then, the positions of the non-zero elements in the second-order derivative matrix are determined based on the peak positions detected by a detection window. Finally, adaptive constraint parameters are defined to eliminate noise while preserving the signal peak characteristics. Theoretical analysis and experimental results show that this algorithm can effectively improve the output signal-to-noise ratio and has superior performance.

Keywords: Constraint parameters, derivative matrix, magnetocardiography, regular term, total variation.
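
A hedged sketch of second-order TV denoising via majorization-minimization: each iteration solves a reweighted linear system (I + lam * D^T W D) x = y with W built from the current |Dx|. The paper's peak-detection window and adaptive constraint parameters are reduced to a fixed small epsilon, and the test signal is synthetic.

```python
import numpy as np

def tv2_denoise_mm(y, lam=1.0, iters=50, eps=1e-8):
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)        # second-order difference matrix
    x = y.copy()
    for _ in range(iters):
        w = 1.0 / (np.abs(D @ x) + eps)        # MM reweighting of |D x|
        A = np.eye(n) + lam * D.T @ (w[:, None] * D)
        x = np.linalg.solve(A, y)              # minimizer of the majorizer
    return x

rng = np.random.default_rng(10)
t = np.linspace(0, 1, 200)
clean = np.maximum(0, 1 - 40 * (t - 0.5) ** 2)   # smooth peak (toy MCG-like)
noisy = clean + 0.05 * rng.standard_normal(len(t))
den = tv2_denoise_mm(noisy)
print("SNR gain (dB):",
      round(10 * np.log10(np.sum((noisy - clean) ** 2) /
                          np.sum((den - clean) ** 2)), 2))
```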

8845 Basicity of Jordanian Natural Clays Studied by Pyrrole-tpd and Catalytic Conversion of Methylbutynol

Authors: M. Z. Alsawalha

Abstract:

The main objective of this study is to investigate the basic properties of different natural clays by two methods. The first method is the gas-phase conversion of methylbutynol (MBOH); the second is pyrrole temperature-programmed desorption (pyrrole-TPD). Based on the product distribution from the first method, acidic, basic and coordinatively unsaturated sites were differentiated. It was shown that neither the conversion nor the selectivity for basic products changed with reaction time. Nevertheless, a deviation from the stoichiometric ratio R of formed acetylene to acetone was observed (R = 0.8-0.97). The conversion normalized to the surface area was used to establish the activity sequence: white kaolinite > red kaolinite > bentonite > zeolite > diatomite. In addition, the results were compared with synthetic amorphous alumosilicates and typical basic materials such as MgO and ZnO. The basic properties were also characterized using pyrrole-TPD, which showed the same basicity sequence as the MBOH gas-phase reaction.

Keywords: Alumosilicates, basic surface properties, natural clays, normalized conversions with acetylene and acetone, pyrrole-TPD adsorption.

8844 Optimal Solution of Constraint Satisfaction Problems

Authors: Jeffrey L. Duffany

Abstract:

An optimal solution for a large number of constraint satisfaction problems can be found using the technique of substitution and elimination of variables, analogous to the technique used to solve systems of equations. A decision function f(A) = max(A²) is used to determine which variables to eliminate. The algorithm can be expressed in six lines and is remarkable in both its simplicity and its ability to find an optimal solution. However, it is inefficient in that it needs to square the updated A matrix after each variable elimination. To overcome this inefficiency, the algorithm is analyzed, and it is shown that the A matrix only needs to be squared once, at the first step of the algorithm, and then incrementally updated in subsequent steps, resulting in a significant improvement and an algorithm complexity of O(n³).

Keywords: Algorithm, complexity, constraint, NP-complete.

8843 Combining Variable Ordering Heuristics for Improving Search Algorithms Performance

Authors: Abdolreza Hatamlou, Yusef Farhang, Mohammad Reza Meybodi

Abstract:

Variable ordering heuristics are used in constraint satisfaction algorithms. The characteristics of the various variable ordering heuristics are complementary. We have therefore tried to exploit the advantages of all the heuristics to improve the performance of search algorithms for solving constraint satisfaction problems. This paper considers combinations based on products and quotients, and then a newer form of combination based on weighted sums of ratings from a set of base heuristics, some of which result in definite improvements in performance.

Keywords: Constraint Satisfaction Problems, Variable Ordering Heuristics, Combination, Search Algorithms
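
A minimal sketch of the weighted-sum combination (the paper also considers products and quotients): normalize each base heuristic's ratings and pick the variable maximizing the weighted sum. The base heuristics (domain size, degree) and the weights here are common illustrative choices, not the paper's set.

```python
def rank_normalize(scores):
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {v: (s - lo) / span for v, s in scores.items()}

def pick_variable(unassigned, domains, neighbors, weights=(0.7, 0.3)):
    # base heuristic 1: prefer small domains (min-remaining-values)
    h1 = {v: -len(domains[v]) for v in unassigned}
    # base heuristic 2: prefer high degree to other unassigned variables
    h2 = {v: sum(u in unassigned for u in neighbors[v]) for v in unassigned}
    r1, r2 = rank_normalize(h1), rank_normalize(h2)
    combined = {v: weights[0] * r1[v] + weights[1] * r2[v] for v in unassigned}
    return max(combined, key=combined.get)

# toy CSP state: three variables, current domains, and constraint graph
domains = {"x": {1, 2}, "y": {1, 2, 3}, "z": {1}}
neighbors = {"x": ["y"], "y": ["x", "z"], "z": ["y"]}
print(pick_variable({"x", "y", "z"}, domains, neighbors))   # likely 'z'
```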

8842 An AK-Chart for the Non-Normal Data

Authors: Chia-Hau Liu, Tai-Yue Wang

Abstract:

Traditional multivariate control charts assume that measurements from manufacturing processes follow a multivariate normal distribution. However, this assumption may not hold, or may be difficult to verify, because in practice not all measurements from manufacturing processes are normally distributed. This study develops a new multivariate control chart for monitoring processes with non-normal data. We propose a mechanism that integrates a one-class classification method with an adaptive technique; the adaptive technique is used to improve the sensitivity of one-class classification to small shifts in statistical process control. In addition, the design provides an easy way to allocate the type I error rate, making it easier to implement. Finally, a simulation study and real data from industry are used to demonstrate the effectiveness of the proposed control charts.

Keywords: Multivariate control chart, statistical process control, one-class classification method.
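
A hedged sketch of a one-class-classification control chart: fit on in-control, non-normal data, set a control limit from a chosen type I error rate on the training scores, and flag new points below it. The paper's adaptive small-shift mechanism is not reproduced; the kernel, nu, and the data are illustrative.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(11)
in_control = rng.lognormal(mean=0.0, sigma=0.4, size=(500, 2))   # non-normal

model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(in_control)
scores = model.decision_function(in_control)
alpha = 0.01                              # chosen type I error rate
limit = np.quantile(scores, alpha)        # control limit from training scores

new = rng.lognormal(mean=0.6, sigma=0.4, size=(20, 2))           # shifted
signal = model.decision_function(new) < limit
print("out-of-control signals:", int(signal.sum()), "of", len(new))
```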
