Search results for: Function estimation

2700 Robust Stabilization against Unknown Consensus Network

Authors: Myung-Gon Yoon, Jung-Ho Moon, Tae Kwon Ha

Abstract:

This paper studies the robust stabilization of a single agent in a multi-agent consensus system composed of identical agents when the network topology of the system is completely unknown. It is shown that, in the frequency domain, the transfer function of an agent embedded in a consensus system can be described as a multiplicative perturbation of the isolated agent's transfer function. Building on an existing robust stabilization result, we present sufficient conditions for robust stabilization of an agent against an unknown network topology.
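
For orientation, the textbook sufficient condition for robust stabilization under multiplicative uncertainty, on which results of this kind typically rest (the specific weighting function induced by the consensus network is not given in the abstract and is treated here as an assumption): if the networked agent is modeled as
\[
\tilde{P}(s) = P(s)\bigl(1 + \Delta(s)\,W(s)\bigr), \qquad \|\Delta\|_\infty \le 1,
\]
then a controller K that stabilizes the isolated agent P also stabilizes every such perturbed agent provided
\[
\left\| \frac{P(j\omega)K(j\omega)}{1 + P(j\omega)K(j\omega)}\, W(j\omega) \right\|_\infty < 1 .
\]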

Keywords: Multi-agent System, Robust Stabilization, Transfer Function.

2699 Discrete Polyphase Matched Filtering-based Soft Timing Estimation for Mobile Wireless Systems

Authors: Thomas O. Olwal, Michael A. van Wyk, Barend J. van Wyk

Abstract:

In this paper, we present a soft timing phase estimation (STPE) method for wireless mobile receivers operating at low signal-to-noise ratios (SNRs). Discrete polyphase matched (DPM) filters, a log-maximum a posteriori probability (log-MAP) algorithm and/or a soft-output Viterbi algorithm (SOVA) are combined to derive a new timing recovery (TR) scheme. We apply this scheme to a wireless cellular communication system model that comprises a raised cosine filter (RCF) and a bit-interleaved turbo-coded multi-level modulation (BITMM) scheme; the channel is assumed to be memoryless. Furthermore, unlike classical data-aided (DA) models, no clock signals are transmitted to the receiver. This new model ensures that both the bandwidth and the power of the communication system are conserved, although the computational complexity of ideal turbo synchronization is increased by 50%. Several simulation tests of bit error rate (BER) and block error rate (BLER) at low SNR reveal that the proposed iterative soft timing recovery (ISTR) scheme outperforms conventional schemes.

Keywords: discrete polyphase matched filters, maximum likelihood estimators, soft timing phase estimation, wireless mobile systems.

2698 A New Method for Multiobjective Optimization Based on Learning Automata

Authors: M. R. Aghaebrahimi, S. H. Zahiri, M. Amiri

Abstract:

The need to solve complicated multidimensional scientific problems, together with the need to optimize several objective functions simultaneously, is the main motivation behind artificial intelligence and heuristic methods. In this paper, we introduce a new method for multiobjective optimization based on learning automata. In the proposed method, the search space is divided into separate hyper-cubes and each cube is considered as an action. All objective functions are combined with separate weights, and the resulting cumulative function is taken as the fitness function. By evaluating the cumulative function over each cube, we calculate the reinforcement (amplification) of each action, and the algorithm proceeds to find the best solutions. In this method, a lateral memory is used to gather the significant points of each iteration of the algorithm. Finally, by applying the domination criterion, the Pareto front is estimated. Results of several experiments show the effectiveness of this method in comparison with a genetic-algorithm-based method.
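
As a rough illustration of two ingredients mentioned in the abstract, namely the weighted aggregation of objectives into a single cumulative fitness and the domination test used to extract the Pareto front, here is a minimal Python sketch on a toy bi-objective problem (the learning-automaton probability update itself is omitted; all names and parameter values are illustrative):

```python
import numpy as np

def weighted_sum(fvals, weights):
    """Cumulative (scalarized) fitness: weighted sum of the objective values."""
    return np.dot(fvals, weights)

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return np.all(a <= b) and np.any(a < b)

def pareto_front(points):
    """Keep only the non-dominated objective vectors from the lateral memory."""
    return [p for p in points if not any(dominates(q, p) for q in points if q is not p)]

# Toy bi-objective problem on [0, 1]: f1(x) = x^2, f2(x) = (x - 1)^2.
objectives = lambda x: np.array([x**2, (x - 1.0)**2])

# Divide the search space into hyper-cubes (here: intervals) and sample each one.
rng = np.random.default_rng(0)
cubes = np.linspace(0.0, 1.0, 11)
memory = []                      # lateral memory of significant points
weights = np.array([0.5, 0.5])   # separate weights for the objectives
for lo, hi in zip(cubes[:-1], cubes[1:]):
    x = rng.uniform(lo, hi)               # one evaluation per action (cube)
    f = objectives(x)
    memory.append(f)
    print(f"cube [{lo:.1f},{hi:.1f}] scalar fitness {weighted_sum(f, weights):.3f}")

print("estimated Pareto front:", pareto_front(memory))
```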

Keywords: Function optimization, Multiobjective optimization, Learning automata.

2697 Normalizing Flow to Augmented Posterior: Conditional Density Estimation with Interpretable Dimension Reduction for High Dimensional Data

Authors: Cheng Zeng, George Michailidis, Hitoshi Iyatomi, Leo L Duan

Abstract:

The conditional density characterizes the distribution of a response variable y given a predictor x, and plays a key role in many statistical tasks, including classification and outlier detection. Although there has been abundant work on Conditional Density Estimation (CDE) for a low-dimensional response in the presence of a high-dimensional predictor, little has been done for a high-dimensional response such as images. The promising performance of normalizing flow (NF) neural networks in unconditional density estimation serves as a motivating starting point. In this work, we extend NF neural networks to the case where an external predictor x is present. Specifically, we use the NF to parameterize a one-to-one transform between a high-dimensional y and a latent z that comprises two components [zP, zN]. The zP component is a low-dimensional subvector obtained from the posterior distribution of an elementary predictive model for x, such as logistic/linear regression. The zN component is a high-dimensional independent Gaussian vector, which explains the variations in y that are unrelated or only weakly related to x. Unlike existing CDE methods, the proposed approach, coined Augmented Posterior CDE (AP-CDE), requires only a simple modification of the common normalizing flow framework while significantly improving the interpretability of the latent component, since zP represents a supervised dimension reduction. In image analytics applications, AP-CDE shows good separation of x-related variations, due to factors such as lighting condition and subject ID, from the other random variations. Further, the experiments show that an unconditional NF neural network, based on an unsupervised model of z such as a Gaussian mixture, fails to generate interpretable results.
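
A minimal sketch of the latent construction z = [zP, zN] described above, assuming (as one reading of the abstract) that the elementary predictive model is a logistic regression whose fitted class probability supplies the supervised coordinate zP; the normalizing-flow transform between y and z is omitted, and all data, dimensions and names are illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: x is a label-like predictor, y is a "high-dimensional" response (e.g. a flattened image).
n, d_y = 200, 64
x = rng.integers(0, 2, size=n)                     # binary predictor (e.g. subject ID)
y = rng.normal(size=(n, d_y)) + x[:, None] * 1.5   # x shifts part of the response

# zP: low-dimensional supervised summary from an elementary predictive model
# (here the fitted class probability of a logistic regression).
clf = LogisticRegression(max_iter=1000).fit(y[:, :8], x)
zP = clf.predict_proba(y[:, :8])[:, 1:2]           # shape (n, 1)

# zN: independent Gaussian component for the x-unrelated variation.
zN = rng.normal(size=(n, d_y - zP.shape[1]))

# Augmented latent that the (omitted) bijective flow would map to/from y.
z = np.concatenate([zP, zN], axis=1)
print(z.shape)   # (200, 64): same dimension as y, as required by a one-to-one transform
```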

Keywords: Conditional density estimation, image generation, normalizing flow, supervised dimension reduction.

2696 Application of an Analytical Model to Obtain Daily Flow Duration Curves for Different Hydrological Regimes in Switzerland

Authors: Ana Clara Santos, Maria Manuela Portela, Bettina Schaefli

Abstract:

This work assesses the performance of an analytical model framework for generating daily flow duration curves (FDCs) based on the climatic characteristics of the catchments and on their streamflow recession coefficients. In the analytical model framework, precipitation is considered to be a stochastic process, modeled as a marked Poisson process, and recession is considered to be deterministic, with parameters that can be computed from different models. The framework was tested for three case studies with different hydrological regimes located in Switzerland: pluvial, snow-dominated and glacier. For that purpose, five time intervals were analyzed (the four meteorological seasons and the civil year) and two developments of the model were tested: one considering a linear recession model and the other adopting a nonlinear recession model. These developments were combined with recession coefficients obtained from two different approaches: forward and inverse estimation. The performance of the analytical framework with forward parameter estimation is poor in comparison with inverse estimation for both the linear and the nonlinear model. For the pluvial catchment, the inverse estimation shows exceptionally good results, especially for the nonlinear model, clearly suggesting that the model has the ability to describe FDCs. For the snow-dominated and glacier catchments the seasonal results are better than the annual ones, suggesting that the model can describe streamflows under those conditions and that future efforts should focus on improving and combining seasonal curves instead of considering single annual ones.
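
A minimal numerical sketch of the two building blocks named above, a marked Poisson rainfall process and a linear reservoir recession, used here only to produce an empirical daily flow duration curve; the paper's analytical FDC expressions are not reproduced, and all parameter values are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Marked Poisson rainfall: exponential inter-arrival times and exponential depths ---
days = 365
lam, mean_depth = 0.3, 10.0          # assumed event frequency [1/day] and mean depth [mm]
rain = np.zeros(days)
t = 0.0
while True:
    t += rng.exponential(1.0 / lam)  # waiting time until the next event
    if t >= days:
        break
    rain[int(t)] += rng.exponential(mean_depth)

# --- Linear reservoir recession: dQ/dt = -k*Q between events, recharge proportional to rain ---
k, q = 0.05, np.zeros(days)          # assumed recession coefficient [1/day]
for i in range(1, days):
    q[i] = q[i - 1] * np.exp(-k) + 0.1 * rain[i]

# --- Empirical daily flow duration curve: discharge vs. exceedance probability ---
q_sorted = np.sort(q)[::-1]                          # descending discharges
exceedance = (np.arange(1, days + 1) - 0.5) / days   # Hazen plotting position
for p in (0.05, 0.5, 0.95):
    i = np.searchsorted(exceedance, p)
    print(f"discharge exceeded {p:.0%} of the time: {q_sorted[i]:.2f}")
```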

Keywords: Analytical streamflow distribution, stochastic process, linear and non-linear recession, hydrological modelling, daily discharges.

2695 Object Recognition in Color Images by the Self Configuring System MEMORI

Authors: Michela Lecca

Abstract:

The system MEMORI automatically detects and recognizes rotated and/or rescaled versions of the objects of a database within digital color images with cluttered backgrounds. This task is accomplished by means of a region grouping algorithm guided by heuristic rules, whose parameters concern some geometrical properties and the recognition scores of the database objects. This paper focuses on the strategies implemented in MEMORI for the estimation of the heuristic rule parameters. This estimation, being automatic, makes the system a self-configuring and highly user-friendly tool.

Keywords: Automatic Object Recognition, Clustering, Content-Based Image Retrieval System, Image Segmentation, Region Adjacency Graph, Region Grouping.

2694 Application of Wavelet Neural Networks in Optimization of Skeletal Buildings under Frequency Constraints

Authors: Mohammad Reza Ghasemi, Amin Ghorbani

Abstract:

The main goal of the present work is to decrease the computational burden of the optimum design of steel frames with frequency constraints by using a new type of neural network called the wavelet neural network (WNN). A suitable neural network is trained to approximate the frequencies in place of the analysis program. The combination of wavelet theory and neural networks (NN) has led to the development of wavelet neural networks. Wavelet neural networks are feed-forward networks that use wavelets as activation functions. Wavelets are mathematical functions with suitable inner parameters, which help them approximate arbitrary functions. The WNN was used to predict the frequencies of the structures. In the WNN, a RAtional function with Second-order Poles (RASP) wavelet was used as the transfer function. It is shown that the convergence speed was faster than that of other neural networks. Comparisons of the WNN with an embedded artificial neural network (ANN), as well as with approximate techniques and analytical solutions available in the literature, are also presented.
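
A minimal sketch of a one-hidden-layer wavelet neural network forward pass; a Mexican-hat wavelet is used as the activation because the RASP wavelet's closed form is not given in the abstract, and all dimensions and weights are illustrative (untrained):

```python
import numpy as np

def mexican_hat(t):
    """Mexican-hat wavelet used here as the activation; the paper uses a RASP
    wavelet instead, whose exact form is not reproduced in the abstract."""
    return (1.0 - t**2) * np.exp(-0.5 * t**2)

def wnn_forward(x, centers, scales, v, w, b):
    """One-hidden-layer wavelet neural network:
    y = b + sum_j w_j * psi((v_j . x - c_j) / s_j)."""
    z = (x @ v - centers) / scales        # translated and dilated wavelet arguments
    return b + mexican_hat(z) @ w

# Toy usage: approximate a scalar "frequency" response from 3 design variables.
rng = np.random.default_rng(0)
n_in, n_hidden = 3, 8
v = rng.normal(size=(n_in, n_hidden))              # input weights
centers = rng.normal(size=n_hidden)                # translations
scales = np.abs(rng.normal(size=n_hidden)) + 0.5   # dilations
w = rng.normal(size=n_hidden)                      # output weights
b = 0.0

x = rng.normal(size=(5, n_in))                     # five candidate frame designs
print(wnn_forward(x, centers, scales, v, w, b))    # predicted frequencies (untrained)
```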

Keywords: Weight Minimization, Frequency Constraints, Steel Frames, ANN, WNN, RASP Function.

2693 Impact of the Existence of One-Way Functions on the Conceptual Difficulties of Quantum Measurements

Authors: Arkady Bolotin

Abstract:

One-way functions are functions that are easy to compute but hard to invert. Their existence is an open conjecture; it would imply the existence of intractable problems (i.e., NP problems that are not in the complexity class P). If true, the existence of one-way functions would have an impact on the theoretical framework of physics, in particular on quantum mechanics. This aspect of one-way functions has never been shown before. In the present work, we put forward the following. We can calculate the microscopic state (say, the particle spin in the z direction) of a macroscopic system (a measuring apparatus registering the particle's z-spin) from the system's macroscopic state (the apparatus output); let us call this association the function F. The question is: can we compute the function F in the inverse direction? In other words, can we compute the macroscopic state of the system from its microscopic state (the preimage F⁻¹)? In the paper, we assume that the function F is a one-way function. The assumption implies that at the macroscopic level the Schrödinger equation becomes unfeasible to compute. This unfeasibility plays the role of a limit on the validity of the linear Schrödinger equation.
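
As a toy illustration of the easy-to-compute/hard-to-invert asymmetry (unrelated to the paper's function F, and assuming SymPy is available), multiplying two primes is fast while recovering them from the product by general-purpose factoring takes noticeably longer:

```python
import time
from sympy import randprime, factorint   # SymPy assumed available for this toy demo

# Candidate one-way behaviour (illustrative only, not the paper's function F):
# the forward direction is a single multiplication, the inverse is factoring.
p, q = randprime(10**7, 10**8), randprime(10**7, 10**8)

t0 = time.perf_counter()
n = p * q                                 # "easy" direction
t_forward = time.perf_counter() - t0

t0 = time.perf_counter()
factors = factorint(n)                    # "hard" direction (general-purpose factoring)
t_inverse = time.perf_counter() - t0

print(f"forward: {t_forward:.2e} s, inverse: {t_inverse:.2e} s, factors: {factors}")
```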

Keywords: One-way functions, P versus NP problem, quantum measurements.

2692 Electromagnetic Source Direction of Arrival Estimation via Virtual Antenna Array

Authors: Meiling Yang, Shuguo Xie, Yilong Zhu

Abstract:

Nowadays, due to the diversity of electric products and the complexity of the electromagnetic environment, the localization and troubleshooting of electromagnetic radiation sources is urgent and necessary, especially under far-field conditions. However, systems and devices based on existing DOA positioning methods are complex, bulky and expensive. To address this issue, this paper proposes a single-antenna radiation source localization method. A single antenna is moved to form a virtual antenna array, which is combined with the MUSIC DOA algorithm to obtain accurate positioning while reducing the cost and simplifying the equipment. As shown by the results of simulations and experiments, the virtual antenna array DOA estimation model is correct and its positioning is credible.
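
A minimal NumPy sketch of MUSIC on a uniform linear array with synthetic snapshots; the virtual-array acquisition by a moving antenna is not modeled, and the array geometry, SNR and source angles are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
M, d, wavelength = 8, 0.5, 1.0            # virtual elements, spacing, wavelength (illustrative)
true_deg, snapshots, snr_db = [-20.0, 35.0], 200, 10.0

def steering(theta_deg):
    """ULA steering vector for a far-field narrowband source at angle theta."""
    k = 2 * np.pi / wavelength
    return np.exp(1j * k * d * np.arange(M) * np.sin(np.radians(theta_deg)))

# Synthetic snapshots: two uncorrelated sources plus white noise.
A = np.column_stack([steering(t) for t in true_deg])
S = (rng.normal(size=(2, snapshots)) + 1j * rng.normal(size=(2, snapshots))) / np.sqrt(2)
noise = (rng.normal(size=(M, snapshots)) + 1j * rng.normal(size=(M, snapshots))) / np.sqrt(2)
X = A @ S + 10 ** (-snr_db / 20) * noise

# MUSIC: noise subspace of the sample covariance, then a pseudospectrum scan.
R = X @ X.conj().T / snapshots
eigval, eigvec = np.linalg.eigh(R)                 # eigenvalues in ascending order
En = eigvec[:, : M - len(true_deg)]                # noise-subspace eigenvectors
grid = np.arange(-90.0, 90.0, 0.5)
p = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid])

# Report the two largest local maxima of the pseudospectrum as the DOA estimates.
local_max = np.where((p[1:-1] > p[:-2]) & (p[1:-1] > p[2:]))[0] + 1
est = grid[local_max[np.argsort(p[local_max])[-2:]]]
print("estimated DOAs (deg):", np.sort(est))
```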

Keywords: Virtual antenna array, DOA, localization, far field.

2691 Effect of a Linear-Exponential Penalty Function on the GA's Efficiency in Optimization of a Laminated Composite Panel

Authors: A. Abedian, M. H. Ghiasi, B. Dehghan-Manshadi

Abstract:

A stiffened laminated composite panel (1 m length × 0.5 m width) was optimized for minimum weight and deflection under several constraints using a genetic algorithm. A study of the performance of a penalty function with two kinds of penalty factors, static and dynamic, was conducted. The results show that linear dynamic penalty factors are more effective than static ones. A specially combined linear-exponential penalty function was also shown to perform more effectively than the previously mentioned penalty functions, and it reduced the sensitivity of the GA to the magnitude of the penalty factor.
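
To make the penalty-factor variants concrete, here is an illustrative Python sketch of a static penalty, a linear dynamic penalty, and a linear-exponential combination; the exact functional form used in the paper is not specified in the abstract, so linear_exponential_penalty below is an assumed shape, not the authors' formula:

```python
import numpy as np

def static_penalty(violation, r=100.0):
    """Constant penalty factor r for the whole GA run."""
    return r * violation

def linear_dynamic_penalty(violation, generation, c=2.0):
    """Penalty factor grows linearly with the generation number."""
    return c * generation * violation

def linear_exponential_penalty(violation, generation, c=2.0, a=0.05):
    """Illustrative linear-exponential combination (assumed form, not the paper's):
    linear growth in the generation number multiplied by an exponential term in the
    constraint violation, so large violations are punished progressively harder."""
    return c * generation * violation * np.exp(a * violation)

def penalized_fitness(weight, violation, generation):
    """Penalized fitness of one GA individual (minimization): objective + penalty."""
    return weight + linear_exponential_penalty(violation, generation)

print(penalized_fitness(weight=12.3, violation=0.4, generation=50))
```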

Keywords: Genetic algorithms, penalty function, stiffened composite panel, finite element method.

2690 The Statistical Properties of Filtered Signals

Authors: Ephraim Gower, Thato Tsalaile, Monageng Kgwadi, Malcolm Hawksford

Abstract:

In this paper, the statistical properties of filtered or convolved signals are considered by deriving the resulting density functions as well as the exact mean and variance expressions, given prior knowledge of the statistics of the individual signals in the filtering or convolution process. It is shown that the density function after linear convolution is a mixture density, where the number of density components is equal to the number of observations of the shortest signal. For circular convolution, the observed samples are characterized by a single density function, which is a sum of products.

Keywords: Circular Convolution, linear Convolution, mixture density function.

2689 A Reversible CMOS AD / DA Converter Implemented with Pseudo Floating-Gate

Authors: Omid Mirmotahari, Yngvar Berg, Ahmad Habibizad Navin

Abstract:

Reversible logic is becoming more and more prominent as technology sets higher demands on heat, power, scaling and stability. Reversible gates are able at any time to "undo" the current step or function. Multiple-valued logic has the advantage of transporting and evaluating more bits per clock cycle than binary logic. Moreover, we demonstrate in this paper that by combining these disciplines we can construct powerful multiple-valued reversible logic structures. In this paper, a reversible block implemented with pseudo floating-gates performs an AD function and, as its reverse application, a DA function.

Keywords: Reversible logic, bi-directional, pseudo floating-gate (PFG), multiple-valued logic (MVL).

2688 Piezoelectric Transducer Modeling: with System Identification (SI) Method

Authors: Nora Taghavi, Ali Sadr

Abstract:

System identification is the process of creating models of a dynamic process from input-output signals. The aim of system identification can be stated as "to find a model with adjustable parameters and then to adjust them so that the predicted output matches the measured output". This paper presents a method of modeling and simulation with system identification to achieve the maximum fit for the transfer function. First, using an optimized KLM equivalent circuit for a PVDF piezoelectric transducer and assuming different inputs, including a sinusoid, a step and a sum of sinusoids, we obtain the outputs; then, using the System Identification Toolbox in MATLAB, we estimate the transfer function from the inputs and outputs obtained in the previous step. Finally, we compare the fit of the transfer functions obtained using the ARX, OE (Output-Error) and BJ (Box-Jenkins) models in the System Identification Toolbox with the primary transfer function from the KLM equivalent circuit.
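
A minimal sketch of ARX identification by ordinary least squares on simulated input-output data; the MATLAB System Identification Toolbox uses its own routines, and the PVDF/KLM circuit is replaced here by an arbitrary second-order plant for illustration:

```python
import numpy as np

def fit_arx(u, y, na=2, nb=2):
    """Least-squares fit of an ARX(na, nb) model:
    y[t] = -a1*y[t-1] - ... - a_na*y[t-na] + b1*u[t-1] + ... + b_nb*u[t-nb] + e[t]."""
    n = max(na, nb)
    rows, targets = [], []
    for t in range(n, len(y)):
        rows.append(np.concatenate([-y[t - na:t][::-1], u[t - nb:t][::-1]]))
        targets.append(y[t])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta[:na], theta[na:]          # a-coefficients, b-coefficients

# Toy usage: identify a known second-order plant from a sum-of-sinusoids input.
rng = np.random.default_rng(0)
t = np.arange(500)
u = np.sin(0.05 * t) + 0.5 * np.sin(0.21 * t)
y = np.zeros_like(u)
for k in range(2, len(u)):                 # "true" plant: y[t] = 1.2*y[t-1] - 0.5*y[t-2] + 0.3*u[t-1]
    y[k] = 1.2 * y[k - 1] - 0.5 * y[k - 2] + 0.3 * u[k - 1] + 0.01 * rng.normal()
a, b = fit_arx(u, y)
print("a:", a, "b:", b)                    # expect roughly [-1.2, 0.5] and [0.3, ~0]
```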

Keywords: PVDF modeling, ARX, BJ (Box-Jenkins), OE (Output-Error), System Identification.

2687 Ratio Type Estimators of the Population Mean Based on Ranked Set Sampling

Authors: Said Ali Al-Hadhrami

Abstract:

Ranked set sampling (RSS) was first suggested to increase the efficiency of estimating the population mean. It has been shown that this method is highly beneficial compared with estimation based on simple random sampling (SRS). There has been considerable development of the method and many modifications have been proposed. When a concomitant variable is available, ratio estimation based on ranked set sampling was proposed; this ratio estimator is more efficient than that based on SRS. In this paper, some ratio-type estimators of the population mean based on RSS are suggested. These estimators are found to be more efficient than estimators of a similar form based on a simple random sample.
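
A minimal sketch of ranked set sampling with ranking on a concomitant variable, followed by the classical ratio estimator of the population mean; the modified ratio-type estimators proposed in the paper are not reproduced, and the population and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def rss_sample(pop_y, pop_x, m, cycles):
    """Ranked set sample of size m*cycles: in each cycle, draw m sets of m units,
    rank each set on the concomitant variable x, and keep the i-th ranked unit
    from the i-th set (judgment ranking via x)."""
    ys, xs = [], []
    for _ in range(cycles):
        for i in range(m):
            idx = rng.choice(len(pop_y), size=m, replace=False)
            order = idx[np.argsort(pop_x[idx])]
            ys.append(pop_y[order[i]])
            xs.append(pop_x[order[i]])
    return np.array(ys), np.array(xs)

# Toy population where y and the concomitant x are positively correlated.
N = 10_000
x = rng.gamma(4.0, 2.0, size=N)
y = 3.0 * x + rng.normal(0, 2.0, size=N)

y_rss, x_rss = rss_sample(y, x, m=5, cycles=4)
# Classical ratio-type estimator of the mean of y (uses the known population mean of x).
ratio_estimate = y_rss.mean() / x_rss.mean() * x.mean()
print("true mean:", y.mean(), "ratio estimate (RSS):", ratio_estimate)
```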

Keywords: Bias, Efficiency, Ranked Set Sampling, Ratio Type Estimator

2686 Ψ-Eventual Stability of Differential System with Impulses

Authors: Bhanu Gupta

Abstract:

In this paper, criteria for Ψ-eventual stability have been established for generalized impulsive differential systems with multiple dependent variables. The sufficient conditions have been obtained using a piecewise continuous Lyapunov function. An example is given to support the theoretical results.

Keywords: impulsive differential equations, Lyapunov function, eventual stability

2685 A Survey on Quasi-Likelihood Estimation Approaches for Longitudinal Set-ups

Authors: Naushad Mamode Khan

Abstract:

The Com-Poisson (CMP) model is one of the most popular discrete generalized linear models (GLMs) and handles equi-, over- and under-dispersed data. In the longitudinal context, an integer-valued autoregressive (INAR(1)) process that incorporates covariate specification has been developed to model longitudinal CMP counts. However, the joint CMP likelihood function is difficult to specify, which restricts likelihood-based estimation. The joint generalized quasi-likelihood approach (GQL-I) has instead been considered, but it is rather computationally intensive and may even fail to estimate the regression effects due to a complex and frequently ill-conditioned covariance structure. This paper proposes a new GQL approach for estimating the regression parameters (GQL-III) that is based on a single score vector representation. The performance of GQL-III is compared with GQL-I and with separate marginal GQLs (GQL-II) through simulation experiments; GQL-III is shown to yield estimates as efficient as those of GQL-I while being far more computationally stable.
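
For orientation, the generic form of a generalized quasi-likelihood (GQL) estimating equation for the regression vector β and its Newton-type iteration (a standard construction; the single score-vector representation that defines GQL-III is not reproduced here):
\[
\sum_{i=1}^{K} D_i^{\top}\,\Sigma_i^{-1}\,\bigl(y_i - \mu_i(\beta)\bigr) = 0,
\qquad D_i = \frac{\partial \mu_i(\beta)}{\partial \beta^{\top}},
\]
solved iteratively by
\[
\hat\beta_{(r+1)} = \hat\beta_{(r)} + \Bigl(\textstyle\sum_i D_i^{\top}\Sigma_i^{-1} D_i\Bigr)^{-1}
\sum_i D_i^{\top}\Sigma_i^{-1}\bigl(y_i - \mu_i\bigr)\Big|_{\beta=\hat\beta_{(r)}} .
\]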

Keywords: Longitudinal, Com-Poisson, Ill-conditioned, INAR(1), GLMS, GQL.

2684 Direct Transient Stability Assessment of Stressed Power Systems

Authors: E. Popov, N. Yorino, Y. Zoka, Y. Sasaki, H. Sugihara

Abstract:

This paper discusses the performance of the critical trajectory method (CTrj) for power system transient stability analysis under various loading settings and heavy fault conditions. The method obtains the Controlling Unstable Equilibrium Point (CUEP), which is essential for the estimation of power system stability margins. The CUEP is computed by applying the CTrj to the boundary controlling unstable equilibrium point (BCU) method. The proposed method computes a trajectory on the stability boundary that starts from the exit point and reaches the CUEP under certain assumptions. The robustness and effectiveness of the method are demonstrated on six power system models and five loading conditions. A conventional simulation method is used as the benchmark, and the performance is also compared with the BCU shadowing method.

Keywords: Power system, Transient stability, Critical trajectory method, Energy function method.

2683 A Preliminary Study on the Suitability of Data Driven Approach for Continuous Water Level Modeling

Authors: Muhammad Aqil, Ichiro Kita, Moses Macalinao

Abstract:

Reliable water level forecasts are particularly important for warning against dangerous floods and inundation. The current study investigates the suitability of an adaptive network-based fuzzy inference system for continuous water level modeling. A hybrid learning algorithm, which combines the least-squares method and the back-propagation algorithm, is used to identify the parameters of the network. For this study, water level data are available for the hydrological year 2002 with a sampling interval of 1 hour. The number of antecedent water levels to include as input variables is determined by two statistical methods, namely the autocorrelation function and the partial autocorrelation function of the variables. Forecasting was done from 1 hour up to 12 hours ahead in order to compare the models' generalization at longer horizons. The results demonstrate that the adaptive network-based fuzzy inference system model can be applied successfully and provides high accuracy and reliability for river water level estimation. In general, the model provides accurate and reliable water level predictions 1 hour ahead, where a MAPE of 1.15% and a correlation of 0.98 were achieved. Up to 12 hours ahead, the model still shows relatively good performance, with a prediction error of less than 9.65%. The information gathered from these preliminary results provides useful guidance for the design of flood early warning systems in which the magnitude and timing of a potential extreme flood are indicated.
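
A small Python sketch of the two reported ingredients that are easy to make concrete, namely autocorrelation-based selection of antecedent water-level lags and the MAPE criterion; the series, threshold and lag range are illustrative, and the ANFIS model itself is not implemented:

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation function up to max_lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.dot(x, x)
    return np.array([1.0] + [np.dot(x[:-k], x[k:]) / denom for k in range(1, max_lag + 1)])

def mape(observed, predicted):
    """Mean absolute percentage error, as reported in the abstract."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    return 100.0 * np.mean(np.abs((observed - predicted) / observed))

# Toy hourly water level series: keep antecedent lags whose ACF stays above a threshold.
rng = np.random.default_rng(0)
level = 5.0 + np.cumsum(rng.normal(0, 0.05, size=24 * 365))   # synthetic 1-hour data
r = acf(level, max_lag=12)
selected_lags = [k for k in range(1, 13) if r[k] > 0.9]       # illustrative cut-off
print("candidate input lags (hours):", selected_lags)
print("MAPE of a naive 1-hour persistence forecast: "
      f"{mape(level[1:], level[:-1]):.2f}%")
```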

Keywords: Neural Network, Fuzzy, River, Forecasting

2682 Monotonicity of Dependence Concepts from Independent Random Vector into Dependent Random Vector

Authors: Guangpu Chen

Abstract:

When the failure function is monotone, monotonic reliability methods can be used to greatly simplify and facilitate the reliability computations. However, these methods often work in a transformed iso-probabilistic space. To this end, a monotonic simulator or transformation is needed so that the transformed failure function is still monotone. This note first proves that the output distribution of the failure function is invariant under the transformation. It then presents some conditions under which the transformed function is still monotone in the newly obtained space. These conditions concern copulas and dependence concepts. In many engineering applications, Gaussian copulas are often used to approximate the real-world copulas when the available information on the random variables is limited to the set of marginal distributions and the covariances. This note therefore focuses on the conditional monotonicity of the commonly used transformation from an independent random vector into a dependent random vector with Gaussian copulas.
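
A minimal sketch of the Nataf-type transformation named in the keywords, mapping an independent standard normal vector into a dependent vector with given marginals and a Gaussian copula; the correlation adjustment between the Gaussian and original spaces is omitted, and the marginals and correlation values are illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Target: exponential and lognormal marginals tied together by a Gaussian copula
# with correlation matrix R (the Nataf correlation correction is omitted here).
R = np.array([[1.0, 0.6],
              [0.6, 1.0]])
L = np.linalg.cholesky(R)

u = rng.standard_normal((10_000, 2))     # independent standard normal vector
z = u @ L.T                              # correlated standard normals (Gaussian copula)
p = stats.norm.cdf(z)                    # map to uniforms, preserving the copula

x1 = stats.expon(scale=2.0).ppf(p[:, 0])           # marginal 1: Exponential(mean 2)
x2 = stats.lognorm(s=0.5, scale=1.0).ppf(p[:, 1])  # marginal 2: Lognormal

rho_s = stats.spearmanr(x1, x2)[0]
print("rank correlation of the dependent vector:", round(rho_s, 3))
```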

Keywords: Monotonic, Rosenblatt, Nataf transformation, dependence concepts, completely positive matrices, Gaussian copulas.

2681 Eigenvalues of Particle Bound in Single and Double Delta Function Potentials through Numerical Analysis

Authors: Edward Aris D. Fajardo, Hamdi Muhyuddin D. Barra

Abstract:

This study employs the fourth-order Numerov scheme to determine the eigenstates and eigenvalues of particles, electrons in particular, in single and double delta function potentials. For the single delta potential, it is found that the eigenstates can only be attained by using specific potential depths. The depth of the delta potential well has a value that varies depending on the delta strength. These depths are used for each well of the double delta function potential and the eigenvalues are determined. Two bound states are found in the computation: one with a symmetric eigenstate and another that is antisymmetric.
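
A minimal sketch of the fourth-order Numerov recurrence for ψ'' + k²(x)ψ = 0, with the delta well approximated by a narrow, deep square well of fixed area; natural units 2m = ħ = 1 and all grid parameters are assumptions, and the shooting search over the energy is left out:

```python
import numpy as np

def numerov(k2, h, psi0=0.0, psi1=1e-6):
    """Fourth-order Numerov integration of psi'' + k2(x) * psi = 0 on a uniform grid.
    Here k2[i] = E - V(x_i) in natural units 2m = hbar = 1."""
    psi = np.zeros_like(k2)
    psi[0], psi[1] = psi0, psi1
    c = h * h / 12.0
    for i in range(1, len(k2) - 1):
        psi[i + 1] = (2.0 * (1.0 - 5.0 * c * k2[i]) * psi[i]
                      - (1.0 + c * k2[i - 1]) * psi[i - 1]) / (1.0 + c * k2[i + 1])
    return psi

# Toy usage: a delta well of strength 1 approximated by a narrow, deep square well of unit area.
x = np.linspace(-10.0, 10.0, 4001)
h = x[1] - x[0]
strength, width = 1.0, 5 * h
V = np.where(np.abs(x) < width / 2, -strength / width, 0.0)

E = -0.24                         # trial bound-state energy (exact delta-well value: -strength**2 / 4)
psi = numerov(E - V, h)
print("psi at right boundary (shooting-method mismatch):", psi[-1])
```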

Keywords: Double Delta Potential, Eigenstates, Eigenvalue, Numerov Method, Single Delta Potential

2680 On Bayesian Analysis of Failure Rate under Topp Leone Distribution using Complete and Censored Samples

Authors: N. Feroze, M. Aslam

Abstract:

The article is concerned with the analysis of the failure rate (shape parameter) of the Topp Leone distribution in a Bayesian framework. Different loss functions and a couple of non-informative priors have been assumed for posterior estimation. The posterior predictive distributions have also been derived. A simulation study has been carried out to compare the performance of the different estimators. A real-life example has been used to illustrate the applicability of the results obtained. The findings of the study suggest that the precautionary loss function, based on the Jeffreys prior and singly type-II censored samples, can effectively be employed to obtain the Bayes estimate of the failure rate under the Topp Leone distribution.
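
For reference, the precautionary loss function mentioned in the conclusion and its standard Bayes estimator, the square root of the posterior second moment (a textbook result; the posterior itself depends on the Topp Leone likelihood, the Jeffreys prior and the censoring scheme used in the paper):
\[
L(\theta,\hat\theta)=\frac{(\hat\theta-\theta)^{2}}{\hat\theta}
\quad\Longrightarrow\quad
\hat\theta_{\mathrm{PL}}=\sqrt{E\!\left(\theta^{2}\mid \text{data}\right)} .
\]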

Keywords: loss functions, type II censoring, posterior distribution, Bayes estimators.

2679 Speech Enhancement of Vowels Based on Pitch and Formant Frequency

Authors: R. Rishma Rodrigo, R. Radhika, M. Vanitha Lakshmi

Abstract:

Numerous signal-processing-based speech enhancement systems have been proposed to improve intelligibility in the presence of noise. Traditionally, studies of neural vowel encoding have focused on the representation of formants (peaks in vowel spectra) in the discharge patterns of the population of auditory-nerve (AN) fibers. A method is presented for recoding high-frequency speech components into a low-frequency region to increase audibility for listeners with hearing loss. The purpose of the paper is to enhance the formants of the speech based on the Kaiser window. The pitch and formants of the signal are estimated using the autocorrelation, zero-crossing and magnitude difference functions. The formant enhancement stage aims to restore the representation of formants at the level of the midbrain. The system is implemented in MATLAB with low complexity.
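
A minimal Python sketch of autocorrelation-based pitch estimation on a Kaiser-windowed synthetic frame; the zero-crossing and magnitude-difference estimators and the formant-enhancement stage are not implemented, and the sampling rate and frame content are illustrative:

```python
import numpy as np

def pitch_autocorr(frame, fs, fmin=80.0, fmax=400.0):
    """Estimate the pitch of a voiced frame from the peak of its autocorrelation
    within the plausible lag range [fs/fmax, fs/fmin]."""
    frame = frame - np.mean(frame)
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]   # non-negative lags
    lag_min, lag_max = int(fs / fmax), int(fs / fmin)
    lag = lag_min + np.argmax(r[lag_min:lag_max])
    return fs / lag

# Toy usage: a synthetic vowel-like frame with a 150 Hz fundamental and two harmonics.
fs = 16_000
t = np.arange(int(0.04 * fs)) / fs               # 40 ms frame
frame = (np.sin(2 * np.pi * 150 * t)
         + 0.5 * np.sin(2 * np.pi * 600 * t)     # harmonic near a first formant
         + 0.3 * np.sin(2 * np.pi * 2400 * t))   # harmonic near a higher formant
frame *= np.kaiser(len(frame), beta=8.0)         # Kaiser window, as mentioned in the abstract
print(f"estimated pitch: {pitch_autocorr(frame, fs):.1f} Hz")
```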

Keywords: Formant estimation, formant enhancement, pitch detection, speech analysis.

2678 Application of the Total Least Squares Estimation Method for an Aircraft Aerodynamic Model Identification

Authors: Zaouche Mohamed, Amini Mohamed, Foughali Khaled, Aitkaid Souhila, Bouchiha Nihad Sarah

Abstract:

The aerodynamic coefficients are important in the evaluation of an aircraft's performance and stability-control characteristics. These coefficients can also be used in automatic flight control systems and in the mathematical model of a flight simulator. The study of the aerodynamic aspects of flying systems is a reserved domain, largely inaccessible to developers. Performing tests in a wind tunnel to extract aerodynamic forces and moments requires specific and expensive means. Besides, the glaring lack of published documentation in this field of study makes the determination of the aerodynamic coefficients complicated. This work is devoted to the identification of an aerodynamic model using an aircraft in a virtual simulated environment. We deal with the identification of the system, present an environment framework based on the Software-in-the-Loop (SIL) methodology, and use Microsoft™ Flight Simulator (FS-2004) as the environment for plane simulation. We propose the Total Least Squares Estimation technique (TLSE) to identify the aerodynamic parameters, which are unknown, variable, classified and used in the expression of the piloting law. In this paper, we define each aerodynamic coefficient as the mean of its numerical values. All other variations are considered as modeling uncertainties that will be compensated by the robustness of the piloting control.
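
A minimal sketch of total least squares via the SVD of the augmented data matrix, which is the core idea behind the TLSE technique named above; the regressors here merely stand in for flight-simulator signals and the coefficient values are illustrative, not the aircraft model identified in the paper:

```python
import numpy as np

def total_least_squares(A, b):
    """Total least squares solution of A x ≈ b via the SVD of the augmented matrix [A b].
    Unlike ordinary least squares, TLS allows errors in both A and b."""
    C = np.hstack([A, b.reshape(-1, 1)])
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]                         # right singular vector of the smallest singular value
    return -v[:-1] / v[-1]

# Toy usage: recover linear aerodynamic-style coefficients from noisy regressors
# and noisy responses (both perturbed, which is the situation TLS is meant for).
rng = np.random.default_rng(0)
n = 500
X_true = rng.normal(size=(n, 3))               # e.g. angle of attack, elevator, pitch rate
coeffs_true = np.array([0.8, -1.5, 0.3])
y_true = X_true @ coeffs_true

X_noisy = X_true + 0.05 * rng.normal(size=X_true.shape)
y_noisy = y_true + 0.05 * rng.normal(size=n)

print("TLS estimate:", total_least_squares(X_noisy, y_noisy))
print("OLS estimate:", np.linalg.lstsq(X_noisy, y_noisy, rcond=None)[0])
```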

Keywords: Aircraft aerodynamic model, Microsoft flight simulator, MQ-1 Predator, total least squares estimation, piloting the aircraft.

2677 Artificial Neural Network Model Based Setup Period Estimation for Polymer Cutting

Authors: Zsolt János Viharos, Krisztián Balázs Kis, Imre Paniti, Gábor Belső, Péter Németh, János Farkas

Abstract:

The paper presents the results and industrial applications of production setup period estimation based on industrial data inherited from the field of polymer cutting. The literature on polymer cutting is very limited in terms of the number of publications. The first polymer cutting machine has been known since the second half of the 20th century; however, the production of polymer parts with this kind of technology is still a challenging research topic. The products of the industrial partner must meet high technical requirements, as they are used in the medical, measurement instrumentation and painting industry branches. Typically, 20% of these parts are new work, which means that almost the entire product portfolio is replaced every five years in their low-series manufacturing environment. Consequently, a flexible production system is required, where the estimation of the lengths of the frequent setup periods is one of the key success factors. In the investigation, several (input) parameters have been studied and grouped to create an adequate training information set for an artificial neural network as a basis for the estimation of the individual setup periods. The first group collects product information such as the product name and the number of items. The second group contains material data such as material type and colour. The third group collects surface quality and tolerance information, including the finest surface and the tightest (or narrowest) tolerance. The fourth group contains setup data such as machine type and work shift. One source of these parameters is the Manufacturing Execution System (MES), but some data were also collected from Computer Aided Design (CAD) drawings. The number of applied tools is one of the key factors on which the industrial partner's estimations were previously based. The artificial neural network model was trained on several thousand real industrial data records. The mean estimation accuracy of the setup period lengths was improved by 30%, and at the same time the deviation of the prognosis was improved by 50%. Furthermore, an investigation of the mentioned parameter groups considering the manufacturing order was also carried out. The paper also highlights the experiences of the manufacturing introduction and further improvements of the proposed methods, both on the shop floor and in quotation preparation. Every week more than 100 real industrial setup events occur and the related data are collected.

Keywords: Artificial neural network, low series manufacturing, polymer cutting, setup period estimation.

2676 Chemical Reaction Algorithm for Expectation Maximization Clustering

Authors: Li Ni, Pen ManMan, Li KenLi

Abstract:

Clustering has been intensively researched for years because of its multifaceted applications in fields such as biology, information retrieval, medicine and business. Expectation maximization (EM) is an algorithmic framework for clustering methods and one of the ten classic algorithms of machine learning. Traditionally, optimization of an objective function has been the standard approach in EM. Hence, research has investigated the utility of evolutionary computing and related techniques in this regard. Chemical Reaction Optimization (CRO) is a recently established method, and the properties embedded in CRO can be used to solve optimization problems. This paper presents an algorithmic framework (EM-CRO) with modified CRO operators based on EM clustering problems. The hybrid algorithm mainly addresses the sensitivity of objective-function-based clustering algorithms to their initial values. Our experiments take the classic EM-type algorithms k-means and fuzzy k-means as examples and use the CRO algorithm to optimize their initial values, yielding the K-means-CRO and FKM-CRO algorithms. The experimental results show improved efficiency in solving objective function optimization clustering problems.

Keywords: Chemical reaction optimization, expectation maximization, initial, objective function clustering.

2675 The Influence of Pad Thermal Diffusivity over Heat Transfer into the PCBs Structure

Authors: Mihai Brânzei, Ioan Plotog, Ion Pencea

Abstract:

Pads have unique values of thermophysical properties (THP) that make an important contribution to the heat transfer into the PCB structure. Materials with high thermal diffusivity (TD) rapidly adjust their temperature to that of their surroundings, because the heat transfer is quick compared with their volumetric heat capacity (VHC). The paper presents diffusivity tests (ASTM E1461 flash method) for PCBs with different core materials. In the experiments, the multilayer structure of the PCBA was taken into consideration, and an equivalent property for each experimental structure was effectively measured. Concerning the entire structure, the THP emphasize the major contribution of the substrate in establishing the heat transfer requirements of the reflow soldering process (RSP). This conclusion offers a practical solution for calculating the heat transfer time constant as a function of the thickness and the thermal diffusivity of the substrate material, with an acceptable estimation error.
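
For reference, the standard relations behind the estimation mentioned above: thermal diffusivity combines conductivity, density and specific heat, and the characteristic heat transfer time constant scales with the squared thickness over the diffusivity (the proportionality constant used in the paper's estimation is not given in the abstract):
\[
\alpha=\frac{k}{\rho\,c_p},\qquad \tau \propto \frac{L^{2}}{\alpha},
\]
where k is the thermal conductivity, ρ the density, c_p the specific heat capacity and L the substrate thickness.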

Keywords: Heat transfer time constant, packaging, reflow soldering process, thermal diffusivity.

2674 Organization of the Purchasing Function for Innovation

Authors: Jasna Prester, Ivana Rašić Bakarić, Božidar Matijević

Abstract:

Innovations not only contribute to the competitiveness of a company but also have positive effects on revenues. On average, product innovations account for 14 percent of companies' sales. Innovation management has changed substantially during the last decade because of the growing reliance on external partners. As a consequence, a new task for purchasing arises, as firms need to understand which suppliers actually have a high potential to contribute to the innovativeness of the firm and which do not. Proper organization of the purchasing function is important, since the majority of manufacturing companies deal with substantial material costs that pass through the purchasing function. In the past, the purchasing function was largely seen as a transaction-oriented, clerical function, but today purchasing is the intermediary with supply chain partners contributing to innovations, be they product or process innovations. Therefore, the purchasing function has to be organized differently to enable the firm's innovation potential. However, innovations are inherently risky. There are behavioral risks (that some partner will take advantage of the other party), technological risks in terms of the complexity of products, manufacturing processes and incoming materials, and finally market risks, which ultimately judge the value of the innovation. These risks are investigated in this work. Specifically, technological risks, which concern the complexity of the products and processes, are investigated more thoroughly. Buying components or such leading-edge technologies necessitates careful investigation of technical features and is therefore usually conducted by a team of experts. It is therefore hypothesized that the higher the technological risk, the higher the centralization of the purchasing function as an interface with other supply chain members. The main contribution of this research lies in the fact that the analysis was performed on a large data set of 1493 companies from 25 countries, collected in the GMRG 4 survey. Most analyses of the purchasing function are done through case studies of innovative firms; this study therefore contributes empirical evaluations that can be generalized.

Keywords: Purchasing function organization, innovation, technological risk, GMRG 4 survey.

2673 Angles of Arrival Estimation with Unitary Partial Propagator

Authors: Youssef Khmou, Said Safi

Abstract:

In this paper, we investigate the effect of a real-valued transformation of the spectral matrix of the received data on the angles-of-arrival estimation problem. The unitary transformation of the Partial Propagator (UPP) for narrowband sources is proposed and applied to a Uniform Linear Array (ULA).

Monte Carlo simulations compare the performance of the UPP spectrum with the Forward-Backward Partial Propagator (FBPP) and the Unitary Propagator (UP). The results demonstrate that when some of the sources are fully correlated and closer than the Rayleigh angular resolution limit of the broadside array, the UPP method outperforms the FBPP in both spatial resolution and complexity.
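
For context, the standard forward-backward averaging and unitary (real-valued) transformation on which such methods are built (a generic construction; the partial-propagator variant proposed in the paper is not reproduced):
\[
R_{\mathrm{fb}}=\tfrac{1}{2}\bigl(R+J R^{*} J\bigr),\qquad
R_{\mathrm{re}}=Q^{H} R_{\mathrm{fb}}\, Q\in\mathbb{R}^{M\times M},
\]
where J is the exchange (anti-identity) matrix and Q is a sparse unitary matrix that maps centro-Hermitian matrices to real ones, e.g. for an even number of sensors M = 2k,
\[
Q=\frac{1}{\sqrt{2}}\begin{pmatrix} I_k & jI_k\\ J_k & -jJ_k\end{pmatrix}.
\]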

Keywords: DOA, Uniform Linear Array, Narrowband, Propagator, Real valued transformation, Subspace, Unitary Operator.

2672 Green Function and Eshelby Tensor Based on Mindlin’s 2nd Gradient Model: An Explicit Study of Spherical Inclusion Case

Authors: A. Selmi, A. Bisharat

Abstract:

Using the Fourier transform and based on Mindlin's 2nd gradient model, which involves two length-scale parameters, the Green's function, the Eshelby tensor, and the Eshelby-like tensor for a spherical inclusion are derived. It is proved that the Eshelby tensor consists of two parts: the classical Eshelby tensor and a gradient part including the length-scale parameters, which enables the interpretation of the size effect. When the strain gradient is not taken into account, the obtained Green's function and Eshelby tensor reduce to their analogues based on classical elasticity. The Eshelby tensor inside and outside the inclusion, the volume average of the gradient part, and the Eshelby-like tensor are explicitly obtained. Unlike for the classical Eshelby tensor, the results show that the components of the new Eshelby tensor vary with the position and the inclusion dimensions. It is demonstrated that the contribution of the gradient part should not be neglected.

Keywords: Eshelby tensor, Eshelby-like tensor, Green’s function, Mindlin’s 2nd gradient model, Spherical inclusion.

2671 Ultrasonic Echo Image Adaptive Watermarking Using the Just-Noticeable Difference Estimation

Authors: Amnach Khawne, Kazuhiko Hamamoto, Orachat Chitsobhuk

Abstract:

Most of the image watermarking methods proposed in the literature use properties of the human visual system (HVS). The visual threshold component is usually related either to the spatial contrast sensitivity function (CSF) or to visual masking. Regarding contrast masking in particular, most methods do not consider the effect near edge regions, even though the HVS is sensitive to what happens in edge areas. This paper proposes ultrasound image watermarking using a visual threshold corresponding to the HVS, in which the coefficients in a DCT block are classified according to texture, edge, and plain areas. This classification is not only useful for imperceptibility when the watermark is inserted into an image but also achieves robust watermark detection. A comparison of the proposed method with other methods has been carried out, showing that the proposed method is robust to blockwise memoryless manipulations and also robust against noise addition.

Keywords: Medical image watermarking, Human Visual System, Image Adaptive Watermark
