Search results for: scale optimization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3131


2291 Application of Feed-Forward Neural Networks Autoregressive Models with Genetic Algorithm in Gross Domestic Product Prediction

Authors: E. Giovanis

Abstract:

In this paper we present a Feed-Forward Neural Network Autoregressive (FFNN-AR) model with genetic algorithm training optimization in order to predict the gross domestic product growth of six countries. Specifically, we propose a kind of weighted regression, which can be used for econometric purposes, where the initial inputs are multiplied by the neural network's final optimum input-hidden weights obtained from the training process. The forecasts are compared with those of the ordinary autoregressive model, and we conclude that the proposed regression's forecasts significantly outperform those of the autoregressive model. Moreover, this technique can be used in Autoregressive Moving Average models, with and without exogenous inputs, and the training process with genetic algorithm optimization can be replaced by the error back-propagation algorithm.
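
As a rough illustration of training a feed-forward autoregressive network with a genetic algorithm, the sketch below evolves the weights of a tiny one-hidden-layer net on lagged values of a synthetic series. The series, network size and GA settings are illustrative assumptions; it does not reproduce the paper's weighted-regression construction or its GDP data.

```python
# Minimal sketch: a one-hidden-layer feed-forward net on AR lags, trained by a
# simple genetic algorithm (illustrative data and hyperparameters, not the
# authors' exact FFNN-AR weighted-regression setup).
import numpy as np

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(0.5, 1.0, 200))        # stand-in for an economic series
p, hidden = 4, 5                                      # AR order and hidden units

X = np.array([series[t - p:t] for t in range(p, len(series))])
y = series[p:]

def unpack(genome):
    w1 = genome[:p * hidden].reshape(p, hidden)
    w2 = genome[p * hidden:p * hidden + hidden]
    bias = genome[-1]
    return w1, w2, bias

def predict(genome, X):
    w1, w2, bias = unpack(genome)
    return np.tanh(X @ w1) @ w2 + bias

def fitness(genome):
    return -np.mean((predict(genome, X) - y) ** 2)    # negative in-sample MSE

n_genes = p * hidden + hidden + 1
pop = rng.normal(0, 1, (60, n_genes))
for _ in range(200):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[::-1][:20]]      # truncation selection
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(0, 20, 2)]
        mask = rng.random(n_genes) < 0.5              # uniform crossover
        children.append(np.where(mask, a, b) + rng.normal(0, 0.1, n_genes))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(g) for g in pop])]
print("in-sample MSE:", np.mean((predict(best, X) - y) ** 2))
```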

Keywords: Autoregressive model, Feed-Forward neural networks, Genetic Algorithms, Gross Domestic Product

2290 Integrated ACOR/IACOMV-R-SVM Algorithm

Authors: Hiba Basim Alwan, Ku Ruhana Ku-Mahamud

Abstract:

One research direction for ACO is optimizing continuous and mixed (discrete and continuous) variables in problems with various types of data. The Support Vector Machine (SVM), which originates from the statistical approach, is a modern classification technique. The main problems of SVM are selecting the feature subset and tuning its parameters. Discretizing the continuous values of the parameters is the most common approach to tuning SVM parameters, but this process results in a loss of information, which affects the classification accuracy. This paper presents two algorithms that can simultaneously tune SVM parameters and select the feature subset. The first algorithm, ACOR-SVM, tunes the SVM parameters, while the second, IACOMV-R-SVM, simultaneously tunes the SVM parameters and selects the feature subset. Three benchmark UCI datasets were used in the experiments to validate the performance of the proposed algorithms. The results show that the proposed algorithms perform well compared to other approaches.
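
The sketch below illustrates ACOR-style continuous parameter tuning applied to an SVM: a rank-weighted solution archive is sampled with Gaussian kernels to propose new (C, gamma) pairs, scored by cross-validation. The dataset, parameter ranges and archive settings are illustrative assumptions, and the feature-subset selection of IACOMV-R-SVM is omitted.

```python
# Sketch of ACOR-style continuous tuning of SVM hyperparameters (C and gamma in
# log-space) using a rank-weighted solution archive and Gaussian sampling.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X, y = load_breast_cancer(return_X_y=True)           # stand-in for a UCI benchmark

def accuracy(log_params):
    C, gamma = 10.0 ** log_params
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

k, q, xi, n_ants, lo, hi = 10, 0.3, 0.85, 8, -3.0, 3.0
archive = rng.uniform(lo, hi, (k, 2))                 # rows: [log10(C), log10(gamma)]
scores = np.array([accuracy(s) for s in archive])

for _ in range(15):
    order = np.argsort(scores)[::-1]
    archive, scores = archive[order], scores[order]
    ranks = np.arange(1, k + 1)
    w = np.exp(-(ranks - 1) ** 2 / (2 * (q * k) ** 2))    # rank-based weights
    w /= w.sum()
    new = []
    for _ in range(n_ants):
        j = rng.choice(k, p=w)                            # pick a guiding solution
        sigma = xi * np.abs(archive - archive[j]).mean(axis=0)
        new.append(np.clip(rng.normal(archive[j], sigma + 1e-6), lo, hi))
    new = np.array(new)
    new_scores = np.array([accuracy(s) for s in new])
    archive = np.vstack([archive, new])                   # keep the k best overall
    scores = np.concatenate([scores, new_scores])
    best = np.argsort(scores)[::-1][:k]
    archive, scores = archive[best], scores[best]

print("best log10(C), log10(gamma):", archive[0], "accuracy:", scores[0])
```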

Keywords: Continuous ant colony optimization, incremental continuous ant colony, simultaneous optimization, support vector machine.

2289 Speaker Identification Using Admissible Wavelet Packet Based Decomposition

Authors: Mangesh S. Deshpande, Raghunath S. Holambe

Abstract:

Mel Frequency Cepstral Coefficient (MFCC) features are widely used as acoustic features for speech recognition as well as speaker recognition. In the MFCC representation, the Mel frequency scale gives high resolution in the low-frequency region and low resolution in the high-frequency region. This kind of processing is good for obtaining stable phonetic information, but it is not suitable for speaker features that are located in high-frequency regions. The speaker-specific information, which is non-uniformly distributed in the high frequencies, is equally important for speaker recognition. Based on this fact, we propose an admissible wavelet packet based filter structure for speaker identification. The multiresolution capabilities of the wavelet packet transform are used to derive the new features. The proposed scheme differs from previous wavelet-based work mainly in the design of the filter structure: unlike others, the proposed filter structure does not follow the Mel scale. Closed-set speaker identification experiments performed on the TIMIT database show improved identification performance compared to other commonly used Mel-scale-based filter structures using wavelets.
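
The following minimal sketch (using PyWavelets) shows how wavelet packet subband log-energies can be computed for one speech frame. The wavelet, depth and toy signal are assumptions for illustration; a full uniform packet tree is used rather than the admissible filter structure proposed in the paper.

```python
# Sketch: wavelet packet subband log-energies as frame-level features.
import numpy as np
import pywt

fs = 16000
t = np.arange(0, 0.032, 1 / fs)                       # one 32 ms frame
frame = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 3400 * t)

level = 3
wp = pywt.WaveletPacket(data=frame, wavelet='db8', mode='symmetric', maxlevel=level)
nodes = wp.get_level(level, order='freq')             # subbands ordered by frequency
energies = np.array([np.sum(node.data ** 2) for node in nodes])
features = np.log(energies + 1e-12)                   # log subband energies

for node, e in zip(nodes, features):
    print(node.path, round(float(e), 3))
```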

Keywords: Speaker identification, Wavelet transform, Feature extraction, MFCC, GMM.

2288 Generalized Rough Sets Applied to Graphs Related to Urban Problems

Authors: Mihai Rebenciuc, Simona Mihaela Bibic

Abstract:

As a branch of modern mathematics, graph theory provides instruments for optimization and for solving practical applications in various fields such as economic networks, engineering, network optimization, the geometry of social action and, more generally, complex systems including contemporary urban problems (path or transport efficiencies, biourbanism, etc.). In this paper we study the interconnection of urban networks, which leads to the problem of simulating one digraph by another; the simulation may be univocal or, more generally, multivocal. The concepts of fragment and atom are very useful in the study of connectivity in the simulated digraph, including an alternative evaluation of k-connectivity. The rough set approach to (bi)digraphs, proposed here for the first time, significantly improves the evaluation of k-connectivity. This rough set approach is based on generalized rough sets, whose basic facts are also presented in this paper.

Keywords: (Bi)digraphs, rough set theory, systems of interacting agents, complex systems.

2287 Bin Bloom Filter Using Heuristic Optimization Techniques for Spam Detection

Authors: N. Arulanand, K. Premalatha

Abstract:

A Bloom filter is a probabilistic and memory-efficient data structure designed to answer rapidly whether an element is present in a set. It can report that an element is definitely not in the set, whereas presence is reported only with a certain probability. The trade-off of using a Bloom filter is a configurable risk of false positives; the odds of a false positive can be made very low if the number of hash functions is sufficiently large. For spam detection, a weight is attached to each set of elements. The spam weight of a word is a measure used to rate the e-mail, and each word is assigned to a Bloom filter based on its weight. The proposed work introduces an enhanced concept of the Bloom filter called the Bin Bloom Filter (BBF). The performance of the BBF over the conventional Bloom filter is evaluated under various optimization techniques. A real-world data set and synthetic data sets are used for the experimental analysis, and results are reported for bin sizes 4, 5, 6 and 7. The results show that the BBF using heuristic techniques performs better than the traditional Bloom filter in spam detection.
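
A minimal Bloom filter sketch is shown below: k salted hashes map each word to k bit positions, and a membership query tests all k bits. Sizes, the hash construction and the word lists are illustrative; the weight-based bin grouping of the Bin Bloom Filter is not reproduced.

```python
# Minimal Bloom filter: k salted hashes map each item to k bit positions.
import hashlib

class BloomFilter:
    def __init__(self, m=1 << 16, k=5):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha1(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

spam_words = BloomFilter()
for word in ["lottery", "winner", "free", "prize"]:
    spam_words.add(word)

print("lottery" in spam_words)   # True
print("meeting" in spam_words)   # False, except with a small false-positive probability
```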

Keywords: Cuckoo search algorithm, Lévy flight, metaheuristic, optimal weight.

2286 A New Heuristic Approach for the Large-Scale Generalized Assignment Problem

Authors: S. Raja Balachandar, K. Kannan

Abstract:

This paper presents a heuristic approach to solve the Generalized Assignment Problem (GAP), which is NP-hard. Many researchers have developed algorithms for identifying redundant constraints and variables in linear programming models; some of these algorithms use the intercept matrix of the constraints to identify redundant constraints and variables prior to the start of the solution process. Here, a new heuristic based on the dominance property of the intercept matrix is proposed to find optimal or near-optimal solutions of the GAP. In this heuristic, redundant variables of the GAP are identified by applying the dominance property of the intercept matrix repeatedly. The heuristic is tested on 90 benchmark problems of sizes up to 4000 taken from the OR-Library, and the results are compared with optimum solutions. The computational complexity of solving the GAP using this approach is shown to be O(mn²). The performance of our heuristic is also compared with the best state-of-the-art heuristic algorithms with respect to solution quality. The encouraging results, especially for relatively large test problems, indicate that this heuristic approach can successfully be used for finding good solutions to highly constrained NP-hard problems.

Keywords: Combinatorial Optimization Problem, Generalized Assignment Problem, Intercept Matrix, Heuristic, Computational Complexity, NP-Hard Problems.

2285 Optimization of New 25A-size Metal Gasket Design Based on Contact Width Considering Forming and Contact Stress Effect

Authors: Didik Nurhadiyanto, Moch Agus Choiron, Ken Kaminishi, Shigeyuki Haruyama

Abstract:

In a previous study of the new metal gasket, contact width and contact stress were identified as important design parameters for optimizing gasket performance; however, the range of contact stress had not been investigated thoroughly. In this study, we conducted a gasket design optimization based on elastic and plastic contact stress analyses, considering the forming effect, using FEM. The gasket model was simulated in two stages: a forming simulation and a tightening simulation. Optimum designs based on the elastic and on the plastic contact stress were found. The final evaluation was made by measuring the helium leak quantity to check the leakage performance of both types of gasket. The helium leak test shows that the gasket based on the plastic contact stress design performs better than the one based on the elastic stress design.

Keywords: Contact stress, metal gasket, plastic, elastic

2284 Optimization of Loudspeaker Part Design Parameters by Air Viscosity Damping Effect

Authors: Yue Hu, Xilu Zhao, Takao Yamaguchi, Manabu Sasajima, Yoshio Koike, Akira Hara

Abstract:

This study optimized the design parameters of a cone loudspeaker as an example of highly flexible product design. We developed an acoustic analysis software program that considers the damping caused by air viscosity. In sound reproduction it is difficult to optimize each parameter of the loudspeaker design; to overcome this limitation in practice, this study presents an acoustic analysis algorithm for optimizing the design parameters of the loudspeaker. The material characteristics of the cone paper and the loudspeaker edge were the design parameters, and the vibration displacement of the cone paper was the objective function. The results of the analysis showed that the design had high accuracy compared to the predicted value. These results suggest that, although such parameter design is difficult to perform by experience and intuition alone, it can be carried out easily using the optimized design found with the acoustic analysis software.

Keywords: Air viscosity, design parameters, loudspeaker, optimization.

2283 Enhancing Predictive Accuracy in Pharmaceutical Sales Through an Ensemble Kernel Gaussian Process Regression Approach

Authors: Shahin Mirshekari, Mohammadreza Moradi, Hossein Jafari, Mehdi Jafari, Mohammad Ensaf

Abstract:

This research employs Gaussian Process Regression (GPR) with an ensemble kernel, integrating Exponential Squared, Revised Matérn, and Rational Quadratic kernels, to analyze pharmaceutical sales data. Bayesian optimization was used to identify optimal kernel weights: 0.76 for Exponential Squared, 0.21 for Revised Matérn, and 0.13 for Rational Quadratic. The ensemble kernel demonstrated superior predictive accuracy, achieving an R² score near 1.0 and significantly lower MSE, MAE, and RMSE. These findings highlight the efficacy of ensemble kernels in GPR for predictive analytics on complex pharmaceutical sales datasets.
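
A sketch of such an ensemble-kernel GPR with scikit-learn is shown below, using the reported weights. Synthetic monthly-sales-like data stands in for the pharmaceutical dataset, and mapping "Exponential Squared" to the standard RBF kernel and "Revised Matérn" to the standard Matérn kernel is an assumption.

```python
# Sketch: GPR with a weighted ensemble kernel on synthetic sales-like data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern, RationalQuadratic
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(42)
t = np.arange(120, dtype=float).reshape(-1, 1)                  # 120 months
sales = (100 + 0.8 * t.ravel() + 15 * np.sin(2 * np.pi * t.ravel() / 12)
         + rng.normal(0, 3, len(t)))

idx = rng.permutation(len(t))
train, test = idx[:100], idx[100:]

kernel = (0.76 * RBF(length_scale=10.0)
          + 0.21 * Matern(length_scale=10.0, nu=1.5)
          + 0.13 * RationalQuadratic(length_scale=10.0, alpha=1.0))

gpr = GaussianProcessRegressor(kernel=kernel, alpha=1e-2, normalize_y=True)
gpr.fit(t[train], sales[train])
pred = gpr.predict(t[test])

print("R2 :", round(r2_score(sales[test], pred), 3))
print("MSE:", round(mean_squared_error(sales[test], pred), 3))
```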

Keywords: Gaussian Process Regression, Ensemble Kernels, Bayesian Optimization, Pharmaceutical Sales Analysis, Time Series Forecasting, Data Analysis.

2282 Optimal Allocation of FACTS Devices for ATC Enhancement Using Bees Algorithm

Authors: R. Mohamad Idris, A. Khairuddin, M. W. Mustafa

Abstract:

In this paper, a novel method using the Bees Algorithm is proposed to determine the optimal allocation of FACTS devices for maximizing the Available Transfer Capability (ATC) of power transactions between source and sink areas in a deregulated power system. The algorithm simultaneously searches the FACTS locations, FACTS parameters and FACTS types. Two types of FACTS devices are simulated in this study, namely the Thyristor Controlled Series Compensator (TCSC) and the Static Var Compensator (SVC). A repeated power flow with FACTS devices, including ATC, is used to evaluate the feasible ATC value within real and reactive power generation limits, line thermal limits, voltage limits and FACTS operation limits. An IEEE 30-bus system is used to demonstrate the effectiveness of the algorithm as an optimization tool to enhance ATC. A Genetic Algorithm technique is used for validation purposes. The results clearly indicate that introducing FACTS devices with the right combination of location and parameters can enhance ATC, and that the Bees Algorithm can be used efficiently for this kind of nonlinear integer optimization.
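
The sketch below shows the basic Bees Algorithm loop (scouts, selected and elite sites, recruited neighbourhood search) on a toy continuous objective. In the paper the objective would be the ATC returned by a repeated power flow with FACTS limits; here a simple benchmark function stands in, the discrete choices (FACTS type and location) are not modeled, and all parameter values are illustrative.

```python
# Compact Bees Algorithm sketch on a toy continuous maximization problem.
import numpy as np

rng = np.random.default_rng(3)
dim, lo, hi = 4, -5.0, 5.0

def objective(x):                        # maximize (peak at the origin)
    return -np.sum(x ** 2)

n_scouts, n_best, n_elite = 30, 8, 3
bees_elite, bees_other = 10, 4           # recruited bees per site
patch = 1.0                              # neighbourhood (patch) size

sites = rng.uniform(lo, hi, (n_scouts, dim))
for _ in range(60):
    scores = np.array([objective(s) for s in sites])
    order = np.argsort(scores)[::-1]
    new_sites = []
    for rank, idx in enumerate(order[:n_best]):
        recruits = bees_elite if rank < n_elite else bees_other
        neighbours = np.clip(sites[idx] + rng.uniform(-patch, patch, (recruits, dim)),
                             lo, hi)
        best_local = max(np.vstack([neighbours, sites[idx][None]]), key=objective)
        new_sites.append(best_local)
    scouts = rng.uniform(lo, hi, (n_scouts - n_best, dim))   # remaining bees scout
    sites = np.vstack([new_sites, scouts])
    patch *= 0.98                        # slowly shrink the search patches

best = max(sites, key=objective)
print("best solution:", np.round(best, 3), "objective:", objective(best))
```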

Keywords: ATC, Bees Algorithm, TCSC, SVC

2281 Oscillation Criteria for Nonlinear Second-order Damped Delay Dynamic Equations on Time Scales

Authors: Da-Xue Chen, Guang-Hui Liu

Abstract:

In this paper, we establish several oscillation criteria for the nonlinear second-order damped delay dynamic equation $\bigl(r(t)\,|x^{\Delta}(t)|^{\beta-1}x^{\Delta}(t)\bigr)^{\Delta} + p(t)\,|x^{\Delta\sigma}(t)|^{\beta-1}x^{\Delta\sigma}(t) + q(t)\,f(x(\tau(t))) = 0$ on an arbitrary time scale $\mathbb{T}$, where $\beta > 0$ is a constant. Our results generalize and improve some known results in which $\beta > 0$ is a quotient of odd positive integers. Some examples are given to illustrate our main results.

Keywords: Oscillation, damped delay dynamic equation, time scale.

2280 A Novel Multiresolution based Optimization Scheme for Robust Affine Parameter Estimation

Authors: J. Dinesh Peter

Abstract:

This paper describes a new method for affine parameter estimation between image sequences. Parameter estimation is usually done by least squares with a quadratic cost; however, this technique is sensitive to the presence of outliers. Parameter estimation techniques for image processing applications should therefore be robust enough to withstand the influence of outliers, and robust estimation functions with non-quadratic and possibly non-convex potentials, adopted from the statistics literature, have been used for this purpose. To address the optimization of the error function in a practical framework aimed at finding a globally optimal solution, the minimization can begin with a convex estimator at the coarser level and gradually introduce non-convexity, i.e., move from soft to hard redescending non-convex estimators as the iterations reach the finer levels of the multiresolution pyramid. A comparison has been made between the performance of the proposed method and the results obtained individually using two different estimators.
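
As a simplified, single-resolution illustration of robust estimation with M-estimators, the sketch below fits affine parameters to point correspondences contaminated by outliers using iteratively reweighted least squares with a Huber weight. The coarse-to-fine schedule from convex to redescending estimators described in the abstract is not shown.

```python
# Sketch: robust affine parameter estimation from point correspondences using
# iteratively reweighted least squares (IRLS) with a Huber M-estimator.
import numpy as np

rng = np.random.default_rng(7)
A_true = np.array([[1.05, 0.10, 2.0],
                   [-0.08, 0.95, -1.0]])             # 2x3 affine parameters

pts = rng.uniform(0, 100, (200, 2))
src = np.hstack([pts, np.ones((200, 1))])            # homogeneous source points
dst = src @ A_true.T + rng.normal(0, 0.5, (200, 2))  # inliers with small noise
dst[:30] += rng.uniform(-40, 40, (30, 2))            # 15% gross outliers

def huber_weights(r, c=1.5):
    a = np.abs(r)
    return np.where(a <= c, 1.0, c / a)

w = np.ones(len(src))
for _ in range(20):
    W = np.sqrt(w)[:, None]                          # weighted least squares
    A = np.linalg.lstsq(W * src, W * dst, rcond=None)[0].T
    residuals = np.linalg.norm(dst - src @ A.T, axis=1)
    scale = 1.4826 * np.median(residuals) + 1e-9     # robust scale estimate
    w = huber_weights(residuals / scale)

print("estimated affine parameters:\n", np.round(A, 3))
```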

Keywords: Image Processing, Affine parameter estimation, Outliers, Robust Statistics, Robust M-estimators

2279 Environmental Potentials within the Production of Asphalt Mixtures

Authors: Florian Gschösser, Walter Purrer

Abstract:

The paper shows examples of the (environmental) optimization of production processes for asphalt mixtures applied to typical road pavements in Austria and Switzerland. The conducted "from-cradle-to-gate" LCA first analyzes the production of one cubic meter of asphalt and then all material production processes for exemplary highway pavements applied in Austria and Switzerland. It is shown that environmental impacts can be reduced by the application of reclaimed asphalt pavement (RAP) and by the optimization of specific production characteristics, e.g. reducing the initial moisture of the mineral aggregate and lowering the mixing temperature through the use of low-viscosity and foam bitumen. The results of the LCA study demonstrate reduction potentials per cubic meter of asphalt of up to 57% (Global Warming Potential, GWP) and 77% (Ozone Depletion Potential, ODP). The analysis per square meter of asphalt pavement determined environmental potentials of up to 40% (GWP) and 56% (ODP).

Keywords: Asphalt mixtures, environmental potentials, life cycle assessment, material production.

2278 A Multi-Objective Evolutionary Algorithm of Neural Network for Medical Diseases Problems

Authors: Sultan Noman Qasem

Abstract:

This paper presents an evolutionary algorithm for solving multi-objective optimization problems based on an artificial neural network (ANN). The multi-objective evolutionary algorithm used in this study is a genetic algorithm, and the ANN used is a radial basis function network (RBFN). The proposed algorithm is named memetic elitist Pareto non-dominated sorting genetic algorithm-based RBFN (MEPGAN). The proposed algorithm is applied to medical disease problems. The experimental results indicate that the proposed algorithm is viable and provides an effective means of designing multi-objective RBFNs with good generalization capability and compact network structure. This study shows that MEPGAN generates RBFNs with an appropriate balance between accuracy and simplicity compared to the other algorithms found in the literature.

Keywords: Radial basis function network, Hybrid learning, Multi-objective optimization, Genetic algorithm.

2277 A Decision Support System Based on Leprosy Scales

Authors: Dennys Robson Girardi, Hugo Bulegon, Claudia Maria Moro Barra

Abstract:

Leprosy is an infectious disease caused by Mycobacterium leprae. The disease generally compromises nerve fibers, leading to the development of disability. Disabilities are changes that limit the daily activities or social life of a normal individual. In leprosy, the study of disability considers functional limitation (physical disability), activity limitation and social participation, which are measured respectively by the EHF, SALSA and Participation scales. The objective of this work is to propose online monitoring of leprosy patients based on the information from the EHF, SALSA and Participation scales. The proposed system is expected to be applied in monitoring the patient during treatment and after the healing therapy of the disease. The correlations that the system establishes between the scales produce a variety of information, presenting the state of the patient and any changes or reductions in disability. The system provides reports with information from each of the scales and the relationships that exist between them. In this way, health professionals with access to the patient information can intervene with techniques for the prevention of disability. Through the automated scales, the system shows the patient's level and allows the patient, or the person responsible, to take preventive measures. With an online system, it is possible to carry out the assessments and monitor patients from anywhere.

Keywords: Leprosy, Medical Informatics, Decision Support System, Disability.

2276 An iTunes U App for Development of Metacognition Skills Delivered in the Enrichment Program Offered to Gifted Students at the Secondary Level

Authors: Maha Awad M. Almuttairi

Abstract:

This research aimed to measure the impact of using a mobile learning (iTunes U) app on the development of the metacognition skills delivered in the enrichment program offered to gifted students at the secondary level in Jeddah. The author targeted a group of students in an experimental design to evaluate achievement. The research sample consisted of 38 gifted female students. The instruments were a metacognition skills scale used to measure the performance of students in the enrichment program, and a satisfaction scale for assessing the technique used and the final product after completion of the program. The statistical treatment included the paired-samples t-test, Cronbach's alpha and the eta squared formula. The results showed a statistically significant difference (α ≤ 0.05) in the students' metacognition skills in favor of using iTunes U. In light of the experiment, a number of recommendations and suggestions were presented, the most important being to use mobile learning applications to provide enrichment programs for gifted students in the Kingdom of Saudi Arabia, and to conduct further research on mobile learning and gifted student teaching.

Keywords: Enrichment program, gifted students, metacognition skills.

2275 Comparative Productivity Analysis of Median Scale Battery Cage and Deep Litter Housing Chicken Egg Production in Rivers State, Nigeria

Authors: D. I. Ekine, C. C. Akpanibah

Abstract:

This paper analyses the productivity of median scale battery cage and deep litter chicken egg producers in Rivers State, Nigeria. 90 battery cage and 90 deep litter farmers, giving a total of 180 farmers, were sampled through a multistage sampling procedure. Mean productivity was higher for the battery cage than for the deep litter farmers, at 2.65 and 2.33 respectively. The productivity of battery cage farmers was positively influenced by age, extension contacts, experience and feed quantity, while the productivity of deep litter farmers was positively influenced by age, extension contacts, household size, experience and labour. The major constraints identified by both categories were high cost of feed, high price of day-old chicks, inadequate finance, lack of credit and high cost of drugs/vaccination. The work recommends that government assist chicken egg farmers through subsidies on input resources and policies that encourage financial institutions to give loans to farmers at low interest rates. Farmers should also abide by the recommended number of birds per unit area when stocking.

Keywords: Productivity, battery cage, deep litter, median scale, egg production.

2274 Optimization of Diverter Box Configuration in a V94.2 Gas Turbine Exhaust System using Numerical Simulation

Authors: A. Mohajer, A. Noroozi, S. Norouzi

Abstract:

The bypass exhaust system of a 160 MW combined cycle plant has been modeled and analyzed using 2D numerical simulation. The analysis was carried out using the commercial numerical simulation software FLUENT 6.2. All inputs were based on technical data gathered from the working conditions of a Siemens V94.2 gas turbine installed in the Yazd power plant. This paper deals with reducing the pressure drop in the bypass exhaust system using turning vanes mounted in the diverter box, in order to alleviate the turbulent energy dissipation rate above the diverter box. The geometry of the turning vanes has been optimized based on the flow pattern at the diverter box inlet. The results show that the use of optimized turning vanes in the diverter box can improve the flow pattern and eliminate vortices around sharp edges just before the silencer. Furthermore, this optimization decreases the pressure drop in the bypass exhaust system and leads to higher plant efficiency.

Keywords: Numerical simulation, Diverter box, Turning vanes, Exhaust system

2273 Precombining Adaptive LMMSE Detection for DS-CDMA Systems in Time Varying Channels: Non Blind and Blind Approaches

Authors: M. D. Kokate, T. R. Sontakke, P. W. Wani

Abstract:

This paper deals with an adaptive multiuser detector for direct-sequence code division multiple-access (DS-CDMA) systems. A modified receiver, the precombining LMMSE detector, is considered in a time-varying channel environment. Detector updating is performed with two criteria: mean square error (MSE) minimization and the MOE optimization technique. The adaptive implementation issues of these two schemes are quite different. The MSE criterion updates the filter weights by minimizing the error between the data vector and the adaptive vector. The MOE criterion, together with a canonical representation of the detector, results in a constrained optimization problem. Even though the canonical representation is very complicated under time-varying channels, it is analyzed under the assumption of an average power profile of the multipath replicas of the user of interest. The performance of both schemes is studied for practical SNR conditions. Results show that for poor SNR the MSE precombining LMMSE detector is better than the blind precombining LMMSE detector, whereas for higher SNR the MOE scheme performs better.

Keywords: LMMSE, MOE, MUD.

2272 Testing Database of Information System using Conceptual Modeling

Authors: Bogdan Walek, Cyril Klimes

Abstract:

This paper focuses on testing the database of an existing information system. We first describe the basic problems of implemented databases, such as data redundancy, poor design of the logical database structure, or inappropriate data types in the columns of database tables. These problems are often the result of an incorrect understanding of the primary requirements for the database of an information system. We then propose an algorithm to compare the conceptual model created from vague requirements for a database with a conceptual model reconstructed from the implemented database. The algorithm also suggests steps leading to optimization of the implemented database. The proposed algorithm is verified by an implemented prototype. The paper also describes a fuzzy system that works with the vague requirements for the database of an information system, a procedure for creating a conceptual model from vague requirements, and an algorithm for reconstructing a conceptual model from an implemented database.

Keywords: testing, database, relational database, information system, conceptual model, fuzzy, uncertain information, database testing, reconstruction, requirements, optimization

2271 Optimum Shape and Design of Cooling Towers

Authors: A. M. El Ansary, A. A. El Damatty, A. O. Nassef

Abstract:

The aim of the current study is to develop a numerical tool that is capable of achieving an optimum shape and design of hyperbolic cooling towers based on coupling a non-linear finite element model developed in-house and a genetic algorithm optimization technique. The objective function is set to be the minimum weight of the tower. The geometric modeling of the tower is represented by means of B-spline curves. The finite element method is applied to model the elastic buckling behaviour of a tower subjected to wind pressure and dead load. The study is divided into two main parts. The first part investigates the optimum shape of the tower corresponding to minimum weight assuming constant thickness. The study is extended in the second part by introducing the shell thickness as one of the design variables in order to achieve an optimum shape and design. Design, functionality and practicality constraints are applied.
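
A minimal sketch of the geometric side of such a tool is given below: the tower meridian is represented by a B-spline over control-point radii (the design variables), and the shell weight for a constant thickness is approximated as a surface of revolution. Dimensions, thickness and density are illustrative assumptions, and the in-house FEM buckling analysis and the genetic algorithm coupling are not included.

```python
# Sketch: B-spline meridian of a cooling tower and an approximate shell weight
# for constant thickness, treated as a surface of revolution.
import numpy as np
from scipy.interpolate import BSpline

height, thickness, rho = 120.0, 0.25, 2400.0                 # m, m, kg/m^3
ctrl_radii = np.array([45.0, 38.0, 30.0, 27.5, 26.0, 29.0])  # design variables
degree = 3
n_ctrl = len(ctrl_radii)
# clamped knot vector on [0, 1]
knots = np.concatenate([np.zeros(degree),
                        np.linspace(0, 1, n_ctrl - degree + 1),
                        np.ones(degree)])
radius = BSpline(knots, ctrl_radii, degree)

def shell_weight(radius_curve, n=400):
    s = np.linspace(0, 1, n)
    z = s * height
    r = radius_curve(s)
    dr_dz = np.gradient(r, z)
    # surface of revolution: dA = 2*pi*r*sqrt(1 + (dr/dz)^2) dz
    f = 2 * np.pi * r * np.sqrt(1 + dr_dz ** 2)
    area = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z))       # trapezoidal rule
    return rho * thickness * area                            # kg

print("approximate shell weight [t]:", round(shell_weight(radius) / 1000, 1))
```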

Keywords: B-splines, Cooling towers, Finite element, Genetic algorithm, Optimization

2270 Dynamic Measurement System Modeling with Machine Learning Algorithms

Authors: Changqiao Wu, Guoqing Ding, Xin Chen

Abstract:

In this paper, ways of modeling dynamic measurement systems are discussed. Specifically, a linear system with a single input and a single output can be modeled with a shallow neural network, and gradient-based optimization algorithms are then used to search for the proper coefficients. In addition, methods using the normal equation and second-order gradient descent are proposed to accelerate the modeling process, and ways of obtaining better gradient estimates are discussed. It is shown that the mathematical essence of the learning objective is maximum likelihood under Gaussian noise. For conventional gradient descent, mini-batch learning and gradient with momentum contribute to faster convergence and enhance model ability. Lastly, experimental results prove the effectiveness of the second-order gradient descent algorithm and indicate that optimization with the normal equation is the most suitable for linear dynamic models.
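
The sketch below contrasts the two fitting routes mentioned above for a linear single-input single-output model (a short regression on past input samples): the closed-form normal equation and mini-batch gradient descent with momentum. Data, model order and learning-rate settings are illustrative.

```python
# Sketch: fitting a linear SISO model by (a) the normal equation and
# (b) mini-batch gradient descent with momentum.
import numpy as np

rng = np.random.default_rng(5)
true_w = np.array([0.5, 0.3, -0.2, 0.1])               # impulse-response-like weights
u = rng.normal(0, 1, 2000)                              # input signal
X = np.array([u[t - 4:t][::-1] for t in range(4, len(u))])
y = X @ true_w + rng.normal(0, 0.05, len(X))            # noisy measured output

# (a) normal equation: w = (X^T X)^-1 X^T y
w_ne = np.linalg.solve(X.T @ X, X.T @ y)

# (b) mini-batch gradient descent with momentum
w, v = np.zeros(4), np.zeros(4)
lr, beta, batch = 0.05, 0.9, 64
for epoch in range(50):
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch):
        b = idx[start:start + batch]
        grad = 2 * X[b].T @ (X[b] @ w - y[b]) / len(b)
        v = beta * v + (1 - beta) * grad
        w -= lr * v

print("normal equation :", np.round(w_ne, 4))
print("gradient descent:", np.round(w, 4))
```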

Keywords: Dynamic system modeling, neural network, normal equation, second order gradient descent.

2269 Planning a Supply Chain with Risk and Environmental Objectives

Authors: Ghanima Al-Sharrah, Haitham M. Lababidi, Yusuf I. Ali

Abstract:

The main objective of the current work is to introduce sustainability factors into the optimization of supply chain models for the process industries. Supply chain models are normally based on purely economic considerations related to costs and profits. To account for sustainability, two additional factors have been introduced: environment and risk. A supply chain for an entire petroleum organization has been considered for implementing and testing the proposed optimization models. The environmental and risk factors were introduced as indicators reflecting the anticipated impact of the optimal production scenarios on sustainability. The aggregation method used to extend the single objective function to a multi-objective function proves to be quite effective in balancing the contribution of each objective term. The results indicate that introducing the sustainability factors slightly reduces the economic benefit while improving the environmental and risk-reduction performance of the process industries.
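
A toy weighted-sum aggregation of economic, environmental and risk terms into one linear objective is sketched below with SciPy's linprog. All products, coefficients and weights are invented for illustration and bear no relation to the petroleum supply chain data of the paper.

```python
# Toy weighted-sum aggregation of economic, environmental and risk objectives
# for a two-product production plan, solved as an LP.
import numpy as np
from scipy.optimize import linprog

profit = np.array([40.0, 55.0])      # profit per tonne of products A and B
emissions = np.array([0.8, 1.4])     # environmental indicator per tonne produced
risk = np.array([0.2, 0.5])          # relative risk indicator per tonne produced

# normalise each indicator by its value at full capacity, then weight
capacity = np.array([100.0, 80.0])
w_econ, w_env, w_risk = 0.6, 0.25, 0.15
c = (-w_econ * profit / (profit @ capacity)
     + w_env * emissions / (emissions @ capacity)
     + w_risk * risk / (risk @ capacity))          # linprog minimises

A_ub = [[1.0, 1.0],                  # shared processing capacity (tonnes)
        [2.0, 3.0]]                  # utility consumption limit
b_ub = [150.0, 380.0]
bounds = [(0, capacity[0]), (0, capacity[1])]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("production plan [t]:", np.round(res.x, 1))
print("profit:", round(float(profit @ res.x), 1))
```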

Keywords: Supply chain, optimization, LP models, risk, environmental indicators, multi-objective.

2268 Comparison between Minimum Direct and Indirect Jerks of Linear Dynamic Systems

Authors: Tawiwat Veeraklaew, Nathasit Phathana-im, Songkit Heama

Abstract:

Both minimum energy consumption and smoothness, which is quantified as a function of jerk, are generally needed in many dynamic systems, such as the automobile and the pick-and-place robot manipulator that handles fragile equipment. Nevertheless, many researchers focus solely on either minimum energy consumption or minimum jerk trajectories. This paper proposes a simple yet very interesting relationship between the minimum direct and indirect jerk approaches in designing a time-dependent system, yielding an alternative optimal solution. Extremal solutions for the cost functions of direct and indirect jerk are found using dynamic optimization methods together with numerical approximation. This allows us to simulate and compare, visually and statistically, the time histories of the control inputs produced by the minimum direct and indirect jerk designs. By considering the minimum indirect jerk problem, the numerical solution becomes much easier and yields results similar to those of the minimum direct jerk problem.

Keywords: Optimization, Dynamic, Linear Systems, Jerks.

2267 Sparse Unmixing of Hyperspectral Data by Exploiting Joint-Sparsity and Rank-Deficiency

Authors: Fanqiang Kong, Chending Bian

Abstract:

In this work, we exploit two assumed properties of the abundances of the observed signatures (endmembers) in order to reconstruct the abundances from hyperspectral data. Joint sparsity is the first property of the abundances, which assumes that adjacent pixels can be expressed as different linear combinations of the same materials. The second property is rank deficiency: the number of endmembers present in the hyperspectral data is very small compared with the dimensionality of the spectral library, which means that the abundance matrix of the endmembers is a low-rank matrix. These assumptions lead to an optimization problem for the sparse unmixing model that requires minimizing a combined $\ell_{2,p}$-norm and nuclear norm. We propose a variable splitting and augmented Lagrangian algorithm to solve the optimization problem. Experimental evaluation carried out on synthetic and real hyperspectral data shows that the proposed method outperforms state-of-the-art algorithms in spectral unmixing accuracy.

Keywords: Hyperspectral unmixing, joint-sparse, low-rank representation, abundance estimation.

2266 Combining Minimum Energy and Minimum Direct Jerk of Linear Dynamic Systems

Authors: V. Tawiwat, P. Jumnong

Abstract:

Both minimum energy consumption and smoothness, which is quantified as a function of jerk, are generally needed in many dynamic systems, such as the automobile and the pick-and-place robot manipulator that handles fragile equipment. Nevertheless, many researchers focus solely on either minimum energy consumption or minimum jerk trajectories. This paper proposes a simple yet very interesting approach that combines the minimum energy and minimum indirect jerk criteria in designing a time-dependent system, yielding an alternative optimal solution. Extremal solutions for the cost functions of minimum energy, minimum jerk, and their combination are found using dynamic optimization methods together with numerical approximation. This allows us to simulate and compare, visually and statistically, the time histories of the state inputs produced by the combined minimum energy and jerk designs. The numerical solutions of the combined problem and of the minimum direct jerk problem are exactly the same, while the minimum energy problem yields a similar solution, especially in terms of its tendency.

Keywords: Optimization, Dynamic, Linear Systems, Jerks.

2265 An Overview of Construction and Demolition Waste as Coarse Aggregate in Concrete

Authors: S. R. Shamili, J. Karthikeyan

Abstract:

Rapid growth of the world population and widespread urbanization have greatly expanded the construction industry. As a result of these activities, old structures are being demolished to make way for new buildings, and these large-scale demolitions generate a huge amount of debris all over the world, which ends up in landfills. The use of construction and demolition waste as landfill causes groundwater contamination, which is hazardous. Using construction and demolition waste as aggregate can reduce the use of natural aggregates and the problems associated with mining. The objective of this study is to provide a detailed overview of how construction and demolition waste material has been used as aggregate in structural concrete. The preparation, classification and composition of construction and demolition wastes are also discussed.

Keywords: Aggregate, construction and demolition waste, landfill, large scale demolition.

2264 Optimization of Transmission Lines Loading in TNEP Using Decimal Codification Based GA

Authors: H. Shayeghi, M. Mahdavi

Abstract:

Transmission network expansion planning (TNEP) is a basic part of power system planning that determines where, when and how many new transmission lines should be added to the network. Various methods have been presented to solve the static transmission network expansion planning (STNEP) problem, but none of them considers the lines' adequacy rate at the end of the planning horizon, i.e., the expanded network loses adequacy after some time and needs to be expanded again. In this paper, expansion planning is implemented by merging the line loading parameter into the STNEP and inserting the investment cost into the fitness function constraints using a genetic algorithm. The expanded network possesses maximum adequacy to supply the load demand, and the transmission lines are not overloaded later. Finally, an adequacy index is defined and used to compare designs that have different investment costs and adequacy rates. The proposed idea has been tested on Garver's network. The results show that the network possesses maximum economic efficiency.

Keywords: Adequacy Optimization, Transmission Expansion Planning, DCGA.

2263 Correlating Site-Specific Meteorological Data and Power Availability for Small-Scale, Multi-Source Renewable Energy Systems

Authors: James D. Clark, Bernard H. Stark

Abstract:

The paper presents a modelling methodology for small-scale, multi-source renewable energy systems. Using historical site-specific weather data, the relationships between cost, availability and energy form are visualised as a function of the sizing of photovoltaic arrays, wind turbines and battery capacity. The specific dependency of each site on its own particular weather patterns shows that a unique solution exists for each site. It is shown that in certain cases the capital component cost can be halved if the desired theoretical demand availability is reduced from 100% to 99%.

Keywords: Energy Analysis, Forecasting, Distributed power generation.

2262 Artificial Neural Network Approach for Inventory Management Problem

Authors: Govind Shay Sharma, Randhir Singh Baghel

Abstract:

The stock management of raw materials and finished goods is a significant issue for industries in fulfilling customer demand. Optimization of inventory strategies is crucial to enhancing customer service, reducing lead times and costs, and meeting market demand. This paper proposes an approach to predict the optimum stock level by utilizing past stock data and forecasting the required quantities. We utilize an Artificial Neural Network (ANN) to determine the optimal value; the objective of this paper is to discuss an optimized ANN that can find the best solution for the inventory model. The k-means algorithm is employed to create homogeneous groups of items; these groups exhibit similar characteristics or attributes that make them suitable for being managed with uniform inventory control policies. The paper proposes a method that uses the neural fit algorithm to control the cost of inventory.
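
A small sketch of the grouping-plus-forecasting idea is given below: k-means groups items with similar demand profiles, and an MLP regressor per group predicts next-period demand from recent history, which is then turned into a suggested stock level by adding a simple safety stock. The data, features and the forecast-to-stock rule are illustrative assumptions rather than the paper's exact method.

```python
# Sketch: k-means grouping of items + an MLP demand forecast per group,
# converted to a suggested stock level (forecast + safety stock).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(11)
n_items, n_periods = 60, 36
base = rng.uniform(20, 200, n_items)[:, None]
season = 1 + 0.3 * np.sin(2 * np.pi * np.arange(n_periods) / 12)
demand = base * season + rng.normal(0, 5, (n_items, n_periods))   # monthly demand

# 1) group items with similar (normalised) demand profiles
profiles = demand / demand.mean(axis=1, keepdims=True)
groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(profiles)

# 2) within each group, forecast next month's demand from the last 6 months
window = 6
for g in np.unique(groups):
    d = demand[groups == g]
    X = np.vstack([row[t - window:t] for row in d for t in range(window, n_periods)])
    y = np.hstack([row[t] for row in d for t in range(window, n_periods)])
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                         random_state=0).fit(X, y)
    forecast = model.predict(d[:, -window:])
    safety = 1.65 * d.std(axis=1)                   # simple safety stock
    print(f"group {g}: mean suggested stock level =",
          round(float((forecast + safety).mean()), 1))
```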

Keywords: Artificial Neural Network, inventory management, optimization, distributor center.
