Search results for: Optimization Algorithms
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3057


417 Optimization of Distribution Network Configuration for Loss Reduction Using Artificial Bee Colony Algorithm

Authors: R. Srinivasa Rao, S.V.L. Narasimham, M. Ramalingaraju

Abstract:

Network reconfiguration in distribution systems is realized by changing the status of sectionalizing switches to reduce the power loss in the system. This paper presents a new method which applies an artificial bee colony (ABC) algorithm for determining the sectionalizing switches to be operated in order to solve the distribution system loss minimization problem. The ABC algorithm is a population-based metaheuristic inspired by the intelligent foraging behavior of honeybee swarms. One advantage of the ABC algorithm is that it does not require external parameters such as the crossover and mutation rates used in genetic algorithms and differential evolution, which are hard to determine in advance. Another advantage is that global search ability is implemented through a neighborhood source production mechanism, which is similar to a mutation process. To demonstrate the validity of the proposed algorithm, computer simulations are carried out on 14-, 33-, and 119-bus systems and compared with different approaches available in the literature. The proposed method outperforms the other methods in terms of solution quality and computational efficiency.
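As a minimal sketch of the ABC loop described above, the following Python fragment minimizes a toy objective standing in for the network loss: employed and onlooker bees perturb food sources through the neighborhood production mechanism, and scouts replace exhausted sources. All names and parameter values are illustrative, not the paper's power-flow model.

import numpy as np

def abc_minimize(cost, dim, bounds, n_food=20, limit=50, iters=200, seed=0):
    # Minimal artificial bee colony sketch.
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    foods = rng.uniform(lo, hi, (n_food, dim))
    fit = np.array([cost(f) for f in foods])
    trials = np.zeros(n_food, dtype=int)

    def neighbor(i):
        # Neighborhood source production: v = x + phi * (x - x_partner),
        # the mutation-like step mentioned in the abstract.
        k = rng.integers(n_food - 1)
        k += k >= i                      # pick a partner different from i
        j = rng.integers(dim)
        v = foods[i].copy()
        v[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
        return np.clip(v, lo, hi)

    for _ in range(iters):
        # Employed bees visit every source; onlookers favor better sources.
        probs = fit.max() - fit + 1e-12
        probs /= probs.sum()
        for i in list(range(n_food)) + list(rng.choice(n_food, n_food, p=probs)):
            v = neighbor(i)
            fv = cost(v)
            if fv < fit[i]:
                foods[i], fit[i], trials[i] = v, fv, 0
            else:
                trials[i] += 1
        # Scout bees abandon sources that exceeded the trial limit.
        for i in np.where(trials > limit)[0]:
            foods[i] = rng.uniform(lo, hi, dim)
            fit[i] = cost(foods[i])
            trials[i] = 0
    best = fit.argmin()
    return foods[best], fit[best]

# Toy usage: a sphere function stands in for the loss objective.
x, f = abc_minimize(lambda x: float((x ** 2).sum()), dim=5, bounds=(-10, 10))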

Keywords: Distribution system, Network reconfiguration, Loss reduction, Artificial Bee Colony Algorithm.

416 Broadband Baseband Impedance Control for Linearity Enhancement in Microwave Devices

Authors: Muhammad Akmal Chaudhary

Abstract:

The out-of-band impedance environment is considered to be of paramount importance in engineering the in-band impedance environment. Presenting frequency-independent and constant out-of-band impedances across the wide modulation bandwidth is extremely important for reliable device characterization for future wireless systems. This paper presents an out-of-band impedance optimization scheme based on simultaneously engineering the significant baseband components IF1 (twice the modulation frequency) and IF2 (four times the modulation frequency) together with higher baseband components such as IF3 (six times the modulation frequency) and IF4 (eight times the modulation frequency), in order to engineer the in-band impedance environment. The investigations were carried out on a 10 W GaN HEMT device driven to deliver a peak envelope power of approximately 40.5 dBm under modulated excitation. Presenting frequency-independent baseband impedances to all the significant baseband components, whilst maintaining the optimum termination for the fundamental tones as well as a reactive termination for the 2nd harmonic under class-J operation, has revealed separate optimum impedances for best intermodulation (IM) linearity.

Keywords: Active load-pull, baseband, device characterisation, waveform measurements.

415 Motion Prediction and Motion Vector Cost Reduction during Fast Block Motion Estimation in MCTF

Authors: Karunakar A K, Manohara Pai M M

Abstract:

In the 3D-wavelet video coding framework, temporal filtering is performed along the trajectory of motion using Motion Compensated Temporal Filtering (MCTF). Hence, a computationally efficient motion estimation technique is essential for MCTF. In this paper, a predictive technique is proposed to reduce the computational complexity of the MCTF framework by exploiting the high correlation among the frames in a Group of Pictures (GOP). The proposed technique applies the coarse and fine searches of any fast block-based motion estimation only to the first pair of frames in a GOP. The generated motion vectors are supplied to the subsequent frames, even at subsequent temporal levels, and only a fine search is carried out around those predicted motion vectors. Hence, the coarse search is skipped for all motion estimation in a GOP except for the first pair of frames. The technique has been tested with different fast block-based motion estimation algorithms over standard test sequences using MC-EZBC, a state-of-the-art scalable video coder. The simulation results reveal a substantial reduction (20.75% to 38.24%) in the number of search points during motion estimation, without compromising the quality of the reconstructed video compared to non-predictive techniques. Since the motion vectors of every frame pair in a GOP except the first lie within ±1 of the motion vectors of the previous pair, the number of bits required for motion vectors is also reduced by 50%.
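A minimal sketch of the prediction idea, assuming plain SAD block matching: a full-range search runs only for the first frame pair of a GOP, after which each block inherits the previous pair's vector and is refined within a ±1 window. Function names and parameters are illustrative.

import numpy as np

def sad(a, b):
    return np.abs(a.astype(int) - b.astype(int)).sum()

def fine_search(cur, ref, bx, by, bs, pred, radius=1):
    # Refine an inherited motion vector within a small +/-radius window.
    h, w = ref.shape
    block = cur[by:by + bs, bx:bx + bs]
    best, best_mv = None, pred
    for dy in range(pred[0] - radius, pred[0] + radius + 1):
        for dx in range(pred[1] - radius, pred[1] + radius + 1):
            y, x = by + dy, bx + dx
            if 0 <= y <= h - bs and 0 <= x <= w - bs:
                cost = sad(block, ref[y:y + bs, x:x + bs])
                if best is None or cost < best:
                    best, best_mv = cost, (dy, dx)
    return best_mv

def gop_motion(frames, bs=16, full_range=8):
    # Wide search only for the first frame pair; later pairs inherit the
    # previous pair's vectors and run only the +/-1 fine refinement.
    mvs_prev, all_mvs = None, []
    for t in range(1, len(frames)):
        cur, ref = frames[t], frames[t - 1]
        mvs = {}
        for by in range(0, cur.shape[0] - bs + 1, bs):
            for bx in range(0, cur.shape[1] - bs + 1, bs):
                pred = mvs_prev[(by, bx)] if mvs_prev else (0, 0)
                radius = 1 if mvs_prev else full_range
                mvs[(by, bx)] = fine_search(cur, ref, bx, by, bs, pred, radius)
        all_mvs.append(mvs)
        mvs_prev = mvs
    return all_mvs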

Keywords: Motion Compensated Temporal Filtering, predictive motion estimation, lifted wavelet transform, motion vector.

414 Hippocampus Segmentation using a Local Prior Model on its Boundary

Authors: Dimitrios Zarpalas, Anastasios Zafeiropoulos, Petros Daras, Nicos Maglaveras

Abstract:

Segmentation techniques based on Active Contour Models have benefited strongly from the use of prior information during their evolution. Shape prior information is captured from a training set and introduced into the optimization procedure to restrict the evolution to allowable shapes. In this way, the evolution converges onto regions even with weak boundaries. Although significant effort has been devoted to different ways of capturing and analyzing prior information, very little thought has been given to how image information is combined with prior information. This paper focuses on a more natural way of incorporating the prior information in the level set framework. As a proof of concept, the method is applied to hippocampus segmentation in T1-MR images. Hippocampus segmentation is a very challenging task, due to the multivariate surrounding region and the missing boundary with the neighboring amygdala, whose intensities are identical. The proposed method mimics the way a human expert segments and thus improves segmentation accuracy.
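The combination the abstract alludes to can be pictured as a locally weighted blend of an image-driven force and a prior-driven force inside the level set update. The sketch below shows one generic explicit evolution step under that assumption; it is not the paper's exact formulation.

import numpy as np

def level_set_step(phi, image_force, prior_force, local_w, dt=0.5):
    # One explicit evolution step blending image and prior information:
    # near weak or missing boundaries local_w -> 1 (trust the prior),
    # near strong edges local_w -> 0 (trust the image term).
    gy, gx = np.gradient(phi)
    grad_mag = np.sqrt(gx ** 2 + gy ** 2) + 1e-12
    speed = (1.0 - local_w) * image_force + local_w * prior_force
    return phi + dt * speed * grad_mag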

Keywords: Medical imaging & processing, brain MRI segmentation, hippocampus segmentation, hippocampus-amygdala missing boundary, weak boundary segmentation, region-based segmentation, prior information, local weighting scheme in level sets, spatial distribution of labels, gradient distribution on boundary.

413 A Novel Approach for Coin Identification using Eigenvalues of Covariance Matrix, Hough Transform and Raster Scan Algorithms

Authors: J. Prakash, K. Rajesh

Abstract:

In this paper we present a new method for coin identification. The proposed method adopts a hybrid scheme using the eigenvalues of the covariance matrix, the Circular Hough Transform (CHT), and Bresenham's circle algorithm. The statistical and geometrical properties of the small and large eigenvalues of the covariance matrix of a set of edge pixels over a connected region of support are exploited for circular object detection. A sparse matrix technique is used to perform the CHT. Since sparse matrices squeeze out zero elements and contain only a small number of non-zero elements, they reduce both matrix storage space and computational time. A neighborhood suppression scheme is used to find the valid Hough peaks. The accurate position of the circumference pixels is identified using a raster scan algorithm, which exploits geometrical symmetry. After finding circular objects, the proposed method uses textons, the fundamental micro-structures of generic natural images, to describe the texture on the surface of the coins, which is unique to each coin. The method has been tested on several real-world images including coin and non-coin images. The performance is also evaluated in terms of noise tolerance.
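The eigenvalue test can be illustrated in a few lines: for a roughly circular set of edge pixels, the two eigenvalues of the coordinate covariance matrix are nearly equal, so their ratio serves as a circularity score. A toy sketch, not the paper's full pipeline:

import numpy as np

def eigen_circularity(edge_points):
    # Eigenvalues of the covariance of edge-pixel coordinates: a circular
    # region of support gives nearly equal eigenvalues, while elongated
    # shapes give a small ratio lambda_min / lambda_max.
    pts = np.asarray(edge_points, dtype=float)
    cov = np.cov(pts.T)              # 2x2 covariance of (x, y)
    lam = np.linalg.eigvalsh(cov)    # eigenvalues in ascending order
    return lam[0] / lam[1]

theta = np.linspace(0, 2 * np.pi, 200)
circle = np.c_[np.cos(theta), np.sin(theta)]       # ratio close to 1
ellipse = np.c_[3 * np.cos(theta), np.sin(theta)]  # ratio close to 1/9
print(eigen_circularity(circle), eigen_circularity(ellipse))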

Keywords: Circular Hough Transform, Coin detection, Covariance matrix, Eigenvalues, Raster scan Algorithm, Texton.

412 Production Planning for Animal Food Industry under Demand Uncertainty

Authors: Pirom Thangchitpianpol, Suttipong Jumroonrut

Abstract:

This research investigates the distribution of demand for animal food and the optimum production quantity at minimum cost. The data consist of customer purchase orders for laying-hen food, the price of the food, the inventory cost per unit, and the costs incurred when the food is out of stock, such as fines, overtime, and urgent material purchases. They were collected from January 1990 to December 2013 from a factory in Nakhonratchasima province. The collected data are analyzed in order to determine the distribution of the monthly food demand for laying hens and the inventory rate per unit. The results are used in a stochastic linear programming model for aggregate planning, from which the optimum production at minimum cost is obtained. The solution is computed with programs written in MATLAB using the linprog tool, taking the fitted demand distribution and random numbers as model inputs. The study shows that the monthly food demand for laying hens follows a normal distribution and yields the monthly average production amounts (unit: 30 kg) from January to December. The minimum average total cost over 12 months is 62,329,181.77 Baht, so the production plan reduces cost by 14.64% compared with the actual cost.
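A minimal scenario-based sketch of such a model in Python, with scipy's linprog standing in for MATLAB's: a single production quantity is chosen to minimize the expected holding plus shortage cost over demand scenarios sampled from a fitted normal distribution. All costs and distribution parameters are hypothetical.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
demand = rng.normal(1000.0, 150.0, size=200)   # sampled scenarios (30 kg units)
c_hold, c_short, S = 2.0, 10.0, len(demand)

# Variables: [q, h_1..h_S, u_1..u_S]; minimize expected holding + shortage.
c = np.r_[0.0, np.full(S, c_hold / S), np.full(S, c_short / S)]
# h_s >= q - d_s  ->  q - h_s <= d_s ;  u_s >= d_s - q  ->  -q - u_s <= -d_s
A = np.zeros((2 * S, 1 + 2 * S))
A[:S, 0] = 1.0
A[:S, 1:1 + S] = -np.eye(S)
A[S:, 0] = -1.0
A[S:, 1 + S:] = -np.eye(S)
b = np.r_[demand, -demand]
res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (1 + 2 * S))
print("optimal production:", res.x[0], "expected cost:", res.fun)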

Keywords: Animal food, Stochastic linear programming, Production planning, Demand Uncertainty.

411 Accurate Visualization of Graphs of Functions of Two Real Variables

Authors: Zeitoun D. G., Thierry Dana-Picard

Abstract:

The study of a real function of two real variables can be supported by visualization using a Computer Algebra System (CAS). One constraint of such systems is due to the algorithms implemented, which yield continuous approximations of the given function by interpolation. This often masks discontinuities of the function and can produce strange plots that are not compatible with the mathematics. In recent years, point-based geometry has gained increasing attention as an alternative surface representation, both for efficient rendering and for flexible geometry processing of complex surfaces. In this paper we present different artifacts created by mesh surfaces near discontinuities and propose a point-based method that controls and reduces these artifacts. A least-squares penalty method for automatic generation of a mesh that controls the behavior of the chosen function is presented. The special feature of this method is its ability to improve the accuracy of the surface visualization near a set of interior points where the function may be discontinuous. The method is formulated as a minimax problem, and the non-uniform mesh is generated using an iterative algorithm. Results show that for large, poorly conditioned matrices, the new algorithm gives more accurate results than the classical preconditioned conjugate gradient algorithm.
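The mesh artifact targeted here is easy to reproduce: a regular mesh interpolates straight across a pole and draws a spurious wall. A crude point-based remedy, assuming nothing about the paper's penalty method, is simply to drop sample points near the discontinuity:

import numpy as np
import matplotlib.pyplot as plt

# Naive meshes interpolate across the pole of f(x, y) = 1 / (x^2 + y^2);
# masking points near the discontinuity keeps only values the mesh can
# represent faithfully.
x, y = np.meshgrid(np.linspace(-2, 2, 201), np.linspace(-2, 2, 201))
z = 1.0 / (x ** 2 + y ** 2 + 1e-12)
z_masked = np.where(z > 50.0, np.nan, z)   # drop points near the pole

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(x, y, z_masked)
plt.show()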

Keywords: Function singularities, mesh generation, point allocation, visualization, collocation least squares method, augmented Lagrangian method, Uzawa's algorithm, preconditioned conjugate gradient.

410 Deep Reinforcement Learning Approach for Trading Automation in the Stock Market

Authors: Taylan Kabbani, Ekrem Duman

Abstract:

Deep Reinforcement Learning (DRL) algorithms can scale to previously intractable problems. The automation of profit generation in the stock market is possible using DRL, by combining the price "prediction" step and the portfolio "allocation" step in one unified process to produce fully autonomous systems capable of interacting with their environment to make optimal decisions through trial and error. This work presents a DRL model to generate profitable trades in the stock market, effectively overcoming the limitations of supervised learning approaches. We formulate the trading problem as a Partially Observed Markov Decision Process (POMDP) model, considering the constraints imposed by the stock market, such as liquidity and transaction costs. We then solve the formulated POMDP problem using the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm and achieve a 2.68 Sharpe ratio on the test dataset. From the point of view of stock market forecasting and intelligent decision-making mechanisms, this paper demonstrates the superiority of DRL over other types of machine learning in financial markets and shows its credibility and advantages for strategic decision-making.
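A minimal single-asset rollout capturing two ingredients named above, transaction costs in the reward and Sharpe-ratio evaluation. The momentum policy is a toy stand-in for a trained TD3 agent, and all parameters are illustrative.

import numpy as np

def backtest(prices, weights, cost_rate=0.001):
    # The action is a target position in [-1, 1]; reward is the portfolio
    # return net of transaction costs (the liquidity/cost constraint).
    rets = np.diff(prices) / prices[:-1]
    pos = np.asarray(weights)[:-1]
    turnover = np.abs(np.diff(np.r_[0.0, pos]))
    pnl = pos * rets - cost_rate * turnover
    sharpe = np.sqrt(252) * pnl.mean() / (pnl.std() + 1e-12)
    return pnl, sharpe

prices = np.cumprod(1 + np.random.default_rng(0).normal(0.0005, 0.01, 500))
actions = np.sign(np.gradient(prices))     # toy momentum policy
pnl, sharpe = backtest(prices, actions)
print("annualized Sharpe:", round(sharpe, 2))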

Keywords: Autonomous agent, deep reinforcement learning, MDP, sentiment analysis, stock market, technical indicators, twin delayed deep deterministic policy gradient.

409 Swarmed Discriminant Analysis for Multifunction Prosthesis Control

Authors: Rami N. Khushaba, Ahmed Al-Ani, Adel Al-Jumaily

Abstract:

One of the approaches enabling people with amputated limbs to establish some sort of interface with the real world is the utilization of the myoelectric signal (MES) from the remaining muscles of those limbs. The MES can be used as a control input to a multifunction prosthetic device. In this control scheme, known as myoelectric control, a pattern recognition approach is usually utilized to discriminate between the MES signals that belong to different classes of forearm movements. Since the MES is recorded using multiple channels, the feature vector can become very large. In order to reduce the computational cost and enhance the generalization capability of the classifier, a dimensionality reduction method is needed to identify an informative yet moderate-size feature set. This paper proposes a new fuzzy version of the well-known Fisher's Linear Discriminant Analysis (LDA) feature projection technique. Furthermore, based on the fact that certain muscles might contribute more to the discrimination process, a novel feature weighting scheme is also presented, employing Particle Swarm Optimization (PSO) to estimate the weight of each feature. The new method, called PSOFLDA, is tested on real MES datasets and compared with other techniques to demonstrate its superiority.
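The weighting step can be sketched as plain PSO over per-feature weights scored by a Fisher-style separability criterion. This is a simplified two-class stand-in for PSOFLDA, with illustrative PSO constants:

import numpy as np

def fisher_score(X, y, w):
    # Fisher criterion of the weighted features: between-class scatter
    # over within-class scatter, summed across dimensions (two classes).
    Xw = X * w
    a, b = Xw[y == 0], Xw[y == 1]
    between = (a.mean(0) - b.mean(0)) ** 2
    within = a.var(0) + b.var(0) + 1e-12
    return (between / within).sum()

def pso_weights(X, y, n_particles=30, iters=100, seed=0):
    # Minimal PSO over per-feature weights in [0, 1], maximizing the
    # Fisher criterion, in the spirit of the PSOFLDA weighting step.
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    pos = rng.uniform(0, 1, (n_particles, d))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pfit = np.array([fisher_score(X, y, p) for p in pos])
    g = pbest[pfit.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.uniform(size=(2, n_particles, d))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
        pos = np.clip(pos + vel, 0, 1)
        fit = np.array([fisher_score(X, y, p) for p in pos])
        improved = fit > pfit
        pbest[improved], pfit[improved] = pos[improved], fit[improved]
        g = pbest[pfit.argmax()].copy()
    return g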

Keywords: Discriminant Analysis, Pattern Recognition, Signal Processing.

408 Fabrication, Testing and Machinability Evaluation of Glass Fiber Reinforced Epoxy Composites

Authors: S. S. Panda, Arkesh Chouhan, Yogesh Deshpande

Abstract:

The present paper deals with designing and fabricating an apparatus for the speedy and accurate manufacturing of fiber-reinforced composite laminae of different orientations, thicknesses, and stacking sequences for testing. Properties derived through an analytical approach are verified by measuring the elastic modulus, ultimate tensile strength, flexural modulus, and flexural strength of the samples. The 0° orientation ply is stiffer than the 90° ply; similarly, the flexural strength of the 0° ply is higher than that of the 90° ply. The machinability of the samples has been studied by conducting a series of drilling experiments based on a Taguchi design. The multiple responses (delamination and damage grading) are combined using the desirability approach, and the optimum cutting condition (spindle speed, feed, and drill diameter) at which the responses are minimized is then obtained. Delamination increases nonlinearly with the increase in spindle speed. Likewise, the influence of the drill diameter on delamination is greater than that of the spindle speed and feed rate.
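The desirability approach maps each response onto [0, 1] and combines them geometrically. A smaller-is-better sketch with hypothetical response values and limits (the abstract does not give the actual ones):

import numpy as np

def desirability_minimize(y, y_best, y_worst, weight=1.0):
    # Derringer-Suich smaller-is-better desirability: 1 at y_best,
    # 0 at y_worst, with a power weight shaping the curve in between.
    d = (y_worst - y) / (y_worst - y_best)
    return np.clip(d, 0, 1) ** weight

# Hypothetical responses for one drilling run: delamination factor and
# a 1-5 damage grade, combined via the geometric mean (overall D).
d1 = desirability_minimize(np.array([1.12]), 1.0, 1.5)   # delamination
d2 = desirability_minimize(np.array([2.0]), 1.0, 5.0)    # damage grade
D = (d1 * d2) ** 0.5
print("overall desirability:", D)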

Keywords: Delamination, FRP composite, multi response optimization, Taguchi design.

407 Join and Meet Block Based Default Definite Decision Rule Mining from IDT and an Incremental Algorithm

Authors: Chen Wu, Jingyu Yang

Abstract:

Using maximal consistent blocks of the tolerance relation on the universe of an incomplete decision table, the concepts of join block and meet block are introduced and studied. Besides the tolerance class, other blocks such as the tolerant kernel and the compatible kernel of an object are also discussed. Upper and lower approximations based on those blocks are defined as well. Default definite decision rules acquired from an incomplete decision table are proposed in the paper. An incremental algorithm to update default definite decision rules is suggested for effective mining from an incomplete decision table to which data are appended. Through an example, we demonstrate how default definite decision rules based on maximal consistent blocks, join blocks, and meet blocks are acquired, and how optimization is performed with the support of the discernibility matrix and discernibility function in the incomplete decision table.
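For readers unfamiliar with the building blocks: two objects of an incomplete table are tolerant when they agree on every attribute where both values are known, and a maximal consistent block is a maximal set of pairwise tolerant objects. A brute-force sketch on a toy table (the paper's incremental algorithm is more sophisticated):

from itertools import combinations

MISSING = None

def tolerant(x, y):
    # Tolerance relation of an incomplete table: agreement on every
    # attribute where both values are known.
    return all(a == b or a is MISSING or b is MISSING for a, b in zip(x, y))

def tolerance_class(i, objs):
    return {j for j, y in enumerate(objs) if tolerant(objs[i], y)}

def maximal_consistent_blocks(objs):
    # Brute-force maximal sets of pairwise-tolerant objects; fine for
    # small illustrative tables.
    n = len(objs)
    blocks = []
    for r in range(n, 0, -1):
        for comb in combinations(range(n), r):
            if all(tolerant(objs[i], objs[j]) for i, j in combinations(comb, 2)):
                if not any(set(comb) <= b for b in blocks):
                    blocks.append(set(comb))
    return blocks

# Tiny incomplete decision table over two condition attributes.
objs = [(1, 0), (1, MISSING), (MISSING, 1), (0, 1)]
print(tolerance_class(1, objs))            # {0, 1, 2}
print(maximal_consistent_blocks(objs))     # [{0, 1}, {1, 2}, {2, 3}]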

Keywords: Rough set, incomplete decision table, maximal consistent block, default definite decision rule, join and meet block.

406 Statistical Optimization of Enzymatic Hydrolysis of Potato (Solanum tuberosum) Starch by Immobilized α-amylase

Authors: N. Peatciyammal, B. Balachandar, M. Dinesh Kumar, K. Tamilarasan, C. Muthukumaran

Abstract:

Enzymatic hydrolysis of starch from natural sources finds potential application in the commercial production of alcoholic beverages and bioethanol. In this study, the effects of starch concentration, temperature, time, and enzyme concentration on the hydrolysis of potato starch powder (mesh 80/120) into glucose syrup by α-amylase immobilized with sodium alginate were studied and optimized using a central composite design. The experimental results on the enzymatic hydrolysis of potato starch were subjected to multiple linear regression analysis using MINITAB 14 software. Positive linear effects of starch concentration, enzyme concentration, and time on the hydrolysis of potato starch by α-amylase were observed. The statistical significance of the model was validated by the F-test for analysis of variance (p ≤ 0.01). The optimum values of starch concentration, enzyme concentration, temperature, and time were found to be 6% (w/v), 2% (w/v), 40 °C, and 80 min, respectively. The maximum glucose yield at the optimum conditions was 2.34 mg/mL.
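Central composite designs are analyzed by fitting a full quadratic model by least squares. The sketch below builds the quadratic design matrix and reports R² on synthetic stand-in data; the paper's actual runs are not reproduced here.

import numpy as np

def quad_design(X):
    # Full quadratic model in coded variables: intercept, linear terms,
    # two-factor interactions, and squared terms, as in CCD/RSM.
    n, k = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    return np.column_stack(cols)

# Hypothetical coded runs (starch, enzyme, temperature, time) and yields.
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (30, 4))
y = (2 + 0.3 * X[:, 0] + 0.2 * X[:, 1] - 0.4 * X[:, 2] ** 2
     + rng.normal(0, 0.05, 30))          # stand-in for glucose yield

D = quad_design(X)
beta, *_ = np.linalg.lstsq(D, y, rcond=None)
pred = D @ beta
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print("R^2 of the fitted response surface:", round(r2, 3))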

Keywords: Alcoholic beverage, Central Composite Design, Enzymatic hydrolysis, Glucose yield, Potato Starch.

405 Auditing of Building Information Modeling Application in Decoration Engineering Projects in China

Authors: Lan Luo

Abstract:

In China's construction industry, it is normal practice to subcontract the decoration engineering part separately from construction engineering, and Building Information Modeling (BIM) is also done separately. Application of BIM in decoration engineering should be integrated with other disciplines, but current Chinese practice makes this very difficult and complicated. Currently, there are three barriers to the auditing of BIM application in decoration engineering in China: heavy workload; scarcity of qualified professionals; and lack of literature concerning audit contents, standards, and methods. Therefore, it is important to investigate what (contents) should be evaluated, in which phase, and by whom (professional qualifications) in BIM application in decoration construction, so that the application of BIM can be promoted more effectively. Based on this consideration, four principles of BIM auditing are proposed: comprehensiveness of information, accuracy of data, aesthetic attractiveness of appearance, and scheme optimization. In the model audit, three methods should be used: collision, observation, and contrast. In addition, BIM auditing at six stages is discussed, and a checklist of work items and results to be submitted is proposed. This checklist can be used as a reference by decoration project participants.

Keywords: Audit, evaluation, dimensions, methods, standards, building information modeling application, decoration engineering projects.

404 Three Tier Indoor Localization System for Digital Forensics

Authors: Dennis L. Owuor, Okuthe P. Kogeda, Johnson I. Agbinya

Abstract:

Mobile localization has attracted a great deal of attention recently due to the introduction of wireless networks. Although several localization algorithms and systems have been implemented and discussed in the literature, very few researchers have exploited the gap that exists between indoor localization, tracking, external storage of location information, and outdoor localization for the purpose of digital forensics during and after a disaster. The contribution of this paper lies in the implementation of a robust system that is capable of locating and tracking mobile device users and storing location information in the cloud, for both indoor and partially outdoor environments. The system can be used during a disaster to track and locate mobile phone users. The developed system is a mobile application built on Android with Hypertext Preprocessor (PHP), Cascading Style Sheets (CSS), JavaScript, and MATLAB. Using the waterfall model of software development, we have implemented a three-tier system that is able to track, locate, and store mobile device information in a secure cloud database on an almost real-time basis. The outcome of the study showed that the developed system is efficient at tracking and locating mobile devices. The system is also flexible, i.e., it can be used in any building with few adjustments. Finally, the system is accurate in locating and tracking mobile devices both indoors and outdoors.

Keywords: Indoor localization, waterfall, digital forensics, tracking and cloud.

403 Software Reliability Prediction Model Analysis

Authors: L. Mirtskhulava, M. Khunjgurua, N. Lomineishvili, K. Bakuria

Abstract:

Software reliability prediction offers a great opportunity to measure the software failure rate at any point throughout system test. A software reliability prediction model provides a technique for improving reliability. Software reliability is a very important factor in estimating overall system reliability, which depends on the individual component reliabilities. It differs from hardware reliability in that it reflects design perfection. The main reason for software reliability problems is the high complexity of software. Various approaches can be used to improve the reliability of software. In this article we focus on a software reliability model, assuming that there is time redundancy, the value of which (the number of repeated transmissions of basic blocks) can be an optimization parameter. We consider the given mathematical model under the assumption that the system may experience not only irreversible failures but also failures that can be treated as self-repairing and that significantly affect the reliability and accuracy of information transfer. The main task of this paper is to find the time distribution function (DF) of the transmission of an instruction sequence consisting of a random number of basic blocks. We consider the system software unreliable, with the time between adjacent failures exponentially distributed.
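Where a closed-form DF is cumbersome, the model is easy to probe numerically. A Monte Carlo sketch under simplified assumptions (fixed block time, Poisson-distributed block count, full retransmission of a block on failure), with illustrative rates:

import numpy as np

def total_time(rng, lam=0.02, t_block=1.0, mean_blocks=20):
    # An instruction sequence has a random number of basic blocks; a block
    # is retransmitted whenever an exponentially distributed failure
    # (rate lam) strikes before the block completes.
    n_blocks = rng.poisson(mean_blocks) + 1
    t = 0.0
    for _ in range(n_blocks):
        while True:
            t += t_block
            if rng.exponential(1.0 / lam) > t_block:   # no failure hit
                break
    return t

rng = np.random.default_rng(3)
samples = np.array([total_time(rng) for _ in range(10000)])
# Empirical distribution function of the sequence transmission time.
q = np.quantile(samples, [0.5, 0.9, 0.99])
print("median / 90% / 99% transmission time:", q)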

Keywords: Exponential distribution, conditional mean time to failure, distribution function, mathematical model, software reliability.

402 Distances over Incomplete Diabetes and Breast Cancer Data Based on Bhattacharyya Distance

Authors: Loai AbdAllah, Mahmoud Kaiyal

Abstract:

Missing values in real-world datasets are a common problem. Many algorithms have been developed to deal with this problem, most of which replace the missing values with a fixed value computed from the observed values. In our work, we used a distance function based on the Bhattacharyya distance, which measures the similarity of two probability distributions, to measure the distance between objects with missing values. The proposed distance distinguishes between known and unknown values: the distance between two known values is the Mahalanobis distance, while, when one of them is missing, the distance is computed from the distribution of the known values for the coordinate that contains the missing value. This method was integrated with Wikaya, a digital health company developing a platform that helps to improve the prevention of chronic diseases such as diabetes and cancer. For Wikaya's recommendation system to work, distances between users need to be measured, and since there are missing values in the collected data, a distance function between incomplete user profiles is needed. To evaluate the accuracy of the proposed distance function in reflecting the actual similarity between objects when some of them contain missing values, we integrated it within the framework of the k-nearest-neighbors (kNN) classifier, since its computation is based only on the similarity between objects. To validate this, we ran the algorithm over the diabetes and breast cancer datasets, standard benchmark datasets from the UCI repository. Our experiments show that the kNN classifier using our proposed distance function outperforms kNN using other existing methods.
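A minimal sketch of the idea under simplifying assumptions: per-feature variances instead of a full Mahalanobis matrix, and empirical column distributions for the missing case. The dataset, query, and helper names are illustrative.

import numpy as np

def build_missing_distance(X):
    # Coordinates where both values are known contribute a per-feature
    # Mahalanobis-style term; a coordinate with a missing value contributes
    # the expected distance to the observed distribution of that feature.
    cols = [X[:, j][~np.isnan(X[:, j])] for j in range(X.shape[1])]
    var = np.array([c.var() + 1e-12 for c in cols])

    def dist(a, b):
        d = 0.0
        for j in range(len(a)):
            if not np.isnan(a[j]) and not np.isnan(b[j]):
                d += (a[j] - b[j]) ** 2 / var[j]
            elif np.isnan(a[j]) and np.isnan(b[j]):
                d += 2.0                     # E[(X - Y)^2] / var = 2
            else:
                known = a[j] if np.isnan(b[j]) else b[j]
                d += np.mean((known - cols[j]) ** 2) / var[j]
        return np.sqrt(d)

    return dist

# kNN with the custom metric over a toy incomplete dataset.
X = np.array([[1.0, 2.0], [1.1, np.nan], [5.0, 6.0], [np.nan, 6.2]])
y = np.array([0, 0, 1, 1])
dist = build_missing_distance(X)
query = np.array([1.05, np.nan])
nn = np.argsort([dist(query, x) for x in X])[:3]
print("predicted class:", np.bincount(y[nn]).argmax())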

Keywords: Missing values, distance metric, Bhattacharyya distance.

401 An Approach for Coagulant Dosage Optimization Using Soft Jar Test: A Case Study of Bangkhen Water Treatment Plant

Authors: Ninlawat Phuangchoke, Waraporn Viyanon, Setta Sasananan

Abstract:

The most important step of the water treatment plant process is coagulation, which uses alum and polyaluminum chloride (PACL); determining the dosages of alum and PACL is therefore the most important factor to prescribe. This research applies an artificial neural network (ANN) trained with the Levenberg-Marquardt algorithm to create a mathematical model (soft jar test) for predicting the chemical doses used for coagulation, namely alum and PACL, with input data consisting of turbidity, pH, alkalinity, conductivity, and oxygen consumption (OC) at the Bangkhen Water Treatment Plant (BKWTP), under the authority of the Metropolitan Waterworks Authority of Thailand. The data were collected from 1 January 2019 to 31 December 2019 in order to cover the changing seasons of Thailand. The ANN input data are divided into three groups: a training set, a test set, and a validation set. The coefficient of determination and the mean absolute error are 0.73 and 3.18 for the alum model, and 0.59 and 3.21 for the PACL model, respectively.
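A compact stand-in for such a dose model, with synthetic records replacing the BKWTP logs; scikit-learn's MLPRegressor does not implement Levenberg-Marquardt, so its lbfgs solver is used as a rough substitute:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error, r2_score

# Hypothetical raw-water records: turbidity, pH, alkalinity, conductivity,
# oxygen consumption -> alum dose.
rng = np.random.default_rng(4)
X = rng.uniform([5, 6.5, 50, 100, 1], [200, 8.5, 150, 400, 10], (500, 5))
dose = 5 + 0.08 * X[:, 0] + 2.0 * X[:, 4] + rng.normal(0, 1.5, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, dose, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",
                     max_iter=2000, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("R2:", round(r2_score(y_te, pred), 2),
      "MAE:", round(mean_absolute_error(y_te, pred), 2))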

Keywords: Soft jar test, jar test, water treatment plant process, artificial neural network.

400 Intelligent Recognition of Diabetes Disease via FCM Based Attribute Weighting

Authors: Kemal Polat

Abstract:

In this paper, an attribute weighting method called fuzzy C-means clustering based attribute weighting (FCMAW) has been used for classification of the diabetes disease dataset. The aims of this study are to reduce the variance within the attributes of the diabetes dataset and to improve classification accuracy by transforming a non-linearly separable dataset into a linearly separable one. The Pima Indians diabetes dataset has two classes, comprising normal subjects (500 instances) and diabetes subjects (268 instances). Fuzzy C-means clustering is an improved version of the K-means clustering method and is one of the most widely used clustering methods in data mining and machine learning applications. In this study, as the first stage, fuzzy C-means clustering was used to find the centers of the attributes in the Pima Indians diabetes dataset, and the dataset was then weighted according to the ratios of the attribute means to those centers. Secondly, after the weighting process, classifier algorithms including the support vector machine (SVM) and the k-NN (k-nearest neighbor) classifier were used to classify the weighted Pima Indians diabetes dataset. Experimental results show that the proposed attribute weighting method (FCMAW) obtains very promising results in the classification of the Pima Indians diabetes dataset.
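One reading of the weighting stage, sketched below: run a compact fuzzy C-means per attribute, then scale the attribute by the ratio of its mean to the mean of its cluster centers. This is an interpretation of the abstract, not necessarily the author's exact procedure; the weighted matrix would then be fed to an SVM or kNN.

import numpy as np

def fcm_centers(x, c=2, m=2.0, iters=100, seed=0):
    # Compact fuzzy C-means on one 1-D attribute: alternate the membership
    # and center updates for a fixed number of iterations.
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, c, replace=False).astype(float)
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = d ** -p / (d ** -p).sum(axis=1, keepdims=True)   # memberships
        centers = (u ** m * x[:, None]).sum(0) / (u ** m).sum(0)
    return centers

def fcm_attribute_weighting(X):
    # Weight each attribute by the ratio of its mean to the mean of its
    # fuzzy cluster centers, shrinking within-attribute variance.
    Xw = X.astype(float).copy()
    for j in range(X.shape[1]):
        centers = fcm_centers(Xw[:, j])
        Xw[:, j] *= Xw[:, j].mean() / (centers.mean() + 1e-12)
    return Xw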

Keywords: Fuzzy C-means clustering, Fuzzy C-means clustering based attribute weighting, Pima Indians diabetes dataset, SVM.

399 Double Manifold Sliding Mode Observer for Sensorless Control of Multiphase Induction Machine under Fault Condition

Authors: Mohammad Jafarifar

Abstract:

A multiphase induction machine (IM) is normally controlled using rotor-field-oriented vector control. Under the loss of one or more phases, the machine currents can still be optimally controlled to satisfy certain optimization criteria. In this paper we discuss the performance of a double manifold sliding mode observer (DM-SMO) in the sensorless control of a multiphase induction machine under an unsymmetrical condition (one phase loss). The observer is developed using the IM model in the stationary reference frame. The DM-SMO is constructed by adding an extra feedback term to the conventional single manifold sliding mode observer (SM-SMO) proposed in much of the literature. This leads to a fully convergent observer that also yields accurate estimates of the speed and stator currents. Simulation results show that the estimated speed and currents agree very well with the real quantities, with negligible error between them. A parameter sensitivity analysis also shows that the method is rather robust against parameter variation.

Keywords: Multiphase induction machine, field oriented control, sliding mode, unsymmetrical condition, manifold.

398 An Overview on Aluminum Matrix Composites: Liquid State Processing

Authors: S. P. Jordan, G. Christian, S. P. Jeffs

Abstract:

Modern composite materials are increasingly being chosen to replace heavier metallic material systems in many engineering fields, including the aerospace and automotive industries. The increasing push towards satisfying environmental targets is fuelling new material technologies and manufacturing processes. This paper introduces metal matrix composite materials and manufacturing processes, including processes optimized at Alvant Ltd., based in Basingstoke in the UK, which offers modern, cost-effective, selectively reinforced composites for light-weighting applications within engineering. An overview of and introduction to modern optimized manufacturing methods capable of producing viable replacements for heavier metallic and lower-temperature-capable polymer composites is offered. A review of the capabilities and future applications of this viable material is presented to highlight the potential of further optimizing established manufacturing techniques, so as to fully realize the potential for light-weighting materials using cost-effective methods.

Keywords: Aluminum matrix composites, light-weighting, hybrid squeeze casting, strategically placed reinforcements.

397 Decreasing of Displacements of Prestressed Cable Truss

Authors: V. Goremikins, K. Rocens, D. Serdjuks

Abstract:

Suspended cable structures are preferable for covering large spans due to their rational use of structural materials, but a key problem of suspended cable structures is initial shape change under the action of non-symmetrical load. The problem can be solved by increasing the ratio of dead weight to imposed load, but this method increases material consumption. Using a prestressed cable truss is another way the problem of shape change under non-symmetrical load can be addressed. Better results can be achieved if the top chord is replaced with a cable truss with a cross web. A rational structure of the cable truss for the top chord of a prestressed cable truss was developed using optimization in the FEM program ANSYS 12. The behavior of single cable and cable truss models was investigated. Analytical and model testing results indicate that using a cable truss with a cross web as the top chord of a prestressed cable truss, instead of a single cable, reduces total displacements by 13-16% in the case of non-symmetrical load. In the case of uniformly distributed load, a single cable is preferable.

Keywords: Cable trusses, Non-symmetrical load, Cable truss models, Vertical displacements

396 Optimization of Lean Methodologies in the Textile Industry Using Design of Experiments

Authors: Ahmad Yame, Ahad Ali, Badih Jawad, Daw Al-Werfalli Mohamed Nasser, Sabah Abro

Abstract:

Industries in general generate a great deal of waste. The wool textile company in Baniwalid, Libya, has many complex problems that have led to enormous waste due to the lack of lean strategies, expertise, technical support, and commitment. To successfully address waste at the wool textile company, this study develops a methodical approach that integrates lean manufacturing tools to optimize performance characteristics such as lead time and delivery. The methodology uses Value Stream Mapping (VSM) techniques to identify the process variables that affect production. Once these variables are identified, Design of Experiments (DOE) methodology is used to determine the significantly influential process variables; these variables are then controlled and set at their optimal levels to achieve optimal productivity, quality, agility, efficiency, and delivery, and the outputs of the simulation model are analyzed for different lean configurations. The goal of this research is to investigate how the tools of lean manufacturing can be adapted from the discrete to the continuous manufacturing environment and to evaluate their benefits at a specific industrial site.

Keywords: Lean manufacturing, DOE, value stream mapping, textiles.

395 Dimensionality Reduction of PSSM Matrix and its Influence on Secondary Structure and Relative Solvent Accessibility Predictions

Authors: Rafał Adamczak

Abstract:

State-of-the-art methods for secondary structure prediction (Porter, Psi-PRED, SAM-T99sec, Sable) and solvent accessibility prediction (Sable, ACCpro) use evolutionary profiles represented by the position specific scoring matrix (PSSM). It has been demonstrated that evolutionary profiles are the most important features in the feature space for these predictions. Unfortunately, using the PSSM matrix leads to high-dimensional feature spaces that may create problems with parameter optimization and generalization. Several recently published studies suggested that applying feature extraction to the PSSM matrix may result in improvements in secondary structure predictions. However, none of the top-performing methods considered here utilizes dimensionality reduction to improve generalization. In the present study, we used simple and fast methods for feature selection (t-statistics, information gain) that allow us to decrease the dimensionality of the PSSM matrix by 75% and improve generalization in the case of secondary structure prediction, compared to the Sable server.
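Such a filter step is short in scikit-learn. The sketch keeps the top 25% of columns by mutual information, an information-gain-style score, on synthetic data with PSSM-like dimensions:

import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Synthetic stand-in: 400 sequence windows x 300 PSSM-derived features,
# 3 classes (helix / strand / coil); keep the top 25% of columns.
rng = np.random.default_rng(5)
X = rng.normal(size=(400, 300))
y = rng.integers(0, 3, 400)

k = X.shape[1] // 4
selector = SelectKBest(mutual_info_classif, k=k).fit(X, y)
X_reduced = selector.transform(X)
print(X.shape, "->", X_reduced.shape)    # (400, 300) -> (400, 75)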

Keywords: Secondary structure prediction, feature selection, position specific scoring matrix.

394 Application of Legendre Transformation to Portfolio Optimization

Authors: Peter Benneth, Tsaroh N. Theophilus, Prince Benjamin

Abstract:

This research work studies the application of the Legendre Transformation Method (LTM) to the Hamilton-Jacobi-Bellman (HJB) equation, an example of an optimal control problem. We discuss the steps involved in modelling the HJB equation as it relates to mathematical finance, applying Ito's lemma and the maximum principle theorem. By applying the LTM and dual theory, the resulting HJB equation is transformed into a linear Partial Differential Equation (PDE). The Optimal Investment Strategy (OIS) and the optimal value function are obtained under the exponential utility function. Furthermore, some numerical results are presented, with the observation that the OIS under exponential utility is directly proportional to the appreciation rate of the risky asset and inversely proportional to the instantaneous volatility, the predetermined interest rate, and the risk-aversion coefficient. Finally, it is observed that the optimal fund size is an increasing function of the risk-free interest rate. This result is consistent with existing results.
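The dual construction referred to above is, in its standard form, the following; this is a generic statement of the Legendre transform and its optimizer, not the paper's specific derivation:

\hat{V}(z,t) = \sup_{x>0}\{\,V(x,t) - zx\,\}, \qquad
g(z,t) = \arg\sup_{x>0}\{\,V(x,t) - zx\,\}, \qquad
V_x\bigl(g(z,t),t\bigr) = z .

Substituting x = g(z,t) into the HJB equation replaces the nonlinear terms in V with expressions that are linear in the dual variable, which is the linearization step the abstract describes.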

Keywords: Legendre transformation method, Optimal investment strategy, Ito’s lemma, Hamilton Jacobi Bellman equation, Geometric Brownian motion, financial market.

393 Optimization of Diluted Organic Acid Pretreatment on Rice Straw Using Response Surface Methodology

Authors: Rotchanaphan Hengaroonprasan, Malinee Sriariyanun, Prapakorn Tantayotai, Supacharee Roddecha, Kraipat Cheenkachorn

Abstract:

Lignocellulosic material is resistant to degradation by microorganisms and hydrolytic enzymes. To be used as a material for biofuel production, it needs a pretreatment process to improve the efficiency of hydrolysis. In this work, chemical pretreatments of rice straw using three diluted organic acids, namely acetic acid, citric acid, and oxalic acid, were optimized. Using Response Surface Methodology (RSM), the effects of three pretreatment parameters, acid concentration, treatment time, and reaction temperature, on pretreatment efficiency were statistically evaluated. The results indicated that dilute oxalic acid pretreatment led to the highest enhancement of enzymatic saccharification by commercial cellulase, yielding up to 10.67 mg/mL of sugar when using 5.04% oxalic acid at 137.11 °C for 30.01 min. In comparison, the maximum sugar yields for pretreatment with acetic acid, citric acid, and hydrochloric acid were 7.07, 6.30, and 8.53 mg/mL, respectively. This demonstrates that organic acids can be used for the pretreatment of lignocellulosic materials to enhance the hydrolysis process, which could be integrated into various biorefinery processes.

Keywords: Lignocellulosic biomass, pretreatment, organic acid, response surface methodology, biorefinery.

392 Application of Java-based Pointcuts in Aspect Oriented Programming (AOP) for Data Race Detection

Authors: Sadaf Khalid, Fahim Arif

Abstract:

The wide applicability of concurrent programming practices in developing various software applications leads to different concurrency errors, amongst which data races are the most important. Java provides strong support for concurrent programming through its concurrency packages. Aspect oriented programming (AOP) is a modern programming paradigm facilitating the runtime interception of events of interest, and it can be effectively used to handle concurrency problems. AspectJ, an aspect-oriented extension to Java, facilitates the application of AOP concepts for data race detection. Volatile variables are usually considered thread-safe, but they can become candidates for data races if non-atomic operations are performed concurrently upon them. Various data race detection algorithms have been proposed in the past, but this issue of volatility and atomicity remains unaddressed. The aim of this research is to propose suggestions for incorporating certain conditions for data race detection in Java programs at volatile fields, taking into account the support for atomicity in the Java concurrency packages and making use of pointcuts. Two simple test programs demonstrate the results of the research. The results are verified on two different Java Development Kits (JDKs) for the purpose of comparison.

Keywords: Aspect Bench Compiler (abc), Aspect OrientedProgramming (AOP), AspectJ, Aspects, Concurrency packages, Concurrent programming, Cross-cutting Concerns, Data race, Eclipse, Java, Java Development Kits (JDKs), Pointcuts

391 Sorption of Nickel by Hypnea Valentiae: Application of Response Surface Methodology

Authors: M. Rajasimman, K. Murugaiyan

Abstract:

In this work, the sorption of nickel from aqueous solution on Hypnea valentiae, a red macroalga, was investigated. Batch experiments were carried out to find the effect of various parameters such as pH, temperature, sorbent dosage, metal concentration, and contact time on the sorption of nickel using Hypnea valentiae. Response surface methodology (RSM) was employed to optimize the process parameters. Based on a central composite design, a quadratic model was developed to correlate the process variables to the response. The most influential factor on each experimental design response was identified from the analysis of variance (ANOVA). The optimum conditions for the sorption of nickel were found to be pH 5.1, temperature 36.8 °C, sorbent dosage 5.1 g/L, metal concentration 100 mg/L, and contact time 30 min. At these optimized conditions the maximum removal of nickel was found to be 91.97%. A coefficient of determination (R²) of 0.9548 shows the fitness of response surface methodology in this work.
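Once RSM has produced a fitted quadratic in coded units, the reported optimum conditions come from maximizing that surface over the design region. A sketch with illustrative coefficients, not the paper's fitted model:

import numpy as np
from scipy.optimize import minimize

# Fitted quadratic y(x) = b0 + b.x + x'Bx in coded units; the optimum is
# found by maximizing it over the central-composite design cube.
b0, b = 85.0, np.array([3.0, -1.0, 2.0, 1.5, 0.5])
B = -np.diag([4.0, 2.0, 3.0, 1.0, 2.5])          # concave surface

def removal(x):
    return b0 + b @ x + x @ B @ x

res = minimize(lambda x: -removal(x), x0=np.zeros(5),
               bounds=[(-2, 2)] * 5)             # coded CCD range
print("coded optimum:", res.x.round(2), "max removal:", -res.fun)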

Keywords: Optimization, metal, Hypnea valentia, response surface methodology, red algae.

390 Optimization of Surface Finish in Milling Operation Using Live Tooling via Taguchi Method

Authors: Harish Kumar Ponnappan, Joseph C. Chen

Abstract:

The main objective of this research is to optimize the surface roughness of a milling operation on AISI 1018 steel using live tooling on a HAAS ST-20 lathe. In this study, Taguchi analysis is used to optimize the milling process by investigating the effect of different machining parameters on surface roughness. An L9 orthogonal array is designed with four controllable factors at three levels each and one uncontrollable factor, resulting in 18 experimental runs. The optimal parameters determined from the Taguchi analysis were a feed rate of 76.2 mm/min, a spindle speed of 1150 rpm, a depth of cut of 0.762 mm, and 2-flute TiN-coated high-speed steel as the tool material. The process capability Cp and process capability index Cpk were improved from 0.62 and -0.44 to 1.39 and 1.24, respectively. The average surface roughness value from the confirmation runs was 1.30 µm, decreasing the defect rate from 87.72% to 0.01%. The purpose of this study is to efficiently utilize the Taguchi design to optimize the surface roughness in a milling operation using live tooling.
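Two calculations underpin the analysis above: the smaller-the-better signal-to-noise ratio by which Taguchi analysis ranks factor levels, and the Cp/Cpk capability indices quoted for the confirmation runs. A sketch with hypothetical replicate values and specification limits, since the abstract gives neither:

import numpy as np

def sn_smaller_is_better(y):
    # Taguchi smaller-the-better signal-to-noise ratio for roughness.
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

def cp_cpk(y, lsl, usl):
    # Process capability indices from confirmation-run measurements.
    mu, sigma = np.mean(y), np.std(y, ddof=1)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical roughness replicates (um) for one L9 run and assumed limits.
runs = [1.28, 1.31, 1.30, 1.33]
print("S/N:", round(sn_smaller_is_better(runs), 2))
print("Cp, Cpk:", cp_cpk(runs, lsl=0.0, usl=2.0))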

Keywords: Live tooling, surface roughness, Taguchi analysis, Computer Numerical Control (CNC) milling operation, CNC turning operation.

389 Selection of Best Band Combination for Soil Salinity Studies Using ETM+ Satellite Images (A Case Study: Nyshaboor Region, Iran)

Authors: Sanaeinejad, S. H., A. Astaraei, P. Mirhoseini Mousavi, M. Ghaemi

Abstract:

One of the main environmental problems affecting extensive areas in the world is soil salinity. Traditional data collection methods are neither sufficient for assessing this important environmental problem nor accurate enough for soil studies. Remote sensing data could overcome most of these problems. Although satellite images are commonly used for these studies, there is still a need to find the best calibration between the data and the real situation in each specific area. The Neyshaboor area, in the northeast of Iran, was selected as the field study area of this research. Landsat satellite images of this area were used in order to prepare suitable learning samples for processing and classifying the images. 300 locations were selected randomly in the area for soil sampling, and 273 of them were finally retained for further laboratory work and image processing analysis. The electrical conductivity of all samples was measured. Six reflective bands of ETM+ satellite images taken of the study area in 2002 were used for soil salinity classification. The classification was carried out using common algorithms based on the best band composition. The results showed that reflective bands 7, 3, 4, and 1 are the best band composition for preparing color composite images. We also found that hybrid classification is a suitable method for identifying and delineating the different salinity classes in the area.

Keywords: Soil salinity, Remote sensing, Image processing, ETM+, Nyshaboor

388 Modeling, Analysis and Control of a Smart Composite Structure

Authors: Nader H. Ghareeb, Mohamed S. Gaith, Sayed M. Soleimani

Abstract:

In modern engineering, weight optimization is a priority during the design of structures. However, optimizing the weight can result in lower stiffness and less internal damping, causing the structure to become excessively prone to vibration. To overcome this problem, active or smart materials are implemented. The coupled electromechanical properties of smart materials, used here in the form of piezoelectric ceramics, make these materials well suited to implementation as distributed sensors and actuators for controlling the structural response. The smart structure proposed in this paper is composed of a cantilevered steel beam, an adhesive or bonding layer, and a piezoelectric actuator. The static deflection of the structure is derived as a function of the piezoelectric voltage, and the outcome is compared with theoretical and experimental results from the literature. The relation between the voltage and the piezoelectric moment at both ends of the actuator is also investigated, and a reduced finite element model of the smart structure is created and verified. Finally, a linear controller is implemented and its ability to attenuate the vibration due to the first natural frequency is demonstrated.

Keywords: Active linear control, Lyapunov stability theorem, piezoelectricity, smart structure, static deflection.
