Search results for: vector optimization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4279

3439 Hidro-IA: An Artificial Intelligent Tool Applied to Optimize the Operation Planning of Hydrothermal Systems with Historical Streamflow

Authors: Thiago Ribeiro de Alencar, Jacyro Gramulia Junior, Patricia Teixeira Leite

Abstract:

The area of the electricity sector that coordinates the energy supplied by hydroelectric plants is called Operation Planning of Hydrothermal Power Systems (OPHPS). Its purpose is to find an operating policy that provides electrical power to the system in a given period, with reliability and minimal cost. It is therefore necessary to determine an optimal generation schedule for each hydroelectric plant in each time interval, so that the system meets demand reliably, avoiding rationing in years of severe drought, and minimizes the expected operating cost over the planning horizon, defining an appropriate strategy for thermal complementation. Several optimization algorithms specifically applied to this problem have been developed and are in use. Although they provide solutions to various problems encountered, these algorithms have some weaknesses: difficulties in convergence, simplification of the original formulation of the problem, or issues arising from the complexity of the objective function. An alternative to these challenges is the development of more sophisticated and reliable simulation-optimization techniques that can assist the planning of the operation. Thus, this paper presents the development of a computational tool, namely Hydro-IA, for solving the identified optimization problem while remaining easy for the user to handle. The intelligent optimization technique adopted is the Genetic Algorithm (GA), and the programming language is Java. First, the chromosomes were modeled; then the evaluation function of the problem and the operators involved were implemented; and finally, the graphical user interfaces were built. The results obtained with the Genetic Algorithm were compared with the nonlinear programming (NLP) optimization technique. Tests were conducted with seven hydraulically interconnected hydroelectric plants using historical streamflow from 1953 to 1955. The comparison between the GA and NLP techniques shows that the operating cost of the GA becomes increasingly smaller than that of the NLP as the number of interconnected hydroelectric plants increases. The program achieved coherent performance in problem resolution without the need to simplify the calculations, together with easy manipulation of the simulation parameters and visualization of the output results.
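
As a rough sketch of the GA loop described above (not the authors' Java implementation), the fragment below evolves a population of generation schedules against a toy fitness in which the thermal complementation cost grows quadratically with unmet demand; the plant count, demand and output bounds are illustrative assumptions.

```python
# Minimal GA sketch for hydro generation scheduling, assuming a simplified
# fitness: thermal complementation cost is quadratic in the unmet demand.
import numpy as np

rng = np.random.default_rng(0)
n_plants, horizon = 7, 24          # 7 hydro plants, 24 planning intervals
demand = 5000.0                    # MW, assumed constant for the sketch
p_max = 800.0                      # MW per plant per interval (assumed)

def cost(chromosome):
    """Operating cost: quadratic penalty on thermal complementation."""
    hydro = chromosome.sum(axis=0)             # total hydro output per interval
    thermal = np.maximum(demand - hydro, 0.0)  # shortfall met by thermal units
    return np.sum(0.02 * thermal**2)

pop = rng.uniform(0, p_max, size=(50, n_plants, horizon))
for generation in range(200):
    fitness = np.array([cost(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[:25]]            # truncation selection
    children = parents[rng.integers(0, 25, 25)].copy()
    children += rng.normal(0, 20.0, children.shape)    # Gaussian mutation
    pop = np.concatenate([parents, children.clip(0, p_max)])

best = pop[np.argmin([cost(ind) for ind in pop])]
```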

Keywords: energy, optimization, hydrothermal power systems, artificial intelligence and genetic algorithms

Procedia PDF Downloads 420
3438 Fuzzy Optimization for Identifying Anticancer Targets in Genome-Scale Metabolic Models of Colon Cancer

Authors: Feng-Sheng Wang, Chao-Ting Cheng

Abstract:

Developing a drug from conception to launch is costly and time-consuming. Computer-aided methods can reduce research costs and accelerate the development process during the early drug discovery and development stages. This study developed a fuzzy multi-objective hierarchical optimization framework for identifying potential anticancer targets in a metabolic model. First, RNA-seq expression data of colorectal cancer samples and their healthy counterparts were used to reconstruct tissue-specific genome-scale metabolic models. The aim of the optimization framework was to identify anticancer targets that lead to cancer cell death and to evaluate the metabolic flux perturbations caused in normal cells by the cancer treatment. Four objectives were established in the optimization framework to evaluate the mortality of cancer cells under treatment and to minimize side effects, namely toxicity-induced tumorigenesis on normal cells and large metabolic perturbations. Through fuzzy set theory, the multi-objective optimization problem was converted into a trilevel maximizing decision-making (MDM) problem. A nested hybrid differential evolution was applied to solve the trilevel MDM problem using each of two nutrient media in turn to identify anticancer targets in the genome-scale metabolic model of colorectal cancer. Using Dulbecco’s Modified Eagle Medium (DMEM), the computational results reveal that the identified anticancer targets were mostly involved in cholesterol biosynthesis, pyrimidine and purine metabolism, the glycerophospholipid biosynthetic pathway and the sphingolipid pathway. However, using Ham’s medium, the genes involved in cholesterol biosynthesis were unidentifiable. A comparison of the uptake reactions for DMEM and Ham’s medium revealed that no cholesterol uptake reaction was included in DMEM. Two additional media, obtained by adding a cholesterol uptake reaction to DMEM and excluding it from Ham’s medium, were used to investigate the relationship of tumor cell growth with nutrient components and anticancer target genes. The genes involved in cholesterol biosynthesis were found to be identifiable when no cholesterol uptake reaction was included while the cells were in the culture medium, and they became unidentifiable when such a reaction was included.
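
The fuzzy-set conversion mentioned above can be illustrated in miniature: each objective receives a linear membership function over assumed best/worst bounds, and the problem becomes maximizing the smallest membership. The sketch below shows only this max-min step on toy objectives, not the trilevel structure or the metabolic model.

```python
# Max-min fuzzy scalarization sketch: maximize t subject to mu_i(x) >= t.
# Objectives and their best/worst bounds are toy stand-ins.
import numpy as np
from scipy.optimize import minimize

def objectives(x):
    f1 = (x[0] - 1) ** 2 + x[1] ** 2      # e.g., cancer-cell biomass (minimize)
    f2 = (x[1] - 2) ** 2 + x[0] ** 2      # e.g., flux perturbation (minimize)
    return np.array([f1, f2])

lo, hi = np.array([0.0, 0.0]), np.array([5.0, 5.0])   # assumed best/worst values

def membership(f):
    # mu = 1 at the best objective value, 0 at the worst, linear in between
    return np.clip((hi - f) / (hi - lo), 0.0, 1.0)

def neg_t(z):            # decision vector z = [x1, x2, t]
    return -z[2]

cons = [{"type": "ineq", "fun": lambda z, i=i: membership(objectives(z[:2]))[i] - z[2]}
        for i in range(2)]
res = minimize(neg_t, x0=[0.5, 0.5, 0.0], constraints=cons)
print(res.x[:2], "satisfaction level:", res.x[2])
```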

Keywords: Cancer metabolism, genome-scale metabolic model, constraint-based model, multilevel optimization, fuzzy optimization, hybrid differential evolution

Procedia PDF Downloads 80
3437 Detection of Phoneme [S] Mispronunciation for Sigmatism Diagnosis in Adults

Authors: Michal Krecichwost, Zauzanna Miodonska, Pawel Badura

Abstract:

The diagnosis of sigmatism is mostly based on the observation of the articulatory organs. It is, however, not always possible to precisely observe the vocal apparatus, in particular inside the oral cavity of the patient. Speech processing can help to objectify the therapy and simplify the verification of its progress. In the described study, a methodology for the classification of the incorrectly pronounced phoneme [s] is proposed. The recordings come from adults and were registered with a speech recorder at a sampling rate of 44.1 kHz and a resolution of 16 bits. A database of pathological and normative speech was collected for the study, including reference assessments provided by speech therapy experts. Ten adult subjects were asked to simulate a certain type of sigmatism under the supervision of a speech therapy expert. In the recordings, the analyzed phone [s] was surrounded by vowels, viz. ASA, ESE, ISI, SPA, USU, YSY. Thirteen MFCCs (mel-frequency cepstral coefficients) and the RMS (root mean square) value are calculated within each frame belonging to the analyzed phoneme. Additionally, 3 fricative formants along with their corresponding amplitudes are determined for the entire segment. In order to aggregate the information within the segment, the average value of each MFCC coefficient is calculated; all features of other types are aggregated by means of their 75th percentile. The proposed method of feature aggregation reduces the size of the feature vector used in the classification. A binary SVM (support vector machine) classifier is employed at the phoneme recognition stage: the first group consists of pathological phones, the other of normative ones. The proposed feature vector yields classification sensitivity and specificity above the 90% level for individual logophones. Employing the fricative-formant-based information improves the sole-MFCC classification results by an average of 5 percentage points. The study shows that employing parameters specific to the selected phones improves the efficiency of pathology detection compared to traditional methods of speech signal parameterization.
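
A minimal sketch of the described feature pipeline, assuming librosa and scikit-learn and omitting the fricative-formant extraction: the thirteen MFCCs are averaged over the segment, the RMS track is aggregated by its 75th percentile, and a binary SVM is trained. File names and labels are placeholders.

```python
# Segment-level aggregation of frame features followed by a binary SVM.
import numpy as np
import librosa
from sklearn.svm import SVC

def segment_features(wav_path):
    y, sr = librosa.load(wav_path, sr=44100)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # shape (13, frames)
    rms = librosa.feature.rms(y=y)[0]                    # per-frame RMS
    mfcc_mean = mfcc.mean(axis=1)                        # average each coefficient
    rms_p75 = np.percentile(rms, 75)                     # 75th-percentile aggregation
    return np.concatenate([mfcc_mean, [rms_p75]])

# X: one aggregated vector per recorded logophone; y: 1 = pathological
X = np.array([segment_features(p) for p in ["asa_01.wav", "ese_01.wav"]])
y = np.array([1, 0])
clf = SVC(kernel="rbf").fit(X, y)
```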

Keywords: computer-aided pronunciation evaluation, sibilants, sigmatism diagnosis, speech processing

Procedia PDF Downloads 283
3436 Mixed Number Algebra and Its Application

Authors: Md. Shah Alam

Abstract:

Mushfiq Ahmad has defined a Mixed Number, which is the sum of a scalar and a Cartesian vector. He has also defined the elementary operations of Mixed Numbers, i.e., the norm, the product of two Mixed Numbers, the identity element and the inverse. It has been observed that the Mixed Number is consistent with Pauli matrix algebra and is a handy tool for working with Dirac electron theory. Its use as a mathematical method in physics has been studied. (1) We have applied Mixed Numbers in quantum mechanics: Mixed Number versions of the displacement operator, the vector differential operator, and the angular momentum operator have been developed, and the Mixed Number method has also been applied to the Klein-Gordon equation. (2) We have applied Mixed Numbers in electrodynamics: Mixed Number versions of Maxwell’s equations, the electric and magnetic field quantities and the Lorentz force have been found. (3) An associative transformation of Mixed Numbers fulfilling the Lorentz invariance requirement has been developed. (4) We have applied Mixed Number algebra as an extension of complex numbers. Mixed Numbers and the quaternions are in isomorphic correspondence, but they differ in algebraic details: the multiplication of unit Mixed Numbers and the multiplication of unit quaternions are different. Since the Mixed Number has properties similar to those of Pauli matrix algebra, Mixed Number algebra is a more convenient tool for dealing with the Dirac equation.
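
For illustration, a small numeric sketch of the Mixed Number product, taking the rule (a, A)(b, B) = (ab + A·B, aB + bA + iA×B) as an assumption about Ahmad's definition; with it, the unit vectors reproduce the Pauli relation σxσy = iσz.

```python
# Mixed-number product sketch; complex vector parts accommodate the factor i.
# The product rule used here is an assumption, not quoted from the paper.
import numpy as np

def mix_mul(alpha, beta):
    a, A = alpha
    b, B = beta
    scalar = a * b + np.dot(A, B)
    vector = a * B + b * A + 1j * np.cross(A, B)
    return scalar, vector

x = (0.0, np.array([1.0, 0.0, 0.0], dtype=complex))   # unit vector x_hat
y = (0.0, np.array([0.0, 1.0, 0.0], dtype=complex))   # unit vector y_hat
s, v = mix_mul(x, y)
print(s, v)    # -> 0 and i*z_hat, mirroring sigma_x * sigma_y = i * sigma_z
```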

Keywords: mixed number, special relativity, quantum mechanics, electrodynamics, pauli matrix

Procedia PDF Downloads 364
3435 Research on the Application of Flexible and Programmable Systems in Electronic Systems

Authors: Yang Xiaodong

Abstract:

This article explores the application and structural characteristics of flexible and programmable systems in electronic systems, with a focus on analyzing their advantages and architectural differences in dealing with complex environments. By introducing mathematical models and simulation experiments, the performance of dynamic module combination in flexible systems and of fixed path selection in programmable systems, in terms of resource utilization and performance optimization, is demonstrated. The article also discusses the mutual transformation between the two in practical applications and proposes a solution that improves system flexibility and performance through dynamic reconfiguration technology. This study provides a theoretical reference for the design and optimization of flexible and programmable systems.

Keywords: flexibility, programmable, electronic systems, system architecture

Procedia PDF Downloads 9
3434 A Non-Iterative Shape Reconstruction of an Interface from Boundary Measurement

Authors: Mourad Hrizi

Abstract:

In this paper, we study the inverse problem of reconstructing an interior interface D appearing in the elliptic partial differential equation Δu + χ(D)u = 0 from the knowledge of boundary measurements. This problem arises from a semiconductor transistor model. We propose a new shape reconstruction procedure based on the Kohn-Vogelius formulation and the topological sensitivity method. The inverse problem is formulated as a topology optimization problem, and a topological sensitivity analysis is derived for the considered cost function. The unknown subdomain D is reconstructed using a level-set curve of the topological gradient. Finally, we give several examples to show the viability of our proposed method.

Keywords: inverse problem, topological optimization, topological gradient, Kohn-Vogelius formulation

Procedia PDF Downloads 244
3433 Optimization Approach to Estimate Hammerstein–Wiener Nonlinear Blocks in Presence of Noise and Disturbance

Authors: Leili Esmaeilani, Jafar Ghaisari, Mohsen Ahmadian

Abstract:

The Hammerstein–Wiener model is a block-oriented model in which a linear dynamic system is surrounded by two static nonlinearities at its input and output; it can be used to model various processes. This paper presents an optimization approach for analysing the Hammerstein–Wiener system identification problem. The method relies on reformulating the identification problem, solving it as a constrained quadratic problem, and analysing its solutions. During the formulation of the problem, the effects of adding noise to both the input and output signals of the nonlinear blocks, and of disturbance on the linear block, on the resulting equations are discussed. Additionally, a possible parametric form of the matrix operations to reduce the equation size is presented. To analyse the possible solutions of the mentioned system of equations, a method is presented that reduces the gap between the number of equations and the number of unknown variables by formulating and importing existing knowledge about the nonlinear functions. The obtained equations are applied to an example H–W system to validate the results and illustrate the proposed method.
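
The "reformulate and solve as a constrained quadratic problem" step can be sketched generically as below, with a quadratic objective in the unknown block parameters and inequality constraints encoding imported knowledge about the nonlinearities (e.g., monotonicity); the matrices are placeholders, not the paper's formulation.

```python
# Constrained quadratic problem sketch with illustrative matrices.
import numpy as np
from scipy.optimize import minimize

H = np.array([[4.0, 1.0], [1.0, 3.0]])   # quadratic term from squared residuals
g = np.array([-8.0, -6.0])               # linear term from the data

def objective(theta):
    return 0.5 * theta @ H @ theta + g @ theta

# Imported knowledge: theta_2 >= theta_1 (monotone nonlinearity) and theta >= 0.
cons = [{"type": "ineq", "fun": lambda th: th[1] - th[0]},
        {"type": "ineq", "fun": lambda th: th[0]}]
res = minimize(objective, x0=np.zeros(2), constraints=cons)
print(res.x)
```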

Keywords: identification, Hammerstein-Wiener, optimization, quantization

Procedia PDF Downloads 257
3432 ACOPIN: An ACO Algorithm with TSP Approach for Clustering Proteins in Protein Interaction Networks

Authors: Jamaludin Sallim, Rozlina Mohamed, Roslina Abdul Hamid

Abstract:

In this paper, we propose an Ant Colony Optimization (ACO) algorithm together with a Traveling Salesman Problem (TSP) approach to investigate the clustering problem in Protein Interaction Networks (PIN). We name this combination ACOPIN. The purpose of this work is two-fold: first, to test the efficacy of ACO in clustering PIN, and second, to propose a simple generalization of the ACO algorithm that might allow its application to clustering proteins in PIN. We split this paper into three main sections. First, we describe the PIN and the clustering of proteins in PIN. Second, we discuss the steps involved in each phase of the ACO algorithm. Finally, we present some results of the investigation along with the clustering patterns.
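
A bare-bones sketch of the two ACO phases referenced above: probabilistic tour construction over a distance matrix (the TSP view of the network), followed by pheromone evaporation and deposit. The matrix values and parameters are toy assumptions.

```python
# Minimal ACO-for-TSP sketch: tour construction plus pheromone update.
import numpy as np

rng = np.random.default_rng(1)
n = 6                                   # proteins treated as TSP "cities"
dist = rng.uniform(1, 10, (n, n)); np.fill_diagonal(dist, np.inf)
tau = np.ones((n, n))                   # pheromone trails
alpha, beta, rho = 1.0, 2.0, 0.5        # assumed ACO parameters

def build_tour():
    tour, unvisited = [0], set(range(1, n))
    while unvisited:
        i = tour[-1]
        cand = list(unvisited)
        w = (tau[i, cand] ** alpha) * ((1.0 / dist[i, cand]) ** beta)
        nxt = int(rng.choice(cand, p=w / w.sum()))
        tour.append(nxt); unvisited.remove(nxt)
    return tour

for it in range(50):
    tour = build_tour()
    length = sum(dist[tour[k], tour[k + 1]] for k in range(n - 1))
    tau *= (1 - rho)                    # evaporation
    for k in range(n - 1):              # deposit inversely to tour length
        tau[tour[k], tour[k + 1]] += 1.0 / length
```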

Keywords: ant colony optimization algorithm, searching algorithm, protein functional module, protein interaction network

Procedia PDF Downloads 612
3431 Conventional and Hybrid Network Energy Systems Optimization for Canadian Community

Authors: Mohamed Ghorab

Abstract:

Locally generated and distributed systems for thermal and electrical energy are envisioned in the near future as a way to reduce the transmission losses of the centralized system. Distributed Energy Resources (DER) are designed at different sizes (small and medium) and are incorporated into the energy distribution between the hubs. The energy generated by each technology at each hub should meet the local energy demands, and economic and environmental enhancements can be achieved when there are interaction and energy exchange between the hubs. In this study, network energy system and CO2 optimization between six hubs representing a Canadian community are investigated. Three different technology scenarios are studied to meet both the thermal and electrical demand loads of the six hubs. The conventional system is used as the first scenario and as the reference case: it includes a boiler to provide the thermal energy, while the electrical energy is imported from the utility grid. The second scenario includes a combined heat and power (CHP) system to meet the thermal demand loads and part of the electrical demand load. The third scenario integrates the CHP system with an Organic Rankine Cycle (ORC), where the thermal waste energy from the CHP system is used by the ORC to generate electricity. The General Algebraic Modeling System (GAMS) is used to model the DER system optimization based on energy economics and CO2 emission analyses, and the results are compared with the conventional energy system. The results show that scenarios 2 and 3 provide annual total cost savings of 21.3% and 32.3%, respectively, compared to the conventional system (scenario 1). Additionally, scenario 3 (CHP and ORC systems) provides a 32.5% saving in CO2 emissions compared to the conventional system, followed by scenario 2 (CHP system) with a value of 9.3%.
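
The hub dispatch idea can be miniaturized as a linear program (the paper itself uses GAMS): choose boiler heat, CHP fuel and grid imports so that one hub's thermal and electrical demands are met at minimum cost. The prices and conversion efficiencies below are assumptions.

```python
# Single-hub, single-period dispatch sketch as a linear program.
from scipy.optimize import linprog

# variables: x = [boiler_heat, chp_fuel, grid_electricity]
cost = [0.04, 0.06, 0.12]                 # assumed $/kWh of each input
heat_demand, elec_demand = 120.0, 80.0    # kWh in one period
eta_chp_heat, eta_chp_elec = 0.45, 0.35   # assumed CHP conversion efficiencies

# equality constraints: supplied heat and electricity match demand
A_eq = [[1.0, eta_chp_heat, 0.0],         # boiler + CHP heat  = heat demand
        [0.0, eta_chp_elec, 1.0]]         # CHP elec + grid    = elec demand
b_eq = [heat_demand, elec_demand]

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 3)
print(res.x, res.fun)
```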

Keywords: distributed energy resources, network energy system, optimization, microgeneration system

Procedia PDF Downloads 190
3430 Strategic Asset Allocation Optimization: Enhancing Portfolio Performance Through PCA-Driven Multi-Objective Modeling

Authors: Ghita Benayad

Abstract:

Asset allocation, which affects the long-term profitability of portfolios by distributing assets to fulfill a range of investment objectives, is the cornerstone of investment management in the dynamic and complicated world of financial markets. This paper offers a technique for optimizing strategic asset allocation with the goal of improving portfolio performance, addressing the inherent complexity and uncertainty of the market through the use of Principal Component Analysis (PCA) in a multi-objective modeling framework. The study's first section starts with a critical evaluation of conventional asset allocation techniques, highlighting how poorly they capture the intricate relationships between assets and the volatile nature of the market. In order to overcome these challenges, the project suggests a PCA-driven methodology that isolates the important characteristics influencing asset returns by decreasing the dimensionality of the investment universe. This reduction provides a stronger basis for asset allocation decisions by facilitating a clearer understanding of market structures and behaviors. Using a multi-objective optimization model, the project builds on this foundation by taking into account a number of performance metrics at once, including risk minimization, return maximization, and the accomplishment of predetermined investment goals like regulatory compliance or sustainability standards. This model provides a more comprehensive view of investor preferences and portfolio performance than conventional single-objective optimization techniques. The PCA-driven multi-objective optimization model is then applied to historical market data, aiming to construct portfolios that hold up better under different market situations. Compared to portfolios produced by conventional asset allocation methodologies, the results show that portfolios optimized using the proposed method display improved risk-adjusted returns, more resilience to market downturns, and better alignment with the specified investment objectives. The study also looks at the implications of this PCA technique for portfolio management, including the prospect that it might give investors a more advanced framework for navigating financial markets. The findings suggest that by combining PCA with multi-objective optimization, investors may obtain a more strategic and informed asset allocation that is responsive to both market conditions and individual investment preferences. In conclusion, this capstone project advances the field of financial engineering by creating a sophisticated asset allocation optimization model that integrates PCA with multi-objective optimization. In addition to examining the current state of asset allocation, the proposed method of portfolio management opens up new avenues for research and application in the area of investment techniques.
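
A compact sketch of the PCA-then-scalarize idea, assuming random stand-in return data: the return history is projected onto a few principal components, a denoised covariance is rebuilt, and return and risk are combined with a weight lambda.

```python
# PCA-driven denoising followed by a scalarized return-vs-risk optimization.
import numpy as np
from sklearn.decomposition import PCA
from scipy.optimize import minimize

rng = np.random.default_rng(2)
returns = rng.normal(0.0005, 0.01, size=(500, 10))    # 500 days, 10 assets

pca = PCA(n_components=3).fit(returns)
denoised = pca.inverse_transform(pca.transform(returns))
mu, cov = denoised.mean(axis=0), np.cov(denoised.T)

lam = 4.0                                              # assumed risk-aversion weight
def neg_utility(w):
    return -(w @ mu - lam * w @ cov @ w)               # scalarized objectives

cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]
res = minimize(neg_utility, np.full(10, 0.1), bounds=[(0, 1)] * 10, constraints=cons)
print(res.x.round(3))
```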

Keywords: asset allocation, portfolio optimization, principal component analysis, multi-objective modelling, financial market

Procedia PDF Downloads 47
3429 An Integrated Approach for Optimizing Drillable Parameters to Increase Drilling Performance: A Real Field Case Study

Authors: Hamidoddin Yousife

Abstract:

Drilling optimization requires a prediction of the drilling rate of penetration (ROP), since it provides a significant reduction in drilling costs. Several factors, both controllable and uncontrollable, can have an impact on the ROP, and numerous drilling penetration rate models based on drilling parameters have been considered. This paper considers the effect of proper drilling parameter selection, such as bit, mud type, applied weight on bit (WOB), revolutions per minute (RPM), and flow rate, on drilling optimization and drilling cost reduction. A predictive analysis of real-time drilling performance is used to determine the optimal drilling operation. As a result of these modeling studies, real data collected from three directional wells in the Azadegan oil fields, Iran, were verified and adjusted to determine the drillability of a specific formation. Simulation results and actual drilling results show significant improvements in accuracy. Once the simulations had been validated, the optimum drilling parameters and equipment specifications were determined by varying the weight on bit (WOB), rotary speed (RPM), hydraulics (hydraulic pressure), and bit specification for each well until the highest drilling rate was achieved. To evaluate the potential operational and economic benefits of the optimized results, a qualitative and quantitative analysis of the data was performed.
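
As a hedged illustration of the predictive ROP step, the sketch below fits an assumed power-law form ROP = a·WOB^b·RPM^c (a common simplification, not the paper's model) to placeholder records and scans the operating envelope for the best WOB/RPM pair.

```python
# Fit a simple ROP correlation and search the operating envelope.
import numpy as np
from scipy.optimize import curve_fit

def rop_model(X, a, b, c):
    wob, rpm = X
    return a * wob**b * rpm**c

wob = np.array([8.0, 10.0, 12.0, 14.0, 16.0])    # weight on bit, tonnes (placeholder)
rpm = np.array([90.0, 110.0, 120.0, 140.0, 150.0])
rop = np.array([12.0, 16.5, 19.0, 24.0, 27.0])   # m/h, placeholder data

(a, b, c), _ = curve_fit(rop_model, (wob, rpm), rop, p0=[1.0, 1.0, 1.0])
# grid over the operating envelope to pick the drillable-parameter combination
grid_w, grid_r = np.meshgrid(np.linspace(8, 16, 9), np.linspace(90, 150, 7))
pred = rop_model((grid_w, grid_r), a, b, c)
best = np.unravel_index(pred.argmax(), pred.shape)
print("best WOB/RPM:", grid_w[best], grid_r[best])
```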

Keywords: drilling, cost, optimization, parameters

Procedia PDF Downloads 168
3428 3D Classification Optimization of Low-Density Airborne Light Detection and Ranging Point Cloud by Parameters Selection

Authors: Baha Eddine Aissou, Aichouche Belhadj Aissa

Abstract:

Light detection and ranging (LiDAR) is an active remote sensing technology used for several applications. Airborne LiDAR is becoming an important technology for the acquisition of highly accurate, dense point clouds. The classification of airborne laser scanning (ALS) point clouds is a very important task that still remains a real challenge for many scientists. The support vector machine (SVM) is one of the most widely used statistical learning algorithms based on kernels. SVM is a non-parametric method, and it is recommended in cases where the data distribution cannot be well modeled by a standard parametric probability density function. Using a kernel, it performs a robust non-linear classification of samples. The data are often not linearly separable; SVMs are able to map the data into a higher-dimensional space in which they become linearly separable, while performing all the computations in the original space. This is one of the main reasons that SVMs are well suited for high-dimensional classification problems. Only a few training samples, called support vectors, are required. SVM has also shown its potential to cope with uncertainty in data caused by noise and fluctuation, and it is computationally efficient compared to several other methods. Such properties are particularly suited to remote sensing classification problems and explain their recent adoption. In this poster, SVM classification of ALS LiDAR data is proposed. Firstly, connected component analysis is applied to cluster the point cloud. Secondly, the resulting clusters are incorporated into the SVM classifier. A radial basis function (RBF) kernel is used due to the small number of parameters (C and γ) that need to be chosen, which decreases the computation time. In order to optimize the classification rates, parameter selection is explored: it consists of finding the parameters (C and γ) leading to the best overall accuracy using grid search and 5-fold cross-validation. The exploited LiDAR point cloud is provided by the German Society for Photogrammetry, Remote Sensing, and Geoinformation. The ALS data used are characterized by a low density (4-6 points/m²) and cover an urban area located in residential parts of the city of Vaihingen in southern Germany. The ground class and three other classes belonging to roof superstructures are considered, i.e., a total of 4 classes. The training and test sets were selected randomly several times. The obtained results demonstrate that parameter selection can orient the search within a restricted interval of (C and γ) that can be further explored, but does not systematically lead to the optimal rates. The SVM classifier with selected hyper-parameters is compared with the classifiers most used in the literature for LiDAR data: random forest, AdaBoost, and decision tree. The comparison showed the superiority of the SVM classifier using parameter selection for LiDAR data.
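
The (C, γ) selection described above maps directly onto scikit-learn's grid search with 5-fold cross-validation, sketched below with random stand-in cluster features in place of the Vaihingen data.

```python
# Grid search with 5-fold cross-validation for an RBF-kernel SVM.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 8))            # stand-in for LiDAR cluster features
y = rng.integers(0, 4, size=200)         # 4 classes: ground + roof superstructures

param_grid = {"C": 10.0 ** np.arange(-2, 4),
              "gamma": 10.0 ** np.arange(-4, 2)}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5,
                      scoring="accuracy").fit(X, y)
print(search.best_params_, search.best_score_)
```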

Keywords: classification, airborne LiDAR, parameters selection, support vector machine

Procedia PDF Downloads 147
3427 Optimisation of the Input Layer Structure for Feedforward NARX Neural Networks

Authors: Zongyan Li, Matt Best

Abstract:

This paper presents an optimization method for reducing the number of input channels and the complexity of the feed-forward NARX neural network (NN) without compromising the accuracy of the NN model. By utilizing the correlation analysis method, the most significant regressors are selected to form the input layer of the NN structure. An application to vehicle dynamics model identification is also presented to demonstrate the optimization technique, and the optimal input layer structure and the optimal number of neurons for the neural network are investigated.
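
A minimal sketch of the correlation-based regressor selection: candidate lagged inputs and outputs are ranked by absolute correlation with the target, and the top few form the NARX input layer. The signals below are synthetic placeholders.

```python
# Rank lagged regressors by absolute correlation with the target output.
import numpy as np

rng = np.random.default_rng(4)
u = rng.normal(size=1000)                          # input channel
y = np.convolve(u, [0.5, 0.3, 0.1], mode="same") + 0.05 * rng.normal(size=1000)

max_lag, target = 10, y[10:]
candidates = {f"u(k-{d})": u[10 - d:1000 - d] for d in range(max_lag)}
candidates.update({f"y(k-{d})": y[10 - d:1000 - d] for d in range(1, max_lag)})

scores = {name: abs(np.corrcoef(sig, target)[0, 1])
          for name, sig in candidates.items()}
selected = sorted(scores, key=scores.get, reverse=True)[:4]
print("input layer regressors:", selected)
```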

Keywords: correlation analysis, F-ratio, levenberg-marquardt, MSE, NARX, neural network, optimisation

Procedia PDF Downloads 371
3426 A New Perspective: The Use of Low-Cost Phase Change Material in Building Envelope System

Authors: Andrey A. Chernousov, Ben Y. B. Chan

Abstract:

The use of a low-cost paraffinic phase change material (PCM) can be rather effective in smart building envelopes in the South China region. Particular attention has to be paid to PCM optimization, since the exploitation conditions and the envelope insulation change its thermal characteristics over time. The studied smart building envelope consists of a reinforced aluminum exterior, polymeric insulation foam, the phase change material and a reinforced interior gypsum board. A prototype sample was tested to validate the numerical scheme using the EnergyPlus software. Three scenarios of insulation thermal resistance loss (ΔR/R = 0%, 25%, 50%) were compared for different PCM thicknesses (tP = 0, 1, 2.5, 5 mm). The comparisons were carried out for a west-facing, 50-storey enveloped office building. PCM optimization was applied to find the maximum efficiency for the different ΔR/R cases. During the optimization, it was found that the PCM is an important smart component, lowering the peak energy demand by up to 2.7 times, and that the results are not influenced by insulation aging in terms of ΔR/R during long-term exploitation. In hot and humid climates like Hong Kong, the insulation core of such smart systems is recommended to be laminated completely. This can be very helpful in achieving an acceptable payback period.

Keywords: smart building envelope, thermal performance, phase change material, energy efficiency, large-scale sandwich panel

Procedia PDF Downloads 730
3425 Minimum-Fuel Optimal Trajectory for Reusable First-Stage Rocket Landing Using Particle Swarm Optimization

Authors: Kevin Spencer G. Anglim, Zhenyu Zhang, Qingbin Gao

Abstract:

Reusable launch vehicles (RLVs) present a more environmentally-friendly approach to accessing space when compared to traditional launch vehicles that are discarded after each flight. This paper studies the recyclable nature of RLVs by presenting a solution method for determining minimum-fuel optimal trajectories using principles from optimal control theory and particle swarm optimization (PSO). This problem is formulated as a minimum-landing error powered descent problem where it is desired to move the RLV from a fixed set of initial conditions to three different sets of terminal conditions. However, unlike other powered descent studies, this paper considers the highly nonlinear effects caused by atmospheric drag, which are often ignored for studies on the Moon or on Mars. Rather than optimizing the controls directly, the throttle control is assumed to be bang-off-bang with a predetermined thrust direction for each phase of flight. The PSO method is verified in a one-dimensional comparison study, and it is then applied to the two-dimensional cases, the results of which are illustrated.
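
A minimal PSO sketch for the bang-off-bang structure, in which the decision variables are the two throttle switch times; the dynamics and objective below are deliberately simplified placeholders, not the paper's descent model.

```python
# Plain PSO over the two throttle switch times of a bang-off-bang profile.
import numpy as np

rng = np.random.default_rng(5)

def objective(t_sw):                      # t_sw = [end of burn 1, start of burn 2]
    t1, t2 = np.sort(t_sw)
    fuel = t1 + (10.0 - t2)               # total full-throttle burn duration
    landing_error = abs(5.0 - t1 * (10.0 - t2))   # toy terminal constraint
    return fuel + 10.0 * landing_error

n, dim = 30, 2
x = rng.uniform(0, 10, (n, dim)); v = np.zeros((n, dim))
pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
gbest = pbest[pbest_f.argmin()]

for it in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, 0, 10)
    f = np.array([objective(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmin()]
print(gbest, pbest_f.min())
```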

Keywords: minimum-fuel optimal trajectory, particle swarm optimization, reusable rocket, SpaceX

Procedia PDF Downloads 277
3424 Optimum Design of Dual-Purpose Outriggers in Tall Buildings

Authors: Jiwon Park, Jihae Hur, Kukjae Kim, Hansoo Kim

Abstract:

In this study, outriggers, which are horizontal structures connecting a building core to distant columns in order to increase the lateral stiffness of a tall building, are also used to reduce differential axial shortening. The outriggers in a tall building thus serve the dual purposes of reducing the lateral displacement and reducing the differential axial shortening. Since the location of an outrigger greatly affects its effectiveness in terms of the lateral displacement at the top of the building and the maximum differential axial shortening, the optimum locations of the dual-purpose outriggers can be determined by an optimization method. Because the floors where the outriggers are installed are given as integer numbers, conventional gradient-based optimization methods cannot be used directly; in this study, a piecewise quadratic interpolation method is used to resolve the integrality requirement posed by the optimum outrigger locations. The optimal solutions for the dual-purpose outriggers are searched for by linear scalarization, a popular method for multi-objective optimization problems. It was found that increasing the number of outriggers reduced both the maximum lateral displacement and the maximum differential axial shortening. It was also noted that the optimum locations for reducing the lateral displacement and for reducing the differential axial shortening were different. Acknowledgment: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Science and ICT (NRF-2017R1A2B4010043), and was financially supported by the Korea Ministry of Land, Infrastructure and Transport (MOLIT) through the U-City Master and Doctor Course Grant Program.
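
The linear scalarization over integer outrigger floors can be sketched as an exhaustive weighted-sum search; the two objective surrogates below are invented stand-ins for values that would come from structural analysis.

```python
# Weighted-sum search over integer outrigger floor pairs.
from itertools import combinations

n_floors, w1, w2 = 60, 0.5, 0.5          # assumed building height and weights

def lateral_disp(floors):                # surrogate: higher outriggers help drift
    return 100.0 / (1.0 + sum(floors) / n_floors)

def axial_shortening(floors):            # surrogate: mid-height placement helps
    return sum(abs(f - n_floors / 2) for f in floors) / len(floors)

best = min(combinations(range(5, n_floors), 2),
           key=lambda fl: w1 * lateral_disp(fl) + w2 * axial_shortening(fl))
print("outrigger floors:", best)
```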

Keywords: concrete structure, optimization, outrigger, tall building

Procedia PDF Downloads 177
3423 Reliability Enhancement by Parameter Design in Ferrite Magnet Process

Authors: Won Jung, Wan Emri

Abstract:

Ferrite magnets are widely used in many automotive components such as motors and alternators, and magnets used inside these components must be of good quality to ensure a high level of performance. The purpose of this study is to design the input parameters that optimize the ferrite magnet production process so as to ensure the quality and reliability of the manufactured products. Design of Experiments (DOE) and Statistical Process Control (SPC) are used as mutually complementary tools to optimize the process. DOE and SPC are quality tools used in industry to monitor and improve manufacturing process conditions; they are practically used to maintain the process on target and within the limits of natural variation. A mixed Taguchi method is utilized for the optimization as part of the DOE analysis, and SPC with proportion data is applied to assess the output parameters and determine the optimal operating conditions. A case example involving the monitoring and optimization of a ferrite magnet process is presented to demonstrate the effectiveness of this approach. Through the utilization of these tools, reliable magnets can be produced by following the step-by-step procedure of the proposed framework. One of the main contributions of this study was the production of crack-free magnets by applying the proposed parameter design.
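
A small sketch of the Taguchi step, assuming a smaller-is-better characteristic (e.g., crack rate): signal-to-noise ratios are computed per run and averaged per factor level, and the level with the highest S/N is preferred. The run data are invented.

```python
# Smaller-is-better S/N ratios per run, averaged per factor level.
import numpy as np

# rows: experimental runs; columns: replicated crack-rate measurements (%)
runs = np.array([[2.0, 2.4], [1.1, 0.9], [3.5, 3.1], [0.6, 0.8]])
level_of_factor_a = np.array([1, 2, 1, 2])        # factor level per run

sn = -10.0 * np.log10((runs ** 2).mean(axis=1))   # smaller-is-better S/N
for level in (1, 2):
    print(f"factor A level {level}: mean S/N =",
          sn[level_of_factor_a == level].mean())
```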

Keywords: ferrite magnet, crack, reliability, process optimization, Taguchi method

Procedia PDF Downloads 517
3422 Optimal Design of 3-Way Reversing Valve Considering Cavitation Effect

Authors: Myeong-Gon Lee, Yang-Gyun Kim, Tae-Young Kim, Seung-Ho Han

Abstract:

A high-pressure valve conventionally uses one set of 2-way valves for the purpose of reversing the fluid direction. If there is no accurate control device for the 2-way valves, considerable surging can be generated. Surging is a kind of pressure ripple that occurs under rapid changes of fluid motion with inaccurate valve control. To reduce the surging effect, a 3-way reversing valve can be applied, which provides a rapid and precise change of water flow direction without any elaborate valve control system. However, cavitation occurs due to the complicated internal trim shape of the 3-way reversing valve. Cavitation causes not only noise and vibration but also a decrease in the efficiency of valve operation, since the bubbles generated below the saturated vapor pressure collapse rapidly in higher-pressure zones. A shape optimization of the 3-way reversing valve to minimize the cavitation effect is therefore necessary. In this study, the cavitation index according to the international standard ISA was introduced to estimate macroscopically the occurrence of the cavitation effect. Computational fluid dynamics analysis was carried out, and the cavitation effect was quantified by means of the percent of cavitation converted from the calculated vapor volume fraction. In addition, the shape optimization of the 3-way reversing valve was performed taking into account the percent of cavitation.
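
For the macroscopic screening step, an ISA-style cavitation index sigma = (P1 - Pv) / (P1 - P2) can be computed directly; the pressures and the acceptance limit below are illustrative assumptions, not values from the study.

```python
# Cavitation index screening sketch; pressures in kPa (absolute), assumed.
p1, p2, pv = 900.0, 300.0, 3.2     # inlet, outlet, vapor pressure
sigma = (p1 - pv) / (p1 - p2)
print("cavitation index:", round(sigma, 2),
      "- cavitation risk" if sigma < 1.8 else "- acceptable")
```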

Keywords: 3-Way reversing valve, cavitation, shape optimization, vapor volume fraction

Procedia PDF Downloads 371
3421 Optimum Dimensions of Hydraulic Structures Foundation and Protections Using Coupled Genetic Algorithm with Artificial Neural Network Model

Authors: Dheyaa W. Abbood, Rafa H. AL-Suhaili, May S. Saleh

Abstract:

A model using artificial neural networks and the genetic algorithm technique is developed for obtaining the optimum dimensions of the foundation length and protections of small hydraulic structures. The procedure involves optimizing an objective function comprising a weighted summation of the state variables. The decision variables considered in the optimization are the upstream and downstream cutoff lengths and their angles of inclination, the foundation length, and the length of the downstream soil protection. These were obtained for a given maximum difference in head, depth of the impervious layer and degree of anisotropy. The optimization was carried out subject to constraints that ensure a structure safe against the uplift pressure force and a sufficient protection length at the downstream side of the structure to overcome an excessive exit gradient. The Geo-Studio software was used to analyze 1200 different cases; for each case, the length of protection and the volume of structure required to satisfy the safety factors mentioned previously were estimated. An ANN model was developed and verified using these cases' input-output sets as its database. A MATLAB code was written to perform genetic algorithm optimization coupled with this ANN model using a formulated optimization model. A sensitivity analysis was done for selecting the crossover probability, the mutation probability and level, the population size, the position of the crossover, and the weight distribution for all the terms of the objective function. Results indicate that the factor that most affects the optimum solution is the required population size: the minimum value that gives a stable global optimum solution is 30,000, while the other variables have little effect on the optimum solution.
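
The GA-plus-ANN-surrogate coupling can be sketched generically: a neural network is fitted to the simulated cases, and a simple GA then searches the surrogate for the optimum dimensions. The data, bounds and GA settings below are placeholders, not the paper's 1200 Geo-Studio cases.

```python
# ANN surrogate fitted to simulated cases, then searched by a simple GA.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)
X = rng.uniform(0, 1, size=(1200, 5))              # normalized design variables
y = (X ** 2).sum(axis=1) + 0.01 * rng.normal(size=1200)   # stand-in objective
ann = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000).fit(X, y)

pop = rng.uniform(0, 1, size=(100, 5))
for gen in range(100):
    fit = ann.predict(pop)
    parents = pop[np.argsort(fit)[:50]]            # minimize the surrogate
    children = np.clip(parents + rng.normal(0, 0.05, parents.shape), 0, 1)
    pop = np.vstack([parents, children])
print("optimum (surrogate):", pop[np.argmin(ann.predict(pop))])
```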

Keywords: inclined cutoff, optimization, genetic algorithm, artificial neural networks, geo-studio, uplift pressure, exit gradient, factor of safety

Procedia PDF Downloads 324
3420 The Possibility of Solving a 3x3 Rubik’s Cube under 3 Seconds

Authors: Chung To Kong, Siu Ming Yiu

Abstract:

The Rubik's cube was invented in 1974. Since then, speedcubers all over the world have tried their best to break the world record again and again; the newest record is 3.47 seconds. Many factors affect the timing, including turns per second (tps), the algorithm, finger tricks, and the hardware of the cube. In this paper, the lower bound of the cube-solving time is discussed using convex optimization. An extended analysis of the world records is used to understand how to improve the timing. With an understanding of each part of the solving process, the paper suggests a list of speed improvement techniques. Based on the analysis of the world record, there is a high possibility that the 3-second mark will be broken soon.

Keywords: Rubik's Cube, speed, finger trick, optimization

Procedia PDF Downloads 206
3419 Optimization of Thermopile Sensor Performance of Polycrystalline Silicon Film

Authors: Li Long, Thomas Ortlepp

Abstract:

A theoretical model for the optimization of thermopile sensor performance is developed for thermoelectric-based infrared radiation detection. It is shown that the performance of polycrystalline silicon film thermopile sensor can be optimized according to the thermoelectric quality factor, sensor layer structure factor, and sensor layout geometrical form factor. Based on the properties of electrons, phonons, grain boundaries, and their interactions, the thermoelectric quality factor of polycrystalline silicon is analyzed with the relaxation time approximation of the Boltzmann transport equation. The model includes the effect of grain structure, grain boundary trap properties, and doping concentration. The layer structure factor is analyzed with respect to the infrared absorption coefficient. The optimization of layout design is characterized by the form factor, which is calculated for different sensor designs. A double-layer polycrystalline silicon thermopile infrared sensor on a suspended membrane has been designed and fabricated with a CMOS-compatible process. The theoretical approach is confirmed by measurement results.

Keywords: polycrystalline silicon, relaxation time approximation, specific detectivity, thermal conductivity, thermopile infrared sensor

Procedia PDF Downloads 139
3418 Proxisch: An Optimization Approach of Large-Scale Unstable Proxy Servers Scheduling

Authors: Xiaoming Jiang, Jinqiao Shi, Qingfeng Tan, Wentao Zhang, Xuebin Wang, Muqian Chen

Abstract:

Nowadays, big companies such as Google and Microsoft, which have adequate proxy servers, have perfectly implemented parallel web crawlers for given websites. But for lack of expensive proxy servers, it is still a puzzle for researchers to crawl large amounts of information from a single website in parallel. In this case, it is a good choice for researchers to use free public proxy servers crawled from the Internet. In order to improve the efficiency of a web crawler, the following two issues should be considered primarily: (1) tasks may fail owing to the instability of free proxy servers; (2) a proxy server will be blocked if it visits a single website too frequently. In this paper, we propose Proxisch, an optimization approach for scheduling large-scale unstable proxy servers, which allows anyone to run a web crawler efficiently at extremely low cost. Proxisch is designed to work efficiently by making maximum use of reliable proxy servers. To solve the second problem, it establishes a frequency control mechanism which can keep the visiting frequency of any chosen proxy server below the website's limit. The results show that our approach performs better than the other scheduling algorithms.
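
The frequency control mechanism can be sketched with a priority queue keyed on the earliest time each proxy may visit the target site again, so that no proxy exceeds an assumed per-site interval; reliability scoring is omitted from this sketch.

```python
# Priority-queue-based per-proxy frequency control sketch.
import heapq
import time

MIN_INTERVAL = 30.0                       # assumed seconds between visits per proxy
proxies = ["p1:8080", "p2:8080", "p3:8080"]
queue = [(0.0, p) for p in proxies]       # (next allowed visit time, proxy)
heapq.heapify(queue)

def acquire_proxy():
    ready_at, proxy = heapq.heappop(queue)
    wait = ready_at - time.monotonic()
    if wait > 0:
        time.sleep(wait)                  # respect the site's frequency limit
    heapq.heappush(queue, (time.monotonic() + MIN_INTERVAL, proxy))
    return proxy

for _ in range(3):
    print("fetching via", acquire_proxy())
```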

Keywords: proxy server, priority queue, optimization algorithm, distributed web crawling

Procedia PDF Downloads 211
3417 Energy Efficient Clustering with Adaptive Particle Swarm Optimization

Authors: Kumar Shashvat, Arshpreet Kaur, Rajesh Kumar, Raman Chadha

Abstract:

Wireless sensor networks (WSNs) have the principal characteristic of restricted energy, with the limitation that the energy of the nodes cannot be replenished. To increase the lifetime in this scenario, the WSN route for data transmission is chosen such that the energy utilization along the selected route is minimal. For such an energy-efficient network, a sound infrastructure is needed, because it affects the network lifespan. Clustering is a technique in which nodes are grouped into disjoint, non-overlapping sets, and data is collected at the cluster head. In this paper, an Adaptive-PSO algorithm is proposed which forms energy-aware clusters by minimizing the cost of locating the cluster head; the main concern, the suitability of the swarms, is addressed by adjusting the learning parameters of PSO. Particle Swarm Optimization converges quickly at the beginning of the search, but over the course of time it becomes stable and may be trapped in local optima. In the suggested network model, the swarms are given the intelligence of spiders, which makes them capable of avoiding premature convergence and also helps them escape from local optima. A comparative analysis with traditional PSO shows that the new algorithm considerably enhances the performance when multi-dimensional functions are taken into consideration.

Keywords: particle swarm optimization, adaptive PSO, comparison between PSO and A-PSO, energy efficient clustering

Procedia PDF Downloads 246
3416 Uncertainty and Optimization Analysis Using PETREL RE

Authors: Ankur Sachan

Abstract:

The ability to make quick yet intelligent and value-added decisions to develop new fields has always been of great significance. In situations where the capital expenses and subsurface risk are high, carefully analyzing the inherent uncertainties in the reservoir and how they impact the predicted hydrocarbon accumulation and production becomes a daunting task. The problem is compounded in offshore environments, especially in the presence of heavy oils and disconnected sands, where the margin for error is small. Uncertainty refers to the degree to which the data set may be in error or stray from the predicted values. Understanding and quantifying the uncertainties in the reservoir model is important when estimating the reserves. Uncertainty parameters can be geophysical, geological, petrophysical, etc., and their identification is necessary to carry out the uncertainty analysis. With so many uncertainties working at different scales, it becomes essential to have a consistent and efficient way of incorporating them into the analysis. Ranking the uncertainties based on their impact on reserves helps to prioritize and guide future data gathering and uncertainty reduction efforts, and assigning probabilistic ranges to key uncertainties also enables the computation of probabilistic reserves. With this in mind, this paper, with the help of the uncertainty and optimization process in Petrel RE, shows how the most influential uncertainties can be determined efficiently and how much impact they have on the reservoir model, thus helping in determining a cost-effective and accurate model of the reservoir.

Keywords: uncertainty, reservoir model, parameters, optimization analysis

Procedia PDF Downloads 651
3415 A Comparative Study on ANN, ANFIS and SVM Methods for Computing Resonant Frequency of A-Shaped Compact Microstrip Antennas

Authors: Ahmet Kayabasi, Ali Akdagli

Abstract:

In this study, three robust predicting methods, namely the artificial neural network (ANN), the adaptive neuro-fuzzy inference system (ANFIS) and the support vector machine (SVM), were used for computing the resonant frequency of A-shaped compact microstrip antennas (ACMAs) operating in the UHF band. Firstly, the resonant frequencies of 144 ACMAs with various dimensions and electrical parameters were simulated with the help of IE3D™, based on the method of moments (MoM). The ANN, ANFIS and SVM models for computing the resonant frequency were then built from the simulation data: 124 simulated ACMAs were utilized for training and the remaining 20 ACMAs were used for testing the models. The performance of the ANN, ANFIS and SVM models is compared for the training and test processes. The average percentage errors (APE) of the computed resonant frequencies in training the ANN, ANFIS and SVM were obtained as 0.457%, 0.399% and 0.600%, respectively. The constructed models were then tested, and APE values of 0.601% for ANN, 0.744% for ANFIS and 0.623% for SVM were achieved. The results obtained here show that the ANN, ANFIS and SVM methods can be successfully applied to compute the resonant frequency of ACMAs, since they are useful and versatile methods that yield accurate results.
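
The APE metric used for the comparison is straightforward to reproduce; the sketch below computes it for placeholder predicted and simulated resonant frequencies.

```python
# Average percentage error (APE) between predicted and simulated frequencies.
import numpy as np

f_simulated = np.array([1.82, 2.10, 2.45])      # GHz, placeholder MoM results
f_predicted = np.array([1.81, 2.12, 2.44])      # GHz, placeholder model outputs

ape = 100.0 * np.mean(np.abs(f_predicted - f_simulated) / f_simulated)
print(f"APE = {ape:.3f} %")
```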

Keywords: a-shaped compact microstrip antenna, artificial neural network (ANN), adaptive neuro-fuzzy inference system (ANFIS), support vector machine (SVM)

Procedia PDF Downloads 441
3414 Optimization of Wear during Dry Sliding Wear of AISI 1042 Steel Using Response Surface Methodology

Authors: Sukant Mehra, Parth Gupta, Varun Arora, Sarvoday Singh, Amit Kohli

Abstract:

This study emphasizes the dry sliding wear behavior of AISI 1042 steel. Dry sliding wear tests were performed using a pin-on-disk apparatus under normal loads of 5, 7.5 and 10 kgf and at speeds of 600, 750 and 900 rpm. Response surface methodology (RSM) was utilized for finding the optimal values of the process parameters, and the experiment was based on a rotatable central composite design (CCD). It was found that the wear followed a linear pattern with the load and rpm. The obtained optimal process parameters have been predicted and verified by confirmation experiments.
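
The RSM step can be sketched as a least-squares fit of a second-order response surface in load and speed; the wear measurements below are invented stand-ins for the pin-on-disk results.

```python
# Second-order response surface fit for wear as a function of load and speed.
import numpy as np

load = np.array([5.0, 7.5, 10.0, 5.0, 7.5, 10.0, 5.0, 7.5, 10.0])
rpm = np.array([600.0] * 3 + [750.0] * 3 + [900.0] * 3)
wear = np.array([21, 25, 30, 24, 27, 33, 28, 32, 39], dtype=float)  # placeholder

# design matrix for w = b0 + b1*L + b2*N + b3*L*N + b4*L^2 + b5*N^2
A = np.column_stack([np.ones_like(load), load, rpm,
                     load * rpm, load ** 2, rpm ** 2])
coef, *_ = np.linalg.lstsq(A, wear, rcond=None)
print("response surface coefficients:", coef.round(4))
```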

Keywords: central composite design (CCD), optimization, response surface methodology (RSM), wear

Procedia PDF Downloads 577
3413 The Optimum Mel-Frequency Cepstral Coefficients (MFCCs) Contribution to Iranian Traditional Music Genre Classification by Instrumental Features

Authors: M. Abbasi Layegh, S. Haghipour, K. Athari, R. Khosravi, M. Tafkikialamdari

Abstract:

An approach to find the optimum mel-frequency cepstral coefficients (MFCCs) for the Radif of Mirzâ Ábdollâh, which is the principal emblem and the heart of Persian music, performed by the most famous Iranian masters on two Iranian stringed instruments, 'Tar' and 'Setar', is proposed. While investigating the variance of the MFCCs for each record in the music database of 1500 gushe of the repertoire belonging to 12 modal systems (dastgâh and âvâz), we applied the Fuzzy C-Means clustering algorithm to each of the 12 coefficients and to different combinations of those coefficients. We repeated the same experiment while increasing the number of coefficients, but the clustering accuracy remained the same. Therefore, we can conclude that the first 7 MFCCs (V-7MFCC) are enough for the classification of the Radif of Mirzâ Ábdollâh. Classical machine learning algorithms such as MLP neural networks, K-Nearest Neighbors (KNN), Gaussian Mixture Models (GMM), Hidden Markov Models (HMM) and the Support Vector Machine (SVM) have been employed. Finally, it can be seen that the SVM shows the best performance in this study.
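
A compact fuzzy C-means sketch of the clustering step; in the study, X would hold the V-7MFCC features of the gushe recordings, whereas here it is random placeholder data.

```python
# Minimal fuzzy C-means implementation with the standard membership update.
import numpy as np

def fcm(X, c=12, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))          # fuzzy memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))                  # u ~ d^(-2/(m-1))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

X = np.random.default_rng(7).normal(size=(1500, 7))    # placeholder features
centers, U = fcm(X)                                    # c=12 modal systems
print("hard assignments:", U.argmax(axis=1)[:10])
```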

Keywords: radif of Mirzâ Ábdollâh, Gushe, mel frequency cepstral coefficients, fuzzy c-mean clustering algorithm, k-nearest neighbors (KNN), gaussian mixture model (GMM), hidden markov model (HMM), support vector machine (SVM)

Procedia PDF Downloads 446
3412 Multi Objective Optimization for Two-Sided Assembly Line Balancing

Authors: Srushti Bhatt, M. B. Kiran

Abstract:

The two-sided assembly line balancing problem has yet to be addressed simply, if manufacturers are to compete in the global market. The problem of assigning tasks in an ordered sequence to obtain optimum system performance is known as the assembly line balancing problem, mainly classified as single-sided or two-sided. Balancing a two-sided assembly line, wherein the task operations are performed on the two sides of the line in a set of sequential workstations, is very challenging for manufacturing industries. The conflicting major objective in the two-sided assembly line balancing problem is to maximize or minimize the performance parameters. The present study emphasizes combining different evolutionary algorithms (ant colony, Tabu search and the Petri net method) and compares their results in solving the two-sided assembly line balancing problem. The concept of multi-objective optimization of performance parameters is nowadays adopted to make decisions involving more than one objective function to be simultaneously optimized, and the optimum result can be expected among the selected methods using multi-objective optimization. The performance parameters considered in the present study are the number of workstations, the slickness and the smoothness index. The simulation of the assembly line balancing problem provides optimal results for classical and practical problems.
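
Of the performance parameters above, the smoothness index is the easiest to make concrete: SI = sqrt(sum over stations of (Tmax - Ti)^2) over the station times of a candidate assignment, as in the sketch below with illustrative times.

```python
# Smoothness index of a candidate task assignment; station times are invented.
import math

station_times = [42.0, 40.0, 44.0, 39.0]          # seconds, per workstation
t_max = max(station_times)
si = math.sqrt(sum((t_max - t) ** 2 for t in station_times))
print("workstations:", len(station_times), "smoothness index:", round(si, 2))
```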

Keywords: Ant colony, petri net, tabu search, two sided ALBP

Procedia PDF Downloads 278
3411 Optimization of Dez Dam Reservoir Operation Using Genetic Algorithm

Authors: Alireza Nikbakht Shahbazi, Emadeddin Shirali

Abstract:

Since optimization issues of water resources are complicated due to the variety of decision-making criteria and objective functions, it is sometimes impossible to resolve them through regular optimization methods, or doing so is time- and money-consuming; therefore, the use of modern tools and methods is inevitable in resolving such problems. An accurate and essential utilization policy has to be determined in order to use natural resources such as water reservoirs optimally. Water reservoir programming studies aim to determine the final cultivated land area based on predefined agricultural models and water requirements, and a dam utilization rule curve is also provided in such studies. The basic information applied in water reservoir programming studies generally includes meteorological, hydrological, agricultural and water reservoir related data, and the geometric characteristics of the reservoir. The Dez dam water resource system was simulated using this basic information in order to determine the capability of its reservoir to meet the objectives of the performed plan. As a meta-heuristic method, the genetic algorithm was applied in order to provide utilization rule curves (intersecting the reservoir volume). MATLAB software was used to solve the foresaid model. Rule curves were first obtained through the genetic algorithm; then the significance of using rule curves, and the decrease in the number of decision variables in the system, was determined through system simulation and comparison of the results with the optimization results (Standard Operating Procedure). One of the most essential issues in the optimization of a complicated water resource system is the increasing number of variables, so that a lot of time is required to find an optimum answer and, in some cases, no desirable result is obtained. In this research, intersecting the reservoir volume has been applied as a modern approach to reduce the number of variables. The water reservoir programming studies were performed based on the basic information, general hypotheses and standards, applying a monthly simulation technique for a statistical period of 30 years. Results indicated that the application of rule curves prevents extreme shortages and decreases the monthly shortages.

Keywords: optimization, rule curve, genetic algorithm method, Dez dam reservoir

Procedia PDF Downloads 265
3410 Accurate Cortical Reconstruction in Narrow Sulci with Zero-Non-Zero Distance (ZNZD) Vector Field

Authors: Somojit Saha, Rohit K. Chatterjee, Sarit K. Das, Avijit Kar

Abstract:

A new force field is designed for the propagation of a parametric contour into deep, narrow cortical folds in the application of knowledge-based reconstruction of the cerebral cortex from MR images of the brain. The design of this force field is highly inspired by the Generalized Gradient Vector Flow (GGVF) model but differs markedly in how it manipulates image information to determine the direction of propagation of the contour. While GGVF uses an edge map as its main driving force, the newly designed force field uses the map of distances between zero-valued pixels and their nearest non-zero-valued pixels as its main driving force; hence, it is called the Zero-Non-Zero Distance (ZNZD) force field. The objective of this force field is the forceful propagation of the contour, beyond spurious convergence due to the partial volume effect (PVE), into narrow sulcal folds. Being a function of the corresponding non-zero pixel value, the force field has an inherent ability to determine the spuriousness of an edge automatically. It is effectively applied, along with some morphological processing, in cortical reconstruction to overcome the hindrance of PVE in narrow sulci where the conventional GGVF fails.
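
The ZNZD map itself is easy to sketch with SciPy's Euclidean distance transform: passing (img == 0) yields, at each zero pixel, the distance to (and the indices of) its nearest non-zero pixel, from which a force magnitude can be derived. The image and the force law below are illustrative assumptions.

```python
# Zero-to-nearest-non-zero distance map via SciPy's distance transform.
import numpy as np
from scipy.ndimage import distance_transform_edt

img = np.zeros((64, 64))
img[20:30, 20:30] = 1.0                     # stand-in for segmented tissue

# distance_transform_edt measures distance to the nearest zero of its input,
# so passing (img == 0) gives, at each zero pixel, the distance to the
# nearest non-zero pixel, with the indices of that pixel returned as well.
dist, (iy, ix) = distance_transform_edt(img == 0, return_indices=True)
nearest_value = img[iy, ix]                 # non-zero value driving the force
force_mag = np.where(img == 0, nearest_value / (1.0 + dist), 0.0)
```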

Keywords: deformable model, external force field, partial volume effect, cortical reconstruction, MR image of brain

Procedia PDF Downloads 397