Search results for: artificial intelligence-based optimization
300 Optimization of SAD Algorithm on VLIW DSP
Authors: Hui-Jae You, Sun-Tae Chung, Souhwan Jung
Abstract:
SAD (Sum of Absolute Differences) algorithm is heavily used in motion estimation, which is a computationally demanding process in motion picture encoding. To enhance the performance of motion picture encoding on a VLIW processor, an efficient implementation of the SAD algorithm on that processor is essential. The SAD algorithm is programmed as a nested loop with a conditional branch. In VLIW processors, loops are usually optimized by software pipelining, but research on optimal scheduling of software pipelining for nested loops, especially nested loops with conditional branches, is rare. In this paper, we propose an optimal scheduling and implementation of the SAD algorithm with a conditional branch on a VLIW DSP processor. The proposed scheduling first transforms the nested loop with a conditional branch into a single loop with a conditional branch, taking into account full utilization of the ILP capability of the VLIW processor and early escape from the loop. Next, it applies a modulo scheduling technique developed for single loops. Based on this scheduling strategy, an optimal implementation of the SAD algorithm on the TMS320C67x, a VLIW DSP, is presented. Experiments on the TMS320C6713 DSK show that an H.263 encoder with the proposed SAD implementation performs better than H.263 encoders with other SAD implementations, and that the code size of the optimal SAD implementation is small enough to be appropriate for embedded environments.
Keywords: Optimal implementation, SAD algorithm, VLIW, TMS320C6713.
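For readers unfamiliar with the kernel being optimized, the sketch below gives a minimal reference version of SAD-based block matching in Python, including the early loop escape the paper exploits; the block size, search radius, and frame layout are illustrative assumptions, not the paper's TMS320C67x implementation.

```python
import numpy as np

def sad(row_a: np.ndarray, row_b: np.ndarray) -> int:
    """Sum of Absolute Differences between two equal-length pixel rows."""
    return int(np.abs(row_a.astype(np.int32) - row_b.astype(np.int32)).sum())

def best_match(ref_block, frame, top, left, radius=7):
    """Exhaustive block-matching motion search minimizing SAD.
    The inner accumulation breaks out early ('earlier escape') once the
    running SAD exceeds the best score found so far."""
    h, w = ref_block.shape
    best = (0, 0, float("inf"))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > frame.shape[0] or x + w > frame.shape[1]:
                continue
            candidate = frame[y:y + h, x:x + w]
            score = 0
            for r in range(h):               # row-by-row accumulation
                score += sad(ref_block[r], candidate[r])
                if score >= best[2]:         # early escape from the nested loop
                    break
            if score < best[2]:
                best = (dy, dx, score)
    return best                              # (dy, dx, SAD) of the best candidate
```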
299 Vehicle Gearbox Fault Diagnosis Based On Cepstrum Analysis
Authors: Mohamed El Morsy, Gabriela Achtenová
Abstract:
Research on damage to gears and gear pairs using vibration signals remains very attractive, because vibration signals from a gear pair are complex in nature and not easy to interpret. Predicting gear pair defects by analyzing changes in the vibration signal of gear pairs in operation is a very reliable method. Therefore, a suitable vibration signal processing technique is necessary to extract defect information generally obscured by noise from the dynamic factors of other gear pairs. This article presents the value of cepstrum analysis in vehicle gearbox fault diagnosis. The cepstrum represents the overall power content of a whole family of harmonics and sidebands when more than one family of sidebands is present at the same time. The concepts of the measurement and analysis involved in using the technique are briefly outlined. Cepstrum analysis is used for the detection of an artificial pitting defect in a vehicle gearbox loaded at different speeds and torques. The test stand is equipped with three dynamometers; the input dynamometer serves as the internal combustion engine, and the output dynamometers introduce the load on the flanges of the output joint shafts. The pitting defect is manufactured on the tooth side of a gear of the fifth speed on the secondary shaft. In addition, a method for fault diagnosis of gear faults based on the order cepstrum is presented. The procedure is illustrated with experimental vibration data from the vehicle gearbox. The results show the effectiveness of cepstrum analysis in the detection and diagnosis of gear condition.
Keywords: Cepstrum analysis, fault diagnosis, gearbox.
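As a minimal sketch of the transform underlying the technique (the order cepstrum is computed on angle-domain resampled signals; this example assumes a plain time-domain record with illustrative frequencies):

```python
import numpy as np

def real_cepstrum(signal: np.ndarray) -> np.ndarray:
    """Real cepstrum: inverse FFT of the log-magnitude spectrum.
    A family of evenly spaced sidebands/harmonics, e.g. from a pitted
    gear tooth, shows up as a single peak at the corresponding quefrency."""
    spectrum = np.fft.fft(signal)
    log_mag = np.log(np.abs(spectrum) + 1e-12)   # small offset avoids log(0)
    return np.real(np.fft.ifft(log_mag))

# Hypothetical 1 kHz vibration record: a 120 Hz mesh tone amplitude-modulated
# at 40 Hz, which yields a cepstral peak near quefrency 1/40 = 0.025 s.
fs = 1000
t = np.arange(0, 1.0, 1 / fs)
vibration = (1 + 0.5 * np.sin(2 * np.pi * 40 * t)) * np.sin(2 * np.pi * 120 * t)
cepstrum = real_cepstrum(vibration)
quefrency = np.arange(len(cepstrum)) / fs
```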
298 sEMG Interface Design for Locomotion Identification
Authors: Rohit Gupta, Ravinder Agarwal
Abstract:
Surface electromyographic (sEMG) signals have the potential to identify human activities and intentions. This potential is further exploited to control artificial limbs using sEMG signals from the residual limbs of amputees. This paper deals with the development of a multichannel, cost-efficient sEMG signal interface for research applications, along with the evaluation of a proposed class-dependent statistical approach to feature selection. The sEMG signal acquisition interface was developed using the ADS1298 from Texas Instruments, a front-end interface integrated circuit for ECG applications. The sEMG signal was then recorded from two lower-limb muscles for three locomotions, namely Plane Walk (PW), Stair Ascending (SA), and Stair Descending (SD). A class-dependent statistical approach is proposed for feature selection, and its performance is compared with 12 pre-existing feature vectors. To make the study more extensive, the performances of five different types of classifiers are compared. The outcome of the current piece of work proves the suitability of the proposed feature selection algorithm for locomotion recognition compared to the other existing feature vectors. The SVM classifier outperformed the other compared classifiers, with an average recognition accuracy of 97.40%. Feature vector selection emerges as the most dominant factor affecting classification performance, as it accounts for 51.51% of the total variance in classification accuracy. The results demonstrate the potential of the developed sEMG signal acquisition interface along with the proposed feature selection algorithm.
Keywords: Classifiers, feature selection, locomotion, sEMG.
297 Prediction of the Epileptic Events 'Epileptic Seizures' by Neural Networks and Expert Systems
Authors: Kifah Tout, Nisrine Sinno, Mohamad Mikati
Abstract:
Many studies have focused on the nonlinear analysis of electroencephalography (EEG), mainly for the characterization of epileptic brain states. It is assumed that at least two states of the epileptic brain are possible: the interictal state, characterized by a normal, apparently random, steady-state ongoing EEG activity; and the ictal state, characterized by the paroxysmal occurrence of synchronous oscillations, generally called a seizure in neurology. The spatial and temporal dynamics of the epileptogenic process are still not completely clear, especially regarding the most challenging aspect of epileptology: the anticipation of the seizure. Despite all efforts, we still don't know how, when, and why a seizure occurs. However, current studies bring strong evidence that the interictal-ictal state transition is not an abrupt phenomenon. Findings also indicate that it is possible to detect a preseizure phase. Our approach is to use neural networks to detect interictal states and to predict from those states the upcoming seizure (ictal state). Analysis of the EEG signal based on neural networks is used to classify EEG as either seizure or non-seizure. By applying prediction methods, it will be possible to predict the upcoming seizure from non-seizure EEG. We will study patients admitted to the epilepsy monitoring unit for the purpose of recording their seizures; preictal, ictal, and postictal EEG recordings are available for such patients for analysis. The system will be trained using one body of samples and validated using another. A third body of samples, distinct from the first two, is used to test the network for the achievement of optimum prediction. Several methods will be tried, including backpropagation ANNs and RBF networks.
Keywords: Artificial neural network (ANN), automatic prediction, epileptic seizure analysis, genetic algorithm.
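A minimal sketch of the train/validate/test workflow the abstract describes, using a backpropagation ANN from scikit-learn on synthetic stand-in features (real inputs would be windowed EEG features such as band powers; the feature set, labels, and network size here are assumptions):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Stand-in feature matrix: one row per EEG window, 16 features per window;
# label 1 = preseizure (preictal), 0 = interictal.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 16))
y = rng.integers(0, 2, size=600)

# Three bodies of samples, as in the text: train, validate, test.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=1)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=1)

net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)  # backpropagation ANN
net.fit(X_train, y_train)
print("validation accuracy:", net.score(X_val, y_val))
print("test accuracy:", net.score(X_test, y_test))
```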
296 Effectiveness of Moringa oleifera Coagulant Protein as Natural Coagulant Aid in Removal of Turbidity and Bacteria from Turbid Waters
Authors: B. Bina, M. H. Mehdinejad, Gunnel Dalhammer, Guna Rajarao, M. Nikaeen, H. Movahedian Attar
Abstract:
Coagulation of water involves the use of coagulating agents to bring the suspended matter in raw water together for the settling and filtration stages. The present study examines the effects of aluminum sulfate as coagulant, in conjunction with Moringa oleifera coagulant protein as coagulant aid, on turbidity, hardness, and bacteria in turbid water. A conventional jar test apparatus was employed for the tests. The best removal was observed at a pH of 7 to 7.5 for all turbidities. Turbidity removal efficiencies between 80% and 99% were obtained with Moringa oleifera coagulant protein as coagulant aid. The dosages of coagulant and coagulant aid decreased with increasing turbidity. In addition, Moringa oleifera coagulant protein significantly reduced the required dosage of primary coagulant. Residual Al3+ in treated water was less than 0.2 mg/L, which meets the Environmental Protection Agency guidelines. The results showed that a turbidity reduction of 85.9%–98%, paralleled by a primary Escherichia coli reduction of 1–3 log units (99.2–99.97%), was obtained within the first 1 to 2 h of treatment. In conclusion, Moringa oleifera coagulant protein can be used as coagulant aid for drinking water treatment without the risk of organic or nutrient release. We demonstrated that the optimal design method is an efficient approach for optimization of the coagulation-flocculation process and appropriate for raw water treatment.
Keywords: MOCP, coagulant aid, turbidity removal, E. coli removal, water, treatment.
295 Selection of the Optimum Cooling Scheme for Generators Based on Electro-Thermal Analysis
Authors: Diako Azizi, Ahmad Gholami, Vahid Abbasi
Abstract:
Optimal selection of electrical insulation in electrical machinery ensures reliability during operation. From the insulation point of view, the stator is the most important part of an electrical machine. This fact reveals the need to inspect the electrical machine insulation together with the electro-thermal stresses. In the first step of the study, a part of the whole machine structure that covers the general characteristics of the machine is chosen; then, based on electromagnetic analysis (the finite element method), the machine operation is simulated. In the simulation results, the temperature distribution of the total structure is presented using electro-thermal analysis. The results of the electro-thermal analysis can be used for designing an optimal cooling system. In order to design, review, and compare the cooling systems, four wiring structures in the stator slots are presented. The structures are compared to each other in terms of electrical behavior, thermal distribution, and remaining insulation life using finite element analysis. Following these steps, an optimization algorithm is presented for selecting the appropriate structure.
Keywords: Electrical field, field distribution, insulation, winding, finite element method, electro-thermal.
294 Route Training in Mobile Robotics through System Identification
Authors: Roberto Iglesias, Theocharis Kyriacou, Ulrich Nehmzow, Steve Billings
Abstract:
Fundamental sensor-motor couplings form the backbone of most mobile robot control tasks, and often need to be implemented fast, efficiently, and nevertheless reliably. Machine learning techniques are therefore often used to obtain the desired sensor-motor competences. In this paper we present an alternative to established machine learning methods such as artificial neural networks that is very fast, easy to implement, and has the distinct advantage of generating transparent, analysable sensor-motor couplings: system identification through nonlinear polynomial mapping. This work, which is part of the RobotMODIC project at the universities of Essex and Sheffield, aims to develop a theoretical understanding of the interaction between the robot and its environment. One of the purposes of this research is to enable the principled design of robot control programs. As a first step towards this aim we model the behaviour of the robot, as this emerges from its interaction with the environment, with the NARMAX modelling method (Nonlinear Auto-Regressive Moving Average models with eXogenous inputs). This method produces explicit polynomial functions that can subsequently be analysed using established mathematical methods. In this paper we demonstrate the fidelity of the obtained NARMAX models in the challenging task of robot route learning; we present a set of experiments in which a Magellan Pro mobile robot was taught to follow four different routes, always using the same mechanism to obtain the required control law.
Keywords: Mobile robotics, system identification, non-linear modelling, NARMAX.
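At its core, a NARMAX model is an explicit polynomial in lagged inputs and outputs. The sketch below shows a stripped-down NARX-style least-squares fit; full NARMAX identification also performs term selection and models the noise process, which are omitted here, and the lag depth and polynomial degree are illustrative assumptions.

```python
import numpy as np
from itertools import combinations_with_replacement

def fit_polynomial_narx(u, y, lag=2, degree=2):
    """Least-squares fit of y(t) = f(y(t-1..lag), u(t-1..lag)) where f is a
    polynomial with all monomials up to 'degree'. Returns the coefficient
    vector, i.e. an explicit, analysable polynomial behaviour law."""
    rows = []
    for t in range(lag, len(y)):
        base = [y[t - k] for k in range(1, lag + 1)] + \
               [u[t - k] for k in range(1, lag + 1)]
        terms = [1.0]                                    # constant term
        for d in range(1, degree + 1):
            for combo in combinations_with_replacement(base, d):
                terms.append(float(np.prod(combo)))      # monomial of degree d
        rows.append(terms)
    Phi = np.array(rows)
    theta, *_ = np.linalg.lstsq(Phi, np.array(y[lag:]), rcond=None)
    return theta
```

Here `u` would hold sensor readings (e.g. sonar ranges) and `y` the motor commands; the returned coefficients form the transparent control law the abstract refers to.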
293 A Growing Neural Gas Approach for Evaluating Quality of Software Modules
Authors: Parvinder S. Sandhu, Sandeep Khimta, Kiranpreet Kaur
Abstract:
Predicting software quality during the development life cycle of a software project helps the development organization make efficient use of available resources to produce a product of the highest quality. A "whether a module is faulty or not" approach can be used to predict the quality of a software module. A number of software quality prediction models are described in the literature based upon genetic algorithms, artificial neural networks, and other data mining algorithms. One of the promising aspects of quality prediction is based on clustering techniques. Most quality prediction models based on clustering techniques make use of K-means, Mixture-of-Gaussians, Self-Organizing Map, Neural Gas, or fuzzy K-means algorithms for prediction. All of these techniques require a predefined structure; that is, the number of neurons or clusters must be known before the clustering process starts. In the case of Growing Neural Gas, however, there is no need to predetermine the number of neurons or the topology of the structure: it starts with a minimal neuron structure that is incremented during training until it reaches a user-defined maximum number of clusters. Hence, in this work we have used Growing Neural Gas as the underlying clustering algorithm, which produces an initial set of labeled clusters from the training data set; this set of clusters is then used to predict the quality of a test data set of software modules. The best testing results show 80% accuracy in evaluating the quality of software modules. Hence, the proposed technique can be used by programmers to evaluate the quality of modules during software development.
Keywords: Growing Neural Gas, data clustering, fault prediction.
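Training the Growing Neural Gas itself is beyond a short sketch, but the prediction step the abstract describes is simple: label each trained node by majority vote of its training members, then let test modules inherit the label of their nearest node. A minimal version, assuming `centers` holds the node positions of an already-trained GNG:

```python
import numpy as np

def label_clusters(centers, X_train, y_train):
    """Label each cluster node as faulty (1) or not (0) by majority vote
    of the training modules assigned to it."""
    assign = np.argmin(((X_train[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
    labels = []
    for c in range(len(centers)):
        members = y_train[assign == c]
        labels.append(int(round(members.mean())) if len(members) else 0)
    return np.array(labels)

def predict_quality(centers, cluster_labels, X_test):
    """Each test module inherits the label of its nearest cluster node."""
    assign = np.argmin(((X_test[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
    return cluster_labels[assign]
```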
292 Conjugate Mixed Convection Heat Transfer and Entropy Generation of Cu-Water Nanofluid in an Enclosure with Thick Wavy Bottom Wall
Authors: Sanjib Kr Pal, S. Bhattacharyya
Abstract:
Mixed convection of Cu-water nanofluid in an enclosure with a thick wavy bottom wall has been investigated numerically. A coordinate transformation method is used to transform the computational domain into an orthogonal coordinate system. The governing equations in the computational domain are solved through a pressure-correction-based iterative algorithm. The fluid flow and heat transfer characteristics are analyzed for a wide range of Richardson number (0.1 ≤ Ri ≤ 5), nanoparticle volume concentration (0.0 ≤ ϕ ≤ 0.2), amplitude (0.0 ≤ α ≤ 0.1) of the thick wavy bottom wall, and wave number (ω) at a fixed Reynolds number. The results show that the heat transfer rate increases remarkably on adding the nanoparticles. The heat transfer rate depends on the wavy wall amplitude and wave number, and decreases with increasing Richardson number for fixed amplitude and wave number. The Bejan number and the entropy generation are determined to analyze the thermodynamic optimization of the mixed convection.
Keywords: Entropy generation, mixed convection, conjugate heat transfer, numerical, nanofluid, wall waviness.
291 Impact of Wind Energy on Cost and Balancing Reserves
Authors: A. Khanal, A. Osareh, G. Lebby
Abstract:
Wind energy offers significant advantages, such as no fuel costs and no emissions from generation. However, wind energy sources are variable and non-dispatchable. The utility grid is able to accommodate the variability of wind in small proportions along with the daily load. At high penetration levels, however, the variability can severely impact the utility reserve requirements and the associated cost. In this paper the impact of wind energy is evaluated in detail in formulating the total utility cost. The objective is to minimize the overall cost of generation while ensuring proper management of the load. The overall cost includes the curtailment cost, reserve cost, and reliability cost, as well as any other penalty imposed by the regulatory authority. Different levels of wind penetration are explored and the cost impacts are evaluated. As the penetration level increases significantly, reliability becomes a critical question to be answered. Here we increase the wind penetration while keeping the reliability factor within the acceptable limit provided by NERC. This paper uses an economic dispatch (ED) model to incorporate wind generation into the power grid. Power system costs are analyzed at various wind penetration levels using linear programming. The goal of this study is to show how increases in wind generation will affect power system economics.
Keywords: Balancing Reserves, Optimization, Wind Energy.
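As a toy illustration of an economic dispatch formulation solved by linear programming, the sketch below dispatches two thermal units plus wind over a single period with a curtailment penalty; all unit data, the load, and the penalty value are made-up numbers, and the paper's full model additionally prices reserves and reliability.

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables: [g1, g2, wind_used, wind_curtailed] in MWh.
cost = np.array([30.0, 45.0, 0.0, 10.0])   # $/MWh; curtailment is penalized
load, wind_avail = 500.0, 150.0

A_eq = np.array([[1.0, 1.0, 1.0, 0.0],     # generation must balance the load
                 [0.0, 0.0, 1.0, 1.0]])    # wind used + curtailed = available
b_eq = np.array([load, wind_avail])
bounds = [(50, 300), (50, 300), (0, wind_avail), (0, wind_avail)]

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("dispatch:", res.x, "total cost:", res.fun)
```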
290 Medical Knowledge Management in Healthcare Industry
Authors: B. Stroetmann, A. Aisenbrey
Abstract:
The Siemens Healthcare Sector is one of the world's largest suppliers to the healthcare industry and a trendsetter in medical imaging and therapy, laboratory diagnostics, medical information technology, and hearing aids. Siemens offers its customers products and solutions for the entire range of patient care from a single source, from prevention and early detection to diagnosis, and on to treatment and aftercare. By optimizing clinical workflows for the most common diseases, Siemens also makes healthcare faster, better, and more cost-effective. The optimization of clinical workflows requires a multidisciplinary focus and a collaborative approach involving, for example, medical advisors, researchers, and scientists as well as healthcare economists. This new form of collaboration brings together experts with deep technical experience, physicians with specialized medical knowledge, and people with comprehensive knowledge of health economics. As Charles Darwin is often quoted as saying, "It is neither the strongest of the species that survive, nor the most intelligent, but the one most responsive to change." We believe that those who can successfully manage this change will emerge as winners, with valuable competitive advantage. Current medical information and knowledge are some of the core assets in the healthcare industry. The main issue is to connect knowledge holders and knowledge recipients from various disciplines efficiently in order to spread and distribute knowledge.
Keywords: Business excellence, clinical knowledge, knowledge management, knowledge services, learning organizations, trust.
289 An Optimal Algorithm for Finding (r, Q) Policy in a Price-Dependent Order Quantity Inventory System with Soft Budget Constraint
Authors: S. Hamid Mirmohammadi, Shahrazad Tamjidzad
Abstract:
This paper is concerned with a single-item continuous review inventory system in which demand is stochastic and discrete. The budget consumed for purchasing the ordered items is not restricted, but an extra cost is incurred when it exceeds a specific value. The unit purchasing price depends on the quantity ordered under an all-units discount cost structure. In many actual systems, the budget, as a resource occupied by the purchased items, is limited, and the system confronts resource shortage by incurring additional costs. Thus, treating the resource shortage costs as part of the system costs, especially when the amount of resource occupied by the purchased items is influenced by quantity discounts, is well motivated by practical concerns. In this paper, an optimization problem is formulated for finding the optimal (r, Q) policy when the system is influenced by a budget limitation and discount pricing simultaneously. Properties of the cost function are investigated, and an algorithm based on a one-dimensional search procedure is then proposed for finding an optimal (r, Q) policy that minimizes the expected system costs.
Keywords: (r, Q) policy, stochastic demand, backorders, limited resource, quantity discounts.
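To make the one-dimensional search concrete, here is a minimal sketch that scans order quantities against a stylized expected-cost function with a single all-units price break; the cost model, demand rate, and price schedule are illustrative assumptions, far simpler than the paper's stochastic model with backorders and a soft budget.

```python
def expected_cost(Q, K=100.0, h=2.0, demand=50.0):
    """Stylized annual cost of an order quantity Q with an all-units
    discount: the unit price drops from 10 to 9 once Q reaches 80."""
    unit_price = 10.0 if Q < 80 else 9.0
    ordering = K * demand / Q          # setup cost per cycle times cycles/year
    holding = h * Q / 2.0              # average inventory carrying cost
    purchasing = unit_price * demand
    return ordering + holding + purchasing

# One-dimensional search over the discrete candidate quantities.
best_Q = min(range(1, 501), key=expected_cost)
print(best_Q, expected_cost(best_Q))   # picks Q at/near the price break
```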
288 Substantial Fatigue Similarity of a New Small-Scale Test Rig to Actual Wheel-Rail System
Authors: Meysam Naeimi, Zili Li, Roumen Petrov, Rolf Dollevoet, Jilt Sietsma, Jun Wu
Abstract:
The substantial similarity of the fatigue mechanism in a new test rig for rolling contact fatigue (RCF) has been investigated. A new reduced-scale test rig is designed to perform controlled RCF tests on wheel-rail materials. The fatigue mechanism of the rig is evaluated in this study using a combined finite element-fatigue prediction approach. The influence of loading conditions on fatigue crack initiation has been studied. Furthermore, the effects of some artificial defects (squat-shaped) on fatigue lives are examined. To simulate the vehicle-track interaction by means of the test rig, a three-dimensional finite element (FE) model is built. The nonlinear material behaviour of the rail steel is modelled in the contact interface. The results of the FE simulations are combined with the critical plane concept to determine the material points with the greatest likelihood of fatigue failure. Based on the stress-strain responses, fatigue life analysis is carried out by employing previously postulated criteria for fatigue crack initiation (plastic shakedown and ratchetting). The results are reported for various loading conditions and different defect sizes. Afterward, the cyclic mechanism of the test rig is evaluated from the operational viewpoint. The results of the fatigue life predictions are compared with the number of cycles expected from the rig's cyclic nature. Finally, the estimated duration of the experiments until fatigue crack initiation is roughly determined.
Keywords: Fatigue, test rig, crack initiation, life, rail, squats.
287 Effect of Fines on Liquefaction Susceptibility of Sandy Soil
Authors: Ayad Salih Sabbar, Amin Chegenizadeh, Hamid Nikraz
Abstract:
Investigation of the liquefaction susceptibility of materials used in embankments, slopes, dams, and foundations is essential. Many catastrophic geo-hazards, such as flow slides, tilting of foundations, and damage to earth structures, are associated with the static liquefaction that may occur during abrupt shearing of these materials. Many artificial backfill materials are mixtures of sand with fines and other constituents. In order to provide some clarification and evaluation of the role of fines in the static liquefaction behaviour of sandy soils, the effect of fines on the liquefaction susceptibility of sand was experimentally examined in the present work over a range of fines contents, relative densities, and initial confining pressures. The results of an experimental study on various sand-fines mixtures are presented. Undrained static triaxial compression tests were conducted on saturated Perth sand containing 5% bentonite at three different relative densities (10%, 50%, and 90%), and on saturated Perth sand containing both 5% bentonite and slag (2%, 4%, and 6%) at a single relative density of 10%. Undrained static triaxial tests were performed at three different initial confining pressures (100, 150, and 200 kPa). The brittleness index was used to quantify the liquefaction potential of the sand-bentonite-slag mixtures. The results demonstrated that the liquefaction susceptibility of the sand-5% bentonite mixture was greater than that of the clean sandy soil. However, the liquefaction potential decreased when both fines (bentonite and slag) were used. The liquefaction susceptibility of all mixtures decreased with increasing relative density and initial confining pressure.
Keywords: Bentonite, brittleness index, liquefaction, slag.
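One common definition of the brittleness index used to quantify static liquefaction potential is given below (this is the general Bishop-type form based on undrained shear strengths; the paper's exact formulation may differ):

```latex
% I_B: brittleness index; S_u,peak and S_u,res are the peak and residual
% (minimum post-peak) undrained shear strengths from the triaxial test.
I_B = \frac{S_{u,\mathrm{peak}} - S_{u,\mathrm{res}}}{S_{u,\mathrm{peak}}},
\qquad 0 \le I_B \le 1
```

Under this definition, I_B = 1 corresponds to a complete loss of post-peak strength (full static liquefaction), while I_B = 0 indicates no strength loss.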
286 Machining Parameters Optimization of Developed Yttria Stabilized Zirconia Toughened Alumina Ceramic Inserts While Machining AISI 4340 Steel
Authors: Nilrudra Mandal, B Doloi, B Mondal
Abstract:
An attempt has been made to investigate the machinability of zirconia toughened alumina (ZTA) inserts while turning AISI 4340 steel. The inserts were prepared by a powder metallurgy process route, and the machining experiments were performed based on a Response Surface Methodology (RSM) design called Central Composite Design (CCD). Mathematical models of flank wear, cutting force, and surface roughness have been developed using second-order regression analysis. The adequacy of the models has been checked using analysis of variance (ANOVA) techniques. It can be concluded that cutting speed and feed rate are the two most influential factors for flank wear and cutting force prediction. For surface roughness, both the cutting speed and the depth of cut contribute significantly. The effect of the key parameters on each response is also presented as graphical contours for choosing the operating parameters precisely. An 83% desirability level has been achieved using the optimized condition.
Keywords: Analysis of variance (ANOVA), Central Composite Design (CCD), Response Surface Methodology (RSM), Zirconia Toughened Alumina (ZTA).
285 Probe Selection for Pathway-Specific Microarray Probe Design Minimizing Melting Temperature Variance
Authors: Fabian Horn, Reinhard Guthke
Abstract:
In molecular biology, microarray technology is widely and successfully utilized to measure gene activity efficiently. When working with less-studied organisms, methods to design custom-made microarray probes are available. One design criterion is to select probes with minimal melting temperature variance, thus ensuring similar hybridization properties. If the microarray application focuses on the investigation of metabolic pathways, it is not necessary to cover the whole genome; it is more efficient to cover each metabolic pathway with a limited number of genes. Firstly, an approach is presented which minimizes the overall melting temperature variance of the selected probes for all genes of interest. Secondly, the approach is extended to include the additional constraint of covering all pathways with a limited number of genes while minimizing the overall variance. The new optimization problem is solved by a bottom-up programming approach, which reduces the complexity to make it computationally feasible. The new method is applied, as an example, to the selection of microarray probes covering all fungal secondary metabolite gene clusters of Aspergillus terreus.
Keywords: bottom-up approach, gene clusters, melting temperature, metabolic pathway, microarray probe design, probe selection
284 A Review on the Usage of Ceramic Wastes in Concrete Production
Authors: O. Zimbili, W. Salim, M. Ndambuki
Abstract:
Construction and Demolition (C&D) wastes contribute the highest percentage of wastes worldwide (75%). Furthermore, ceramic materials contribute the highest percentage of waste within C&D wastes (54%). The current option for disposal of ceramic wastes is landfill, due to the unavailability of standards, avoidance of risk, and lack of knowledge and experience in using ceramic wastes in construction. The ability of ceramic wastes to act as a pozzolanic material in the production of cement has been effectively explored. The results proved that the temperatures used in the manufacturing of these tiles (about 900 °C) are sufficient to activate the pozzolanic properties of clay. They also showed that, after optimization (11–14% substitution), the cement blend performs better, with no morphological difference between cement blended with ceramic waste and cement blended with other pozzolanic materials. Sanitary ware and electrical insulator porcelain wastes are among the wastes investigated for use as aggregates in concrete production. When optimized, both produced good results, better than when natural aggregates are used. However, research on ceramic wastes as a partial substitute for fine aggregates or cement has not been explored as extensively as the other areas. This review concludes with a focus on investigating whether ceramic wall tile wastes used as a partial substitute for cement and fine aggregates could prove beneficial, since these two materials are the most expensive in concrete production.
Keywords: Blended, morphological, pozzolanic properties, waste.
283 Hybrid of Hunting Search and Modified Simplex Methods for Grease Position Parameter Design Optimisation
Authors: P. Luangpaiboon, S. Boonhao
Abstract:
This study formulates a multi-response surface optimization problem (MRSOP) for determining the proper choices in a process parameter design (PPD) decision problem in the noisy environment of a grease position process in the electronics industry. The proposed model attempts to maximize two process responses: the mean parts between failure on the left and right processes. The conventional modified simplex method, and its hybridization with the stochastic operator from the hunting search algorithm, are applied to determine the proper levels of the controllable design parameters affecting the quality performances. A numerical example demonstrates the feasibility of applying the proposed model to the PPD problem via the two iterative methods, and their advantages are discussed. Numerical results demonstrate that the hybridization is superior to the conventional method. In this study, the mean parts between failure on the left and right lines improved by approximately 39.51%. All experimental data presented in this research have been normalized to disguise actual performance measures, as the raw data are considered confidential.
Keywords: Grease position process, multi-response surfaces, modified simplex method, hunting search method, desirability function approach.
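The desirability function approach named in the keywords is the standard way to scalarize multiple responses into one objective. For reference, the common Derringer-Suich larger-the-better form is sketched below (the paper's exact bounds, targets, and weights are not given here, so L_i, T_i, and r_i are placeholders):

```latex
% d_i maps response i onto [0,1]; L_i is the lowest acceptable value,
% T_i the target, and r_i a shape weight. D is the overall desirability.
d_i(\hat{y}_i) =
\begin{cases}
0 & \hat{y}_i \le L_i \\[4pt]
\left( \dfrac{\hat{y}_i - L_i}{T_i - L_i} \right)^{r_i} & L_i < \hat{y}_i < T_i \\[4pt]
1 & \hat{y}_i \ge T_i
\end{cases}
\qquad
D = \Bigl( \prod_{i=1}^{n} d_i \Bigr)^{1/n}
```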
282 Optimal Design of Two-Channel Recursive Parallelogram Quadrature Mirror Filter Banks
Authors: Ju-Hong Lee, Yi-Lin Shieh
Abstract:
This paper deals with the optimal design of two-channel recursive parallelogram quadrature mirror filter (PQMF) banks. The analysis and synthesis filters of the PQMF bank are composed of two-dimensional (2-D) recursive digital all-pass filters (DAFs) with a nonsymmetric half-plane (NSHP) support region. The design problem is facilitated by using the 2-D doubly complementary half-band (DC-HB) property possessed by the analysis and synthesis filters. To find the coefficients of the 2-D recursive NSHP DAFs, we formulate the design problem as an optimization problem that can be solved by a weighted least-squares (WLS) algorithm in the minimax (L∞) optimal sense. The designed 2-D recursive PQMF bank achieves perfect magnitude response and possesses satisfactory phase response without requiring an extra phase equalizer. Simulation results are also provided for illustration and comparison.
Keywords: Parallelogram Quadrature Mirror Filter Bank, Doubly Complementary Filter, Nonsymmetric Half-Plane Filter, Weighted Least Squares Algorithm, Digital All-Pass Filter.
281 Wall Heat Flux Mapping in Liquid Rocket Combustion Chamber with Different Jet Impingement Angles
Authors: O. S. Pradeep, S. Vigneshwaran, K. Praveen Kumar, K. Jeyendran, V. R. Sanal Kumar
Abstract:
The influence of injector attitude on wall heat flux plays an important role in predicting the start-up transient and in determining the combustion chamber wall durability of liquid rockets. In this paper, comprehensive numerical studies have been carried out on an idealized liquid rocket combustion chamber to examine the transient wall heat flux during its start-up at different injector attitudes. Numerical simulations have been carried out with the help of a validated 2-D axisymmetric, double-precision, pressure-based, transient, species transport, SST k-ω model with a laminar finite-rate model governing the turbulence-chemistry interaction, for four cases with different jet intersection angles, viz., 0°, 30°, 45°, and 60°. We concluded that the jet intersection angle has a bearing on the time and location of the maximum wall heat flux zone of the liquid rocket combustion chamber during the start-up transient. We also concluded that wall heat flux mapping in a liquid rocket combustion chamber during the start-up transient is a meaningful objective for chamber wall material selection and for lucrative design optimization of the combustion chamber to improve the payload capability of the rocket.
Keywords: Combustion chamber, injector, liquid rocket, rocket engine wall heat flux.
280 Optimal Controllers with Actuator Saturation for Nonlinear Structures
Authors: M. Mohebbi, K. Shakeri
Abstract:
Since actuator capacity is limited, in real applications of active control systems under severe earthquakes it is conceivable that the actuators saturate; hence, actuator saturation should be considered as a constraint in the design of optimal controllers. In this paper, the optimal design of active controllers for nonlinear structures considering actuator saturation has been studied. The proposed method for designing optimal controllers is based on defining an optimization problem whose objective is to minimize the maximum displacement of the structure when an actuator of limited capacity is used. To this end, a single-degree-of-freedom (SDF) structure with bilinear hysteretic behavior has been simulated under white noise ground accelerations of different amplitudes. An active tendon control mechanism, comprising prestressed tendons and an actuator, and an instantaneous optimal control algorithm based on the extended nonlinear Newmark method have been used. To achieve the best results, the weights corresponding to displacement, velocity, acceleration, and control force in the performance index have been optimized by a Distributed Genetic Algorithm (DGA). Results show the effectiveness of the proposed method in handling actuator saturation. Based on the numerical simulations, it can also be concluded that the actuator capacity and the average value of the required control force are two important factors in designing nonlinear controllers that account for actuator saturation.
Keywords: Active control, actuator saturation, distributed genetic algorithms, nonlinear.
279 A New Heuristic Approach for the Large-Scale Generalized Assignment Problem
Authors: S. Raja Balachandar, K.Kannan
Abstract:
This paper presents a heuristic approach to solve the Generalized Assignment Problem (GAP), which is NP-hard. It is worth mentioning that much research has been devoted to algorithms for identifying redundant constraints and variables in linear programming models. Some of these algorithms use the intercept matrix of the constraints to identify redundant constraints and variables prior to the start of the solution process. Here, a new heuristic approach based on the dominance property of the intercept matrix is proposed to find optimal or near-optimal solutions of the GAP. In this heuristic, redundant variables of the GAP are identified by applying the dominance property of the intercept matrix repeatedly. The approach is tested on 90 benchmark problems of sizes up to 4000, taken from the OR-Library, and the results are compared with optimum solutions. The computational complexity of solving the GAP using this approach is proved to be O(mn²). The performance of our heuristic is compared with the best state-of-the-art heuristic algorithms with respect to the quality of the solutions. The encouraging results, especially for relatively large test problems, indicate that this heuristic approach can successfully be used to find good solutions for highly constrained NP-hard problems.
Keywords: Combinatorial Optimization Problem, Generalized Assignment Problem, Intercept Matrix, Heuristic, Computational Complexity, NP-Hard Problems.
278 Automated Textile Defect Recognition System Using Computer Vision and Artificial Neural Networks
Authors: Atiqul Islam, Shamim Akhter, Tumnun E. Mursalin
Abstract:
Least Developed Countries (LDCs) like Bangladesh, which earns 25% of its revenue from textile exports, need to produce less defective textile to minimize production cost and time. Inspection processes in these industries are mostly manual and time-consuming. Reducing error in identifying fabric defects requires a more automated and accurate inspection process. Considering this gap, this research implements a textile defect recognizer which uses computer vision methodology combined with multi-layer neural networks to identify four classes of textile defects. The recognizer, suitable for LDCs, identifies fabric defects at economical cost and provides a less error-prone inspection system in real time. In order to generate the input set for the neural network, the recognizer first captures digital fabric images with an image acquisition device and converts the RGB images into binary images by a restoration process and local threshold techniques. The outputs of the processed image (the area of the faulty portion, the number of objects in the image, and the sharpness factor of the image) are then fed as the input layer to the neural network, which uses the backpropagation algorithm to compute the weights and generates the desired classification of defects as output.
Keywords: Computer vision, image acquisition device, machine vision, multi-layer neural networks.
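A minimal sketch of this pipeline (threshold a grayscale image, extract three scalar features, classify with a multi-layer network) is given below; the threshold value, the crude object-count proxy, and the synthetic training data are illustrative assumptions, not the paper's exact restoration and feature definitions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def fabric_features(gray: np.ndarray, threshold: int = 128):
    """Binarize a grayscale fabric image and compute three features in the
    spirit of the paper: defect area, an object/edge-count proxy, and a
    simple sharpness factor."""
    binary = (gray < threshold).astype(np.int16)              # dark = candidate defect
    area = int(binary.sum())
    transitions = int(np.abs(np.diff(binary, axis=1)).sum())  # crude object count
    sharpness = float(np.abs(np.diff(gray.astype(float), axis=0)).mean())
    return [area, transitions, sharpness]

# Hypothetical training set: one feature row per image, four defect classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = rng.integers(0, 4, size=200)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=800).fit(X, y)  # backprop MLP

test_image = rng.integers(0, 256, size=(64, 64))
print("predicted defect class:", clf.predict([fabric_features(test_image)]))
```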
277 Pure Scalar Equilibria for Normal-Form Games
Authors: H. W. Corley
Abstract:
A scalar equilibrium (SE) is an alternative type of equilibrium in pure strategies for an n-person normal-form game G. It is defined using optimization techniques to obtain a pure strategy for each player of G by maximizing an appropriate utility function over the acceptable joint actions. The players’ actions are determined by the choice of the utility function. Such a utility function could be agreed upon by the players or chosen by an arbitrator. An SE is an equilibrium since no players of G can increase the value of this utility function by changing their strategies. SEs are formally defined, and examples are given. In a greedy SE, the goal is to assign actions to the players giving them the largest individual payoffs jointly possible. In a weighted SE, each player is assigned weights modeling the degree to which he helps every player, including himself, achieve as large a payoff as jointly possible. In a compromise SE, each player wants a fair payoff for a reasonable interpretation of fairness. In a parity SE, the players want their payoffs to be as nearly equal as jointly possible. Finally, a satisficing SE achieves a personal target payoff value for each player. The vector payoffs associated with each of these SEs are shown to be Pareto optimal among all such acceptable vectors, as well as computationally tractable.
Keywords: Compromise equilibrium, greedy equilibrium, normal-form game, parity equilibrium, pure strategies, satisficing equilibrium, scalar equilibria, utility function, weighted equilibrium.
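Since an SE is obtained by maximizing an agreed scalar utility over joint pure actions, it can be computed by direct enumeration in small games. The sketch below does this for a hypothetical 2×2 two-player game, with a summed-payoff utility standing in for the greedy SE and a payoff-spread utility standing in for the parity SE (the paper's exact utility functions may differ):

```python
from itertools import product

# payoff[i][a][b]: player i's payoff when player 1 plays a, player 2 plays b.
payoff = [
    [[3, 0], [5, 1]],   # player 1 (hypothetical example game)
    [[3, 5], [0, 1]],   # player 2
]
actions = range(2)

def scalar_equilibrium(utility):
    """Maximize the agreed scalar utility over all joint pure actions."""
    return max(product(actions, actions),
               key=lambda joint: utility([p[joint[0]][joint[1]] for p in payoff]))

greedy = scalar_equilibrium(sum)                            # largest joint payoffs
parity = scalar_equilibrium(lambda v: -(max(v) - min(v)))   # payoffs as equal as possible
print("greedy SE:", greedy, "parity SE:", parity)           # (0, 0) for both here
```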
276 Elaboration and Characterization of Self-Compacting Mortar Based on Biopolymer
Authors: I. Djefour, M. Saidi, I. Tlemsani, S. Toubal
Abstract:
Lignin is a molecule derived from wood and also generated as waste by the paper industry. With a view to its valorization and to protection of the environment, we are interested in its use as a superplasticizer-type admixture in mortars and concretes to improve their mechanical strengths. Concrete admixtures have a very strong influence on the properties of fresh and/or hardened concrete. This study examines the development and use of industrial waste, namely lignin extracted from a renewable natural source (wood), in cementitious materials. The use of such resources is currently enjoying a definite resurgence of interest in the development of building materials. The physicomechanical characteristics of the mortars are determined by optimizing the quantity of the natural superplasticizer. The results show that the mechanical strengths of mortars based on the natural admixture improved by 20% (64 MPa) for a W/C ratio of 0.4, and the required quantity of natural admixture, in dry extract, is 40 times smaller than that of a commercial admixture. This study has a scientific impact (improving the performance of the mortar, with an increase in compactness and a reduction in the quantity of water), an ecological impact (use of the lignin waste generated by the paper industry), and an economic impact (reduction of the cost of producing self-compacting mortars and concretes).
Keywords: Biopolymer, lignin, industrial waste, mechanical resistances, self-compacting mortars.
275 Generative Adversarial Network Based Fingerprint Anti-Spoofing Limitations
Authors: Yehjune Heo
Abstract:
Fingerprint anti-spoofing approaches have been actively developed and applied in real-world applications. One of the main problems of fingerprint anti-spoofing is that it is not robust to unseen samples, especially in real-world scenarios. A possible solution is to generate artificial but realistic fingerprint samples and use them for training in order to achieve good generalization. This paper contains experimental and comparative results with currently popular GAN-based methods and uses realistic synthesis of fingerprints in training in order to increase the performance. Among the various GAN models, the most popular, StyleGAN, is used for the experiments. The CNN models were first trained with a dataset that did not contain generated fake images, and the accuracy along with the mean average error rate were recorded. Then, the generated fake images (fake images of live fingerprints and fake images of spoof fingerprints) were each combined with the original images (real images of live fingerprints and real images of spoof fingerprints), and various CNN models were trained. For each CNN model trained with the dataset of generated fake images, the best performance, accuracy, and mean average error rate were recorded. We observe that current GAN-based approaches need significant improvements in anti-spoofing performance, although the overall quality of the synthesized fingerprints seems reasonable. We include an analysis of this performance degradation, especially with a small number of samples. In addition, we suggest several approaches towards improved generalization with a small number of samples, by focusing on what GAN-based approaches should and should not learn.
Keywords: Anti-spoofing, CNN, fingerprint recognition, GAN.
274 Applying Case-Based Reasoning in Supporting Strategy Decisions
Authors: S. M. Seyedhosseini, A. Makui, M. Ghadami
Abstract:
Globalization, and the resulting tight competition among companies, has increased the importance of making well-timed decisions. Devising and employing effective strategies that are flexible and adaptive to a changing market stands a greater chance of being effective in the long term. At the same time, a clear focus on managing the entire product lifecycle has emerged as a critical area for investment. Therefore, applying well-organized tools that employ past experience in new cases helps to make proper managerial decisions. Case-based reasoning (CBR) is a means of solving a new problem by using or adapting solutions to old problems. In this paper, an adapted CBR model with k-nearest neighbor (k-NN) is employed to provide suggestions for better decision making, adopted for a given product in its mid-life phase. The set of solutions is weighted by CBR on the principle of group decision making. A wrapper approach with a genetic algorithm is employed to generate optimal feature subsets. A department-store dataset covering various products, collected over two years, has been used. The k-fold approach is used to evaluate the classification accuracy rate. Empirical results are compared with a classical case-based reasoning algorithm that has no special process for feature selection, with the CBR-PCA algorithm based on filter-approach feature selection, and with an artificial neural network. The results indicate that the predictive performance of the proposed model, compared with the two CBR algorithms, is more effective in this specific case.
Keywords: Case-based reasoning, genetic algorithm, group decision making, product management.
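The retrieval step at the heart of the CBR/k-NN combination can be sketched in a few lines: find the k most similar past cases and reuse their decisions for the new case. The toy sketch below uses synthetic case features and omits the genetic-algorithm feature-subset selection; all data shapes are assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical case base: one row per past product case, columns are
# descriptive features; labels encode the strategy decision that worked.
rng = np.random.default_rng(2)
cases = rng.normal(size=(120, 6))
decisions = rng.integers(0, 3, size=120)

# Retrieve the 5 most similar cases and reuse their majority decision.
knn = KNeighborsClassifier(n_neighbors=5).fit(cases, decisions)
new_case = rng.normal(size=(1, 6))
print("suggested decision:", knn.predict(new_case))
print("retrieved case ids:", knn.kneighbors(new_case, return_distance=False))
```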
273 Optimization of a Four-Lobed Swirl Pipe for Clean-In-Place Procedures
Authors: Guozhen Li, Philip Hall, Nick Miles, Tao Wu
Abstract:
This paper presents a numerical investigation of two horizontally mounted four-lobed swirl pipes in terms of the effectiveness of the swirl they induce in flows passing through them. The swirl flows induced by the two pipes have the potential to improve the efficiency of Clean-In-Place procedures in a closed processing system by locally intensifying the hydrodynamic impact on the internal pipe surface. Pressure losses, swirl development within the two swirl pipes, swirl induction effectiveness, swirl decay, and wall shear stress variation downstream of the two swirl pipes are analyzed and compared. It was found that a shorter swirl-inducing pipe used in conjunction with transition pipes is more effective at swirl induction than a longer one, in that it imposes less constraint on the induced swirl and results in slightly higher swirl intensity just downstream of it while incurring a smaller pressure loss. The wall shear stress downstream of the shorter swirl pipe is also slightly larger than that downstream of the longer swirl pipe, due to the slightly higher swirl intensity induced by the shorter pipe. The advantage of the shorter swirl pipe in terms of swirl induction is more significant in flows with a larger Reynolds number.
Keywords: Swirl pipe, swirl effectiveness, CFD, wall shear stress, swirl intensity.
272 Color Characteristics of Dried Cocoa Using Shallow Box Fermentation Technique
Authors: Khairul Bariah Sulaiman, Tajul Aris Yang
Abstract:
Fermentation is well known as an essential process for developing chocolate flavor in dried cocoa beans. Besides developing the precursors of cocoa flavor, it also induces color changes in the beans. The fermentation process is influenced by various factors, such as planting material, preconditioning of the cocoa pods, and fermentation technique. Therefore, this study was conducted to evaluate the color of Malaysian cocoa beans and how the duration of pod storage and the fermentation technique using shallow boxes affect its color characteristics. Two factors were studied, i.e., duration of cocoa pod storage (0, 2, 4, and 6 days) and duration of cocoa fermentation (0, 1, 2, 3, 4, and 5 days). The experiment was arranged in a 4 × 6 factorial design with 24 treatments in a Completely Randomised Design (CRD). The produced beans were inspected for color changes under artificial light during the cut test and divided into four color groups, namely fully brown, purple brown, fully purple, and slaty. Cut tests indicated that cocoa beans dried directly without undergoing fermentation had the highest slaty percentage. However, pod storage before fermentation was found to decrease the slaty percentage. In contrast, the percentage of fully brown beans started to dominate after two days of fermentation, especially in the four- and six-day pod storage batches. Almost all batches of cocoa beans had a fully purple percentage of less than 20%. Interestingly, purple brown beans were scattered across all batches without any specific trend. Meanwhile, statistical analysis using the General Linear Model showed that pod storage has a more significant effect on the color characteristics of Malaysian dried beans than fermentation duration.
Keywords: Cocoa beans, color, fermentation, shallow box.
271 An Application for Risk of Crime Prediction Using Machine Learning
Authors: Luis Fonseca, Filipe Cabral Pinto, Susana Sargento
Abstract:
The increase of the world population, especially in large urban centers, has resulted in new challenges, particularly with the control and optimization of public safety. Thus, in the present work, a solution is proposed for the prediction of criminal occurrences in a city based on historical incident data and demographic information. The entire research and implementation is presented, starting with the data collection from its original source, through the treatment and transformations applied to the data, to the choice, evaluation, and implementation of the machine learning model, up to the application layer. Classification models are implemented to predict criminal risk for a given time interval and location. Machine learning algorithms such as Random Forest, Neural Networks, K-Nearest Neighbors, and Logistic Regression are used to predict occurrences, and their performance is compared according to the data processing and transformation used. The results show that the use of machine learning techniques helps to anticipate criminal occurrences, which contributes to the reinforcement of public security. Finally, the models were deployed on a platform that provides an API enabling other entities to request predictions in real time. An application is also presented in which criminal predictions can be shown visually.
Keywords: Crime prediction, machine learning, public safety, smart city.
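As a minimal sketch of the classification step (here with the Random Forest variant; the engineered features, labels, and data below are synthetic stand-ins, not the paper's dataset):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical features per (location, time-slot) pair.
rng = np.random.default_rng(3)
X = np.column_stack([
    rng.integers(0, 24, 1000),        # hour of day
    rng.integers(0, 7, 1000),         # weekday
    rng.integers(0, 30, 1000),        # district id
    rng.poisson(2.0, 1000),           # incidents in the previous week
    rng.normal(5000, 1500, 1000),     # population density
])
y = rng.integers(0, 2, 1000)          # 1 = an incident occurred in that slot

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
print("crime risk for one slot:", model.predict_proba(X_te[:1]))
```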