Search results for: Optimal Computing Budget Allocation

449 Optimization of Surface Finish in Milling Operation Using Live Tooling via Taguchi Method

Authors: Harish Kumar Ponnappan, Joseph C. Chen

Abstract:

The main objective of this research is to optimize the surface roughness of a milling operation on AISI 1018 steel using live tooling on a HAAS ST-20 lathe. In this study, Taguchi analysis is used to optimize the milling process by investigating the effect of different machining parameters on surface roughness. The L9 orthogonal array is designed with four controllable factors at three levels each and one uncontrollable factor, resulting in 18 experimental runs. The optimal parameters determined from the Taguchi analysis were a feed rate of 76.2 mm/min, a spindle speed of 1150 rpm, a depth of cut of 0.762 mm, and 2-flute TiN-coated high-speed steel as the tool material. The process capability Cp and process capability index Cpk values were improved from 0.62 and -0.44 to 1.39 and 1.24, respectively. The average surface roughness from the confirmation runs was 1.30 µm, decreasing the defect rate from 87.72% to 0.01%. This study demonstrates how the Taguchi design can be used efficiently to optimize surface roughness in a milling operation with live tooling.
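
To make the capability figures concrete, the sketch below computes Cp and Cpk from a sample mean and standard deviation using the standard formulas; the surface roughness specification limits in the usage line are hypothetical, as the abstract does not state them.

```python
def process_capability(mean, std, lsl, usl):
    """Standard process capability indices.

    Cp compares the spec width to the process spread (6 sigma);
    Cpk also penalizes a process mean that is off-center.
    """
    cp = (usl - lsl) / (6 * std)
    cpk = min(usl - mean, mean - lsl) / (3 * std)
    return cp, cpk

# Hypothetical Ra spec limits in µm (not reported in the abstract).
print(process_capability(mean=1.30, std=0.05, lsl=0.5, usl=2.0))
```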

Keywords: Live tooling, surface roughness, Taguchi analysis, Computer Numerical Control (CNC) milling operation, CNC turning operation.

448 A Hybrid Multi Objective Algorithm for Flexible Job Shop Scheduling

Authors: Parviz Fattahi

Abstract:

Scheduling for the flexible job shop is very important in the fields of both production management and combinatorial optimization. However, it is quite difficult to achieve an optimal solution to this problem with traditional optimization approaches owing to the high computational complexity. Combining several optimization criteria induces additional complexity and new problems. In this paper, a Pareto approach to solving the multi-objective flexible job shop scheduling problem is proposed. The objectives considered are to minimize the overall completion time (makespan) and the total weighted tardiness (TWT). An effective simulated annealing algorithm based on the proposed approach is presented. An external memory of non-dominated solutions is maintained to save and update the non-dominated solutions during the solution process. Numerical examples are used to evaluate and study the performance of the proposed algorithm. The proposed algorithm can be applied easily in real factory conditions and to large-sized problems. It should thus be useful to both practitioners and researchers.
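
A minimal sketch of the external memory described above, assuming each solution is scored by the tuple (makespan, TWT); the dominance test and archive update are the standard Pareto operations, not the authors' exact code.

```python
def dominates(p, q):
    """True if p is no worse than q in both objectives and strictly better in one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def update_archive(archive, candidate):
    """Keep the external memory as a set of mutually non-dominated solutions."""
    if any(dominates(s, candidate) for s in archive):
        return archive                      # candidate is dominated: discard it
    archive = [s for s in archive if not dominates(candidate, s)]
    archive.append(candidate)               # candidate joins the Pareto front
    return archive
```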

Keywords: Flexible job shop, scheduling, hierarchical approach, simulated annealing, tabu search, multi-objective.

447 Performance Analysis of Software Reliability Models using Matrix Method

Authors: RajPal Garg, Kapil Sharma, Rajive Kumar, R. K. Garg

Abstract:

This paper presents a computational methodology based on matrix operations for a computer-based solution to the problem of performance analysis of software reliability models (SRMs). A set of seven comparison criteria has been formulated to rank the various non-homogeneous Poisson process software reliability models proposed during the past 30 years for estimating software reliability measures such as the number of remaining faults, software failure rate, and software reliability. Selection of the optimal SRM for use in a particular case has been an area of interest for researchers in the field of software reliability, and the tools and techniques for model selection found in the literature cannot be used with a high level of confidence because they rely on a limited number of selection criteria. A real data set from a middle-sized software project, taken from published papers, is used to demonstrate the matrix method. The result of this study is a ranking of SRMs based on the permanent of the criteria matrix formed for each model from the comparison criteria: the model with the highest permanent is ranked first, and so on.
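
Since the ranking hinges on the permanent of the criteria matrix, the sketch below computes it by direct expansion over permutations. This is O(n!) but adequate for a 7x7 criteria matrix, and is an illustration of the quantity being ranked, not the authors' implementation.

```python
from itertools import permutations
from math import prod

def permanent(M):
    """Permanent of a square matrix: like the determinant, but with all signs +."""
    n = len(M)
    return sum(prod(M[i][p[i]] for i in range(n)) for p in permutations(range(n)))

# Models are then ranked by descending permanent of their criteria matrices.
```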

Keywords: Matrix method, Model ranking, Model selection, Model selection criteria, Software reliability models.

446 Optimization of Petroleum Refinery Configuration Design with Logic Propositions

Authors: Cheng Seong Khor, Xiao Qi Yeoh

Abstract:

This work concerns the topological optimization problem of determining the optimal petroleum refinery configuration. We are interested in further investigating and hopefully advancing the existing optimization approaches and strategies that apply logic propositions to conceptual process synthesis problems. In particular, we seek to contribute to this increasingly exciting area of chemical process modeling by addressing the following potentially important issues: (a) how the formulation of design specifications in a mixed-logical-and-integer optimization model can be employed in a synthesis problem to enrich the problem representation by incorporating past design experience, engineering knowledge, and heuristics; and (b) how structural specifications on the interconnectivity relationships by space (states) and by function (tasks) in a superstructure should be properly formulated within a mixed-integer linear programming (MILP) model. The proposed modeling technique is illustrated on a case study involving the alternative processing routes of naphtha, in which significant improvement in solution quality is obtained.
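
As an illustration of issue (a), a logic proposition on unit selection can be translated into linear constraints on binary variables. The toy model below, written with PuLP and hypothetical refinery units, encodes "route A implies a hydrotreater" as y_A <= y_HT and "choose at least one route" as a disjunction; it is a sketch of the standard logic-to-MILP conversion, not the authors' formulation.

```python
from pulp import LpProblem, LpVariable, LpMaximize, LpBinary

prob = LpProblem("refinery_superstructure_toy", LpMaximize)
y_a = LpVariable("route_A", cat=LpBinary)       # hypothetical processing route A
y_b = LpVariable("route_B", cat=LpBinary)       # hypothetical processing route B
y_ht = LpVariable("hydrotreater", cat=LpBinary)

prob += 5 * y_a + 4 * y_b - 2 * y_ht            # toy profit objective
prob += y_a <= y_ht                             # logic proposition: Y_A -> Y_HT
prob += y_a + y_b >= 1                          # logic proposition: Y_A v Y_B
prob.solve()
```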

Keywords: Mixed-integer linear programming (MILP), petroleum refinery, process synthesis, superstructure.

445 Performance Enhancement of Dye-Sensitized Solar Cells by MgO Coating on TiO2 Electrodes

Authors: C. Photiphitak, P. Rakkwamsuk, P. Muthitamongkol, C. Thanachayanont

Abstract:

TiO2/MgO composite films were prepared by coating magnesium acetate solution into the pores of mesoporous TiO2 films using a dip coating method. Concentrations of the magnesium acetate solution were varied in the range of 1×10⁻⁴ to 1×10⁻¹ M. The TiO2/MgO composite films were characterized by scanning electron microscopy (SEM), transmission electron microscopy (TEM), electrochemical impedance spectroscopy (EIS), transient voltage decay, and I-V testing. The TiO2 films and TiO2/MgO composite films were immersed in a 0.3 mM N719 dye solution. The dye-sensitized solar cells with the TiO2/MgO/N719 structure showed an optimal magnesium acetate concentration of 1×10⁻³ M, corresponding to an estimated MgO film thickness of 0.0963 nm and giving a maximum efficiency of 4.85%. The improved efficiency of the dye-sensitized solar cell was attributed to the magnesium oxide film: the wide-band-gap coating suppresses electron back transfer to the triiodide electrolyte and reduces charge recombination.

Keywords: Magnesium oxide thin film, TiO2/MgO composite films, electrochemical impedance spectroscopy, transient voltage decay.

444 The Reproducibility and Repeatability of Modified Likelihood Ratio for Forensics Handwriting Examination

Authors: O. Abiodun Adeyinka, B. Adeyemo Adesesan

Abstract:

The forensic use of handwriting depends on the analysis, comparison, and evaluation decisions made by forensic document examiners. When using biometric technology in forensic applications, it is necessary to compute the Likelihood Ratio (LR) to quantify the strength of evidence under two competing hypotheses, namely the prosecution hypothesis and the defense hypothesis, wherein a set of assumptions and methods is adopted for a given data set. It is therefore important to know how repeatable and reproducible our estimated LR is. This paper evaluated the accuracy and reproducibility of examiners' decisions. Confidence intervals for the estimated LR are presented so as not to report an incorrect estimate that could lead to a wrong judgment in a court of law. The LR is fundamentally a Bayesian concept, and two LR estimators are used in this paper, namely Logistic Regression (LoR) and the Kernel Density Estimator (KDE). The repeatability evaluation was carried out by retesting the initial experiment after an interval of six months to observe whether examiners would repeat their decisions for the estimated LR. The experimental results, which are based on a handwriting dataset, show that the LR has different confidence intervals, which implies that the LR cannot be estimated with the same certainty everywhere. Though LoR performed better than KDE when tested on the same dataset, the two LR estimators investigated showed a consistent region in which the LR value can be estimated confidently. These two findings advance our understanding of the LR when used to compute the strength of evidence in forensic handwriting examination.
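
A minimal sketch of the two LR estimators, assuming a one-dimensional comparison score per handwriting pair: KDE models the score densities under each hypothesis directly, while logistic regression yields an LR as posterior odds divided by prior odds. The score distributions and variable names are placeholders, not the paper's data.

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
same = rng.normal(0.8, 0.10, 200)    # placeholder same-writer scores (H_p)
diff = rng.normal(0.4, 0.15, 200)    # placeholder different-writer scores (H_d)

# KDE estimator: LR(s) = p(s | H_p) / p(s | H_d)
kde_p, kde_d = gaussian_kde(same), gaussian_kde(diff)
lr_kde = lambda s: kde_p(s)[0] / kde_d(s)[0]

# Logistic regression estimator: posterior odds divided by prior odds
X = np.r_[same, diff].reshape(-1, 1)
y = np.r_[np.ones(len(same), dtype=int), np.zeros(len(diff), dtype=int)]
clf = LogisticRegression().fit(X, y)

def lr_logistic(s):
    p = clf.predict_proba([[s]])[0, 1]
    prior_odds = y.mean() / (1 - y.mean())
    return (p / (1 - p)) / prior_odds

print(lr_kde(0.7), lr_logistic(0.7))
```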

Keywords: Logistic Regression (LoR), Kernel Density Estimator (KDE), handwriting, confidence interval, repeatability, reproducibility.

443 Preconcentration and Determination of Cyproheptadine in Biological Samples by Hollow Fiber Liquid Phase Microextraction Coupled with High Performance Liquid Chromatography

Authors: Najari Moghadam Sh., Qomi M., Raofie F., Khadiv J.

Abstract:

In this study, liquid phase microextraction by hollow fiber (HF-LPME) combined with high performance liquid chromatography with UV detection was applied to preconcentrate and determine trace levels of Cyproheptadine in human urine and plasma samples. Cyproheptadine was extracted from 10 mL of alkaline aqueous solution (pH 9.81) into an organic solvent (n-octanol) immobilized in the wall pores of a hollow fiber, and was then back-extracted into an acidified aqueous solution (pH 2.59) located inside the lumen of the hollow fiber. This method is simple, efficient, and cost-effective; it is based on the pH gradient between the two aqueous phases. In order to optimize the HF-LPME, the influencing parameters, including the pH of the donor and acceptor phases, the type of organic solvent, ionic strength, stirring rate, extraction time, and temperature, were studied and optimized. Under optimal conditions, the enrichment factor, limit of detection (LOD), and relative standard deviation (RSD%, n=3) were up to 112, 15 μg·L⁻¹, and 2.7%, respectively.

Keywords: Biological samples, Cyproheptadine, hollow fiber, liquid phase microextraction.

442 Supply Chain Decarbonisation – A Cost-Based Decision Support Model in Slow Steaming Maritime Operations

Authors: Eugene Y. C. Wong, Henry Y. K. Lau, Mardjuki Raman

Abstract:

CO2 emissions from maritime transport operations represent a substantial part of total greenhouse gas emissions, and although vessels are increasingly designed for better energy efficiency, minimizing CO2 emission in maritime operations still plays an important role in supply chain decarbonisation. This paper reviews the initiatives on slow steaming operations towards the reduction of carbon emission. It investigates the relationship and impact among slow steaming cost reduction, carbon emission reduction, and shipment delay. A scenario-based, cost-driven decision support model is developed to facilitate the selection of the optimal slow steaming option, considering the cost of bunker fuel consumption, available speed, carbon emission, and shipment delay. The incorporation of the social cost of cargo is reviewed and suggested. Additional measures on the effect of vessel sizes, routing, and type of fuel towards decarbonisation are discussed.
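
A sketch of the scenario cost comparison, under the common assumption that daily fuel consumption scales with the cube of speed and a nominal factor of about 3.1 t CO2 per tonne of fuel; all prices, distances, and the delay-cost term are placeholders, not the paper's calibrated values.

```python
def voyage_cost(speed, distance_nm, design_speed, design_fuel_tpd,
                fuel_price, co2_price, delay_cost_per_day):
    """Total cost of sailing one leg at a candidate slow-steaming speed (knots)."""
    days = distance_nm / (speed * 24.0)
    fuel_t = design_fuel_tpd * (speed / design_speed) ** 3 * days  # cube-law assumption
    co2_t = 3.1 * fuel_t                                           # ~3.1 t CO2 per t fuel
    delay_days = days - distance_nm / (design_speed * 24.0)
    return fuel_t * fuel_price + co2_t * co2_price + delay_days * delay_cost_per_day

# Pick the cheapest speed among candidate slow-steaming options (placeholder numbers).
speeds = [16, 18, 20, 22, 24]
best = min(speeds, key=lambda v: voyage_cost(v, 11000, 24, 250, 600, 30, 50000))
```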

Keywords: Slow steaming, carbon emission, maritime logistics, sustainability, green supply chain.

441 Field Programmable Gate Array Based Infinite Impulse Response Filter Using Multipliers

Authors: Rajesh Mehra, Bharti Thakur

Abstract:

In this paper, an Infinite Impulse Response (IIR) filter has been designed and simulated on a Field Programmable Gate Array (FPGA). The implementation is based on the Multiply Add and Accumulate (MAC) algorithm, which uses multiply operations for the design implementation. A parallel pipelined structure is used to implement the proposed IIR filter, taking optimal advantage of the look-up tables of the target device. The designed filter has been synthesized on a Digital Signal Processor (DSP) slice based FPGA, with the DSP slices performing the multiplier function of the MAC unit and enhancing the speed performance. The proposed design is simulated with Matlab, synthesized with the Xilinx Synthesis Tool, and implemented on FPGA devices. The Virtex 5 FPGA based design can operate at an estimated frequency of 81.5 MHz, compared to 40.5 MHz for the Spartan 3 ADSP based design. The Virtex 5 based implementation also consumes fewer slices and slice flip-flops of the target FPGA than the Spartan 3 ADSP based implementation, providing a cost-effective solution for signal processing applications.
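
A behavioral sketch of the MAC-based IIR computation the hardware implements: each tap contributes one multiply-accumulate, which is the operation mapped onto a DSP slice. This is reference Python for the arithmetic only, not the synthesized HDL.

```python
import numpy as np

def iir_mac(x, b, a):
    """Direct-form IIR via MACs: y[n] = sum b[k]*x[n-k] - sum a[k]*y[n-k], a[0] = 1."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = 0.0
        for k in range(len(b)):            # feed-forward MAC operations
            if n - k >= 0:
                acc += b[k] * x[n - k]
        for k in range(1, len(a)):         # feedback MAC operations
            if n - k >= 0:
                acc -= a[k] * y[n - k]
        y[n] = acc                         # one accumulator result per output sample
    return y
```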

Keywords: Butterworth, DSP, IIR, MAC, FPGA.

440 Data Mining Techniques in Computer-Aided Diagnosis: Non-Invasive Cancer Detection

Authors: Florin Gorunescu

Abstract:

Diagnosis can be achieved by building a model of a certain organ under surveillance and comparing it with real-time physiological measurements taken from the patient. This paper presents the benefits of using Data Mining techniques in computer-aided diagnosis (CAD), focusing on cancer detection, in order to help doctors make optimal decisions quickly and accurately. Among noninvasive diagnosis techniques, endoscopic ultrasound elastography (EUSE) is a recent elasticity imaging technique that allows characterizing the difference between malignant and benign tumors. Digitizing and summarizing the main features of the EUSE sample movies in vector form relies on exploratory data analysis (EDA). Neural networks are then trained on the corresponding EUSE sample movie vector inputs in such a way that these intelligent systems are able to offer a very precise and objective diagnosis, discriminating between benign and malignant tumors. A concrete application of these Data Mining techniques illustrates the suitability and reliability of this methodology in CAD.

Keywords: Endoscopic ultrasound elastography, exploratory data analysis, neural networks, non-invasive cancer detection.

439 An Efficient Algorithm for Delay Delay-variation Bounded Least Cost Multicast Routing

Authors: Manas Ranjan Kabat, Manoj Kumar Patel, Chita Ranjan Tripathy

Abstract:

Many multimedia communication applications require a source to transmit messages to multiple destinations subject to a quality of service (QoS) delay constraint. To support delay-constrained multicast communications, computer networks need to guarantee an upper bound on the end-to-end delay from the source node to each destination node. This is known as the multicast delay problem. On the other hand, if the same message fails to arrive at each destination node at the same time, inconsistency and unfairness problems may arise among users. This is related to the multicast delay-variation problem. The problem of finding a minimum-cost multicast tree with delay and delay-variation constraints has been proven to be NP-complete. In this paper, we propose an efficient heuristic algorithm, namely the Economic Delay and Delay-Variation Bounded Multicast (EDVBM) algorithm, based on a novel heuristic function, to construct an economic delay- and delay-variation-bounded multicast tree. A noteworthy feature of this algorithm is that it has a very high probability of finding the optimal solution in polynomial time with low computational complexity.
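
The two constraints can be stated compactly: every source-to-destination delay must stay under the delay bound, and the spread between the fastest and slowest destination must stay under the delay-variation bound. The sketch below checks both on a candidate tree using networkx; the EDVBM heuristic function itself is not reproduced here.

```python
import networkx as nx

def satisfies_qos(tree, source, dests, delay_bound, variation_bound):
    """Check delay and delay-variation constraints on a candidate multicast tree."""
    delays = [
        nx.shortest_path_length(tree, source, t, weight="delay") for t in dests
    ]
    return max(delays) <= delay_bound and max(delays) - min(delays) <= variation_bound
```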

Keywords: EDVBM, Heuristic algorithm, Multicast tree, QoS routing, Shortest path.

438 The Relationship between Representational Conflicts, Generalization, and Encoding Requirements in an Instance Memory Network

Authors: Mathew Wakefield, Matthew Mitchell, Lisa Wise, Christopher McCarthy

Abstract:

This paper aims to provide an interpretation of artificial neural networks (ANNs) and explore some of its implications. The interpretation views ANNs as a memory which encodes instances of experience. An experiment explores the behavior of encoding and retrieval of instances from memory. A localised-representation ANN that allows control over encoding and over the size of retrieved memory samples is created and experimented with using the MNIST digits dataset. The relationship between input familiarity, conflict within retrieved samples, and error rates is described and demonstrated to be an effective driver for memory encoding. Results indicate that selective encoding and retrieval samples that allow detection of memory conflicts produce optimal performance, and that error rates are normally distributed with input familiarity and conflict. By using input familiarity and sample consistency to guide memory encoding, the number of encoding trials on the dataset was reduced to 18.33% of the training data while maintaining good recognition performance on the test data.
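
A toy sketch of familiarity- and conflict-driven selective encoding with a nearest-neighbour style instance memory; the thresholds and the familiarity measure are hypothetical stand-ins for the paper's network quantities.

```python
import numpy as np

def should_encode(mem_x, mem_y, x, k=5, fam_thresh=0.5, conflict_thresh=0.2):
    """Encode a new instance only if it is unfamiliar or the retrieved sample conflicts."""
    if len(mem_x) < k:
        return True
    dists = np.linalg.norm(np.asarray(mem_x) - x, axis=1)
    nearest = np.argsort(dists)[:k]                        # retrieved memory sample
    familiarity = 1.0 / (1.0 + dists[nearest].mean())      # hypothetical measure
    labels = np.asarray(mem_y)[nearest]
    conflict = 1.0 - np.bincount(labels).max() / k         # disagreement in the sample
    return familiarity < fam_thresh or conflict > conflict_thresh
```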

Keywords: Artificial Neural Networks, ANNs, representation, memory, conflict monitoring, confidence.

437 High Level Synthesis of Canny Edge Detection Algorithm on Zynq Platform

Authors: Hanaa M. Abdelgawad, Mona Safar, Ayman M. Wahba

Abstract:

Real-time image and video processing is a demand in many computer vision applications, e.g. video surveillance, traffic management, and medical imaging. The processing of those video applications requires high computational power; thus, the optimal solution is the collaboration of the CPU and hardware accelerators. In this paper, a Canny edge detection hardware accelerator is proposed. Edge detection is one of the basic building blocks of video and image processing applications and a common block in the pre-processing phase of the image and video processing pipeline. Our approach offloads the Canny edge detection algorithm from the processing system (PS) to programmable logic (PL), taking advantage of the High Level Synthesis (HLS) tool flow to accelerate the implementation on the Zynq platform. The resulting implementation enables up to a 100x performance improvement through hardware acceleration: CPU utilization drops and the frame rate reaches 60 fps on a 1080p full HD input video stream.

Keywords: High Level Synthesis, Canny edge detection, hardware accelerators, computer vision.

436 Simulation of Reflection Loss for Carbon and Nickel-Carbon Thin Films

Authors: M. Emami, R. Tarighi, R. Goodarzi

Abstract:

Maximal radar wave absorption cannot be achieved by shaping alone; we also have to focus on the parameters of the absorbing materials, such as permittivity, permeability, and thickness, so that the best absorption for a given requirement can be obtained. The real and imaginary parts of the relative complex permittivity (εr' and εr") and permeability (µr' and µr") were obtained by simulation. The microwave absorbing properties of carbon and Ni(C) are simulated in this study with MATLAB software; the simulation covers the frequency range of 2 to 12 GHz for carbon black (C) and carbon-coated nickel (Ni(C)) with different thicknesses. In effect, we plot reflection loss (RL) versus frequency for C and Ni(C). We compare their absorption at 3 mm thickness and predict it for other thicknesses using electromagnetic wave transmission theory. The results show that the reflection loss peak shifts to lower frequencies with increasing thickness. We found that, in all cases, using nanocomposites as the absorber does not give better results than pure nanoparticles; the frequency where absorption is maximum can determine the best choice between nanocomposites and pure nanoparticles. We could also find an optimal thickness for long-wavelength absorption, for use in protective shields and coverings.
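
The reflection loss of a single-layer, metal-backed absorber follows from standard transmission-line theory; a sketch is below, with the complex εr and µr supplied by the user (the values in the last line are placeholders, not the paper's measured data).

```python
import numpy as np

def reflection_loss_db(f, d, eps_r, mu_r, c=3e8):
    """RL(dB) for a metal-backed absorber layer: f in Hz, thickness d in m."""
    z_in = np.sqrt(mu_r / eps_r) * np.tanh(
        1j * 2 * np.pi * f * d / c * np.sqrt(mu_r * eps_r)
    )                                          # input impedance normalized to free space
    gamma = (z_in - 1) / (z_in + 1)            # reflection coefficient at the surface
    return 20 * np.log10(np.abs(gamma))

f = np.linspace(2e9, 12e9, 500)                # 2-12 GHz sweep, as in the study
rl = reflection_loss_db(f, 3e-3, 10 - 2j, 1.5 - 0.8j)   # placeholder eps_r, mu_r
```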

Keywords: Absorbing, carbon, carbon nickel, frequency, thicknesses.

435 Efficient Tuning Parameter Selection by Cross-Validated Score in High Dimensional Models

Authors: Yoonsuh Jung

Abstract:

As DNA microarray data contain a relatively small number of samples compared to the number of genes, high dimensional models are often employed. In high dimensional models, the selection of the tuning parameter (or penalty parameter) is often one of the crucial parts of the modeling. Cross-validation is one of the most common methods for tuning parameter selection: it selects the parameter value with the smallest cross-validated score. However, selecting a single value as the 'optimal' value for the parameter can be very unstable due to sampling variation, since the sample sizes of microarray data are often small. Our approach is to choose multiple candidates for the tuning parameter first, and then average the candidates with different weights depending on their performance. The additional step of estimating the weights and averaging the candidates rarely increases the computational cost, while it can considerably improve on traditional cross-validation. We show that the value selected by the suggested methods often leads to stable parameter selection as well as improved detection of significant genetic variables compared to traditional cross-validation, on both real and simulated data sets.
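
A sketch of the averaging idea using the lasso as the high-dimensional model: instead of committing to the single alpha with the best cross-validated score, the top candidates are averaged with weights based on their CV error. The inverse-error weighting here is an illustrative choice, not necessarily the authors' estimator.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_score

def averaged_penalty(X, y, alphas, k_top=5):
    """Weighted average of the best k tuning-parameter candidates."""
    cv_err = np.array([
        -cross_val_score(Lasso(alpha=a, max_iter=10000), X, y,
                         cv=5, scoring="neg_mean_squared_error").mean()
        for a in alphas
    ])
    top = np.argsort(cv_err)[:k_top]           # candidate set, not a single winner
    w = 1.0 / cv_err[top]                      # illustrative performance weights
    return float(np.sum(w * np.asarray(alphas)[top]) / w.sum())
```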

Keywords: Cross Validation, Parameter Averaging, Parameter Selection, Regularization Parameter Search.

434 Adapting the Chemical Reaction Optimization Algorithm to the Printed Circuit Board Drilling Problem

Authors: Taisir Eldos, Aws Kanan, Waleed Nazih, Ahmad Khatatbih

Abstract:

Chemical Reaction Optimization (CRO) is an optimization metaheuristic inspired by the nature of chemical reactions as a natural process of transforming substances from unstable to stable states. Starting with some unstable molecules with excessive energy, a sequence of interactions takes the set to a state of minimum energy. Researchers have reported successful application of the algorithm to some engineering problems, such as the quadratic assignment problem, with superior performance compared with other optimization algorithms. We adapted this optimization algorithm to the Printed Circuit Board Drilling Problem (PCBDP) towards reducing the drilling time and hence improving the PCB manufacturing throughput. Although the PCBDP can be viewed as an instance of the popular Traveling Salesman Problem (TSP), it has some characteristics that require special attention to the operations that explore the solution landscape. Experimental test results using the standard CROToolBox are not promising for practically sized problems, although it could find optimal solutions for artificial problems and small benchmarks as a proof of concept.

Keywords: Evolutionary Algorithms, Chemical Reaction Optimization, Traveling Salesman, Board Drilling.

433 Improvement of Lipase Catalytic Properties by Immobilization in Hybrid Matrices

Authors: C. Zarcula, R. Croitoru, L. Corîci, C. Csunderlik, F. Peter

Abstract:

Lipases are enzymes particularly amenable to immobilization by entrapment methods, as they can work equally well in aqueous or non-conventional media, and long-term stability of enzyme activity and enantioselectivity is needed to develop more efficient bioprocesses. The improvement of Pseudomonas fluorescens (Amano AK) lipase characteristics was investigated by optimizing the immobilization procedure in hybrid organic-inorganic matrices using ionic liquids as additives. Ionic liquids containing a more hydrophobic alkyl group in the cationic moiety are beneficial for the activity of the immobilized lipase. Silanes with alkyl or aryl non-hydrolyzable groups, used as precursors in combination with tetramethoxysilane, could generate composites with higher enantioselectivity than the native enzyme in acylation reactions of secondary alcohols. The optimal effect on both activity and enantioselectivity was achieved for the composite made from octyltrimethoxysilane and tetramethoxysilane at a 1:1 molar ratio (a 60% increase of total activity following immobilization and an enantiomeric ratio of 30). Ionic liquids also demonstrated valuable properties as reaction media for the studied reactions, comparable with the usual organic solvent, hexane.

Keywords: Ionic liquids, lipase, enantioselectivity, sol-gel immobilization.

432 On the Variability of Tool Wear and Life at Disparate Operating Parameters

Authors: S. E. Oraby, A.M. Alaskari

Abstract:

The stochastic nature of tool life estimated from conventional discrete wear data in experimental tests is usually due to many individual and interacting parameters. It is common practice in batch production to continually use the same tool to machine different parts, using disparate machining parameters. In such an environment, the optimal points at which tools have to be changed, while achieving minimum production cost and maximum production rate within the surface roughness specifications, have not been adequately studied. In the current study, two relevant aspects are investigated using coated and uncoated inserts in turning operations: (i) the accuracy of using machinability information from fixed-parameter testing procedures when variable-parameter situations emerge, and (ii) the credibility of tool life machinability data from prior discrete testing procedures in non-stop machining. A novel technique is proposed and verified to normalize the conventional fixed-parameter machinability data to suit cases where parameters have to be changed for the same tool. An experimental investigation has also been carried out to evaluate the error in the tool life assessment when machinability from discrete testing procedures is employed in uninterrupted practical machining.

Keywords: Machinability, tool life, tool wear, wear variability.

431 The Excess Loop Delay Calibration in a Bandpass Continuous-Time Delta Sigma Modulators Based on Q-Enhanced LC Filter

Authors: Sorore Benabid

Abstract:

Q-enhanced LC filters are the most used architecture in Bandpass (BP) Continuous-Time (CT) Delta-Sigma (ΣΔ) modulators due to their high-frequency operation, higher linearity than active filters, and the high quality factor obtained by the Q-enhancement technique. This technique consists of using a negative resistance that compensates for the ohmic losses in the on-chip inductor. However, it introduces a zero in the filter transfer function which affects the modulator performance in terms of Dynamic Range (DR), stability, and in-band noise (Signal-to-Noise Ratio (SNR)). In this paper, we study the effect of this zero and demonstrate that a calibration of the excess loop delay (ELD) is required to ensure the best performance of the modulator. System-level simulations are done for a 2nd-order BP CT ΣΔ modulator at a center frequency of 300 MHz. Simulation results indicate that the optimal ELD should be reduced by 13% to achieve the maximum SNR and DR compared to the ideal LC-based ΣΔ modulator.

Keywords: Continuous-time bandpass delta-sigma modulators, excess loop delay, on-chip inductor, Q-enhanced LC filter.

430 Robust Sensorless Speed Control of Induction Motor with DTFC and Fuzzy Speed Regulator

Authors: Jagadish H. Pujar, S. F. Kodad

Abstract:

Recent developments in soft computing techniques, power electronic switches, and low-cost computational hardware have made it possible to design and implement sophisticated control strategies for the sensorless speed control of AC motor drives. Such an attempt has been made in this work for the sensorless speed control of an Induction Motor (IM) by means of Direct Torque Fuzzy Control (DTFC), a PI-type fuzzy speed regulator, and an MRAS speed estimator, a strategy which is absolutely nonlinear in nature. Direct torque control is known to produce a quick and robust response in AC drive systems; however, during steady state, torque, flux, and current ripples occur. The performance of conventional DTC with a PI speed regulator can therefore be improved by implementing fuzzy logic techniques. Certain important design issues, including the space vector modulated (SVM) 3-Ф voltage source inverter, the DTFC design, the generation of the reference torque using a PI-type fuzzy speed regulator, and the sensorless speed estimator, have been resolved. The proposed scheme is validated through extensive numerical simulations in MATLAB. The simulation results indicate that the sensorless speed control of the IM with DTFC and a PI-type fuzzy speed regulator provides satisfactory dynamic and static performance compared to conventional DTC with a PI speed regulator.

Keywords: Sensorless speed estimator, Fuzzy Logic Control (FLC), SVM, DTC, DTFC, IM, fuzzy speed regulator.

429 Effects of Damper Locations and Base Isolators on Seismic Response of a Building Frame

Authors: Azin Shakibabarough, Mojtaba Valinejadshoubi, Ashutosh Bagchi

Abstract:

Structural vibration is repetitive motion that causes fatigue and reduces the performance of a structure. An earthquake may release a high amount of energy that can have an adverse effect on all components of a structure. Therefore, decreasing vibration and maintaining the performance of structures such as bridges, dams, roads, and buildings is important for life safety and for reducing economic loss. When an earthquake or any vibration occurs, investigating the parts of a structure that sustain the seismic loads is mandatory to provide a safe condition for the occupants. One of the solutions for reducing earthquake vibration in a structure is the use of vibration control devices such as dampers and base isolators. The objective of this study is to investigate the optimal positions of friction dampers and base isolators for a better seismic response of a 2D frame. For this purpose, a two-bay, six-story frame with different damper distribution formats was modeled, and responses to earthquake such as inter-story drift, maximum joint displacement, maximum axial force, and maximum bending moment were determined and compared using nonlinear dynamic analysis.

Keywords: Fast nonlinear analysis, friction damper, base isolator, seismic vibration control, seismic response.

428 Animation of Objects on the Website by Application of CSS3 Language

Authors: Vladimir Simovic, Matija Varga, Robert Svetlacic

Abstract:

This work analytically explores and demonstrates techniques for animating objects and geometric characters in the CSS3 language by applying proper formatting and positioning of elements. It presents examples of the optimal application of the CSS3 descriptive language when generating general web animations (e.g., billiards and the movement of geometric characters), as well as the optimal development and animation design with the frames within which the animated objects reside. The originally developed content is based on upgrading existing CSS3 descriptive language animations with more complex syntax and project-oriented work. The purpose of the developed animations is to provide an overview of the interactive features of CSS3 descriptive language design for computer games and for the animation of important analytical data in the web view. It is analytically demonstrated that CSS3, as a descriptive language, allows the insertion of various multimedia elements into websites for public and internal sites.

Keywords: Animation recording, web page graphics, HTML5 forms, Cascading Style Sheets 3 - CSS3, man-computer interaction, KML animation presenting format, GML, Google Earth Professional.

427 Instability of Ties in Compression

Authors: T. Cornelius

Abstract:

Masonry cavity walls are loaded by wind pressure and vertical load from upper floors. These loads result in bending moments and compression forces in the ties connecting the outer and the inner wall of a cavity wall. Large cavity walls are furthermore loaded by differential movements from the temperature gradient between the outer and the inner wall, which result in a critical increase of the bending moments in the ties. Since the ties are loaded by combined compression and moment forces, the load-bearing capacity is derived from instability equilibrium equations. Most of these are iterative, since exact instability solutions are complex to derive, not to mention the extra complexity introduced by the dimensional instability from the temperature gradients. Using an inverse variable substitution and comparing an exact theory with an analytical instability solution, a method to design tie connectors in cavity walls was developed. The method takes into account constraint conditions limiting the free length of the wall tie, and the instability in the case of pure compression, which gives an optimal load-bearing capacity. The model is illustrated with examples from practice.

Keywords: Masonry, tie connectors, cavity wall, instability, differential movements, combined bending and compression.

426 Pectoral Muscles Suppression in Digital Mammograms Using Hybridization of Soft Computing Methods

Authors: I. Laurence Aroquiaraj, K. Thangavel

Abstract:

Breast region segmentation is an essential prerequisite in the computerized analysis of mammograms. It aims at separating the breast tissue from the background of the mammogram and includes two independent segmentations: the first segments the background region, which usually contains annotations, labels, and frames, from the whole breast region, while the second removes the pectoral muscle portion (present in Medio-Lateral Oblique (MLO) views) from the rest of the breast tissue. In this paper we propose a hybridization of Connected Component Labeling (CCL), fuzzy, and straight line methods. The proposed methods work well for separating the pectoral region. After removal of the pectoral muscle from the mammogram, further processing is confined to the breast region alone. To demonstrate the validity of our segmentation algorithm, it is extensively tested using the 322 mammographic images from the Mammographic Image Analysis Society (MIAS) database. The segmentation results were evaluated using the Mean Absolute Error (MAE), Hausdorff Distance (HD), Probabilistic Rand Index (PRI), Local Consistency Error (LCE), and Tanimoto Coefficient (TC). The hybridization of the fuzzy and straight line methods gives more than 96% of the curve segmentations as adequate or better. In addition, a comparison with similar approaches from the state of the art has been given, obtaining slightly improved results. Experimental results demonstrate the effectiveness of the proposed approach.
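
The CCL step can be sketched with scipy's labeling: after thresholding, the largest connected foreground component is kept as the breast region, which discards annotation and label artifacts. This is a generic building block under that assumption, not the authors' full hybrid pipeline.

```python
import numpy as np
from scipy import ndimage

def largest_component(binary_img):
    """Keep only the largest connected component of a binary mammogram mask."""
    labeled, n = ndimage.label(binary_img)
    if n == 0:
        return binary_img
    sizes = ndimage.sum(binary_img, labeled, index=range(1, n + 1))
    return labeled == (np.argmax(sizes) + 1)   # breast region mask
```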

Keywords: X-ray Mammography, CCL, Fuzzy, Straight line.

425 Applying Energy Consumption Schedule and Comparing It with Load Shifting Technique in Residential Load

Authors: Amira M. Attia, Karim H. Youssef, Nabil H. Abbasy

Abstract:

The energy consumption schedule (ECS) technique shifts load usage away from on-peak hours and redistributes it throughout the day according to residents' operating time preferences. This technique is used as a form of indirect control from the utility to improve the load curve, and hence its load factor, and to reduce the customer's total electric bill as well. Similarly, the load shifting technique achieves the purposes of ECS, but as a form of direct control applied by the utility. In this paper, ECS is simulated twice as an optimal constrained mathematical formulation, solved using the CVX package in MATLAB R2013b. First, it is utilized for a single residential building with ten apartments to determine the maximum allowable energy consumption per hour for each apartment. Then, it is used for a single apartment with a number of shiftable domestic devices, where the operating schedule is deduced using the previous simulation output as constraints. The paper ends by giving the differences between the ECS and load shifting techniques via literature and simulation. Based on an assessment of the results, it is shown whether ECS or load shifting is more beneficial to both the customer and the utility.
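
A minimal sketch of a constrained ECS formulation in cvxpy (the Python counterpart of the CVX package used here): device energy requirements are redistributed over the day while the aggregate hourly peak is minimized. The dimensions, energy requirements, and ratings are placeholders, and the peak-minimizing objective is an illustrative choice, not necessarily the paper's exact formula.

```python
import cvxpy as cp
import numpy as np

T, n_dev = 24, 5                               # hours, shiftable devices (placeholders)
e_req = np.array([3.0, 1.2, 2.0, 0.8, 1.5])    # kWh each device must consume per day
p_max = np.array([1.5, 0.6, 1.0, 0.4, 0.9])    # kW rating of each device

x = cp.Variable((n_dev, T), nonneg=True)       # hourly consumption schedule
constraints = [
    cp.sum(x, axis=1) == e_req,                # every device finishes its daily job
    x <= np.repeat(p_max[:, None], T, axis=1), # respect device power ratings
]
prob = cp.Problem(cp.Minimize(cp.max(cp.sum(x, axis=0))), constraints)
prob.solve()                                   # x.value is the ECS for the apartment
```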

Keywords: Energy consumption schedule, load shifting technique, comparison.

424 Scheduling Maintenance Actions for Gas Turbines Aircraft Engines

Authors: Anis Gharbi

Abstract:

This paper considers the problem of scheduling maintenance actions for identical aircraft gas turbine engines. Each of the turbines consists of parts which frequently require replacement. A finite inventory of spare parts is available and all parts are ready for replacement at any time. The inventory consists of both new and refurbished parts; hence, these parts have different field lives. The goal is to find a replacement-part sequencing that maximizes the time that the aircraft will keep functioning before the inventory is replenished. The problem is formulated as an identical parallel machine scheduling problem where the minimum completion time has to be maximized. Two models have been developed. The first one is an optimization model based on a 0-1 linear programming formulation, while the second one is an approximate procedure which consists of decomposing the problem into several two-machine subproblems, each optimally solved using the first model. Both models have been implemented using Lingo and tested on two sets of randomly generated data with up to 150 parts and 10 turbines. Experimental results show that the optimization model is able to solve only instances with no more than 4 turbines, while the decomposition procedure often provides near-optimal solutions within a maximum CPU time of 3 seconds.
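
A sketch of the 0-1 formulation as a machine-covering model in PuLP: a binary x[i][j] assigns part i to turbine j, and the minimum total field life over turbines, z, is maximized. This simplified assignment view and the part lives are placeholders, not the authors' exact Lingo model.

```python
from pulp import LpProblem, LpVariable, LpMaximize, LpBinary, lpSum

life = [40, 35, 30, 25, 20, 15]                # placeholder field lives of spare parts
turbines = range(2)

prob = LpProblem("max_min_completion_time", LpMaximize)
x = LpVariable.dicts("x", (range(len(life)), turbines), cat=LpBinary)
z = LpVariable("z", lowBound=0)
prob += z                                      # objective: maximize the minimum
for i in range(len(life)):
    prob += lpSum(x[i][j] for j in turbines) == 1          # each part used once
for j in turbines:
    prob += lpSum(life[i] * x[i][j] for i in range(len(life))) >= z
prob.solve()
```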

Keywords: Aircraft turbines, Scheduling, Identical parallel machines, 0-1 linear programming, Heuristic.

423 Detecting HCC Tumor in Three Phasic CT Liver Images with Optimization of Neural Network

Authors: Mahdieh Khalilinezhad, Silvana Dellepiane, Gianni Vernazza

Abstract:

The aim of this work is to build a model based on tissue characterization that is able to discriminate pathological and non-pathological regions in three-phasic CT images. Based on feature selection in the different phases, we design a neural network system with an optimal number of neurons in the hidden layer. Our approach consists of three steps: feature selection, feature reduction, and classification. For each region of interest (ROI), six distinct sets of texture features are extracted: first-order histogram parameters, absolute gradient, run-length matrix, co-occurrence matrix, autoregressive model, and wavelet, for a total of 270 texture features. Analyzing the successive phases, we show that the injected liquid causes changes in the highly relevant features of each region. Our results demonstrate that, for detecting HCC tumors, phase 3 is the best one for most of the features that we apply to the classification algorithm. The detection accuracy between the pathological and healthy classes, using first-order histogram parameters, is 85% in phase 1, 95% in phase 2, and 95% in phase 3.
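
One of the six feature families, the co-occurrence matrix, can be sketched with scikit-image; the distances, angles, and properties chosen here are illustrative defaults, not the study's exact 270-feature configuration.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def cooccurrence_features(roi):
    """Texture features from the gray-level co-occurrence matrix of an 8-bit ROI."""
    glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return {p: graycoprops(glcm, p).mean()      # average over distances and angles
            for p in ("contrast", "homogeneity", "energy", "correlation")}
```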

Keywords: Feature selection, Multi-phasic liver images, Neural network, Texture analysis.

422 Overview of Multi-Chip Alternatives for 2.5D and 3D Integrated Circuit Packagings

Authors: Ching-Feng Chen, Ching-Chih Tsai

Abstract:

With the size of the transistor gradually approaching the physical limit, the persistence of Moore's Law is challenged by issues such as the short channel effect and the development of high numerical aperture (NA) lithography equipment. In the context of the ever-increasing technical requirements of portable devices and high-performance computing (HPC), relying on the continuation of the law to enhance chip density will no longer support the prospects of the electronics industry. Weighing the chip's power consumption-performance-area-cost-cycle time to market (PPACC) is an updated benchmark to drive the evolution of advanced wafer nanometer (nm) technology. The advent of two-and-a-half- and three-dimensional (2.5D and 3D) Very-Large-Scale Integration (VLSI) packaging based on Through Silicon Via (TSV) technology has updated traditional die assembly methods and provided a solution. This overview investigates up-to-date and cutting-edge packaging technologies for 2.5D and 3D integrated circuits (ICs) based on the updated transistor structures and technology nodes. We conclude that multi-chip solutions for 2.5D and 3D IC packaging can prolong Moore's Law.

Keywords: Moore’s Law, High Numerical Aperture, Power Consumption-Performance-Area-Cost-Cycle Time to Market, PPACC, 2.5 and 3D-Very-Large-Scale Integration Packaging, Through Silicon Via.

421 An Experimental Study of Downstream Structures on the Flow-Induced Vibrations Energy Harvester Performances

Authors: Pakorn Uttayopas, Chawalit Kittichaikarn

Abstract:

This paper presents an experimental investigation of the characteristics of an energy harvesting device exploiting flow-induced vibration in a wind tunnel. A stationary bluff body is connected to a downstream tip body via an aluminium cantilever beam. Various lengths of the aluminium cantilever beam and different shapes of the downstream tip body are considered. The results show that the characteristics of the energy harvester's vibration depend on both the length of the aluminium cantilever beam and the shape of the downstream tip body. The highest ratio between vibration amplitude and bluff body diameter was found to be 1.39, for an energy harvester with a symmetrical triangular tip body and L/D1 = 5 at a flow speed of 9.8 m/s (Re = 20077). Using this configuration, electrical energy was extracted with a polyvinylidene fluoride (PVDF) piezoelectric beam with different load resistances, whose optimal value could be found for each Reynolds number. The highest power output was found to be 3.19 µW, at a flow speed of 9.8 m/s (Re = 20077) and a load resistance of 27 MΩ.

Keywords: Downstream structures, energy harvesting, flow-induced vibration, piezoelectric material, wind tunnel.

420 Comparative Study on Swarm Intelligence Techniques for Biclustering of Microarray Gene Expression Data

Authors: R. Balamurugan, A. M. Natarajan, K. Premalatha

Abstract:

Microarray gene expression data play a vital role in understanding biological processes, gene regulation, and disease mechanisms. A bicluster in gene expression data is a subset of the genes showing consistent patterns under a subset of the conditions, and finding a bicluster is an optimization problem. In recent years, swarm intelligence techniques have become popular due to the fact that many real-world problems are increasingly large, complex, and dynamic. By reason of the size and complexity of these problems, it is necessary to find an optimization technique whose efficiency is measured by finding the near-optimal solution within a reasonable amount of time. In this paper, the algorithmic concepts of the Particle Swarm Optimization (PSO), Shuffled Frog Leaping (SFL), and Cuckoo Search (CS) algorithms are analyzed on four benchmark gene expression datasets. The experimental results show that CS outperforms PSO and SFL on three datasets, while SFL gives better performance on one dataset. This work also determines the biological relevance of the biclusters with Gene Ontology in terms of function, process, and component.
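
The fitness commonly minimized in such biclustering searches is Cheng and Church's mean squared residue, which scores how coherent a gene-by-condition sub-matrix is; a sketch is below, with the caveat that the abstract does not spell out the exact fitness used by the three algorithms.

```python
import numpy as np

def mean_squared_residue(A, rows, cols):
    """Cheng-Church MSR of the bicluster A[rows, cols] (lower means more coherent)."""
    sub = A[np.ix_(rows, cols)]
    resid = (sub
             - sub.mean(axis=1, keepdims=True)   # remove gene (row) means
             - sub.mean(axis=0, keepdims=True)   # remove condition (column) means
             + sub.mean())                       # add back the overall mean
    return float(np.mean(resid ** 2))
```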

Keywords: Particle swarm optimization, Shuffled frog leaping, Cuckoo search, biclustering, gene expression data.
