Search results for: Ashok Gupta
52 Predictive Models for Compressive Strength of High Performance Fly Ash Cement Concrete for Pavements
Authors: S. M. Gupta, Vanita Aggarwal, Som Nath Sachdeva
Abstract:
This paper reports an experimental study on superplasticized High Performance Concrete (HPC) aimed at developing models for predicting the compressive strength of HPC mixes. The effect of varying fly ash proportions (0% to 50% in 10% increments) on the compressive strength of HPC has been evaluated. The mix designs studied were M30, M40 and M50, so that the effect of fly ash addition on these grades could be compared. Of the eighteen concrete mixes designed, three were conventional concretes for the three grades under discussion and fifteen were HPC mixes with varying percentages of fly ash. Mix design followed Indian standard recommended guidelines. All mixes were tested for compressive strength at 7, 28, 90 and 365 days, and all materials were kept the same throughout the study to allow a direct comparison of results. The strength prediction models have been developed using Linear Regression (LR), Artificial Neural Networks (ANN) and Leave-One-Out Validation (LOOV).
Keywords: ANN, concrete mixes, compressive strength, fly ash, high performance concrete, linear regression, strength prediction models.
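A minimal sketch, not the authors' code, of how leave-one-out validation of a linear-regression strength model could be set up; the CSV file name and the feature columns (cement, fly_ash_pct, w_c_ratio, age_days) are hypothetical placeholders.

```python
# Leave-one-out validation of a linear regression predicting compressive
# strength. Data file and column names are illustrative assumptions.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import mean_absolute_error

data = pd.read_csv("hpc_mixes.csv")                      # one row per mix/age
X = data[["cement", "fly_ash_pct", "w_c_ratio", "age_days"]]
y = data["compressive_strength_mpa"]

model = LinearRegression()
pred = cross_val_predict(model, X, y, cv=LeaveOneOut())  # LOO predictions
print("Mean absolute error (LOO):", mean_absolute_error(y, pred))
```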
51 Reducing Test Vectors Count Using Fault Based Optimization Schemes in VLSI Testing
Authors: Vinod Kumar Khera, R. K. Sharma, A. K. Gupta
Abstract:
Power dissipation increases exponentially in test mode compared with normal operation of the circuit; in extreme cases, test power is more than twice the power consumed in normal operation mode. The test vector generation scheme is a key factor in how power-hungry a circuit is during testing, since test vector count and the consequent leakage current are both functions of that scheme. This work presents a fault-based optimization of test vector count, which helps reduce both the number of test vectors and the leakage current. In the presented scheme, test vectors are reduced by extracting essential child vectors. The scheme has been tested experimentally using stuck-at fault models, and the results confirm the reduction in test vector count.
Keywords: Low power VLSI testing, independent fault, essential faults, test vector reduction.
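An illustrative sketch only (not the paper's exact scheme): one common way to shrink a test set is a greedy selection over a stuck-at fault coverage table, keeping only vectors that detect still-uncovered faults. The vectors and faults below are made up.

```python
# Greedy reduction of a test set from a fault-coverage table.
# "coverage" maps each test vector to the set of faults it detects.
def reduce_test_set(coverage):
    remaining = set().union(*coverage.values())   # all detectable faults
    selected = []
    while remaining:
        # pick the vector covering the most still-undetected faults
        best = max(coverage, key=lambda v: len(coverage[v] & remaining))
        if not coverage[best] & remaining:
            break
        selected.append(best)
        remaining -= coverage[best]
    return selected

coverage = {
    "v1": {"f1", "f2", "f3"},
    "v2": {"f3", "f4"},
    "v3": {"f2"},            # redundant: f2 is already covered by v1
    "v4": {"f4", "f5"},
}
print(reduce_test_set(coverage))   # -> ['v1', 'v4'], a smaller test set
```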
50 Fuzzy Logic Controller Based Shunt Active Filter with Different MFs for Current Harmonics Elimination
Authors: Shreyash Sinai Kunde, Siddhang Tendulkar, Shiv Prakash Gupta, Gaurav Kumar, Suresh Mikkili
Abstract:
One of the major power quality concerns in modern times is the problem of current harmonics, caused by the growing share of non-linear loads, which are largely dominated by power electronics devices. Shunt active filtering is one of the best solutions for mitigating current harmonics. This paper describes a fuzzy logic controller (FLC) based three-phase shunt active filter that achieves low total harmonic distortion (THD) of the current and provides reactive power compensation. The performance of the fuzzy logic controller is analysed under both balanced and unbalanced sinusoidal source conditions. The controller also serves to keep the DC capacitor voltage constant. The proposed shunt active filter uses a hysteresis current controller for current control of the IGBT-based PWM inverter. Simulation of the model in MATLAB/Simulink gives satisfactory results.
Keywords: Shunt active filter, Current harmonics, Fuzzy logic controller, Hysteresis current controller.
49 Langmuir–Blodgett Films of Polyaniline for Efficient Detection of Uric Acid
Authors: Kashima Arora, Monika Tomar, Vinay Gupta
Abstract:
Langmuir–Blodgett (LB) films of polyaniline (PANI) grown onto ITO-coated glass substrates were utilized to fabricate a uric acid biosensor for efficient detection of uric acid, with uricase immobilized via EDC–NHS coupling. The modified electrodes were characterized by atomic force microscopy (AFM). The response characteristics after immobilization of uricase were studied using cyclic voltammetry (CV) and electrochemical impedance spectroscopy (EIS). The uricase/PANI/ITO/glass bioelectrode studied by CV and EIS detects uric acid over a wide range of 0.05 mM to 1.0 mM, covering the physiological range in blood. A low Michaelis–Menten constant (Km) of 0.21 mM indicates the high affinity of the immobilized uricase for its analyte (uric acid). The fabricated uric acid biosensor based on PANI LB films exhibits an excellent sensitivity of 0.21 mA/mM with a response time of 4 s, good reproducibility, a long shelf life (8 weeks) and high selectivity.
Keywords: Uric acid, biosensor, PANI, Langmuir–Blodgett film deposition.
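A brief sketch of how an apparent Michaelis–Menten constant like the quoted Km of 0.21 mM could be extracted from biosensor calibration data; the concentration/current points below are made up for illustration, not the authors' measurements.

```python
# Fit i = i_max * C / (Km + C) to hypothetical calibration points.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(c, i_max, km):
    return i_max * c / (km + c)

conc_mM = np.array([0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0])     # assumed data
current_mA = np.array([0.04, 0.07, 0.11, 0.14, 0.16, 0.17, 0.18])

(i_max, km), _ = curve_fit(michaelis_menten, conc_mM, current_mA, p0=[0.2, 0.2])
print(f"apparent Km = {km:.2f} mM, i_max = {i_max:.2f} mA")
```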
48 A Study on Mode of Collapse of Metallic Shells Having Combined Tube-Frusta Geometry Subjected to Axial Compression
Authors: P. K. Gupta
Abstract:
The present paper deals with the experimental and computational study of the axial collapse, between two parallel plates, of aluminium shells having a combined tube-frusta geometry. The bottom two-thirds of each shell's length was frusta and the remaining top one-third was tube. The shells were compressed to identify their modes of collapse and the associated energy absorption capability. An axisymmetric finite element model of the collapse process is presented and analysed using the non-linear FE code FORGE2. Six-noded isoparametric triangular elements were used to discretize the deforming shell, and the shell material was idealized as rigid visco-plastic. To validate the computational model, experimental and computed deformed shapes and their corresponding load-compression and energy-compression curves were compared. With the help of the obtained results, the progress of the axisymmetric mode of collapse is presented, analysed and discussed.
Keywords: Axial compression, crashworthiness, energy absorption, FORGE2, metallic shells.
47 Study of Reporting System for Adverse Events Related to Common Medical Devices at a Tertiary Care Public Sector Hospital in India
Authors: S. Kurien, S. Satpathy, S. K. Gupta, S. K. Arya, D. K. Sharma
Abstract:
Advances in the use of health care technology have resulted in increased adverse events (AEs) related to the use of medical devices. The study focused on the existing reporting systems and was conducted in a tertiary care public sector hospital. The devices included syringe infusion pumps, cardiac monitors, pulse oximeters, ventilators and defibrillators. A total of 211 respondents were recruited, interviews were held with 30 key informants, and medical records were scrutinized; relevant statistical tests were used. Resident doctors reported the highest frequency of AEs, followed by nurses, with consultants reporting the least. A significant association was found between the cadre of health care personnel and awareness that patients and bystanders are at risk of sustaining an AE. Awareness regarding reporting of AEs was low, and reporting was generally done verbally. Other critical findings are discussed in the light of barriers to reporting, reasons for non-compliance, the recording system, and so on.
Keywords: Adverse events, health care technology, public sector hospital, reporting systems.
46 The Effect of Blockage Factor on Savonius Hydrokinetic Turbine Performance
Authors: Thochi Seb Rengma, Mahendra Kumar Gupta, P. M. V. Subbarao
Abstract:
Hydrokinetic turbines can be used to produce power in inaccessible villages located near rivers. A hydrokinetic turbine uses the kinetic energy of the water and may be placed directly in the natural flow without dams. For off-grid power production, the Savonius-type vertical axis turbine is the easiest to design and manufacture. This work uses three-dimensional Computational Fluid Dynamics (CFD) simulations to capture the considerable interaction and complexity of the turbine blades. Savonius hydrokinetic turbine (SHKT) performance is affected by blockage in rivers, canals and waterways: placing a large object in a water channel obstructs the flow and raises the local free-stream velocity. The blockage correction factor, or velocity increment, measures the impact of this velocity change on performance. SHKT performance is evaluated by comparing the power coefficient (Cp) with the tip-speed ratio (TSR) at various blockage ratios. The maximum Cp was obtained at a TSR of 1.1 with a blockage ratio of 45%, whereas a TSR of 0.8 yielded the highest Cp without blockage. The greatest Cp of 0.29 was obtained with a 45% blockage ratio, compared with a maximum Cp of 0.18 without blockage.
Keywords: Savonius hydrokinetic turbine, blockage ratio, vertical axis turbine, power coefficient.
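A small sketch of how TSR and Cp are computed, with a crude continuity-based velocity increment standing in for the blockage correction. The rotor dimensions, torque, speed and the correction formula are illustrative assumptions, not values taken from the paper.

```python
# Tip-speed ratio and power coefficient for a Savonius rotor in a channel.
import math  # (not strictly needed here, kept for further extensions)

rho = 998.0        # water density, kg/m^3
v_free = 0.8       # undisturbed free-stream velocity, m/s (assumed)
rotor_d = 0.25     # rotor diameter, m (assumed)
rotor_h = 0.25     # rotor height, m (assumed)
omega = 12.8       # rotor speed, rad/s (assumed)
torque = 2.2       # rotor torque, N*m (assumed)
blockage = 0.45    # blockage ratio = rotor frontal area / channel area

v_local = v_free / (1.0 - blockage)          # crude velocity increment
area = rotor_d * rotor_h                     # frontal (swept) area
tsr = omega * (rotor_d / 2.0) / v_local
cp = (torque * omega) / (0.5 * rho * area * v_local ** 3)
print(f"TSR = {tsr:.2f}, Cp = {cp:.2f}")
```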
45 Seismic Analysis of a S-Curved Viaduct using Stick and Finite Element Models
Authors: Sourabh Agrawal, Ashok K. Jain
Abstract:
Stick models are widely used in preliminary studies of the behaviour of straight as well as skew bridges and viaducts subjected to earthquakes. The application of such models to highly curved bridges continues to pose challenging problems. A viaduct proposed in the foothills of the Himalayas in Northern India is chosen for the study. It has eight simply supported spans at 30 m c/c, is doubly curved in the horizontal plane with a 20 m radius, and is inclined in the vertical plane as well. The superstructure consists of a box section. Three models have been used: a conventional stick model, an improved stick model and a 3D finite element model. The improved stick model makes use of body constraints in order to study its capabilities. The first eight frequencies differ by about 9.71% between the latter two models; the difference increases to 80% by the 50th mode. The viaduct was subjected to all three components of the El Centro earthquake of May 1940, and the numerical integration was carried out using the Hilber-Hughes-Taylor method as implemented in SAP2000. Axial forces and moments in the bridge piers as well as lateral displacements at the bearing levels are compared for the three models. The maximum difference in the axial forces, bending moments and displacements is 25% between the improved stick and finite element models, whereas the maximum difference in the axial forces, moments and displacements in various sections is 35% between the improved stick model and the equivalent straight stick model; for torsional moment the difference is as high as 75%. It is concluded that a stick model with body constraints to model the bearings and expansion joints is not desirable for very sharply S-curved viaducts even for preliminary analysis. Such a model can be used only to determine the first 10 frequencies and mode shapes, not member forces; a 3D finite element analysis must be carried out for meaningful results.
Keywords: Bearing, body constraint, box girder, curved viaduct, expansion joint, finite element, link element, seismic, stick model, time history analysis.
44 Comparison of Three Turbulence Models in Wear Prediction of Multi-Size Particulate Flow through Rotating Channel
Authors: Pankaj K. Gupta, Krishnan V. Pagalthivarthi
Abstract:
The present work compares the performance of three turbulence modeling approaches (based on the two-equation k-ε model) in predicting erosive wear in multi-size dense slurry flow through a rotating channel. All three turbulence models include a rotation modification to the production term in the turbulent kinetic energy equation. The two-phase flow field, obtained numerically using a Galerkin finite element methodology, relates the local flow velocity and concentration to the wear rate via a suitable wear model. The wear models for both sliding wear and impact wear mechanisms account for particle size dependence. Predicted wear rates from the three turbulence models are compared for a large number of cases spanning operating parameters such as rotation rate, solids concentration, flow rate and particle size distribution. The root-mean-square error between the FE-generated data and the correlation between maximum wear rate and the operating parameters is found to be less than 2.5% for all three models.
Keywords: Rotating channel, maximum wear rate, multi-size particulate flow, k-ε turbulence models.
43 Array Signal Processing: DOA Estimation for Missing Sensors
Authors: Lalita Gupta, R. P. Singh
Abstract:
Array signal processing involves signal enumeration and source localization. It is centered on the ability to fuse temporal and spatial information, captured by sampling signals emitted from a number of sources at the sensors of an array, in order to carry out a specific estimation task: estimating source characteristics (mainly localization of the sources) and/or array characteristics (mainly array geometry). Beamforming is a general signal processing technique used to control the directionality of the reception or transmission of a signal; using beamforming, the majority of the received signal energy can be steered towards a chosen direction of the array. Multiple signal classification (MUSIC) is a highly popular eigenstructure-based, high-resolution method for estimating the direction of arrival (DOA). This paper examines the effect of missing sensors on DOA estimation. The accuracy of MUSIC-based DOA estimation is degraded significantly both by missing sensors among the receiving array elements and by unequal channel gain and phase errors of the receiver.
Keywords: Array Signal Processing, Beamforming, ULA, Direction of Arrival, MUSIC
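A compact NumPy sketch of MUSIC on a uniform linear array, with the option to drop elements to mimic missing sensors. The array size, source angles, SNR and snapshot count are illustrative assumptions, not the simulation setup of the paper.

```python
# MUSIC pseudo-spectrum for a ULA, with optional missing elements.
import numpy as np

def music_spectrum(M, doas_deg, snr_db=20, snapshots=200, missing=()):
    d = 0.5                                      # spacing in wavelengths
    doas = np.deg2rad(np.asarray(doas_deg, dtype=float))
    n = np.arange(M)
    A = np.exp(-2j * np.pi * d * np.outer(n, np.sin(doas)))   # steering matrix
    S = (np.random.randn(len(doas), snapshots)
         + 1j * np.random.randn(len(doas), snapshots)) / np.sqrt(2)
    N = (np.random.randn(M, snapshots)
         + 1j * np.random.randn(M, snapshots)) / np.sqrt(2)
    X = A @ S + 10 ** (-snr_db / 20) * N
    keep = [m for m in range(M) if m not in missing]          # drop sensors
    X, n = X[keep], n[keep]
    R = X @ X.conj().T / snapshots                            # sample covariance
    w, V = np.linalg.eigh(R)                                  # ascending eigenvalues
    En = V[:, :len(keep) - len(doas)]                         # noise subspace
    scan = np.deg2rad(np.arange(-90.0, 90.5, 0.5))
    A_scan = np.exp(-2j * np.pi * d * np.outer(n, np.sin(scan)))
    P = 1.0 / np.sum(np.abs(En.conj().T @ A_scan) ** 2, axis=0)
    return np.rad2deg(scan), P

angles, P_full = music_spectrum(M=10, doas_deg=[-20, 15])
angles, P_miss = music_spectrum(M=10, doas_deg=[-20, 15], missing=(3, 7))
print("strongest peak, full array:", angles[np.argmax(P_full)])
print("strongest peak, 2 missing :", angles[np.argmax(P_miss)])
```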
42 A Novel FIFO Design for Data Transfer in Mixed Timing Systems
Authors: Mansi Jhamb, R. K. Sharma, A. K. Gupta
Abstract:
In the current scenario, with increasing integration densities, most system-on-chip designs are partitioned into multiple clock domains. In this paper, an asynchronous FIFO (first-in first-out pipeline) is employed as a data transfer interface between two independent clock domains. Since the clocks on either side of the FIFO run at different speeds, correct data transmission through the FIFO must be ensured explicitly by the designer. First, an existing asynchronous FIFO design is discussed and simulated; gate-level simulation results reveal a flaw in the existing design. To solve this problem, a novel modified asynchronous FIFO design is proposed, whose results are in perfect accordance with theoretical expectations. The proposed asynchronous FIFO design outperforms the existing design in terms of accuracy and speed. To evaluate the performance of the FIFO designs presented in this paper, the circuits were implemented in 0.24 µm TSMC CMOS technology and simulated at 2.5 V using HSpice (Avant! Corporation). The layout of the proposed FIFO is also presented.
Keywords: Asynchronous, Clock, CMOS, C-element, FIFO, Globally Asynchronous Locally Synchronous (GALS), HSpice.
41 Genetic Algorithm Parameters Optimization for Bi-Criteria Multiprocessor Task Scheduling Using Design of Experiments
Authors: Sunita Dhingra, Satinder Bal Gupta, Ranjit Biswas
Abstract:
Multiprocessor task scheduling is an NP-hard problem, and the Genetic Algorithm (GA) has proved to be an excellent technique for finding an optimal solution. Previous GA-based methods for this problem have considered a single criterion; in the present work, the bi-criteria multiprocessor task scheduling problem is minimized, with the objective defined as the weighted sum of makespan and total completion time. The efficiency and effectiveness of a genetic algorithm depend on its parameters, such as the crossover operator, mutation operator, crossover probability and selection function. The effects of the GA parameters on the bi-criteria fitness function, and the subsequent setting of these parameters, have been established using the central composite design (CCD) approach of response surface methodology (RSM) from Design of Experiments. Experiments have been performed at different levels of the GA parameters, and analysis of variance has been performed to identify the parameters significant for simultaneously minimising makespan and total completion time.
Keywords: Multiprocessor task scheduling, Design of experiments, Genetic Algorithm, Makespan, Total completion time.
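A minimal sketch of the kind of bi-criteria fitness described above: a weighted sum of makespan and total completion time evaluated for a chromosome that maps tasks to processors. The task times, weights and chromosome are illustrative, and the GA itself is not shown.

```python
# Weighted-sum bi-criteria fitness for a task-to-processor assignment.
def fitness(chromosome, task_times, n_proc, w1=0.5, w2=0.5):
    finish = [0.0] * n_proc
    completion = []
    for task, proc in enumerate(chromosome):       # tasks taken in list order
        finish[proc] += task_times[task]
        completion.append(finish[proc])
    makespan = max(finish)
    total_completion = sum(completion)
    return w1 * makespan + w2 * total_completion   # value the GA minimises

task_times = [4, 2, 7, 3, 5]
chromosome = [0, 1, 0, 2, 1]        # task i runs on processor chromosome[i]
print(fitness(chromosome, task_times, n_proc=3))   # -> 19.0
```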
40 Statistical Optimization of Medium Components for Biomass Production of Chlorella pyrenoidosa under Autotrophic Conditions and Evaluation of Its Biochemical Composition under Stress Conditions
Authors: N. P. Dhull, K. Gupta, R. Soni, D. K. Rahi, S. K. Soni
Abstract:
The aim of the present work was to statistically design an autotrophic medium for maximum biomass production by Chlorella pyrenoidosa using response surface methodology. After a one-factor-at-a-time evaluation, K2HPO4, KNO3, MgSO4.7H2O and NaHCO3 were identified as the most critical autotrophic components of Fogg's medium. The study showed that maximum biomass yield was achieved at concentrations of MgSO4.7H2O, K2HPO4, KNO3 and NaHCO3 of 0.409 g/L, 0.24 g/L, 1.033 g/L and 3.265 g/L, respectively. The biomass productivity of C. pyrenoidosa improved from 0.14 g/L in the defined Fogg's medium to 1.40 g/L in the modified Fogg's medium, a 10-fold increase. The biochemical composition of C. pyrenoidosa was altered using nitrogen-limiting stress, bringing about a 5.23-fold increase in lipid content over the control (cells without stress), as analyzed by the FTIR integration method.
Keywords: Autotrophic condition, Chlorella pyrenoidosa, FTIR, Response Surface Methodology, Optimization.
39 A High Level Implementation of a High Performance Data Transfer Interface for NoC
Authors: Mansi Jhamb, R. K. Sharma, A. K. Gupta
Abstract:
The distribution of a single global clock across a chip has become a major design bottleneck for high-performance VLSI systems owing to power dissipation, process variability and multi-cycle cross-chip signaling. A Network-on-Chip (NoC) architecture partitioned into several synchronous blocks has become a promising approach for attaining fine-grain power management at the system level. In an NoC architecture the communication between the blocks is handled asynchronously, so an asynchronous FIFO interface is needed to connect blocks operating at different frequencies; such FIFOs are not required when adjacent blocks belong to the same clock domain. In this paper, we have designed and analyzed a 16-bit asynchronous micropipelined FIFO of depth four, with awareness of place and route on an FPGA device. We have used a commercially available Spartan-3 device and designed a high-speed implementation of the asynchronous 4-phase micropipeline. The asynchronous FIFO implemented on the FPGA shows 76 Mb/s throughput and handshake cycles of 109 ns for write and 101.3 ns for read in simulation under worst-case operating conditions (voltage = 0.95 V) at room temperature.
Keywords: Asynchronous, FIFO, FPGA, GALS, Network-on-Chip (NoC), VHDL.
38 Pefloxacin as a Surrogate Marker for Ciprofloxacin Resistance in Salmonella: Study from North India
Authors: Varsha Gupta, Priya Datta, Gursimran Mohi, Jagdish Chander
Abstract:
Fluoroquinolones form the mainstay of therapy for the treatment of infections due to Salmonella enterica subsp. enterica. There is a complex interplay between several quinolone resistance mechanisms and the various fluoroquinolone discs, which give varying results and make detection and interpretation of fluoroquinolone resistance difficult. For detection of fluoroquinolone resistance in Salmonella spp., we compared the use of pefloxacin and nalidixic acid discs as surrogate markers. Using the MIC for ciprofloxacin as the gold standard, 43.5% of strains showed an MIC of ≥1 μg/ml and were thus resistant to fluoroquinolones. Both discs could correctly detect all the resistant phenotypes; however, the nalidixic acid disc showed false resistance in the majority of the sensitive phenotypes. We also tested newer antimicrobial agents such as cefixime, imipenem, tigecycline and azithromycin against Salmonella spp., and observed a comeback of susceptibility to older antimicrobials such as ampicillin, chloramphenicol and cotrimoxazole. Cefixime, imipenem, tigecycline and azithromycin can also be used in the treatment of multidrug-resistant S. typhi owing to the high susceptibility of the isolates to these agents.
Keywords: Pefloxacin, salmonella, surrogate marker.
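A small sketch of how agreement between a surrogate disc result and the MIC gold standard could be summarised as sensitivity and specificity for detecting resistance; the isolate results below are invented, not study data.

```python
# Sensitivity/specificity of a surrogate disc vs. the MIC gold standard.
def surrogate_performance(mic_resistant, disc_resistant):
    pairs = list(zip(mic_resistant, disc_resistant))
    tp = sum(m and d for m, d in pairs)
    tn = sum((not m) and (not d) for m, d in pairs)
    fp = sum((not m) and d for m, d in pairs)
    fn = sum(m and (not d) for m, d in pairs)
    return tp / (tp + fn), tn / (tn + fp)   # (sensitivity, specificity)

# 1 = resistant, 0 = susceptible (illustrative isolates)
mic  = [1, 1, 1, 0, 0, 0, 0, 1]
disc = [1, 1, 1, 0, 1, 0, 0, 1]
print(surrogate_performance(mic, disc))     # -> (1.0, 0.75)
```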
37 Real Time Approach for Data Placement in Wireless Sensor Networks
Authors: Sanjeev Gupta, Mayank Dave
Abstract:
The issue of real-time and reliable report delivery is extremely important for taking effective decisions in real-world, mission-critical Wireless Sensor Network (WSN) based applications. Sensor data behave differently in many ways from data in traditional databases, and WSNs need a mechanism to register, process queries, and disseminate data. In this paper we propose an architectural framework for data placement and management: a reliable, real-time approach for data placement that achieves data integrity using self-organized sensor clusters. Instead of storing information in individual cluster heads, as suggested in some protocols, our architecture stores the information of all clusters within a cell in the corresponding base station. For data dissemination and action in the wireless sensor network we propose to use Action and Relay Stations (ARS). To reduce the average energy dissipation of sensor nodes, data are sent to the nearest ARS rather than to the base station. The architecture is designed to achieve greater energy savings, enhanced availability and reliability.
Keywords: Cluster head, data reliability, real time communication, wireless sensor networks.
36 Traffic Behaviour of VoIP in a Simulated Access Network
Authors: Jishu Das Gupta, Srecko Howard, Angela Howard
Abstract:
Insufficient Quality of Service (QoS) of Voice over Internet Protocol (VoIP) is a growing concern that has led to the need for research and study. In this paper we investigate the performance of VoIP and the impact of resource limitations in Access Networks. The impact on VoIP performance is particularly important in regions where Internet resources are limited and the cost of improving them is prohibitive. Perceived VoIP performance, as measured by the mean opinion score [2] in experiments where subjects are asked to rate communication quality, is determined by the end-to-end delay on the communication path, delay variation, packet loss, echo, the coding algorithm in use and noise. These performance indicators can be measured and their effect in the Access Network estimated. This paper investigates the contribution of congestion in the Access Network to the overall performance of VoIP services in the presence of other substantial uses of the Internet, and ways in which Access Networks can be designed to improve VoIP performance. Methods for analyzing the impact of the Access Network on VoIP performance are surveyed and reviewed. The paper also considers approaches for improving VoIP performance through experiments using Network Simulator version 2 (NS2), with a view to gaining a better understanding of the design of Access Networks.
Keywords: Codec, DiffServ, Droptail, RED, VoIP
35 Comparison of Deep Convolutional Neural Networks Models for Plant Disease Identification
Authors: Megha Gupta, Nupur Prakash
Abstract:
Identification of plant diseases has been performed using machine learning and deep learning models on the datasets containing images of healthy and diseased plant leaves. The current study carries out an evaluation of some of the deep learning models based on convolutional neural network architectures for identification of plant diseases. For this purpose, the publicly available New Plant Diseases Dataset, an augmented version of PlantVillage dataset, available on Kaggle platform, containing 87,900 images has been used. The dataset contained images of 26 diseases of 14 different plants and images of 12 healthy plants. The CNN models selected for the study presented in this paper are AlexNet, ZFNet, VGGNet (four models), GoogLeNet, and ResNet (three models). The selected models are trained using PyTorch, an open-source machine learning library, on Google Colaboratory. A comparative study has been carried out to analyze the high degree of accuracy achieved using these models. The highest test accuracy and F1-score of 99.59% and 0.996, respectively, were achieved by using GoogLeNet with Mini-batch momentum based gradient descent learning algorithm.
Keywords: comparative analysis, convolutional neural networks, deep learning, plant disease identification
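A minimal sketch, not the authors' training script, of fine-tuning torchvision's GoogLeNet on an image-folder dataset with SGD plus momentum (mini-batch momentum gradient descent). It assumes a recent torchvision (≥ 0.13) and a placeholder dataset path; only one epoch of the loop is shown.

```python
# Fine-tune GoogLeNet on an ImageFolder dataset (sketch with assumed paths).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tfms = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_ds = datasets.ImageFolder("new_plant_diseases/train", transform=tfms)  # placeholder path
loader = DataLoader(train_ds, batch_size=32, shuffle=True)

model = models.googlenet(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))  # 38 classes here
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:                  # one epoch shown for brevity
    optimizer.zero_grad()
    outputs = model(images)
    # handle the aux-logits namedtuple if a given torchvision build returns it
    logits = outputs.logits if hasattr(outputs, "logits") else outputs
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
```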
34 Modelling and Enhancing Engineering Drawing and Design Table Design by Analyzing Stress and Advanced Deformation Analysis Using Finite Element Method
Authors: Nitesh Pandey, Manish Kumar, Amit Kumar Srivastava, Pankaj Gupta
Abstract:
The research presents an extensive analysis of the Engineering Drawing and Design (EDD) table's design and development, accentuating its convertible utility and ergonomic design principles. Through the amalgamation of advanced design methodologies with simulation tools, this paper explores and compares the structural integrity of the EDD table, considering both linear and nonlinear stress behaviors. The study evaluates stress distribution and deformation patterns using the Finite Element Method (FEM) in Autodesk Fusion 360 CAD/CAM software. These analyses are critical to maximizing the durability and performance of the table. Stress situations are modeled using mathematical equations, which provide an accurate depiction of real-world operational conditions. The research highlights the EDD table as an innovative solution tailored to the diverse needs of modern workspaces, providing a balance of practical functionality and ergonomic design while demonstrating cost-effectiveness and time efficiency in the design process.
Keywords: Parametric modelling, Finite element method, FEM, Autodesk Fusion 360, stress analysis, CAD/CAM, computer aided design, computer-aided manufacturing.
33 Role of Process Parameters on Pocket Milling with Abrasive Water Jet Machining Technique
Authors: T. V. K. Gupta, J. Ramkumar, Puneet Tandon, N. S. Vyas
Abstract:
Abrasive Water Jet Machining (AWJM) is an unconventional machining process well known for machining hard-to-cut materials. The primary research focus of the process has been through-cutting, and very limited literature is available on pocket milling using AWJM. The present work is an attempt to use this process for milling applications considering a set of process parameters. Four input parameters that researchers have considered for part separation are selected for this application: abrasive size, flow rate, standoff distance and traverse speed. Pockets of definite size are machined to investigate surface roughness, material removal rate and pocket depth. Based on experiments on SS304 material, it is observed that higher traverse speeds give a better finish, because of the reduction in particle energy density, along with a lower pocket depth. Increasing the standoff distance and abrasive flow rate reduces the rate of material removal, as the jet loses its focus and collisions occur within the particles. ANOVA for each individual output parameter has been carried out to identify the significant process parameters.
Keywords: Abrasive flow rate, surface finish, abrasive size, standoff distance, traverse speed.
32 Improved Multi–Objective Firefly Algorithms to Find Optimal Golomb Ruler Sequences for Optimal Golomb Ruler Channel Allocation
Authors: Shonak Bansal, Prince Jain, Arun Kumar Singh, Neena Gupta
Abstract:
Recently, nature-inspired algorithms have found widespread use in tough and time-consuming multi-objective scientific and engineering design optimization problems. In this paper, we present extended forms of the firefly algorithm to find optimal Golomb ruler (OGR) sequences. One of the major applications of OGRs is as an unequally spaced channel-allocation scheme in optical wavelength division multiplexing (WDM) systems, in order to minimize the adverse four-wave mixing (FWM) crosstalk effect. The simulation results conclude that the proposed optimization algorithm has superior performance compared with existing conventional computing and nature-inspired optimization algorithms for finding OGRs in terms of ruler length, total optical channel bandwidth and computation time.
Keywords: Channel allocation, conventional computing, four-wave mixing, nature-inspired algorithm, optimal Golomb ruler, Lévy flight distribution, optimization, improved multi-objective firefly algorithms, Pareto optimal.
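A small sketch of the feasibility test at the core of OGR search: a set of integer marks is a Golomb ruler only if all pairwise differences are distinct. The penalty-based objective shown is an illustrative choice, not the exact fitness used by the authors.

```python
# Golomb-ruler check and a simple penalised objective for metaheuristics.
from itertools import combinations

def is_golomb(marks):
    diffs = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(diffs) == len(set(diffs))

def objective(marks, penalty=1000):
    diffs = [b - a for a, b in combinations(sorted(marks), 2)]
    repeats = len(diffs) - len(set(diffs))
    return (max(marks) - min(marks)) + penalty * repeats   # length + penalty

print(is_golomb([0, 1, 4, 9, 11]))   # True: a known optimal 5-mark ruler
print(is_golomb([0, 1, 2, 4]))       # False: differences 1 and 2 repeat
print(objective([0, 1, 4, 9, 11]))   # 11 (ruler length, no penalty)
```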
31 A Study on Leaching Behavior of Na, Ca and K Using Column Leach Test
Authors: Barman P.J, Kartha S A, Gupta S, Pradhan B.
Abstract:
A column leach test has been performed to examine the leaching behavior of sodium, calcium and potassium in landfills. In the column leach apparatus, two layers of contaminated and uncontaminated soil are placed at different height ratios (ratio of the depth of contaminated soil to the depth of uncontaminated soil). Water is poured from an overhead tank at a particular flow rate into the inlet of the soil column to maintain a certain ponding depth over the contaminated soil. The subsequent infiltration causes leaching, and the leachates are collected from the bottom of the column. The concentrations of Na, Ca and K in the leachate are measured using flame photometry. The experiments are extended by changing the rate of flow from the overhead tank to the column inlet while achieving the same ponding depth, and are performed for different scenarios in which the height ratios are altered and the variations in the concentrations of Na, Ca and K are observed. The study provides an estimate of leaching at landfill sites for different heights and precipitation intensities where a ponding depth is maintained over the landfill. It has been observed that the leaching behaviors of Na, Ca and K are not similar: calcium exhibits the highest amount of leaching compared with sodium and potassium under similar experimental conditions.
Keywords: Column leaching, flow rate, uncontaminated soil, contaminated soil, concentration, height ratio.
30 Optimization of Process Parameters of Pressure Die Casting using Taguchi Methodology
Authors: Satish Kumar, Arun Kumar Gupta, Pankaj Chandna
Abstract:
The present work analyses different parameters of pressure die casting to minimize casting defects. Pressure die casting is usually applied for casting aluminium alloys. Good surface finish with the required tolerances and dimensional accuracy can be achieved by optimizing controllable process parameters such as solidification time, molten metal temperature, filling time, injection pressure and plunger velocity. Moreover, by selecting optimum process parameters, pressure die casting defects such as porosity, insufficient spread of molten material and flash are also minimized. A pressure die cast component, a carburetor housing of aluminium alloy (Al2Si2O5), has therefore been considered. The effects of the selected process parameters on casting defects, and the subsequent setting of the parameter levels, have been accomplished using Taguchi's parameter design approach. The experiments have been performed as per the combinations of levels of the process parameters suggested by an L18 orthogonal array. Analysis of variance has been performed for the mean and the signal-to-noise ratio to estimate the percentage contribution of the different process parameters. A confidence interval has been estimated at the 95% consistency level, and three confirmation experiments have been performed to validate the optimum levels of the parameters. An overall 2.352% reduction in defects has been observed with the suggested optimum process parameters.
Keywords: Aluminium Casting, Pressure Die Casting, Taguchi Methodology, Design of Experiments
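A brief sketch of the smaller-the-better signal-to-noise ratio typically used in Taguchi analysis when the response is a defect count or percentage; the replicate values below are made up, not the experimental data of the paper.

```python
# Smaller-the-better S/N ratio: S/N = -10 * log10(mean(y^2)).
import numpy as np

def sn_smaller_the_better(y):
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

defects_run1 = [2.4, 2.7, 2.2]    # % defects in three replicates (assumed)
defects_run2 = [1.1, 1.3, 1.0]
print(sn_smaller_the_better(defects_run1))   # lower defects -> higher S/N
print(sn_smaller_the_better(defects_run2))
```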
29 Shrinkage of High Strength Concrete
Authors: S.M. Gupta, V.K. Sehgal, S.K. Kaushik
Abstract:
This paper presents the results of an experimental investigation carried out to evaluate the shrinkage of High Strength Concrete made by partial replacement of cement with fly ash and silica fume. The shrinkage has been studied using different types of coarse and fine aggregates, i.e., sandstone and granite of 12.5 mm size with Yamuna and Badarpur sand. The mix proportion of the concrete is 1:0.8:2.2 with a water-cement ratio of 0.30, and a superplasticizer dose of 2% by weight of cement is added to achieve the required degree of workability in terms of compaction factor. The test results show that the shrinkage strain of High Strength Concrete increases with age. The shrinkage strains of concrete with 10% replacement of cement by fly ash and silica fume, respectively, are 6 to 10% higher at various ages than those of concrete without fly ash and silica fume. The shrinkage strain of concrete with Badarpur sand as fine aggregate at 90 days is slightly less (10%) than that of concrete with Yamuna sand. Further, the shrinkage strain of concrete with granite as coarse aggregate at 90 days is slightly less (6 to 7%) than that of concrete with sandstone aggregate of the same size. The shrinkage strain of High Strength Concrete is also compared with that of normal strength concrete; the test results show that the shrinkage strain of high strength concrete is less than that of normal strength concrete.
Keywords: Shrinkage, high strength concrete, fly ash, silica fume, superplasticizers.
28 Ranking of Inventory Policies Using Distance Based Approach Method
Authors: Gupta Amit, Kumar Ramesh, Tewari P. C.
Abstract:
Globalization is putting enormous pressure on business organizations, especially manufacturing ones, to rethink their supply chains in innovative ways. Inventory accounts for a major portion of total sales revenue, so effective and efficient inventory management plays a vital role in the successful functioning of any organization, and selection of an inventory policy is one of the important purchasing activities. This paper focuses on the selection and ranking of alternative inventory policies. A deterministic quantitative model based on the Distance Based Approach (DBA) method has been developed for their evaluation and ranking; to our knowledge, this concept is employed for the first time for this type of selection problem. Four inventory policies are considered: economic order quantity (EOQ), just in time (JIT), vendor managed inventory (VMI) and a monthly policy. Improper selection could affect a company's competitiveness in terms of the productivity of its facilities and the quality of its products. The ranking of inventory policies is a multi-criteria problem: the selection criteria must first be identified, and the information then processed with reference to the relative importance of the attributes being compared. Criteria values for each inventory policy can be obtained analytically, by simulation, or as linguistic subjective judgments defined by fuzzy sets. A methodology is developed and applied to rank the inventory policies.
Keywords: Inventory Policy, Ranking, DBA, Selection criteria.
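A sketch of a generic distance-based ranking: normalise the decision matrix and rank alternatives by Euclidean distance from the ideal value of each criterion. The criteria, their values and the normalisation shown are illustrative; the paper's exact DBA formulation may differ in detail.

```python
# Distance-based ranking of inventory policies (illustrative data).
import numpy as np

# rows: EOQ, JIT, VMI, Monthly; columns: e.g. cost, service level, flexibility
X = np.array([[0.70, 0.80, 0.60],
              [0.55, 0.90, 0.85],
              [0.60, 0.85, 0.80],
              [0.80, 0.70, 0.50]])
benefit = np.array([False, True, True])        # True = higher is better

Xn = X / np.sqrt((X ** 2).sum(axis=0))         # vector normalisation
ideal = np.where(benefit, Xn.max(axis=0), Xn.min(axis=0))
dist = np.sqrt(((Xn - ideal) ** 2).sum(axis=1))
ranking = np.argsort(dist)                     # smallest distance ranks first
print([["EOQ", "JIT", "VMI", "Monthly"][i] for i in ranking])
```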
27 A PIM (Processor-In-Memory) for Computer Graphics: Data Partitioning and Placement Schemes
Authors: Jae Chul Cha, Sandeep K. Gupta
Abstract:
The demand for higher-performance graphics continues to grow because of the incessant desire for realism, and rapid advances in fabrication technology have enabled us to build several processor cores on a single die. Hence, it is important to develop single-chip parallel architectures for such data-intensive applications. In this paper, we propose an efficient PIM architecture tailored for computer graphics, which requires a large number of memory accesses. We then address the two tasks necessary for maximally exploiting the parallelism provided by the architecture, namely, partitioning and placement of graphics data, which affect load balance and communication costs, respectively. Under the constraint of uniform partitioning, we develop approaches for optimal partitioning and placement that significantly reduce the search space. We also present heuristics for identifying near-optimal placement, since the search space for placement remains impractically large despite this optimization. We then demonstrate the effectiveness of our partitioning and placement approaches via analysis of example scenes; simulation results show considerable search space reductions, and our placement heuristics perform close to optimal: the average ratio of communication overheads between our heuristics and the optimum was 1.05. Our uniform partitioning showed an average load-balance ratio of 1.47 for geometry processing and 1.44 for rasterization, which is reasonable.
Keywords: Data partitioning and placement, graphics, PIM, search space reduction.
26 SVM-based Multiview Face Recognition by Generalization of Discriminant Analysis
Authors: Dakshina Ranjan Kisku, Hunny Mehrotra, Jamuna Kanta Sing, Phalguni Gupta
Abstract:
Identity verification of persons by their multiview faces is a real-world problem in machine vision, since multiview faces have a non-linear representation in the feature space. This paper illustrates the usability of a generalization of LDA, in the form of canonical covariates, for the recognition of multiview faces. In the proposed work, a Gabor filter bank is used to extract facial features characterized by spatial frequency, spatial locality and orientation. The Gabor face representation captures a substantial amount of the variation among face instances that often occurs due to illumination, pose and facial expression changes. Convolution of the Gabor filter bank with face images of rotated profile views produces Gabor faces with high-dimensional feature vectors. Canonical covariates are then applied to the Gabor faces to reduce the high-dimensional feature space to low-dimensional subspaces. Finally, support vector machines are trained on the canonical subspaces, which contain a reduced set of features, and perform the recognition task. The proposed system is evaluated on the UMIST face database, and the experimental results demonstrate its efficiency and robustness with high recognition rates.
Keywords: Biometrics, multiview face recognition, Gabor wavelets, LDA, SVM.
25 Upgraded Cuckoo Search Algorithm to Solve Optimisation Problems Using Gaussian Selection Operator and Neighbour Strategy Approach
Authors: Mukesh Kumar Shah, Tushar Gupta
Abstract:
An upgraded Cuckoo Search Algorithm is proposed here to solve optimization problems, based on improvements to earlier versions of the Cuckoo Search Algorithm. Shortcomings of the earlier versions, such as slow convergence and trapping in local optima, are addressed in the proposed version: solutions are initialized randomly using an Improved Lambda Iteration Relaxation method, a Random Gaussian Distribution Walk improves local search, a Greedy Selection accelerates convergence to the optimized solution, and a "Study Nearby Strategy" improves global search performance by avoiding trapping in local optima. Better solutions are further generated by a Crossover Operation. The proposed strategy shows superiority in terms of high convergence speed over several classical algorithms. Three standard algorithms were tested on a 6-generator standard test system, and the results presented clearly demonstrate its superiority over other established algorithms. The algorithm is also capable of handling larger unit systems.
Keywords: Economic dispatch, Gaussian selection operator, prohibited operating zones, ramp rate limits, upgraded cuckoo search.
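A sketch of two building blocks commonly used in cuckoo-search variants: a Lévy-flight step drawn with Mantegna's algorithm and greedy replacement of a worse solution. The cost function, step scale and dimensions are placeholders, not the dispatch model or the authors' upgraded algorithm.

```python
# Lévy-flight step (Mantegna) plus greedy acceptance, on a toy objective.
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5):
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma, dim)
    v = np.random.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def cost(x):                                   # placeholder objective
    return np.sum((x - 3.0) ** 2)

x = np.random.uniform(0.0, 10.0, 6)            # e.g. outputs of 6 generators
for _ in range(200):
    candidate = x + 0.1 * levy_step(x.size)    # heavy-tailed random step
    if cost(candidate) < cost(x):              # greedy selection
        x = candidate
print("final cost:", cost(x))
```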
24 Influence of Power Flow Controller on Energy Transaction Charges in Restructured Power System
Authors: Manisha Dubey, Gaurav Gupta, Anoop Arya
Abstract:
The demand for power supply increases day by day in developing countries like India; hence the demand for reactive power support in the form of ancillary services has also increased. Multi-line and multi-type Flexible AC Transmission System (FACTS) controllers play a vital role in regulating power flow through transmission lines, and the unified power flow controller and interline power flow controller can be utilized to control reactive power flow. In a restructured power system, such controllers are popular due to their inherent capability. Transmission pricing using reactive power cost allocation through a modified matrix methodology has been proposed. FACTS technologies are quite costly to install, so it is very useful to apportion the expenses throughout the restructured electricity industry. Therefore, in this work, after embedding the FACTS devices into the load flow, the impact on the costs allocated to users in proportion to their use of the transmission framework has been analyzed. From the obtained results, it is clear that total cost recovery is enhanced with respect to reactive power flow through the different transmission lines for the 5-bus test system. A fair pricing policy for reactive power can be achieved by the proposed method, incorporating the FACTS controller in the cost recovery of the transmission network.
Keywords: Interline power flow controller, transmission pricing, unified power flow controller, cost allocation.
23 Dynamic Stall Characterization of Low Reynolds Airfoil in Mars and Titan’s Atmosphere
Authors: Vatasta Koul, Vaibhav Sharma, Ayush Gupta, Rajesh Yadav
Abstract:
Exploratory missions to Mars and Titan have increased recently with various endeavors to find an alternate home for humankind. The use of surface rovers has its limitations due to the rugged and uneven surfaces of these planetary bodies, while the use of aerial robots requires complete aerodynamic characterization of these vehicles in the atmospheric conditions of the planetary bodies. The dynamic stall phenomenon is extremely important for rotary-wing performance at the low Reynolds numbers encountered in the Martian and Titan atmospheres. The current research focuses on the aerodynamic characterization and exploration of the dynamic stall of two airfoils, E387 and Selig-Donovan 7003, in the Martian and Titan atmospheres at low Reynolds numbers of 10,000 and 50,000. Two-dimensional numerical simulations are conducted using a commercially available finite volume solver with a multi-species, non-reacting mixture of gases as the working fluid, and the k-epsilon (k-ε) turbulence model is used to capture the unsteady flow separation and the effect of turbulence. The dynamic characteristics are studied for pitching at constant rates between fixed extremes of angle of attack. This study of airfoils at low Reynolds numbers under the atmospheric conditions of Mars and Titan will help define the aerodynamic characteristics of these airfoils for unmanned aerial missions for outer space exploration.
Keywords: Aerodynamic, dynamic stall, low Reynolds, Mars, Titan.