Search results for: concave minimization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 311

191 Modeling and Analysis of Laser Sintering Process Scanning Time for Optimal Planning and Control

Authors: Agarana Michael C., Akinlabi Esther T., Pule Kholopane

Abstract:

In order to sustain the advantages of an advanced manufacturing technique such as laser sintering, minimization of the total processing cost of the parts being produced is very important. Efficient time management is usually essential to attaining optimal cost, which ultimately results in efficient planning and control of the advanced manufacturing process. During Selective Laser Sintering (SLS) procedures, it is possible to adjust various manufacturing parameters that influence the mechanical and other properties of the products. In this study, modelling and mathematical analysis, including sensitivity analysis, of the laser sintering process scanning time were carried out. The results of the analyses are represented with graphs, from which conclusions were drawn. It was specifically observed that achieving optimal total scanning time is key to the economic efficiency required for sustainability of the process.

Keywords: modeling and analysis, optimal planning and control, laser sintering process, scanning time

Procedia PDF Downloads 73
190 Waste Minimization through Vermicompost: An Alternative Approach

Authors: Mary Fabiola

Abstract:

Vermicompost is the product or process of composting using various worms. Large-scale vermicomposting is practiced in Canada, Italy, Japan, Malaysia, the Philippines, and the United States. The vermicompost may be used for farming, landscaping, and creating compost tea, or it may be sold; some of these operations also produce worms for bait and/or home vermicomposting. As a processing system, the vermicomposting of organic waste is very simple: worms ingest the waste material, break it up in their rudimentary gizzards, consume the digestible/putrescible portion and then excrete a stable, humus-like material that can be immediately marketed. Vermitechnology is a promising technique that has shown its potential in challenging areas such as augmentation of food production, waste recycling and management of solid wastes. There is no doubt that in India, where on one side pollution is increasing due to the accumulation of organic wastes, and on the other side there is a shortage of the organic manure that could increase the fertility and productivity of the land and produce nutritive and safe food, the scope for vermicomposting is enormous.

Keywords: pollution, solid wastes, vermicompost, waste recycling

Procedia PDF Downloads 393
189 Development of a New Piezoelectrically Actuated Micropump for Liquid and Gas

Authors: Chiang-Ho Cheng, An-Shik Yang, Chih-Jer Lin, Chun-Ying Lee

Abstract:

This paper aims to present the design, fabrication and testing of a novel piezoelectrically actuated, check-valve embedded micropump having the advantages of miniature size, light weight and low power consumption. This device is designed to pump gases and liquids, with the capability of performing in self-priming and bubble-tolerant work modes, by maximizing the stroke volume of the membrane as well as the compression ratio via minimization of the dead volume of the micropump chamber and channel. With the experimental apparatus set up, we obtained real-time values of the micropump flow rate, the displacement of the piezoelectric actuator and the deformation of the check valve simultaneously. The micropump with a check valve 0.4 mm in thickness showed higher output performance under a 120 Vpp sinusoidal waveform. The micropump achieved a maximum pumping rate of 42.2 ml/min and a back pressure of 14.0 kPa at corresponding frequencies of 28 and 20 Hz. The presented micropump is able to pump gases at a rate of 196 ml/min at an operating frequency of 280 Hz under the same 120 Vpp sinusoidal waveform.

Keywords: actuator, check-valve, micropump, piezoelectric

Procedia PDF Downloads 408
188 Comparison of Receiver Operating Characteristic Curve Smoothing Methods

Authors: D. Sigirli

Abstract:

The Receiver Operating Characteristic (ROC) curve is a commonly used statistical tool for evaluating the diagnostic performance of screening and diagnostic tests with continuous or ordinal scale results, which aim to predict the presence or absence of a condition, usually a disease. When the test results are measured as numeric values, sensitivity and specificity can be computed across all possible threshold values which discriminate the subjects as diseased or non-diseased. There are infinitely many possible decision thresholds along the continuum of the test results. The ROC curve presents the trade-off between sensitivity and 1-specificity as the threshold changes. The empirical ROC curve, a non-parametric estimator of the ROC curve, is robust and represents the data accurately. However, especially for small sample sizes, it suffers from high variability and, being a step function, can yield different false positive rates for a single true positive rate and vice versa. Moreover, because the estimated ROC curve has a jagged form while the true ROC curve is smooth, it underestimates the true ROC curve. Since the true ROC curve is assumed to be smooth, several smoothing methods have been explored, including kernel estimates, log-concave densities, fitting the parameters of a specified density function to the data by maximum-likelihood fitting of univariate distributions, creating a probability distribution by fitting a specified distribution to the data, and using smooth versions of the empirical distribution functions. In the present paper, we propose a smooth ROC curve estimate based on a boundary-corrected kernel function and compare the performance of ROC curve smoothing methods for diagnostic test results coming from different distributions and different sample sizes. We performed a simulation study with 1000 repetitions to compare the performance of the different methods in different scenarios. The performance of the proposed method was typically better than that of the empirical ROC curve and only slightly worse than the binormal model when the underlying samples were in fact generated from the normal distribution.
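
A minimal sketch of the two estimators compared above, using synthetic normal samples and a plain Gaussian kernel rather than the boundary-corrected kernel proposed in the paper:

```python
import numpy as np
from scipy.stats import norm

def empirical_roc(neg, pos, grid):
    """Step-function ROC: FPR and TPR at every threshold in `grid`."""
    fpr = np.array([(neg >= c).mean() for c in grid])
    tpr = np.array([(pos >= c).mean() for c in grid])
    return fpr, tpr

def kernel_roc(neg, pos, grid):
    """Gaussian-kernel smoothed ROC (Silverman's rule-of-thumb bandwidths)."""
    h_neg = 1.06 * neg.std(ddof=1) * len(neg) ** -0.2
    h_pos = 1.06 * pos.std(ddof=1) * len(pos) ** -0.2
    fpr = np.array([norm.sf((c - neg) / h_neg).mean() for c in grid])
    tpr = np.array([norm.sf((c - pos) / h_pos).mean() for c in grid])
    return fpr, tpr

rng = np.random.default_rng(0)
neg = rng.normal(0.0, 1.0, 40)        # scores of non-diseased subjects
pos = rng.normal(1.0, 1.0, 40)        # scores of diseased subjects
grid = np.linspace(-4.0, 5.0, 200)
fpr_e, tpr_e = empirical_roc(neg, pos, grid)
fpr_k, tpr_k = kernel_roc(neg, pos, grid)
```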

Keywords: empirical estimator, kernel function, smoothing, receiver operating characteristic curve

Procedia PDF Downloads 123
187 An Epsilon Hierarchical Fuzzy Twin Support Vector Regression

Authors: Arindam Chaudhuri

Abstract:

This research presents epsilon-hierarchical fuzzy twin support vector regression (epsilon-HFTSVR) based on epsilon-fuzzy twin support vector regression (epsilon-FTSVR) and epsilon-twin support vector regression (epsilon-TSVR). Epsilon-FTSVR is obtained by incorporating trapezoidal fuzzy numbers into epsilon-TSVR, which takes care of the uncertainty existing in forecasting problems. Epsilon-FTSVR determines a pair of epsilon-insensitive proximal functions by solving two related quadratic programming problems. The structural risk minimization principle is implemented by introducing a regularization term into the primal problems of epsilon-FTSVR. This yields stable, positive definite dual problems, which improves regression performance. Epsilon-FTSVR is then reformulated as epsilon-HFTSVR, consisting of a set of hierarchical layers, each containing epsilon-FTSVR. Experimental results on both synthetic and real datasets reveal that epsilon-HFTSVR has remarkable generalization performance with minimal training time.

Keywords: regression, epsilon-TSVR, epsilon-FTSVR, epsilon-HFTSVR

Procedia PDF Downloads 331
186 The Hyperbolic Smoothing Approach for Automatic Calibration of Rainfall-Runoff Models

Authors: Adilson Elias Xavier, Otto Corrêa Rotunno Filho, Paulo Canedo De Magalhães

Abstract:

This paper addresses the issue of automatic parameter estimation in conceptual rainfall-runoff (CRR) models. Due to threshold structures commonly occurring in CRR models, the associated mathematical optimization problems have the significant characteristic of being strongly non-differentiable. In order to face this task, the proposed resolution method adopts a smoothing strategy using a special class of C∞ differentiable functions. The final estimate is obtained by solving a sequence of differentiable subproblems which gradually approach the original conceptual problem. The use of this technique, called the Hyperbolic Smoothing Method (HSM), makes it possible to apply the most powerful minimization algorithms and allows the main difficulties presented by the original CRR problem to be overcome. A set of computational experiments is presented to illustrate both the reliability and the efficiency of the proposed approach.
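
A minimal sketch of the idea on a toy single-reservoir model (not the paper's CRR model): the threshold term max(y, 0) is replaced by the C∞ function φ(y, τ) = (y + √(y² + τ²))/2, and calibration is solved as a sequence of smooth least-squares subproblems with decreasing τ:

```python
import numpy as np
from scipy.optimize import minimize

def phi(y, tau):
    """C-infinity approximation of max(y, 0); phi -> max(y, 0) as tau -> 0."""
    return 0.5 * (y + np.sqrt(y * y + tau * tau))

def simulate(params, rain, tau):
    """Toy single-reservoir model: spill = max(storage - capacity, 0), smoothed."""
    k, capacity = params
    storage, flows = 0.0, []
    for r in rain:
        storage += r
        spill = phi(storage - capacity, tau)
        storage -= spill
        flows.append(k * storage + spill)
        storage -= k * storage
    return np.array(flows)

def calibrate(rain, observed, x0=(0.4, 4.0)):
    """Solve a sequence of smooth subproblems with decreasing tau."""
    x = np.array(x0)
    for tau in (1.0, 0.1, 0.01, 0.001):
        obj = lambda p, t=tau: np.sum((simulate(p, rain, t) - observed) ** 2)
        x = minimize(obj, x, method='BFGS').x
    return x

rng = np.random.default_rng(0)
rain = rng.gamma(2.0, 2.0, 200)
observed = simulate((0.25, 6.0), rain, tau=1e-3)   # synthetic "measured" flows
print(calibrate(rain, observed))                   # should recover ~(0.25, 6.0)
```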

Keywords: rainfall-runoff models, automatic calibration, hyperbolic smoothing method

Procedia PDF Downloads 104
185 Genetic Algorithm Optimization of a Small Scale Natural Gas Liquefaction Process

Authors: M. I. Abdelhamid, A. O. Ghallab, R. S. Ettouney, M. A. El-Rifai

Abstract:

An optimization scheme based on a COM server is suggested for communication between the Genetic Algorithm (GA) toolbox of MATLAB and Aspen HYSYS. The structure and details of the proposed framework are discussed. The power of the developed scheme is illustrated by its application to the optimization of a recently developed natural gas liquefaction process, in which Aspen HYSYS was used for minimization of the power consumption by optimizing the values of five operating variables. In this work, optimization by coupling the GA in MATLAB with the Aspen HYSYS model of the same process, using the same five decision variables, enabled an improvement in power consumption of 3.3% when 77% of the natural gas feed is liquefied. On inclusion of the flow rates of both the nitrogen and carbon dioxide refrigerants as two additional decision variables, the power consumption decreased by 6.5% for a 78% liquefaction of the natural gas feed.
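
The COM-server coupling itself is specific to MATLAB and Aspen HYSYS and is not reproduced here; the sketch below only illustrates the optimization side, with a hypothetical `evaluate_power` function standing in for the flowsheet call that the COM server would perform:

```python
import numpy as np

def evaluate_power(x):
    """Placeholder for the flowsheet evaluation: in the paper this call goes
    through a COM server to the Aspen HYSYS model and returns compressor power.
    A synthetic convex surrogate is used here so the sketch runs stand-alone."""
    target = np.array([0.4, 0.6, 0.3, 0.8, 0.5])
    return float(np.sum((x - target) ** 2))

def genetic_minimize(bounds, pop_size=40, gens=100, rng=np.random.default_rng(1)):
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    for _ in range(gens):
        cost = np.array([evaluate_power(ind) for ind in pop])
        parents = pop[np.argsort(cost)[: pop_size // 2]]       # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            w = rng.random(len(lo))
            child = w * a + (1 - w) * b                        # blend crossover
            child += rng.normal(0, 0.05, len(lo)) * (hi - lo)  # Gaussian mutation
            children.append(np.clip(child, lo, hi))
        pop = np.vstack([parents, children])
    cost = np.array([evaluate_power(ind) for ind in pop])
    return pop[np.argmin(cost)], cost.min()

best, power = genetic_minimize((np.zeros(5), np.ones(5)))
```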

Keywords: stranded gas liquefaction, genetic algorithm, COM server, single nitrogen expansion, carbon dioxide pre-cooling

Procedia PDF Downloads 410
184 Thermal Regeneration of CO2 Spent Palm Shell-Polyetheretherketone Activated Carbon Sorbents

Authors: Usman D. Hamza, Noor S. Nasri, Mohammed Jibril, Husna M. Zain

Abstract:

Activated carbons (M4P0, M4P2, and M5P2) used in this research were produced from palm shell and polyetheretherketone (PEEK) via carbonization, impregnation, and microwave activation. The adsorption/desorption process was carried out using static volumetric adsorption. Regeneration is important in the overall economy of the process and in waste minimization. This work focuses on the thermal regeneration of the CO2-exhausted microwave-activated carbons. The regeneration strategy adopted was thermal desorption with a nitrogen purge at an N2 feed flow rate of 20 ml/min for 1 h at atmospheric pressure, followed by drying at 150 °C. Seven successive adsorption/regeneration cycles were carried out on the material. It was found that after seven adsorption/regeneration cycles, the regeneration efficiency (RE) for the CO2 activated carbon from palm shell only (M4P0) was more than 90%, while that of the hybrid palm shell-PEEK carbons (M4P2, M5P2) was above 95%. The cyclic adsorption and regeneration show the stability of the adsorbent materials.
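
Regeneration efficiency is commonly computed as the ratio of the CO2 uptake measured after a regeneration cycle to the fresh-sorbent uptake; a minimal sketch with illustrative (not measured) numbers:

```python
# Illustrative numbers only; q0 is the fresh-sorbent CO2 uptake and q[i] the
# uptake measured after each thermal regeneration cycle (e.g. in mmol/g).
q0 = 2.10
q = [2.05, 2.03, 2.01, 2.00, 1.99, 1.98, 1.97]

for cycle, qi in enumerate(q, start=1):
    re = 100.0 * qi / q0          # regeneration efficiency, RE (%)
    print(f"cycle {cycle}: RE = {re:.1f} %")
```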

Keywords: activated carbon, palm shell-PEEK, regeneration, thermal

Procedia PDF Downloads 459
183 Application of Transportation Linear Programming Algorithms to Cost Reduction in Nigeria Soft Drinks Industry

Authors: Salami Akeem Olanrewaju

Abstract:

Transportation models or problems are primarily concerned with the optimal (best possible) way in which a product produced at different factories or plants (called supply origins) can be transported to a number of warehouses or customers (called demand destinations). The objective in a transportation problem is to fully satisfy the destination requirements within the operating production capacity constraints at the minimum possible cost. The objective of this study is to determine ways of minimizing transportation cost in order to maximize profit. Data were gathered from the records of the Distribution Department of 7-Up Bottling Company Plc., Ilorin, Kwara State, Nigeria. The data were analyzed using SPSS (Statistical Package for the Social Sciences) while applying three methods of solving a transportation problem. The three methods produced the same result; therefore, any of the methods can be adopted by the company in transporting its final products to the wholesale dealers in order to minimize total transportation cost.
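
The paper applies classical hand methods for the transportation problem; as a cross-check, the same problem can be posed as a linear program. A sketch with illustrative cost, supply and demand data (not the company's figures):

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data: unit transport costs from 2 plants to 3 depots,
# plant capacities and depot demands (totals balance at 700 units).
cost = np.array([[4.0, 6.0, 8.0],
                 [5.0, 3.0, 7.0]])
supply = np.array([300.0, 400.0])
demand = np.array([250.0, 300.0, 150.0])

m, n = cost.shape
c = cost.ravel()
# Supply constraints: sum_j x_ij <= s_i ; demand constraints: sum_i x_ij = d_j
A_ub = np.zeros((m, m * n))
for i in range(m):
    A_ub[i, i * n:(i + 1) * n] = 1.0
A_eq = np.zeros((n, m * n))
for j in range(n):
    A_eq[j, j::n] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand, bounds=(0, None))
print(res.x.reshape(m, n))   # optimal shipment plan
print(res.fun)               # minimum total transport cost
```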

Keywords: cost minimization, resources utilization, distribution system, allocation problem

Procedia PDF Downloads 226
182 A Teaching Learning Based Optimization for Optimal Design of a Hybrid Energy System

Authors: Ahmad Rouhani, Masood Jabbari, Sima Honarmand

Abstract:

This paper introduces a method for the optimal design of a hybrid wind/photovoltaic/fuel cell generation system for a typical domestic load that is not located near the electricity grid. In this configuration, the combination of a battery, an electrolyser, and a hydrogen storage tank is used as the energy storage system. The aim of this design is the minimization of the overall cost of the generation scheme over 20 years of operation. MATLAB/Simulink is applied for choosing the appropriate structure and optimizing the system sizing. A teaching-learning-based optimization is used to optimize the cost function. An overall power management strategy is designed for the proposed system to manage power flows among the different energy sources and the storage unit in the system. The results have been analyzed in technical and economic terms. The simulation results indicate that the proposed hybrid system would be a feasible solution for stand-alone applications at remote locations.
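
A minimal sketch of the teaching-learning-based optimization loop, with a hypothetical quadratic surrogate standing in for the 20-year system cost that the Simulink model would return for a given set of component sizes:

```python
import numpy as np

def tlbo_minimize(cost, lo, hi, pop=30, iters=200, rng=np.random.default_rng(0)):
    """Minimal TLBO: the teacher phase pulls the class toward the best solution,
    the learner phase lets randomly paired learners learn from each other."""
    dim = len(lo)
    X = rng.uniform(lo, hi, (pop, dim))
    f = np.array([cost(x) for x in X])
    for _ in range(iters):
        # --- teacher phase ---
        teacher, mean = X[np.argmin(f)], X.mean(axis=0)
        Tf = rng.integers(1, 3)                       # teaching factor in {1, 2}
        for i in range(pop):
            new = np.clip(X[i] + rng.random(dim) * (teacher - Tf * mean), lo, hi)
            fn = cost(new)
            if fn < f[i]:
                X[i], f[i] = new, fn
        # --- learner phase ---
        for i in range(pop):
            j = rng.integers(pop)
            if j == i:
                continue
            direction = X[i] - X[j] if f[i] < f[j] else X[j] - X[i]
            new = np.clip(X[i] + rng.random(dim) * direction, lo, hi)
            fn = cost(new)
            if fn < f[i]:
                X[i], f[i] = new, fn
    best = np.argmin(f)
    return X[best], f[best]

# Hypothetical stand-in for the 20-year cost as a function of component sizes.
sizes, cost20y = tlbo_minimize(lambda x: float(np.sum((x - 3.0) ** 2)),
                               np.zeros(4), 10.0 * np.ones(4))
```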

Keywords: hybrid energy system, optimum sizing, power management, TLBO

Procedia PDF Downloads 542
181 Strategies for the Optimization of Ground Resistance in Large Scale Foundations for Optimum Lightning Protection

Authors: Oibar Martinez, Clara Oliver, Jose Miguel Miranda

Abstract:

In this paper, we discuss the standard improvements which can be made to reduce the earth resistance in difficult terrains for optimum lightning protection, their practical limitations, and how the modeling can be refined for accurate diagnostics and ground resistance minimization. Ground resistance minimization can be pursued via three different approaches: burying vertical electrodes connected in parallel, burying horizontal conductive plates or meshes, or modifying the terrain itself, either by replacing the terrain material in a large volume or by adding earth-enhancing compounds. The use of vertical electrodes connected in parallel poses several practical limitations. In order to prevent loss of effectiveness, it is necessary to keep a minimum distance between electrodes, typically around five times the electrode length; otherwise, the overlapping of the local equipotential lines around each electrode reduces the efficiency of the configuration. The addition of parallel electrodes reduces the resistance and facilitates the measurement, but the basic parallel-resistor formula of circuit theory will always underestimate the final resistance; numerical simulation of the equipotential lines around the electrodes overcomes this limitation. The resistance of a single electrode will always be proportional to the soil resistivity. Electrodes are usually installed with a backfilling material of high conductivity, which increases the effective diameter. However, the improvement is marginal, since the electrode diameter enters the estimate of the ground resistance only through a logarithmic function. Substances used for efficient chemical treatment must be environmentally friendly and must feature stability, high hygroscopicity, low corrosivity, and high electrical conductivity. A number of earth-enhancement materials are commercially available. Many are composed of carbon-based materials or clays like bentonite. These materials can also be used as backfilling materials to reduce the resistance of an electrode. Chemical treatment of soil raises environmental issues: some products contain copper sulfate or other copper-based compounds, which may not be environmentally friendly. Carbon-based compounds are relatively inexpensive and have very low resistivities, but they also present corrosion issues; typically, the carbon can corrode and destroy a copper electrode in around five years. These compounds also raise potential environmental concerns. Some earthing enhancement materials contain cement, which after installation acquires properties very close to those of concrete; this prevents the earthing enhancement material from leaching into the soil. After analyzing different configurations, we conclude that a buried conductive ring with vertical electrodes connected periodically should be the optimum baseline solution for the grounding of a large structure installed on high-resistivity terrain. To show this, a practical example is presented in which we simulate the ground resistance of a conductive ring buried in a terrain with a resistivity in the range of 1 kOhm·m.
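
A back-of-envelope sketch of the points above, using the standard single-rod formula and the ideal parallel-resistor bound (rod dimensions are assumed for illustration; the true multi-rod resistance requires the field simulation discussed in the text):

```python
import numpy as np

def rod_resistance(rho, L, d):
    """Ground resistance of a single vertical rod (Dwight/Sunde approximation):
    R = rho / (2*pi*L) * (ln(4*L/d) - 1), with rho in ohm*m, L and d in metres."""
    return rho / (2 * np.pi * L) * (np.log(4 * L / d) - 1)

rho = 1000.0          # high-resistivity terrain, ohm*m (as in the example above)
L, d = 3.0, 0.016     # assumed 3 m rod, 16 mm diameter
R1 = rod_resistance(rho, L, d)
print(f"single rod: {R1:.0f} ohm")

# Naive circuit-theory estimate for n rods in parallel; as discussed above this
# is optimistic because it ignores the overlap of the potential funnels, so the
# true resistance lies above R1 / n unless the rods are widely spaced.
for n in (2, 4, 8):
    print(f"{n} rods, ideal parallel bound: {R1 / n:.0f} ohm")
```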

Keywords: grounding improvements, large scale scientific instrument, lightning risk assessment, lightning standards

Procedia PDF Downloads 106
180 Optimizing Design Parameters for Efficient Saturated Steam Production in Fire Tube Boilers: A Cost-Effective Approach

Authors: Yoftahe Nigussie Worku

Abstract:

This research focuses on advancing fire tube boiler technology by systematically optimizing design parameters to achieve efficient saturated steam production. The main objective is to design a high-performance boiler with a production capacity of 2000 kg/h at a 12-bar design pressure while minimizing costs. The methodology employs iterative analysis, utilizing relevant formulas, and considers material selection and production methods. The study results in a boiler operating at 85.25% efficiency, with a fuel consumption rate of 140.37 kg/h and a heat output of 1610 kW. The theoretical importance lies in balancing efficiency, safety considerations, and cost minimization. The research addresses key questions on parameter optimization, material choices, and the safety-efficiency balance, contributing valuable insights to fire tube boiler design.

Keywords: safety consideration, efficiency, production methods, material selection

Procedia PDF Downloads 22
179 Minimizing Mutant Sets by Equivalence and Subsumption

Authors: Samia Alblwi, Amani Ayad

Abstract:

Mutation testing is the art of generating syntactic variations of a base program and checking whether a candidate test suite can identify all the mutants that are not semantically equivalent to the base: this technique is widely used by researchers to select quality test suites. One of the main obstacles to the widespread use of mutation testing is cost: even small programs (a few dozen lines of code) can give rise to a large number of mutants (up to hundreds): this has created an incentive to seek to reduce the number of mutants while preserving their collective effectiveness. Two criteria have been used to reduce the size of mutant sets: equivalence, which aims to partition the set of mutants into equivalence classes modulo semantic equivalence, and selecting one representative per class; subsumption, which aims to define a partial ordering among mutants that ranks mutants by effectiveness and seeks to select maximal elements in this ordering. In this paper we analyze these two policies using analytical and empirical criteria.
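
A toy illustration (hypothetical kill vectors, not the paper's data) of the two reduction criteria: equivalence collapses mutants with identical kill behaviour, and subsumption then keeps only the hardest-to-kill (dominator) mutants:

```python
# Hypothetical kill matrix: which tests (columns) kill which mutants (rows).
kill = {
    "m1": (True,  False, True,  False),
    "m2": (True,  False, True,  False),   # identical to m1 -> same equivalence class
    "m3": (True,  True,  True,  False),
    "m4": (False, False, True,  False),
    "m5": (False, False, False, False),   # killed by no test: candidate equivalent mutant
}

# Equivalence: keep one representative per distinct kill vector, and drop
# mutants that no test kills (candidates for equivalence with the base program).
by_vector = {}
for m, v in kill.items():
    by_vector.setdefault(v, m)
representatives = [m for v, m in by_vector.items() if any(v)]

# Subsumption: a subsumes b if every test that kills a also kills b
# (a is at least as hard to kill); keep only the maximal (dominator) mutants.
def subsumes(a, b):
    return kill[a] != kill[b] and all((not ka) or kb for ka, kb in zip(kill[a], kill[b]))

minimal = [m for m in representatives
           if not any(subsumes(other, m) for other in representatives if other != m)]
print(minimal)    # a suite that kills these also kills every other detectable mutant above
```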

Keywords: mutation testing, mutant sets, mutant equivalence, mutant subsumption, mutant set minimization

Procedia PDF Downloads 32
178 Performance Comparison of ADTree and Naive Bayes Algorithms for Spam Filtering

Authors: Thanh Nguyen, Andrei Doncescu, Pierre Siegel

Abstract:

Classification is an important data mining technique that can be used for data filtering in artificial intelligence. The broad applicability of classification to all kinds of data means it is used in nearly every field of modern life. Classification helps us group different items according to the features deemed interesting and useful. In this paper, we compare two classification methods, Naïve Bayes and ADTree, used to detect spam e-mail. This choice is motivated by the fact that the Naive Bayes algorithm is based on probability calculus while the ADTree algorithm is based on decision trees. The parameter settings of the above classifiers aim at maximization of the true positive rate and minimization of the false positive rate. The experimental results present classification accuracy and cost analysis in view of the optimal classifier choice for spam detection. We also point out the number of attributes needed to obtain a trade-off between the number of attributes and the classification accuracy.
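
Only the Naive Bayes side is easy to sketch with common libraries (ADTree is typically run in Weka); a minimal example on a tiny synthetic corpus, evaluated on its own training data purely to show how the true positive and false positive rates are read off:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import confusion_matrix

# Tiny illustrative corpus; 1 = spam, 0 = ham.
texts = ["win a free prize now", "cheap meds online now", "free lottery win",
         "meeting at noon tomorrow", "project report attached", "lunch with the team"]
labels = [1, 1, 1, 0, 0, 0]

X = CountVectorizer().fit_transform(texts)
clf = MultinomialNB().fit(X, labels)
pred = clf.predict(X)

tn, fp, fn, tp = confusion_matrix(labels, pred).ravel()
tpr = tp / (tp + fn)          # rate the parameter tuning tries to maximize
fpr = fp / (fp + tn)          # rate the parameter tuning tries to minimize
print(f"TPR = {tpr:.2f}, FPR = {fpr:.2f}")
```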

Keywords: classification, data mining, spam filtering, naive bayes, decision tree

Procedia PDF Downloads 384
177 A Method of Effective Planning and Control of Industrial Facility Energy Consumption

Authors: Aleksandra Aleksandrovna Filimonova, Lev Sergeevich Kazarinov, Tatyana Aleksandrovna Barbasova

Abstract:

A method for the effective planning and control of industrial facility energy consumption is offered. The method allows the management and full control of complex production facilities to be arranged optimally, in accordance with the criteria of minimal technical and economic losses under forecasting control. The method is based on the optimal construction of the power efficiency characteristics with the prescribed accuracy. The problem of optimal design of the forecasting model is solved on the basis of three criteria: maximizing the weighted sum of the forecasting points with the prescribed accuracy; solving the problem by standard principles for incomplete statistical data on the basis of minimization of a regularized function; and minimizing the technical and economic losses due to forecasting errors.

Keywords: energy consumption, energy efficiency, energy management system, forecasting model, power efficiency characteristics

Procedia PDF Downloads 354
176 Industrial Ecology Perspectives of Food Supply Chains: A Framework of Analysis

Authors: Luciano Batista, Sylvia Saes, Nuno Fouto, Liam Fassam

Abstract:

This paper introduces the theoretical and methodological basis of an analytical framework conceived to bring industrial ecology perspectives into the core of the disciplines underlying studies concerned with environmental sustainability aspects beyond the product cycle in a supply chain. Given the pressing challenges faced by the food sector, the framework focuses on waste minimization through industrial linkages in food supply chains. The combination of industrial ecology practice with basic LCA elements, the waste hierarchy model, and the spatial scale of industrial symbiosis allows the standardization of qualitative analyses and associated outcomes. Such standardization enables comparative analysis not only between different stages of a supply chain but also between different supply chains. The analytical approach proposed contributes more coherently to the wider circular economy aspiration of optimizing the flow of goods so as to get the most out of raw materials and cut waste to a minimum.

Keywords: by-product synergy, food supply chain, industrial ecology, industrial symbiosis

Procedia PDF Downloads 388
175 Optimization of Process Parameters in Wire Electrical Discharge Machining of Inconel X-750 for Dimensional Deviation Using Taguchi Technique

Authors: Mandeep Kumar, Hari Singh

Abstract:

The effective optimization of machining process parameters dramatically affects the cost and production time of machined components as well as the quality of the final products. This paper presents the optimization of a Wire Electrical Discharge Machining operation using Inconel X-750 as the work material. The objective considered in this study is minimization of the dimensional deviation. Six input process parameters of WEDM, namely spark gap voltage, pulse-on time, pulse-off time, wire feed rate, peak current and wire tension, were chosen as variables to study the process performance. Taguchi's design of experiments methodology has been used for planning and designing the experiments. The analysis of variance was carried out for the raw data as well as for the signal-to-noise ratio. Four input parameters and one two-factor interaction were found to be statistically significant for their effects on the response of interest. Confirmation experiments were also performed to validate the predicted results.
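
For a smaller-the-better response such as dimensional deviation, the signal-to-noise ratio used in a Taguchi analysis is S/N = -10 log10(mean(y²)); a short sketch with hypothetical deviation data (not the paper's measurements):

```python
import numpy as np

def sn_smaller_the_better(y):
    """Taguchi S/N ratio for a smaller-the-better response:
    S/N = -10 * log10( (1/n) * sum(y_i^2) )."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical dimensional deviations (mm) from repeated runs of three trials.
trials = {1: [0.021, 0.025], 2: [0.034, 0.030], 3: [0.018, 0.017]}
for trial, deviations in trials.items():
    print(f"trial {trial}: S/N = {sn_smaller_the_better(deviations):.2f} dB")
```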

Keywords: ANOVA, DOE, inconel, machining, optimization

Procedia PDF Downloads 171
174 Mechanical Simulation with Electrical and Dimensional Tests for AISHa Containment Chamber

Authors: F. Noto, G. Costa, L. Celona, F. Chines, G. Ciavola, G. Cuttone, S. Gammino, O. Leonardi, S. Marletta, G. Torrisi

Abstract:

At Istituto Nazionale di Fisica Nucleare – Laboratorio Nazionale del Sud (INFN-LNS), broad experience in the design, construction and commissioning of ECR and microwave ion sources is available. The AISHa ion source has been designed taking into account the typical requirements of hospital-based facilities, where the minimization of the mean time between failures (MTBF) is a key point, together with maintenance operations, which should be fast and easy. It is intended to be a multipurpose device operating at 18 GHz in order to achieve higher plasma densities. It should provide enough versatility for future needs of hadron therapy, including the ability to run at larger microwave power to produce different species and highly charged ion beams. The source is potentially interesting for any hadron therapy facility using heavy ions. In this paper, we present the dimensional and electrical tests of an innovative solution for the containment chamber that allows us to solve our insulation and structural problems.

Keywords: FEM analysis, electron cyclotron resonance ion source, dielectrical measurement, hadron therapy

Procedia PDF Downloads 265
173 Optimal Reactive Power Dispatch under Various Contingency Conditions Using Whale Optimization Algorithm

Authors: Khaled Ben Oualid Medani, Samir Sayah

Abstract:

The Optimal Reactive Power Dispatch (ORPD) problem is usually solved and analysed under normal operating conditions. However, network collapses appear under contingency conditions. In this paper, ORPD under several contingencies is solved using the proposed Whale Optimization Algorithm (WOA). To ensure viability of the power system in contingency conditions, several critical cases are simulated in order to prevent such situations and prepare the power system to face them. The results are obtained on the IEEE 30-bus test system for the solution of the ORPD problem, in which the control of bus voltages, transformer tap positions and reactive power sources is involved. Moreover, another method, namely Particle Swarm Optimization with Time Varying Acceleration Coefficient (PSO-TVAC), has been compared with the proposed technique. Simulation results indicate that the proposed WOA gives remarkably effective solutions in the case of outages.
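
A minimal sketch of the WOA update rules (encircling, random search and bubble-net spiral phases), with a hypothetical smooth surrogate in place of the IEEE 30-bus loss evaluation, which in practice requires a load-flow solver:

```python
import numpy as np

def woa_minimize(cost, lo, hi, whales=30, iters=200, rng=np.random.default_rng(0)):
    """Minimal Whale Optimization Algorithm sketch."""
    dim = len(lo)
    X = rng.uniform(lo, hi, (whales, dim))
    f = np.array([cost(x) for x in X])
    best = X[np.argmin(f)].copy()
    for t in range(iters):
        a = 2.0 - 2.0 * t / iters                     # linearly decreasing coefficient
        for i in range(whales):
            r1, r2 = rng.random(dim), rng.random(dim)
            A, C = 2 * a * r1 - a, 2 * r2
            if rng.random() < 0.5:
                if np.all(np.abs(A) < 1):             # exploitation: encircle the best
                    X[i] = best - A * np.abs(C * best - X[i])
                else:                                  # exploration: follow a random whale
                    rand = X[rng.integers(whales)]
                    X[i] = rand - A * np.abs(C * rand - X[i])
            else:                                      # bubble-net spiral around the best
                l = rng.uniform(-1, 1, dim)
                X[i] = np.abs(best - X[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lo, hi)
        f = np.array([cost(x) for x in X])
        if f.min() < cost(best):
            best = X[np.argmin(f)].copy()
    return best, cost(best)

# Hypothetical surrogate for the loss as a function of the control vector
# (generator voltages, tap settings, shunt VAr sources).
x_opt, loss = woa_minimize(lambda v: float(np.sum((v - 1.02) ** 2)),
                           0.95 * np.ones(6), 1.10 * np.ones(6))
```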

Keywords: optimal reactive power dispatch, power system analysis, real power loss minimization, contingency condition, metaheuristic technique, whale optimization algorithm

Procedia PDF Downloads 88
172 Thermal Characterization of Smart and Large-Scale Building Envelope System in a Subtropical Climate

Authors: Andrey A. Chernousov, Ben Y. B. Chan

Abstract:

The thermal behavior of a large-scale, phase change material (PCM) enhanced building envelope system was studied with regard to the need for pre-fabricated construction in subtropical regions. The proposed large-scale envelope consists of a reinforced aluminum skin, an insulation core, a phase change material and a reinforced gypsum board. The impact of the PCM on the energy efficiency of an enveloped room was resolved by validation of the EnergyPlus numerical scheme and optimization of the smart material's location in the core. The PCM location was optimized by minimizing the cooling energy demand. Good agreement between the test and simulation results has been shown. The optimal location of the PCM layer under Hong Kong summer conditions was then recomputed for core thicknesses of 40, 60 and 80 mm. A non-dimensional value of the optimal PCM location was found to be the same for all the studied cases and the considered external and internal conditions.

Keywords: thermal performance, phase change material, energy efficiency, PCM optimization

Procedia PDF Downloads 379
171 Energy Benefits of Urban Platooning with Self-Driving Vehicles

Authors: Eduardo F. Mello, Peter H. Bauer

Abstract:

The primary focus of this paper is the generation of energy-optimal speed trajectories for heterogeneous electric vehicle platoons in urban driving conditions. Optimal speed trajectories are generated for individual vehicles and for an entire platoon under the assumption that they can be executed without errors, as would be the case for self-driving vehicles. It is then shown that optimizing for the “average vehicle in the platoon” generates transportation energy savings similar to optimizing the speed trajectory of each vehicle individually. The introduced approach only requires the lead vehicle to run the optimization software, while the remaining vehicles need only adaptive cruise control capability. The achieved energy savings are typically between 30% and 50% for stop-to-stop segments in cities. The prime motivation for urban platooning comes from the fact that urban platoons efficiently utilize the available space, and the minimization of transportation energy in cities is important for many reasons, e.g., environmental, power, and range considerations.

Keywords: electric vehicles, energy efficiency, optimization, platooning, self-driving vehicles, urban traffic

Procedia PDF Downloads 142
170 Optimization of Line Loss Minimization Using Distributed Generation

Authors: S. Sambath, P. Palanivel

Abstract:

Research conducted in the last few decades has proven that the inclusion of Distributed Generation (DG) in distribution systems considerably lowers the level of power losses and improves power quality. Moreover, the choice of DG is even more attractive since it provides not only benefits in power loss minimisation but also a wide range of other advantages covering environmental, economic, power quality and technical issues. This paper is an attempt to quantify and analyse the impact of distributed generation in Tamil Nadu, India, to examine what the benefits of decentralized generation would be for meeting rural loads. We used load flow analysis to simulate and quantify the loss reduction and power quality enhancement obtained by having decentralized generation available, under actual line conditions, for rural feeders in Tamil Nadu, India. Reactive power and voltage profiles were considered. This helps utilities to better plan their systems in rural areas to meet dispersed loads while optimizing renewable and decentralised generation sources.

Keywords: distributed generation, distribution system, load flow analysis, optimal location, power quality

Procedia PDF Downloads 375
169 Minimization of Denial of Services Attacks in Vehicular Adhoc Networking by Applying Different Constraints

Authors: Amjad Khan

Abstract:

The security of vehicular ad hoc networking is of great importance as it involves serious life threats. To provide secure communication amongst vehicles on the road, the conventional security system is not enough. It is necessary to prevent the network resources from being wasted and to protect them against malicious nodes, so as to ensure data bandwidth availability to the legitimate nodes of the network. This work provides a non-conventional security system by introducing some constraints to minimize DoS (Denial of Services) attacks, especially on data and bandwidth. The data packets received by a node in the network pass through a number of tests, and if any of the tests fails, the node drops those data packets and does not forward them anymore. Also, if a node claims to be the nearest node for forwarding emergency messages, the sender can effectively identify the true or false status of the claim by using these constraints. Consequently, DoS (Denial of Services) attacks are minimized by the instant availability of data without wasting network resources.

Keywords: black hole attack, grey hole attack, intransient traffic tempering, networking

Procedia PDF Downloads 258
168 Blind Super-Resolution Reconstruction Based on PSF Estimation

Authors: Osama A. Omer, Amal Hamed

Abstract:

Successful blind image super-resolution algorithms require an exact estimate of the Point Spread Function (PSF). In the absence of any prior information about the imaging system and the true image, this estimation is normally done by trial-and-error experimentation until an acceptable restored image quality is obtained. Multi-frame blind super-resolution algorithms often suffer from slow convergence and sensitivity to complex noise. This paper presents a super-resolution image reconstruction algorithm based on estimation of the PSF that yields the optimum restored image quality. The PSF is estimated by the knife-edge method, implemented by measuring the spreading of the edges in the reproduced HR image itself during the reconstruction process. The proposed image reconstruction approach uses L1-norm minimization and robust regularization based on a bilateral prior to deal with different data and noise models. A series of experimental results shows that the proposed method can outperform previous work robustly and efficiently.
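
The knife-edge PSF estimation is the paper's contribution and is not reproduced here; the sketch below only illustrates the restoration side in a simplified form: sub-gradient descent on an L1 data term with a bilateral-TV-style prior, for a single frame with a known, hypothetical Gaussian PSF and no decimation:

```python
import numpy as np
from scipy.ndimage import convolve

def restore_l1_bilateral(y, psf, lam=0.05, step=0.05, iters=300,
                         shifts=((0, 1), (1, 0), (1, 1), (1, -1))):
    """Sub-gradient descent on ||H x - y||_1 + lam * sum_s ||x - S_s x||_1,
    i.e. an L1 data term with a bilateral-TV-style prior (sketch only)."""
    x = y.astype(float).copy()
    psf_T = psf[::-1, ::-1]                              # adjoint of the blur
    for _ in range(iters):
        r = convolve(x, psf, mode='reflect') - y
        g = convolve(np.sign(r), psf_T, mode='reflect')  # subgradient of the data term
        for dy, dx in shifts:
            d = np.sign(x - np.roll(np.roll(x, dy, 0), dx, 1))
            g += lam * (d - np.roll(np.roll(d, -dy, 0), -dx, 1))
        x -= step * g
    return x

# Toy usage: blur a synthetic edge image with a hypothetical 5x5 Gaussian PSF.
xx, yy = np.meshgrid(np.arange(-2, 3), np.arange(-2, 3))
psf = np.exp(-(xx ** 2 + yy ** 2) / 2.0)
psf /= psf.sum()
truth = np.zeros((64, 64))
truth[:, 32:] = 1.0
blurred = convolve(truth, psf, mode='reflect')
restored = restore_l1_bilateral(blurred, psf)
```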

Keywords: blind, PSF, super-resolution, knife-edge, blurring, bilateral, L1 norm

Procedia PDF Downloads 336
167 Influence of Measurement System on Negative Bias Temperature Instability Characterization: Fast BTI vs Conventional BTI vs Fast Wafer Level Reliability

Authors: Vincent King Soon Wong, Hong Seng Ng, Florinna Sim

Abstract:

Negative Bias Temperature Instability (NBTI) is one of the critical degradation mechanisms in semiconductor device reliability that causes a shift in the threshold voltage (Vth). However, a thorough understanding of this reliability failure mechanism is still out of reach due to a recovery characteristic known as NBTI recovery. This paper demonstrates the severity of NBTI recovery as well as one of the effective methods used to mitigate it, namely the minimization of measurement system delays. A comparison was made between two measurement systems with significantly different measurement delays to show how NBTI recovery causes result deviations and how fast measurement systems can mitigate it. Another method to minimize NBTI recovery without the influence of the measurement system, known as Fast Wafer Level Reliability (FWLR) NBTI, was also performed to serve as a reference.

Keywords: fast vs slow BTI, fast wafer level reliability (FWLR), negative bias temperature instability (NBTI), NBTI measurement system, metal-oxide-semiconductor field-effect transistor (MOSFET), NBTI recovery, reliability

Procedia PDF Downloads 381
166 Realistic Testing Procedure of Power Swing Blocking Function in Distance Relay

Authors: Farzad Razavi, Behrooz Taheri, Mohammad Parpaei, Mehdi Mohammadi Ghalesefidi, Siamak Zarei

Abstract:

As one of the major problems in protecting large-scale power systems, power swing and its effect on distance relays have caused a lot of damage to energy transfer systems in many parts of the world. Power swing has therefore gained the attention of many researchers, which has led to the invention of different methods for power swing detection. The power swing detection algorithm is highly important in a distance relay, but protection relays should also meet general requirements such as correct fault detection, response rate, and minimization of disturbances in the power system. To ensure these requirements are met, protection relays need different tests during the development, setup, maintenance, configuration, and troubleshooting steps. This paper covers the power swing scheme of a modern numerical protection relay, the 7SA522, to address the effect of different fault types on the power swing blocking function. In this study, it is shown that different fault types occurring during a power swing lead to different unblocking times for the distance relay.

Keywords: power swing, distance relay, power system protection, relay test, transient in power system

Procedia PDF Downloads 342
165 Role of Zakat and Awqf in Socioeconomic Development of Pakistan: Exploring the Issues and Challenges

Authors: Marium. K.Makhdoom, Talat Hussain, Syed H. Bukhari

Abstract:

The motivation behind this paper is to highlight the role of Zakat as a financial framework and a social equity instrument that minimizes the level of poverty in society, in order to assess socioeconomic development. The study investigates the conceptual system of Islamic economics in order to propose an alternative model that can contribute fundamentally to the Ummah and serve nations. The paper concludes that Zakat can be viewed as one of the appropriate measures of socioeconomic development, which implies that when individuals pay Zakat, the socioeconomic development level will be higher, and vice versa. It is the duty of Muslims to pay Zakat in order to accomplish practical improvement in terms of wealth redistribution among Muslims and to bridge the gap between the rich and the poor in society. The paper also considers Zakat as an index to gauge economic development and examines the role of Zakat as an instrument of social justice and poverty eradication in society. In general, this involves the annual payment of more than two percent of one's capital after the needs of the family have been met.

Keywords: Zakat, Waqf, economic development, Pakistan, Islamic economics, macroeconomics, microeconomics

Procedia PDF Downloads 394
164 Density Measurement of Underexpanded Jet Using Stripe Patterned Background Oriented Schlieren Method

Authors: Shinsuke Udagawa, Masato Yamagishi, Masanori Ota

Abstract:

The Schlieren method, which has conventionally been used to visualize high-speed flows, has disadvantages such as the complexity of the experimental setup and the inability to quantitatively analyze the amount of refraction of light. The Background Oriented Schlieren (BOS) method proposed by Meier is one of the measurement methods that solve the problems mentioned above. The BOS method exploits the refraction of light in the same way as the Schlieren method. It is characterized by the use of a digital camera to capture images of a background placed behind the observation area. The images are later analyzed by a computer to quantitatively detect the amount of shift of the background image. The experimental setup for BOS does not require the concave mirrors, pinholes, or color filters that are necessary in the conventional Schlieren method, thus simplifying the setup. However, the BOS method suffers from defocusing of the observation results, since focusing the camera on the background image leads to defocusing of the observed object. The defocusing of the object becomes greater as the distance between the background and the object increases; on the other hand, higher sensitivity is obtained. It is therefore necessary to adjust the distance between the background and the object to a value appropriate for the experiment, considering the trade-off between defocus and sensitivity. The purpose of this study is to experimentally clarify the effect of defocus on density field reconstruction. In this study, visualization experiments of an underexpanded jet were performed using the BOS measurement system that we constructed, with a Ronchi ruling as the background. The reservoir pressure of the jet and the distance between the camera and the jet axis were fixed, and the distance between the background and the jet axis was varied as the parameter. The images were later analyzed on a personal computer to quantitatively detect the amount of shift of the background image by comparing the background pattern with the captured image of the underexpanded jet. The measured shifts were reconstructed into a density field using the Abel transformation and the Gladstone-Dale equation. From the experimental results, it is found that the reconstructed density image becomes more blurred, and the noise decreases, as the distance between the background and the axis of the underexpanded jet increases. Consequently, it is clarified that the sensitivity constant should be greater than 20 and the circle of confusion diameter should be less than 2.7 mm, at least for this experimental setup.
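
A minimal sketch of the last two reconstruction steps (onion-peeling Abel inversion followed by the Gladstone-Dale relation), using a synthetic refractive-index profile in place of the measured BOS shifts and an assumed Gladstone-Dale constant; the cross-correlation shift detection itself is not reproduced:

```python
import numpy as np

K_GD = 2.26e-4          # Gladstone-Dale constant for air, m^3/kg (assumed value)

def onion_peel_matrix(n_rings, dr):
    """Chord-length matrix A for onion peeling: P = A @ f, where f is the
    axisymmetric field in concentric rings and P its line-of-sight projection."""
    edges = np.arange(n_rings + 1) * dr
    A = np.zeros((n_rings, n_rings))
    for i in range(n_rings):
        for j in range(i, n_rings):
            A[i, j] = 2.0 * (np.sqrt(edges[j + 1] ** 2 - edges[i] ** 2)
                             - np.sqrt(max(edges[j] ** 2 - edges[i] ** 2, 0.0)))
    return A

# Synthetic axisymmetric refractive-index disturbance n(r) - n_inf (a stand-in
# for what the BOS shift analysis would deliver for the underexpanded jet).
dr, n_rings = 1e-4, 60
r = (np.arange(n_rings) + 0.5) * dr
dn_true = 3e-5 * np.exp(-(r / 2e-3) ** 2)

A = onion_peel_matrix(n_rings, dr)
projection = A @ dn_true                    # what a line-of-sight measurement sees
dn_rec = np.linalg.solve(A, projection)     # Abel inversion (onion peeling)
rho_excess = dn_rec / K_GD                  # Gladstone-Dale: n - 1 = K * rho
```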

Keywords: BOS method, underexpanded jet, abel transformation, density field visualization

Procedia PDF Downloads 41
163 Optimization of the Measure of Compromise as a Version of Sorites Paradox

Authors: Aleksandar Hatzivelkos

Abstract:

The term "compromise" is mostly used casually within social choice theory. It is usually treated as a mere result of the social choice function, which omits its deeper meaning and ramifications. This paper is based on a mathematical model that describes a compromise as a version of the Sorites paradox. It introduces a formal definition of the d-measure of divergence from a compromise and models a notion of compromise that is often used only colloquially. Such a model of the vagueness phenomenon, which lies at the core of the notion of compromise, enables the introduction of new mathematical structures. In order to maximize compromise, different methods can be used. In this paper, we explore the properties of a social welfare function TdM (from Total d-Measure), defined as a function that minimizes the total sum of d-measures of divergence over all possible linear orderings. We prove that TdM satisfies the strict Pareto principle and behaves well asymptotically. Furthermore, we show that for certain domain restrictions, TdM satisfies positive responsiveness and IIIA (intense independence of irrelevant alternatives), and is thus equivalent to the Borda count on such domain restrictions. This result opens new opportunities in social choice, especially when there is an emphasis on compromise in the decision-making process.

Keywords: borda count, compromise, measure of divergence, minimization

Procedia PDF Downloads 100
162 Hybrid PWM Techniques for the Reduction of Switching Losses and Voltage Harmonics in Cascaded Multilevel Inverters

Authors: Venkata Reddy Kota

Abstract:

These days, the industrial trend is moving away from heavy and bulky passive components towards power converter systems that use more and more semiconductor elements. It is also difficult to connect traditional converters to medium and high voltages. For these reasons, a new family of multilevel inverters has appeared as a solution for working with higher voltage levels. Different modulation topologies such as Sinusoidal Pulse Width Modulation (SPWM) and Selective Harmonic Elimination Pulse Width Modulation (SHE-PWM) are available for multilevel inverters. In this work, different hybrid modulation techniques, which are combinations of fundamental-frequency modulation and multilevel sinusoidal modulation, are compared. The main characteristics of these modulations are the reduction of switching losses with good harmonic performance and balanced power loss dissipation among the devices. The proposed hybrid modulation schemes are developed and simulated in MATLAB/Simulink for a cascaded H-bridge inverter. The results validate the applicability of the proposed schemes to cascaded multilevel inverters.

Keywords: hybrid PWM techniques, cascaded multilevel inverters, switching loss minimization

Procedia PDF Downloads 584