Search results for: queue size distribution at a random epoch
3382 A Review: Comparative Analysis of Different Categorical Data Clustering Ensemble Methods
Authors: S. Sarumathi, N. Shanthi, M. Sharmila
Abstract:
Over the past decade, a substantial amount of work has been done in data clustering research under the unsupervised learning technique in data mining. Several algorithms and methods have been proposed focusing on clustering different data types, representation of cluster models, and accuracy rates of the clusters. However, no single clustering algorithm proves to be the most efficient in providing the best results. Accordingly, in order to address this issue, a new technique, called the cluster ensemble method, has emerged. The cluster ensemble is a good alternative approach for facing the cluster analysis problem. The main aim of the cluster ensemble is to merge different clustering solutions in such a way as to achieve accuracy and to improve the quality of the individual data clusterings. The substantial and unremitting development of new methods in the sphere of data mining, together with the incessant interest in inventing new algorithms, makes a critical analysis of the existing techniques and future novelties obligatory. This paper presents a comparative study of different cluster ensemble methods along with their features, systematic working processes, and the average accuracy and error rates of each ensemble method. Consequently, this comprehensive analysis will be very useful for the community of clustering practitioners and will also help in deciding on the most suitable method for the problem at hand.
Keywords: Clustering, Cluster Ensemble methods, Co-association matrix, Consensus function, Median partition.
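As a concrete illustration of the co-association (evidence accumulation) consensus idea mentioned in the keywords, the following minimal Python sketch builds a co-association matrix from several k-means runs and extracts a consensus partition by hierarchical clustering. The data, the number of base clusterings, and the choice of consensus function are illustrative assumptions, not any specific method surveyed in the paper.

```python
# Minimal sketch: cluster-ensemble consensus via a co-association matrix
# (evidence accumulation). Illustrative only; data and parameters are arbitrary.
import numpy as np
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
n = len(X)

# Build the ensemble: several base clusterings with different k and seeds.
co_assoc = np.zeros((n, n))
n_runs = 20
for run in range(n_runs):
    k = int(rng.integers(2, 6))
    labels = KMeans(n_clusters=k, n_init=5, random_state=run).fit_predict(X)
    # Co-association: fraction of runs in which two points share a cluster.
    co_assoc += (labels[:, None] == labels[None, :]).astype(float)
co_assoc /= n_runs

# Consensus function: hierarchical clustering on 1 - co-association.
dist = squareform(1.0 - co_assoc, checks=False)
consensus = fcluster(linkage(dist, method="average"), t=2, criterion="maxclust")
print("consensus cluster sizes:", np.bincount(consensus)[1:])
```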
3381 Energy Distribution of EEG Signals: EEG Signal Wavelet-Neural Network Classifier
Authors: I. Omerhodzic, S. Avdakovic, A. Nuhanovic, K. Dizdarevic
Abstract:
In this paper, a wavelet-based neural network (WNN) classifier for recognizing EEG signals is implemented and tested on three sets of EEG signals (healthy subjects, patients with epilepsy, and patients with epileptic syndrome during the seizure). First, the Discrete Wavelet Transform (DWT) with Multi-Resolution Analysis (MRA) is applied to decompose the EEG signal into its components (δ, θ, α, β and γ) at different resolution levels, and Parseval's theorem is employed to extract the percentage distribution of energy features of the EEG signal at the different resolution levels. Second, the neural network (NN) classifies these extracted features to identify the EEG type according to the percentage distribution of energy features. The performance of the proposed algorithm has been evaluated using a total of 300 EEG signals. The results showed that the proposed classifier has the ability to recognize and classify EEG signals efficiently.
Keywords: Epilepsy, EEG, Wavelet transform, Energy distribution, Neural Network, Classification.
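The sketch below illustrates the feature-extraction step described above: a DWT decomposition of a signal followed by the percentage of total energy carried by each resolution level (Parseval's relation). The wavelet family, decomposition depth, sampling rate, and synthetic test signal are assumptions for illustration, not the authors' settings.

```python
# Sketch of the feature-extraction step: DWT decomposition of a signal and
# the percentage energy per resolution level (Parseval's relation).
# Wavelet family, levels and the test signal are illustrative assumptions.
import numpy as np
import pywt

fs = 173.61                     # Hz, an assumed sampling rate
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(len(t))  # toy "EEG"

coeffs = pywt.wavedec(eeg, wavelet="db4", level=4)  # [cA4, cD4, cD3, cD2, cD1]
energies = np.array([np.sum(c ** 2) for c in coeffs])
percent = 100.0 * energies / energies.sum()

for name, p in zip(["A4", "D4", "D3", "D2", "D1"], percent):
    print(f"{name}: {p:.1f}% of total energy")
# The resulting percentage-energy vector would be the input to the NN classifier.
```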
3380 Surface Pressure Distribution of a Flapped-Airfoil for Different Momentum Injection at the Leading Edge
Authors: Mohammad Mashud, S. M. Nahid Hasan
Abstract:
The aim of this research work is to modify the NACA 4215 airfoil with a flap and a rotary cylinder at the leading edge and to experimentally study the static pressure distribution over the airfoil equipped with the flap and the leading-edge vortex generator. In this research, the NACA 4215 wing model has been constructed by generating the profile geometry using the standard equations and design software such as AutoCAD and SolidWorks. To perform the experiment, three wooden models are prepared and tested in a subsonic wind tunnel. The experiments were carried out at various angles of attack. The flap angle and momentum injection rate are changed to observe the characteristics of the pressure distribution. In this research, a new concept of flow separation control mechanism has been introduced to improve the aerodynamic characteristics of the airfoil. Control of flow separation over an airfoil which carries a vortex generator (rotating cylinder) at its leading edge is experimentally simulated under the effects of momentum injection. The experimental results show that flow separation control is possible by the proposed mechanism, and benefits can be achieved by the momentum injection technique. The wing performance is significantly improved due to the control of flow separation by the momentum injection method.
Keywords: Airfoil, momentum injection, flap and pressure distribution.
3379 Estimating the Life-Distribution Parameters of Weibull-Life PV Systems Utilizing Non-Parametric Analysis
Authors: Saleem Z. Ramadan
Abstract:
In this paper, a model is proposed to determine the life distribution parameters of the useful life region of PV systems, utilizing a combination of non-parametric and linear regression analysis of the failure data of these systems. Results showed that this method is dependable for analyzing failure time data for such reliable systems when the data are scarce.
Keywords: Masking, Bathtub model, reliability, non-parametric analysis, useful life.
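The following minimal sketch shows one common way to combine a non-parametric estimate of the failure CDF (median ranks) with linear regression on the linearized Weibull CDF to obtain the shape and scale parameters. The failure times are made-up placeholders, not PV data, and this is only one instance of the non-parametric plus regression approach described above.

```python
# Sketch: Weibull life-distribution parameters from failure times by pairing
# a non-parametric CDF estimate (median ranks) with linear regression on the
# linearized Weibull CDF. Failure times below are illustrative, not PV data.
import numpy as np

failure_times = np.sort(np.array([310., 450., 620., 800., 1050., 1400., 1900.]))
n = len(failure_times)
ranks = np.arange(1, n + 1)
F = (ranks - 0.3) / (n + 0.4)          # Bernard's median-rank approximation

x = np.log(failure_times)
y = np.log(-np.log(1.0 - F))           # linearized Weibull CDF
beta, intercept = np.polyfit(x, y, 1)  # slope = shape, intercept = -beta*ln(eta)
eta = np.exp(-intercept / beta)

print(f"shape beta = {beta:.2f}, scale eta = {eta:.0f} (same units as t)")
```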
3378 The Statistical Significant of Adsorbents for Effective Zn (II) Ions Removal
Authors: Kiurski S. Jelena, Oros B. Ivana, Kecić S. Vesna, Kovačević M. Ilija, Aksentijević M. Snežana
Abstract:
The adsorption efficiency of various adsorbents for the removal of Zn(II) ions from waste printing developer was studied in laboratory batch mode. The maximum adsorption efficiency of 94.1% was achieved with unfired clay pellets (size d ≈ 15 mm). The obtained values of adsorption efficiency were subjected to the independent-samples t test in order to investigate the statistically significant differences between the investigated adsorbents for the effective removal of Zn(II) ions from the waste printing developer. The most statistically significant differences in adsorption efficiency for Zn(II) ion removal were obtained between unfired clay pellets (size d ≈ 15 mm) and activated carbon (|t| = 6.909), natural zeolite (|t| = 10.380), a mixture of activated carbon and natural zeolite (|t| = 9.865), bentonite (|t| = 6.159), fired clay (|t| = 6.641), fired clay pellets (size d ≈ 5 mm) (|t| = 6.678), and fired clay pellets (size d ≈ 8 mm) (|t| = 3.422), respectively.
Keywords: Adsorbent, adsorption efficiency, statistical analysis, zinc ion.
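A minimal sketch of the independent-samples t test used above to compare two adsorbents follows. The efficiency values are placeholders invented for illustration, not the measured data.

```python
# Sketch of the independent-samples t test used to compare two adsorbents.
# The efficiency values below are placeholders, not the measured data.
import numpy as np
from scipy import stats

unfired_clay_pellets = np.array([94.1, 93.5, 92.8, 94.6, 93.9])   # % removal
activated_carbon     = np.array([81.2, 79.8, 80.5, 82.1, 80.9])   # % removal

t_stat, p_value = stats.ttest_ind(unfired_clay_pellets, activated_carbon)
print(f"|t| = {abs(t_stat):.3f}, p = {p_value:.4f}")
# A small p-value (e.g. < 0.05) indicates a statistically significant
# difference between the mean adsorption efficiencies of the two adsorbents.
```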
3377 Study of Explicit Finite Difference Method in One Dimensional System
Authors: Azizollah Khormali, Seyyed Shahab Tabatabaee Moradi, Dmitry Petrakov
Abstract:
One of the most important parameters in petroleum reservoirs is the pressure distribution along the reservoir, as the pressure varies with time and location. A popular method to determine the pressure distribution in a reservoir in the unsteady-state flow regime is to apply Darcy's equation and solve it numerically. The numerical simulation of reservoirs is based on these numerical solutions of the different partial differential equations (PDEs) representing the multiphase flow of fluids. The pressure profile has been obtained in a one-dimensional system by solving Darcy's equation explicitly. Changes in the pressure profile in three situations are investigated in this work: changes in section length, changes in time step, and time approaching infinity. The effects of these changes on the pressure profile are shown and discussed in the paper.
Keywords: Explicit solution, Numerical simulation, Petroleum reservoir, Pressure distribution.
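As an illustration of the explicit scheme discussed above, the sketch below solves a 1D linear diffusivity form of the pressure equation (the standard combination of Darcy's law with continuity) with a forward-time, centered-space update. The grid, diffusivity, and boundary pressures are arbitrary assumptions, and the time step is chosen to respect the explicit stability limit.

```python
# Sketch: explicit finite-difference solution of the 1D diffusivity equation
# dp/dt = eta * d2p/dx2 (a linearized form of Darcy flow). Grid, diffusivity
# and boundary pressures are arbitrary; dt respects the explicit stability
# limit eta*dt/dx**2 <= 0.5.
import numpy as np

L, nx = 100.0, 51                 # reservoir length [m], grid points
eta = 1.0e-2                      # hydraulic diffusivity [m^2/s] (assumed)
dx = L / (nx - 1)
dt = 0.4 * dx ** 2 / eta          # stable time step
nt = 2000

p = np.full(nx, 20.0)             # initial pressure [MPa]
p[0], p[-1] = 25.0, 20.0          # boundary conditions (e.g. injector / far field)

for _ in range(nt):
    p[1:-1] = p[1:-1] + eta * dt / dx ** 2 * (p[2:] - 2 * p[1:-1] + p[:-2])

print("pressure profile after %.0f s:" % (nt * dt))
print(np.round(p[::10], 3))
```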
3376 A Numerical Method to Evaluate the Elastoplastic Material Properties of Fiber Reinforced Composite
Authors: M. Palizvan, M. H. Sadr, M. T. Abadi
Abstract:
The representative volume element (RVE) plays a central role in the mechanics of random heterogeneous materials with a view to predicting their effective properties. In this paper, a computational homogenization methodology, developed to determine the effective linear elastic properties of composite materials, is extended to predict the effective nonlinear elastoplastic response of long fiber reinforced composite. Finite element simulations of volumes of different sizes and fiber volume fractions are performed to calculate the overall response of the RVE. The dependence of the overall stress-strain curves on the number of fibers inside the RVE is studied for the 2D case. Volume-averaged stress-strain responses are generated from RVEs and compared with the finite element calculations available in the literature at moderate and high fiber volume fractions. For these materials, the existence of an RVE is demonstrated for RVE sizes corresponding to 10–100 times the diameter of the fibers. In addition, the response of small RVEs is found to be anisotropic, whereas the average over all large ones recovers the isotropic material properties.
Keywords: Homogenization, periodic boundary condition, elastoplastic properties, RVE.
3375 An EWMA p Chart Based On Improved Square Root Transformation
Authors: S. Sukparungsee
Abstract:
Generally, the traditional Shewhart p chart has been developed for charting binomial data. This chart has been developed using the normal approximation under the conditions of a low defect level and a small to moderate sample size. Real applications, however, often depart from these assumptions due to skewness in the exact distribution. In this paper, a modified Exponentially Weighted Moving Average (EWMA) control chart for detecting a change in binomial data is proposed based on an improved square root transformation, namely the ISRT p EWMA control chart. The numerical results show that the ISRT p EWMA chart is superior to the ISRT p chart for small to moderate shifts, whereas the latter is better for large shifts.
Keywords: Number of defects, Exponentially Weighted Moving Average, Average Run Length, Square root transformations.
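The sketch below shows the general mechanics of an EWMA chart on square-root-transformed binomial counts. The transform used here, Y = sqrt(X + 3/8), is the classic variance-stabilizing form and stands in for the paper's improved square root transformation (ISRT), whose exact expression is not reproduced; the in-control parameters, smoothing constant, and limit width are assumed example values.

```python
# Sketch of an EWMA chart on square-root-transformed binomial counts.
# The transform Y = sqrt(X + 3/8) stands in for the paper's ISRT; the mean
# and standard deviation of the transformed statistic are rough approximations.
import numpy as np

rng = np.random.default_rng(1)
n, p0 = 100, 0.05                       # sample size and in-control fraction
lam, L = 0.2, 3.0                       # EWMA smoothing constant and limit width

counts = rng.binomial(n, p0, size=200)  # simulated in-control defect counts
y = np.sqrt(counts + 3.0 / 8.0)         # transformed statistic

mu0 = np.sqrt(n * p0 + 3.0 / 8.0)       # approximate in-control mean
sigma0 = 0.5                            # approximate stabilized std. dev.

z = np.empty(len(y))
prev = mu0
for i, yi in enumerate(y):
    prev = lam * yi + (1 - lam) * prev  # EWMA recursion
    z[i] = prev

sigma_z = sigma0 * np.sqrt(lam / (2 - lam))        # asymptotic EWMA std. dev.
ucl, lcl = mu0 + L * sigma_z, mu0 - L * sigma_z
print("out-of-control samples:", np.where((z > ucl) | (z < lcl))[0])
```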
3374 An Exact Solution of Axi-symmetric Conductive Heat Transfer in Cylindrical Composite Laminate under the General Boundary Condition
Authors: M. Kayhani, M. Nourouzi, A. Amiri Delooei
Abstract:
This study presents an exact general solution for steady-state conductive heat transfer in cylindrical composite laminates. An appropriate Fourier transformation has been obtained using the Sturm-Liouville theorem. The series coefficients are obtained by solving a set of equations related to the thermal boundary conditions at the inner and outer surfaces of the cylinder, as well as to the temperature continuity and heat flux continuity between the layers. This set of equations is solved using the Thomas algorithm. In this paper, the effect of the fibers' angle on the temperature distribution of the composite laminate is investigated under general boundary conditions. We show that the temperature distribution for any composite laminate lies between the temperature distributions for laminates with θ = 0° and θ = 90°.
Keywords: Exact solution, composite laminate, heat conduction, cylinder, Fourier transformation.
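The Thomas algorithm named above is the standard tridiagonal solver; a minimal, self-contained sketch follows. The small test system is arbitrary and only illustrates the forward-elimination and back-substitution steps used to obtain the series coefficients.

```python
# Sketch of the Thomas algorithm (tridiagonal matrix solver) used to obtain
# the series coefficients. The small test system below is arbitrary.
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c and right-hand side d (1-D arrays, len(b) == len(d))."""
    n = len(d)
    cp = np.zeros(n - 1)
    dp = np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        denom = b[i] - a[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / denom
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: a 4x4 tridiagonal system whose exact solution is [1, 1, 1, 1].
a = np.array([1.0, 1.0, 1.0])        # sub-diagonal
b = np.array([4.0, 4.0, 4.0, 4.0])   # main diagonal
c = np.array([1.0, 1.0, 1.0])        # super-diagonal
d = np.array([5.0, 6.0, 6.0, 5.0])
print(thomas(a, b, c, d))
```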
3373 Energy Deposited by Secondary Electrons Generated by Swift Proton Beams through Polymethylmethacrylate
Authors: Maurizio Dapor, Isabel Abril, Pablo de Vera, Rafael Garcia-Molina
Abstract:
The ionization yield of ion tracks in polymers and bio-molecular systems reaches a maximum, known as the Bragg peak, close to the end of the ion trajectories. Along the path of the ions through the materials, many electrons are generated, which produce a cascade of further ionizations and, consequently, a shower of secondary electrons. Among these, very low energy secondary electrons can produce damage in the biomolecules by dissociative electron attachment. This work deals with the calculation of the energy distribution of electrons produced by protons in a sample of polymethylmethacrylate (PMMA), a material that is used as a phantom for living tissues in hadron therapy. PMMA is also of relevance for microelectronics in CMOS technologies and as a photoresist mask in electron beam lithography. We present a Monte Carlo code that, starting from a realistic description of the energy distribution of the electrons ejected by protons moving through PMMA, simulates the entire cascade of generated secondary electrons. By following in detail the motion of all these electrons, we find the radial distribution of the energy that they deposit in PMMA for several initial proton energies characteristic of the Bragg peak.
Keywords: Monte Carlo method, secondary electrons, energetic ions, ion-beam cancer therapy, ionization cross section, polymethylmethacrylate, proton beams, radial energy distribution.
3372 Robust Adaptation to Background Noise in Multichannel C-OTDR Monitoring Systems
Authors: Andrey V. Timofeev, Viktor M. Denisov
Abstract:
A robust sequential nonparametric method is proposed for real-time adaptation to background noise parameters. The distribution of the background noise is modelled as a Huber contamination mixture. The method is designed to operate as an adaptation unit included inside the detection subsystem of an integrated multichannel monitoring system. The proposed method guarantees the given size of a non-asymptotic confidence set for the noise parameters. The properties of the suggested method are rigorously proved. The proposed algorithm has been successfully tested in the real conditions of a functioning C-OTDR monitoring system designed to monitor railways.
Keywords: Guaranteed estimation, multichannel monitoring systems, non-asymptotic confidence set, contamination mixture.
3371 CFD Modeling of a Radiator Axial Fan for Air Flow Distribution
Authors: S. Jain, Y. Deshpande
Abstract:
The fluid mechanics principle is used extensively in designing axial flow fans and their associated equipment. This paper presents a computational fluid dynamics (CFD) modeling of air flow distribution from a radiator axial flow fan used in an acid pump truck Tier4 (APT T4) Repower. This axial flow fan augments the transfer of heat from the engine mounted on the APT T4. CFD analysis was performed for an area weighted average static pressure difference at the inlet and outlet of the fan. Pressure contours, velocity vectors, and path lines were plotted for detailing the flow characteristics for different orientations of the fan blade. The results were then compared and verified against known theoretical observations and actual experimental data. This study shows that a CFD simulation can be very useful for predicting and understanding the flow distribution from a radiator fan for further research work.
Keywords: Computational fluid dynamics (CFD), acid pump truck (APT) Tier4 Repower, axial flow fan, area weighted average static pressure difference, contour plots.
3370 Protein Graph Partitioning by Mutually Maximization of cycle-distributions
Authors: Frank Emmert Streib
Abstract:
The classification of protein structure is commonly performed not for the whole protein but for structural domains, i.e., compact functional units preserved during evolution. Hence, a first step towards a protein structure classification is the separation of the protein into its domains. We approach the problem of protein domain identification by proposing a novel graph theoretical algorithm. We represent the protein structure as an undirected, unweighted and unlabeled graph whose nodes correspond to the secondary structure elements of the protein. This graph is called the protein graph. The domains are then identified as partitions of the graph corresponding to vertex sets obtained by the maximization of an objective function, which mutually maximizes the cycle distributions found in the partitions of the graph. Our algorithm does not utilize any other kind of information besides the cycle distribution to find the partitions. If a partition is found, the algorithm is iteratively applied to each of the resulting subgraphs. As a stopping criterion, we calculate numerically a significance level which indicates the stability of the predicted partition against a random rewiring of the protein graph. Hence, our algorithm terminates its iterative application automatically. We present results for one- and two-domain proteins and compare our results with the domains manually assigned in the SCOP database; differences are discussed.
Keywords: Graph partitioning, unweighted graph, protein domains.
3369 Steady State Creep Behavior of Functionally Graded Thick Cylinder
Authors: Tejeet Singh, Harmanjit Singh
Abstract:
The creep behavior of a thick-walled functionally graded cylinder consisting of AlSiC and subjected to internal pressure and high temperature has been analyzed. The functional relationship between strain rate and stress can be described by the well-known threshold stress based creep law with a stress exponent of five. The effect of imposing a non-linear particle gradient on the distribution of creep stresses in the thick-walled functionally graded composite cylinder has been investigated. The study revealed that for the assumed non-linear particle distribution, the radial stress decreases throughout the cylinder, whereas the tangential, axial and effective stresses show an averaging effect. The strain rates in the functionally graded composite cylinder could be reduced to a significant extent by employing a non-linear gradient in the distribution of the reinforcement.
Keywords: Functionally Graded Material, Pressure, Steady State Creep, Thick-Cylinder.
3368 Driver of Tectonic Plate Fracture and Movement
Authors: Xuguang Leng
Abstract:
The theory of the tectonic plate asteroid driver proposes that comet and asteroid collisions have ample energy to fracture, move, and deform tectonic plates. The enormous kinetic energy of an asteroid collision is dissipated through the fracture and violent movement of the tectonic plates, and stored in the plate deformations. The stored energy will be released in the future through slow plate movement. The reflection of the plate edge upwards upon collision impact causes the plate to sit on top of the adjacent plate and creates the subduction plate. The higher probability and higher energy of asteroid collisions in the equatorial area provide the net energy to drive heavier land plates to higher latitudes, offsetting the tidal and self-spin forces and creating a more random distribution of land plates. The trend of asteroid collisions is toward lower frequency and intensity, as loose objects merge into the planets and Jupiter takes an ever larger share of collisions. As the overall energy input from asteroid collisions decreases, plate movement is slowing down, and eventually the land plates will congregate towards the equatorial area. The current trajectory of plate movements is the cumulative effect of past asteroid collisions, and it can be altered, and new plates created, by future collisions.
Keywords: Tectonic plate, Earth, asteroid, comet.
3367 Random Projections for Dimensionality Reduction in ICA
Authors: Sabrina Gaito, Andrea Greppi, Giuliano Grossi
Abstract:
In this paper we present a technique to speed up ICA based on the idea of reducing the dimensionality of the data set while preserving the quality of the results. In particular, we refer to the FastICA algorithm, which uses kurtosis as the statistical property to be maximized. By performing a particular Johnson-Lindenstrauss-like projection of the data set, we find the minimum dimensionality reduction rate ρ, defined as the ratio between the size k of the reduced space and the original one d, which guarantees a narrow confidence interval for such an estimator with a high confidence level. The derived dimensionality reduction rate depends on a system control parameter β easily computed a priori on the basis of the observations only. Extensive simulations have been done on different sets of real world signals. They show that the dimensionality reduction is actually very high, that it preserves the quality of the decomposition, and that it impressively speeds up FastICA. On the other hand, a set of signals on which the estimated reduction rate is greater than 1 exhibits bad decomposition results if reduced, thus validating the reliability of the parameter β. We are confident that our method will lead to a better approach to real time applications.
Keywords: Independent Component Analysis, FastICA algorithm, Higher-order statistics, Johnson-Lindenstrauss lemma.
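A minimal sketch of the overall idea follows: a Gaussian random projection (one Johnson-Lindenstrauss-type construction) of the observations, then FastICA on the reduced data. The mixing setup, the reduced dimension k, and the use of scikit-learn's FastICA with the cube (kurtosis-like) contrast are illustrative assumptions; selecting the reduction rate via the paper's control parameter β is not reproduced here.

```python
# Sketch: Johnson-Lindenstrauss-style Gaussian random projection of the
# observations followed by FastICA on the reduced data. Mixing setup and
# reduced dimension k are arbitrary; the beta-based choice of the reduction
# rate from the paper is not reproduced.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_sources, d, n_samples = 4, 64, 5000
S = rng.laplace(size=(n_samples, n_sources))        # non-Gaussian sources
A = rng.normal(size=(n_sources, d))                  # mixing to d channels
X = S @ A                                            # observed d-dimensional data

k = 16                                               # reduced dimension (k << d)
R = rng.normal(size=(d, k)) / np.sqrt(k)             # Gaussian JL projection
X_red = X @ R                                        # project observations

ica = FastICA(n_components=n_sources, fun="cube", random_state=0)
S_est = ica.fit_transform(X_red)                     # estimated sources
print("estimated sources shape:", S_est.shape)
```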
3366 Investigation into the Optimum Hydraulic Loading Rate for Selected Filter Media Packed in a Continuous Upflow Filter
Authors: A. Alzeyadi, E. Loffill, R. Alkhaddar
Abstract:
Continuous upflow filters can combine nutrient (nitrogen and phosphate) and suspended solid removal in one unit process. The contaminant removal can be achieved chemically or biologically; in both processes the filter removal efficiency depends on the interaction between the packed filter media and the influent. In this paper a residence time distribution (RTD) study was carried out to understand and compare the transfer behaviour of contaminants through selected filter media packed in a laboratory-scale continuous upflow filter; the selected filter media are limestone and white dolomite. The experimental work was conducted by injecting a tracer (red drain dye, RDD) into the filtration system and then measuring the tracer concentration at the outflow as a function of time; the tracer injection was applied at hydraulic loading rates (HLRs) of 3.8 to 15.2 m h-1. The results were analysed according to the cumulative distribution function F(t) to estimate the residence time of the tracer molecules inside the filter media. The mean residence time (MRT) and variance σ² are two moments of the RTD that were calculated to compare the RTD characteristics of limestone with those of white dolomite. The results showed that the exit-age distribution of the tracer was better at HLRs of 3.8 to 7.6 m h-1 for limestone and at 3.8 m h-1 for white dolomite. At these HLRs the cumulative distribution function F(t) revealed that the residence time of the tracer inside the limestone was longer than in the white dolomite; all the tracer took 8 minutes to leave the white dolomite at 3.8 m h-1, whereas the same amount of tracer took 10 minutes to leave the limestone at the same HLR. In conclusion, determining the optimal hydraulic loading rate, which achieves the better influent distribution over the filtration system, helps to identify the applicability of a material as filter media. Further work will examine the efficiency of the limestone and white dolomite for phosphate removal by pumping a phosphate solution into the filter at HLRs of 3.8 to 7.6 m h-1.
Keywords: Filter media, hydraulic loading rate, residence time distribution, tracer.
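The short sketch below shows how the RTD quantities used above, the exit-age distribution E(t), the cumulative distribution F(t), the mean residence time, and the variance, follow from a measured tracer response curve. The concentration-time values are placeholders, not the measured data.

```python
# Sketch: residence time distribution moments (MRT and variance) and the
# cumulative distribution F(t) from a tracer response curve. The
# concentration-time values below are placeholders, not the measured data.
import numpy as np

t = np.arange(0, 11, 1.0)                                # min
C = np.array([0, 1, 5, 8, 10, 8, 6, 4, 3, 2, 1.0])       # tracer conc. at outlet

E = C / np.trapz(C, t)                 # exit-age distribution E(t)
F = np.array([np.trapz(E[:i + 1], t[:i + 1]) for i in range(len(t))])  # F(t)

mrt = np.trapz(t * E, t)               # mean residence time
var = np.trapz((t - mrt) ** 2 * E, t)  # variance sigma^2

print(f"MRT = {mrt:.2f} min, variance = {var:.2f} min^2")
print("F(t) =", np.round(F, 2))
```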
3365 PIIN Suppression Using Random Diagonal Code for Spectral Amplitude Coding Optical CDMA System
Authors: Hilal Adnan Fadhil, Syed Alwei, R. Badlishah Ahmad
Abstract:
A new code for spectral-amplitude coding optical code-division multiple-access systems, called the Random Diagonal (RD) code, is proposed. This code is constructed using a code segment and a data segment. One of the important properties of this code is that the cross correlation at the data segment is always zero, which means that Phase Intensity Induced Noise (PIIN) is reduced. For the performance analysis, the effects of phase-induced intensity noise, shot noise, and thermal noise are considered simultaneously. The bit-error rate (BER) performance is compared with the Hadamard and Modified Frequency Hopping (MFH) codes. It is shown that a system using the new code matrices not only suppresses PIIN but also allows a larger number of active users compared with other codes. Simulation results show that, using point-to-point transmission with three encoded channels, the RD code has better BER performance than the other codes; it is also found that at 0 dBm the PIIN noise levels are 10^-10 and 10^-11 for RD and MFH, respectively.
Keywords: OCDMA, MFH, PIIN, BER.
3364 Determining the Best Fitting Distributions for Minimum Flows of Streams in Gediz Basin
Authors: Naci Büyükkaracığan
Abstract:
Today, the need for water sources is increasing swiftly due to population growth. At the same time, it is known that some regions will face water shortage and drought because of global warming and climate change. In this context, the evaluation and analysis of hydrological data, such as observed trends and the prediction of drought and floods from short-term flows, is of great importance. The selection of the most accurate probability distribution is important for describing the low flow statistics in studies related to drought analysis. As with many basins in Turkey, the Gediz River basin will be affected considerably by drought, which will decrease the amount of available water. The aim of this study is to derive appropriate probability distributions for the frequency analysis of annual minimum flows at 6 gauging stations of the Gediz Basin. After applying 10 different probability distributions, six different parameter estimation methods and 3 goodness-of-fit tests, the Pearson 3 distribution and the general extreme values distribution were found to give optimal results.
Keywords: Gediz Basin, goodness-of-fit tests, Minimum flows, probability distribution.
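As an illustration of the fitting and screening workflow described above, the sketch below fits two of the candidate distributions (Pearson type 3 and GEV) to a series of annual minimum flows and compares them with a Kolmogorov-Smirnov test. The flow values are placeholders, and scipy's maximum-likelihood fit is only one of the parameter-estimation methods compared in the paper.

```python
# Sketch: fitting two candidate distributions (Pearson type 3 and GEV) to
# annual minimum flows and screening them with a Kolmogorov-Smirnov test.
# Flows are placeholders; scipy fits by maximum likelihood, one of several
# estimation methods compared in the paper.
import numpy as np
from scipy import stats

q_min = np.array([1.8, 2.4, 0.9, 1.2, 3.1, 2.0, 1.5, 0.7, 2.8, 1.1,
                  1.9, 2.2, 1.3, 0.8, 2.6])          # annual minimum flows, m3/s

for name, dist in [("Pearson 3", stats.pearson3), ("GEV", stats.genextreme)]:
    params = dist.fit(q_min)
    ks_stat, p_value = stats.kstest(q_min, dist.cdf, args=params)
    print(f"{name}: KS statistic = {ks_stat:.3f}, p = {p_value:.3f}")
# The distribution with the smallest KS statistic (largest p-value) is the
# better-fitting candidate for low-flow frequency analysis.
```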
3363 Unconditionally Secure Quantum Payment System
Authors: Essam Al-Daoud
Abstract:
A potentially serious problem with current payment systems is that their underlying hard problems from number theory may be solved by either a quantum computer or unanticipated future advances in algorithms and hardware. A new quantum payment system is proposed in this paper. The suggested system makes use of fundamental principles of quantum mechanics to ensure unconditional security without prior arrangements between customers and vendors. More specifically, the new system uses Greenberger-Horne-Zeilinger (GHZ) states and Quantum Key Distribution to authenticate the vendors and guarantee the transaction integrity.
Keywords: Bell state, GHZ state, Quantum key distribution, Quantum payment system.
3362 Optimizing Logistics for Courier Organizations with Considerations of Congestions and Pickups: A Courier Delivery System in Amman as Case Study
Authors: Nader A. Al Theeb, Zaid Abu Manneh, Ibrahim Al-Qadi
Abstract:
The traveling salesman problem (TSP) is a combinatorial integer optimization problem that asks "What is the optimal route for a vehicle to traverse in order to deliver requests to a given set of customers?". It is widely used by the distribution centers of package carrier companies. The main goal of applying the TSP in courier organizations is to minimize the time it takes the courier on each trip to deliver or pick up the shipments during a day. In this article, an optimization model is constructed to create a new TSP variant to optimize the routing in a courier organization with a consideration of congestion in Amman, the capital of Jordan. Real data were collected by different methods and analyzed. Then, Concert Technology (CPLEX) was used to solve the proposed model for some randomly generated data instances and for the real collected data. In the end, the results showed a great improvement in time compared with the current trip times, and an economic study was conducted afterwards to figure out the impact of using such models.
Keywords: Travel salesman problem, congestions, pick-up, integer programming, package carriers, service engineering.
3361 A Novel Probablistic Strategy for Modeling Photovoltaic Based Distributed Generators
Authors: Engy A. Mohamed, Yasser G. Hegazy
Abstract:
This paper presents a novel algorithm for modeling photovoltaic based distributed generators for the purpose of optimal planning of distribution networks. The proposed algorithm utilizes the sequential Monte Carlo method in order to accurately consider the stochastic nature of photovoltaic based distributed generators. The proposed algorithm is implemented in the MATLAB environment, and the results obtained are presented and discussed.
Keywords: Cumulative distribution function, distributed generation, Monte Carlo.
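The sketch below illustrates the Monte Carlo idea in its simplest form: sample hourly solar irradiance from an assumed Beta distribution and map it to PV output power. The Beta parameters and the linear irradiance-to-power model are illustrative assumptions, not the paper's fitted values, and the sketch is in Python rather than the MATLAB environment mentioned above.

```python
# Sketch of the Monte Carlo idea: sample hourly solar irradiance from an
# assumed Beta distribution and map it to PV output power. The Beta
# parameters and the simple PV model are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(42)
alpha, beta = 2.0, 3.5          # assumed Beta shape parameters for one hour
s_max = 1000.0                  # W/m^2, irradiance at which rated power is reached
p_rated = 250.0                 # kW, rated PV plant output

n_states = 10000                # Monte Carlo samples for this hour
s = s_max * rng.beta(alpha, beta, size=n_states)      # sampled irradiance
p_pv = p_rated * np.clip(s / s_max, 0.0, 1.0)         # irradiance-to-power map

print(f"expected PV output = {p_pv.mean():.1f} kW, std = {p_pv.std():.1f} kW")
# Repeating this sampling hour by hour (sequentially over a year) yields the
# probabilistic generation profile used in distribution-network planning.
```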
3360 Application of Single Tuned Passive Filters in Distribution Networks at the Point of Common Coupling
Authors: M. Almutairi, S. Hadjiloucas
Abstract:
The harmonic distortion of voltage is important in relation to power quality due to the interaction between the large diffusion of non-linear and time-varying single-phase and three-phase loads and power supply systems. However, harmonic distortion levels can be reduced by improving the design of polluting loads or by applying arrangements and adding filters. The application of passive filters is an effective solution that can be used to achieve harmonic mitigation, mainly because filters offer high efficiency and simplicity and are economical. Additionally, their different possible frequency response characteristics can be used to achieve certain required harmonic filtering targets. With these ideas in mind, the objective of this paper is to determine the size of single tuned passive filter that works best in distribution networks in order to economically limit violations caused at a given point of common coupling (PCC). This article suggests that a single tuned passive filter could be employed in typical industrial power systems. Furthermore, constrained optimization can be used to find the optimal sizing of the passive filter in order to reduce both harmonic voltages and harmonic currents in the power system to an acceptable level, and, thus, improve the load power factor. The optimization technique works to minimize the voltage total harmonic distortion (VTHD) and current total harmonic distortion (ITHD), where maintaining a given power factor within a specified range is desired. According to IEEE Standard 519, both indices are viewed as constraints for the optimal passive filter design problem. The performance of this technique is discussed using numerical examples taken from previous publications.
Keywords: Harmonics, passive filter, power factor, power quality.
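For orientation, the following sketch applies the standard textbook sizing relations for a single tuned filter, choosing C from the reactive-power rating at the fundamental, L to tune at the target harmonic, and R from an assumed quality factor. The bus voltage, reactive power, tuned harmonic, and quality factor are example values, and the constrained-optimization step of the paper is not reproduced.

```python
# Sketch of the standard sizing relations for a single tuned passive filter.
# Bus voltage, reactive-power rating, tuned harmonic and quality factor are
# example values; the paper's constrained optimization is not reproduced.
import math

V_ll = 13.8e3        # line-to-line bus voltage [V]
Q_c = 1.5e6          # reactive power supplied by the filter at 50 Hz [var]
h = 5                # harmonic order the filter is tuned to
Qf = 30              # quality factor
f1 = 50.0
w1 = 2 * math.pi * f1

Xc = V_ll ** 2 / Q_c             # capacitive reactance at fundamental [ohm]
C = 1.0 / (w1 * Xc)              # filter capacitance [F]
XL = Xc / h ** 2                 # inductive reactance at fundamental [ohm]
L = XL / w1                      # filter inductance [H]
Xn = math.sqrt(L / C)            # characteristic reactance at the tuned frequency
R = Xn / Qf                      # series resistance from the quality factor [ohm]

print(f"C = {C*1e6:.2f} uF, L = {L*1e3:.2f} mH, R = {R:.3f} ohm")
```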
3359 Efficient Antenna Array Beamforming with Robustness against Random Steering Mismatch
Authors: Ju-Hong Lee, Ching-Wei Liao, Kun-Che Lee
Abstract:
This paper deals with the problem of using antenna sensors for adaptive beamforming in the presence of random steering mismatch. We present an efficient adaptive array beamformer that is robust against this mismatch. The robustness of the proposed beamformer comes from the efficient designation of the steering vector. Using the received array data vector, we construct an appropriate correlation matrix associated with the received array data vector and a correlation matrix associated with the signal sources. Then, the eigenvector associated with the largest eigenvalue of the constructed signal correlation matrix is designated as an appropriate estimate of the steering vector. Finally, the adaptive weight vector required for adaptive beamforming is obtained by using the estimated steering vector and the constructed correlation matrix of the array data vector. Simulation results confirm the effectiveness of the proposed method.
Keywords: Adaptive beamforming, antenna array, linearly constrained minimum variance, robustness, steering vector.
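The sketch below follows the general recipe described above: estimate the steering vector as the principal eigenvector of a signal correlation matrix and then form MVDR (single-constraint LCMV) weights from the array data correlation matrix. The array geometry, signals, and the crude way the signal correlation matrix is constructed here are simplified assumptions, not the authors' exact procedure.

```python
# Sketch: steering vector from the principal eigenvector of a signal
# correlation matrix, then MVDR weights from the data correlation matrix.
# Geometry, signals and the signal-correlation construction are simplified.
import numpy as np

rng = np.random.default_rng(0)
M, N = 8, 2000                         # sensors, snapshots
theta_true = np.deg2rad(12.0)          # actual (mismatched) arrival angle
a_true = np.exp(1j * np.pi * np.arange(M) * np.sin(theta_true))

s = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)     # desired signal
n = 0.1 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))  # noise
X = np.outer(a_true, s) + n            # received data matrix (M x N)

R = X @ X.conj().T / N                 # data correlation matrix
# Crude signal correlation matrix: data correlation minus a noise-floor estimate.
sigma2 = np.min(np.linalg.eigvalsh(R))
Rs = R - sigma2 * np.eye(M)

eigval, eigvec = np.linalg.eigh(Rs)
a_hat = eigvec[:, -1]                  # principal eigenvector = steering estimate
a_hat *= np.sqrt(M) / np.linalg.norm(a_hat)

w = np.linalg.solve(R, a_hat)
w /= a_hat.conj() @ w                  # MVDR: w = R^-1 a / (a^H R^-1 a)
print("|w^H a_true| =", abs(w.conj() @ a_true))
```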
3358 Split-Pipe Design of Water Distribution Network Using Simulated Annealing
Authors: J. Tospornsampan, I. Kita, M. Ishii, Y. Kitamura
Abstract:
In this paper a procedure for the split-pipe design of looped water distribution networks based on the use of simulated annealing is proposed. Simulated annealing is a heuristic search algorithm, motivated by an analogy with physical annealing in solids, and is capable of solving combinatorial optimization problems. In contrast to a split-pipe design derived from a continuous diameter design, as implemented in conventional optimization techniques, the split-pipe design proposed in this paper is derived from a discrete diameter design in which a set of pipe diameters is chosen directly from a specified set of commercial pipes. The optimality and feasibility of the solutions are guaranteed by the proposed method. The performance of the proposed procedure is demonstrated by solving three well-known water distribution network problems taken from the literature. Simulated annealing provides very promising solutions, and the lowest-cost solutions are found for all of these test problems. The results obtained from these applications show that simulated annealing is able to handle the combinatorial optimization problem of the least-cost design of water distribution networks. The technique can be considered as an alternative tool for similar areas of research. Further applications and improvements of the technique are expected as well.
Keywords: Combinatorial problem, Heuristics, Least-cost design, Looped network, Pipe network, Optimization.
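The sketch below shows a generic simulated annealing loop over a discrete design vector, one commercial diameter index per pipe. The cost function is a stand-in; the actual split-pipe design evaluates network hydraulics and pipe costs and penalizes pressure-violating designs, which is not reproduced here.

```python
# Sketch of a generic simulated annealing loop over a discrete design vector
# (one commercial diameter index per pipe). The cost function is a stand-in
# for the real hydraulic simulation plus pipe-cost model.
import math
import random

diameters = [100, 150, 200, 250, 300, 350, 400]   # commercial sizes [mm]
n_pipes = 8

def cost(design):
    # Placeholder cost: larger diameters cost more; very small ones are
    # penalized as if hydraulically infeasible.
    c = sum(diameters[i] ** 1.5 for i in design)
    penalty = sum(5e4 for i in design if diameters[i] < 150)
    return c + penalty

random.seed(1)
current = [random.randrange(len(diameters)) for _ in range(n_pipes)]
best, best_cost = current[:], cost(current)
T, alpha = 1e4, 0.995

for _ in range(20000):
    cand = current[:]
    cand[random.randrange(n_pipes)] = random.randrange(len(diameters))  # perturb
    delta = cost(cand) - cost(current)
    if delta < 0 or random.random() < math.exp(-delta / T):
        current = cand
        if cost(current) < best_cost:
            best, best_cost = current[:], cost(current)
    T *= alpha                                          # cooling schedule

print("best design:", [diameters[i] for i in best], "cost:", round(best_cost))
```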
3357 The Estimation Method of Stress Distribution for Beam Structures Using the Terrestrial Laser Scanning
Authors: Sang Wook Park, Jun Su Park, Byung Kwan Oh, Yousok Kim, Hyo Seon Park
Abstract:
This study suggests an estimation method for the stress distribution of beam structures based on TLS (terrestrial laser scanning). The main components of the method are the creation of lattices of the raw TLS data that satisfy a suitable condition and the application of CSSI (cubic smoothing spline interpolation) for estimating the stress distribution. Estimation of the stress distribution of a structural member or of the whole structure is one of the important factors for the safety evaluation of the structure. Existing sensors, which include the ESG (electric strain gauge) and LVDT (linear variable differential transformer), can be categorized as contact-type sensors that must be installed on the structural members, and they have various limitations such as the need for a separate space where the network cables are installed and the difficulty of access for sensor installation in real buildings. To overcome these problems inherent in contact-type sensors, the TLS system of LiDAR (light detection and ranging), which can measure the displacement of a target at long range without the influence of the surrounding environment and also capture the whole shape of the structure, has been applied to the field of structural health monitoring. An important characteristic of TLS measurement is the formation of point clouds, which contain many points with local coordinates. Point clouds are not linearly distributed but dispersed. Thus, interpolation is essential for analyzing point clouds. Through the formation of averaged lattices and CSSI of the raw data, a method that can estimate the displacement of a simple beam was developed. The developed method can also be extended to calculate the strain and is finally applicable to estimating the stress distribution of a structural member. To verify the validity of the method, a loading test on a simple beam was conducted and measured with TLS. Through a comparison of the estimated stress and the reference stress, the validity of the method is confirmed.
Keywords: Structural health monitoring, terrestrial laser scanning, estimation of stress distribution, coordinate transformation, cubic smoothing spline interpolation.
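The interpolation step is sketched below: scattered point-cloud points are averaged into lattices along the beam axis and a cubic smoothing spline is fitted to the deflections, whose second derivative is proportional to the bending strain. scipy's UnivariateSpline (k=3) stands in here for the paper's cubic smoothing spline interpolation, and the point cloud is synthetic.

```python
# Sketch of the interpolation step: average simulated point-cloud points into
# lattices along the beam axis and fit a cubic smoothing spline to the
# deflections. UnivariateSpline (k=3) stands in for the paper's CSSI.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
span = 6.0                                       # simple beam span [m]
x = rng.uniform(0, span, 5000)                   # scanned x-coordinates
w_true = -0.01 * np.sin(np.pi * x / span)        # true deflection shape [m]
z = w_true + rng.normal(0, 0.002, x.size)        # noisy TLS "measurements"

# Averaged lattices: bin the scattered points along the beam axis.
edges = np.linspace(0, span, 31)
centers = 0.5 * (edges[:-1] + edges[1:])
idx = np.digitize(x, edges) - 1
z_avg = np.array([z[idx == i].mean() for i in range(len(centers))])

spline = UnivariateSpline(centers, z_avg, k=3, s=len(centers) * 1e-6)
curvature = spline.derivative(n=2)(centers)      # w''(x), proportional to strain
print("max deflection = %.4f m, max |curvature| = %.5f 1/m"
      % (abs(spline(centers)).max(), abs(curvature).max()))
```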
3356 Using Artificial Neural Network to Forecast Groundwater Depth in Union County Well
Authors: Zahra Ghadampour, Gholamreza Rakhshandehroo
Abstract:
A concern that researchers usually face in different applications of Artificial Neural Networks (ANN) is the determination of the size of the effective domain in time series. In this paper, a trial and error method was used on a groundwater depth time series to determine the size of the effective domain for an observation well in Union County, New Jersey, U.S. Different domains of 20, 40, 60, 80, 100, and 120 preceding days were examined, and 80 days was considered the effective length of the domain. Data sets for the different domains were fed to a feed-forward back-propagation ANN with one hidden layer, and the groundwater depths were forecasted. The root mean square error (RMSE) and the correlation factor (R2) of the estimated and observed groundwater depths were determined for all domains. In general, the groundwater depth forecast improved, as evidenced by lower RMSEs and higher R2s, when the domain length increased from 20 to 120. However, 80 days was selected as the effective domain because the improvement was less than 1% beyond that. Forecasted groundwater depths utilizing measured daily data (set #1) and data averaged over the effective domain (set #2) were compared. It was postulated that the more accurate nature of the measured daily data was the reason for the better forecast, with a lower RMSE (0.1027 m compared to 0.255 m), in set #1. However, the size of the input data in this set was 80 times the size of the input data in set #2, a factor that may increase the computational effort unpredictably. It was concluded that 80 daily data may be successfully utilized to lower the size of input data sets considerably, while maintaining the effective information in the data set.
Keywords: Neural networks, groundwater depth, forecast.
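The sketch below forecasts the next-day groundwater depth from the preceding 80 daily values with a one-hidden-layer feed-forward network, mirroring the windowing idea described above. scikit-learn's MLPRegressor stands in for the feed-forward back-propagation ANN of the study, and the depth series is synthetic.

```python
# Sketch: forecast groundwater depth from the preceding 80 daily values with a
# one-hidden-layer feed-forward network. MLPRegressor stands in for the
# feed-forward back-propagation ANN of the study; the series is synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)
days = np.arange(2000)
depth = 5 + 0.8 * np.sin(2 * np.pi * days / 365) + 0.05 * rng.normal(size=days.size)

window = 80                                    # effective domain length [days]
X = np.array([depth[i:i + window] for i in range(len(depth) - window)])
y = depth[window:]                             # next-day depth to forecast

split = int(0.8 * len(X))
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])

rmse = mean_squared_error(y[split:], pred) ** 0.5
print(f"RMSE = {rmse:.3f} m, R2 = {r2_score(y[split:], pred):.3f}")
```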
3355 Isotropic Stress Distribution in Cu/(001) Fe Two Sheets
Authors: A. Derardja, L. Baroura, M. Brioua
Abstract:
Nanotechnology based on epitaxial systems involves single or arranged misfit dislocations. In general, whatever the type of dislocation or the geometry of the array formed by the dislocations, it is important for experimental studies to know exactly the stress distribution, for which there is no analytical expression [1, 2]. This work, using a numerical analysis, deals with the relaxation of epitaxial layers having at their interface a periodic network of edge misfit dislocations. The stress distribution is estimated using isotropic elasticity. The results show that the thickness of the two sheets is a crucial parameter in the stress distributions and hence in the profile of the two sheets. A comparative study between the case of a single dislocation and the case of a parallel network shows that the layers relax better when the interface is covered by a parallel arrangement of misfit dislocations. Consequently, a single dislocation at the interface produces a significant stress field, which can be reduced by inserting a parallel network of dislocations with a suitable periodicity.
Keywords: Parallel array of misfit dislocations, interface, isotropic elasticity, single crystalline substrates, coherent interface.
3354 Determination of the Optimum Size of Building Stone Blocks: Case Study of Delichai Travertine Mine
Authors: Hesam Sedaghat Nejad, Navid Hosseini, Arash Nikvar Hassani
Abstract:
Determination of the optimum block size with high profitability is one of the significant parameters in the design of building stone mines. The aim of this study was to determine the optimum dimensions of building stone blocks in the Delichai travertine mine of Damavand in Tehran province by combining the effective parameters proven to determine the optimum dimensions of building stones, such as the spacing of joints and gaps and the constraints of the extraction tools, with the help of modeling in Gemcom software. To this end, following simulation of the topography of the mine, the block model was prepared, and then, in order to use the spacing of joints and discontinuities as a limiting factor, the existing joint set was added to the model. Since only one almost horizontal joint set with a slope of 5 degrees was available, this factor was effective only in determining the optimum height of the block; thus, to determine the optimum longitudinal and transverse dimensions of the extracted block, the power of the loader available in the mine was considered as the secondary limiting factor. According to the aforementioned factors, the optimal block size in this mine was determined to be 3.4 × 4 × 7 meters.
Keywords: Building stone, optimum block size, Delichai Travertine Mine, loader power.
3353 A Sociological Study of Rural Women Attitudes toward Education, Health and Work outside Home in Beheira Governorate, Egypt
Authors: A. A. Betah
Abstract:
This research was performed to evaluate the attitudes of rural women towards education, health and work outside the home. The study was based on a random sample of 147 rural women. Kafr-Rahmaniyah village was chosen for the study because its life expectancy at birth for females, education, and percentage of females in the labor force were the highest in the district. The study data were collected from the rural female respondents using a face-to-face questionnaire. In addition, the study estimated several factors such as age, main occupation, family size, monthly household income, geographic cosmopoliteness, and degree of social participation of the rural women respondents. Using the Statistical Package for the Social Sciences (SPSS), the data were analyzed by non-parametric statistical methods. The main finding of this study was a significant relationship between each of the previous variables and each of the rural women's attitudes toward education, health, and work outside the home. The study concluded with some recommendations. The most important element is ensuring attention to rural women's needs, requirements and rights via raising their health awareness, education and their contributions in their society.
Keywords: Attitudes, education, health, rural women, work outside the home.