Search results for: fast Fourier algorithms
3809 A Simple Chemical Precipitation Method of Titanium Dioxide Nanoparticles Using Polyvinyl Pyrrolidone as a Capping Agent and Their Characterization
Authors: V. P. Muhamed Shajudheen, K. Viswanathan, K. Anitha Rani, A. Uma Maheswari, S. Saravana Kumar
Abstract:
In this paper, a simple chemical precipitation route for the preparation of titanium dioxide nanoparticles, synthesized using titanium tetraisopropoxide as a precursor and polyvinyl pyrrolidone (PVP) as a capping agent, is reported. Differential Scanning Calorimetry (DSC) and Thermogravimetric Analysis (TGA) of the samples were recorded, and the phase transformation temperature from titanium hydroxide, Ti(OH)4, to titanium dioxide, TiO2, was investigated. The as-prepared Ti(OH)4 precipitate was annealed at 800°C to obtain TiO2 nanoparticles. The thermal, structural, morphological and textural characterizations of the TiO2 nanoparticle samples were carried out by techniques such as DSC-TGA, X-Ray Diffraction (XRD), Fourier Transform Infrared spectroscopy (FTIR), micro-Raman spectroscopy, UV-Visible absorption spectroscopy (UV-Vis), Photoluminescence spectroscopy (PL) and Field Emission Scanning Electron Microscopy (FESEM). DSC-TGA of the as-prepared precipitate confirmed a mass loss of around 30%. XRD results exhibited no diffraction peaks attributable to the anatase phase in the reaction products after solvent removal, indicating that the product is purely rutile. The vibrational frequencies of the two main absorption bands of the prepared samples are discussed on the basis of the FTIR analysis. The formation of nanospheres with diameters on the order of 10 nm was confirmed by FESEM. The optical band gap was determined from the UV-Visible spectrum, and a strong emission was observed in the photoluminescence spectra.
The obtained results suggest that this method provides a simple, efficient and versatile technique for preparing TiO2 nanoparticles and has the potential to be applied to other systems for photocatalytic activity.
Keywords: TiO2 nanoparticles, chemical precipitation route, phase transition, Fourier Transform Infrared spectroscopy (FTIR), micro-Raman spectroscopy, UV-Visible absorption spectroscopy (UV-Vis), Photoluminescence Spectroscopy (PL), Field Emission Scanning Electron Microscopy (FESEM)
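As a sketch of how an optical band gap is commonly extracted from a UV-Vis spectrum (the abstract does not specify the procedure used), the following assumes a direct allowed transition and uses synthetic data; the 3.0 eV edge is an illustrative value for rutile TiO2, not a measurement from this work.

```python
import numpy as np

def tauc_band_gap(energy_ev, absorbance, fit_window):
    # Direct-gap Tauc plot: (alpha*h*nu)^2 is linear in h*nu near the edge;
    # absorbance is used here as a proxy for the absorption coefficient alpha
    y = (absorbance * energy_ev) ** 2
    lo, hi = fit_window
    mask = (energy_ev >= lo) & (energy_ev <= hi)
    slope, intercept = np.polyfit(energy_ev[mask], y[mask], 1)
    return -intercept / slope                    # x-intercept estimates the gap

# Synthetic absorption edge for an assumed 3.0 eV direct gap
E = np.linspace(2.0, 4.0, 200)
A = np.sqrt(np.clip(E - 3.0, 0.0, None)) / E     # so (A*E)^2 = E - 3.0 above the edge
eg_est = tauc_band_gap(E, A, fit_window=(3.2, 3.8))
```

The fit window is chosen in the linear region above the edge; on real spectra its choice is a significant source of uncertainty.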
Procedia PDF Downloads 324
3808 Study of Durability of Porous Polymer Materials, Glass-Fiber-Reinforced Polyurethane Foam (R-PUF) in MarkIII Containment Membrane System
Authors: Florent Cerdan, Anne-Gaëlle Denay, Annette Roy, Jean-Claude Grandidier, Éric Laine
Abstract:
The insulation of the MarkIII membrane of Liquefied Natural Gas Carriers (LNGC) consists of a load-bearing system made of panels in reinforced polyurethane foam (R-PUF). During shipping, the cargo containment is potentially subject to risk events such as water leakage through the ballast tank wall. The aim of the present work is to further develop the understanding of water transfer mechanisms and of the effect of water on the properties of R-PUF. This multi-scale approach contributes to improving durability. Macroscale/mesoscale: firstly, the gravimetric technique made it possible to identify, at room temperature, the water transfer mechanisms and diffusion kinetics in the R-PUF. The solubility follows a first, fast-growing kinetic stage connected to water absorption by the micro-porosity, and then evolves slowly and linearly; this second stage is connected to molecular diffusion and dissolution of water in the dense polyurethane membranes. Secondly, to improve the understanding of the transfer mechanism, the evolution of the buoyant force was studied. This made it possible to identify the effect of the balance of total and partial pressures of the gas mixture contained in the surface pores. Mesoscale/microscale: differential scanning calorimetry (DSC) and Dynamic Mechanical Analysis (DMA) were used to investigate the hydration of the hard and soft segments of the polyurethane matrix, with the purpose of identifying the sensitivity of these two phases. It has been shown that the glass transition temperatures shift towards lower temperatures when the solubility of water increases. These observations lead to the conclusion that the polymer matrix is plasticized. Microscale: Fourier Transform Infrared (FTIR) spectroscopy was used to characterize the functional groups at the edge, the center and mid-way through the sample according to the duration of submersion.
The more water there is in the material, the more water binds to the urethane groups, and more specifically to the amide groups. The C=O urethane peak shifts quickly to lower frequencies before 24 hours of submersion and then evolves slowly, and the intensity of the peak decreases more gradually after that.
Keywords: porous materials, water sorption, glass transition temperature, DSC, DMA, FTIR, transfer mechanisms
Procedia PDF Downloads 529
3807 Models, Resources and Activities of Project Scheduling Problems
Authors: Jorge A. Ruiz-Vanoye, Ocotlán Díaz-Parra, Alejandro Fuentes-Penna, José J. Hernández-Flores, Edith Olaco Garcia
Abstract:
The Project Scheduling Problem (PSP) is a generic name given to a whole class of problems in which the best form, time, resources and costs for project scheduling must be determined. The PSP is an application area related to project management. This paper aims to be a guide to understanding PSP by presenting a survey of its general parameters: the resources (the elements that carry out the activities of a project) and the activities (sets of operations or tasks of a person or organization), the mathematical models of the main variants of PSP, and the algorithms used to solve those variants. Project scheduling is an important task in project management. The project scheduling problem has attracted researchers from the automotive industry, steel manufacturing, medical research, pharmaceutical research, telecommunications, aviation, software development, manufacturing management, innovation and technology management, the construction industry, government project management, financial services, machine scheduling, transportation management, and others. Project managers need to finish a project with the minimum cost and the maximum quality.
Keywords: PSP, combinatorial optimization problems, project management, manufacturing management, technology management
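To make the notion of scheduling activities under resource constraints concrete, here is a hedged sketch of a serial schedule-generation scheme for a single-resource RCPSP, one common PSP variant; the activity data are invented, and this is a generic heuristic rather than any specific algorithm from the surveyed literature.

```python
def serial_schedule(durations, preds, demand, capacity):
    """Greedy serial schedule-generation scheme for a single-resource RCPSP:
    place each activity, in topological order, at the earliest time that
    respects both precedence and the resource capacity."""
    n = len(durations)
    order, placed = [], set()
    while len(order) < n:                        # simple topological ordering
        for a in range(n):
            if a not in placed and all(p in placed for p in preds[a]):
                order.append(a)
                placed.add(a)
    start, usage = {}, {}                        # usage: time slot -> units in use
    for a in order:
        t = max((start[p] + durations[p] for p in preds[a]), default=0)
        while any(usage.get(t + dt, 0) + demand[a] > capacity
                  for dt in range(durations[a])):
            t += 1                               # delay until capacity allows
        for dt in range(durations[a]):
            usage[t + dt] = usage.get(t + dt, 0) + demand[a]
        start[a] = t
    return start

# Four activities: 0 precedes 1 and 2 (which may run in parallel), both precede 3
durations = [3, 2, 2, 1]
preds = [[], [0], [0], [1, 2]]
demand = [1, 1, 1, 1]
schedule = serial_schedule(durations, preds, demand, capacity=2)
```

With capacity 2, activities 1 and 2 run in parallel after activity 0 finishes, so activity 3 starts at time 5.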
Procedia PDF Downloads 418
3806 A Double Ended AC Series Arc Fault Location Algorithm Based on Currents Estimation and a Fault Map Trace Generation
Authors: Edwin Calderon-Mendoza, Patrick Schweitzer, Serge Weber
Abstract:
Series arc faults appear frequently and unpredictably in low voltage distribution systems. Many methods have been developed to detect this type of fault, and commercial protection systems such as AFCI (arc fault circuit interrupters) have been used successfully in electrical networks to prevent damage and catastrophic incidents like fires. However, these devices do not allow series arc faults to be located on the line in operating mode. This paper presents a location algorithm for series arc faults in a low-voltage indoor power line in an AC 230 V, 50 Hz home network. The method is validated through simulations in MATLAB. The fault location method uses the electrical parameters (resistance, inductance, capacitance, and conductance) of a 49 m indoor power line. The mathematical model of a series arc fault is based on the analysis of the V-I characteristics of the arc and consists basically of two antiparallel diodes and DC voltage sources. In a first step, the arc fault model is inserted at several different positions along the line, which is modeled using lumped parameters. At both ends of the line, currents and voltages are recorded for each arc fault generated at a different distance. In the second step, a fault map trace is created using signature coefficients obtained from Kirchhoff equations, which allow a virtual decoupling of the line's mutual capacitance. Each signature coefficient, obtained from the subtraction of estimated currents, is calculated taking into account the Fast Fourier Transform of the currents and voltages as well as the fault distance value. These parameters are then substituted into the Kirchhoff equations. In a third step, the same procedure used to calculate the signature coefficients is employed, but this time considering hypothetical fault distances at which the fault could appear. In this step the fault distance is unknown.
The iterative calculation from the Kirchhoff equations, considering stepped variations of the fault distance, yields a curve with a linear trend. Finally, the fault distance is estimated at the intersection of the two curves obtained in steps 2 and 3. The series arc fault model is validated by comparing simulated currents with real recorded currents. The model of the complete circuit is obtained for a 49 m line with a resistive load, and 11 different arc fault positions are considered for the map trace generation. The complete simulation demonstrates the performance of the method and the perspectives of the work.
Keywords: indoor power line, fault location, fault map trace, series arc fault
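The final intersection step can be illustrated with a small numerical sketch; the two fault-map traces below are hypothetical linear curves crossing at 18 m, stand-ins for the paper's simulated signatures rather than its actual data.

```python
import numpy as np

def intersect_distance(d, curve_ref, curve_hyp):
    """Estimate the fault distance as the crossing point of two fault-map
    traces sampled at the same hypothetical distances d, via linear
    interpolation around the first sign change of their difference."""
    diff = curve_ref - curve_hyp
    i = np.argmax(np.sign(diff[:-1]) != np.sign(diff[1:]))  # first sign change
    return d[i] - diff[i] * (d[i + 1] - d[i]) / (diff[i + 1] - diff[i])

# Hypothetical signature curves along a 49 m line, crossing at 18 m
d = np.linspace(0.0, 49.0, 50)
curve_ref = 0.8 * d + 2.0        # step-2 trace (from measured currents)
curve_hyp = 1.3 * d - 7.0        # step-3 trace (from hypothetical distances)
d_fault = intersect_distance(d, curve_ref, curve_hyp)
```

With linear traces the interpolation recovers the crossing exactly; on noisy simulated traces one would fit the linear trends first and intersect the fits.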
Procedia PDF Downloads 138
3805 In situ Immobilization of Mercury in a Contaminated Calcareous Soil Using Water Treatment Residual Nanoparticles
Authors: Elsayed A. Elkhatib, Ahmed M. Mahdy, Mohamed L. Moharem, Mohamed O. Mesalem
Abstract:
Mercury (Hg) is one of the most toxic and bioaccumulative heavy metals in the environment; however, cheap and effective in situ remediation technology is lacking. In this study, the effects of water treatment residual nanoparticles (nWTR) on the mobility, fractionation and speciation of mercury in an arid-zone soil from Egypt were evaluated. Water treatment residual nanoparticles with a high surface area (129 m² g⁻¹) were prepared using a Fritsch planetary mono mill. Scanning and transmission electron microscopy revealed that the nWTR particles are spherical in shape, with single-particle sizes in the range of 45 to 96 nm. X-ray diffraction (XRD) results ascertained that amorphous iron and aluminum (hydr)oxides and silicon oxide dominate the nWTR, with no apparent crystalline Fe-Al (hydr)oxides. Addition of nWTR greatly increased the Hg sorption capacities of the studied soils and greatly reduced the cumulative Hg released from them; application at 0.10% and 0.30% rates reduced the Hg released from the soil by 50% and 85%, respectively. The power function and first-order kinetics models described the desorption process from soils and nWTR-amended soils well, as evidenced by high coefficients of determination (R²) and low SE values. Application of nWTR at a rate of 0.3% greatly increased the association of Hg with the residual fraction (>93%) and significantly increased the most stable Hg species (amorphous Hg(OH)2), which in turn enhanced Hg immobilization in the studied soils. Fourier transform infrared spectroscopy analysis indicated the involvement of OH groups of nWTR in the retention of Hg(II), suggesting inner-sphere adsorption of Hg ions to surface functional groups on nWTR.
These results demonstrate the feasibility of using low-cost nWTR as a best management practice to immobilize excess Hg in contaminated soils.
Keywords: release kinetics, Fourier transform infrared spectroscopy, Hg fractionation, Hg species
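The first-order kinetics fit mentioned in the abstract can be sketched as follows; the release data and the parameters q0 and k are synthetic illustrations, not the study's measurements.

```python
import numpy as np

def fit_first_order(t, q):
    """Fit a first-order release model q(t) = q0 * exp(-k t) by linear
    regression on ln(q); returns (q0, k, R^2)."""
    y = np.log(q)                                # linearize: ln q = ln q0 - k t
    slope, ln_q0 = np.polyfit(t, y, 1)
    y_hat = slope * t + ln_q0
    r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
    return np.exp(ln_q0), -slope, r2

# Synthetic release decay with illustrative parameters q0 = 12, k = 0.15
t = np.linspace(0.0, 24.0, 25)                   # hours
q = 12.0 * np.exp(-0.15 * t)                     # mg/kg
q0, k, r2 = fit_first_order(t, q)
```

The power-function model mentioned alongside it can be fitted the same way after taking logs of both t and q.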
Procedia PDF Downloads 234
3804 Meta-Learning for Hierarchical Classification and Applications in Bioinformatics
Authors: Fabio Fabris, Alex A. Freitas
Abstract:
Hierarchical classification is a special type of classification task where the class labels are organised into a hierarchy, with more generic class labels being ancestors of more specific ones. Meta-learning for classification-algorithm recommendation consists of recommending to the user a classification algorithm, from a pool of candidate algorithms, for a dataset, based on the past performance of the candidate algorithms on other datasets. Meta-learning is normally used in conventional, non-hierarchical classification. By contrast, this paper proposes a meta-learning approach for the more challenging task of hierarchical classification and evaluates it on a large number of bioinformatics datasets. Hierarchical classification is especially relevant for bioinformatics problems, as protein and gene functions tend to be organised into a hierarchy of class labels. The proposed approach recommends the best hierarchical classification algorithm for a given hierarchical classification dataset. This work's contributions are: 1) an algorithm for splitting hierarchical datasets into new datasets to increase the number of meta-instances, 2) meta-features for hierarchical classification, and 3) an interpretation of decision-tree meta-models for hierarchical classification algorithm recommendation.
Keywords: algorithm recommendation, meta-learning, bioinformatics, hierarchical classification
Procedia PDF Downloads 314
3803 Decentralized Peak-Shaving Strategies for Integrated Domestic Batteries
Authors: Corentin Jankowiak, Aggelos Zacharopoulos, Caterina Brandoni
Abstract:
In a context of increasing stress put on the electricity network by the decarbonization of many sectors, energy storage is likely to be the key mitigating element, acting as a buffer between production and demand. In particular, the potential of storage is highest when it is connected close to the loads. Yet low-voltage storage struggles to penetrate the market at a large scale due to the novelty and complexity of the solution and the regulatory competitive advantage of fossil-fuel-based technologies. Strong and reliable numerical simulations are required to show the benefits of storage located near loads and to promote its development. The present study excludes aggregated control of storage: it is assumed that the storage units operate independently of one another without exchanging information, as is currently mostly the case. A computationally light battery model is presented in detail and validated by direct comparison with a domestic battery operating in real conditions. This model is then used to develop Peak-Shaving (PS) control strategies, as this is the decentralized service from which beneficial impacts are most likely to emerge. The aggregation of flatter, peak-shaved consumption profiles is likely to lead to flatter, arbitraged profiles at higher voltage levels. Furthermore, voltage fluctuations can be expected to decrease if spikes of individual consumption are reduced. The crucial part of achieving PS lies in the charging pattern: peaks depend on the switching on and off of appliances in the dwelling by the occupants and are therefore impossible to predict accurately. A performant PS strategy must therefore include a smart charge-recovery algorithm that ensures enough energy is present in the battery in case it is needed, without generating new peaks while charging the unit. Three categories of PS algorithms are introduced in detail.
The first category uses a constant threshold or power rate for charge recovery; the second uses the State Of Charge (SOC) as a decision variable; and the third uses a load forecast, the impact of whose accuracy is discussed, to generate PS. Performance metrics were defined in order to quantitatively evaluate their operation with respect to peak reduction, total energy consumption, and self-consumption of domestic photovoltaic generation. The algorithms were tested on load profiles with a 1-minute granularity over a 1-year period, and their performance was assessed against these metrics. The results show that constant charging thresholds or powers are far from optimal: a single value is unlikely to fit the variability of a residential profile. As could be expected, forecast-based algorithms show the highest performance; however, they depend on the accuracy of the forecast. On the other hand, SOC-based algorithms also perform satisfactorily, making them a strong alternative when a reliable forecast is not available.
Keywords: decentralised control, domestic integrated batteries, electricity network performance, peak-shaving algorithm
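A minimal sketch of a peak-shaving rule with non-peak-creating charge recovery follows (charging is allowed only while the load is below the threshold, so recovery cannot create a new peak); all parameter values and the load profile are invented for illustration and do not come from the paper.

```python
def soc_peak_shaving(load, threshold, capacity, soc0, recharge_power, dt=1/60):
    """Peak-shaving sketch: discharge to clip load above `threshold` (kW);
    recharge at up to `recharge_power` only while load is below the threshold,
    capping total draw at the threshold. Energies in kWh, dt in hours."""
    soc, grid = soc0, []
    for p in load:
        if p > threshold and soc > 0:
            discharge = min(p - threshold, soc / dt)   # clip the peak
            soc -= discharge * dt
            grid.append(p - discharge)
        elif p < threshold and soc < capacity:
            charge = min(recharge_power, (capacity - soc) / dt,
                         threshold - p)                # never exceed threshold
            soc += charge * dt
            grid.append(p + charge)
        else:
            grid.append(p)
    return grid, soc

# One hour of 1-minute load samples with a spike between minutes 20 and 30
load = [2.0] * 20 + [6.0] * 10 + [2.0] * 30
grid, soc = soc_peak_shaving(load, threshold=4.0, capacity=2.0, soc0=1.0,
                             recharge_power=1.0)
```

The grid draw never exceeds the threshold: the 6 kW spike is clipped to 4 kW, and recovery charging raises the off-peak draw only to 3 kW.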
Procedia PDF Downloads 118
3802 Acceleration of Lagrangian and Eulerian Flow Solvers via Graphics Processing Units
Authors: Pooya Niksiar, Ali Ashrafizadeh, Mehrzad Shams, Amir Hossein Madani
Abstract:
There are many computationally demanding applications in science and engineering which need efficient algorithms implemented on high-performance computers. Recently, Graphics Processing Units (GPUs) have drawn much attention compared to traditional CPU-based hardware and have opened up new improvement venues in scientific computing. One particular application area is Computational Fluid Dynamics (CFD), in which mature CPU-based codes need to be converted to GPU-based algorithms to take advantage of this new technology. In this paper, numerical solutions of two classes of discrete fluid flow models via both CPU and GPU are discussed and compared. Test problems include an Eulerian model of a two-dimensional incompressible laminar flow case and a Lagrangian model of a two-phase flow field. The CUDA programming standard is used to employ an NVIDIA GPU with 480 cores, and a C++ serial code is run on a single core of an Intel quad-core CPU. Up to two orders of magnitude speed-up is observed on the GPU for a certain range of grid resolutions or particle numbers. As expected, the Lagrangian formulation is better suited for parallel computation on the GPU, although the Eulerian formulation shows significant speed-up too.
Keywords: CFD, Eulerian formulation, graphics processing units, Lagrangian formulation
Procedia PDF Downloads 418
3801 NFC Kenaf Core Graphene Paper: In-situ Method Application
Authors: M. A. Izzati, R. Rosazley, A. W. Fareezal, M. Z. Shazana, I. Rushdan, M. Jani
Abstract:
An ultrasonic probe was used to produce nanofibrillated cellulose (NFC) from kenaf core. The NFC kenaf core and graphene were mixed using an in-situ method at a voltage of 5 V for 24 hours. The resulting NFC graphene paper was characterized by field emission scanning electron microscopy (FESEM), Fourier transform infrared (FTIR) spectra and thermogravimetric analysis (TGA). The properties of the NFC kenaf core graphene paper are compared with those of pure NFC kenaf core paper.
Keywords: NFC, kenaf core, graphene, in-situ method
Procedia PDF Downloads 394
3800 Study on the Thermal Conductivity about Porous Materials in Wet State
Authors: Han Yan, Jieren Luo, Qiuhui Yan, Xiaoqing Li
Abstract:
The thermal conductivity of porous materials is closely related to the thermal and moisture environment and to the overall energy consumption of a building, so its study is of great significance for realizing low-energy and economical buildings. Building on studies of the effective thermal conductivity of porous materials at home and abroad, the thermal conductivity in the wet state of expanded polystyrene board (EPS), extruded polystyrene board (XPS), polyurethane (PU) and phenolic resin (PF) boards of various densities has been studied through theoretical analysis and experimental research. Initially, the moisture absorption and desorption properties of the specimens were examined at different densities. The results indicate that moisture absorption in all four porous materials has three stages: fast, stable and gentle. For moisture desorption, there are two types: one exhibits a rapid stage, such as the XPS and PU boards, while the other lacks fast desorption and is instead more stable, such as the XPS and PF boards. Furthermore, the relationship between water content and thermal conductivity of the porous materials was studied and fitted, which showed that thermal conductivity continually increases with increasing water content. At the same time, the results also show that, for the same kind of material, the saturated moisture content increases as the density decreases. Finally, the moisture absorption and desorption properties of the four kinds of materials were compared comprehensively, and it turned out that the heat preservation performance of the PU board is the best, followed by the EPS, XPS and PF boards.
Keywords: porous materials, thermal conductivity, moisture content, transient hot-wire method
Procedia PDF Downloads 187
3799 Alumina Nanoparticles in One-Pot Synthesis of Pyrazolopyranopyrimidinones
Authors: Saeed Khodabakhshi, Alimorad Rashidi, Ziba Tavakoli, Sajad Kiani, Sadegh Dastkhoon
Abstract:
Alumina nanoparticles (γ-Al2O3 NPs) were prepared via a new and simple synthetic route and characterized by field emission scanning electron microscopy, X-ray diffraction, and Fourier transform infrared spectroscopy. The catalytic activity of the prepared γ-Al2O3 NPs was investigated for the one-pot, four-component synthesis of fused tri-heterocyclic compounds containing pyrazole, pyran, and pyrimidine. This procedure has advantages such as high efficiency, simplicity, a high rate, and environmental safety.
Keywords: alumina nanoparticles, one-pot, fused tri-heterocyclic compounds, pyran
Procedia PDF Downloads 332
3798 Development of Digital Twin Concept to Detect Abnormal Changes in Structural Behaviour
Authors: Shady Adib, Vladimir Vinogradov, Peter Gosling
Abstract:
Digital Twin (DT) technology is a new technology that appeared in the early 21st century. A DT is defined as the digital representation of living and non-living physical assets. By connecting the physical and virtual assets, data are transmitted smoothly, allowing the virtual asset to fully represent the physical asset. Although many studies have been conducted on the DT concept, there is still limited information about the ability of DT models to monitor and detect unexpected changes in structural behaviour in real time. This is due to the large computational effort required for the analysis and the excessively large amount of data transferred from sensors. This paper aims to develop the DT concept to detect abnormal changes in structural behaviour in real time using advanced modelling techniques, deep learning algorithms, and data acquisition systems, taking model uncertainties into consideration. Finite element (FE) models were first developed offline to be used with a reduced basis (RB) model order reduction technique for the construction of a low-dimensional space to speed up the analysis during the online stage. The RB model was validated against experimental test results for the establishment of a DT model of a two-dimensional truss. The established DT model and deep learning algorithms were used to identify the location of damage once it appeared during the online stage. Finally, the RB model was used again to identify the damage severity. It was found that using the RB model constructed offline speeds up the FE analysis during the online stage. The constructed RB model showed higher accuracy in predicting the damage severity, while the deep learning algorithms were found to be useful for estimating the location of damage of small severity.
Keywords: data acquisition system, deep learning, digital twin, model uncertainties, reduced basis, reduced order model
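The offline/online split enabled by a reduced-basis model can be sketched as follows, using a generic sparse SPD stiffness matrix as a stand-in for the FE model; this is a plain snapshot-based Galerkin projection, not the authors' specific RB construction, and the dimensions are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                                          # full-order degrees of freedom
# Stiffness-like SPD matrix (1-D Laplacian), standing in for an FE model
K = (np.diag(2.0 * np.ones(n))
     + np.diag(-np.ones(n - 1), 1)
     + np.diag(-np.ones(n - 1), -1))

# Offline stage: solve the full model for several load cases (snapshots)
F = rng.standard_normal((n, 12))                 # 12 training loads
snapshots = np.linalg.solve(K, F)
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
V = U                                            # reduced basis from snapshot SVD

# Online stage: Galerkin projection gives a small 12x12 system
f = F @ rng.standard_normal(12)                  # new load in the training span
K_r = V.T @ K @ V
u_r = np.linalg.solve(K_r, V.T @ f)
u_approx = V @ u_r                               # lift back to full dimension
u_full = np.linalg.solve(K, f)                   # reference full-order solution
rel_err = np.linalg.norm(u_approx - u_full) / np.linalg.norm(u_full)
```

Because the test load lies in the span of the training loads, the projection is essentially exact here; in practice the basis is truncated to the dominant singular vectors, trading a small error for a much cheaper online solve.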
Procedia PDF Downloads 100
3797 Application of Machine Learning Models to Predict Couchsurfers on Free Homestay Platform Couchsurfing
Authors: Yuanxiang Miao
Abstract:
Couchsurfing is a free homestay and social networking service accessible via a website and mobile app. Couchsurfers can directly request free accommodation from others and receive offers from each other. However, it is typically difficult to decide whether to accept or decline a request from a Couchsurfer, because the two parties do not know each other at all. People expect to meet Couchsurfers who are kind, generous, and interesting, but it is unavoidable to sometimes meet someone unfriendly. This paper utilized machine learning classification algorithms to help people distinguish 'good' from 'not good' Couchsurfers on the Couchsurfing website. By drawing on prior experience, such as Couchsurfers' profiles, their latest references, and other factors, it becomes possible to recognize what kind of Couchsurfer one is dealing with, which in turn helps people decide whether to host them or not. The value of this research lies in a case study in Kyoto, Japan, where the author hosted 54 Couchsurfers, collected relevant data from them, and finally built a model based on classification algorithms for predicting Couchsurfers. Lastly, the author offers some feasible suggestions for future research.
Keywords: Couchsurfing, Couchsurfers prediction, classification algorithm, hospitality tourism platform, hospitality sciences, machine learning
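As an illustration of the kind of classifier involved, here is a minimal logistic-regression sketch on invented guest features (reference count and profile completeness are hypothetical proxies); the paper's actual features, data, and algorithm choice are not reproduced here.

```python
import numpy as np

def train_logistic(X, y, lr=0.1, steps=2000):
    """Minimal logistic regression trained by batch gradient descent;
    the bias is folded in as an appended column of ones."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))        # predicted probabilities
        w -= lr * Xb.T @ (p - y) / len(y)        # gradient of log-loss
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (1.0 / (1.0 + np.exp(-Xb @ w)) >= 0.5).astype(int)

# Hypothetical guests: [positive references, profile completeness in 0-1]
rng = np.random.default_rng(1)
good = np.column_stack([rng.uniform(5, 30, 40), rng.uniform(0.6, 1.0, 40)])
bad = np.column_stack([rng.uniform(0, 4, 40), rng.uniform(0.0, 0.5, 40)])
X = np.vstack([good, bad])
y = np.concatenate([np.ones(40), np.zeros(40)])
w = train_logistic(X, y)
accuracy = (predict(w, X) == y).mean()
```

With only 54 real instances, as in the case study, cross-validation and simple models would matter more than the particular algorithm.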
Procedia PDF Downloads 133
3796 Permanent Reduction of Arc Flash Energy to Safe Limit on Line Side of 480 Volt Switchgear Incomer Breaker
Authors: Abid Khan
Abstract:
A recognized engineering challenge is the protection of personnel from fatal arc flash incident energy on the line side of 480-volt switchgear incomer breakers during maintenance activities. The incident energy is typically high due to slow fault clearance, and it can exceed the ratings of available personal protective equipment (PPE). A fault in this section of the switchgear is cleared by breakers or fuses in the upstream higher-voltage system (4160 volts or higher). The fault current reflected in the higher-voltage upstream system for a fault in the 480-volt switchgear is low, so clearance is slow and the incident energy, which grows with clearing time, is correspondingly higher. Installing overcurrent protection on the 480-volt system upstream of the incomer breaker provides fast enough operation to trip the upstream higher-voltage breaker when a fault develops at the incomer breaker, eliminating the slow clearance caused by the reduced fault current reflected in the upstream higher-voltage system. Since the fast overcurrent protection is permanently installed, it is always functional, does not require human intervention, and eliminates exposure to human error. It is installed at the location of the maintenance activities, and its operation can be locally monitored by craftsmen during maintenance.
Keywords: arc flash, mitigation, maintenance switch, energy level
Procedia PDF Downloads 195
3795 Relay Node Placement for Connectivity Restoration in Wireless Sensor Networks Using Genetic Algorithms
Authors: Hanieh Tarbiat Khosrowshahi, Mojtaba Shakeri
Abstract:
Wireless Sensor Networks (WSNs) consist of sets of sensor nodes with limited capability. WSNs may suffer multiple node failures when exposed to harsh environments such as military zones or disaster locations, losing connectivity by becoming partitioned into disjoint segments. Relay nodes (RNs) are introduced to restore connectivity. They cost more than sensors, as they benefit from mobility, more power and a longer transmission range, so the number used should be kept to a minimum. This paper addresses the problem of RN placement in a network with multiple disjoint segments by developing a genetic algorithm (GA). The problem is recast as the Steiner tree problem (which is known to be NP-hard), with the aim of finding the minimum number of Steiner points at which RNs are to be placed to restore connectivity. An upper bound on the number of RNs is first computed to set the length of the initial chromosomes. The GA then iteratively reduces the number of RNs and determines their locations at the same time. Experimental results indicate that the proposed GA is capable of establishing network connectivity using a reasonable number of RNs compared to the best existing work.
Keywords: connectivity restoration, genetic algorithms, multiple-node failure, relay nodes, wireless sensor networks
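A toy version of the GA idea can be sketched as a bitmask over candidate relay positions with a fitness that penalizes disconnection; the segment layout, candidate grid, radio range, and GA settings below are all invented, and this is far simpler than the paper's Steiner-tree formulation.

```python
import random
from itertools import combinations

random.seed(7)
SEGMENTS = [(0.0, 0.0), (10.0, 0.0), (5.0, 8.0)]   # disjoint segment positions
CANDIDATES = [(x, y) for x in range(0, 11, 2) for y in range(0, 9, 2)]
RANGE = 4.0                                        # radio range

def connected(points):
    # Union-find over nodes whose pairwise distance is within RANGE
    parent = list(range(len(points)))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for i, j in combinations(range(len(points)), 2):
        (x1, y1), (x2, y2) = points[i], points[j]
        if (x1 - x2) ** 2 + (y1 - y2) ** 2 <= RANGE ** 2:
            parent[find(i)] = find(j)
    return len({find(i) for i in range(len(points))}) == 1

def fitness(bits):
    # Minimize relay count; heavy penalty if the network stays partitioned
    relays = [c for c, b in zip(CANDIDATES, bits) if b]
    return (0 if connected(SEGMENTS + relays) else 1000) + len(relays)

def evolve(pop_size=40, gens=120, pmut=0.03):
    pop = [[random.randint(0, 1) for _ in CANDIDATES] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        nxt = pop[:4]                              # elitism
        while len(nxt) < pop_size:
            a, b = random.sample(pop[:20], 2)      # mate within the fitter half
            child = [x if random.random() < 0.5 else y for x, y in zip(a, b)]
            child = [1 - g if random.random() < pmut else g for g in child]
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

best = evolve()
best_fit = fitness(best)
```

Mirroring the paper's scheme, the chromosome length here plays the role of the precomputed upper bound, and selection pressure steadily prunes relays while the penalty preserves connectivity.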
Procedia PDF Downloads 244
3794 Convergence Results of Two-Dimensional Homogeneous Elastic Plates from Truncation of Potential Energy
Authors: Erick Pruchnicki, Nikhil Padhye
Abstract:
Plates are important engineering structures which have attracted extensive research since the 19th century. The subject of this work is the static analysis of a linearly elastic homogeneous plate under small deformations. A 'thin plate' is a three-dimensional structure comprising a small transverse dimension with respect to a flat mid-surface. The general aim of any plate theory is to deduce a two-dimensional model that approximately and accurately describes the plate's deformation in terms of mid-surface quantities. In recent decades, a common starting point for this purpose has been a series expansion of the displacement field across the thickness dimension in terms of the thickness parameter (h). Such attempts are mathematically consistent in deriving leading-order plate theories based on a certain a priori scaling between the thickness and the applied loads; for example, asymptotic methods aim at generating leading-order two-dimensional variational problems by postulating a formal asymptotic expansion of the displacement fields. These methods rigorously generate a hierarchy of two-dimensional models depending on the order of magnitude of the applied load with respect to the plate thickness. However, in practice, applied loads are external and thus not directly linked to or dependent on the geometry or thickness of the plate, rendering any such model (based on a priori scaling) of limited practical utility. In other words, the main limitation of these approaches is that they do not furnish a single plate model for all orders of applied loads.
Following the analogy of recent efforts deploying Fourier-series expansion to study the convergence of reduced models, we propose two-dimensional models resulting from truncation of the potential energy and rigorously prove the convergence of these two-dimensional plate models to the parent three-dimensional linear elasticity with increasing truncation order of the potential energy.
Keywords: plate theory, Fourier-series expansion, convergence result, Legendre polynomials
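A schematic form of the construction, in our own notation rather than the authors' exact expressions: with mid-surface coordinates (x_1, x_2), thickness coordinate x_3 in [-h, h], and Legendre polynomials P_n, the truncated displacement ansatz and the associated potential energy read

```latex
u(x_1, x_2, x_3) \;\approx\; u_N(x_1, x_2, x_3)
  \;=\; \sum_{n=0}^{N} u^{(n)}(x_1, x_2)\, P_n\!\left(\frac{x_3}{h}\right),
\qquad
J_N(u_N) \;=\; \frac{1}{2} \int_{\Omega} \mathbb{A}\, e(u_N) : e(u_N)\,\mathrm{d}x
  \;-\; \ell(u_N),
```

where A is the elasticity tensor, e(.) the symmetrized strain, and l the load functional; the claimed result is that the minimizers u_N of the truncated energy J_N converge to the three-dimensional elastic solution as N tends to infinity, without any a priori scaling between thickness and load.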
Procedia PDF Downloads 113
3793 Analyzing the Factors that Cause Parallel Performance Degradation in Parallel Graph-Based Computations Using Graph500
Authors: Mustafa Elfituri, Jonathan Cook
Abstract:
Recently, graph-based computations have become more important in large-scale scientific computing, as they provide a methodology for modelling many types of relations between independent objects. They are actively used in fields as varied as biology, social networks, cybersecurity, and computer networks. At the same time, graph problems have properties, such as irregularity and poor locality, that make their performance behaviour different from that of regular applications. Parallelizing graph algorithms is therefore a hard and challenging task. Initial evidence suggests that standard computer architectures do not perform very well on graph algorithms, and little is known about exactly what causes this. The Graph500 benchmark is a representative application for parallel graph-based computations, which have highly irregular data access and are driven more by traversing connected data than by computation. In this paper, we present results from analyzing the performance of various example implementations of Graph500, including a shared-memory (OpenMP) version, a distributed (MPI) version, and a hybrid version. We measured and analyzed all the factors that affect performance in order to identify possible changes that would improve it. Results are discussed in relation to the factors that contribute to performance degradation.
Keywords: graph computation, Graph500 benchmark, parallel architectures, parallel programming, workload characterization
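The Graph500 kernel is essentially breadth-first search; a minimal serial sketch (not any of the benchmark's reference implementations) shows why the access pattern is irregular, with frontier-driven, data-dependent neighbour visits:

```python
from collections import defaultdict

def bfs_levels(edges, root):
    """Level-synchronous BFS, the kernel timed by Graph500. Frontier expansion
    touches neighbour lists in a data-driven order, which is what stresses
    memory locality on parallel hardware."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    level = {root: 0}
    frontier = [root]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in level:          # irregular, unpredictable accesses
                    level[v] = level[u] + 1
                    nxt.append(v)
        frontier = nxt
    return level

levels = bfs_levels([(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)], root=0)
```

In the parallel versions discussed in the paper, the frontier is partitioned across threads (OpenMP) or ranks (MPI), and the `v not in level` check becomes the main source of contention and communication.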
Procedia PDF Downloads 149
3792 Probability Modeling and Genetic Algorithms in Small Wind Turbine Design Optimization: Mentored Interdisciplinary Undergraduate Research at LaGuardia Community College
Authors: Marina Nechayeva, Malgorzata Marciniak, Vladimir Przhebelskiy, A. Dragutan, S. Lamichhane, S. Oikawa
Abstract:
This presentation is a progress report on a faculty-student research collaboration at CUNY LaGuardia Community College (LaGCC) aimed at designing a small horizontal axis wind turbine optimized for the wind patterns on the roof of our campus. Our project combines statistical and engineering research. Our wind modeling protocol is based upon a recent wind study by a faculty-student research group at MIT, and some of our blade design methods are adopted from a senior engineering project at CUNY City College. Our use of genetic algorithms has been inspired by David Wood's work on small wind turbine design. We combine these diverse approaches in our interdisciplinary project in a way that has not been done before and improve upon certain techniques used by our predecessors. We employ several estimation methods to determine the best-fitting parametric probability distribution model for the local wind speed data obtained by correlating short-term on-site measurements with a long-term time series at the nearby airport. The model serves as a foundation for engineering research that focuses on adapting and implementing genetic algorithms (GAs) for engineering optimization of the wind turbine design using Blade Element Momentum Theory. GAs are used to create new airfoils with desirable aerodynamic specifications. Small-scale models of the best-performing designs are 3D printed and tested in the wind tunnel to verify the accuracy of relevant calculations. Genetic algorithms are applied to selected airfoils to determine the blade design (radial chord and pitch distribution) that would optimize the coefficient of power profile of the turbine. Our approach improves upon traditional blade design methods in that it lets us dispense with assumptions necessary to simplify the system of Blade Element Momentum Theory equations, thus resulting in more accurate aerodynamic performance calculations.
Furthermore, it enables us to design blades optimized for a whole range of wind speeds rather than a single value. Lastly, we improve upon known GA-based methods in that our algorithms are constructed to work with XFoil-generated airfoil data, which enables us to optimize blades using our own high glide ratio airfoil designs without having to rely upon available empirical data from existing airfoils, such as the NACA series. Beyond its immediate goal, this ongoing project serves as a training and selection platform for the CUNY Research Scholars Program (CRSP) through its annual Aerodynamics and Wind Energy Research Seminar (AWERS), an undergraduate summer research boot camp designed to introduce prospective researchers to the relevant theoretical background and methodology, get them up to speed with the current state of our research, and test their abilities and commitment to the program. Furthermore, several aspects of the research (e.g., writing code for 3D printing of airfoils) are adapted in the form of classroom research activities to enhance Calculus sequence instruction at LaGCC.
Keywords: engineering design optimization, genetic algorithms, horizontal axis wind turbine, wind modeling
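The GA-based blade optimization described above can be sketched, in highly simplified form, as a real-coded genetic algorithm; the function names, operators, and the toy fitness are our own illustrative assumptions, and the BEM power-coefficient evaluation is deliberately omitted:

```python
import random

def genetic_optimize(fitness, bounds, pop_size=30, gens=60, mut_rate=0.2, seed=1):
    """Minimal real-coded genetic algorithm.

    `fitness` maps a parameter vector (e.g. chord/pitch values at
    blade stations) to a score to maximise, such as the power
    coefficient from a BEM solver. A toy fitness is used below.
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=fitness, reverse=True)
        elite = scored[: pop_size // 2]              # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # blend crossover
            for i, (lo, hi) in enumerate(bounds):
                if rng.random() < mut_rate:               # uniform mutation
                    child[i] = rng.uniform(lo, hi)
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)
```

In the real pipeline, `fitness` would wrap a BEM solver evaluating the coefficient of power for a candidate chord/pitch distribution.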
Procedia PDF Downloads 232
3791 Geospatial Network Analysis Using Particle Swarm Optimization
Authors: Varun Singh, Mainak Bandyopadhyay, Maharana Pratap Singh
Abstract:
The shortest path (SP) problem concerns finding the path of minimum total cost from a specified origin to a specified destination in a given network. This problem has widespread applications. Important applications of the SP problem include vehicle routing in transportation systems, particularly in in-vehicle Route Guidance Systems (RGS), and the traffic assignment problem in transportation planning. Evolutionary methods such as Genetic Algorithms (GA), Ant Colony Optimization, and Particle Swarm Optimization (PSO) have been applied to complex optimization problems to overcome the shortcomings of existing shortest path analysis methods. It has been reported by various researchers that PSO performs better than other evolutionary optimization algorithms in terms of success rate and solution quality. Further, Geographic Information Systems (GIS) have emerged as key information systems for geospatial data analysis and visualization. This research paper focuses on the application of PSO for solving the shortest path problem between multiple points of interest (POI), based on spatial data of Allahabad City and traffic speed data collected using GPS. Geovisualization of the results of the analysis is carried out in GIS.
Keywords: particle swarm optimization, GIS, traffic data, outliers
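As a sketch of the optimization engine (not of the paper's path encoding, which is not specified here), a textbook global-best PSO might look like the following; all names and the demo cost function are our own:

```python
import random

def pso_minimize(cost, bounds, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=3):
    """Textbook global-best particle swarm optimization.

    For shortest-path problems the position vector is typically a
    priority encoding that is decoded into a path; here `cost` is
    left abstract and a toy function is used in the test.
    """
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])   # cognitive pull
                             + c2 * r2 * (gbest[d] - pos[i][d]))     # social pull
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])     # clamp to bounds
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost
```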
Procedia PDF Downloads 484
3790 Investigating the Need to Align with and Adapt Sustainability of Cotton
Authors: Girija Jha
Abstract:
This paper investigates the need to integrate sustainability into cotton production. The methodology is secondary research: examining the various environmental implications of cotton as a textile material across its life cycle and looking at ways of minimizing its ecological footprint. Cotton is called 'The Fabric of Our Lives'. History is replete with examples where this fabric was more than the fabric of our lives: it was a miracle fabric, a symbol of India's pride and of the social movement of Swaraj, Gandhiji's clarion call to self-reliance. Cotton is grown in more than 90 countries across the globe, on 2.5 percent of the world's arable land, with countries like China, India, and the United States accounting for almost three-fourths of global production. But cotton as a raw material has come under the scanner of sustainability experts for myriad reasons, a few of which are discussed here. It may take more than 20,000 liters of water to produce 1 kg of cotton. Cotton is primarily harvested from irrigated land, which leads to salinization and depletion of local water reservoirs, e.g., the drying up of the Aral Sea. Cotton is cultivated on 2.4% of the world's total cropland but accounts for 24% of insecticide use and 11% of pesticide use, leading to health hazards and an alarmingly dangerous impact on the ecosystem. One possible solution proposed for these problems was the genetically modified (GM) cotton crop. However, the use of GM cotton is still debatable and raises many ethical issues. The practices of mass production and increasing consumerism, and especially fast fashion, have been major culprits in disrupting this delicate balance. Disposable fashion, or fast fashion, is on the rise, and cotton, being one of the major material choices, adds to the problem. Denims, made of cotton, carry a strong fashion statement, and with washes being an integral part of their creation, they share a lot of the blame.
These are just a few of the problems. Today sustainability is the need of the hour, and major changes in the way we cultivate and process cotton are inevitable if it is to remain a sustainable choice. The answer lies in adopting minimalism and boycotting fast fashion, in using Khadi, in saying no to washed denims and using selvedge denims, or in using better methods of finishing washed-out fabric so that the environment does not bleed blue. Truly, the answer lies in integrating state-of-the-art technology with age-old sustainable practices so that the synergy of the two may help us come out of this vicious circle.
Keywords: cotton, sustainability, denim, Khadi
Procedia PDF Downloads 158
3789 Development of Kenaf Cellulose CNT Paper for Electrical Conductive Paper
Authors: A. W. Fareezal, R. Rosazley, M. A. Izzati, M. Z. Shazana, I. Rushdan
Abstract:
Kenaf cellulose CNT paper was produced as a lightweight, high-strength, and highly flexible material for electrical applications. Aqueous dispersions of kenaf cellulose and varied weight percentages of CNT were combined, with the assistance of a PEI solution, using an ultrasonic probe. The suspension was dried using a vacuum filter, followed by air drying in a conditioned room for 2 days. The circular conductive paper was characterized by Fourier transform infrared (FTIR) spectroscopy, scanning electron microscopy (SEM), and thermogravimetric analysis (TGA).
Keywords: cellulose, CNT paper, PEI solution, electrical conductive paper
Procedia PDF Downloads 240
3788 Evaluation of Features Extraction Algorithms for a Real-Time Isolated Word Recognition System
Authors: Tomyslav Sledevič, Artūras Serackis, Gintautas Tamulevičius, Dalius Navakauskas
Abstract:
This paper presents a comparative evaluation of feature extraction algorithms for a real-time isolated word recognition system based on an FPGA. The Mel-frequency cepstral, linear frequency cepstral, linear predictive, and linear predictive cepstral coefficients were implemented in a hardware/software design. The proposed system was investigated in speaker-dependent mode for 100 different Lithuanian words. The robustness of the feature extraction algorithms was tested by recognizing speech records at different signal-to-noise ratios. The experiments on clean records show the highest accuracy for Mel-frequency cepstral and linear frequency cepstral coefficients. For records with a 15 dB signal-to-noise ratio, the linear predictive cepstral coefficients give the best result. The hardware and software parts of the system are clocked at 50 MHz and 100 MHz, respectively. For classification, a pipelined dynamic time warping core was implemented. The proposed word recognition system satisfies real-time requirements and is suitable for applications in embedded systems.
Keywords: isolated word recognition, feature extraction, MFCC, LFCC, LPCC, LPC, FPGA, DTW
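The pipelined DTW core mentioned above implements, in hardware, the classic dynamic-programming recurrence for aligning two feature sequences; a plain software sketch of that recurrence (our own illustrative code, not the FPGA design) is:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two feature sequences,
    each a list of per-frame feature vectors (e.g. MFCC frames).
    """
    inf = float("inf")
    n, m = len(a), len(b)
    # D[i][j] = cost of the best alignment of a[:i] with b[:j]
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Euclidean distance between the two frames
            d = sum((x - y) ** 2 for x, y in zip(a[i - 1], b[j - 1])) ** 0.5
            D[i][j] = d + min(D[i - 1][j],      # insertion
                              D[i][j - 1],      # deletion
                              D[i - 1][j - 1])  # match
    return D[n][m]
```

In a recognizer, an unknown utterance is assigned the label of the stored template with the smallest `dtw_distance`; the FPGA pipelines the anti-diagonals of `D` so cells are computed in parallel.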
Procedia PDF Downloads 497
3787 An Evolutionary Approach for QAOA for Max-Cut
Authors: Francesca Schiavello
Abstract:
This work aims to create a hybrid algorithm combining the Quantum Approximate Optimization Algorithm (QAOA) with an Evolutionary Algorithm (EA) in place of traditional gradient-based optimization processes. QAOA was first introduced in 2014, when it performed better than the best known classical algorithm for Max-Cut. Whilst classical algorithms have since improved and returned to being faster and more efficient, this was a huge milestone for quantum computing, and the original work is often used as a benchmarking tool and a foundation for exploring variants of QAOA. This, alongside other famous algorithms like Grover's or Shor's, highlights to the world the potential that quantum computing holds. It also presents the prospect of a real quantum advantage where, if the hardware continues to improve, this could constitute a revolutionary era. Given that the hardware is not there yet, many scientists are working on the software side of things in the hope of future progress. Some of the major limitations holding back quantum computing are the quality of qubits and the noise they introduce into solutions, the barren plateaus that effectively hinder the optimization search in the latent space, and the limited number of available qubits, which restricts the scale of problems that can be solved. These three issues are intertwined and are part of the motivation for using EAs in this work. Firstly, EAs are not based on gradient or linear optimization methods for the search in the latent space, and because of their freedom from gradients, they should suffer less from barren plateaus. Secondly, given that this algorithm performs a search of the solution space through a population of solutions, it can also be parallelized to speed up the search and optimization.
The evaluation of the cost function, like in many other algorithms, is notoriously slow, and the ability to parallelize it can drastically improve the competitiveness of QAOA with respect to purely classical algorithms. Thirdly, because of the nature and structure of EAs, solutions can be carried forward in time, making them more robust to noise and uncertainty. Preliminary results show that the EA attached to QAOA can perform on par with the traditional QAOA with a COBYLA optimizer, which is a linear-approximation-based method, and in some instances it can even find a better Max-Cut. Whilst the final objective of the work is to create an algorithm that can consistently beat the original QAOA, or its variants, through either speedups or quality of the solution, this initial result is promising and shows the potential of EAs in this field. Further tests need to be performed on an array of different graphs, with the parallelization aspect of the work commencing in October 2023 and tests on real hardware scheduled for early 2024.
Keywords: evolutionary algorithm, max cut, parallel simulation, quantum optimization
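To illustrate how a gradient-free EA can stand in for a gradient or linear-approximation optimizer when tuning the QAOA angles, here is a minimal annealed evolutionary search; the QAOA circuit itself is not simulated, and the operators and parameters shown are our own simplifications:

```python
import math
import random

def evolve_params(cost, dim, pop_size=16, gens=40, sigma=0.3, seed=7):
    """Gradient-free (mu+lambda)-style evolutionary search over a
    parameter vector, a stand-in for the EA that replaces the
    gradient-based optimizer of the 2p QAOA angles. `cost` is any
    black-box function, e.g. the measured expectation value of the
    circuit; a toy quadratic is used in the test.
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(0, math.pi) for _ in range(dim)]
           for _ in range(pop_size)]
    for g in range(gens):
        pop.sort(key=cost)                 # minimise the expectation value
        parents = pop[: pop_size // 4]     # elitist truncation selection
        s = sigma * (0.93 ** g)            # anneal the mutation width
        pop = parents[:]
        while len(pop) < pop_size:
            base = rng.choice(parents)
            pop.append([x + rng.gauss(0, s) for x in base])  # Gaussian mutation
        # Each offspring's cost evaluation is independent, so this
        # inner loop is the part that parallelizes naturally.
    return min(pop, key=cost)
```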
Procedia PDF Downloads 60
3786 Optimization of Pumping Power of Water between Reservoir Using Ant Colony System
Authors: Thiago Ribeiro De Alencar, Jacyro Gramulia Junior, Patricia Teixeira Leite Asano
Abstract:
The area of the electricity sector that deals in a coordinated way with energy needs met by hydropower and thermoelectric plants is called Hydrothermal Power System Operation Planning. The aim of this area is to find an operating policy that provides electrical power to the system over a specified period while minimizing operating cost. This article proposes a computational tool for solving the planning problem. In addition, it introduces a methodology for finding new transfer points between reservoirs, increasing energy production in cascaded hydroelectric power plant systems. The proposed computational tool applies: i) genetic algorithms to optimize water transfer and the operation of hydroelectric plant systems; and ii) an Ant Colony algorithm to find the pumping trajectory with the least energy for the construction of transfer pipes between reservoirs, considering the topography of the region. The tool has a database consisting of 35 hydropower plants and 41 reservoirs that are part of the southeastern Brazilian system, each implemented in an individualized way.
Keywords: ant colony system, genetic algorithms, hydroelectric, hydrothermal systems, optimization, water transfer between rivers
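The least-energy pipeline routing step can be illustrated with a bare-bones ant colony search over a weighted graph; the graph, parameters, and function names are our own hypothetical sketch, not the tool described in the paper:

```python
import random

def aco_shortest_path(graph, src, dst, n_ants=20, iters=50, evap=0.5, seed=5):
    """Minimal ant colony optimisation for a least-cost path.

    `graph` maps node -> {neighbour: cost}; in a routing tool the
    costs could encode pumping energy derived from the terrain.
    """
    rng = random.Random(seed)
    tau = {(u, v): 1.0 for u in graph for v in graph[u]}   # pheromone per edge
    best_path, best_cost = None, float("inf")
    for _ in range(iters):
        for _ in range(n_ants):
            path, node, visited = [src], src, {src}
            while node != dst:
                choices = [v for v in graph[node] if v not in visited]
                if not choices:
                    break                                   # dead end: drop ant
                # favour strong pheromone and cheap edges
                weights = [tau[(node, v)] / graph[node][v] for v in choices]
                node = rng.choices(choices, weights)[0]
                path.append(node)
                visited.add(node)
            else:                                           # ant reached dst
                cost = sum(graph[a][b] for a, b in zip(path, path[1:]))
                if cost < best_cost:
                    best_path, best_cost = path, cost
                for a, b in zip(path, path[1:]):            # deposit pheromone
                    tau[(a, b)] += 1.0 / cost
        for e in tau:                                       # evaporation
            tau[e] = max(tau[e] * (1.0 - evap), 1e-6)
    return best_path, best_cost
```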
Procedia PDF Downloads 326
3785 MAOD Is Estimated by Sum of Contributions
Authors: David W. Hill, Linda W. Glass, Jakob L. Vingren
Abstract:
Maximal accumulated oxygen deficit (MAOD), the gold standard measure of anaerobic capacity, is the difference between the oxygen cost of exhaustive severe-intensity exercise and the accumulated oxygen consumption (O2; mL·kg–1). In theory, MAOD can be estimated as the sum of independent estimates of the phosphocreatine and glycolysis contributions, which we refer to as PCr+glycolysis. Purpose: The purpose was to test the hypothesis that PCr+glycolysis provides a valid measure of anaerobic capacity in cycling and running. Methods: The participants were 27 women (mean ± SD, age 22 ± 1 y, height 165 ± 7 cm, weight 63.4 ± 9.7 kg) and 25 men (age 22 ± 1 y, height 179 ± 6 cm, weight 80.8 ± 14.8 kg). They performed two exhaustive tests, one cycling and one running, at work rates and speeds that were tolerable for ~5 min. The rate of oxygen consumption (VO2; mL·kg–1·min–1) was measured in warmups, in the tests, and during 7 min of recovery. Finger-prick blood samples obtained after exercise were analysed to determine peak blood lactate concentration (PeakLac). The VO2 response in exercise was fitted to a model with a fast 'primary' phase followed by a delayed 'slow' component, from which were calculated the accumulated O2 and the excess O2 attributable to the slow component. The VO2 response in recovery was fitted to a model with a fast phase and a slow component sharing a common time delay. Oxygen demand (in mL·kg–1·min–1) was determined by extrapolation from steady-state VO2 in warmups; the total oxygen cost (in mL·kg–1) was determined by multiplying this demand by time to exhaustion and adding the excess O2; then, MAOD was calculated as total oxygen cost minus accumulated O2. The phosphocreatine contribution (area under the fast phase of the post-exercise VO2) and the glycolytic contribution (converted from PeakLac) were summed to give PCr+glycolysis.
There was no interaction effect involving sex, so values for anaerobic capacity were examined using a two-way ANOVA, with repeated measures across method (PCr+glycolysis vs MAOD) and mode (cycling vs running). Results: There was a significant effect only for exercise mode. There was no difference between MAOD and PCr+glycolysis: values were 59 ± 6 mL·kg–1 and 61 ± 8 mL·kg–1 in cycling and 78 ± 7 mL·kg–1 and 75 ± 8 mL·kg–1 in running. Discussion: PCr+glycolysis is a valid measure of anaerobic capacity in cycling and running, and it is as valid for women as for men.
Keywords: alactic, anaerobic, cycling, ergometer, glycolysis, lactic, lactate, oxygen deficit, phosphocreatine, running, treadmill
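The MAOD arithmetic described in the Methods reduces to a one-line computation; the numbers in the demo below are illustrative placeholders, not the study's data:

```python
def maod(demand, time_to_exhaustion, excess_o2, accumulated_o2):
    """MAOD as defined above: total oxygen cost (demand x time to
    exhaustion, plus the slow-component excess O2) minus the oxygen
    actually consumed during the test.

    Units: demand in mL/kg/min, time in min, the other two
    quantities in mL/kg; the result is in mL/kg.
    """
    total_cost = demand * time_to_exhaustion + excess_o2
    return total_cost - accumulated_o2

# Illustrative values only: 55 mL/kg/min demand sustained for 5 min,
# 10 mL/kg slow-component excess, 225 mL/kg actually consumed.
deficit = maod(55.0, 5.0, 10.0, 225.0)   # -> 60.0 mL/kg
```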
Procedia PDF Downloads 139
3784 Fuzzy Logic Classification Approach for Exponential Data Set in Health Care System for Predication of Future Data
Authors: Manish Pandey, Gurinderjit Kaur, Meenu Talwar, Sachin Chauhan, Jagbir Gill
Abstract:
Health-care management systems are of great interest because they provide simple and fast management of all aspects relating to a patient, not only medical ones. Moreover, there are more and more cases of pathologies in which diagnosis and treatment can be carried out only by using medical imaging techniques. With an ever-increasing prevalence, medical images are directly acquired in, or converted into, digital form for storage as well as subsequent retrieval and processing. Data mining is the process of extracting information from large data sets using algorithms and techniques drawn from the fields of statistics, machine learning, and database management systems. Forecasting is a prediction of what will occur in the future, and it is an uncertain process. Owing to this uncertainty, the accuracy of a forecast is as important as the outcome predicted through forecasting the independent variables. Forecast control must be used to establish whether the accuracy of the forecast is within satisfactory limits. Fuzzy regression methods have commonly been used to develop consumer preference models that correlate engineering characteristics with consumer preferences regarding a new product; the consumer preference models provide a platform whereby product developers can decide on the engineering characteristics needed to satisfy consumer preferences before developing the product. Recent analysis shows that these fuzzy regression methods are commonly used to model customer preferences. We propose testing the strength of an exponential regression model against a linear regression model.
Keywords: health-care management systems, fuzzy regression, data mining, forecasting, fuzzy membership function
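A minimal way to fit the exponential regression model mentioned above is ordinary least squares on the log-transformed response; this sketch (our own, with made-up data in the test) assumes strictly positive observations:

```python
import math

def fit_exponential(xs, ys):
    """Fit y = a * exp(b * x) by ordinary least squares on log(y).

    Because log(y) = log(a) + b*x, the exponential model becomes a
    linear one after the transform, so its fit can be compared
    directly against a plain linear regression on the same series.
    Requires every y > 0.
    """
    n = len(xs)
    ls = [math.log(y) for y in ys]
    mx, ml = sum(xs) / n, sum(ls) / n
    b = (sum((x - mx) * (l - ml) for x, l in zip(xs, ls))
         / sum((x - mx) ** 2 for x in xs))      # slope on log scale
    a = math.exp(ml - b * mx)                   # intercept back-transformed
    return a, b
```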
Procedia PDF Downloads 280
3783 Application of Groundwater Level Data Mining in Aquifer Identification
Authors: Liang Cheng Chang, Wei Ju Huang, You Cheng Chen
Abstract:
Investigation and research are key to the conjunctive use of surface water and groundwater resources. The hydrogeological structure is an important basis for groundwater analysis and simulation. Traditionally, the hydrogeological structure is determined manually from geological drill logs, the structure of wells, groundwater levels, and so on. In Taiwan, a groundwater observation network has been built, and a large amount of groundwater-level observation data is available. The groundwater level is the state variable of the groundwater system, reflecting the system response to the combination of hydrogeological structure, groundwater injection, and extraction. This study applies analytical tools to the observation database to develop a methodology for identifying confined and unconfined aquifers. These tools include frequency analysis, cross-correlation analysis between rainfall and groundwater level, groundwater regression curve analysis, and a decision tree. The developed methodology is then applied to groundwater layer identification in two groundwater systems: the Zhuoshui River alluvial fan and the Pingtung Plain. The frequency analysis applies the Fourier transform to the time series of groundwater-level observations and analyzes the amplitude of the daily frequency component caused by artificial groundwater extraction. The cross-correlation analysis between rainfall and groundwater level is used to obtain the groundwater replenishment time between infiltration and the peak groundwater level during wet seasons. The groundwater regression curve, the average rate of groundwater regression, is used to analyze the internal flux in the groundwater system and the flux caused by artificial behaviors. The decision tree uses the information obtained from the abovementioned analytical tools and optimizes the best estimate of the hydrogeological structure.
The developed method reaches a training accuracy of 92.31% and a verification accuracy of 93.75% on the Zhuoshui River alluvial fan, and a training accuracy of 95.55% and a verification accuracy of 100% on the Pingtung Plain. This high accuracy indicates that the developed methodology is a useful tool for identifying hydrogeological structures.
Keywords: aquifer identification, decision tree, groundwater, Fourier transform
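The daily-frequency step of the analysis can be illustrated with a single-frequency discrete Fourier transform on a synthetic hourly groundwater-level series; the data and function below are our own illustration, not the study's records:

```python
import math

def amplitude_at_period(series, dt_hours, period_hours):
    """Amplitude of one Fourier component of an evenly sampled
    groundwater-level series, e.g. the 24 h component produced by
    daily pumping cycles. A direct DFT at a single frequency is
    used instead of a full FFT for clarity.
    """
    n = len(series)
    f = dt_hours / period_hours             # cycles per sample
    re = sum(h * math.cos(2 * math.pi * f * k) for k, h in enumerate(series))
    im = sum(h * math.sin(2 * math.pi * f * k) for k, h in enumerate(series))
    return 2.0 * math.hypot(re, im) / n

# Synthetic hourly levels: a 10 m head with a 0.3 m daily pumping cycle
levels = [10.0 + 0.3 * math.sin(2 * math.pi * k / 24) for k in range(240)]
```

A confined aquifer responding to pumping wells shows a pronounced 24 h amplitude of this kind, which is one of the features the decision tree consumes.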
Procedia PDF Downloads 157
3782 Detection and Identification of Antibiotic Resistant Bacteria Using Infra-Red-Microscopy and Advanced Multivariate Analysis
Authors: Uraib Sharaha, Ahmad Salman, Eladio Rodriguez-Diaz, Elad Shufan, Klaris Riesenberg, Irving J. Bigio, Mahmoud Huleihel
Abstract:
Antimicrobial drugs play an important role in controlling illness associated with infectious diseases in animals and humans. However, the increasing resistance of bacteria to a broad spectrum of commonly used antibiotics has become a global health-care problem. Rapid determination of the antimicrobial susceptibility of a clinical isolate is often crucial for optimal antimicrobial therapy of infected patients and in many cases can save lives. Conventional methods for susceptibility testing, like disk diffusion, are time-consuming, and other methods, including the E-test and genotyping, are relatively expensive. Fourier transform infrared (FTIR) microscopy is a rapid, safe, and low-cost method that has been widely and successfully used in different studies for the identification of various biological samples, including bacteria. Modern infrared (IR) spectrometers with high spectral resolution enable measuring unprecedented biochemical information from cells at the molecular level. Moreover, the combination of new bioinformatics analyses with IR spectroscopy becomes a powerful technique that enables the detection of structural changes associated with resistance. The main goal of this study is to evaluate the potential of FTIR microscopy in tandem with machine learning algorithms for rapid and reliable identification of bacterial susceptibility to antibiotics within a time span of a few minutes. The bacterial samples, which were identified at the species level by MALDI-TOF and examined for their susceptibility by the routine assay (micro-diffusion discs), were obtained from the bacteriology laboratories at Soroka University Medical Center (SUMC). These samples were examined by FTIR microscopy and analyzed by advanced statistical methods.
Our results, based on 550 E. coli samples, were promising and showed that by using infrared spectroscopy together with multivariate analysis, it is possible to classify the tested bacteria as sensitive or resistant with a success rate higher than 85% for eight different antibiotics. Based on these preliminary results, it is worthwhile to continue developing FTIR microscopy as a rapid and reliable method for identifying antibiotic susceptibility.
Keywords: antibiotics, E. coli, FTIR, multivariate analysis, susceptibility
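As a toy stand-in for the multivariate analysis (whose exact form is not detailed above), a nearest-centroid rule over spectra illustrates the classification idea; everything here, including the two-band spectra in the test, is our own simplified assumption:

```python
def nearest_centroid_classify(train, labels, sample):
    """Assign a spectrum to the class whose mean training spectrum
    is closest in squared Euclidean distance.

    `train` is a list of spectra (lists of absorbance values),
    `labels` the matching class names (e.g. 'sensitive' or
    'resistant'), and `sample` the spectrum to classify. The real
    study uses far richer statistics; this only sketches the idea.
    """
    groups = {}
    for spec, lab in zip(train, labels):
        groups.setdefault(lab, []).append(spec)
    # mean spectrum per class
    centroids = {lab: [sum(col) / len(specs) for col in zip(*specs)]
                 for lab, specs in groups.items()}
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lab: dist(centroids[lab], sample))
```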
Procedia PDF Downloads 266
3781 Degradation of Acetaminophen with Fe3O4 and Fe2+ as Activator of Peroxymonosulfate
Authors: Chaoqun Tan, Naiyun Gao, Xiaoyan Xin
Abstract:
Peroxymonosulfate (PMS)-based oxidation processes, as an alternative to hydrogen peroxide-based oxidation processes, are increasingly popular because of the reactive radical species (SO4-•, OH•) produced in these systems. Magnetic nano-scaled Fe3O4 particles and the ferrous ion (Fe2+) were studied as activators of PMS for the degradation of acetaminophen (APAP) in water. The Fe3O4 MNPs were found to effectively catalyze PMS for APAP degradation, and the reactions followed a pseudo-first-order kinetics pattern well (R2 > 0.95), while the degradation of APAP in the PMS-Fe2+ system proceeds through two stages: a fast stage and a much slower stage. Within 5 min, approximately 7% and 18% of 10 ppm APAP was degraded by 0.2 mM PMS in the Fe3O4 (0.8 g/L) and Fe2+ (0.1 mM) activation processes, respectively. However, as the reaction proceeded to 120 min, approximately 75% and 35% of the APAP was removed in the Fe3O4 and Fe2+ activation processes, respectively. Within 120 min, the mineralization of APAP was about 7.5% and 5.0% (initial APAP of 10 ppm and [PMS]0 of 0.2 mM) in the Fe3O4-PMS and Fe2+-PMS systems, while mineralization could be greatly increased to about 31% and 40%, respectively, as [PMS]0 increased to 2.0 mM. Finally, the production of reactive radical species was validated directly by electron spin resonance (ESR) tests with 0.1 M 5,5-dimethyl-1-pyrroline N-oxide (DMPO). Plausible mechanisms for the radical generation from Fe3O4 and Fe2+ activation of PMS are proposed based on the results of the radical identification tests. The results demonstrate that the Fe3O4-activated and Fe2+-activated PMS systems are promising technologies for treating water pollution caused by contaminants such as pharmaceuticals. The Fe3O4-PMS system is more suitable for slow remediation, while the Fe2+-PMS system is more suitable for fast remediation.
Keywords: acetaminophen, peroxymonosulfate, radicals, Fe3O4
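The pseudo-first-order fit mentioned above implies ln(C0/C) = kt, so an apparent rate constant can be back-calculated from a reported removal; the sketch below uses the 120 min Fe3O4-PMS result (75% removal of 10 ppm) and is approximate by construction:

```python
import math

def pseudo_first_order_k(c0, c, t):
    """Apparent rate constant from the pseudo-first-order model
    ln(C0/C) = k*t, the model the APAP degradation data were
    fitted to. Concentrations in any consistent unit, t in min.
    """
    return math.log(c0 / c) / t

# ~75% removal of 10 ppm APAP in 120 min (Fe3O4-PMS system)
k = pseudo_first_order_k(10.0, 2.5, 120.0)   # apparent k in 1/min
```

The same function applied to the two Fe2+-PMS stages would give two very different k values, which is exactly the two-stage behaviour the abstract describes.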
Procedia PDF Downloads 258
3780 A Technique for Image Segmentation Using K-Means Clustering Classification
Authors: Sadia Basar, Naila Habib, Awais Adnan
Abstract:
The paper presents a technique for image segmentation using K-means clustering classification. Previously presented algorithms were application-specific, missed neighboring information, and required high-speed computing machines to run. Clustering is the process of partitioning a group of data points into a small number of clusters. The proposed method is a content-aware feature extraction method that is able to run on low-end computing machines; it is a simple algorithm, requires only low-quality streaming, is efficient, and can be used for security purposes. It has the capability to highlight both the boundary and the object. First, the user enters the data as the input image. In the next step, the digital image is partitioned into clusters, and the clusters are divided into many regions. Points with the same features are assembled within one cluster, while points with different features are placed in other clusters. Finally, clusters are combined with respect to similar features and then represented in the form of segments. The clustered image depicts a clear representation of the digital image, highlighting the regions and boundaries of the image. At last, the final image is presented in the form of segments, with all colors of the image separated into clusters.
Keywords: clustering, image segmentation, K-means function, local and global minimum, region
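The clustering step at the heart of the method can be sketched as plain K-means on scalar pixel features; this is our own minimal illustration (1-D intensities rather than full color vectors), not the paper's implementation:

```python
import random

def kmeans(points, k, iters=20, seed=2):
    """Plain 1-D K-means, e.g. on grayscale pixel intensities:
    pixels with similar features end up in the same cluster, and
    each cluster becomes one segment of the image.
    """
    rng = random.Random(seed)
    centers = rng.sample(points, k)           # random initial centers
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:                      # assignment step
            i = min(range(k), key=lambda j: abs(p - centers[j]))
            groups[i].append(p)
        # update step: move each center to the mean of its group
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return sorted(centers)
```

The K-means objective has local minima (hence the 'local and global minimum' keyword): a poor initialization can converge to a worse partition, which is why segmentation pipelines often restart with several seeds.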
Procedia PDF Downloads 376