Search results for: Grid computing
1679 Towards Resilient Cloud Computing through Cyber Risk Assessment
Authors: Hilalah Alturkistani, Alaa AlFaadhel, Nora AlJahani, Fatiha Djebbar
Abstract:
Cloud computing is one of the most widely used technologies, providing opportunities and services to government entities, large companies, and ordinary users. However, studies of cybersecurity risk management and resiliency approaches for cloud computing are lacking. This paper proposes a resilient cloud cybersecurity risk assessment and management approach tailored specifically to Dropbox, with two approaches: 1) a technical solution motivated by a cybersecurity risk assessment of cloud services, and 2) a personnel-based solution guided by a cybersecurity survey among employees to identify the knowledge that qualifies them to withstand a cyberattack. The proposed work attempts to identify cloud vulnerabilities, assess threats, and detect high-risk components, in order to propose appropriate safeguards such as failure prediction and removal, redundancy, or load-balancing techniques for quick recovery and return to the pre-attack state if a failure happens.
Keywords: cybersecurity risk management plan, resilient cloud computing, cyberattacks, cybersecurity risk assessment
Procedia PDF Downloads 143
1678 Data Stream Association Rule Mining with Cloud Computing
Authors: B. Suraj Aravind, M. H. M. Krishna Prasad
Abstract:
Emerging applications of data streams require association rule mining, such as network traffic monitoring, web click-stream analysis, sensor data, and data from satellites. Data streams typically arrive continuously, at high speed, in huge volumes, and with changing data distributions. This raises new issues that need to be considered when developing association rule mining techniques for stream data. This paper proposes an improved data stream association rule mining algorithm that removes the resource limitation; for this, the concept of cloud computing is used. This inclusion may introduce additional, as yet unknown problems that need further research.
Keywords: data stream, association rule mining, cloud computing, frequent itemsets
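To illustrate the kind of bounded-memory counting that stream mining requires, the following sketch implements Lossy Counting for approximate frequent items over a transaction stream; it is a generic textbook technique, not the authors' algorithm, and the error bound and example data are assumed.

```python
def lossy_count(stream, epsilon=0.05):
    """Lossy Counting: approximate item frequencies over a data stream.

    Counts are underestimated by at most epsilon * N, so every item whose true
    frequency exceeds a chosen support s is reported when the threshold
    (s - epsilon) * N is used at output time."""
    bucket_width = int(1 / epsilon)        # items per bucket
    counts = {}                            # item -> (count, max_error)
    current_bucket, n = 1, 0
    for item in stream:
        n += 1
        if item in counts:
            c, e = counts[item]
            counts[item] = (c + 1, e)
        else:
            counts[item] = (1, current_bucket - 1)
        if n % bucket_width == 0:          # bucket boundary: prune infrequent entries
            counts = {k: (c, e) for k, (c, e) in counts.items() if c + e > current_bucket}
            current_bucket += 1
    return counts, n

# report items whose frequency is at least 20% of the stream
counts, n = lossy_count(["a", "b", "a", "c", "a", "b"] * 500, epsilon=0.05)
support = 0.2
print({k: c for k, (c, _) in counts.items() if c >= (support - 0.05) * n})
```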
Procedia PDF Downloads 503
1677 Going Horizontal: Confronting the Challenges When Transitioning to Cloud
Authors: Harvey Hyman, Thomas Hull
Abstract:
As one of the largest cancer treatment centers in the United States, we continuously confront the challenge of how to leverage the best possible technological solutions in order to provide the highest quality of service to our customers – the doctors, nurses and patients at Moffitt who are fighting every day for the prevention and cure of cancer. This paper reports on the transition from a vertical to a horizontal IT infrastructure. We discuss how new frameworks and methods, such as public, private and hybrid cloud and the brokering of cloud services, are replacing the traditional vertical paradigm for computing. We also report on the impact of containers, microservices, and the shift to continuous integration/continuous delivery. These impacts and changes in delivery methodology for computing are driving how we accomplish our strategic IT goals across the enterprise.
Keywords: cloud computing, IT infrastructure, IT architecture, healthcare
Procedia PDF Downloads 381
1676 Finite Element Analysis of the Blanking and Stamping Processes of Nuclear Fuel Spacer Grids
Authors: Rafael Oliveira Santos, Luciano Pessanha Moreira, Marcelo Costa Cardoso
Abstract:
The spacer grid assembly supporting the nuclear fuel rods is an important concern in the design of structural components of a Pressurized Water Reactor (PWR). The spacer grid is composed of springs and dimples, which are formed from a strip sheet by means of blanking and stamping processes. In this paper, the blanking process and tooling parameters are evaluated by means of a 2D plane-strain finite element model in order to predict the punch load and the quality of the sheared edges of the Inconel 718 strips used for nuclear spacer grids. A 3D finite element model is also proposed to predict the tooling loads resulting from the stamping process of a preformed Inconel 718 strip and to analyse the residual stress effects upon the spring and dimple design geometries of a nuclear spacer grid.
Keywords: blanking process, damage model, finite element modelling, Inconel 718, spacer grids, stamping process
Procedia PDF Downloads 346
1675 A Study on How to Link BIM Services to Cloud Computing Architecture
Authors: Kim Young-Jin, Kim Byung-Kon
Abstract:
Although more efforts to expand the application of BIM (Building Information Modeling) technologies have been pursued in recent years than ever, various challenges remain, including a lack or absence of relevant institutions, the high cost of building BIM-related infrastructure, incompatible processes, etc. This, in turn, has delayed the expansion of their application longer than expected at an early stage. In particular, attempts to save the cost of building BIM-related infrastructure and to provide various BIM services compatible with domestic processes include studies linking BIM and cloud computing technologies. In this study, the authors develop a cloud BIM service operation model by analyzing the level of BIM application in the construction sector and deriving relevant service areas, and examine how to link BIM services to the cloud operation model by archiving BIM data and creating a revenue structure so that BIM services may grow spontaneously, considering the demand for cloud resources.
Keywords: construction IT, BIM (building information modeling), cloud computing, BIM service based cloud computing
Procedia PDF Downloads 488
1674 Real-World Vehicle to Grid: Case Study on School Buses in New England
Authors: Aaron Huber, Manoj Karwa
Abstract:
Floods, heat waves, droughts, wildfires, tornadoes and other environmental disasters are a snapshot of looming national problems that can place increasing demands on the national grid. With nearly 500,000 school buses on the road and the Environmental Protection Agency (EPA) providing nearly $1B for electric school buses, there is a solution to this national issue. Bidirectional batteries in electric school buses enable a future-proof solution to sustain the power grid during adverse environmental conditions and other periods of high demand. School buses have larger batteries than standard electric vehicles. When they are not transporting students, these buses can spend peak solar hours parked and plugged into bidirectional direct current fast chargers (DCFC). A partnership with Highland Electric, Proterra and Rhombus enabled over 7 MWh of energy servicing the Massachusetts and Vermont grids. The buses were part of a vehicle-to-grid (V2G) program with National Grid and Green Mountain Power; a single bus can charge an average American home for one month. V2G infrastructure enables school systems to future-proof their charging strategies, strengthen their local grids and create additional revenue streams with their EV fleets. A bidirectional ecosystem with Highland, Proterra and Rhombus can enable grid resiliency, or the ability to withstand power outages caused by excessive demand, natural disasters or rogue-nation attacks, with no loss of service. A fleet of school buses is a standalone resilient asset that can be accessed across a city to keep its citizens safe without emitting any toxic fumes. Nearly 95% of all school buses across the USA are powered by diesel internal combustion engines. Diesel exhaust has been classified as a human carcinogen, and it can lead to and exacerbate respiratory conditions. Bidirectional school buses and chargers enable energy justice by providing backup power for marginalized communities in case of emergencies or high demand, and aim to make energy more accessible, affordable, clean, and democratically managed.
Keywords: V2G, vehicle to grid, electric buses, eBuses, DC fast chargers, DCFC
Procedia PDF Downloads 77
1673 DNA PLA: A Nano-Biotechnological Programmable Device
Authors: Hafiz Md. Hasan Babu, Khandaker Mohammad Mohi Uddin, Md. Istiak Jaman Ami, Rahat Hossain Faisal
Abstract:
Computing in biomolecular programming is performed through different types of reactions. Proteins and nucleic acids are used to store the information generated by biomolecular programming. DNA (Deoxyribose Nucleic Acid) can be used to build a molecular computing system and operating system owing to its predictable molecular behavior. The DNA device has clear advantages over conventional devices when applied to problems that can be divided into separate, non-sequential tasks, because DNA strands can hold so much data in memory and conduct multiple operations at once, thus solving decomposable problems much faster. A Programmable Logic Array (PLA) is a programmable device with a programmable AND plane followed by a programmable OR plane. In this paper, a DNA PLA is designed with the proposed algorithms using different molecular operations on DNA molecules. The molecular PLA could take advantage of DNA's physical properties to store information and perform calculations, including extremely dense information storage, enormous parallelism, and extraordinary energy efficiency.
Keywords: biological systems, DNA computing, parallel computing, programmable logic array, PLA, DNA
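For readers unfamiliar with the target device, the sketch below is a conventional software model of the two-level AND-OR structure that a PLA realizes, not the DNA construction itself; the half-adder programming is an assumed example.

```python
def eval_pla(inputs, and_plane, or_plane):
    """Evaluate a two-level PLA: programmable AND plane followed by a programmable OR plane.

    inputs    : dict of input name -> 0/1
    and_plane : list of product terms; each term maps an input name to the required literal
                (1 = uncomplemented, 0 = complemented); missing inputs are "don't care"
    or_plane  : dict of output name -> list of product-term indices feeding that OR gate"""
    # AND plane: a product term fires only if every specified literal matches
    products = [all(inputs[name] == literal for name, literal in term.items())
                for term in and_plane]
    # OR plane: an output is 1 if any of its selected product terms fired
    return {out: int(any(products[i] for i in idxs)) for out, idxs in or_plane.items()}

# Example: a half adder programmed onto the PLA (assumed for illustration)
and_plane = [{"a": 1, "b": 0},   # a AND NOT b
             {"a": 0, "b": 1},   # NOT a AND b
             {"a": 1, "b": 1}]   # a AND b
or_plane = {"sum": [0, 1], "carry": [2]}
print(eval_pla({"a": 1, "b": 1}, and_plane, or_plane))  # {'sum': 0, 'carry': 1}
```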
Procedia PDF Downloads 130
1672 Navigating Cyber Attacks with Quantum Computing: Leveraging Vulnerabilities and Forensics for Advanced Penetration Testing in Cybersecurity
Authors: Sayor Ajfar Aaron, Ashif Newaz, Sajjat Hossain Abir, Mushfiqur Rahman
Abstract:
This paper examines the transformative potential of quantum computing in the field of cybersecurity, with a focus on advanced penetration testing and forensics. It explores how quantum technologies can be leveraged to identify and exploit vulnerabilities more efficiently than traditional methods and how they can enhance the forensic analysis of cyber-attacks. Through theoretical analysis and practical simulations, this study highlights the enhanced capabilities of quantum algorithms in detecting and responding to sophisticated cyber threats, providing a pathway for developing more resilient cybersecurity infrastructures.
Keywords: cybersecurity, cyber forensics, penetration testing, quantum computing
Procedia PDF Downloads 72
1671 Method and Apparatus for Optimized Job Scheduling in the High-Performance Computing Cloud Environment
Authors: Subodh Kumar, Amit Varde
Abstract:
Typical on-premises high-performance computing (HPC) environments consist of a fixed amount and a fixed set of computing hardware. During the design of the HPC environment, the hardware components, including but not limited to CPU, memory, GPU, and networking, are carefully chosen from select vendors for optimal performance. The high capital cost of building the environment is a prime factor influencing its design. A class of software called “job schedulers” is critical to maximizing these resources and running multiple workloads to extract the maximum value from the high capital cost. In principle, schedulers work by preventing workloads and users from monopolizing the finite hardware resources by queuing jobs in a workload. A cloud-based HPC environment does not have the limitation of a fixed type and quantity of hardware resources; in theory, users and workloads could spin up any number and type of hardware resource. This paper discusses the limitations of using traditional scheduling algorithms for cloud-based HPC workloads. It proposes a new set of features, called “HPC optimizers,” for maximizing the benefits of the elasticity and scalability of the cloud, with the goal of cost-performance optimization of the workload.
Keywords: high performance computing, HPC, cloud computing, optimization, schedulers
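The following toy example illustrates the kind of cost-performance decision that cloud elasticity makes possible; it is a minimal sketch under an ideal linear-scaling assumption, not the paper's "HPC optimizers", and the instance catalogue is invented for illustration.

```python
def pick_instance(job_core_hours, deadline_h, catalogue):
    """Toy cost-performance optimizer: choose the cheapest instance type and count
    that finish a job within its deadline, assuming ideal linear scaling."""
    best = None
    for name, (cores, price_per_h) in catalogue.items():
        for count in range(1, 65):
            runtime_h = job_core_hours / (cores * count)
            if runtime_h <= deadline_h:
                cost = runtime_h * price_per_h * count
                if best is None or cost < best["cost"]:
                    best = {"type": name, "count": count, "hours": runtime_h, "cost": cost}
                break   # under ideal scaling, adding more instances does not reduce cost
    return best

# assumed catalogue: instance name -> (cores, price per hour)
catalogue = {"36-core": (36, 1.5), "72-core": (72, 3.0), "96-core": (96, 2.9)}
print(pick_instance(job_core_hours=4000, deadline_h=12, catalogue=catalogue))
```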
Procedia PDF Downloads 94
1670 Recognizing Human Actions by Multi-Layer Growing Grid Architecture
Authors: Z. Gharaee
Abstract:
Recognizing actions performed by others is important in our daily lives, since it is necessary for communicating with others in a proper way. We perceive an action by observing the kinematics of the motions involved in the performance, and we use our experience and concepts to recognize it correctly. Although building action concepts is a life-long process, repeated throughout life, we are very efficient in applying our learned concepts to analyzing motions and recognizing actions. Experiments in which subjects observe actions performed by an actor show that an action is recognized after only about two hundred milliseconds of observation. In this study, a hierarchical action recognition architecture is proposed using growing grid layers. The first-layer growing grid receives the pre-processed data of consecutive 3D postures of joint positions and applies heuristics during the growth phase to allocate areas of the map by inserting new neurons. As a result of training the first-layer growing grid, action pattern vectors are generated by connecting the elicited activations of the learned map. The ordered vector representation layer receives the action pattern vectors and creates time-invariant vectors of key elicited activations. The time-invariant vectors are sent to the second-layer growing grid for categorization; this grid creates the clusters representing the actions. Finally, a one-layer neural network trained with the delta rule labels the action categories in the last layer. System performance has been evaluated in an experiment with the publicly available MSR-Action3D dataset, which contains actions performed using different parts of the human body: Hand Clap, Two Hands Wave, Side Boxing, Bend, Forward Kick, Side Kick, Jogging, Tennis Serve, Golf Swing, Pick Up and Throw. The growing grid architecture was trained on several random selections of training and generalization test data, for on average 100 epochs per training of the first-layer growing grid and around 75 epochs per training of the second-layer growing grid. The average generalization test accuracy is 92.6%. A comparison of the growing grid architecture and a self-organizing map (SOM) architecture in terms of accuracy and learning speed shows that the growing grid architecture is superior in the action recognition task: the SOM architecture learns the same dataset of actions in around 150 epochs for each training of the first-layer SOM but needs 1200 epochs for each training of the second-layer SOM, and it achieves an average recognition accuracy of 90% on the generalization test data. In summary, using the growing grid network preserves the fundamental features of SOMs, such as the topographic organization of neurons, lateral interactions, unsupervised learning, and the representation of a high-dimensional input space in lower-dimensional maps. The architecture also benefits from an automatic size-setting mechanism, resulting in higher flexibility and robustness. Moreover, by utilizing growing grids, the system automatically obtains prior knowledge of the input space during the growth phase and applies this information to expand the map by inserting new neurons wherever there is high representational demand.
Keywords: action recognition, growing grid, hierarchical architecture, neural networks, system performance
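The sketch below shows, under simplified assumptions, the core loop of a growing grid: SOM-style neighborhood updates plus periodic insertion of new neurons near the unit with the largest accumulated quantization error. It is a generic illustration with synthetic data and fixed learning parameters, not the authors' two-layer architecture.

```python
import numpy as np

def train_growing_grid(data, rows=2, cols=2, epochs=20, lr=0.1,
                       growth_interval=5, rng=np.random.default_rng(0)):
    """Minimal growing-grid sketch: SOM-style updates plus column insertion
    next to the unit with the largest accumulated quantization error."""
    dim = data.shape[1]
    weights = rng.normal(size=(rows, cols, dim))
    for epoch in range(epochs):
        errors = np.zeros((rows, cols))
        for x in data:
            dist = np.linalg.norm(weights - x, axis=2)
            br, bc = np.unravel_index(np.argmin(dist), dist.shape)   # best-matching unit
            errors[br, bc] += dist[br, bc]
            # neighborhood update: the BMU and its grid neighbours move toward x
            for r in range(max(0, br - 1), min(rows, br + 2)):
                for c in range(max(0, bc - 1), min(cols, bc + 2)):
                    h = 1.0 if (r, c) == (br, bc) else 0.5
                    weights[r, c] += lr * h * (x - weights[r, c])
        if (epoch + 1) % growth_interval == 0:           # growth phase
            _, ec = np.unravel_index(np.argmax(errors), errors.shape)
            new_col = (weights[:, ec] + weights[:, min(ec + 1, cols - 1)]) / 2
            weights = np.insert(weights, ec + 1, new_col, axis=1)
            cols += 1
    return weights

# e.g. cluster 3-D joint-position features (synthetic data for illustration)
grid = train_growing_grid(np.random.default_rng(1).normal(size=(200, 3)))
```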
Procedia PDF Downloads 158
1669 Crow Search Algorithm-Based Task Offloading Strategies for Fog Computing Architectures
Authors: Aniket Ganvir, Ritarani Sahu, Suchismita Chinara
Abstract:
The rapid digitization of various aspects of life is leading to the creation of smart IoT ecosystems, where interconnected devices generate significant amounts of valuable data. However, these IoT devices face constraints such as limited computational resources and bandwidth. Cloud computing emerges as a solution by offering ample resources for offloading tasks efficiently, but it introduces latency issues, especially for time-sensitive applications. Fog computing (FC) addresses these latency concerns by bringing computation and storage closer to the network edge, minimizing data travel distance and enhancing efficiency. Offloading tasks to fog nodes or the cloud can conserve energy and extend IoT device lifespan. The offloading process is intricate, with tasks categorized as full or partial, and its optimization is an NP-hard problem. Traditional greedy search methods struggle to address the complexity of task offloading efficiently. To overcome this, the efficient crow search algorithm (ECSA) is proposed as a meta-heuristic optimization algorithm. ECSA aims to optimize computation offloading effectively, providing solutions to this challenging problem.
Keywords: IoT, fog computing, task offloading, efficient crow search algorithm
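As a point of reference, the sketch below applies a plain crow search algorithm to a task-to-node offloading problem; it is not the paper's ECSA, and the rounding-based encoding of continuous positions to node indices and the random cost matrix are assumptions made for illustration.

```python
import numpy as np

def crow_search_offloading(cost, n_crows=20, iters=100, fl=2.0, ap=0.1, seed=0):
    """Crow Search sketch for task offloading.

    cost[t, n] = estimated cost (latency/energy) of running task t on node n."""
    rng = np.random.default_rng(seed)
    n_tasks, n_nodes = cost.shape
    fitness = lambda pos: cost[np.arange(n_tasks),
                               np.clip(pos.round(), 0, n_nodes - 1).astype(int)].sum()
    x = rng.uniform(0, n_nodes - 1, size=(n_crows, n_tasks))   # crow positions
    mem = x.copy()                                             # each crow's best-known position
    mem_fit = np.array([fitness(p) for p in mem])
    for _ in range(iters):
        for i in range(n_crows):
            j = rng.integers(n_crows)                          # crow i follows a random crow j
            if rng.random() >= ap:                             # j unaware: move toward j's memory
                new = x[i] + rng.random() * fl * (mem[j] - x[i])
            else:                                              # j aware: crow i moves randomly
                new = rng.uniform(0, n_nodes - 1, size=n_tasks)
            x[i] = np.clip(new, 0, n_nodes - 1)
            f = fitness(x[i])
            if f < mem_fit[i]:                                 # update memory on improvement
                mem[i], mem_fit[i] = x[i].copy(), f
    best = mem[mem_fit.argmin()]
    return np.clip(best.round(), 0, n_nodes - 1).astype(int), mem_fit.min()

# 8 tasks, 3 candidate nodes (e.g. device / fog / cloud), random cost matrix for illustration
assignment, total_cost = crow_search_offloading(
    np.random.default_rng(1).uniform(1, 10, size=(8, 3)))
```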
Procedia PDF Downloads 58
1668 Design and Analysis of 1.4 MW Hybrid Saps System for Rural Electrification in Off-Grid Applications
Authors: Arpan Dwivedi, Yogesh Pahariya
Abstract:
In this paper, the optimal design of a hybrid standalone power supply (SAPS) system is carried out for off-grid applications in remote areas where the transmission of power is difficult. The hybrid SAPS system uses two primary energy sources, wind and solar, and in addition a diesel generator is connected to meet the load demand in case of failure of the wind and solar systems. This paper presents the mathematical modeling of a 1.4 MW hybrid SAPS system for rural electrification. It first focuses on the mathematical modeling of PV modules connected in a string, and then on the modeling of the permanent magnet wind turbine generator (PMWTG). A hybrid controller is also designed to select power from the available sources according to the load demand. The power output of the hybrid SAPS system is analyzed for meeting load demands in urban as well as rural areas.
Keywords: SAPS, DG, PMWTG, rural area, off-grid, PV module
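A common starting point for the PV side of such modeling is the single-diode equation; the sketch below solves it with Newton's method for an assumed example module (the parameter values are illustrative and not taken from the paper's 1.4 MW plant).

```python
import numpy as np

def pv_module_current(v, g=1000.0, t_cell=25.0, isc_stc=8.5, voc_stc=37.0,
                      n_cells=60, ki=0.005, rs=0.3, rsh=300.0, ideality=1.3):
    """Single-diode PV module model: current at terminal voltage v.

    g is the irradiance in W/m^2 and t_cell the cell temperature in deg C; the
    implicit I-V equation is solved with a few Newton iterations."""
    k, q = 1.380649e-23, 1.602176634e-19
    vt = ideality * n_cells * k * (t_cell + 273.15) / q      # module thermal voltage
    iph = (isc_stc + ki * (t_cell - 25.0)) * g / 1000.0      # photocurrent vs irradiance
    i0 = iph / (np.exp(voc_stc / vt) - 1.0)                  # diode saturation current
    i = iph
    for _ in range(30):                                      # Newton's method on f(i) = 0
        e = np.exp((v + i * rs) / vt)
        f = iph - i0 * (e - 1.0) - (v + i * rs) / rsh - i
        df = -i0 * e * rs / vt - rs / rsh - 1.0
        i -= f / df
    return max(i, 0.0)

# sweep the I-V curve at 800 W/m^2 and report the approximate maximum power point
volts = np.linspace(0.0, 37.0, 200)
print(max(v * pv_module_current(v, g=800.0) for v in volts))
```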
Procedia PDF Downloads 249
1667 Real-Time Control of Grid-Connected Inverter Based on LabVIEW
Authors: L. Benbaouche, H. E., F. Krim
Abstract:
In this paper, we propose a flexible and efficient real-time control scheme for a grid-connected single-phase inverter. The first step is devoted to the study and design of the controller through simulation, conducted with the LabVIEW software on the 'host' computer. The second step is running the application from the PXI 'target'. LabVIEW, combined with NI-DAQmx, gives the tools to easily build applications that use the digital-to-analog converter to generate the PWM control signals. Experimental results show the effectiveness of LabVIEW applied to power electronics.
Keywords: real-time control, LabVIEW, inverter, PWM
Procedia PDF Downloads 510
1666 Optimal Design and Simulation of a Grid-Connected Photovoltaic (PV) Power System for an Electrical Department in University of Tripoli, Libya
Authors: Mustafa Al-Refai
Abstract:
This paper presents the optimal design and simulation of a grid-connected photovoltaic (PV) system to supply electric power to meet the energy demand of the Electrical Department at the University of Tripoli, Libya. Solar radiation is the key factor determining the electricity produced by photovoltaic (PV) systems. The paper develops a novel method to calculate the solar photovoltaic generation capacity on the basis of the mean global solar radiation data available for Tripoli, Libya, and finally develops a system design of the possible plant capacity for the available roof area. MATLAB/Simulink programming tools and monthly average solar radiation data are used for the design and simulation. The specifications of the equipment are provided based on the availability of the components in the market. Simulation results and analyses are presented to validate the proposed system configuration.
Keywords: photovoltaic (PV), grid, Simulink, solar energy, power plant, solar irradiation
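The arithmetic behind sizing a roof-top plant from mean global solar radiation can be sketched as follows; the roof area, irradiation figure, efficiency, and performance ratio are assumed example values, not the paper's measured data.

```python
def pv_plant_estimate(roof_area_m2, mean_daily_irradiation_kwh_m2,
                      module_efficiency=0.18, area_utilisation=0.7,
                      performance_ratio=0.75):
    """Rough roof-top PV sizing from mean global solar radiation.

    Returns (peak capacity in kWp, expected annual energy in kWh)."""
    usable_area = roof_area_m2 * area_utilisation        # area actually covered by modules
    capacity_kwp = usable_area * module_efficiency       # 1 kW/m^2 at STC -> kWp = area * efficiency
    annual_energy = capacity_kwp * mean_daily_irradiation_kwh_m2 * 365 * performance_ratio
    return capacity_kwp, annual_energy

# e.g. a 600 m^2 roof with ~5.5 kWh/m^2/day mean irradiation (illustrative figure)
print(pv_plant_estimate(600, 5.5))   # ≈ (75.6 kWp, ~114,000 kWh/year)
```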
Procedia PDF Downloads 302
1665 A Distributed Cryptographically Generated Address Computing Algorithm for Secure Neighbor Discovery Protocol in IPv6
Authors: M. Moslehpour, S. Khorsandi
Abstract:
Due to the shortage of IPv4 addresses, the transition to IPv6 has gained significant momentum in recent years. Like the Address Resolution Protocol (ARP) in IPv4, the Neighbor Discovery Protocol (NDP) provides functions such as address resolution in IPv6. Despite its functionality, NDP is vulnerable to several attacks. To mitigate these attacks, Internet Protocol Security (IPsec) was introduced, but it was not efficient due to its limitations. Therefore, the SEND protocol was proposed for automatic protection of the auto-configuration process; it secures the neighbor discovery and address resolution processes. To defend against threats to NDP's integrity and identity, Cryptographically Generated Addresses (CGA) and asymmetric cryptography are used by SEND. Besides its advantages, SEND has considerable disadvantages, such as the computational cost and the sequential nature of the CGA generation algorithm. In this paper, we parallelize this process across network resources in order to improve it. In addition, we compare the CGA generation time in the self-computing and distributed-computing processes, focusing on the impact of malicious nodes on the CGA generation time in the network. According to the results, although malicious nodes participate in the generation process, the CGA generation time is lower than when it is computed in a stand-alone manner. With a trust management system, detecting and isolating malicious nodes is easier.
Keywords: NDP, IPsec, SEND, CGA, modifier, malicious node, self-computing, distributed-computing
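The expensive part of CGA generation is the brute-force search for a modifier whose Hash2 value has 16×Sec leading zero bits. The sketch below is a simplified version of that search, split across a local process pool to suggest how the work parallelizes; it is not the authors' distributed scheme, and the key bytes and chunk sizes are placeholders.

```python
import hashlib, os
from multiprocessing import Pool

SEC = 1  # security level: Hash2 must start with 16*SEC zero bits

def hash2_ok(modifier, pubkey):
    """Simplified RFC 3972 Hash2 test: SHA-1 over modifier || 9 zero octets || public key."""
    digest = hashlib.sha1(modifier + b"\x00" * 9 + pubkey).digest()
    return digest[:2 * SEC] == b"\x00" * (2 * SEC)   # 16*SEC leading zero bits

def search_modifier(args):
    """Brute-force one chunk of the modifier space; returns a valid modifier or None."""
    start, count, pubkey = args
    for m in range(start, start + count):
        modifier = m.to_bytes(16, "big")
        if hash2_ok(modifier, pubkey):
            return modifier
    return None

if __name__ == "__main__":
    pubkey = os.urandom(162)                 # placeholder for a DER-encoded public key
    chunk = 1 << 16
    with Pool(4) as pool:                    # distribute the search across 4 workers
        for result in pool.imap_unordered(
                search_modifier,
                ((i * chunk, chunk, pubkey) for i in range(256))):
            if result is not None:
                print("modifier found:", result.hex())
                break
```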
Procedia PDF Downloads 278
1664 Grid-Connected Photovoltaic System: System Overview and Sizing Principles
Authors: Najiya Omar, Hamed Aly, Timothy Little
Abstract:
The optimal size of a photovoltaic (PV) array is a critical factor in designing an efficient PV system, due to the dependence of PV cell performance on temperature. A high temperature can lead to voltage losses in solar panels, whereas a low temperature can cause voltage overproduction. There are two possible scenarios of inverter operation associated with an erroneous calculation of the number of PV panels: 1) if the number of panels is too small and the temperature is high, the minimum voltage required to operate the inverter will not be reached and the inverter will shut down; 2) conversely, if the number of panels is excessive and the temperature is low, the produced voltage will exceed the maximum limit of the inverter, which can cause the inverter to disconnect or even be damaged. This article assesses theoretical and practical methodologies to calculate the size and determine the topology of a PV array. The results are validated by an experimental evaluation of a 100 kW grid-connected PV system at a location in Halifax, Nova Scotia, achieving satisfactory system performance compared to previous work.
Keywords: sizing PV panels, theoretical and practical methodologies, topology of PV array, grid-connected PV
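The two failure scenarios translate directly into a minimum and maximum number of panels per string, derived from the temperature dependence of the module voltages. The sketch below shows this textbook calculation with assumed module coefficients and inverter limits, not the parameters of the 100 kW Halifax system.

```python
import math

def string_sizing(v_oc_stc, v_mp_stc, beta_voc=-0.0029, beta_vmp=-0.0040,
                  t_min=-25.0, t_max=70.0,
                  inverter_v_max=600.0, mppt_v_min=200.0):
    """Panels-per-string limits from cell-temperature extremes.

    beta_* are per-degree-C relative voltage coefficients; t_min/t_max are the
    coldest and hottest expected cell temperatures at the site."""
    v_oc_cold = v_oc_stc * (1 + beta_voc * (t_min - 25.0))   # highest open-circuit voltage
    v_mp_hot = v_mp_stc * (1 + beta_vmp * (t_max - 25.0))    # lowest MPP voltage
    max_panels = math.floor(inverter_v_max / v_oc_cold)      # never exceed inverter max input
    min_panels = math.ceil(mppt_v_min / v_mp_hot)            # stay above the MPPT window
    return min_panels, max_panels

# e.g. a module with Voc = 37 V, Vmp = 30 V at STC and a wide temperature range (illustrative)
print(string_sizing(37.0, 30.0))   # -> (minimum, maximum) panels per string
```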
Procedia PDF Downloads 366
1663 Unsteady Three-Dimensional Adaptive Spatial-Temporal Multi-Scale Direct Simulation Monte Carlo Solver to Simulate Rarefied Gas Flows in Micro/Nano Devices
Authors: Mirvat Shamseddine, Issam Lakkis
Abstract:
We present an efficient, three-dimensional, parallel, multi-scale Direct Simulation Monte Carlo (DSMC) algorithm for the simulation of unsteady rarefied gas flows in micro/nanosystems. The algorithm employs a novel spatiotemporal adaptivity scheme, which performs a fully dynamic multi-level grid adaption based on the gradients of flow macro-parameters together with an automatic temporal adaptation. The computational domain consists of a hierarchical octree-based Cartesian grid representation of the flow domain and a triangular mesh for the solid object surfaces. The hybrid mesh, combined with the spatiotemporal adaptivity scheme, allows for increased flexibility and efficient data management, rendering the framework suitable for efficient particle tracing and dynamic grid refinement and coarsening. The parallel algorithm is optimized to run DSMC simulations of strongly unsteady, non-equilibrium flows over multiple cores. The presented method is validated by comparison with benchmark studies and then employed to improve the design of micro-scale hot-wire thermal sensors in rarefied gas flows.
Keywords: DSMC, oct-tree hierarchical grid, ray tracing, spatial-temporal adaptivity scheme, unsteady rarefied gas flows
Procedia PDF Downloads 300
1662 Accelerating Side Channel Analysis with Distributed and Parallelized Processing
Authors: Kyunghee Oh, Dooho Choi
Abstract:
Even when there is no theoretical weakness in a cryptographic algorithm, side channel analysis can extract secret data from the physical implementation of a cryptosystem. The analysis is based on extra information such as timing, power consumption, electromagnetic leaks or even sound, which can be exploited to break the system. Differential Power Analysis (DPA) is one of the most popular such analyses; it computes the statistical correlations between secret-key hypotheses and power consumption. It usually requires processing huge amounts of data and takes a long time, possibly several weeks for devices with countermeasures. We suggest and evaluate methods to shorten the time needed to analyze cryptosystems; our methods include distributed computing and parallelized processing.
Keywords: DPA, distributed computing, parallelized processing, side channel analysis
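The core computation being distributed is the correlation between a hypothetical leakage model and the measured traces. The sketch below shows a correlation-based key-byte recovery in that style; it is a generic illustration (Hamming-weight model, Pearson correlation), not the authors' implementation, and the S-box and traces would be supplied by the target cipher and measurement setup.

```python
import numpy as np

HW = np.array([bin(x).count("1") for x in range(256)])   # Hamming-weight power model

def cpa_byte(traces, plaintexts, sbox):
    """Correlation power analysis for one key byte.

    traces     : (n_traces, n_samples) measured power traces
    plaintexts : (n_traces,) values of the targeted plaintext byte
    sbox       : 256-entry substitution table of the cipher (e.g. the AES S-box)
    Returns the key-byte guess with the highest absolute Pearson correlation."""
    t_centered = traces - traces.mean(axis=0)
    best_guess, best_corr = 0, 0.0
    for guess in range(256):
        hyp = HW[sbox[plaintexts ^ guess]].astype(float)  # hypothetical leakage per trace
        h_centered = hyp - hyp.mean()
        corr = h_centered @ t_centered / (
            np.linalg.norm(h_centered) * np.linalg.norm(t_centered, axis=0) + 1e-12)
        peak = np.abs(corr).max()
        if peak > best_corr:
            best_guess, best_corr = guess, peak
    return best_guess, best_corr

# The 256 key guesses (and the trace set itself) are independent work units, so they
# can be split across cores or machines, e.g. with multiprocessing.Pool.
```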
Procedia PDF Downloads 430
1661 Low Voltage Ride through Capability Techniques for DFIG-Based Wind Turbines
Authors: Sherif O. Zain Elabideen, Ahmed A. Helal, Ibrahim F. El-Arabawy
Abstract:
Due to the drastic increase in installed wind turbine capacity, grid codes are tightening their requirements, aiming to treat wind turbines like other conventional sources sooner rather than later. In this paper, an intensive review is presented of the different techniques used to add low voltage ride through capability to Doubly Fed Induction Generator (DFIG) wind turbines. A system model with a 1.5 MW DFIG wind turbine is constructed and simulated using MATLAB/SIMULINK to explore the effectiveness of the reviewed techniques.
Keywords: DFIG, grid side converters, low voltage ride through, wind turbine
Procedia PDF Downloads 425
1660 Parallel Computing: Offloading Matrix Multiplication to GPU
Authors: Bharath R., Tharun Sai N., Bhuvan G.
Abstract:
This project focuses on developing a parallel computing method aimed at optimizing matrix multiplication through GPU acceleration. Addressing algorithmic challenges, GPU programming intricacies, and integration issues, the project aims to enhance efficiency and scalability. The methodology involves algorithm design, GPU programming, and optimization techniques. Future plans include advanced optimizations, extended functionality, and integration with high-level frameworks. User engagement is emphasized through user-friendly interfaces, open-source collaboration, and continuous refinement based on feedback. The project's impact extends to significantly improving matrix multiplication performance in scientific computing and machine learning applications.
Keywords: matrix multiplication, parallel processing, CUDA, performance boost, neural networks
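The central idea that GPU matrix-multiplication kernels exploit is tiling: each thread block works on a sub-block of the output while staging the corresponding input tiles in fast shared memory. The NumPy sketch below illustrates that blocking scheme on the CPU only; the actual project would express the same loop structure as a CUDA kernel.

```python
import numpy as np

def tiled_matmul(a, b, tile=32):
    """Blocked matrix multiplication: a CPU sketch of the tiling a CUDA kernel
    performs with shared-memory tiles (one output tile per thread block)."""
    n, k = a.shape
    k2, m = b.shape
    assert k == k2, "inner dimensions must match"
    c = np.zeros((n, m), dtype=a.dtype)
    for i in range(0, n, tile):
        for j in range(0, m, tile):
            for p in range(0, k, tile):
                # each (i, j) tile of C accumulates products of an A tile and a B tile
                c[i:i+tile, j:j+tile] += a[i:i+tile, p:p+tile] @ b[p:p+tile, j:j+tile]
    return c

a, b = np.random.rand(128, 96), np.random.rand(96, 64)
assert np.allclose(tiled_matmul(a, b), a @ b)
```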
Procedia PDF Downloads 60
1659 A Study on How to Develop the Usage Metering Functions of BIM (Building Information Modeling) Software under Cloud Computing Environment
Authors: Kim Byung-Kon, Kim Young-Jin
Abstract:
As project opportunities for the Architecture, Engineering and Construction (AEC) industry have grown larger and more complex, the utilization of BIM (Building Information Modeling) technologies for 3D design and simulation practices has been increasing significantly; typical applications of BIM technologies include clash detection and design alternatives based on 3D planning, and they have been extended to construction management in the AEC industry for virtual design and construction. As of now, commercial BIM software operates in a single-user environment, which is why the initial costs for its introduction are very high. Cloud computing, one of the most promising next-generation Internet technologies, enables simple Internet devices to use the services and resources provided by BIM software. Recently in Korea, studies linking BIM and cloud computing technologies have been directed toward saving the cost of building BIM-related infrastructure and providing various BIM services for small and medium-sized enterprises (SMEs). This study addresses how to develop the usage metering functions of BIM software under a cloud computing architecture in order to archive and use BIM data and create an optimal revenue structure so that BIM services may grow spontaneously, considering the demand for cloud resources. To this end, the authors surveyed relevant cases and then analyzed needs and requirements from the AEC industry. Based on the results and findings of this survey and analysis, the authors propose how to optimally develop the usage metering functions of cloud BIM software.
Keywords: construction IT, BIM (Building Information Modeling), cloud computing, BIM-based cloud computing, 3D design, cloud BIM
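In its simplest form, usage metering records each tenant's consumption of cloud BIM services and aggregates it against a rate card. The sketch below is an assumed minimal data model and aggregation, not the metering design proposed in the paper; the service names and rates are invented for illustration.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class UsageEvent:
    """One metered interaction with a cloud BIM service (fields are assumed)."""
    tenant: str
    service: str        # e.g. "model-viewing", "clash-detection", "storage"
    quantity: float     # seconds of compute, GB stored, API calls, ...

def monthly_bill(events, rates):
    """Aggregate metered usage per tenant and price it with a per-service rate card."""
    usage = defaultdict(float)
    for e in events:
        usage[(e.tenant, e.service)] += e.quantity
    bills = defaultdict(float)
    for (tenant, service), qty in usage.items():
        bills[tenant] += qty * rates.get(service, 0.0)
    return dict(bills)

events = [UsageEvent("firm-a", "clash-detection", 120.0),
          UsageEvent("firm-a", "storage", 35.0),
          UsageEvent("firm-b", "model-viewing", 300.0)]
rates = {"clash-detection": 0.02, "storage": 0.10, "model-viewing": 0.005}
print(monthly_bill(events, rates))   # {'firm-a': 5.9, 'firm-b': 1.5}
```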
Procedia PDF Downloads 507
1658 ACO-TS: an ACO-based Algorithm for Optimizing Cloud Task Scheduling
Authors: Fahad Y. Al-dawish
Abstract:
There is a current trend among a large number of organizations and individuals toward the use of cloud computing, which many consider a significant shift in the field of computing. Cloud computing environments are distributed and parallel systems consisting of a collection of interconnected physical and virtual machines. With the increasing demand for, and benefit of, cloud computing infrastructure, diverse computing processes can be executed in the cloud environment. Many organizations and individuals around the world depend on cloud computing infrastructure to carry their applications, platforms, and infrastructure. One of the major and essential issues in this environment is allocating incoming tasks to suitable virtual machines (cloud task scheduling). Cloud task scheduling is classified as an optimization problem, and several meta-heuristic algorithms have been proposed to solve and optimize it. A good task scheduler should adapt its scheduling technique to the changing environment and the types of incoming task sets. In this research project, a cloud task scheduling methodology based on the ant colony optimization (ACO) algorithm, called ACO-TS (Ant Colony Optimization for Task Scheduling), is proposed and compared with different scheduling algorithms (Random, First Come First Serve (FCFS), and Fastest Processor to the Largest Task First (FPLTF)). ACO is a stochastic optimization search method that is used here for assigning incoming tasks to available virtual machines (VMs). The main role of the proposed algorithm is to minimize the makespan of a given task set and to maximize resource utilization by balancing the load among virtual machines. The proposed scheduling algorithm was evaluated using the CloudSim toolkit framework. Finally, after analyzing and evaluating the experimental results, we find that the proposed ACO-TS algorithm performs better than the Random, FCFS, and FPLTF algorithms in both makespan and resource utilization.
Keywords: cloud task scheduling, ant colony optimization (ACO), CloudSim, cloud computing
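To make the approach concrete, the sketch below applies a generic ant colony optimization loop to the task-to-VM assignment problem with makespan as the objective; it is an illustration of the technique, not the paper's ACO-TS, and the execution-time matrix and ACO parameters are assumed.

```python
import numpy as np

def aco_schedule(exec_time, n_ants=20, iters=50, alpha=1.0, beta=2.0,
                 rho=0.1, q=1.0, seed=0):
    """Generic ACO sketch for mapping tasks to VMs to reduce makespan.

    exec_time[t, v] = running time of task t on VM v (assumed known)."""
    rng = np.random.default_rng(seed)
    n_tasks, n_vms = exec_time.shape
    pheromone = np.ones((n_tasks, n_vms))
    heuristic = 1.0 / exec_time                          # prefer faster task-VM pairs
    best_assign, best_makespan = None, np.inf
    for _ in range(iters):
        for _ant in range(n_ants):
            load = np.zeros(n_vms)
            assign = np.empty(n_tasks, dtype=int)
            for t in range(n_tasks):
                p = (pheromone[t] ** alpha) * (heuristic[t] ** beta)
                v = rng.choice(n_vms, p=p / p.sum())     # probabilistic VM choice
                assign[t] = v
                load[v] += exec_time[t, v]
            makespan = load.max()
            if makespan < best_makespan:
                best_assign, best_makespan = assign.copy(), makespan
        pheromone *= (1 - rho)                                            # evaporation
        pheromone[np.arange(n_tasks), best_assign] += q / best_makespan  # reinforce best solution
    return best_assign, best_makespan

tasks_on_vms = np.random.default_rng(1).uniform(1, 10, size=(15, 4))
print(aco_schedule(tasks_on_vms))
```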
Procedia PDF Downloads 422
1657 Grid Connected Photovoltaic Micro Inverter
Authors: S. J. Bindhu, Edwina G. Rodrigues, Jijo Balakrishnan
Abstract:
A grid-connected photovoltaic (PV) micro inverter with good performance properties is proposed in this paper. The proposed inverter uses a voltage quadrupler, giving higher efficiency and lower voltage stress across the diodes. The stress across the diodes used in the inverter section is considerably low in the proposed converter, and the provided protection scheme can eliminate errors due to faults. The proposed converter is implemented using the perturb and observe (P&O) algorithm so that voltage fluctuation can be reduced and the maximum power point attained. Finally, simulation and experimental results are presented to demonstrate the effectiveness of the proposed converter.
Keywords: DC-DC converter, MPPT, quadrupler, PV panel
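The perturb and observe logic mentioned above can be written as a few lines of control code; the sketch below is the generic textbook form with a toy simulated panel, not the paper's controller, and the step size and measurement callbacks are assumptions.

```python
def perturb_and_observe(read_pv, set_ref_voltage, v_ref=25.0, step=0.5, iterations=1000):
    """Perturb & Observe MPPT sketch.

    read_pv()          -> (voltage, current) measured at the PV terminals
    set_ref_voltage(v) -> pushes the new voltage reference to the DC-DC converter"""
    v_prev, i_meas = read_pv()
    p_prev = v_prev * i_meas
    for _ in range(iterations):
        set_ref_voltage(v_ref)
        v, i_meas = read_pv()
        p = v * i_meas
        if p > p_prev:                      # last perturbation helped: keep direction
            v_ref += step if v > v_prev else -step
        else:                               # power dropped: reverse direction
            v_ref += -step if v > v_prev else step
        v_prev, p_prev = v, p
    return v_ref

# quick check against a toy panel whose power peaks at 30 V (illustrative model)
state = {"v": 25.0}
def set_ref(v): state["v"] = max(v, 1.0)
def read_pv():
    v = state["v"]
    p = max(240.0 - 0.5 * (v - 30.0) ** 2, 0.0)   # toy P-V curve with MPP at 30 V
    return v, p / v
print(perturb_and_observe(read_pv, set_ref))      # settles near 30 V
```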
Procedia PDF Downloads 842
1656 Control of Photovoltaic System Interfacing Grid
Authors: Zerzouri Nora
Abstract:
In this paper, the author presents the study and simulation of a photovoltaic system. A DC-DC converter is inserted to raise the voltage level and improve the operation of the PV panel by keeping the operating point at maximum power using the Perturb and Observe (P&O) technique. The connection to the grid is made by inserting a three-phase voltage inverter allowing synchronization with the grid; the inverter is controlled by PWM. The simulation results allow the author to visualize the operation of the different components of the system, as well as the behavior of the system during variations in meteorological conditions.
Keywords: photovoltaic generator PV, boost converter, P&O MPPT, PWM inverter, three phase grid
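The PWM control of the three-phase inverter mentioned above is commonly realized by comparing sinusoidal references against a triangular carrier. The sketch below generates such gate signals numerically; the grid frequency, carrier frequency, and modulation index are assumed example values, not the paper's settings.

```python
import numpy as np

def spwm_gates(f_grid=50.0, f_carrier=5000.0, m_index=0.8, cycles=1, fs=1_000_000):
    """Sine-triangle PWM sketch for a three-phase inverter.

    Returns a (3, N) array of gate signals for the upper switch of each leg."""
    t = np.arange(0, cycles / f_grid, 1 / fs)
    # triangular carrier swinging between -1 and +1
    carrier = 2 * np.abs(2 * ((t * f_carrier) % 1) - 1) - 1
    phases = [0.0, -2 * np.pi / 3, 2 * np.pi / 3]
    refs = [m_index * np.sin(2 * np.pi * f_grid * t + ph) for ph in phases]
    return np.array([ref > carrier for ref in refs]).astype(int)

gates = spwm_gates()
print(gates.shape, gates.mean(axis=1))   # duty cycles average to about 0.5 per leg
```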
Procedia PDF Downloads 121
1655 Classifying and Predicting Efficiencies Using Interval DEA Grid Setting
Authors: Yiannis G. Smirlis
Abstract:
The classification and prediction of efficiencies in Data Envelopment Analysis (DEA) are important issues, especially in large-scale problems or when new units frequently enter the set under assessment. In this paper, we contribute to the subject by proposing a grid structure based on interval segmentations of the range of values for the inputs and outputs. Such intervals, combined, define hyper-rectangles that partition the space of the problem. This structure, exploited by interval DEA models and a dominance relation, acts as a DEA pre-processor, enabling the classification and prediction of efficiency scores without applying any DEA models.
Keywords: data envelopment analysis, interval DEA, efficiency classification, efficiency prediction
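The sketch below conveys the pre-processing idea with a simplified, index-level dominance test on grid cells: a unit falling in cells with lower input intervals and higher output intervals dominates another, so the dominated unit can be classified without solving a DEA model. The interval edges and the dominance rule used here are simplifications assumed for illustration, not the paper's exact relation.

```python
import numpy as np

def grid_cell(inputs, outputs, input_edges, output_edges):
    """Locate a DMU in the hyper-rectangular grid defined by interval segmentations."""
    return (tuple(int(np.searchsorted(edges, x)) for x, edges in zip(inputs, input_edges)),
            tuple(int(np.searchsorted(edges, y)) for y, edges in zip(outputs, output_edges)))

def dominates(cell_a, cell_b):
    """Cell A dominates cell B when A's inputs lie in lower (or equal) intervals and
    A's outputs lie in higher (or equal) intervals (simplified index-level test)."""
    in_a, out_a = cell_a
    in_b, out_b = cell_b
    return (all(a <= b for a, b in zip(in_a, in_b)) and
            all(a >= b for a, b in zip(out_a, out_b)))

# two inputs, one output; interval edges chosen for illustration
input_edges = [np.array([10, 20, 30]), np.array([100, 200])]
output_edges = [np.array([50, 75, 100])]
a = grid_cell([12, 90], [95], input_edges, output_edges)
b = grid_cell([25, 180], [60], input_edges, output_edges)
print(dominates(a, b))   # True: A uses less of both inputs and produces more output
```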
Procedia PDF Downloads 164
1654 A Type-2 Fuzzy Model for Link Prediction in Social Network
Authors: Mansoureh Naderipour, Susan Bastani, Mohammad Fazel Zarandi
Abstract:
Predicting links that may occur in the future, as well as missing links, in social networks is an attractive problem in social network analysis. Granular computing can help us model the relationships between human-based systems and the social sciences in this field. In this paper, we present a model based on a granular computing approach and type-2 fuzzy logic to predict links with respect to nodes' activity and the relationship between two nodes. Our model is tested on collaboration networks. It is found that the accuracy of prediction is significantly higher than that of the type-1 fuzzy and crisp approaches.
Keywords: social network, link prediction, granular computing, type-2 fuzzy sets
Procedia PDF Downloads 327
1653 Using High Performance Computing for Online Flood Monitoring and Prediction
Authors: Stepan Kuchar, Martin Golasowski, Radim Vavrik, Michal Podhoranyi, Boris Sir, Jan Martinovic
Abstract:
The main goal of this article is to describe the online flood monitoring and prediction system Floreon+, primarily developed for the Moravian-Silesian region in the Czech Republic, and the basic process it uses for running automatic rainfall-runoff and hydrodynamic simulations along with their calibration and uncertainty modeling. Executing such a process sequentially takes a long time, which is not acceptable in the online scenario, so the use of a high-performance computing environment is proposed for all parts of the process to shorten their duration. Finally, a case study on the Ostravice river catchment is presented that shows the actual durations and the gain obtained from the parallel implementation.
Keywords: flood prediction process, high performance computing, online flood prediction system, parallelization
Procedia PDF Downloads 494
1652 DNA Multiplier: A Design Architecture of a Multiplier Circuit Using DNA Molecules
Authors: Hafiz Md. Hasan Babu, Khandaker Mohammad Mohi Uddin, Nitish Biswas, Sarreha Tasmin Rikta, Nuzmul Hossain Nahid
Abstract:
Nanomedicine and bioengineering use biological systems that can perform computing operations. In a biocomputational circuit, different types of biomolecules and DNA (Deoxyribose Nucleic Acid) are used as active components. DNA computing offers parallel processing and a large storage capacity, which distinguishes it from other computing systems. In most processors, the multiplier is treated as a core hardware block, and multiplication is one of the most time-consuming and lengthy tasks. In this paper, cost-effective DNA multipliers are designed using algorithms of molecular DNA operations and compared with conventional designs. The speed and storage capacity of a DNA multiplier are also much higher than those of a traditional silicon-based multiplier.
Keywords: biological systems, DNA multiplier, large storage, parallel processing
Procedia PDF Downloads 218
1651 Study for an Optimal Cable Connection within an Inner Grid of an Offshore Wind Farm
Authors: Je-Seok Shin, Wook-Won Kim, Jin-O Kim
Abstract:
An offshore wind farm needs to be designed carefully, considering economic and reliability aspects. Among the many decision-making problems in designing an entire offshore wind farm, this paper focuses on the inner grid layout, i.e., the connections between wind turbines as well as between wind turbines and the offshore substation. The methodology proposed in this paper determines the connections and the cable type for each connection section using k-clustering, minimum spanning tree, and cable selection algorithms. A cost evaluation is then performed in terms of investment, power loss, and reliability, and the optimal inner grid layout is determined as the one with the lowest total cost. In order to demonstrate the validity of the methodology, a case study is conducted on a 240 MW offshore wind farm, and the results show that the methodology is helpful for designing an offshore wind farm optimally.
Keywords: offshore wind farm, optimal layout, k-clustering algorithm, minimum spanning tree algorithm, cable type selection, power loss cost, reliability cost
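Of the three algorithmic steps, the minimum spanning tree is the one that fixes the cable routes; the sketch below runs Prim's algorithm over turbine coordinates with Euclidean cable lengths. It covers only the MST step with a synthetic layout; the clustering and cable-type selection of the paper are not included.

```python
import numpy as np

def prim_mst(coords):
    """Prim's minimum spanning tree over turbine positions (Euclidean cable lengths).

    Returns a list of (from, to, length) cable sections connecting all turbines."""
    n = len(coords)
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    visited = np.zeros(n, dtype=bool)
    visited[0] = True
    best_cost = dist[0].copy()          # cheapest known connection of each node to the tree
    best_from = np.zeros(n, dtype=int)
    edges = []
    for _ in range(n - 1):
        masked = np.where(visited, np.inf, best_cost)
        nxt = int(np.argmin(masked))                     # cheapest node to attach next
        edges.append((int(best_from[nxt]), nxt, float(masked[nxt])))
        visited[nxt] = True
        improve = (~visited) & (dist[nxt] < best_cost)   # update cheapest connections
        best_from[improve] = nxt
        best_cost[improve] = dist[nxt][improve]
    return edges

# e.g. ten turbine positions in metres (synthetic layout for illustration)
rng = np.random.default_rng(0)
print(prim_mst(rng.uniform(0, 5000, size=(10, 2))))
```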
Procedia PDF Downloads 386
1650 Numerical Study of Off-Design Performance of a Highly Loaded Low Pressure Turbine Cascade
Authors: Shidvash Vakilipour, Mehdi Habibnia, Rouzbeh Riazi, Masoud Mohammadi, Mohammad H. Sabour
Abstract:
The flow field passing through a highly loaded low pressure (LP) turbine cascade is numerically investigated at design and off-design conditions. The Open Field Operation And Manipulation (OpenFOAM) platform is used as the Computational Fluid Dynamics (CFD) tool. Firstly, the influence of grid resolution on the results of the k-ε, k-ω, and LES turbulence models is investigated and compared with experimental measurements. A numerical pressure undershoot appears near the end of the blade pressure surface, and it is sensitive to grid resolution and turbulence modeling. The LES model is able to resolve separation on both coarse and fine grid resolutions. Secondly, the off-design flow condition is modeled by negative and positive inflow incidence angles. The numerical experiments show that a separation bubble generated on the blade pressure side is predicted by LES. The total pressure drop is also calculated at incidence angles between −20° and +8°. The minimum total pressure drop is obtained by k-ω and LES at the design point.
Keywords: low pressure turbine, off-design performance, OpenFOAM, turbulence modeling, flow separation
Procedia PDF Downloads 363