Search results for: Linux servers
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 144

54 Modelling a Hospital as a Queueing Network: Analysis for Improving Performance

Authors: Emad Alenany, M. Adel El-Baz

Abstract:

In this paper, the flow of different classes of patients into a hospital is modelled and analyzed using the queueing network analyzer (QNA) algorithm and discrete-event simulation. Input data for QNA are the rate and variability parameters of the arrival and service times, in addition to the number of servers in each facility. The modelled patient flows closely match the real flows of a hospital in Egypt. Based on the analysis of the waiting times, two approaches are suggested for improving performance: separating patients into service groups, and adopting different service policies for sequencing patients through hospital units. Separating a specific group of patients with a higher performance target, so that they are served apart from the patients with a lower performance target, requires the same capacity while improving performance for the selected group. It is also shown that adopting the shortest processing time (SPT) and shortest remaining processing time service policies, among the other tested policies, would result in 11.47% and 13.75% reductions in average waiting time, respectively, relative to the first-come-first-served policy.
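
As a rough illustration of the kind of policy comparison reported above, the sketch below simulates a single-server queue under FCFS and non-preemptive SPT and prints the reduction in average waiting time. The arrival and service rates are illustrative assumptions, not the hospital data used in the paper.

```python
import heapq
import random

# Compare mean waiting time under FCFS vs. non-preemptive SPT in a
# single-server queue with exponential arrivals and service times.
random.seed(1)
N, LAM, MU = 20000, 0.9, 1.0   # jobs, arrival rate, service rate (assumed)

def simulate(use_spt):
    jobs, t = [], 0.0
    for _ in range(N):
        t += random.expovariate(LAM)
        jobs.append((t, random.expovariate(MU)))   # (arrival, service demand)
    queue, waits, free_at, i = [], [], 0.0, 0
    while i < len(jobs) or queue:
        # admit the next job if the queue is empty, plus every job that
        # arrives while the current service is still in progress
        while i < len(jobs) and (not queue or jobs[i][0] <= free_at):
            arr, svc = jobs[i]; i += 1
            key = svc if use_spt else arr           # SPT orders by service time
            heapq.heappush(queue, (key, arr, svc))
        _, arr, svc = heapq.heappop(queue)
        start = max(free_at, arr)
        waits.append(start - arr)
        free_at = start + svc
    return sum(waits) / len(waits)

fcfs, spt = simulate(False), simulate(True)
print(f"mean wait FCFS {fcfs:.2f}, SPT {spt:.2f}, "
      f"reduction {100 * (1 - spt / fcfs):.1f}%")
```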

Keywords: queueing network, discrete-event simulation, health applications, SPT

Procedia PDF Downloads 156
53 Reliability Analysis of Computer Centre at Yobe State University Using LRU Algorithm

Authors: V. V. Singh, Yusuf Ibrahim Gwanda, Rajesh Prasad

Abstract:

In this paper, we focus on the reliability and performance analysis of the Computer Centre (CC) at Yobe State University, Damaturu, Nigeria. The CC consists of three servers: one database mail server, one redundant server, and one for sharing with the client computers in the CC (called a local server). Considering the different possible operating states of the CC, an analysis has been done to evaluate various popular measures of reliability, such as availability, reliability, mean time to failure (MTTF), and the profit generated by the operation of the system. The system can fail completely due to failure of the router, failure of the redundant server before the mail server is repaired, or switch failure. The system can also fail partially when a local server fails. Failed devices are restored according to the Least Recently Used (LRU) technique. The system can likewise fail entirely due to a cooling failure of the server, an electricity failure, or some natural calamity such as an earthquake, fire, or tsunami. All failure rates are assumed to be constant and to follow the exponential distribution, while repair follows two types of distribution: general and the Gumbel-Hougaard family copula.
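
Under the constant-failure-rate assumption stated above, system MTTF can be estimated directly by sampling exponential lifetimes. The sketch below is a toy model of the CC topology (router and switch in series with a mail/redundant server pair); all rates are hypothetical, and neither repair nor the copula structure is modelled.

```python
import random

random.seed(7)
# Hypothetical constant failure rates (per 1000 h) -- not the paper's data.
L_ROUTER, L_SWITCH, L_MAIL, L_RED = 0.4, 0.3, 0.8, 0.8

def system_lifetime():
    # Exponential lifetimes; the redundant server backs the mail server,
    # so that pair fails only when both are down (no repair modelled).
    router = random.expovariate(L_ROUTER)
    switch = random.expovariate(L_SWITCH)
    mail, redundant = random.expovariate(L_MAIL), random.expovariate(L_RED)
    return min(router, switch, max(mail, redundant))

n = 200_000
mttf = sum(system_lifetime() for _ in range(n)) / n
print(f"estimated system MTTF: {mttf:.3f} (x1000 h)")
```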

Keywords: reliability, availability, Gumbel-Hougaard family copula, MTTF, internet data centre

Procedia PDF Downloads 501
52 Multi Cloud Storage Systems for Resource Constrained Mobile Devices: Comparison and Analysis

Authors: Rajeev Kumar Bedi, Jaswinder Singh, Sunil Kumar Gupta

Abstract:

Cloud storage is a model of online data storage in which data is stored in virtualized pools of servers hosted by third parties (CSPs) and located in different geographical locations. Cloud storage has revolutionized the way users access their data online: anywhere, anytime, and using any device, such as a tablet, mobile phone, or laptop. However, issues such as vendor lock-in, frequent service outages, data loss, and performance problems exist in single-cloud storage systems. To evade these issues, the concept of multi-cloud storage was introduced, and many multi-cloud storage systems for mobile devices are now on the market. In this article, we compare four multi-cloud storage systems for mobile devices (Otixo, Unclouded, Cloud Fuze, and Clouds) and evaluate their performance in terms of CPU usage, battery consumption, time consumption, and data usage on three mobile devices (a Nexus 5, a Moto G, and a Nexus 7 tablet) over a Wi-Fi network. Finally, open research challenges and future scope are discussed.

Keywords: cloud storage, multi cloud storage, vendor lock-in, mobile devices, mobile cloud computing

Procedia PDF Downloads 376
51 Applying Different Steganography Techniques in Cloud Computing Technology to Improve Cloud Data Privacy and Security Issues

Authors: Muhammad Muhammad Suleiman

Abstract:

Cloud computing is a versatile concept that refers to a service allowing users to outsource their data without having to worry about local storage issues. However, the most pressing issue to be addressed is maintaining a secure and reliable data repository rather than relying on untrustworthy service providers. In this study, we look at how steganography approaches, in collaboration with digital watermarking, can greatly improve the system's effectiveness and data security when used for cloud computing. The main requirement of such frameworks, where data is transferred or exchanged between servers and users, is safe data management in cloud environments, and steganography is among the most effective methods for safe communication in the cloud. Steganography is a method of writing coded messages in such a way that only the sender and recipient can safely interpret and display the information hidden in the communication channel. This study presents a new text steganography method for hiding a loaded hidden English text file in a cover English text file to ensure data protection in cloud computing. Data protection, data-hiding capacity, and time were all improved using the proposed technique.
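
The abstract does not give the hiding algorithm itself; as a minimal stand-in, the sketch below hides a secret string inside a cover text using zero-width Unicode characters, one simple form of text steganography (not necessarily the authors' method).

```python
# Minimal text-steganography sketch: encode the secret's bits as invisible
# zero-width characters appended to the cover text.
ZW0, ZW1 = "\u200b", "\u200c"   # zero-width space / non-joiner encode 0 / 1

def hide(cover: str, secret: str) -> str:
    bits = "".join(f"{b:08b}" for b in secret.encode("utf-8"))
    payload = "".join(ZW1 if bit == "1" else ZW0 for bit in bits)
    return cover + payload       # invisible suffix on the cover text

def reveal(stego: str) -> str:
    bits = "".join("1" if ch == ZW1 else "0"
                   for ch in stego if ch in (ZW0, ZW1))
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

stego = hide("This report is entirely ordinary.", "cloud-key-42")
print(reveal(stego))             # -> cloud-key-42
```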

Keywords: cloud computing, steganography, information hiding, cloud storage, security

Procedia PDF Downloads 161
50 Analysis of Spamming Threats and Some Possible Solutions for Online Social Networking Sites (OSNS)

Authors: Dilip Singh Sisodia, Shrish Verma

Abstract:

Spamming is the most common issue seen nowadays on the Internet, especially on online social networking sites (such as Facebook, Twitter, and Google+). Spam messages waste Internet bandwidth and the storage space of servers. On social network sites, spammers often disguise themselves by creating fake accounts and hijacking users' accounts for personal gain; they behave like normal users and continually change their spamming strategies. To counter this, most modern spam-filtering solutions are deployed on the receiver side, where they are good at filtering spam for end users. In this paper we present some spamming techniques, their behaviour, and possible solutions. We analyze how spammers enter online social networking sites (OSNSs), how they target them, and the techniques they use. The five spamming techniques discussed are clickjacking, socially engineered attacks, cross-site scripting, URL shortening, and drive-by download. We use the Elgg framework to demonstrate some of these spamming threats and to implement the corresponding solutions.

Keywords: online social networking sites, spam, attacks, internet, clickjacking / likejacking, drive-by-download, URL shortening, networking, socially engineered attacks, elgg framework

Procedia PDF Downloads 304
49 Automatic Verification Technology of Virtual Machine Software Patch on IaaS Cloud

Authors: Yoji Yamato

Abstract:

In this paper, we propose an automatic verification technology for software patches in user virtual environments on IaaS cloud, which decreases the verification costs of patches. IaaS services have spread widely, and many users can customize virtual machines on IaaS cloud like their own private servers. Regarding software patches for the OS or middleware installed on virtual machines, users need to apply and verify these patches by themselves, which increases their operation costs. Our proposed method replicates user virtual environments, extracts verification test cases for those environments from a test case DB, distributes patches to virtual machines in the replicated environments, and conducts the test cases there automatically. We have implemented the proposed method on OpenStack using Jenkins and confirmed its feasibility. Using the implementation, we confirmed the effectiveness of our proposed idea of a two-tier abstraction of software functions and test cases in reducing test-case creation effort. We also evaluated the automatic verification performance of environment replication, test-case extraction, and test-case execution.

Keywords: OpenStack, cloud computing, automatic verification, Jenkins

Procedia PDF Downloads 454
48 Privacy-Preserving Medical Decision Support System via C5 Algorithm

Authors: Swati Kishor Zode, Rahul Ambekar

Abstract:

Data mining is the extraction of interesting patterns or knowledge from huge amounts of data, with decisions made according to the relevant extracted information. Recently, with the explosive development of the Internet and of data storage and processing techniques, privacy preservation has become one of the major concerns in data mining, and various techniques and methods have been developed for privacy-preserving data mining. In a clinical decision support system, the decision is made on the basis of data retrieved from remote servers via the Internet in order to diagnose the patient. In this paper, the fundamental idea is to increase the accuracy of a decision support system for multiple diseases while protecting patient data during communication between the clinician (client) side and the server side. A privacy-preserving protocol for a clinical decision support network is proposed so that patient information always remains encrypted during the diagnosis process while the accuracy is maintained. To enhance the accuracy of the decision support system for the various diseases, C5.0 classifiers are used, and to preserve privacy, a homomorphic encryption algorithm, the Paillier cryptosystem, is utilized.
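
The Paillier cryptosystem mentioned above is additively homomorphic: the product of two ciphertexts decrypts to the sum of the plaintexts, which is what lets a server combine encrypted values without ever seeing them. A toy-sized sketch (demo primes only; real deployments use keys of 2048 bits or more):

```python
from math import gcd
import random

# Minimal Paillier sketch demonstrating the additive homomorphism.
p, q = 293, 433                  # demo primes; far too small for real use
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
g = n + 1
mu = pow(lam, -1, n)             # valid since L(g^lam mod n^2) = lam mod n

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n

c1, c2 = encrypt(20), encrypt(22)
print(decrypt((c1 * c2) % n2))   # 42: ciphertext product = plaintext sum
```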

Keywords: classification, homomorphic encryption, clinical decision support, privacy

Procedia PDF Downloads 309
47 Interplay of Power Management at Core and Server Level

Authors: Jörg Lenhardt, Wolfram Schiffmann, Jörg Keller

Abstract:

As the feature sizes of recent Complementary Metal Oxide Semiconductor (CMOS) devices decrease, the influence of static power comes to dominate their energy consumption. Thus, power savings that rely on Dynamic Voltage and Frequency Scaling (DVFS) are diminishing, and the temporary shutdown of cores or other microchip components becomes more worthwhile. A consequence of powering off unused parts of a chip is that the relative difference between idle and fully loaded power consumption increases. This means that future chips and whole server systems gain more power-saving potential through power-aware load balancing, whereas formerly this power-saving approach had only limited effect and thus was not widely adopted. While powering off complete servers has been used to save energy, it will be superfluous in many cases once cores can be powered down; an important advantage that comes with this is a largely reduced time to respond to increased computational demand. We include the above developments in a server power model and quantify the advantage. Our conclusion is that strategies from data centers on when to power off server systems might be used in the future at core level, while load balancing mechanisms previously used at core level might be used in the future at server level.
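
The argument hinges on the linear server power model P(u) = P_idle + (P_max - P_idle) * u. The numbers below are illustrative assumptions, not the paper's model parameters, but they show why consolidating load and powering off idle machines (or, analogously, cores) beats even balancing when idle power is large:

```python
import math

# Toy comparison: spread the load evenly vs. consolidate and power off
# idle servers. Wattages and cluster size are illustrative assumptions.
P_IDLE, P_MAX, SERVERS = 100.0, 200.0, 10

def power(u):                       # linear model, utilisation u in [0, 1]
    return P_IDLE + (P_MAX - P_IDLE) * u

def balanced(total_load):           # all servers on, load spread evenly
    return SERVERS * power(total_load / SERVERS)

def consolidated(total_load):       # pack load, power off idle servers
    full = math.floor(total_load)   # servers running at u = 1.0
    rest = total_load - full
    return full * power(1.0) + (power(rest) if rest > 0 else 0.0)

for load in (2.0, 5.0, 8.0):        # total load in "server equivalents"
    print(f"load {load}: balanced {balanced(load):6.1f} W, "
          f"consolidated {consolidated(load):6.1f} W")
```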

Keywords: power efficiency, static power consumption, dynamic power consumption, CMOS

Procedia PDF Downloads 196
46 On the Monitoring of Structures and Soils by Tromograph

Authors: Magarò Floriana, Zinno Raffaele

Abstract:

Since 2009, with the coming into force of the January 14, 2008 Ministerial Decree "New technical standards for construction" and the explanatory ministerial circular No. 617 of February 2, 2009, the question of seismic hazard and the design of seismic-resistant structures in Italy has acquired increasing importance. One of the most discussed aspects in recent Italian and international scientific literature concerns the dynamic interaction between soil and structure, and the effects which dynamic coupling may have on individual buildings. Indeed, from systems dynamics it is well known that resonance can have catastrophic effects on a stimulated system, leading to a response that is not compatible with the predictions made in the design phase. The method used in this study to estimate the oscillation frequency of the structure is the analysis of the HVSR (Horizontal to Vertical Spectral Ratio), which allows a very simple evaluation of the oscillation frequencies of both soil and structures. The tool used for data acquisition is an experimental digital tromograph, an engineered development of the experimental Languamply RE 4500 tromograph, equipped with an engineered amplification circuit and improved electronically using extremely small electronic components (each individual amplifier measures 16 x 26 mm). This tromograph is a modular system, completely "free" and "open", designed to interface Windows, Linux, OSX and Android with the outside world. It is an amplifier designed to carry out microtremor measurements, yet also useful for seismological and seismic measurements in general. The small size of the individual amplifiers allows for a very clean signal, since positioning an amplifier a few centimetres from the geophone eliminates cable "antenna" phenomena; this is a necessary characteristic for obtaining clean signals at the very low voltages to be measured.
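
As a minimal sketch of the HVSR computation (run here on synthetic noise with an assumed 2.5 Hz site resonance, not real tromograph records), the spectral ratio of the two horizontal components to the vertical one peaks near the resonance frequency:

```python
import numpy as np

fs, T, f0 = 128.0, 600.0, 2.5          # sample rate, duration, assumed resonance
t = np.arange(0, T, 1 / fs)
rng = np.random.default_rng(0)

def channel(amplified):
    sig = rng.normal(size=t.size)      # ambient noise
    if amplified:                      # horizontals carry the site resonance
        sig += 3 * np.sin(2 * np.pi * f0 * t + rng.uniform(0, 2 * np.pi))
    return sig

ns, ew, v = channel(True), channel(True), channel(False)

def spectrum(x):
    amp = np.abs(np.fft.rfft(x * np.hanning(x.size)))
    return np.convolve(amp, np.ones(9) / 9, mode="same")   # light smoothing

f = np.fft.rfftfreq(t.size, 1 / fs)
hvsr = np.sqrt(spectrum(ns) * spectrum(ew)) / spectrum(v)
band = (f > 0.5) & (f < 20.0)
print(f"HVSR peak at ~{f[band][np.argmax(hvsr[band])]:.2f} Hz")
```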

Keywords: microtremor, HVSR, tromograph, structural engineering

Procedia PDF Downloads 380
45 Blockchain for Transport: Performance Simulations of Blockchain Network for Emission Monitoring Scenario

Authors: Dermot O'Brien, Vasileios Christaras, Georgios Fontaras, Igor Nai Fovino, Ioannis Kounelis

Abstract:

With the rise of the Internet of Things (IoT), 5G, and blockchain (BC) technologies, vehicles are becoming ever more connected and are already transmitting substantial amounts of data to the original equipment manufacturers' (OEMs') servers. This data could be used to help detect mileage fraud and enable more accurate vehicle emissions monitoring. This would not only help regulators but could enable applications such as permitting efficient drivers to pay less tax, geofencing for air quality improvement, as well as pollution tolling and trading platforms for transport-related businesses and EU citizens. Other applications could include traffic management and shared mobility systems. BC enables the transmission of data with additional security and removes single points of failure while maintaining data provenance, identity ownership, and the possibility to retain varying levels of privacy depending on the requirements of the applied use case. This research performs simulations of vehicles interacting with European member state authorities and European Commission BC nodes running Hyperledger Fabric, and explores whether the technology is currently feasible for transport applications such as the emission monitoring use case.

Keywords: future transportation systems, technological innovations, policy approaches for transportation future, economic and regulatory trends, blockchain

Procedia PDF Downloads 143
44 Performance Evaluation of Distributed Deep Learning Frameworks in Cloud Environment

Authors: Shuen-Tai Wang, Fang-An Kuo, Chau-Yi Chou, Yu-Bin Fang

Abstract:

2016 became the year of the Artificial Intelligence explosion. AI technologies are maturing to the point that most of the world's well-known tech giants are making large investments to increase their capabilities in AI. Machine learning is the science of getting computers to act without being explicitly programmed, and deep learning is a subset of machine learning that uses deep neural networks to train a machine to learn features directly from data. Deep learning realizes many machine learning applications that expand the field of AI. At present, deep learning frameworks are widely deployed on servers for deep learning applications in both academia and industry. In training deep neural networks there are many standard processes and algorithms, but the performance of different frameworks may differ. In this paper we evaluate the running performance of two state-of-the-art distributed deep learning frameworks that run training calculations in parallel over multiple GPUs and multiple nodes in our cloud environment. We evaluate the training performance of the frameworks with the ResNet-50 convolutional neural network and analyze the factors that determine the performance of both distributed frameworks. Through the experimental analysis, we identify the overheads that could be further optimized. The main contribution is that the evaluation results provide optimization directions for both performance tuning and algorithmic design.

Keywords: artificial intelligence, machine learning, deep learning, convolutional neural networks

Procedia PDF Downloads 174
43 Malware Beaconing Detection by Mining Large-scale DNS Logs for Targeted Attack Identification

Authors: Andrii Shalaginov, Katrin Franke, Xiongwei Huang

Abstract:

One of the leading problems in cyber security today is the emergence of targeted attacks conducted by adversaries with access to sophisticated tools. These attacks usually steal senior-level employees' system privileges in order to gain unauthorized access to confidential knowledge and valuable intellectual property. The malware used for the initial compromise of the systems is sophisticated and may target zero-day vulnerabilities. In this work we utilize a common malware behaviour called "beaconing", meaning that infected hosts communicate with Command and Control (C2) servers at regular intervals with relatively small time variations. By analysing such beacon activity through passive network monitoring, it is possible to detect potential malware infections, so we focus on time gaps as indicators of possible C2 activity in targeted enterprise networks. We represent DNS log files as a graph whose vertices are destination domains and whose edges carry timestamps. Then, using four periodicity-detection algorithms for each pair of internal-external communications, we check the timestamp sequences to identify beaconing activity. Finally, based on the graph structure, we infer the existence of other infected hosts and malicious domains enrolled in the attack activities.
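
A compressed sketch of the time-gap idea (the log records are made-up examples, not real DNS data, and a coefficient-of-variation test stands in for the paper's four periodicity-detection algorithms): group queries by host-domain pair and flag pairs whose inter-arrival gaps are suspiciously regular.

```python
from collections import defaultdict
from statistics import mean, stdev

# Toy DNS log: (source host, queried domain, timestamp in seconds).
log = [
    ("10.0.0.5", "updates.example.com", 0), ("10.0.0.5", "updates.example.com", 60),
    ("10.0.0.5", "updates.example.com", 121), ("10.0.0.5", "updates.example.com", 180),
    ("10.0.0.5", "updates.example.com", 241), ("10.0.0.9", "news.example.org", 3),
    ("10.0.0.9", "news.example.org", 410), ("10.0.0.9", "news.example.org", 415),
]

times = defaultdict(list)
for host, domain, ts in log:
    times[(host, domain)].append(ts)

for (host, domain), ts in times.items():
    if len(ts) < 4:                       # too few samples to judge
        continue
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    cv = stdev(gaps) / mean(gaps)         # small cv => metronome-like traffic
    if cv < 0.1:
        print(f"possible beacon: {host} -> {domain} (~{mean(gaps):.0f}s period)")
```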

Keywords: malware detection, network security, targeted attack, computational intelligence

Procedia PDF Downloads 228
42 Comparison of Authentication Methods in Internet of Things Technology

Authors: Hafizah Che Hasan, Fateen Nazwa Yusof, Maslina Daud

Abstract:

The Internet of Things (IoT) is a powerful industry system in which end-devices are interconnected and automated, allowing the devices to analyze data and execute actions based on the analysis. IoT leverages technologies such as Radio-Frequency Identification (RFID) and Wireless Sensor Networks (WSNs), including mobile and sensor devices, and these technologies contribute to its evolution. However, as more devices are connected to each other over the Internet and data from various sources is exchanged between things, the confidentiality of the data becomes a major concern. This paper focuses on authentication, one of the major challenges in IoT, which must be in place to preserve data integrity and confidentiality. A few solutions are reviewed, based on papers from the last few years. One proposed solution secures the communication between IoT devices and cloud servers with an Elliptic Curve Cryptography (ECC) based mutual authentication protocol, focusing on Hyper Text Transfer Protocol (HTTP) cookies as the security parameter. The next proposed solution uses a keyed-hash scheme protocol to enable IoT devices to authenticate each other without the presence of a central control server. Another proposed solution uses a Physical Unclonable Function (PUF) based mutual authentication protocol; it emphasizes tamper-resistant and resource-efficient technology and amounts to a three-way-handshake security protocol.
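
As an illustration of the keyed-hash approach, the sketch below runs a toy challenge-response handshake in which both parties prove knowledge of a pre-shared key over fresh nonces; the message layout is an assumption for illustration, not the exact protocol from the reviewed paper.

```python
import hmac, hashlib, os

KEY = os.urandom(32)                    # pre-shared between device A and B

def tag(*parts: bytes) -> bytes:
    # HMAC over the concatenated message parts proves knowledge of KEY.
    return hmac.new(KEY, b"|".join(parts), hashlib.sha256).digest()

# A challenges B with a fresh nonce ...
nonce_a = os.urandom(16)
# ... B answers with its own nonce and an HMAC binding both nonces.
nonce_b = os.urandom(16)
resp_b = tag(b"B", nonce_a, nonce_b)
# A verifies B, then proves itself in the reverse direction.
assert hmac.compare_digest(resp_b, tag(b"B", nonce_a, nonce_b))
resp_a = tag(b"A", nonce_b, nonce_a)
assert hmac.compare_digest(resp_a, tag(b"A", nonce_b, nonce_a))
print("mutual authentication complete (toy run)")
```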

Keywords: Internet of Things (IoT), authentication, PUF, ECC, keyed-hash scheme protocol

Procedia PDF Downloads 230
41 Performance Evaluation of a Prioritized, Limited Multi-Server Processor-Sharing System that Includes Servers with Various Capacities

Authors: Yoshiaki Shikata, Nobutane Hanayama

Abstract:

We present a prioritized, limited multi-server processor-sharing (PS) system in which each server has a different capacity and N (≥2) priority classes are allowed in each PS server. In each prioritized, limited server, a different service ratio is assigned to each class of request, and the number of requests processed simultaneously is limited to less than a certain number. Routing strategies for such prioritized, limited multi-server PS systems that take into account the capacity of each server are also presented, and a performance evaluation procedure for these strategies is discussed. Practical performance measures of these strategies, such as loss probability, mean waiting time, and mean sojourn time, are evaluated via simulation. In a PS server, at the arrival (or departure) of a request, the extension (or shortening) of the remaining sojourn time of each request receiving service can be calculated from the number of requests of each class and the priority ratio. Using a simulation program which executes these events and calculations, the performance of the proposed prioritized, limited multi-server PS rule can be analyzed. From the evaluation results, the most suitable routing strategy for the loss or waiting system is identified.
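
The per-event calculation described above can be sketched as follows: on every arrival or departure, each in-service request's completion rate is recomputed from the class weights, which stretches or shrinks its estimated remaining sojourn time. The capacity and weights here are illustrative assumptions.

```python
CAPACITY = 1.0                      # work units per second for this server
WEIGHT = {1: 2.0, 2: 1.0}           # priority ratio: class 1 served 2x faster

class Request:
    def __init__(self, cls, work):
        self.cls, self.remaining = cls, work

def rates(active):
    # Weighted processor sharing: capacity is split by class weight.
    total = sum(WEIGHT[r.cls] for r in active)
    return {id(r): CAPACITY * WEIGHT[r.cls] / total for r in active}

def advance(active, dt):
    # Drain each request's remaining work at its current share.
    r = rates(active)
    for req in active:
        req.remaining -= r[id(req)] * dt

active = [Request(1, 4.0), Request(2, 4.0)]
r = rates(active)
print("sojourn estimates:", [req.remaining / r[id(req)] for req in active])
advance(active, 1.0)                # one second passes ...
active.append(Request(1, 4.0))      # ... then a new class-1 request arrives
r = rates(active)                   # rates drop, so estimates extend
print("re-estimated:    ", [req.remaining / r[id(req)] for req in active])
```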

Keywords: processor sharing, multi-server, various capacity, N-priority classes, routing strategy, loss probability, mean sojourn time, mean waiting time, simulation

Procedia PDF Downloads 305
40 Electronic Equipment Failure due to Corrosion

Authors: Yousaf Tariq

Abstract:

There are many factors involved in electronic equipment failure, e.g., temperature, humidity, dust, and smoke. Corrosive gases are another factor that may contribute to equipment failure, and the sensitivity of electronic equipment increased when "lead-free" regulations were enforced on manufacturers. In data centers, equipment such as hard disks, servers, and printed circuit boards has been exposed to gaseous contamination due to this increased sensitivity. There is a worldwide standard to protect industrial electronics from corrosive gases, well known as ANSI/ISA S71.04-1985, Environmental Conditions for Control Systems: Airborne Contaminants. ASHRAE Technical Committee (TC) 9.9 members also recommended the ISA standard in their whitepaper on gaseous and particulate contamination guidelines for data centers. TC 9.9 members represent some of the major IT equipment manufacturers, e.g., IBM, HP, and Cisco. As per standard practice, the first step is to monitor air quality in the data center. If the contamination level exceeds severity level G1, gas-phase air filtration is required in addition to dust/smoke air filtration. It is important that outside fresh air entering the data center passes through a pressurization/recirculation process in order to absorb corrosive gases and maintain the level within the specified limit. It is also important that air quality monitoring be conducted once a year, and temperature and humidity should likewise be monitored as per standard practice to maintain them within specified limits.

Keywords: corrosive gases, corrosion, electronic equipment failure, ASHRAE, hard disk

Procedia PDF Downloads 302
39 A Bi-Objective Model to Optimize the Total Time and Idle Probability for Facility Location Problem Behaving as M/M/1/K Queues

Authors: Amirhossein Chambari

Abstract:

This article proposes a bi-objective model for the facility location problem subject to congestion (overcrowding), motivated by applications such as locating servers for Internet mirror sites, communication networks, and one-server systems. The model considers situations in which immobile (fixed) service facilities are congested (queued) by stochastic demand and behave as M/M/1/K queues. We consider two simultaneous perspectives for this problem: (1) customers, who desire to limit the time spent accessing and waiting for service, and (2) the service provider, who desires to limit the average facility idle time. A bi-objective model is set up for the facility location problem with two objective functions: (1) minimizing the sum of expected total traveling and waiting time (customers) and (2) minimizing the average facility idle-time percentage (service provider). The proposed model belongs to the class of mixed-integer nonlinear programming models and to the class of NP-hard problems. To solve the model, controlled elitist non-dominated sorting genetic algorithms (controlled NSGA-II) and controlled elitist non-dominated ranking genetic algorithms (NRGA-I) are proposed. The two proposed metaheuristic algorithms are evaluated using standard multi-objective metrics. Finally, the results are analyzed and some conclusions are given.
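
Both objectives follow from the standard M/M/1/K steady-state distribution; the sketch below computes the idle probability, the blocking probability, and the mean sojourn time (via Little's law applied to the effective arrival rate) for a few illustrative parameter choices, which are assumptions and not the paper's instances.

```python
def mm1k(lam, mu, K):
    rho = lam / mu
    if abs(rho - 1.0) < 1e-12:                      # degenerate uniform case
        probs = [1.0 / (K + 1)] * (K + 1)
    else:
        norm = 1 - rho ** (K + 1)
        probs = [(1 - rho) * rho ** n / norm for n in range(K + 1)]
    p_idle, p_block = probs[0], probs[K]
    L = sum(n * p for n, p in enumerate(probs))     # mean number in system
    lam_eff = lam * (1 - p_block)                   # admitted arrival rate
    W = L / lam_eff                                 # Little's law: mean sojourn
    return p_idle, p_block, W

for mu in (1.2, 1.5, 2.0):                          # candidate facility speeds
    p_idle, p_block, W = mm1k(lam=1.0, mu=mu, K=5)
    print(f"mu={mu}: idle {p_idle:.2%}, loss {p_block:.2%}, mean time {W:.2f}")
```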

Keywords: bi-objective, facility location, queueing, controlled NSGA-II, NRGA-I

Procedia PDF Downloads 551
38 A Unified Webcam Proctoring Solution on Edge

Authors: Saw Thiha, Jay Rajasekera

Abstract:

A boom in video conferencing has generated millions of hours of video data daily to be analyzed. However, such enormous data poses certain scalability issues for efficient analysis, let alone analysis in real time, as online conferences can involve hundreds of people and can last for hours. This paper proposes an efficient online proctoring solution that can analyze online conferences in real time on edge devices such as Android, iOS, and desktops. Since the computation can be done upfront on the devices where online conferences take place, it scales well without requiring intensive resources such as GPU servers and complex cloud infrastructure. According to the linear models, face orientation does indeed impact perceived eye openness. Also, the proposed z-score facial landmark standardization was proven to be functional in detecting face orientation and contributed to classifying eye blinks with a single eyelid-distance computation, while achieving a better F1 score and accuracy than the Eye Aspect Ratio (EAR) threshold method. Last but not least, the authors implemented the solution natively in the MediaPipe framework and open-sourced it along with the reproducible experimental results on GitHub. The solution provides face orientation, eye blink, facial activity, and translation detection out of the box and is highly customizable and extensible.
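
A small sketch of the two measurements discussed: the classic six-landmark Eye Aspect Ratio used as the baseline, and a z-score standardization of landmark coordinates so distances stay comparable across face scale and position. The landmark coordinates below are made-up examples, not MediaPipe output.

```python
import numpy as np

def eye_aspect_ratio(eye):               # eye: 6 (x, y) points in EAR order
    v1 = np.linalg.norm(eye[1] - eye[5])
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])
    return (v1 + v2) / (2.0 * h)

def zscore_landmarks(pts):               # pts: (N, 2) facial landmarks
    # Per-axis z-score: removes face position and scale from the coordinates.
    return (pts - pts.mean(axis=0)) / pts.std(axis=0)

eye = np.array([[0.0, 1.0], [1.0, 1.4], [2.0, 1.4],
                [3.0, 1.0], [2.0, 0.6], [1.0, 0.6]])
print(f"EAR open:   {eye_aspect_ratio(eye):.3f}")
closed = eye.copy(); closed[[1, 2], 1] = 1.05; closed[[4, 5], 1] = 0.95
print(f"EAR closed: {eye_aspect_ratio(closed):.3f}")   # drops sharply on a blink

pts = np.vstack([eye, eye + [5.0, 0.0]])   # toy landmark set
print("standardized spread:", zscore_landmarks(pts).std(axis=0))  # ~[1, 1]
```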

Keywords: android, desktop, edge computing, blink, face orientation, facial activity and translation, MediaPipe, open source, real-time, video conference, web, iOS, Z score facial landmark standardization

Procedia PDF Downloads 71
37 Client Hacked Server

Authors: Bagul Abhijeet

Abstract:

Background: The client-server model is the backbone of today's Internet communication, in which a normal user cannot have control over a particular website or server. Using the same processing model, however, one can gain unauthorized access to a particular server. In this paper, we discuss an application scenario of hacking a simple website or server, consisting of an unauthorized way to access the server database. The application autonomously takes direct access of a simple website or server and retrieves all the essential information maintained by the administrator. In this system, the IP address of the server is given as input to retrieve the user-id and password of the server. This breaks the administrative security of the server and acquires control of the server database, whereas a virus helps to escape server security by crashing the whole server. Objective: To control malicious attacks, protect government websites, and uncover illegal hacker activity. Results: After implementing different hacking as well as non-hacking techniques, the system hacks simple websites with normal security credentials. It provides access to the server database and allows the attacker to perform database operations from the client machine. The experiments, run against different servers, provide satisfactory results. Conclusion: In this paper, we have presented a view of hacking the server that includes some hacking as well as non-hacking methods. These algorithms and methods provide an efficient way to hack a server database, and breaking the network security allows a new and better security framework to be introduced. The term "hacking" should not only be considered for its illegal activities but should also be used to strengthen our global network.

Keywords: hacking, vulnerabilities, dummy request, virus, server monitoring

Procedia PDF Downloads 223
36 SISSLE in Consensus-Based Ripple: Some Improvements in Speed, Security, Last Mile Connectivity and Ease of Use

Authors: Mayank Mundhra, Chester Rebeiro

Abstract:

Cryptocurrencies are rapidly finding wide application in areas such as real-time gross settlement and payment systems. Ripple is a cryptocurrency that has gained prominence with banks and payment providers. It solves the Byzantine Generals Problem with its Ripple Protocol Consensus Algorithm (RPCA), in which each server maintains a list of servers, called the Unique Node List (UNL), that represents the network for that server and will not collectively defraud it. The server believes that the network has come to a consensus when members of the UNL come to a consensus on a transaction. In this paper we improve Ripple to achieve better speed, security, last mile connectivity, and ease of use. We implement guidelines and automated systems for building and maintaining UNLs for resilience, robustness, improved security, and efficient information propagation. We enhance the system to ensure that each server receives information from across the whole network rather than just from its UNL members. We also introduce the paradigm of UNL overlap as a function of information propagation and the trust a server assigns to its own UNL. Our design not only reduces vulnerabilities such as eclipse attacks, but also makes it easier to identify malicious behaviour and entities attempting to fraudulently double-spend or stall the system. We provide experimental evidence of the benefits of our approach over the current Ripple scheme: we observe speedups of ≥ 4.97x and success-rate improvements of 98.22x for information propagation, and speedups of ≥ 3.16x and success-rate improvements of 51.70x in consensus.

Keywords: Ripple, Kelips, unique node list, consensus, information propagation

Procedia PDF Downloads 106
35 Distributed Real-Time Range Query Approximation in a Streaming Environment

Authors: Simon Keller, Rainer Mueller

Abstract:

Continuous range queries are a common means of handling mobile clients in high-density areas. Most existing approaches focus on settings in which the range queries for location-based services are more or less static, while the mobile clients within the ranges move. We focus on a category called dynamic real-time range queries (DRRQ), assuming that both the clients requested by the query and the inquirers are mobile. In consequence, the query parameters and the query results change continuously. This leads to two requirements: the ability to deal with an arbitrarily high number of mobile nodes (scalability) and the real-time delivery of range query results. In this paper, we present adaptive quad streaming (AQS), a highly decentralized solution for the requirements of DRRQs. AQS approximates the query results in favor of controlled real-time delivery and guaranteed scalability. While prior works commonly optimize data structures on the involved servers, AQS relies on a highly distributed cell structure, without such data structures, that automatically adapts to changing client distributions. Instead of the commonly used request-response approach, we apply a lightweight streaming method in which no bidirectional communication and no storage or maintenance of queries are required at all.

Keywords: approximation of client distributions, continuous spatial range queries, mobile objects, streaming-based decentralization in spatial mobile environments

Procedia PDF Downloads 115
34 An Investigation of Performance Versus Security in Cognitive Radio Networks with Supporting Cloud Platforms

Authors: Kurniawan D. Irianto, Demetres D. Kouvatsos

Abstract:

The growth of wireless devices affects the availability of limited frequencies or spectrum bands, since spectrum is a natural resource that cannot be expanded. Many studies of spectrum availability have been carried out, and they show that licensed frequencies are idle most of the time. Cognitive radio is one of the solutions to this problem: a promising technology that allows unlicensed users, known as secondary users (SUs), to access licensed bands without interfering with licensed users, or primary users (PUs). As cloud computing has become popular in recent years, cognitive radio networks (CRNs) can be integrated with cloud platforms. One of the important issues in CRNs is security. It is a problem because CRNs use radio frequencies as the transmission medium and therefore share the same issues as wireless communication systems. Another critical issue in CRNs is performance. Security has an adverse effect on performance, and there are trade-offs between the two. The goal of this paper is to investigate the performance-security trade-off in CRNs with supporting cloud platforms. Queueing network models with preemptive-resume and preemptive-repeat-identical priority are applied to measure the impact of security on performance in CRNs with and without a cloud platform. The generalized exponential (GE) type distribution is used to reflect the bursty inter-arrival and service times at the servers. The results show that the best performance is obtained when security is disabled and the cloud platform is enabled.

Keywords: performance vs. security, cognitive radio networks, cloud platforms, GE-type distribution

Procedia PDF Downloads 323
33 Acoustic Modeling of a Data Center with a Hot Aisle Containment System

Authors: Arshad Alfoqaha, Seth Bard, Dustin Demetriou

Abstract:

A new multi-physics acoustic modeling approach using ANSYS Mechanical FEA and FLUENT CFD methods is developed for modeling servers mounted in racks, such as IBM Z and IBM Power Systems, in data centers. This new approach allows users to determine the thermal and acoustic conditions that people are exposed to within the data center. The sound pressure level (SPL) exposure of a human working inside a hot aisle containment system in the data center is studied. The SPL is analyzed at the noise source, at the human body, on the rack walls, on the containment walls, and on the ceiling and flooring plenum walls. In the acoustic CFD simulation, it is assumed that a four-inch-diameter sphere with monopole acoustic radiation, placed in the middle of each rack, provides a single-source representation of all noise sources within the rack. The Ffowcs Williams-Hawkings (FWH) acoustic model is employed. The target frequency is 1000 Hz, and the total simulation time for the transient analysis is 1.4 seconds, with a very small time step of 3e-5 seconds and 10 iterations to ensure convergence and accuracy. A User Defined Function (UDF) is developed to accurately simulate the acoustic noise source, and a dynamic mesh is applied to ensure acoustic wave propagation. Initial validation of the acoustic CFD simulation is performed against a closed-form solution for the spherical propagation of an acoustic point source.
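
The closed-form check mentioned at the end is the free-field law for a monopole point source, SPL(r) = SPL(r0) - 20*log10(r/r0), i.e. a 6 dB drop per doubling of distance. A sketch with an assumed source level (the 85 dB figure is illustrative, not a value from the paper):

```python
import math

SPL_1M = 85.0                       # assumed level at r0 = 1 m, in dB

def spl(r, spl_r0=SPL_1M, r0=1.0):
    # Spherical spreading of a monopole point source in the free field.
    return spl_r0 - 20.0 * math.log10(r / r0)

for r in (1.0, 2.0, 4.0, 8.0):      # doubling the distance each step
    print(f"r = {r:4.1f} m -> SPL = {spl(r):5.1f} dB")   # -6 dB per doubling
```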

Keywords: data centers, FLUENT, acoustics, sound pressure level, SPL, hot aisle containment, IBM

Procedia PDF Downloads 145
32 Seaworthiness and Liability Risks Involving Technology and Cybersecurity in Transport and Logistics

Authors: Eugene Wong, Felix Chan, Linsey Chen, Joey Cheung

Abstract:

The widespread use of technologies and cyber/digital means for complex maritime operations has led to a sharp rise in global cyber-attacks. These have generated an increasing number of liability disputes, insurance claims, and legal proceedings. An array of antiquated case law, regulations, international conventions, and obsolete contractual clauses drafted in the pre-technology era has become grossly inadequate for addressing the contemporary challenges. This paper offers a critique of the ambiguity of cybersecurity liabilities under the obligation of seaworthiness entailed in the Hague-Visby Rules, which apply either by law in a large number of jurisdictions or by express incorporation into shipping documents. This paper also evaluates the legal and technological criteria for assessing whether a vessel is properly equipped with the latest offshore technologies for navigation and cargo delivery operations; examples include computer applications, networks and servers, enterprise systems, global positioning systems, and data centers. A critical analysis of carriers' obligations to exercise due diligence in preventing or mitigating cyber-attacks is also conducted in this paper. It is hoped that the present study will offer original and crucial insights to policymakers, regulators, carriers, cargo interests, and insurance underwriters closely involved in dispute prevention and resolution arising from cybersecurity liabilities.

Keywords: seaworthiness, cybersecurity, liabilities, risks, maritime, transport

Procedia PDF Downloads 111
31 Digital Forensic Exploration Framework for Email and Instant Messaging Applications

Authors: T. Manesh, Abdalla A. Alameen, M. Mohemmed Sha, A. Mohamed Mustaq Ahmed

Abstract:

Email and instant messaging applications are the foremost and most extensively used electronic communication methods in this era of information explosion. These applications are generally used to exchange information through various frontend applications from several service providers, and almost all such communications are now secured using SSL or TLS over HTTP. At the same time, cyber criminals and terrorists have started exchanging information using these methods. Since communication is encrypted end-to-end, tracing significant forensic details and the actual content of messages is a severe and largely unaddressed challenge for available forensic tools. These challenges seriously hamper the procurement of substantial evidence against such criminals from their working environments. This paper presents a forensic exploration and architectural framework which not only decrypts any communication or network session but also reconstructs the actual message contents of email as well as instant messaging applications. The framework can be deployed effectively in proxy servers and on individual computers, and it aims to perform forensic reconstruction followed by analysis of webmail and ICQ messaging applications. The framework is versatile: it is equipped with high-speed packet-capturing hardware and a well-designed packet-manipulation algorithm. It regenerates message contents over regular as well as SSL-encrypted SMTP, POP3, and IMAP protocols, and supports the forensic presentation procedure for the prosecution of cyber criminals by producing solid evidence of their actual communications, as per the court of law of specific countries.

Keywords: forensics, network sessions, packet reconstruction, packet reordering

Procedia PDF Downloads 304
30 Assessment of E-Readiness in Libraries of Public Sector Universities Khyber Pakhtunkhwa-Pakistan

Authors: Saeed Ullah Jan

Abstract:

This study examines e-readiness in the libraries of public sector universities in Khyber Pakhtunkhwa. Efforts were made to evaluate the availability of human resources, electronic infrastructure, and network services and programs in the public sector university libraries. The population of the study was the twenty-seven public sector university libraries of Khyber Pakhtunkhwa. A quantitative approach was adopted, and a questionnaire-based survey was conducted to collect data from the librarian or person in charge of each library. The collected data were analyzed using the Statistical Package for the Social Sciences, version 22 (SPSS). The mean score of the knowledge component was below three, which indicates that the respondents are only poorly or moderately satisfied with regard to the knowledge component of their libraries. The satisfaction level of the respondents with the other components, such as electronic infrastructure, network services and programs, and enhancers of the networked world, was likewise rated as average or below. The study suggests that major aspects of the existing public sector university libraries require significant transformation, and that the government should provide all the required resources and facilities to meet the population's informational and recreational demands. The Information and Communication Technology (ICT) infrastructure of the public university libraries needs improvement in terms of the availability of computer equipment, databases, network servers, multimedia projectors, digital cameras, uninterruptible power supplies, scanners, and backup devices such as hard discs and Digital Video Discs/Compact Discs.

Keywords: ICT-libraries, e-readiness-libraries, e-readiness-university libraries, e-readiness-Pakistan

Procedia PDF Downloads 56
29 A Review on Cloud Computing and Internet of Things

Authors: Sahar S. Tabrizi, Dogan Ibrahim

Abstract:

Cloud computing is a model of convenient, on-demand network access to shared pools of configurable virtual computing resources, such as servers, networks, storage devices, and applications. The cloud serves as an environment in which companies and organizations can use infrastructure resources without making any purchases, accessing such resources wherever and whenever they need. Cloud computing helps overcome a number of problems in various Information Technology (IT) domains such as Geographical Information Systems (GIS), scientific research, e-governance systems, decision support systems, ERP, web application development, and mobile technology. Companies can use cloud computing services to store large amounts of data that can be accessed from anywhere on Earth at any time. Such services are rented by client companies, where the actual rent depends upon the amount of data stored on the cloud and the amount of processing power used in a given time period. The resources offered by the cloud service companies are flexible in the sense that user companies can increase or decrease their storage or processing-power requirements at any time, thus minimizing the overall rental cost of the service they receive. In addition, cloud computing service providers offer fast processors and application software that can be shared by their clients. This is especially important for small companies with limited budgets which cannot afford to purchase their own expensive hardware and software. This paper is an overview of cloud computing, giving its types, principles, advantages, and disadvantages. In addition, the paper gives some example engineering applications of cloud computing and makes suggestions for possible future applications in the field of engineering.

Keywords: cloud computing, cloud systems, cloud services, IaaS, PaaS, SaaS

Procedia PDF Downloads 210
28 Load Balancing Technique for Energy Efficiency in Cloud Computing

Authors: Rani Danavath, V. B. Narsimha

Abstract:

Cloud computing is emerging as a new paradigm of large-scale distributed computing. It is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction; this cloud model is composed of five essential characteristics, three service models, and four deployment models. Load balancing is one of the main challenges in cloud computing: it is required to distribute the dynamic workload across multiple nodes to ensure that no single node is overloaded, and it helps in the optimal utilization of resources, enhancing the performance of the system. The goal of load balancing is to minimize resource consumption and the carbon emission rate, which is a direct need of cloud computing. This determines the need for new metrics, energy consumption and carbon emission, for energy-efficient load balancing techniques in cloud computing. Existing load balancing techniques mainly focus on reducing overhead, reducing service response time, and improving performance, but none of them have considered energy consumption and carbon emission; therefore, our proposed work goes towards energy efficiency. In this paper we introduce an energy-efficiency load balancing technique that can improve the performance of cloud computing by balancing the workload across all the nodes in the cloud with minimum resource utilization, in turn reducing energy consumption and carbon emission to an extent that will help to achieve green computing.

Keywords: cloud computing, distributed computing, energy efficiency, green computing, load balancing, energy consumption, carbon emission

Procedia PDF Downloads 419
27 One Step Further: Pull-Process-Push Data Processing

Authors: Romeo Botes, Imelda Smit

Abstract:

In today's modern age of technology, vast amounts of data need to be processed in real time to keep users satisfied. This data comes from various sources and in many formats, including electronic and mobile devices such as GPRS modems and GPS devices, which use different protocols, including TCP, UDP, and HTTP/S, to communicate with web servers and eventually with users. The data obtained from these devices may provide valuable information to users, but it is mostly in an unreadable format which needs to be processed to provide information and business intelligence. This data is not always current; it is mostly historical data, and it is not subject to the consistency and redundancy measures that most other data usually is. Most important to the users is that the data be pre-processed into a readable format when it is entered into the database. To accomplish this, programmers build processing programs and scripts to decode and process the information stored in databases. Programmers use various techniques in such programs, but sometimes neglect the effect some of these techniques may have on database performance. One technique generally used is to pull data from the database server, process it, and push it back to the database server in one single step. Since the processing of the data usually takes some time, this keeps the database busy and locked for the period that the processing takes place, decreasing the overall performance of the database server and therefore of the system. This paper follows on a paper discussing the performance increase that may be achieved by utilizing array lists along with a pull-process-push data processing technique split into three steps. The purpose of this paper is to expand the number of clients when comparing the two techniques, to establish the impact this may have on CPU usage, storage, and processing time.
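
A minimal sketch of the three-step pull-process-push idea the paper builds on, using SQLite and a made-up hex payload as stand-ins for a real GPS/GPRS decoding job: the database is only touched briefly in steps 1 and 3, while the CPU-bound decoding in step 2 runs entirely on an in-memory list.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE raw (id INTEGER PRIMARY KEY, payload TEXT)")
db.executemany("INSERT INTO raw (payload) VALUES (?)",
               [("4f4b",), ("4849",)])           # toy hex-encoded payloads
db.commit()

# Step 1 -- pull: one short read, then the table is free for other clients.
rows = db.execute("SELECT id, payload FROM raw").fetchall()

# Step 2 -- process: decoding happens on a plain list, outside any lock.
decoded = [(bytes.fromhex(p).decode("ascii"), i) for i, p in rows]

# Step 3 -- push: a single bulk write instead of row-by-row round trips.
db.execute("ALTER TABLE raw ADD COLUMN decoded TEXT")
db.executemany("UPDATE raw SET decoded = ? WHERE id = ?", decoded)
db.commit()
print(db.execute("SELECT * FROM raw").fetchall())
```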

Keywords: performance measures, algorithm techniques, data processing, push data, process data, array list

Procedia PDF Downloads 216
26 The Challenges of Cloud Computing Adoption in Nigeria

Authors: Chapman Eze Nnadozie

Abstract:

Cloud computing, a technology made possible through virtualization within networks, represents a shift from the traditional ownership of infrastructure and other resources by a distinct organization to a more scalable pattern in which computing resources are rented online to organizations either on a pay-as-you-use basis or by subscription. In other words, cloud computing entails the renting of computing resources (such as storage space, memory, servers, applications, and networks) by a third party to its clients on a pay-as-you-go basis. It is an innovative technology that is globally embraced because of its renowned benefits, foremost of which is its cost-effectiveness for the organizations engaged with its services. In Nigeria, the services are provided either directly to companies, mostly by key IT players such as Microsoft, IBM, and Google, or in partnership with other players such as Infoware, Descasio, and Sunnet. This enables organizations to rent IT resources on a pay-as-you-go basis, thereby saving them the waste that accrues from the acquisition and maintenance of IT resources such as a separate data centre. This paper appraises the challenges of cloud computing adoption in Nigeria, bearing in mind the country's peculiarities in terms of infrastructural development. The methodology used in this paper includes research questionnaires, a formulated hypothesis, and the testing of that hypothesis. The major findings are that there are addressable challenges to the adoption of cloud computing in Nigeria, and that the country will gain significantly if the challenges, especially in the area of infrastructural development, are well addressed. The research established that there are significant gains derivable from the adoption of cloud computing by organizations in Nigeria, and that the challenges can be overturned by concerted efforts on the part of government and other stakeholders.

Keywords: cloud computing, data centre, infrastructure, IT resources, virtualization

Procedia PDF Downloads 325
25 Development of a Sequential Multimodal Biometric System for Web-Based Physical Access Control into a Security Safe

Authors: Babatunde Olumide Olawale, Oyebode Olumide Oyediran

Abstract:

A security safe is a place or building where classified documents and precious items are kept. Many technologies have been used to prevent unauthorised persons from gaining access to such safes, but frequent reports of unauthorised persons gaining access with the aim of removing documents and items point to the fact that there is still a security gap in the technologies recently used for access control. In this paper we try to solve this problem by developing a multimodal biometric system for physical access control into a security safe using face and voice recognition. The safe is accessed through the combination of face and speech pattern recognition, in that sequential order. User authentication is achieved through the use of a camera/sensor unit and a microphone unit, both attached to the door of the safe: the user's face is captured by the camera/sensor, while the speech is captured by the microphone unit. The Scale-Invariant Feature Transform (SIFT) algorithm was used to train images to form templates for the face recognition system, while the Mel-Frequency Cepstral Coefficients (MFCC) algorithm was used to train the speech recognition system to recognise an authorised user's speech. Both algorithms were hosted on two separate web-based servers, and for automatic analysis of our work, the developed system was simulated in a MATLAB environment. The results obtained show that the developed system was able to give access to authorised users while denying unauthorised persons access to the security safe.
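
A sketch of the sequential decision rule (face first, then voice), with placeholder matchers standing in for the SIFT and MFCC recognisers; the thresholds and score names are illustrative assumptions, not values from the paper.

```python
FACE_THRESHOLD, VOICE_THRESHOLD = 0.80, 0.75   # assumed acceptance thresholds

def face_score(image) -> float:          # stand-in for SIFT template matching
    return image.get("face_similarity", 0.0)

def voice_score(audio) -> float:         # stand-in for MFCC model scoring
    return audio.get("voice_similarity", 0.0)

def grant_access(image, audio) -> bool:
    # Sequential order: voice is only checked after the face stage passes,
    # so an impostor failing stage one never reaches the microphone stage.
    if face_score(image) < FACE_THRESHOLD:
        return False
    return voice_score(audio) >= VOICE_THRESHOLD

user = {"face_similarity": 0.91}, {"voice_similarity": 0.82}
intruder = {"face_similarity": 0.42}, {"voice_similarity": 0.90}
print(grant_access(*user), grant_access(*intruder))   # True False
```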

Keywords: access control, multimodal biometrics, pattern recognition, security safe

Procedia PDF Downloads 299