Search results for: data warehouse
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7465

6805 Spatial Services in Cloud Environment

Authors: Sašo Pečnik, Borut Žalik

Abstract:

Cloud computing is an approach that provides computation and storage services on demand to clients over the network, independent of device and location. In the last few years, cloud computing has become a trend in information technology, with many companies transferring their business processes and applications to the cloud. Cloud computing with service-oriented architecture has contributed to the rapid development of Geographic Information Systems. The Open Geospatial Consortium, through its standards, provides the interfaces for hosted spatial data and GIS functionality to integrated GIS applications. Furthermore, with their enormous processing power, clouds provide an environment in which data-intensive applications can be performed efficiently, with higher precision and greater reliability. This paper presents our work on geospatial data services within the cloud computing environment and its technology. A cloud computing environment, together with the strengths and weaknesses of the geographic information system, is introduced. The OGC standards that ensure the interoperability of our application are highlighted. Finally, we outline our system architecture with utilities for requesting and invoking our developed data-intensive applications as a web service.
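
As an illustration of how an OGC-standard interface decouples a client from the hosted spatial data, the sketch below builds a WMS 1.3.0 GetMap request with Python's standard library. The endpoint URL and layer name are placeholders, not services from the paper; only the standard OGC request parameters are assumed.

```python
from urllib.parse import urlencode

# Hypothetical cloud-hosted WMS endpoint (placeholder, not from the paper).
WMS_ENDPOINT = "https://example-cloud-gis.org/wms"

def build_getmap_url(layer, bbox, width=800, height=600, crs="EPSG:4326"):
    """Compose an OGC WMS 1.3.0 GetMap request using only standard parameters."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "STYLES": "",
        "CRS": crs,
        "BBOX": ",".join(str(v) for v in bbox),   # minx,miny,maxx,maxy (axis order per CRS)
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": "image/png",
    }
    return WMS_ENDPOINT + "?" + urlencode(params)

print(build_getmap_url("terrain", (13.3, 45.4, 16.6, 46.9)))
```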

Keywords: Cloud Computing, Geographic Information System, Open Geospatial Consortium, Interoperability, Spatial data, Web Services.

PDF Downloads: 1712
6804 The Trend of Injuries in Building Fire in Tehran from 2002 to 2012

Authors: Mohammadreza Ashouri, Majid Bayatian

Abstract:

Analysis of fire data is a basis for implementing any plan to improve the level of safety in cities. Such an analysis can reveal signs of change over a given period and can be used as a measure of safety. Information on about 66,341 fires (from 2002 to 2012) released by the Tehran Safety Services and Fire-Fighting Organization, together with data on the population and the number of households provided by Tehran Municipality and the Statistical Yearbook of Iran, was extracted. Using these data, the changes in fires, the injury rate, and the mortality rate were determined and analyzed. The injury rate and the mortality rate of fires per one million population of Tehran were 59.58% and 86.12%, respectively. During the study period, the number of fires and the number of fire stations increased by 104.38% and 102.63%, respectively. Most fires (9.21%) happened in the 4th District of Tehran. The results showed that the recording of fire data has not been systematically planned for fire prevention, although one of the ways to reduce injuries caused by fires is to develop a systematic plan of necessary actions for emergency situations. To establish a reliable basis for fire prevention, the stages, the definitions of working processes, and the cause-and-effect chains should be considered. Therefore, a comprehensive statistical system should be developed for reported and recorded fire data.

Keywords: Fire statistics, fire analysis, accident prevention, Tehran.

PDF Downloads: 769
6803 Choosing R-tree or Quadtree Spatial Data Indexing in One Oracle Spatial Database System to Make Faster Showing Geographical Map in Mobile Geographical Information System Technology

Authors: Maruto Masserie Sardadi, Mohd Shafry bin Mohd Rahim, Zahabidin Jupri, Daut bin Daman

Abstract:

The latest Geographic Information System (GIS) technology makes it possible to administer the spatial components of daily "business objects" in the corporate database and to apply suitable geographic analysis efficiently in a desktop-focused application. Wireless internet technology can be used to transfer spatial data from server to client or vice versa. However, the problem with the wireless Internet is system bottlenecks that can make data transfer inefficient, owing to the large amount of spatial data involved. Optimizing the process of transferring and retrieving data is therefore an essential issue. An appropriate choice between the R-tree and Quadtree spatial data indexing methods can optimize this process. With the rapid proliferation of spatial databases in the past decade, extensive research has been conducted on the design of efficient data structures to enable fast spatial searching. Commercial database vendors like Oracle have also started implementing these spatial indexing methods to cater to large and diverse GIS applications. This paper focuses on the decision between R-tree and Quadtree spatial indexing using an Oracle spatial database in a mobile GIS application. Under our experimental conditions, using the Quadtree and R-tree spatial data indexing methods in one spatial database saved up to 42.5% of the time.
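
The abstract does not give Oracle's indexing internals, but the quadtree idea it relies on can be illustrated independently: a point is mapped to a fixed-depth tile key so that spatially close points share key prefixes and can be looked up quickly for window queries on a map display. The sketch below is a generic quadkey computation over a normalized bounding box, not Oracle Spatial code.

```python
def quadkey(x, y, bbox, depth=8):
    """Return the quadtree cell key of point (x, y) inside bbox = (minx, miny, maxx, maxy).

    Each level splits the current cell into four quadrants labelled 0-3; nearby
    points share long key prefixes, which is what makes quadtree indexing fast.
    """
    minx, miny, maxx, maxy = bbox
    key = ""
    for _ in range(depth):
        midx, midy = (minx + maxx) / 2.0, (miny + maxy) / 2.0
        quadrant = (2 if y >= midy else 0) + (1 if x >= midx else 0)
        key += str(quadrant)
        minx, maxx = (midx, maxx) if x >= midx else (minx, midx)
        miny, maxy = (midy, maxy) if y >= midy else (miny, midy)
    return key

# Two nearby points in one (hypothetical) map extent share a long key prefix.
extent = (0.0, 0.0, 100.0, 100.0)
print(quadkey(12.5, 80.0, extent), quadkey(13.0, 79.0, extent))
```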

Keywords: Indexing, Mobile GIS, MapViewer, Oracle Spatial Database.

PDF Downloads: 4035
6802 Design and Implementation of a Virtualization Platform for Providing Smart Tourism Services

Authors: Nam Don Kim, Jungho Moon, Tae Yun Chung

Abstract:

This paper proposes an Internet of Things (IoT) based virtualization platform for providing smart tourism services. The virtualization platform provides a consistent access interface to various types of data by naming IoT devices and legacy information systems as pathnames in a virtual file system. In other words, the IoT virtualization platform functions as middleware that uses metadata to describe the underlying collected data. The proposed platform makes it easy to provide customized tourism information by using tourist locations collected by IoT devices and additionally enables the creation of new interactive smart tourism services focused on those locations. The proposed platform is efficient in that the provided tourism services are isolated from changes in the raw data, and the services can be modified or expanded without changing the underlying data structure.
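
A minimal sketch of the naming idea, assuming a hypothetical in-memory registry: IoT devices and legacy systems are mounted under virtual pathnames, and services read data through the same path-based interface regardless of the underlying source. The class and path names are illustrative, not the paper's implementation.

```python
from typing import Callable, Dict

class VirtualDataTree:
    """Toy virtual file system that names heterogeneous data sources as pathnames."""

    def __init__(self) -> None:
        self._mounts: Dict[str, Callable[[], dict]] = {}

    def mount(self, path: str, reader: Callable[[], dict]) -> None:
        """Register a data source (IoT device, legacy DB, ...) under a virtual path."""
        self._mounts[path] = reader

    def read(self, path: str) -> dict:
        """Services access every source through the same interface."""
        return self._mounts[path]()

tree = VirtualDataTree()
# Hypothetical sources: a GPS beacon and a legacy tourist-information system.
tree.mount("/iot/beacons/plaza-01/location", lambda: {"lat": 37.56, "lon": 126.99})
tree.mount("/legacy/poi/plaza-01", lambda: {"name": "Old Plaza", "open": "09:00-18:00"})

print(tree.read("/iot/beacons/plaza-01/location"))
print(tree.read("/legacy/poi/plaza-01"))
```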

Keywords: Internet of Things, IoT platform, service platform, virtual file system.

PDF Downloads: 1094
6801 Service-Oriented Architecture for Object-Centric Information Fusion

Authors: Jeffrey A. Dunne, Kevin Ligozio

Abstract:

In many applications there is a broad variety of information relevant to a focal "object" of interest, and the fusion of such heterogeneous data types is desirable for classification and categorization. While these various data types can sometimes be treated as orthogonal (such as the hull number, superstructure color, and speed of an oil tanker), there are instances where the inference and the correlation between quantities can provide improved fusion capabilities (such as the height, weight, and gender of a person). A service-oriented architecture has been designed and prototyped to support the fusion of information for such "object-centric" situations. It is modular, scalable, and flexible, and designed to support new data sources, fusion algorithms, and computational resources without affecting existing services. The architecture is designed to simplify the incorporation of legacy systems, support exact and probabilistic entity disambiguation, recognize and utilize multiple types of uncertainties, and minimize network bandwidth requirements.

Keywords: Data fusion, distributed computing, service-oriented architecture, SOA

PDF Downloads: 1469
6800 Application of Data Mining Tools to Predict the Completion Time of a Project

Authors: Seyed Hossein Iranmanesh, Zahra Mokhtari

Abstract:

Estimating the time and cost of work completion in a project and following them up during execution contribute to the success or failure of a project and are very important for the project management team. Delivering on time and within the budgeted cost requires the project to be well managed and controlled. To deal with the complex task of controlling and modifying the baseline project schedule during execution, earned value management systems have been set up and are widely used to measure and communicate the real physical progress of a project. However, earned value management often fails to predict the total duration of the project. In this paper, data mining techniques are used to predict the total project duration in terms of the Time Estimate At Completion, EAC(t). For this purpose, we have used a project with 90 activities, updated day by day. The regular indexes from the literature and the Earned Duration Method are then used to calculate the time estimate at completion; these are set as input data for prediction, and the major parameters among them are specified using the Clem software. By using data mining, the parameters that affect EAC and the relationships between them can be extracted, which is very useful for managing a project with minimum delay risk. As we state, this can be a simple, safe, and applicable method for predicting the completion time of a project during execution.
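
As a worked illustration (numbers invented, not taken from the 90-activity project in the paper), one common earned-duration-style time estimate divides the planned duration by the schedule performance index SPI = EV / PV; this is only one of the indexes the study feeds into its data mining step.

```python
def eac_time(planned_duration, earned_value, planned_value):
    """Earned-duration-style time Estimate At Completion: EAC(t) = PD / SPI."""
    spi = earned_value / planned_value          # schedule performance index
    return planned_duration / spi

# Hypothetical status at day 30 of a 90-day plan:
pd_days, ev, pv = 90.0, 250_000.0, 300_000.0    # EV lags PV, so SPI < 1
print(round(eac_time(pd_days, ev, pv), 1))      # ~108.0 days, i.e. a forecast slip
```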

Keywords: Data Mining Techniques, Earned Duration Method, Earned Value, Estimate At Completion.

PDF Downloads: 1803
6799 Automated Process Quality Monitoring with Prediction of Fault Condition Using Measurement Data

Authors: Hyun-Woo Cho

Abstract:

Detection of incipient abnormal events is important to improve the safety and reliability of machine operations and to reduce losses caused by failures. Improper set-up or alignment of parts often leads to severe problems in many machines. The construction of models for predicting faulty conditions is therefore essential in deciding when to perform machine maintenance. This paper presents a multivariate calibration monitoring approach based on the statistical analysis of machine measurement data. The calibration model is used to predict two faulty conditions from historical reference data. The approach utilizes genetic algorithm (GA) based variable selection, and we evaluate the predictive performance of several prediction methods using real data. The results show that the calibration model based on supervised probabilistic principal component analysis (SPPCA) yielded the best performance in this work. By adopting a proper variable selection scheme in calibration models, the prediction performance can be improved by excluding non-informative variables from the model building steps.
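
A compact sketch of GA-based variable selection on synthetic data (the paper's SPPCA calibration and real measurement data are not reproduced): each chromosome is a binary mask over variables, fitness is the cross-validated prediction error of a plain linear model, and standard selection, crossover, and mutation evolve the mask.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic measurement data: only the first 5 of 30 variables are informative.
X = rng.normal(size=(200, 30))
y = X[:, :5] @ rng.normal(size=5) + 0.1 * rng.normal(size=200)

def fitness(mask):
    if mask.sum() == 0:
        return -np.inf
    score = cross_val_score(LinearRegression(), X[:, mask.astype(bool)], y,
                            cv=5, scoring="neg_mean_squared_error")
    return score.mean()

pop = rng.integers(0, 2, size=(20, X.shape[1]))           # random initial population
for generation in range(30):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]           # truncation selection
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, X.shape[1])                   # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(X.shape[1]) < 0.02                # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.array(children)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected variables:", np.flatnonzero(best))
```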

Keywords: Prediction, operation monitoring, on-line data, nonlinear statistical methods, empirical model.

PDF Downloads: 1658
6798 Soccer Video Edition Using a Multimodal Annotation

Authors: Fendri Emna, Ben-Abdallah Hanêne, Ben-Hamadou Abdelmajid

Abstract:

In this paper, we present an approach for soccer video editing using a multimodal annotation. We propose to associate with each video sequence of a soccer match a textual document to be used for further exploitation such as search, browsing, and abstract editing. The textual document contains video metadata, match metadata, and match data. This document, generated automatically while the video is analyzed, segmented, and classified, can be enriched semi-automatically according to the user type and/or a specialized recommendation system.
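
A minimal sketch of the kind of textual document the approach associates with a video sequence, using Python's standard library; the element names below are illustrative placeholders, not the paper's schema.

```python
import xml.etree.ElementTree as ET

# Illustrative annotation for one soccer video sequence (element names are hypothetical).
annotation = ET.Element("sequence", id="seq-042")

video_meta = ET.SubElement(annotation, "video_metadata")
ET.SubElement(video_meta, "start").text = "00:12:30"
ET.SubElement(video_meta, "end").text = "00:13:05"

match_meta = ET.SubElement(annotation, "match_metadata")
ET.SubElement(match_meta, "teams").text = "Team A vs Team B"
ET.SubElement(match_meta, "date").text = "2010-05-01"

match_data = ET.SubElement(annotation, "match_data")
event = ET.SubElement(match_data, "event", type="goal", minute="34")
event.text = "Goal scored after a corner kick"

print(ET.tostring(annotation, encoding="unicode"))
```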

Keywords: XML, Multimodal Annotation, recommendation system.

PDF Downloads: 1439
6797 Using Probe Person Data for Travel Mode Detection

Authors: Muhammad Awais Shafique, Eiji Hato, Hideki Yaginuma

Abstract:

Recently, GPS data have been used in many studies to automatically reconstruct travel patterns for trip surveys. The aim is to minimize the use of questionnaire surveys and travel diaries so as to reduce their negative effects. In this paper, data acquired from the GPS and accelerometer embedded in smartphones are utilized to predict the mode of transportation used by the phone carrier. For prediction, Support Vector Machine (SVM) and Adaptive Boosting (AdaBoost) are employed. Moreover, a unique method to improve the prediction results from these algorithms is also proposed. Results suggest that the prediction accuracy of AdaBoost after improvement is relatively better than the rest.
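
A hedged sketch of the classification step on synthetic features (the actual GPS/accelerometer features and the paper's improvement method are not shown): scikit-learn's SVC and AdaBoostClassifier are trained on the same feature matrix and compared on held-out data.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Synthetic stand-ins for per-segment features (e.g. mean speed, acceleration variance).
X = rng.normal(size=(600, 4))
modes = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)        # 0 = walk, 1 = vehicle (toy labels)

X_train, X_test, y_train, y_test = train_test_split(X, modes, test_size=0.3, random_state=0)

svm = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)
ada = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

print("SVM accuracy:     ", round(svm.score(X_test, y_test), 3))
print("AdaBoost accuracy:", round(ada.score(X_test, y_test), 3))
```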

Keywords: Accelerometer, AdaBoost, GPS, Mode Prediction, Support vector Machine.

PDF Downloads: 2450
6796 SUPAR: System for User-Centric Profiling of Association Rules in Streaming Data

Authors: Sarabjeet Kaur Kochhar

Abstract:

With the surge of stream processing applications, novel techniques are required for the generation and analysis of association rules in streams. Traditional rule mining solutions cannot handle streams because they generally require multiple passes over the data and do not guarantee results within a predictable, small time. Though researchers have been proposing algorithms for the generation of rules from streams, there has not been much focus on their analysis. We propose association rule profiling, a user-centric process for analyzing association rules and attaching suitable profiles to them depending on their changing frequency behavior over a previous snapshot of time in a data stream. Association rule profiles provide insights into the changing nature of associations and can be used to characterize them. We discuss the importance of characteristics such as the predictability of linkages present in the data and propose a metric to quantify it. We also show how association rule profiles can aid in the generation of user-specific, more understandable, and actionable rules. The framework is implemented as SUPAR: System for User-centric Profiling of Association Rules in streaming data. The proposed system offers the following capabilities: i) continuous monitoring of the frequency of streaming item-sets and detection of significant changes therein for association rule profiling; ii) computation of metrics for quantifying the predictability of associations present in the data; iii) user-centric control of the characterization process: the user can control the framework through a) constraint specification and b) non-interesting rule elimination.
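
A small sketch of the monitoring idea under stated assumptions: itemset frequencies are counted over a window of recent transactions, and an association is profiled according to how its support changes between the previous snapshot and the current one. The thresholds and profile names are illustrative, not the paper's metrics.

```python
from collections import Counter, defaultdict
from itertools import combinations

CHANGE_THRESHOLD = 0.05   # illustrative support-change threshold

def itemsets(transaction, max_size=2):
    for k in range(1, max_size + 1):
        yield from combinations(sorted(transaction), k)

def snapshot_support(transactions):
    counts = Counter()
    for t in transactions:
        counts.update(itemsets(t))
    return {s: c / len(transactions) for s, c in counts.items()}

def profile_changes(prev, curr):
    """Attach a simple profile to each itemset based on its support change."""
    profiles = {}
    for itemset in set(prev) | set(curr):
        delta = curr.get(itemset, 0.0) - prev.get(itemset, 0.0)
        if abs(delta) < CHANGE_THRESHOLD:
            profiles[itemset] = "stable"
        else:
            profiles[itemset] = "rising" if delta > 0 else "falling"
    return profiles

# Toy stream: the association {milk, bread} becomes more frequent in the newer window.
old = [("milk", "bread"), ("milk",), ("tea",)] * 33
new = [("milk", "bread")] * 50 + [("tea",)] * 50
print(profile_changes(snapshot_support(old), snapshot_support(new))[("bread", "milk")])
```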

Keywords: Data Streams, User subjectivity, Change detection, Association rule profiles, Predictability.

PDF Downloads: 1458
6795 Effects of Signaling on the Performance of Directed Diffusion Routing Protocol

Authors: Apidet Booranawong

Abstract:

In the original directed diffusion routing protocol, a sink requests sensing data from a source node by flooding interest messages to the network. Then, the source finds the sink by sending exploratory data messages to all nodes from which interest messages arrive. This protocol signaling can cause heavy traffic in the network, interference of the radio signal, collisions, high energy consumption at sensor nodes, etc. Addressing this research problem, this paper investigates the effect of sending interest and exploratory data messages on the performance of the directed diffusion routing protocol. We demonstrate the research problem that arises from employing the directed diffusion protocol in mobile wireless environments. For this purpose, we perform a set of experiments using NS2 (network simulator 2). The radio propagation models two-ray ground reflection with and without shadow fading are included to investigate the effect of signaling. The simulation results show that the number of sent and received protocol signaling messages in the case of sending interest and exploratory data messages is larger than in the case of sending other protocol signals, especially with the shadowing model. Additionally, the number of exploratory data messages is largest in one round of the protocol procedure.

Keywords: Directed diffusion, Flooding, Interest message, Exploratory data message, Radio propagation model.

PDF Downloads: 1784
6794 Rethinking the Analysis of Means-End Chain Data in Marketing Research

Authors: P. Puustinen, A. Kanto

Abstract:

This paper proposes a new procedure for analyzing means-end chain data in marketing research. Most commonly, the collected data are summarized in a Hierarchical Value Map (HVM) illustrating the main attribute-consequence-value linkages. This paper argues that a traditionally constructed HVM may give an erroneous impression of the results of a means-end study. To justify the arguments, an alternative procedure to (1) determine the dominant attribute-consequence-value linkages and (2) construct the HVM in a precise manner is presented. The current approach makes a contribution to means-end analysis, allowing marketers to address a set of marketing problems, such as advertising strategy.

Keywords: Means-end chain analysis, Laddering, Hierarchical Value Map.

PDF Downloads: 2791
6793 CSOLAP (Continuous Spatial On-Line Analytical Processing)

Authors: Taher Omran Ahmed, Abdullatif Mihdi Buras

Abstract:

Decision support systems are usually based on multidimensional structures which use the concept of the hypercube. Dimensions are the axes along which facts are analyzed and form a space in which a fact is located by a set of coordinates at the intersections of members of dimensions. Conventional multidimensional structures deal with discrete facts linked to discrete dimensions. However, when dealing with natural continuous phenomena, the discrete representation is not adequate. There is a need to integrate spatiotemporal continuity within multidimensional structures to enable the analysis and exploration of continuous field data. The research issues that lead to the integration of spatiotemporal continuity in multidimensional structures are numerous. In this paper, we discuss research issues related to the integration of continuity in multidimensional structures and briefly present a multidimensional model for continuous field data. We also define new aggregation operations. The model and the associated operations and measures are validated by a prototype.
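
A toy sketch of aggregating a continuous field, under stated assumptions: the field is reconstructed from scattered sensor samples by inverse-distance weighting, and a "continuous" mean over a query window is approximated by averaging the interpolated surface on a fine grid. This only illustrates the kind of operation the model formalises; it is not the paper's operator set.

```python
import numpy as np

# Scattered samples of a continuous phenomenon (e.g. temperature): (x, y, value).
samples = np.array([[0.0, 0.0, 10.0], [5.0, 1.0, 14.0], [2.0, 6.0, 12.0], [8.0, 7.0, 18.0]])

def idw(x, y, power=2.0):
    """Inverse-distance-weighted interpolation of the field at point (x, y)."""
    d = np.hypot(samples[:, 0] - x, samples[:, 1] - y)
    if np.any(d < 1e-9):
        return float(samples[np.argmin(d), 2])
    w = 1.0 / d**power
    return float(np.sum(w * samples[:, 2]) / np.sum(w))

def aggregate_mean(xmin, xmax, ymin, ymax, resolution=50):
    """Approximate the continuous mean of the field over a rectangular query window."""
    xs = np.linspace(xmin, xmax, resolution)
    ys = np.linspace(ymin, ymax, resolution)
    values = [idw(x, y) for x in xs for y in ys]
    return sum(values) / len(values)

print(round(aggregate_mean(0.0, 8.0, 0.0, 8.0), 2))
```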

Keywords: Continuous Data, Data Warehousing, Decision Support, SOLAP.

PDF Downloads: 1595
6792 A Study of Behavioral Phenomena Using ANN

Authors: Yudhajit Datta

Abstract:

Behavioral aspects of experience such as will power are rarely subjected to quantitative study owing to the numerous complexities involved. Will is a phenomenon that has puzzled humanity for a long time. It is a common belief that the will power of an individual affects the success they achieve in life. It is also thought that a person endowed with great will power can overcome even the most crippling setbacks in life, while a person with a weak will cannot make the most of life even with the greatest assets. This study is an attempt to subject the phenomenon of will to the test of an artificial neural network through a computational model. The claim being tested is that the will power of an individual largely determines the success achieved in life. It is proposed that data pertaining to the success of individuals be obtained from an experiment, and that the phenomenon of will be incorporated into the model through data generated recursively using a relation between will and success characteristic of the model. An artificial neural network trained on part of the data could subsequently be used to make predictions regarding data points in the rest of the model. The procedure would be tried for different models, and the model in which the network's predictions are found to be in greatest agreement with the data would be selected and used for studying the relation between success and will.
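
A minimal sketch of the sliding-window ANN step on a synthetic series (the will/success model itself is not reproduced): consecutive windows of the series are used as inputs and the next value as the target, and an MLP is trained to predict held-out points.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Synthetic "success" series standing in for experimentally obtained data.
series = np.cumsum(rng.normal(0.1, 1.0, size=300))

def sliding_windows(data, width=5):
    """Turn a series into (window, next value) pairs for supervised training."""
    X = np.array([data[i:i + width] for i in range(len(data) - width)])
    y = data[width:]
    return X, y

X, y = sliding_windows(series)
split = int(0.8 * len(X))

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])

print("test R^2:", round(model.score(X[split:], y[split:]), 3))
```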

Keywords: Will Power, Success, ANN, Time Series Prediction, Sliding Window, Computational Model, Behavioral Phenomena.

PDF Downloads: 1930
6791 Standard Languages for Creating a Database to Display Financial Statements on a Web Application

Authors: Vladimir Simovic, Matija Varga, Predrag Oreski

Abstract:

XHTML and XBRL are the standard languages for creating a database for the purpose of displaying financial statements in web applications. Today, XBRL is one of the most popular languages for business reporting. A large number of countries around the world recognize the role of the XBRL language in financial reporting and the benefits that the reporting format provides in the collection, analysis, preparation, publication, and exchange of data (information), which is the positive side of this language. Here we present the advantages and opportunities that a company may gain by using the XBRL format for business reporting. This paper also presents XBRL and the other languages that are used for creating the database, such as XML, XHTML, etc. The role of the AJAX model and its technology in the exchange of financial data between the web client and the web server will be explained in detail. The basic network layers used for data exchange via the web will also be mentioned.

Keywords: XHTML, XBRL, XML, JavaScript, AJAX technology, data exchange.

PDF Downloads: 1070
6790 Survey on Image Mining Using Genetic Algorithm

Authors: Jyoti Dua

Abstract:

One image is worth more than a thousand words. Images, if analyzed, can reveal useful information. Low-level image processing deals with the extraction of specific features from a single image. The question then arises: what technique should be used to extract patterns from a very large and detailed image database? The answer is image mining. Image mining deals with the extraction of image data relationships, implicit knowledge, and other patterns from a collection of images or an image database. It is essentially an extension of data mining. In this paper, we not only scrutinize the current techniques of image mining but also present a new technique for mining images using a genetic algorithm.

Keywords: Image Mining, Data Mining, Genetic Algorithm.

PDF Downloads: 2445
6789 Landscape Visual Classification Using Land use and Contour Data for Tourism and Planning Decision Making in Cameron Highlands District

Authors: Hosni, N., Shinozaki, M.

Abstract:

Cameron Highlands is known as an upland tourism area with vast natural wealth and a mountainous landscape endowed with rich and diverse species as well as local traditions and cultures. With these various resources, Cameron Highlands possesses interesting views and panoramas that can be offered to tourists. However, this benefit cannot be utilized without an understanding of the existing landscape structure and visual character. Given limited data, this paper attempts to classify the landscape visual character of Cameron Highlands using land use and contour data. Visual points of view were determined from the tourist attraction points given in the Cameron Highlands Local Plan 2003-2015. The result shows the landscape visual and structure categories offered in the study area. The result can be used for further analysis to determine the best alternative tourist trails for tourism planning and decision making using readily available data.

Keywords: Visibility, landscape visual, urban planning, GIS

PDF Downloads: 2373
6788 Sampling of Variables in Discrete-Event Simulation using the Example of Inventory Evolutions in Job-Shop-Systems Based on Deterministic and Non-Deterministic Data

Authors: Bernd Scholz-Reiter, Christian Toonen, Jan Topi Tervo, Dennis Lappe

Abstract:

Time series analysis often requires data that represent the evolution of an observed variable in equidistant time steps. In order to collect these data, sampling is applied. While continuous signals may be sampled, analyzed, and reconstructed by applying Shannon's sampling theorem, time-discrete signals have to be dealt with differently. In this article we consider the discrete-event simulation (DES) of job-shop systems and study the effects of different sampling rates on data quality regarding the completeness and accuracy of reconstructed inventory evolutions. In doing so, we discuss deterministic as well as non-deterministic behavior of system variables. Error curves are deployed to illustrate and discuss the sampling rate's impact and to derive recommendations for its well-founded choice.
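
A small sketch of the sampling step under stated assumptions: the simulation yields inventory changes only at irregular event times, and an equidistant series is reconstructed by holding the last known value at each sampling instant (zero-order hold). Coarser sampling summarises the evolution cheaply but may miss short excursions, which is the trade-off the error curves quantify.

```python
import bisect

def sample_inventory(event_times, event_levels, dt, horizon):
    """Sample a piecewise-constant inventory evolution at equidistant steps of width dt."""
    samples = []
    t = 0.0
    while t <= horizon:
        idx = bisect.bisect_right(event_times, t) - 1   # last event at or before t
        samples.append(event_levels[idx] if idx >= 0 else 0)
        t += dt
    return samples

# Toy job-shop buffer: arrivals and departures change the level at irregular event times.
times  = [0.0, 1.2, 1.9, 4.5, 5.1, 7.8]
levels = [0,   3,   2,   6,   5,   1]

print(sample_inventory(times, levels, dt=1.0, horizon=8.0))   # fine sampling
print(sample_inventory(times, levels, dt=4.0, horizon=8.0))   # coarse sampling misses detail
```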

Keywords: discrete-event simulation, job-shop-system, sampling rate.

PDF Downloads: 1827
6787 School Design and Energy Efficiency

Authors: B. Su

Abstract:

Auckland has a temperate climate with comfortably warm, dry summers and mild, wet winters. An Auckland school normally does not need air conditioning for cooling during the summer and only needs heating during the winter. Space heating energy is the major portion of winter school energy consumption, and winter energy consumption is the major portion of annual school energy consumption. School building thermal design should therefore focus on winter thermal performance to reduce space heating energy. Design data and energy consumption data from a number of Auckland schools are used for this study. This pilot study investigates the relationships between their energy consumption data and school building design data to improve future school design for energy efficiency.

Keywords: Building energy efficiency, building thermal performance, school building design, school energy consumption

PDF Downloads: 1883
6786 A Thought on Exotic Statistical Distributions

Authors: R K Sinha

Abstract:

Statistical distributions are used to model and explain the nature of various types of data sets. Although these distributions are mostly uni-modal, it is quite common to see multiple modes in the observed distribution of the underlying variables, which makes precise modeling unrealistic. Observed data may lack smoothness not necessarily due to randomness, but also due to non-randomness resulting in zigzag curves, oscillations, humps, etc. The present paper argues that trigonometric functions, which have not been used in the probability functions of distributions so far, have the potential to take care of this if incorporated in the distribution appropriately. A simple distribution involving trigonometric functions (named the Sinoform Distribution) is illustrated in the paper with a data set. The paper demonstrates the importance of trigonometric functions, which have the characteristics to make statistical distributions exotic. It is possible to have multiple modes, oscillations, and zigzag curves in the density, which could be suitable for explaining the underlying nature of a given data set.
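
A small numerical sketch of how a trigonometric factor can introduce multiple modes and oscillations into an otherwise smooth density; the particular form below is an illustration, not the paper's Sinoform Distribution.

```python
import numpy as np

def sinoform_like_pdf(x, a=0.6, b=6.0):
    """Illustrative density: a standard normal modulated by (1 + a*sin(b*x));
    |a| < 1 keeps the factor positive, and the normalising constant is found numerically."""
    def unnormalised(t):
        return (1 + a * np.sin(b * t)) * np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)
    grid = np.linspace(-8, 8, 20001)
    z = np.sum(unnormalised(grid)) * (grid[1] - grid[0])   # simple numerical integration
    return unnormalised(x) / z

xs = np.linspace(-3, 3, 13)
print(np.round(sinoform_like_pdf(xs), 4))   # values oscillate, producing several modes
```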

Keywords: Exotic Statistical Distributions, Kurtosis, Mixture Distributions, Multi-modal

PDF Downloads: 1626
6785 Data and Spatial Analysis for Economy and Education of 28 E.U. Member-States for 2014

Authors: Alexiou Dimitra, Fragkaki Maria

Abstract:

The objective of the paper is the study of geographic, economic, and educational variables and their contribution to determining the position of each member state among the EU-28 countries, based on the values of seven variables as given by Eurostat. The data analysis methods of Multiple Factorial Correspondence Analysis (MFCA), Principal Component Analysis, and Factor Analysis have been used. The cross-tabulation tables consist of the values of the seven variables for the 28 countries for 2014. The data are manipulated using the CHIC Analysis V 1.1 software package. The results of this program, using MFCA and Ascending Hierarchical Classification, are given in numerical and graphical form. For comparison, the Factor procedure of the statistical package IBM SPSS 20 has been applied to the same data. The numerical and graphical results, presented in tables and graphs, demonstrate the agreement between the two methods. The most important result is the study of the relation between the 28 countries and the position of each country in groups or clouds, which are formed according to the values of the corresponding variables.

Keywords: Multiple factorial correspondence analysis, principal component analysis, factor analysis, E.U.-28 countries, statistical package IBM SPSS 20, CHIC Analysis V 1.1 Software, Eurostat.eu statistics.

PDF Downloads: 1085
6784 Comparison between Higher-Order SVD and Third-order Orthogonal Tensor Product Expansion

Authors: Chiharu Okuma, Jun Murakami, Naoki Yamamoto

Abstract:

In digital signal processing it is important to approximate multi-dimensional data by the method called rank reduction, in which we reduce the rank of multi-dimensional data from higher to lower. For 2-dimensional data, singular value decomposition (SVD) is one of the best-known rank reduction techniques. In addition, an outer product expansion derived from SVD was proposed and implemented for multi-dimensional data, and has been widely applied to image processing and pattern recognition. However, the multi-dimensional outer product expansion is computationally complex and does not guarantee orthogonality between the expansion terms. We have therefore proposed an alternative method, the Third-order Orthogonal Tensor Product Expansion, abbreviated 3-OTPE. 3-OTPE uses the power method instead of a nonlinear optimization method to decrease computing time. At the same time, the group of De Lathauwer proposed the Higher-Order SVD (HOSVD), which is also developed as an SVD extension for multi-dimensional data. 3-OTPE and HOSVD are similar in the rank reduction of multi-dimensional data. Using these two methods we can obtain computation results that are in some cases the same and in others slightly different. In this paper, we compare 3-OTPE to HOSVD in terms of calculation accuracy and computing time, and clarify the difference between these two methods.
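
A compact numpy sketch of the HOSVD side of the comparison (3-OTPE is not reproduced here): each mode-n unfolding of a third-order tensor is decomposed by SVD, and the core tensor is obtained by multiplying the tensor with the transposed factor matrices.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def hosvd(tensor):
    """Higher-Order SVD of a 3rd-order tensor: factor matrices and core tensor."""
    factors = [np.linalg.svd(unfold(tensor, mode), full_matrices=False)[0]
               for mode in range(tensor.ndim)]
    core = tensor
    for mode, U in enumerate(factors):
        # n-mode product with U.T: contract along axis `mode`.
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return factors, core

rng = np.random.default_rng(3)
T = rng.normal(size=(4, 5, 6))
factors, core = hosvd(T)

# Reconstruct and check the approximation error (exact up to floating point here).
R = core
for mode, U in enumerate(factors):
    R = np.moveaxis(np.tensordot(U, np.moveaxis(R, mode, 0), axes=1), 0, mode)
print("reconstruction error:", np.linalg.norm(T - R))
```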

Keywords: Singular value decomposition (SVD), higher-order SVD (HOSVD), higher-order tensor, outer product expansion, power method.

PDF Downloads: 1562
6783 Electron-Impact Excitation of Kr 5s, 5p Levels

Authors: Alla A. Mityureva

Abstract:

The available data on the cross sections of electron-impact excitation of the krypton 5s and 5p configuration levels out of the ground state are represented in a convenient and compact form. The results are obtained by regression through all known published data related to this process.

Keywords: Cross section, electron excitation, krypton, regression

PDF Downloads: 1087
6782 Seamless Flow of Voluminous Data in High Speed Network without Congestion Using Feedback Mechanism

Authors: T. Sheela, J. Raja

Abstract:

The continuously growing needs of Internet applications that transmit massive amounts of data have led to the emergence of high-speed networks. Data transfer must take place without congestion, and hence feedback parameters must be transferred from the receiver end to the sender end so as to restrict the sending rate and avoid congestion. Even though TCP tries to avoid congestion by restricting the sending rate and window size, it never informs the sender of the capacity of data that can be sent, and it halves the window size at the time of congestion, resulting in decreased throughput, low utilization of the bandwidth, and high delay. In this paper, the XCP protocol is used, and feedback parameters are calculated based on the arrival rate, service rate, traffic rate, and queue size; the receiver thus informs the sender about the throughput, the capacity of data to be sent, and the window size adjustment. This avoids any drastic decrease in window size and allows a better increase in the sending rate, so that there is a continuous flow of data without congestion. As a result, throughput increases markedly, the bandwidth is highly utilized, and delay is minimized. The results of the proposed work are presented as graphs of throughput, delay, and window size. Thus, in this paper, the XCP protocol is illustrated and the various parameters are thoroughly analyzed and presented.
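
In XCP, the router's efficiency controller computes an aggregate feedback from spare bandwidth and persistent queue size, commonly written as phi = alpha*d*S - beta*Q. The sketch below uses that textbook form with illustrative numbers; it is not the exact parameter set or feedback calculation of this paper.

```python
ALPHA, BETA = 0.4, 0.226     # stability constants commonly quoted for XCP

def aggregate_feedback(capacity_bps, input_rate_bps, queue_bytes, avg_rtt_s):
    """XCP-style efficiency controller: phi = alpha*d*S - beta*Q (result in bytes)."""
    spare_bps = capacity_bps - input_rate_bps            # S: unused capacity
    return ALPHA * avg_rtt_s * spare_bps / 8.0 - BETA * queue_bytes

# Illustrative link state: 100 Mbit/s link, 90 Mbit/s arriving, 50 kB standing queue.
phi = aggregate_feedback(100e6, 90e6, 50_000, avg_rtt_s=0.08)
print(f"aggregate feedback: {phi:,.0f} bytes")   # positive -> senders may grow their windows
```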

Keywords: Bandwidth-Delay Product, Congestion Control, Congestion Window, TCP/IP

PDF Downloads: 1488
6781 Eco-Connectivity: Sustainable Practices in Telecom Networks Using Big Data

Authors: Tharunika Sridhar

Abstract:

This paper addresses sustainable eco-connectivity within the telecommunications sector, studying its importance in tackling contemporary challenges and data regulation issues. The paper also investigates the role of Big Data and its integration in this context, specific to the telecom industry. One of the major focus areas of the paper is examining the pathways explored, namely state-of-the-art ecological infrastructure solutions and sector-led measures derived from expert analyses and reviews. Additionally, the paper analyses critical factors involving cost-effective route planning and the development of green telecommunications infrastructure, which adds qualitative reasoning to the research idea. Furthermore, the study discusses in detail a potential green roadmap towards sustainability by exploring green routing software, eco-friendly infrastructure, and other eco-focused initiatives. The paper is also directed at the special linguistic needs of the telecommunications sector by focusing on a targeted, select range of telecom environments.

Keywords: Big Data, telecom, sustainable telecom sector, telecom networks.

PDF Downloads: 84
6780 Comparison of Irradiance Decomposition and Energy Production Methods in a Solar Photovoltaic System

Authors: Tisciane Perpetuo e Oliveira, Dante Inga Narvaez, Marcelo Gradella Villalva

Abstract:

Installations of solar photovoltaic systems have increased considerably in the last decade. It has therefore become clear that monitoring meteorological data (solar irradiance, air temperature, wind velocity, etc.) is important for predicting the potential of a given geographical area for solar energy production. In this sense, the present work compares two computational tools that are capable of estimating the energy generation of a photovoltaic system through correlation analyses of solar radiation data: the PVsyst software and an algorithm based on the PVlib package implemented in MATLAB. In order to achieve this objective, it was necessary to obtain solar radiation data (measured and from a solarimetric database), analyze the decomposition of global solar irradiance into direct normal and diffuse horizontal components, and analyze the modeling of the devices of a photovoltaic system (solar modules and inverters) for energy production calculations. Simulated results were compared with experimental data in order to evaluate the performance of the studied methods. Errors in the estimation of energy production were less than 30% for the MATLAB algorithm and less than 20% for the PVsyst software.
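
One widely used decomposition correlation (Erbs et al.) estimates the diffuse fraction from the clearness index; the sketch below implements that correlation directly rather than calling PVsyst or the PVlib routines compared in the paper, and the sample inputs are invented.

```python
import numpy as np

def erbs_decomposition(ghi, zenith_deg, extraterrestrial=1367.0):
    """Split global horizontal irradiance into diffuse horizontal and direct normal
    components using the Erbs clearness-index correlation (values in W/m^2)."""
    cos_z = np.cos(np.radians(zenith_deg))
    kt = np.clip(ghi / (extraterrestrial * cos_z), 0, 1)      # clearness index
    df = np.where(kt <= 0.22, 1 - 0.09 * kt,
         np.where(kt <= 0.80,
                  0.9511 - 0.1604 * kt + 4.388 * kt**2 - 16.638 * kt**3 + 12.336 * kt**4,
                  0.165))                                      # diffuse fraction
    dhi = df * ghi
    dni = (ghi - dhi) / cos_z
    return dhi, dni

# Invented sample: a clear midday measurement.
dhi, dni = erbs_decomposition(ghi=800.0, zenith_deg=30.0)
print(round(float(dhi), 1), round(float(dni), 1))
```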

Keywords: Energy production, meteorological data, irradiance decomposition, solar photovoltaic system.

PDF Downloads: 766
6779 Cloud Computing Cryptography "State-of-the-Art"

Authors: Omer K. Jasim, Safia Abbas, El-Sayed M. El-Horbaty, Abdel-Badeeh M. Salem

Abstract:

Cloud computing technology is very useful in present-day life; it uses the internet and central remote servers to provide and maintain data as well as applications. Such applications can in turn be used by end users via cloud communications without any installation. Moreover, end users' data files can be accessed and manipulated from any other computer using internet services. Despite the flexibility of data and application access and usage that cloud computing environments provide, many questions still arise about how to obtain a trusted environment that protects data and applications in clouds from hackers and intruders. This paper surveys the key generation and management mechanisms and the encryption/decryption algorithms used in cloud computing environments, and proposes a new security architecture for the cloud computing environment that addresses the various security gaps as much as possible. A new cryptographic environment that applies quantum mechanics in order to obtain more trusted cloud communications with less computation is also presented.

Keywords: Cloud Computing, Cloud Encryption Model, Quantum Key Distribution.

PDF Downloads: 4094
6778 Deep iCrawl: An Intelligent Vision-Based Deep Web Crawler

Authors: R.Anita, V.Ganga Bharani, N.Nityanandam, Pradeep Kumar Sahoo

Abstract:

The explosive growth of the World Wide Web has posed a challenging problem in extracting relevant data. Traditional web crawlers focus only on the surface web, while the deep web keeps expanding behind the scenes. Deep web pages are created dynamically as a result of queries posed to specific web databases. The structure of deep web pages makes it impossible for traditional web crawlers to access deep web contents. This paper, Deep iCrawl, presents a novel vision-based approach for extracting data from the deep web. Deep iCrawl splits the process into two phases: the first includes query analysis and query translation, and the second covers the vision-based extraction of data from the dynamically created deep web pages. There are several established approaches for the extraction of deep web pages, but the proposed method aims at overcoming their inherent limitations. This paper also aims at comparing the extracted data items and presenting them in the required order.

Keywords: Crawler, Deep web, Web Database

PDF Downloads: 2156
6777 Searchable Encryption in Cloud Storage

Authors: Ren-Junn Hwang, Chung-Chien Lu, Jain-Shing Wu

Abstract:

Cloud outsourced storage is one of the important services in cloud computing. Cloud users upload data to cloud servers to reduce the cost of managing data and maintaining hardware and software. To ensure data confidentiality, users can encrypt their files before uploading them to a cloud system. However, retrieving exactly the target file from among the encrypted files is difficult for the cloud server. This study proposes a protocol for performing multi-keyword searches over encrypted cloud data by applying k-nearest neighbor technology. The protocol ranks the relevance scores of encrypted files and keywords, and prevents cloud servers from learning the search keywords submitted by a cloud user. To reduce the cost of file transfer communication, the cloud server returns encrypted files in order of relevance. Moreover, when a cloud user inputs an incorrect keyword and the number of wrong letters does not exceed a given threshold, the user can still retrieve the target files from the cloud server. In addition, the proposed scheme satisfies the security requirements for outsourced data storage.
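
A deliberately simplified sketch of the overall flow, not the paper's secure kNN construction: files are encrypted before upload, the server stores only keyed keyword hashes with relevance scores, and a query of hashed keywords returns the matching ciphertexts ranked by score. It uses the `cryptography` package; the names, keys, and scoring are illustrative.

```python
import hashlib
import hmac
from collections import defaultdict
from cryptography.fernet import Fernet

file_key = Fernet.generate_key()          # kept by the data owner
index_key = b"owner-index-key"            # keyed hashing hides keywords from the server
fernet = Fernet(file_key)

def keyword_tag(word: str) -> str:
    return hmac.new(index_key, word.lower().encode(), hashlib.sha256).hexdigest()

# "Cloud server" state: ciphertexts plus a hashed-keyword index with relevance scores.
ciphertexts, index = {}, defaultdict(dict)

def upload(name: str, text: str) -> None:
    ciphertexts[name] = fernet.encrypt(text.encode())
    for word in text.split():
        tag = keyword_tag(word)
        index[tag][name] = index[tag].get(name, 0) + 1      # crude term-frequency score

def ranked_search(*keywords: str):
    scores = defaultdict(int)
    for kw in keywords:
        for name, score in index.get(keyword_tag(kw), {}).items():
            scores[name] += score
    return sorted(scores, key=scores.get, reverse=True)      # best match first

upload("report.txt", "cloud storage cost report for cloud services")
upload("notes.txt", "meeting notes about storage")
for name in ranked_search("cloud", "storage"):
    print(name, fernet.decrypt(ciphertexts[name]).decode())
```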

Keywords: Fault-tolerance search, multi-keywords search, outsource storage, ranked search, searchable encryption.

PDF Downloads: 3081
6776 Unstructured-Data Content Search Based on Optimized EEG Signal Processing and Multi-Objective Feature Extraction

Authors: Qais M. Yousef, Yasmeen A. Alshaer

Abstract:

Over the last few years, the amount of data available around the globe has increased rapidly. This has come with the emergence of recent concepts, such as big data and the Internet of Things, which have furnished a suitable solution for the availability of data all over the world. However, managing this massive amount of data remains a challenge due to the large variety of types and the wide distribution of the data. Consequently, locating a required file, particularly on the first trial, has turned out to be no easy task, owing to the large similarity of names among different files distributed on the web, and the accuracy and speed of search have been negatively affected. This work presents a method that uses electroencephalography signals to locate files based on their contents. Building on the concept of natural mind-wave processing, this work analyses the mind-wave signals of different people, extracting their most appropriate features using a multi-objective metaheuristic algorithm and then classifying them using an artificial neural network to distinguish among files with similar names. The aim of this work is to provide the ability to find files based on their contents using human thoughts only. Implementing this approach and testing it on real people proved its ability to find the desired files accurately within a noticeably shorter time and retrieve them as a first choice for the user.

Keywords: Artificial intelligence, data contents search, human active memory, mind wave, multi-objective optimization.

PDF Downloads: 920