Publications | Computer and Information Engineering
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4316

World Academy of Science, Engineering and Technology

[Computer and Information Engineering]

Online ISSN: 1307-6892

3716 Research on Load Balancing Technology for Web Service Mobile Host

Authors: Yao Lu, Xiuguo Zhang, Zhiying Cao

Abstract:

In this paper, the idea of load balancing is applied to a Web service mobile host. The main idea of load balancing is to establish a one-to-many mapping: a single entry point maps each request to a plurality of processing nodes so that the request can be divided and assigned for processing. Because the mobile host is a resource-constrained environment, some Web services cannot be completed on the mobile host alone. When the resources of the mobile host are insufficient to complete a request, the load balancing scheduler divides the request into a plurality of sub-requests and transfers them to different auxiliary mobile hosts. Each auxiliary mobile host executes its sub-requests and returns the results to the mobile host. A service request integrator receives the results of the sub-requests from the auxiliary mobile hosts and integrates them, and the complete response is finally returned to the client. Experimental results show that the technology adopted in this paper can complete such requests with higher efficiency.
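
To make the flow concrete, here is a minimal Python sketch of the scheduler/integrator path described above; the helper names are hypothetical and a thread pool stands in for the auxiliary mobile hosts, so this is an illustration of the idea rather than the authors' implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def split_request(request, n_parts):
    """Hypothetical splitter: divide one request into n_parts sub-requests."""
    return [{"parent": request["id"], "part": i, "payload": request["payload"][i::n_parts]}
            for i in range(n_parts)]

def execute_on_auxiliary(host, sub_request):
    """Placeholder for dispatching a sub-request to an auxiliary mobile host.
    In the real system this would be a Web service call to `host`."""
    return {"part": sub_request["part"], "result": sorted(sub_request["payload"])}

def handle_request(request, auxiliary_hosts):
    subs = split_request(request, len(auxiliary_hosts))
    with ThreadPoolExecutor(max_workers=len(auxiliary_hosts)) as pool:
        partials = list(pool.map(execute_on_auxiliary, auxiliary_hosts, subs))
    # Service request integrator: merge the partial results in part order.
    partials.sort(key=lambda p: p["part"])
    return [x for p in partials for x in p["result"]]

print(handle_request({"id": 1, "payload": [5, 3, 8, 1, 9, 2]}, ["aux-1", "aux-2"]))
```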

Keywords: Dinic, load balancing, mobile host, web service.

3715 Standard Languages for Creating a Database to Display Financial Statements on a Web Application

Authors: Vladimir Simovic, Matija Varga, Predrag Oreski

Abstract:

XHTML and XBRL are the standard languages for creating a database for the purpose of displaying financial statements in web applications. Today, XBRL is one of the most popular languages for business reporting. A large number of countries recognize the role of the XBRL language in financial reporting and the benefits this reporting format provides in the collection, analysis, preparation, publication and exchange of data (information), which is the positive side of this language. Here we present the advantages and opportunities a company may gain by using the XBRL format for business reporting. The paper also presents XBRL and the other languages used for creating the database, such as XML and XHTML. The role of AJAX as a composite model and technology during the exchange of financial data between the web client and the web server is explained in detail, and the basic layers of the network used for data exchange via the web are described.

Keywords: XHTML, XBRL, XML, JavaScript, AJAX technology, data exchange.

3714 Automated User Story Driven Approach for Web-Based Functional Testing

Authors: Mahawish Masud, Muhammad Iqbal, M. U. Khan, Farooque Azam

Abstract:

Manual writing of test cases from functional requirements is a time-consuming task. Such test cases are not only difficult to write but are also challenging to maintain. Test cases can be drawn from functional requirements that are expressed in natural language; however, manual test case generation is inefficient and subject to errors. In this paper, we present a systematic procedure that can automatically derive test cases from user stories. The user stories are specified in a restricted natural language using a well-defined template. We also present a detailed methodology for writing our test-ready user stories. Our tool “Test-o-Matic” automatically generates the test cases by processing the restricted user stories. The generated test cases are executed using the open-source Selenium IDE. We evaluate our approach on a case study, an open-source web-based application, and assess its effectiveness by seeding faults into the case study using known mutation operators. Results show that test case generation from restricted user stories is a viable approach for automated testing of web applications.

Keywords: Automated testing, natural language, user story modeling, software engineering, software testing, test case specification, transformation and automation, user story, web application testing.

3713 Evaluation of Sensor Pattern Noise Estimators for Source Camera Identification

Authors: Benjamin Anderson-Sackaney, Amr Abdel-Dayem

Abstract:

This paper presents a comprehensive survey of recent source camera identification (SCI) systems. Then, the performance of various sensor pattern noise (SPN) estimators is experimentally assessed under common photo response non-uniformity (PRNU) frameworks. The experiments used 1350 natural and 900 flat-field images captured by 18 individual cameras, and 12 different experiments, grouped into three sets, were conducted. The results were analyzed using receiver operating characteristic (ROC) curves. The experimental results demonstrate that combining the basic SPN estimator with a wavelet-based filtering scheme provides promising results, whereas the phase SPN estimator fits better with both patch-based (BM3D) and anisotropic diffusion (AD) filtering schemes.

Keywords: Sensor pattern noise, source camera identification, photo response non-uniformity, anisotropic diffusion, peak to correlation energy ratio.

3712 Signal Processing Approach to Study Multifractality and Singularity of Solar Wind Speed Time Series

Authors: Tushnik Sarkar, Mofazzal H. Khondekar, Subrata Banerjee

Abstract:

This paper investigates the nature of the fluctuation of the daily average solar wind speed time series collected over a period of 2492 days, from 1st January 1997 to 28th October 2003. The degree of self-similarity and scalability of the solar wind speed signal is explored to characterise the signal fluctuation. The Multifractal Detrended Fluctuation Analysis (MFDFA) method is applied to the signal under investigation to perform this task. Furthermore, the singularity spectrum of the signal is also obtained to gauge the extent of multifractality of the time series.
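
For reference, the standard MFDFA relations behind the quantities named above (the q-th order fluctuation function, the generalized Hurst exponent h(q), and the singularity spectrum obtained by Legendre transform) are, in their usual form:

```latex
F_q(s) = \left\{ \frac{1}{N_s}\sum_{\nu=1}^{N_s}\left[F^2(\nu,s)\right]^{q/2} \right\}^{1/q} \sim s^{h(q)},
\qquad
\alpha = h(q) + q\,h'(q), \qquad f(\alpha) = q\big[\alpha - h(q)\big] + 1,
```

where F²(ν, s) is the detrended variance in segment ν at scale s; a q-dependent h(q) and a broad f(α) indicate multifractality.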

Keywords: Detrended fluctuation analysis, generalized Hurst exponent, Hölder exponents, multifractal exponent, multifractal spectrum, singularity spectrum, time series analysis.

3711 Multi-Level Meta-Modeling for Enabling Dynamic Subtyping for Industrial Automation

Authors: Zoltan Theisz, Gergely Mezei

Abstract:

Modern industrial automation relies on service-oriented concepts of Internet of Things (IoT) device modeling in order to provide a flexible and extendable environment for a service meta-repository. However, state-of-the-art meta-modeling techniques prefer design-time modeling, which results in heavy use of class-based, and sometimes unnecessary, static subtyping. Although this approach benefits from clear-cut object-oriented design principles, it also seals the model repository against further dynamic extensions. In this paper, a dynamic multi-level modeling approach is introduced that enables dynamic subtyping through a more relaxed partial instantiation mechanism. The approach is demonstrated on a simple sensor network example.

Keywords: Meta-modeling, dynamic subtyping, DMLA, industrial automation, arrowhead.

3710 Detecting Geographically Dispersed Overlay Communities Using Community Networks

Authors: Madhushi Bandara, Dharshana Kasthurirathna, Danaja Maldeniya, Mahendra Piraveenan

Abstract:

Community detection is an extremely useful technique for understanding the structure and function of a social network. The Louvain algorithm, which is based on the Newman-Girvan modularity optimization technique, is extensively used as a computationally efficient method to extract communities in social networks. It has been suggested that nodes in close geographical proximity have a higher tendency to form communities. Variants of the Newman-Girvan modularity measure, such as dist-modularity, try to normalize the effect of geographical proximity to extract geographically dispersed communities, at the expense of losing information about the geographically proximate communities. In this work, we propose a method to extract geographically dispersed communities while preserving the information about the geographically proximate communities, by analyzing the ‘community network’, in which the centroids of communities are treated as network nodes. We suggest that the inter-community link strengths, normalized over the community sizes, may be used to identify and extract the ‘overlay communities’. The overlay communities have relatively higher link strengths despite being relatively far apart in their spatial distribution. We apply this method to the Gowalla online social network, which contains the geographical signatures of its users, and identify the overlay communities within it.
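
A sketch of the community-network construction under stated assumptions: Louvain communities from NetworkX, geographic centroids computed from node coordinates, and inter-community link counts normalized by the product of community sizes, which is one plausible reading of "normalized over the community sizes" rather than the paper's exact definition.

```python
import networkx as nx

def community_network(G, pos):
    """Build a 'community network': nodes are community centroids, edge
    weights are inter-community link counts normalized by community sizes.
    `pos` maps each node of G to a (latitude, longitude) pair."""
    communities = nx.community.louvain_communities(G, seed=42)  # Louvain partition
    membership = {n: i for i, c in enumerate(communities) for n in c}

    C = nx.Graph()
    for i, c in enumerate(communities):
        xs, ys = zip(*(pos[n] for n in c))
        C.add_node(i, centroid=(sum(xs) / len(c), sum(ys) / len(c)), size=len(c))

    for u, v in G.edges():
        i, j = membership[u], membership[v]
        if i != j:
            links = C.get_edge_data(i, j, {"links": 0})["links"] + 1
            C.add_edge(i, j, links=links)

    for i, j, d in C.edges(data=True):
        d["strength"] = d["links"] / (C.nodes[i]["size"] * C.nodes[j]["size"])
    return C
```

Edges of C with high "strength" but widely separated centroids would then be candidates for the overlay communities described above.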

Keywords: Social networks, community detection, modularity optimization, geographically dispersed communities.

3709 Enhanced Planar Pattern Tracking for an Outdoor Augmented Reality System

Authors: L. Yu, W. K. Li, S. K. Ong, A. Y. C. Nee

Abstract:

In this paper, a scalable augmented reality framework for handheld devices is presented. The framework is enabled by a server-client data communication structure, in which the search for tracking targets among a database of images is performed on the server side, while pixel-wise 3D tracking is performed on the client side, which in this case is a handheld mobile device. Image search on the server side adopts a residual-enhanced image descriptor representation that gives the framework its scalability. The tracking algorithm on the client side is based on a gravity-aligned feature descriptor, which takes advantage of a sensor-equipped mobile device, and an optimized intensity-based image alignment approach that ensures the accuracy of 3D tracking. Automatic content streaming is achieved by using a key-frame selection algorithm, client working-phase monitoring, and standardized rules for content communication between the server and client. A recognition accuracy test performed on a standard dataset shows that the method adopted in the presented framework outperforms the Bag-of-Words (BoW) method used in some previous systems. Experimental tests conducted on a set of video sequences indicate real-time performance of the tracking system, with a frame rate of 15-30 frames per second. The framework is shown to be functional in practical situations with a demonstration application on a campus walk-around.

Keywords: Augmented reality framework, server-client model, vision-based tracking, image search.

3708 Prototype of an Interactive Toy from Lego Robotics Kits for Children with Autism

Authors: Ricardo A. Martins, Matheus S. da Silva, Gabriel H. F. Iarossi, Helen C. M. Senefonte, Cinthyan R. S. C. de Barbosa

Abstract:

This paper presents the development of a concept for human/robot interaction, aimed specifically at autistic children, who have greater difficulty with interaction. It offers a solution that is efficient, even though simple, yet little studied for this public. The concept is based on code applied through the Lego NXT kit and built for interpretation by the robot, so that this interaction can be created in a constructive way for children with autism.

Keywords: Lego NXT, autism, ANN (Artificial Neural Network), Backpropagation.

3707 A Simulated Scenario of WikiGIS to Support the Iteration and Traceability Management of the Geodesign Process

Authors: Wided Batita, Stéphane Roche, Claude Caron

Abstract:

Geodesign is an emergent term related to a new and complex process. Hence, tools, technologies and platforms need to be rethought in order to achieve its goals efficiently. A few tools have emerged since 2010, such as CommunityViz and GeoPlanner. In the era of Web 2.0 and collaboration, WikiGIS has been proposed as a new category of tools. In this paper, we present the WikiGIS functionalities dealing mainly with iteration and traceability management to support the collaboration of the Geodesign process. WikiGIS is built on GeoWeb 2.0 technologies, primarily on wiki, and aims at managing the tracking of participants’ editing. This paper focuses on a simplified simulation to illustrate the strength of WikiGIS in the management of traceability and in the access to history in a Geodesign process. A cartographic user interface has been implemented, and a hypothetical use case has been imagined as a proof of concept.

Keywords: Geodesign, history, traceability, tracking of participants’ editing, WikiGIS.

3706 A Description Logics Based Approach for Building Multi-Viewpoints Ontologies

Authors: M. Hemam, M. Djezzar, T. Djouad

Abstract:

We are interested in the problem of building an ontology in a heterogeneous organization, taking into account the different viewpoints and different terminologies of the communities in the organization. Such an ontology, which we call a multi-viewpoint ontology, confers on the same universe of discourse several partial descriptions, each one relative to a particular viewpoint. In addition, these partial descriptions share, at the global level, ontological elements constituting a consensus between the various viewpoints. In order to provide response elements to this problem, we define a multi-viewpoints knowledge model based on the viewpoint and ontology notions. The multi-viewpoints knowledge model is then used to formalize the multi-viewpoints ontology in a description logics language.

Keywords: Description logic, knowledge engineering, ontology, viewpoint.

3705 Channels Splitting Strategy for Optical Local Area Networks of Passive Star Topology

Authors: Peristera Baziana

Abstract:

In this paper, we present a network configuration for WDM LANs of passive star topology in which the set of data WDM channels is split into two separate sets of channels with different access rights over them. In particular, a synchronous WDMA transmission access algorithm is adopted in order to increase the probability of successful transmission over the data channels and, consequently, to reduce the probability of data packet transmission cancellation, so that collisions on the data channels are avoided. To this end, a control pre-transmission access scheme is followed over a separate control channel. An analytical Markovian model is studied and the average throughput is mathematically derived. The performance is studied for several numbers of data channels and various values of the control phase duration.

Keywords: Access algorithm, channels division, collisions avoidance, wavelength division multiplexing.

3704 Feature Vector Fusion for Image Based Human Age Estimation

Authors: D. Karthikeyan, G. Balakrishnan

Abstract:

Human faces, as important visual signals, express a significant amount of nonverbal information used in human-to-human communication. Age, specifically, is among the more significant of these properties. Human age estimation using facial image analysis is an automated method with numerous potential real-world applications. In this paper, an automated age estimation framework is presented. A Support Vector Regression (SVR) strategy is utilized to investigate age prediction. The paper describes feature extraction based on the Gray Level Co-occurrence Matrix (GLCM), which can be utilized for a robust face recognition framework. It applies the GLCM operation to extract the facial features from images and Active Appearance Models (AAMs) to estimate the human age from the image. A fused feature technique and SVR with GA optimization are proposed to lessen the error in age estimation.
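
A minimal sketch of the GLCM-plus-SVR pipeline named above, with illustrative GLCM distances, angles and properties and random stand-in data; scikit-image and scikit-learn are assumed, and this is not the authors' fused-feature or GA-optimized configuration.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19 spelling
from sklearn.svm import SVR

def glcm_features(gray_face):
    """Texture features from a GLCM; distances, angles and properties are
    illustrative choices, not the paper's exact descriptor."""
    glcm = graycomatrix(gray_face, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Hypothetical training data: 8-bit grayscale face crops with known ages.
faces = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(40)]
ages = np.random.uniform(10, 70, size=40)

X = np.vstack([glcm_features(f) for f in faces])
model = SVR(kernel="rbf", C=10.0).fit(X, ages)
print("predicted age:", model.predict(X[:1])[0])
```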

Keywords: Support vector regression, feature extraction, gray level co-occurrence matrix, active appearance models.

3703 A POX Controller Module to Collect Web Traffic Statistics in SDN Environment

Authors: Wisam H. Muragaa, Kamaruzzaman Seman, Mohd Fadzli Marhusin

Abstract:

Software Defined Networking (SDN) is a new norm of networking. It is designed to facilitate the way the network is managed, measured, debugged and controlled dynamically, and to make it suitable for modern applications. Generally, measurement methods can be divided into two categories: active and passive. An active measurement method injects test packets into the network in order to monitor their behaviour (the ping tool is an example), while a passive measurement method monitors the traffic for the purpose of deriving measurement values. Both active and passive measurement methods are useful for collecting traffic statistics and monitoring network traffic. Although there has been work on measuring traffic statistics in an SDN environment, it was only meant for measuring packet and byte rates of non-web traffic. In this study, a feasible method is designed to measure the number of packets and bytes in a certain time, and to facilitate obtaining statistics for both web traffic and non-web traffic. Web traffic refers to HTTP requests at the application layer, while non-web traffic refers to ICMP and TCP requests; thus, this work is more comprehensive than previous works. With a module developed on the POX OpenFlow controller, information is collected from each active flow in the OpenFlow switch and presented on the Command Line Interface (CLI) and in the Wireshark interface. The statistics displayed on the CLI and in Wireshark include, among others, the type of protocol, the number of bytes and the number of packets. In addition, the module shows, in the same statistics list, the number of flows added to the switch whenever traffic is generated from and to hosts. To carry out this work effectively, our Python module sends a statistics request message to the switch every five seconds, requesting its current port and flow statistics, and the switch replies with the required information in a statistics reply message. The POX controller is therefore notified of and updated with any changes that occur in the entire network within a very short time. The aim of this study is thus to prepare a list of the important statistics elements collected from the whole network, to be used in further research, particularly research dealing with the detection of network attacks that cause a sudden rise in the number of packets and bytes, such as Distributed Denial of Service (DDoS).
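
The polling loop described above maps naturally onto a small POX module. The following sketch (module and handler names are illustrative, not the authors' code) sends flow and port statistics requests to every connected switch every five seconds and logs totals from the replies.

```python
# Minimal POX module sketch: periodic flow/port statistics polling.
from pox.core import core
import pox.openflow.libopenflow_01 as of
from pox.lib.recoco import Timer

log = core.getLogger()

def _request_stats():
    for connection in core.openflow.connections:
        connection.send(of.ofp_stats_request(body=of.ofp_flow_stats_request()))
        connection.send(of.ofp_stats_request(body=of.ofp_port_stats_request()))

def _handle_flow_stats(event):
    packets = sum(f.packet_count for f in event.stats)
    total_bytes = sum(f.byte_count for f in event.stats)
    log.info("switch %s: %d flows, %d packets, %d bytes",
             event.connection.dpid, len(event.stats), packets, total_bytes)

def _handle_port_stats(event):
    log.info("switch %s: port stats for %d ports",
             event.connection.dpid, len(event.stats))

def launch():
    core.openflow.addListenerByName("FlowStatsReceived", _handle_flow_stats)
    core.openflow.addListenerByName("PortStatsReceived", _handle_port_stats)
    Timer(5, _request_stats, recurring=True)  # poll every five seconds
```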

Keywords: Mininet, OpenFlow, POX controller, SDN.

3702 Supporting Embedded Medical Software Development with MDevSPICE® and Agile Practices

Authors: Surafel Demissie, Frank Keenan, Fergal McCaffery

Abstract:

Emerging medical devices rely heavily on embedded software that runs on a specific platform in real time. The development of embedded software differs from ordinary software development due to the hardware-software dependency. MDevSPICE® has been developed to provide guidance to support such development. To increase the flexibility of this framework, agile practices have been introduced. This paper outlines the challenges of embedded medical device software development and the structure of MDevSPICE®, and suggests a suitable combination of agile practices that will help to add flexibility and address the corresponding challenges of embedded medical device software development.

Keywords: Agile practices, challenges, embedded software, MDevSPICE®, medical device.

3701 An Improvement of Multi-Label Image Classification Method Based on Histogram of Oriented Gradient

Authors: Ziad Abdallah, Mohamad Oueidat, Ali El-Zaart

Abstract:

Image Multi-label Classification (IMC) assigns a label or a set of labels to an image. The large demand for image annotation and archiving on the web has attracted researchers to develop many algorithms for this application domain. The existing techniques for IMC have two drawbacks: the description of the elementary characteristics of the image and the correlation between labels are not taken into account. In this paper, we present an algorithm (MIML-HOGLPP) which handles both limitations simultaneously. The algorithm uses the histogram of oriented gradients as the feature descriptor and applies the Label Priority Power-set as the multi-label transformation to solve the problem of label correlation. Experiments show that the results of MIML-HOGLPP are better in terms of several evaluation metrics compared with the two existing techniques.
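
To make the transformation concrete, the sketch below extracts HOG features and applies a plain power-set transformation, mapping each distinct label combination to a single class; the paper's Label Priority Power-set adds a priority ordering that is not reproduced here, and the data are random stand-ins.

```python
import numpy as np
from skimage.feature import hog
from sklearn.linear_model import LogisticRegression

# Hypothetical data: grayscale images with multi-label targets (binary indicator rows).
images = [np.random.rand(64, 64) for _ in range(30)]
Y = np.random.randint(0, 2, size=(30, 4))            # 4 possible labels

X = np.array([hog(img, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2)) for img in images])

# Plain power-set transformation: each distinct label combination becomes one class.
combos = {tuple(row): i for i, row in enumerate(np.unique(Y, axis=0))}
y_ps = np.array([combos[tuple(row)] for row in Y])

clf = LogisticRegression(max_iter=1000).fit(X, y_ps)
inverse = {v: np.array(k) for k, v in combos.items()}
predicted_labels = inverse[clf.predict(X[:1])[0]]     # map the class back to a label set
print(predicted_labels)
```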

Keywords: Data mining, information retrieval system, multi-label, problem transformation, histogram of gradients.

3700 Image Rotation Using an Augmented 2-Step Shear Transform

Authors: Hee-Choul Kwon, Heeyong Kwon

Abstract:

Image rotation is one of the main pre-processing steps in image processing and image pattern recognition. It is implemented with a rotation matrix multiplication, which requires many floating-point arithmetic operations and trigonometric calculations and therefore takes a long time to execute. There has thus been a need for a high-speed image rotation algorithm without these two major time-consuming operations. However, the image rotated in this way has a drawback, namely distortion. We solve this problem using an augmented two-step shear transform. We compare the presented algorithm with conventional rotation on images of various sizes. Experimental results show that the presented algorithm is superior to the conventional rotation algorithm.
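
For background, shear-based rotation decomposes the rotation matrix into shears applied as integer row/column shifts, avoiding per-pixel matrix multiplication. The sketch below shows the classical three-shear form, not the authors' augmented two-step variant, and uses wrap-around rolling as a simplification instead of proper padding.

```python
import numpy as np

def shear_rotate(img, theta):
    """Classical three-shear rotation (illustrative only)."""
    a = -np.tan(theta / 2.0)   # x-shear factor
    b = np.sin(theta)          # y-shear factor
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0

    def shear_x(m, k):         # shift each row by an integer amount
        r = m.copy()
        for y in range(h):
            r[y] = np.roll(m[y], int(round(k * (y - cy))))
        return r

    def shear_y(m, k):         # shift each column by an integer amount
        r = m.copy()
        for x in range(w):
            r[:, x] = np.roll(m[:, x], int(round(k * (x - cx))))
        return r

    return shear_x(shear_y(shear_x(img, a), b), a)

rotated = shear_rotate(np.arange(64 * 64, dtype=float).reshape(64, 64), np.pi / 6)
```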

Keywords: High speed rotation operation, image rotation, transform matrix, image processing, pattern recognition.

3699 Computing Continuous Skyline Queries without Discriminating between Static and Dynamic Attributes

Authors: Ibrahim Gomaa, Hoda M. O. Mokhtar

Abstract:

Although most existing skyline query algorithms have focused on querying static points in static databases, with the expanding number of sensors, wireless communications and mobile applications, the demand for continuous skyline queries has increased. Unlike traditional skyline queries, which only consider static attributes, continuous skyline queries include dynamic attributes as well as static ones. However, as skyline query computation is based on checking the domination of skyline points over all dimensions, the static and dynamic attributes must be considered together, without separation. In this paper, we present an efficient algorithm for computing continuous skyline queries without discriminating between static and dynamic attributes. In brief, our algorithm proceeds as follows. First, it excludes the points that will not be in the initial skyline result; this pruning phase reduces the required number of comparisons. Second, the association between the spatial positions of the data points is examined; this phase gives an idea of where changes in the result might occur and consequently enables us to update the skyline result efficiently (continuous update) rather than computing the skyline from scratch. Finally, an experimental evaluation is provided which demonstrates the accuracy, performance and efficiency of our algorithm over other existing approaches.
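
The domination test that drives this computation treats all attributes, static or dynamic, as one combined vector. A minimal sketch of that check and a naive skyline pass follows, with illustrative data and assuming smaller values are better in every dimension.

```python
def dominates(p, q):
    """p dominates q if p is no worse in every dimension and strictly
    better in at least one (smaller is better here)."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    """Naive skyline over combined static + dynamic attribute vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Each tuple mixes a static attribute (e.g. price) and a dynamic one
# (e.g. current distance to a moving query point), treated uniformly.
points = [(100, 2.5), (80, 4.0), (120, 1.0), (80, 2.0)]
print(skyline(points))   # -> [(120, 1.0), (80, 2.0)]
```

The continuous update described in the abstract replaces the full pass with localized re-checks around the points whose dynamic attributes changed.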

Keywords: Continuous query processing, dynamic database, moving object, skyline queries.

3698 Coloured Petri Nets Model for Web Architectures of Web and Database Servers

Authors: Nidhi Gaur, Padmaja Joshi, Vijay Jain, Rajeev Srivastava

Abstract:

Web application architecture is important for achieving the desired performance of the application. Performance analysis studies are conducted to evaluate existing or planned systems. Web applications are used by hundreds of thousands of users simultaneously, which sometimes increases the risk of server failure in real-time operations. We use Coloured Petri Nets (CPNs), a very powerful tool for modelling the dynamic behaviour of a web application system. CPNs extend the vocabulary of ordinary Petri nets and add features that make them suitable for modelling large systems. The major focus of this work is on the server side of web applications. The presented work focuses on modelling restructuring aspects, with particular attention to concurrency and architecture, using CPNs. It also aims at identifying the appropriate architecture for the web and database servers given the number of concurrent users.

Keywords: Coloured petri nets, concurrent users, performance modelling, web application architecture.

3697 Compressed Suffix Arrays to Self-Indexes Based on Partitioned Elias-Fano

Authors: Guo Wenyu, Qu Youli

Abstract:

A practical and simple self-indexing data structure, the Partitioned Elias-Fano (PEF) Compressed Suffix Array (CSA), is built in linear time for the CSA based on PEF indexes. The PEF-CSA is compared with two classical compressed indexing methods, the Ferragina and Manzini implementation (FMI) and Sad-CSA, on files of different types and sizes from the Pizza & Chili corpus. The PEF-CSA performs better on the existing data in terms of compression ratio, count time and locate time, except for evenly distributed data such as the proteins data. The experiments indicate that the distribution of φ is more important for the compression ratio than the alphabet size: unevenly distributed φ yields a better compression effect, and the larger the number of hit counts, the longer the count and locate times.

Keywords: Compressed suffix array, self-indexing, partitioned Elias-Fano, PEF-CSA.

3696 Comparative Study of Conventional and Satellite Based Agriculture Information System

Authors: Rafia Hassan, Ali Rizwan, Sadaf Farhan, Bushra Sabir

Abstract:

The purpose of this study is to compare the conventional crop monitoring system with the satellite-based crop monitoring system in Pakistan. The study was conducted for SUPARCO (Space and Upper Atmosphere Research Commission) and focused on the wheat crop, as it is the main cash crop of Pakistan and the province of Punjab. It answers the question: which system is better in terms of cost, time and manpower? The manpower calculated for the Punjab CRS (Crop Reporting Service) is 1,418 personnel, and for SUPARCO 26 personnel. The total cost calculated for SUPARCO is almost 13.35 million, and for CRS 47.705 million. The man hours calculated for CRS are 1,543,200 hrs (136 days), and for SUPARCO 8,320 hrs (40 days), meaning that SUPARCO workers finish their work 96 days earlier than CRS workers. The results show that the satellite-based crop monitoring system is more efficient in terms of manpower, cost and time than the conventional system, and also generates earlier crop forecasts and estimations. The research instruments used included interviews, physical visits, group discussions, questionnaires, and the study of reports and work flows. A total of 93 employees were selected using Yamane’s formula, and data collection was carried out with the help of questionnaires and interviews. Comparative graphing was used to analyze the data and formulate the results of the research. The findings also demonstrate that although conventional methods still have a strong presence in crop monitoring in Pakistan, it is time to bring about change through technology, so that agriculture can also be developed along modern lines.

Keywords: Crop reporting service, SRS/GIS, satellite remote sensing/geographic information system, area frame, sample frame.

3695 An Adaptive Dimensionality Reduction Approach for Hyperspectral Imagery Semantic Interpretation

Authors: Akrem Sellami, Imed Riadh Farah, Basel Solaiman

Abstract:

With the development of HyperSpectral Imagery (HSI) technology, the spectral resolution of HSI has become denser, which results in a large number of spectral bands, high correlation between neighboring bands, and high data redundancy. Semantic interpretation is therefore a challenging task for HSI analysis, due to the high dimensionality and the high correlation of the different spectral bands. This work presents a dimensionality reduction approach that overcomes these issues and improves the semantic interpretation of HSI. In order to preserve the spatial information, Tensor Locality Preserving Projection (TLPP) is applied to transform the original HSI. In the second step, knowledge is extracted based on the adjacency graph to describe the different pixels. Based on the transformation matrix obtained with TLPP, a weighted matrix is constructed to rank the different spectral bands by their contribution score, and the relevant bands are adaptively selected based on this weighted matrix. The performance of the presented approach has been validated through several experiments, and the obtained results demonstrate its efficiency compared to various existing dimensionality reduction techniques. According to the experimental results, we can conclude that this approach can adaptively select the relevant spectral bands, improving the semantic interpretation of HSI.

Keywords: Band selection, dimensionality reduction, feature extraction, hyperspectral imagery, semantic interpretation.

3694 JREM: An Approach for Formalising Models in the Requirements Phase with JSON and NoSQL Databases

Authors: Aitana Alonso-Nogueira, Helia Estévez-Fernández, Isaías García

Abstract:

This paper presents an approach to reducing some of the current flaws in the requirements phase of the software development process. It takes the software requirements of an application, builds a conceptual model of them and formalizes the model as JSON documents. This formal model is stored in a document-oriented NoSQL database, namely MongoDB, because of its advantages in flexibility and efficiency. In addition, the paper underlines the contributions of the detailed approach and shows some applications and benefits for future work in the field of automatic code generation using model-driven engineering tools.
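
As a hedged illustration of the formalization step, where the field names and collection layout are assumptions rather than the schema defined in the paper, a requirement can be expressed as a JSON document and lodged in MongoDB with a few lines of Python.

```python
from pymongo import MongoClient

# Hypothetical requirement formalized as a JSON document.
requirement = {
    "id": "REQ-001",
    "title": "User login",
    "actor": "Registered user",
    "preconditions": ["account exists", "account is active"],
    "behaviour": "The system authenticates the user with e-mail and password.",
    "acceptance_criteria": ["invalid credentials are rejected",
                            "session token is issued on success"],
}

client = MongoClient("mongodb://localhost:27017")   # local MongoDB instance
db = client["requirements_model"]
db.requirements.insert_one(requirement)

# The same collection can later be queried by model-driven tooling, e.g.:
print(db.requirements.find_one({"actor": "Registered user"})["id"])
```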

Keywords: Conceptual modeling, JSON, NoSQL databases, requirements engineering, software development.

3693 Collision Detection Algorithm Based on Data Parallelism

Authors: Zhen Peng, Baifeng Wu

Abstract:

Modern computing technology has entered the era of parallel computing, with a trend towards sustainable and scalable parallelism. Single Instruction Multiple Data (SIMD) is an important way to follow this trend: it gathers more and more computing capacity by increasing the number of processor cores without the need to modify the program. Meanwhile, in the fields of scientific computing and engineering design, many computation-intensive applications face the challenge of increasingly large amounts of data. Data-parallel computing will be an important way to further improve the performance of these applications. In this paper, we take accurate collision detection in building information modeling as an example and demonstrate a model for constructing a data-parallel algorithm. According to the model, a complex object is decomposed into sets of simple objects, and collision detection among complex objects is converted into collision detection among simple objects. The resulting algorithm is a typical SIMD algorithm, and its advantages in parallelism and scalability are unmatched by traditional algorithms.
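
A minimal sketch of the decomposition idea, assuming axis-aligned bounding boxes as the "simple objects" and using NumPy broadcasting to apply the same overlap test across all pairs at once; the paper's actual primitives and its SIMD mapping may differ.

```python
import numpy as np

def aabb_collisions(boxes_a, boxes_b):
    """Data-parallel overlap test between two sets of axis-aligned boxes.
    Each row is (xmin, ymin, zmin, xmax, ymax, zmax); entry (i, j) of the
    result is True if box i of set A overlaps box j of set B. Broadcasting
    applies the same instruction to every pair of simple objects."""
    lo_a, hi_a = boxes_a[:, None, :3], boxes_a[:, None, 3:]
    lo_b, hi_b = boxes_b[None, :, :3], boxes_b[None, :, 3:]
    return np.all((lo_a <= hi_b) & (lo_b <= hi_a), axis=-1)

# Two "complex objects" decomposed into simple boxes (values illustrative).
wall = np.array([[0, 0, 0, 1, 1, 3], [1, 0, 0, 2, 1, 3]], dtype=float)
duct = np.array([[0.5, 0.5, 2.5, 1.5, 0.8, 2.8]], dtype=float)
print(aabb_collisions(wall, duct))   # both wall boxes overlap the duct box
```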

Keywords: Data parallelism, collision detection, single instruction multiple data, building information modeling, continuous scalability.

3692 Incremental Learning of Independent Topic Analysis

Authors: Takahiro Nishigaki, Katsumi Nitta, Takashi Onoda

Abstract:

In this paper, we present a method for applying Independent Topic Analysis (ITA) to an increasing amount of document data. The amount of document data has been increasing since the spread of the Internet, and ITA was proposed as one method to analyze such data. ITA is a method for extracting independent topics from document data by using Independent Component Analysis (ICA), a technique from signal processing. However, it is difficult to apply ITA to a growing collection of documents, because ITA must use all the document data, so its temporal and spatial costs are very high. Therefore, we present Incremental ITA, which extracts independent topics from an increasing amount of document data. When new document data are added, Incremental ITA updates the independent topics starting from those extracted from the previous data, rather than recomputing them from scratch. We show the results of applying Incremental ITA to benchmark datasets.

Keywords: Text mining, topic extraction, independent, incremental, independent component analysis.

3691 Robust Control of a Dynamic Model of an F-16 Aircraft with Improved Damping through Linear Matrix Inequalities

Authors: J. P. P. Andrade, V. A. F. Campos

Abstract:

This work presents an application of Linear Matrix Inequalities (LMIs) to the robust control of an F-16 aircraft through an algorithm ensuring a specified damping factor for the closed-loop system. The results show that the zero and gain settings are sufficient to ensure robust performance and stability with respect to various operating points. The technique used is pole placement, which aims to place the closed-loop poles of the system in a specific region of the complex plane. Test results using a dynamic model of the F-16 aircraft are presented and discussed.
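
A hedged sketch of regional pole placement via an LMI, using a generic second-order system and a vertical half-plane region rather than the paper's F-16 model and damping-factor cone; CVXPY with the SCS solver is assumed to be available.

```python
import numpy as np
import cvxpy as cp

# Illustrative 2-state system (NOT the F-16 model from the paper).
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
alpha = 1.0   # decay rate: closed-loop poles left of Re(s) = -alpha

n, m = A.shape[0], B.shape[1]
X = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((m, n))

# (A+BK)X + X(A+BK)^T + 2*alpha*X < 0 with Y = K X: a standard LMI-region
# constraint for a vertical half-plane (the damping cone is a related but
# more involved LMI region).
lmi = A @ X + X @ A.T + B @ Y + Y.T @ B.T + 2 * alpha * X
constraints = [X >> np.eye(n), 0.5 * (lmi + lmi.T) << -1e-3 * np.eye(n)]

cp.Problem(cp.Minimize(0), constraints).solve(solver=cp.SCS)
K = Y.value @ np.linalg.inv(X.value)
print("state feedback gain:", K)
print("closed-loop poles:", np.linalg.eigvals(A + B @ K))
```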

Keywords: F-16 Aircraft, linear matrix inequalities, pole placement, robust control.

3690 A Programming Assessment Software Artefact Enhanced with the Help of Learners

Authors: Romeo A. Botes, Imelda Smit

Abstract:

The demands of an ever-changing and complex higher education environment, along with the profile of modern learners, challenge current approaches to assessment and feedback. More learners enter the education system every year. The younger generation expects immediate feedback, and at the same time, feedback should be meaningful. The assessment of practical activities in programming poses a particular problem, since both lecturers and learners in the information and computer science discipline acknowledge that paper-based assessment of programming subjects lacks meaningful real-life testing, while feedback lacks promptness, consistency, comprehensiveness and individualisation. Most of these aspects may be addressed by modern, technology-assisted assessment. The focus of this paper is the continuous development of an artefact used to assist the lecturer in the assessment of, and feedback on, practical programming activities in a senior database programming class. The artefact was developed over three Design Science Research cycles. The first implementation allowed one programming activity submission per assessment intervention. This pilot provided valuable insight into the obstacles to implementing this type of assessment tool. A second implementation improved the initial version to allow multiple programming activity submissions per assessment. The focus of this version is on providing scaffolded feedback to the learner, allowing improvement with each subsequent submission. It also has a built-in capability to provide the lecturer with information regarding the key problem areas of each assessment intervention.

Keywords: Programming, computer-aided assessment, technology-assisted assessment, programming assessment software, design science research, mixed-method.

3689 Neuro-Fuzzy Based Model for Phrase Level Emotion Understanding

Authors: Vadivel Ayyasamy

Abstract:

The present approach deals with the identification of emotions and the classification of emotional patterns at the phrase level with respect to positive and negative orientation. The proposed approach considers emotion-triggering terms, their co-occurring terms and the associated sentences for recognizing emotions, and it uses Part-of-Speech tagging and Emotion Actifiers for classification. Sentence patterns are broken into phrases, and a neuro-fuzzy model is used for classification, which results in 16 patterns of emotional phrases. Suitable intensities are assigned to capture the degree of emotional content present in the semantics of the patterns. These emotional phrases are assigned weights, which support the decision on the positive or negative orientation of the emotions. The approach uses web documents for experimental purposes, and the proposed classification approach performs well, achieving good F-scores.

Keywords: Emotions, sentences, phrases, classification, patterns, fuzzy, positive orientation, negative orientation.

3688 Analyzing the Plausible Alternatives in Contracting the Societal Fissure Caused by Digital Divide in Sri Lanka

Authors: Manuela Nayantara Jeyaraj

Abstract:

'Digital divide' is a concept that has existed in this paradigm ever since the discovery of the first-generation technologies. Before the turn of the century, it was basically used to describe the gap between those with access to telephone communication and those without it. At present, it plainly describes the gap between those with Internet access and those without. Although the concept of the digital divide has been in plain sight for a long time, the friction it causes has not yet been fully addressed in resolving major crisis situations. Unlike well-developed countries, Sri Lanka is still striving to move beyond its status as a developing country in the race towards reaching a developed state. Access to technological resources varies from region to region, even within the island itself, with one region having a considerable percentage of its community exposed to the Internet and its related technologies, and another unaware of such. Thus, this paper intends to analyze the roots of the still-extant gap described by the concept of the 'digital divide' and to explore the plausible potential of narrowing this prevailing divide among the population, specifically the advantages it would bring for economic augmentation and for a culture and lifestyle transformation on the path towards development.

Keywords: Communication, digital divide, society, Sri Lanka.

3687 Impact Analysis Based on Change Requirement Traceability in Object Oriented Software Systems

Authors: Sunil Tumkur Dakshinamurthy, Mamootil Zachariah Kurian

Abstract:

Change requirement traceability in object-oriented software systems is one of the challenging areas of research. It is recognized that the trace links between different artifacts need to be automated or semi-automated in the software development life cycle (SDLC). The aim of this paper is to discuss and implement aspects of dynamically linking artifacts such as requirements, high-level design, code and test cases through the Extensible Markup Language (XML) or by dynamically generating Object-Oriented (OO) metrics. Non-functional requirement (NFR) aspects such as stability, completeness, clarity, validity, feasibility and precision are also discussed. We discuss this as a fifth taxonomy, which is a system vulnerability concern.

Keywords: Artifacts, NFRs, OO metrics, SDLC, XML.
