Search results for: Software quality Metrics
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4910

4850 An Agent Oriented Approach to Operational Profile Management

Authors: Sunitha Ramanujam, Hany El Yamany, Miriam A. M. Capretz

Abstract:

Software reliability, defined as the probability of a software system or application functioning without failure or errors over a defined period of time, has been an important area of research for over three decades. Several research efforts aimed at developing models to improve reliability are currently underway. One of the most popular approaches to software reliability adopted by some of these research efforts involves the use of operational profiles to predict how software applications will be used. Operational profiles are a quantification of usage patterns for a software application. The research presented in this paper investigates an innovative multi-agent framework for the automatic creation and management of operational profiles for generic distributed systems after their release into the market. The architecture of the proposed Operational Profile MAS (Multi-Agent System) is presented along with detailed descriptions of the various models arrived at following the analysis and design phases of the proposed system. The operational profile in this paper is extended to comprise seven different profiles. Further, the criticality of operations is defined using a new composed metric in order to organize the testing process and to decrease the time and cost involved in it. A prototype implementation of the proposed MAS is included as proof of concept, and the framework is considered a step towards making distributed systems intelligent and self-managing.
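
As a rough, hedged illustration of the ideas above (not the authors' implementation), the sketch below encodes an operational profile as a probability distribution over operations and derives a composed criticality score per operation; the operation names, usage probabilities, and severity weights are hypothetical.

```python
# Hypothetical sketch: an operational profile as usage probabilities per operation,
# with a composed criticality = usage probability x failure severity (assumed weights).
operational_profile = {          # probabilities should sum to 1.0
    "login":        0.40,
    "search":       0.35,
    "checkout":     0.20,
    "admin_report": 0.05,
}
severity = {                     # assumed severity weights on a 1-10 scale
    "login": 8, "search": 3, "checkout": 10, "admin_report": 5,
}

# Composed criticality metric used to prioritise testing effort.
criticality = {op: operational_profile[op] * severity[op] for op in operational_profile}

for op, c in sorted(criticality.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{op:13s} criticality={c:.2f}")
```

Operations with the highest composed score would then be tested first, which is one way to shorten the testing schedule without ignoring rarely used but high-severity functions.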

Keywords: Software reliability, Software testing, Metrics, Distributed systems, Multi-agent systems

4849 Alternative Methods to Rank the Impact of Object Oriented Metrics in Fault Prediction Modeling using Neural Networks

Authors: Kamaldeep Kaur, Arvinder Kaur, Ruchika Malhotra

Abstract:

The aim of this paper is to rank the impact of Object-Oriented (OO) metrics in fault prediction modeling using Artificial Neural Networks (ANNs). Past studies on the empirical validation of object-oriented metrics as fault predictors using ANNs have focused on the predictive quality of neural networks versus standard statistical techniques. In this empirical study we turn our attention to the capability of ANNs to rank the impact of these explanatory metrics on fault proneness. In the ANN data analysis approach, there is no clear method for ranking the impact of individual metrics. Five ANN-based techniques that rank object-oriented metrics in predicting the fault proneness of classes are studied: i) the overall connection weights method, ii) Garson's method, iii) the partial derivatives method, iv) the input perturbation method, and v) the classical stepwise method. We develop and evaluate different prediction models based on the ranking of the metrics by the individual techniques. The models based on the overall connection weights and partial derivatives methods are found to be the most accurate.
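
A minimal sketch of the overall connection weights idea mentioned above, assuming scikit-learn and synthetic data in place of the authors' fault dataset; the metric names and labels are illustrative. Each input metric is scored by summing, over hidden units, the product of its input-to-hidden weight and the corresponding hidden-to-output weight.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
metric_names = ["WMC", "DIT", "NOC", "CBO", "RFC", "LCOM"]   # illustrative OO metrics
X = rng.normal(size=(200, len(metric_names)))
# Synthetic fault labels driven mostly by CBO and WMC.
y = (X[:, 3] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)

# Overall connection weights: sum over hidden units of (input->hidden weight) * (hidden->output weight).
w_ih, w_ho = net.coefs_[0], net.coefs_[1]          # shapes (n_inputs, n_hidden), (n_hidden, 1)
importance = (w_ih @ w_ho).ravel()

for name, score in sorted(zip(metric_names, importance), key=lambda t: abs(t[1]), reverse=True):
    print(f"{name}: {score:+.3f}")
```

Garson's method and the partial derivatives method differ only in how the same trained weights are aggregated into a per-metric score.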

Keywords: Artificial Neural Networks (ANNs), Backpropagation, Fault Prediction Modeling.

4848 A Critical Survey of Reusability Aspects for Component-Based Systems

Authors: Arun Sharma, Rajesh Kumar, P. S. Grover

Abstract:

The last decade has shown that the object-oriented concept by itself is not powerful enough to cope with the rapidly changing requirements of ongoing applications. Component-based systems achieve flexibility by clearly separating the stable parts of systems (i.e. the components) from the specification of their composition. In order to realize the reuse of components effectively in CBSD, it is required to measure the reusability of components. However, due to the black-box nature of components, where the source code is not available, it is difficult to use conventional metrics in component-based development, as these metrics require analysis of source code. In this paper, we survey a few existing component-based reusability metrics. These metrics give a broader view of a component's understandability, adaptability, and portability. The paper also describes an analysis, in terms of quality factors related to reusability, contained in an approach that aids significantly in assessing existing components for reuse.

Keywords: Components, Customizability, Reusability, and Observability.

4847 A Linear Use Case Based Software Cost Estimation Model

Authors: Hasan O. Farahneh, Ayman A. Issa

Abstract:

Software development is moving towards agility, with use cases and scenarios being used for requirements stories. Estimates of software costs are becoming even more important than before, as the effect of delays is much larger in the context of the successive short releases of agile development. Thus, this paper reports on the development of a new linear use case based software cost estimation model applicable in the very early stages of software development and based on a simple metric. Evaluation showed that the accuracy of estimates varies between 43% and 55% of the actual effort of historical test projects. These results outperformed those of well-known models when applied in the same context. Further work is being carried out to improve the performance of the proposed model when considering the effect of non-functional requirements.
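
The abstract does not give the model's coefficients, so as a hedged sketch only: a linear effort model can be fitted to historical projects from a single use case based size metric and evaluated with the magnitude of relative error. All numbers below are made up.

```python
import numpy as np

# Hypothetical historical projects: (use-case-based size metric, actual effort in person-hours).
size   = np.array([12, 20, 27, 35, 41, 55], dtype=float)
effort = np.array([310, 520, 640, 900, 1010, 1400], dtype=float)

# Fit effort = a * size + b by least squares.
a, b = np.polyfit(size, effort, deg=1)

predicted = a * size + b
mre = np.abs(predicted - effort) / effort          # magnitude of relative error per project
print(f"effort ~ {a:.1f} * size + {b:.1f},  MMRE = {mre.mean():.2%}")
```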

Keywords: Metrics, Software Cost Estimation, Use Cases

4846 Identification of Reusable Software Modules in Function Oriented Software Systems using Neural Network Based Technique

Authors: Sonia Manhas, Parvinder S. Sandhu, Vinay Chopra, Nirvair Neeru

Abstract:

The cost of developing software from scratch can be saved by identifying and extracting reusable components from already developed and existing software systems or legacy systems [6]. But the issue of how to identify reusable components from existing systems has remained relatively unexplored. We have used a metric-based approach for characterizing a software module. In the present work, the values of McCabe's Cyclomatic Complexity Measure for complexity measurement, the Regularity Metric, the Halstead Software Science Indicator for volume indication, the Reuse Frequency metric, and the Coupling Metric of the software component are used as input attributes to different types of neural network systems, and the reusability of the software component is calculated. The results are recorded in terms of Accuracy, Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE).
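
A sketch, not the authors' setup: a small neural network regressor fed the five module metrics named above and scored with MAE and RMSE, here on synthetic data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(1)
# Columns: cyclomatic complexity, regularity, Halstead volume, reuse frequency, coupling (synthetic values).
X = rng.uniform(size=(150, 5))
reusability = 1.0 - 0.4 * X[:, 0] - 0.3 * X[:, 4] + 0.3 * X[:, 3] + rng.normal(scale=0.05, size=150)

X_train, X_test = X[:100], X[100:]
y_train, y_test = reusability[:100], reusability[100:]

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0).fit(X_train, y_train)
pred = model.predict(X_test)

mae = mean_absolute_error(y_test, pred)
rmse = np.sqrt(mean_squared_error(y_test, pred))
print(f"MAE={mae:.3f}  RMSE={rmse:.3f}")
```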

Keywords: Software reusability, Neural Networks, MAE, RMSE, Accuracy.

4845 A Similarity Metric for Assessment of Image Fusion Algorithms

Authors: Nedeljko Cvejic, Artur Łoza, David Bull, Nishan Canagarajah

Abstract:

In this paper, we present a novel objective non-reference performance assessment algorithm for image fusion. It takes into account local measurements to estimate how well the important information in the source images is represented by the fused image. The metric is based on the Universal Image Quality Index and uses the similarity between blocks of pixels in the input images and the fused image as the weighting factor for the metric. Experimental results confirm that the values of the proposed metric correlate well with the subjective quality of the fused images, giving a significant improvement over standard measures based on mean squared error and mutual information.
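
A compact sketch of the Universal Image Quality Index between two image blocks, which the metric above uses as a block-level weighting factor; this is a generic UIQI computation, not the authors' full fusion metric.

```python
import numpy as np

def uiqi(x, y, eps=1e-12):
    """Universal Image Quality Index between two equally sized blocks."""
    x = x.astype(float).ravel()
    y = y.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    # Q = 4*cov*mx*my / ((vx + vy)*(mx^2 + my^2)); eps guards against division by zero.
    return 4 * cov * mx * my / ((vx + vy) * (mx**2 + my**2) + eps)

rng = np.random.default_rng(0)
block_a = rng.random((8, 8))
block_b = block_a + rng.normal(scale=0.05, size=(8, 8))   # a slightly distorted copy
print(f"UIQI = {uiqi(block_a, block_b):.3f}")              # close to 1 for similar blocks
```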

Keywords: Fusion performance measures, image fusion, non-reference quality measures, objective quality measures.

4844 A Growing Neural Gas Approach for Evaluating Quality of Software Modules

Authors: Parvinder S. Sandhu, Sandeep Khimta, Kiranpreet Kaur

Abstract:

The prediction of software quality during the development life cycle of a software project helps the development organization make efficient use of the available resources to produce a product of the highest quality. A "whether a module is faulty or not" approach can be used to predict the quality of a software module. There are a number of software quality prediction models described in the literature based upon genetic algorithms, artificial neural networks and other data mining algorithms. One of the promising aspects for quality prediction is based on clustering techniques. Most quality prediction models that are based on clustering techniques make use of the K-means, Mixture-of-Gaussians, Self-Organizing Map, Neural Gas and fuzzy K-means algorithms for prediction. In all these techniques a predefined structure is required, that is, the number of neurons or clusters should be known before the clustering process starts. But in the case of Growing Neural Gas there is no need to predetermine the number of neurons or the topology of the structure to be used; it starts with a minimal neuron structure that is incremented during training until it reaches a user-defined maximum number of clusters. Hence, in this work we have used Growing Neural Gas as the underlying clustering algorithm that produces the initial set of labeled clusters from the training data set, and thereafter this set of clusters is used to predict the quality of the test data set of software modules. The best testing results show 80% accuracy in evaluating the quality of software modules. Hence, the proposed technique can be used by programmers in evaluating the quality of modules during software development.
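
The GNG training itself is too long to reproduce here; as a simplified, hypothetical sketch of only the final prediction step described above, test modules are labelled with the fault label of the nearest cluster prototype produced by an (assumed, already trained) Growing Neural Gas.

```python
import numpy as np

# Assumed output of a trained Growing Neural Gas: prototype vectors in metric space,
# each labelled from the training modules it attracted (all values hypothetical).
prototypes = np.array([[2.0, 10.0], [15.0, 120.0], [7.0, 45.0]])   # e.g. (complexity, LOC/10)
labels = np.array(["not faulty", "faulty", "not faulty"])

def predict(module_metrics):
    """Label a test module by its nearest labelled GNG prototype."""
    d = np.linalg.norm(prototypes - module_metrics, axis=1)
    return labels[np.argmin(d)]

test_modules = np.array([[3.0, 12.0], [14.0, 110.0]])
for m in test_modules:
    print(m, "->", predict(m))
```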

Keywords: Growing Neural Gas, data clustering, fault prediction.

4843 Towards a Suitable and Systematic Approach for Component Based Software Development

Authors: Kuljit Kaur, Parminder Kaur, Jaspreet Bedi, Hardeep Singh

Abstract:

Software crisis refers to the situation in which developers are not able to complete projects within time and budget constraints, and moreover these overscheduled and over-budget projects are of low quality as well. Several methodologies have been adopted from time to time to overcome this situation, and now the focus is on component-based software engineering. In this approach, the emphasis is on reuse of already existing software artifacts. But the results cannot be achieved just by preaching the principles; they need to be practiced as well. This paper highlights some of the very basic elements of this approach, which have to be in place to achieve the desired goals of high-quality, low-cost software products with shorter time to market.

Keywords: Component Model, Software Components, Software Repository, Process Models.

4842 A Four Method Framework for Fighting Software Architecture Erosion

Authors: Sundus Ayyaz, Saad Rehman, Usman Qamar

Abstract:

Software architecture is the basic structure of software that shapes the development and advancement of a software system. Software architecture is also considered a significant tool for the construction of high quality software systems. A clean design leads to the control, value and beauty of software, resulting in its longer life, while a bad design is the cause of architectural erosion, where software evolution completely fails. This paper discusses the occurrence of software architecture erosion and presents a set of methods for the detection, declaration and prevention of architecture erosion. The causes and symptoms of architecture erosion are observed with examples of prescriptive and descriptive architectures, and the practices used to stop this erosion are also discussed by considering different types of software erosion and their effects. Consequently, finding and devising the most suitable approach for fighting software architecture erosion, and in some way reducing its effect, is evaluated and tested on different scenarios.

Keywords: Software Architecture, Architecture Erosion, Prescriptive Architecture, Descriptive Architecture.

4841 Key Frames Extraction for Sign Language Video Analysis and Recognition

Authors: Jaroslav Polec, Petra Heribanová, Tomáš Hirner

Abstract:

In this paper we propose a method for finding video frames representing one sign in the finger alphabet. The method is based on determining hand location, segmentation and the use of standard video quality evaluation metrics. Metric calculation is performed only in regions of interest. A sliding mechanism for finding local extrema and an adaptive threshold based on local averaging are used for key frame selection. The success rate is evaluated by recall, precision and the F1 measure. The method's effectiveness is compared with the metrics applied to all frames. The proposed method is fast, effective and relatively easy to realize by simple input video preprocessing and subsequent use of tools designed for video quality measurement.
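
A toy sketch of the selection mechanism described above: compute a frame-to-frame MSE signal, find local extrema with a sliding window, and keep those exceeding an adaptive threshold derived from local averaging. Region-of-interest handling and hand segmentation are omitted; the frame data here are synthetic, not the sign-language footage used in the paper.

```python
import numpy as np

def key_frames(frames, win=2, scale=1.5):
    """Return indices of candidate key frames from a list of grayscale frames (2-D arrays)."""
    # mse[i] measures the change between frames[i] and frames[i + 1]
    mse = np.array([np.mean((frames[i + 1] - frames[i]) ** 2) for i in range(len(frames) - 1)])
    keys = []
    for i in range(len(mse)):
        window = mse[max(0, i - win): i + win + 1]
        # keep local extrema that exceed an adaptive threshold based on local averaging
        if mse[i] == window.max() and mse[i] > scale * window.mean():
            keys.append(i + 1)
    return keys

rng = np.random.default_rng(0)
frames = [rng.random((32, 32)) * 0.01 for _ in range(20)]   # near-static synthetic video
frames[7] += 0.8                                            # abrupt change, e.g. a new hand posture
frames[14] += 0.8
print(key_frames(frames))
```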

Keywords: Key frame, video, quality, metric, MSE, MSAD, SSIM, VQM, sign language, finger alphabet.

4840 Communication and Quality in Distributed Agile Development: An Empirical Case Study

Authors: R. Green, T. Mazzuchi, S. Sarkani

Abstract:

Through inward perceptions, we intuitively expect distributed software development to increase the risks associated with achieving cost, schedule, and quality goals. To compound this problem, agile software development (ASD) insists that one of the main ingredients of its success is the cohesive communication attributed to collocation of the development team. The following study identified the degree of communication richness needed to achieve comparable software quality (reduced pre-release defects) between distributed and collocated teams. This paper explores the relevance of communication richness in various development phases and its impact on quality. Through examination of a large distributed agile development project, this investigation seeks to understand the levels of communication required within each ASD phase to produce quality results comparable to those achieved by collocated teams. Obviously, a multitude of factors affects the outcome of software projects. However, within distributed agile software development teams, the mode of communication is one of the critical components required to achieve team cohesiveness and effectiveness. As such, this study constructs a distributed agile communication model (DAC-M), for potential application to similar distributed agile development efforts, using a measurement of the suitable level of communication. The results of the study show that less rich communication methods, in the appropriate phase, might be satisfactory to achieve equivalent quality in distributed ASD efforts.

Keywords: agile software development (ASD), distributed software teams, media richness theory, software development.

4839 Predicting the Impact of the Defect on the Overall Environment in Function Based Systems

Authors: Parvinder S. Sandhu, Urvashi Malhotra, E. Ardil

Abstract:

A lot of work has been done on predicting the fault proneness of software systems. But it is the severity of the faults that is more important than the number of faults existing in the developed system, as the major faults matter most to a developer and need immediate attention. In this paper, we try to predict the level of impact of the existing faults in software systems. A neuro-fuzzy based predictor model is applied to NASA's public domain defect dataset, coded in the C programming language. Correlation-based Feature Selection (CFS) evaluates the worth of a subset of attributes by considering the individual predictive ability of each feature along with the degree of redundancy between them; hence, CFS is used for selecting the metrics that are most highly correlated with the severity level of faults. The results are compared with the prediction results of Logistic Model Trees (LMT), earlier quoted as the best technique in [17]. The results are recorded in terms of Accuracy, Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE). The results show that the neuro-fuzzy based model provides relatively better prediction accuracy compared to other models and hence can be used for modeling the level of impact of faults in function based systems.
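
As a brief worked sketch of the CFS heuristic mentioned above: the merit of a subset S with k features is k·r_cf / sqrt(k + k(k-1)·r_ff), where r_cf is the mean feature-class correlation and r_ff the mean feature-feature correlation. The data below are synthetic, not the NASA dataset.

```python
import numpy as np
from itertools import combinations

def cfs_merit(X, y, subset):
    """CFS merit of a feature subset: k*mean(|r_cf|) / sqrt(k + k*(k-1)*mean(|r_ff|))."""
    k = len(subset)
    r_cf = np.mean([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in subset])
    if k == 1:
        r_ff = 0.0
    else:
        r_ff = np.mean([abs(np.corrcoef(X[:, a], X[:, b])[0, 1]) for a, b in combinations(subset, 2)])
    return k * r_cf / np.sqrt(k + k * (k - 1) * r_ff)

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))               # four candidate software metrics (synthetic)
severity = X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=0.5, size=300)   # synthetic severity level

for s in [(0,), (0, 1), (0, 1, 2), (0, 1, 2, 3)]:
    print(s, round(cfs_merit(X, severity, s), 3))
```

The subset with the highest merit would then feed the downstream predictor (here, the neuro-fuzzy model).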

Keywords: Software Metrics, Fuzzy, Neuro-Fuzzy, Software Faults, Accuracy, MAE, RMSE.

4838 Strongly Adequate Software Architecture

Authors: Pradip Peter Dey

Abstract:

Components of a software system may be related in a wide variety of ways. These relationships need to be represented in software architecture in order to develop quality software. In practice, software architecture is immensely challenging, strikingly multifaceted, extravagantly domain based, perpetually changing, rarely cost-effective, and deceptively ambiguous. This paper analyses relations among the major components of software systems and argues for using several broad categories of software architecture for assessment purposes: strongly adequate, weakly adequate and functionally adequate software architectures, among other categories. These categories are intended for formative assessments of architectural designs.

Keywords: Components, Model Driven Architecture, Graphical User Interfaces.

4837 Effective Defect Prevention Approach in Software Process for Achieving Better Quality Levels

Authors: Suma V., T. R. Gopalakrishnan Nair

Abstract:

Defect prevention is the most vital but habitually neglected facet of software quality assurance in any project. If functional at all stages of software development, it can condense the time, overheads and wherewithal entailed to engineer a high quality product. The key challenge of an IT industry is to engineer a software product with minimum post-deployment defects. This effort is an analysis based on data obtained for five selected projects from leading software companies of varying software production competence. The main aim of this paper is to provide information on various methods and practices supporting defect detection and prevention, leading to thriving software generation. The defect prevention technique unearths 99% of defects. Inspection is found to be an essential technique in generating ideal software through enhanced methodologies of aided and unaided inspection schedules. On average, 13% to 15% of the whole project effort is required for inspection and 25% to 30% for testing to achieve 99% to 99.75% defect elimination. A comparison of the end results for the five selected projects between the companies is also presented, throwing light on the possibility of a particular company positioning itself with an appropriate complementary ratio of inspection to testing.

Keywords: Defect Detection and Prevention, Inspections, Software Engineering, Software Process, Testing.

4836 Determination of Water Pollution and Water Quality with Decision Trees

Authors: Çiğdem Bakır, Mecit Yüzkat

Abstract:

With the increasing emphasis on water quality worldwide, the search for, and the market for, new and intelligent monitoring systems has expanded. The current method is the laboratory process, where samples are taken from bodies of water and tests are carried out in laboratories. This method is time-consuming, a waste of manpower and uneconomical. To solve this problem, we used machine learning methods to detect water pollution in our study. We created decision trees with the Orange3 software and tried to determine all the factors that cause water pollution. An automatic prediction model based on water quality was developed by taking many model inputs such as water temperature, pH, transparency, conductivity, dissolved oxygen, and ammonia nitrogen, using machine learning methods. The proposed approach consists of three stages: preprocessing of the data, feature detection and classification. We tried to determine the success of our study with different accuracy metrics, and the results are presented comparatively. In addition, we achieved approximately 98% success with the decision tree.
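
A minimal scikit-learn stand-in for the Orange3 workflow described above; the feature names match those listed in the abstract, but the data values, the labelling rule, and the accuracy it prints are synthetic.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

features = ["temperature", "pH", "transparency", "conductivity", "dissolved_oxygen", "ammonia_nitrogen"]

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, len(features)))
# Synthetic rule: low dissolved oxygen or high ammonia nitrogen => polluted (label 1).
y = ((X[:, 4] < 0.3) | (X[:, 5] > 0.8)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print(f"accuracy = {accuracy_score(y_te, tree.predict(X_te)):.2%}")
```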

Keywords: Decision tree, water quality, water pollution, machine learning.

4835 Using Quality Models to Evaluate National ID Systems: The Case of the UAE

Authors: Ali M. Al-Khouri

Abstract:

This paper presents findings from the evaluation study carried out to review the UAE national ID card software. The paper consults the relevant literature to explain many of the concepts and frameworks discussed herein. The findings of the evaluation work, which was primarily based on the ISO 9126 standard for system quality measurement, highlighted many practical areas that, if taken into account, are argued to increase the chances of success of similar system implementation projects.

Keywords: National ID system, software quality, ISO 9126.

4834 Information Quality Evaluation Framework: Extending ISO 25012 Data Quality Model

Authors: Irfan Rafique, Philip Lew, Maissom Qanber Abbasi, Zhang Li

Abstract:

The World Wide Web, coupled with the ever-increasing sophistication of online technologies and software applications, puts greater emphasis on the need for even more sophisticated and consistent quality requirements modeling than traditional software applications. Web sites and Web applications (WebApps) are becoming more information-driven and content-oriented, raising concerns about their information quality (InQ). The consistent and consolidated modeling of InQ requirements for WebApps at different stages of the life cycle still poses a challenge. This paper proposes an approach to specify InQ requirements for WebApps by reusing and extending the ISO 25012:2008(E) data quality model. We also discuss the learnability aspect of information quality for WebApps. The proposed ISO 25012 based InQ framework is a step towards a standardized approach to evaluate WebApps' InQ.

Keywords: Data Quality Model, Information learnability, Information Quality, Web applications.

4833 Weigh-in-Motion Data Analysis Software for Developing Traffic Data for Mechanistic Empirical Pavement Design

Authors: M. A. Hasan, M. R. Islam, R. A. Tarefder

Abstract:

Currently, there are few user-friendly Weigh-in-Motion (WIM) data analysis software packages available which can produce traffic input data for the recently developed AASHTOWare pavement Mechanistic-Empirical (ME) design software. However, these packages have only rudimentary Quality Control (QC) processes. Therefore, they cannot properly deal with erroneous WIM data. As pavement performance is highly sensitive to the quality of WIM data, it is highly recommended to use a more refined QC process on raw WIM data to get a good result. This study develops user-friendly software which can produce traffic input for the ME design software. This software takes the raw data (class and weight data) collected from the WIM station and processes it with a sophisticated QC procedure. Traffic data such as traffic volume, traffic distribution, axle load spectra, etc. can be obtained from this software and can directly be used in the ME design software.
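
One illustrative piece of the processing described above is building an axle load spectrum: binning axle weights for a vehicle class into load intervals and expressing each bin as a percentage of axles. The sketch below assumes already cleaned single-axle records; the bin edges and weights are hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical cleaned single-axle weights (kips) for one vehicle class after QC.
axle_weights = np.array([8.2, 10.5, 11.1, 12.7, 14.3, 15.0, 16.8, 18.4, 10.9, 13.6])

# Load bins, e.g. 3-kip intervals from 6 to 21 kips (assumed, not from the paper).
bins = np.arange(6, 22, 3)
counts, edges = np.histogram(axle_weights, bins=bins)
spectrum = 100.0 * counts / counts.sum()            # percentage of axles per load bin

for lo, hi, pct in zip(edges[:-1], edges[1:], spectrum):
    print(f"{lo:>4.0f}-{hi:<4.0f} kips: {pct:5.1f} %")
```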

Keywords: Weigh-in-motion, software, axle load spectra, traffic distribution, AASHTOWare.

4832 Further the Effectiveness of Software Testability Measure

Authors: Liang Zhao, Feng Wang, Bo Deng, Bo Yang

Abstract:

Software testability has been proposed to address the problem of the increasing cost of testing and the quality of software. A testability measure provides a quantified way to denote the testability of software. Since the 1990s, many testability measurement models have been proposed to address the problem. By discussing the contradiction between domain testability and the domain range ratio (DRR), a new testability measure, semantic fault distance, is proposed and its validity is discussed.
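
As a small hedged illustration of the domain range ratio referenced above: DRR is the ratio of the cardinality of a function's input domain to the cardinality of its output range, and a high DRR suggests lower testability because many distinct inputs (and faults) collapse onto the same outputs. The function below is a toy example, not the paper's measure.

```python
def drr(func, domain):
    """Domain range ratio: |domain| / |range| for a function enumerated over a finite domain."""
    outputs = {func(x) for x in domain}
    return len(domain) / len(outputs)

# Toy example: a saturating function loses information, so its DRR is high.
domain = range(-100, 101)
print(drr(lambda x: x * 2, domain))                 # 1.0 - information preserving
print(drr(lambda x: max(-5, min(5, x)), domain))    # ~18.3 - many inputs map to few outputs
```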

Keywords: Software testability, DRR, Domain testability.

4831 Bee Optimized Fuzzy Geographical Routing Protocol for VANET

Authors: P. Saravanan, T. Arunkumar

Abstract:

Vehicular Adhoc Network (VANET) is a new technology which aims to ensure intelligent inter-vehicle communications and seamless internet connectivity, leading to improved road safety, essential alerts, and access to comfort and entertainment. VANET operations are hindered by the uncertain mobility of mobile nodes (vehicles). Routing algorithms use metrics to evaluate which path is best for packets to travel. Metrics like path length (hop count), delay, reliability, bandwidth, and load determine the optimal route. The proposed scheme exploits link quality, traffic density, and intersections as routing metrics to determine the next hop. This study enhances the Geographical Routing Protocol (GRP) using fuzzy controllers, while the rules are optimized with Bee Swarm Optimization (BSO). Simulation results are compared to those of conventional GRP.

Keywords: Bee Swarm Optimization (BSO), Geographical Routing Protocol (GRP), Vehicular Adhoc Network (VANET).

4830 Software Industrialization in Systems Integration

Authors: Matthias Minich, B. Harriehausen-Muehlbauer, C. Wentzel

Abstract:

Today's economy is in permanent change, causing mergers, acquisitions and cooperations between enterprises. As a consequence, process adaptations and realignments result in systems integration and software development projects. Processes and procedures to execute such projects are still reliant on the craftsmanship of highly skilled workers. A generally accepted, industrialized production, characterized by high efficiency and quality, seems inevitable. In spite of this, current concepts of software industrialization are aimed at traditional software engineering and do not consider the characteristics of systems integration. The present work points out these particularities and discusses the applicability of existing industrial concepts in the systems integration domain. Consequently, it defines further areas of research necessary to bring the field of systems integration closer to an industrialized production, allowing higher efficiency, quality and return on investment.

Keywords: Software Industrialization, Systems Integration, Software Product Lines, Component Based Development, Model Driven Development.

4829 Identifying Mitigation Plans in Reducing Usability Risk Using Delphi Method

Authors: Jayaletchumi T. Sambantha Moorthy, Suhaimi bin Ibrahim, Mohd Naz’ri Mahrin

Abstract:

Most quality models have defined usability as a significant factor that leads to improved product acceptability, increased user satisfaction, improved product reliability, and also to financially benefitting companies. Usability is also the best factor for balancing the technical and human aspects of a software product, which is an important aspect in defining quality during the software development process. A usability risk consists of risk factors that could impact the usability of a software product, thereby contributing to negative user experiences and causing possible software product failure. Hence, it is important to mitigate and reduce usability risks in the software development process itself. By managing possible usability risks in the software development process, the failure of software products can be reduced. Therefore, this research uses the Delphi method to identify mitigation plans for reducing potential usability risks. The Delphi method is conducted with seven experts from the fields of risk management and software development.

Keywords: Usability, Usability Risk, Risk Management, Risk Mitigation, Delphi Method.

4828 Defects in Open Source Software: The Role of Online Forums

Authors: Faheem Ahmed, Piers Campbell, Ahmad Jaffar, Luiz Capretz

Abstract:

Free and open source software is gaining popularity at an unprecedented rate of growth. Organizations, despite some concerns about quality, have been using it for various purposes. One of the biggest concerns about free and open source software is post-release software defects and their fixing. Many believe that there is no appropriate support available to fix the bugs. On the contrary, some believe that, due to the active involvement of internet users in online forums, these forums become a major source of communicating the identification and fixing of defects in open source software. The research model of this empirical investigation establishes and studies the relationship between open source software defects and online public forums. The results of this empirical study provide evidence about the realities behind the software defect myths of open source software. We used a dataset consisting of 616 open source software projects covering a broad range of categories to study the research model of this investigation. The results of this investigation show that online forums play a significant role in identifying and fixing the defects in open source software.

Keywords: Open source software, software engineering, software defect management, empirical software engineering.

4827 A Cognitive Measurement of Complexity and Comprehension for Object-Oriented Code

Authors: Amit Kumar Jakhar, Kumar Rajnish

Abstract:

Inherent complexity is one of the difficult issues in the software engineering field. Further, it is said that there are no physical laws or standard guidelines that suit the design of different types of software. Hence, to make software engineering a mature engineering discipline like others, it is necessary that it has its own theoretical frameworks and laws. Software design and development is a human effort which takes a lot of time and considers various parameters for the successful completion of the software. Cognitive informatics plays an important role in understanding the essential characteristics of software. The aim of this work is to consider the fundamental characteristics of the source code of object-oriented software, i.e. complexity and understandability. The complexity of the programs is analyzed with the help of important attributes extracted from the source code, which are further utilized to evaluate the understandability factor. The aforementioned characteristics are analyzed on the basis of 16 C++ programs by distributing them to forty MCA students. They all tried to understand the source code of the given program, and the mean time is taken as the actual time needed to understand the program. For validation of this work, Briand's framework is used, and the presented metric is also evaluated comparatively with an existing metric, which proves its robustness.
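
A hedged sketch of the cognitive-weight idea referenced above, using cognitive weights for basic control structures as commonly cited in the literature (sequence 1, branch 2, iteration 3, and so on); both the weights and the structure counts below are assumptions for illustration, not the paper's definition.

```python
# Assumed cognitive weights for basic control structures (values as commonly cited
# in cognitive-informatics literature, not taken from this paper).
COGNITIVE_WEIGHT = {
    "sequence": 1,
    "branch": 2,        # if-then-else
    "case": 3,          # switch / case
    "iteration": 3,     # for / while loops
    "call": 2,          # function / method call
    "recursion": 3,
}

def cognitive_complexity(structure_counts):
    """Sum of weighted counts of basic control structures found in a piece of source code."""
    return sum(COGNITIVE_WEIGHT[s] * n for s, n in structure_counts.items())

# Hypothetical counts extracted from one C++ member function.
counts = {"sequence": 5, "branch": 2, "iteration": 1, "call": 3}
print(cognitive_complexity(counts))    # 5*1 + 2*2 + 1*3 + 3*2 = 18
```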

Keywords: Software metrics, object-oriented, complexity, cognitive weight, understandability, basic control structures.

4826 Bayesian Belief Networks for Test Driven Development

Authors: Vijayalakshmy Periaswamy S., Kevin McDaid

Abstract:

Testing accounts for the major percentage of technical contribution in the software development process. Typically, it consumes more than 50 percent of the total cost of developing a piece of software. The selection of software tests is a very important activity within this process to ensure that the software reliability requirements are met. Generally, tests are run to achieve maximum coverage of the software code, and very little attention is given to the achieved reliability of the software. Using an existing methodology, this paper describes how to use Bayesian Belief Networks (BBNs) to select unit tests based on their contribution to the reliability of the module under consideration. In particular, the work examines how the approach can enhance test-first development by assessing the quality of test suites resulting from this development methodology and providing insight into additional tests that can significantly reduce the achieved reliability. In this way the method can produce an optimal selection of inputs and the order in which the tests are executed to maximize the software reliability. To illustrate this approach, a belief network is constructed for a modern software system, incorporating expert opinion, expressed through probabilities of the relative quality of the elements of the software, and the potential effectiveness of the software tests. The steps involved in constructing the Bayesian network are explained, as is a method to allow for the test suite resulting from test-driven development.
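
A toy sketch of encoding expert opinion about module quality and test outcomes in a Bayesian Belief Network, assuming the pgmpy library is available; the structure, probabilities, and node names are invented for illustration and are not the paper's network.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Two-node network: latent module quality -> observed unit-test outcome.
model = BayesianNetwork([("Quality", "TestPass")])

cpd_quality = TabularCPD("Quality", 2, [[0.3], [0.7]])            # P(low)=0.3, P(high)=0.7 (expert prior)
cpd_test = TabularCPD("TestPass", 2,
                      [[0.60, 0.05],    # P(fail | low quality), P(fail | high quality)
                       [0.40, 0.95]],   # P(pass | low quality), P(pass | high quality)
                      evidence=["Quality"], evidence_card=[2])

model.add_cpds(cpd_quality, cpd_test)
assert model.check_model()

# Posterior belief in module quality after observing a passing test.
posterior = VariableElimination(model).query(["Quality"], evidence={"TestPass": 1})
print(posterior)
```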

Keywords: Software testing, Test Driven Development, Bayesian Belief Networks.

4825 Quantitative Quality Assessment of Microscopic Image Mosaicing

Authors: Alessandro Bevilacqua, Alessandro Gherardi, Filippo Piccinini

Abstract:

The mosaicing technique has been employed in more and more application fields, from entertainment to scientific ones. In the latter case, the final evaluation is often still left to human beings, who visually assess the quality of the mosaic. Many times, a lack of objective measurements in microscopic mosaicing may prevent the mosaic from being used as a starting image for further analysis. In this work we analyze three different metrics and indexes, in the domains of signal analysis, image analysis and visual quality, to measure the quality of different aspects of the mosaicing procedure, such as registration errors and visual quality. As the case study we consider the mosaicing algorithm we developed. The experiments have been carried out by considering mosaics with very different features: histological samples, which are made of detailed and contrasted images, and live stem cells, which show very low contrast and low detail levels.

Keywords: Mosaicing, quality assessment, microscopy, stem cells.

4824 Quantifying the Stability of Software Systems via Simulation in Dependency Networks

Authors: Weifeng Pan

Abstract:

The stability of a software system is one of the most important quality attributes affecting maintenance effort. Many techniques have been proposed to support the analysis of software stability at the architecture, file, and class levels of software systems, but little effort has been made at the feature (i.e., method and attribute) level. Moreover, the assumptions that the existing techniques are based on often do not match practice to a certain degree. Considering that, in this paper we present a novel metric, Stability of Software (SoS), to measure the stability of object-oriented software systems by software change propagation analysis, using simulation in software dependency networks at the feature level. The approach is evaluated by case studies on eight open source Java programs using different software structures (one employing design patterns versus one that does not) for the same object-oriented program. The results of the case studies validate the effectiveness of the proposed metric. The approach has been fully automated by a tool written in Java.
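
A simplified sketch of measuring stability by simulating change propagation over a feature-level dependency network, assuming the networkx library; the graph, the propagation probability, and the stability formula below are illustrative simplifications, not the paper's exact SoS definition.

```python
import random
import networkx as nx

# Toy dependency network: an edge a -> b means feature a depends on feature b,
# so a change in b may ripple back to a.
deps = nx.DiGraph([("m1", "m2"), ("m1", "m3"), ("m2", "m4"), ("m3", "m4"), ("m5", "m1")])

def propagate(start, p=0.5, rng=random):
    """Simulate one change starting at `start`; return the set of affected features."""
    affected, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for dependent in deps.predecessors(node):   # features that depend on `node`
            if dependent not in affected and rng.random() < p:
                affected.add(dependent)
                frontier.append(dependent)
    return affected

# Stability estimate: 1 minus the average fraction of the system hit by a random change.
rng = random.Random(0)
nodes = list(deps.nodes)
runs = [propagate(rng.choice(nodes), rng=rng) for _ in range(1000)]
sos = 1.0 - sum(len(r) for r in runs) / (len(runs) * deps.number_of_nodes())
print(f"SoS = {sos:.2f}")
```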

Keywords: Software stability, change propagation, design pattern, software maintenance, object-oriented (OO) software.

4823 Performance Management Guide for Research and Development Process

Authors: Heejung Lee

Abstract:

Performance management seems to be essential in the business area and is also an exciting topic. Despite significant and myriad research efforts, performance management guidance today, as a rigorous approach, is still in an immature state, and metrics are often selected based on intuitive and heuristic approaches. On the R&D side, the difficulty of guiding proper performance management is increasing even more due to the natural characteristics of R&D, such as unique or domain-specific problems. In our approach, we present an R&D performance management guide considering various characteristics of the R&D side: performance evaluation objectives, dimensions, metrics, and uncertainties of the R&D sector.

Keywords: Performance management, R&D, metrics.

4822 An Architectural Model of Multi-Agent Systems for Student Evaluation in Collaborative Game Software

Authors: Monica Hoedltke Pietruchinski, Andrey Ricardo Pimentel

Abstract:

The teaching of computer programming for beginners has generally been considered a difficult and challenging task. Several methodologies and research tools have been developed; however, the difficulty of teaching still remains. Our work integrates the state of the art in teaching programming with game software and further provides metrics for the evaluation of student performance in the collaborative activity of playing games. This paper aims to present a multi-agent system architecture to be incorporated into educational collaborative game software for teaching programming that monitors, evaluates and encourages collaboration by the participants. A literature review has been made on the concepts of collaborative learning, multi-agent systems, collaborative games and techniques for teaching programming using these concepts simultaneously.

Keywords: Architecture of multi-agent systems, collaborative evaluation, collaboration assessment, gamifying educational software.

4821 A State-Of-The-Art Review on Web Services Adaptation

Authors: M. Velasco, D. While, P. Raju, J. Krasniewicz, A. Amini, L. Hernandez-Munoz

Abstract:

Web service adaptation involves the creation of adapters that solve Web service incompatibilities known as mismatches. Since the importance of Web services adaptation is increasing because of the frequent implementation and use of online Web services, this paper presents a literature review of Web services to investigate the main methods of adaptation, their theoretical underpinnings and the metrics used to measure adapter performance. Eighteen publications were reviewed independently by two researchers. We found that adaptation techniques are needed to solve different types of problems that may arise due to incompatibilities in Web service interfaces, including protocols, messages, data and semantics, that affect the interoperability of the services. Although adapters are non-invasive methods that can improve Web services interoperability, and there are current approaches for service adaptation, there is, however, not yet one solution that fits all types of mismatches. Our results also show that only a few research projects incorporate theoretical frameworks and that metrics to measure adapters' performance are very limited. We conclude that further research on software adaptation should improve current adaptation methods in different layers of service interoperability, and that an adaptation theoretical framework that incorporates a theoretical underpinning and measures of qualitative and quantitative performance needs to be created.

Keywords: Web services adapters, software adaptation, web services mismatches, web services interoperability.
