Search results for: Metrics calibration
409 On Determining the Most Effective Technique Available in Software Testing
Authors: Qasim Zafar, Matthew Anderson, Esteban Garcia, Steven Drager
Abstract:
Software failures can be an enormous detriment to people's lives and cost millions of dollars to repair when they are unexpectedly encountered in the wild. Although a significant portion of the software development lifecycle and its resources is dedicated to testing, software failures remain a relatively frequent occurrence. The evaluation of testing effectiveness therefore remains at the forefront of ensuring high-quality software, and software metrics play a critical role in providing valuable insights into quantifiable objectives for assessing the level of assurance and confidence in the system. As the selection of appropriate metrics can be an arduous process, the goal of this paper is to shed light on the significance of software metrics by examining a range of testing techniques and metrics and identifying key areas for improvement. In doing so, this paper presents a method to compare the effectiveness of testing techniques with heterogeneous output metrics. Additionally, through this investigation, readers will gain a deeper understanding of how metrics can help drive informed decision-making on delivering high-quality software and facilitate continuous improvement in testing practices.
Keywords: Software testing, software metrics, testing effectiveness, black box testing, random testing, adaptive random testing, combinatorial testing, fuzz testing, equivalence partitioning, boundary value analysis, white box testing.
408 Fast Wavelength Calibration Algorithm for Optical Spectrum Analyzers
Authors: Thomas Fuhrmann
Abstract:
In this paper an algorithm for fast wavelength calibration of Optical Spectrum Analyzers (OSAs) using low-power reference gas spectra is proposed. Existing OSAs need a low-noise reference spectrum for precise detection of the reference extreme values, and generating this spectrum requires costly hardware with high optical power. With this new wavelength calibration algorithm it is possible to use a noisy reference spectrum, so hardware costs can be cut. The algorithm filters the reference spectrum and extracts the key information by segmenting it and finding the local minima and maxima. Afterwards, the slope and offset of a linear correction function that best matches the measured and theoretical spectra are found by correlating the measured minima with the stored ones. With this algorithm a reliable wavelength referencing of an OSA can be implemented on a microcontroller with a calculation time of less than one second.
Keywords: correlation, gas reference, optical spectrum analyzer, wavelength calibration
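As a rough illustration of the steps named in the abstract, the following minimal Python sketch filters a noisy reference spectrum, detects local minima, matches them to stored theoretical gas lines, and fits the linear correction. Names and parameters (window length, minima spacing) are illustrative assumptions, and nearest-neighbour matching plus a least-squares fit stands in for the paper's correlation step.

```python
# Minimal sketch of the described calibration idea, assuming a noisy
# reference gas spectrum sampled on a raw wavelength axis (numpy arrays).
# Function and variable names are illustrative, not from the paper.
import numpy as np
from scipy.signal import savgol_filter, argrelmin

def calibrate_wavelength(raw_wl, power_dbm, theoretical_minima_nm):
    """Return (slope, offset) of a linear wavelength correction."""
    raw_wl = np.asarray(raw_wl)
    # 1) Filter the noisy spectrum so local extrema become detectable.
    smooth = savgol_filter(power_dbm, window_length=31, polyorder=3)
    # 2) Segment and find local minima (absorption lines of the gas).
    idx = argrelmin(smooth, order=20)[0]
    measured_minima = raw_wl[idx]
    # 3) Match each measured minimum to the nearest stored theoretical line.
    matched = [theoretical_minima_nm[np.argmin(np.abs(theoretical_minima_nm - m))]
               for m in measured_minima]
    # 4) Least-squares fit of true = slope * measured + offset.
    slope, offset = np.polyfit(measured_minima, np.asarray(matched), 1)
    return slope, offset
```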
407 Identifying the Kinematic Parameters of Hexapod Machine Tool
Authors: M. M. Agheli, M. J. Nategh
Abstract:
The Hexapod Machine Tool (HMT) is a parallel robot mostly based on the Stewart platform. Identification of the kinematic parameters of an HMT is an important step of the calibration procedure. In this paper an algorithm is presented for identifying the kinematic parameters of an HMT using an inverse kinematics error model. Based on this algorithm, the calibration procedure is simulated. As the first step of the algorithm, measurement configurations with maximum observability are selected for a robust calibration. The errors occurring in various configurations are illustrated graphically. It is shown that the boundaries of the workspace should be searched for the maximum observability of errors. The importance of using configurations with sufficient observability in calibrating hexapod machine tools is verified by a trial calibration with two different groups of randomly selected configurations: one group is selected to have sufficient observability and the other disregards the observability criterion. Simulation results confirm the validity of the proposed identification algorithm.
Keywords: Calibration, Hexapod Machine Tool (HMT), Inverse Kinematics Error Model, Observability, Parallel Robot, Parameter Identification.
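One standard way to score a candidate set of measurement configurations, sketched below in Python, is an observability index computed from the singular values of the stacked identification Jacobian. Whether the paper uses this particular index is an assumption; the Jacobian from the inverse-kinematics error model is taken as given. A pose set can then be chosen to maximize the index, and, consistent with the abstract, workspace-boundary poses tend to score highest.

```python
# Hedged sketch: observability index from the singular values of the
# stacked identification Jacobian. The Jacobians themselves come from
# the HMT inverse-kinematics error model and are assumed precomputed.
import numpy as np

def observability_index(jacobians):
    """jacobians: list of (k x m) identification Jacobians, one per pose.
    Assumes enough poses that the stacked J has at least m rows."""
    J = np.vstack(jacobians)          # stack all measurement configurations
    n = len(jacobians)                # number of poses
    m = J.shape[1]                    # number of kinematic parameters
    sigma = np.linalg.svd(J, compute_uv=False)
    # Geometric mean of the singular values, normalized by sqrt(n);
    # a zero singular value signals an unobservable parameter direction.
    return sigma[:m].prod() ** (1.0 / m) / np.sqrt(n)
```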
406 A Survey on Metric of Software Cognitive Complexity for OO design
Authors: A. Aloysius, L. Arockiam
Abstract:
In the modern era, the biggest challenge facing the software industry is the arrival of new technologies, so software engineers are gearing themselves up to meet and manage change in large software systems. They also find it difficult to deal with software cognitive complexity. In the last few years many metrics have been proposed to measure the cognitive complexity of software. This paper aims at a comprehensive survey of metrics of software cognitive complexity. Some classic and efficient software cognitive complexity metrics, such as Class Complexity (CC), Weighted Class Complexity (WCC), Extended Weighted Class Complexity (EWCC), Class Complexity due to Inheritance (CCI) and Average Complexity of a program due to Inheritance (ACI), are discussed and analyzed. The comparison and the relationships among these metrics of software complexity are also presented.
Keywords: Software Metrics, Software Complexity, Cognitive Informatics, Cognitive Complexity, Software Measurement
405 Method of Parameter Calibration for Error Term in Stochastic User Equilibrium Traffic Assignment Model
Authors: Xiang Zhang, David Rey, S. Travis Waller
Abstract:
The Stochastic User Equilibrium (SUE) model is a widely used traffic assignment model in transportation planning and is regarded as more advanced than the Deterministic User Equilibrium (DUE) model. However, the performance of the SUE model depends on its error term parameter. The objective of this paper is to propose a systematic method for determining an appropriate error term parameter value for the SUE model. First, the significance of the parameter is explored through a numerical example. Second, the parameter calibration method is developed based on the logit-based route choice model. The calibration process is realized through multiple nonlinear regression, using sequential quadratic programming combined with the least squares method. Finally, a case analysis is conducted to demonstrate the application of the calibration process and to validate the better performance of the SUE model calibrated by the proposed method compared to SUE models under other parameter values and to the DUE model.
Keywords: Parameter calibration, sequential quadratic programming, Stochastic User Equilibrium, traffic assignment, transportation planning.
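A minimal Python sketch of such a calibration loop is given below, assuming a placeholder function sue_assign(theta) that runs a logit-based SUE assignment and returns predicted link flows. The squared-error objective and the SLSQP solver stand in for the paper's combination of least squares with sequential quadratic programming; bounds and the starting point are illustrative.

```python
# Illustrative calibration loop: fit the error-term (dispersion)
# parameter theta so assigned link flows match observations.
# sue_assign and observed_flows are placeholders, not from the paper.
import numpy as np
from scipy.optimize import minimize

def calibrate_theta(sue_assign, observed_flows, theta0=1.0):
    def sse(x):
        theta = x[0]
        predicted = sue_assign(theta)          # run one SUE assignment
        return np.sum((predicted - observed_flows) ** 2)  # least squares
    # Sequential quadratic programming (SciPy's SLSQP variant).
    res = minimize(sse, x0=[theta0], method="SLSQP",
                   bounds=[(1e-3, None)])      # theta must stay positive
    return res.x[0]
```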
404 A Metric Framework for Analysis of Quality of Object Oriented Design
Authors: Amandeep Kaur, Satwinder Singh, Dr. K. S. Kahlon
Abstract:
This paper studies the impact of OO design on software quality characteristics, such as defect density and rework, by means of experimental validation. Encapsulation, inheritance, polymorphism, reusability, data hiding and message passing are the major attributes of an object-oriented system, and these attributes can act as indicators when evaluating the quality of such a system. Metrics are the well-known quantifiable approach to expressing any attribute. Hence, in this paper we formulate a framework of metrics representing the attributes of an object-oriented system. Empirical data are collected from three different projects based on the object-oriented paradigm to calculate the metrics.
Keywords: Object Oriented, Software Metrics, Methods, Attributes, Cohesion, Coupling, Inheritance.
403 Noise Performance Optimization of a Fast Wavelength Calibration Algorithm for OSAs
Authors: Thomas Fuhrmann
Abstract:
A new fast correlation algorithm for calibrating the wavelength of Optical Spectrum Analyzers (OSAs) was introduced in [1]. The minima of acetylene gas spectra were measured and correlated with stored theoretical data [2], making it possible to find the correct wavelength calibration data using a noisy reference spectrum. First tests showed good algorithmic performance for gas line spectra with high noise. In this article extensive performance tests are reported to validate the noise resistance of this algorithm. The filter and correlation parameters of the algorithm were optimized for improved noise performance. With these parameters the performance of this wavelength calibration was simulated to predict the resulting wavelength error in real OSA systems. Long-term simulations were made to evaluate the performance of the algorithm over the lifetime of a real OSA.
Keywords: correlation, gas reference, optical spectrum analyzer, wavelength calibration
402 Alternative Methods to Rank the Impact of Object Oriented Metrics in Fault Prediction Modeling using Neural Networks
Authors: Kamaldeep Kaur, Arvinder Kaur, Ruchika Malhotra
Abstract:
The aim of this paper is to rank the impact of Object Oriented (OO) metrics in fault prediction modeling using Artificial Neural Networks (ANNs). Past studies on the empirical validation of object oriented metrics as fault predictors using ANNs have focused on the predictive quality of neural networks versus standard statistical techniques. In this empirical study we turn our attention to the capability of ANNs to rank the impact of these explanatory metrics on fault proneness. In the ANN data analysis approach, there is no clear method of ranking the impact of individual metrics. Five ANN-based techniques that rank object oriented metrics in predicting the fault proneness of classes are studied: i) the overall connection weights method, ii) Garson's method, iii) the partial derivatives method, iv) the input perturbation method, and v) the classical stepwise method. We develop and evaluate different prediction models based on the rankings of the metrics by the individual techniques. The models based on the overall connection weights and partial derivatives methods have been found to be most accurate.
Keywords: Artificial Neural Networks (ANNs), Backpropagation, Fault Prediction Modeling.
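As an illustration of the first of these techniques, the Python sketch below computes overall connection weights for a single-hidden-layer scikit-learn network and ranks the input metrics by the magnitude of their aggregated weights. The helper names and network setup are assumptions, not the paper's exact configuration.

```python
# Minimal sketch of the overall connection weights method (one hidden
# layer, single output), applied to a fitted scikit-learn MLP.
import numpy as np
from sklearn.neural_network import MLPClassifier

def connection_weight_ranking(mlp: MLPClassifier, metric_names):
    W_ih = mlp.coefs_[0]            # (n_inputs, n_hidden) input->hidden
    W_ho = mlp.coefs_[1].ravel()    # (n_hidden,) hidden->output
    # Importance of input i: sum over hidden units of w_ih * w_ho.
    importance = W_ih @ W_ho
    order = np.argsort(-np.abs(importance))   # most influential first
    return [(metric_names[i], float(importance[i])) for i in order]
```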
401 Performance Management Guide for Research and Development Process
Authors: Heejung Lee
Abstract:
Performance management seems to be essential in the business area and is also an exciting topic. Despite myriad significant research efforts, performance management guidance as a rigorous approach is still in an immature state, and metrics are often selected based on intuitive and heuristic approaches. On the R&D side, the difficulty of providing proper performance management guidance increases even further due to the natural characteristics of R&D, such as unique or domain-specific problems. In our approach, we present an R&D performance management guide that considers various characteristics of the R&D side: performance evaluation objectives, dimensions, metrics, and uncertainties of the R&D sector.
Keywords: Performance management, R&D, metrics.
400 Application of Artificial Neural Network for Predicting Maintainability Using Object-Oriented Metrics
Authors: K. K. Aggarwal, Yogesh Singh, Arvinder Kaur, Ruchika Malhotra
Abstract:
The importance of software quality is increasing, leading to the development of new sophisticated techniques that can be used to construct models for predicting quality attributes. One such technique is the Artificial Neural Network (ANN). This paper examines the application of ANNs to software quality prediction using Object-Oriented (OO) metrics. Quality estimation includes estimating the maintainability of software. The dependent variable in our study was maintenance effort. The independent variables were principal components of eight OO metrics. The results showed that the Mean Absolute Relative Error (MARE) of the ANN model was 0.265. Thus we found that the ANN method is useful in constructing software quality models.
Keywords: Software quality, Measurement, Metrics, Artificial neural network, Coupling, Cohesion, Inheritance, Principal component analysis.
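A hedged Python sketch of this kind of pipeline follows: principal components of the OO metrics feed a small neural network that predicts maintenance effort, scored with MARE. The dataset variables, retained-variance threshold, and network size are illustrative assumptions rather than the paper's settings.

```python
# Sketch: PCA over OO metrics -> ANN regression of maintenance effort,
# evaluated with the Mean Absolute Relative Error (MARE).
# X (rows of 8 OO metrics) and y (maintenance effort) are assumed given.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

def mare(actual, predicted):
    return np.mean(np.abs(actual - predicted) / actual)

def fit_and_score(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = make_pipeline(StandardScaler(),
                          PCA(n_components=0.95),   # keep 95% of variance
                          MLPRegressor(hidden_layer_sizes=(8,),
                                       max_iter=5000, random_state=0))
    model.fit(X_tr, y_tr)
    return mare(y_te, model.predict(X_te))
```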
399 Optical Fiber Sensor for Detection of Carbon Nanotubes
Authors: C. I. L. Justino, A. C. Freitas, T. A. P. Rocha-Santos, A. C. Duarte
Abstract:
This work describes the development of an optical fiber (OF) sensor for the detection and quantification of single-walled carbon nanotubes in aqueous solutions. The developed OF sensor displays a compact design, and it requires less expensive materials and equipment as well as a low volume of sample (0.2 mL). The methodology was also validated by comparing its analytical performance with that of a standard methodology based on ultraviolet-visible spectroscopy. The developed OF sensor follows the general SDS calibration proposed for OF sensors, a more suitable calibration fitting compared with classical calibrations.
Keywords: Optical fiber sensor, single-walled carbon nanotubes, SDS calibration model, UV-Vis spectroscopy
398 A Reusability Evaluation Model for OO-Based Software Components
Authors: Parvinder S. Sandhu, Hardeep Singh
Abstract:
The requirement to improve software productivity has promoted research on software metric technology. There are metrics for identifying the quality of reusable components, but the function that makes use of these metrics to find the reusability of software components is still not clear. These metrics, if identified in the design phase or even in the coding phase, can help us to reduce rework by improving the quality of reuse of the component and hence improve productivity due to a probabilistic increase in the reuse level. The CK metric suite is the most widely used set of metrics for object-oriented (OO) software; we critically analyzed the CK metrics, tried to remove the inconsistencies, and devised a framework of metrics to obtain the structural analysis of OO-based software components. A neural network can learn new relationships from new input data and can be used to refine fuzzy rules to create a fuzzy adaptive system. Hence, a neuro-fuzzy inference engine can be used to evaluate the reusability of an OO-based component using its structural attributes as inputs. In this paper, an algorithm is proposed in which the tuned WMC, DIT, NOC, CBO and LCOM values of the OO software component are given as inputs to the neuro-fuzzy system, and the output is obtained in terms of reusability. The developed reusability model has produced high-precision results, as expected by the human experts.
Keywords: CK-Metric, ID3, Neuro-fuzzy, Reusability.
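The sketch below is a plain fuzzy-inference stand-in for the neuro-fuzzy engine described above: triangular memberships over normalized CK values and two toy rules combined by weighted-average defuzzification. In the actual model the neural component would tune the memberships and rule weights; everything here is an illustrative assumption.

```python
# Toy fuzzy inference over normalized CK metric values in [0, 1].
# Membership shapes and the two rules are illustrative only.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def reusability(wmc, dit, noc, cbo, lcom):
    vals = dict(wmc=wmc, dit=dit, noc=noc, cbo=cbo, lcom=lcom)
    low = {k: tri(v, -1.0, 0.0, 1.0) for k, v in vals.items()}  # "low" peaks at 0
    high = {k: 1.0 - low[k] for k in low}                       # "high" peaks at 1
    # Rule 1: low complexity, low coupling, cohesive class -> reusable.
    r_high = min(low["wmc"], low["cbo"], low["lcom"])
    # Rule 2: deep inheritance and high coupling -> hard to reuse.
    r_low = min(high["dit"], high["cbo"])
    # Weighted-average defuzzification over the two rule consequents (1, 0).
    return (r_high * 1.0 + r_low * 0.0) / max(r_high + r_low, 1e-9)
```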
397 Map Matching Performance under Various Similarity Metrics for Heterogeneous Robot Teams
Authors: M. C. Akay, A. Aybakan, H. Temeltas
Abstract:
Aerial and ground robots offer different advantages in different missions. Aerial robots can move quickly and obtain a different view of the area, but such vehicles cannot carry heavy payloads. Unmanned ground vehicles (UGVs), on the other hand, are slow-moving vehicles that can carry heavier payloads than unmanned aerial vehicles (UAVs). In this context, we investigate the performance of various similarity metrics in providing a common map for a Heterogeneous Robot Team (HRT) in complex environments. Using the Lidar Odometry and Octree Mapping technique, local 3D maps of the environment are gathered. In order to obtain a common map for the HRT, information-theoretic similarity metrics are exploited. All of these similarity metrics gave accurate results within allowable simulation time and can be used in different types of applications. For a heterogeneous multi-robot team, these methods can be used to match different types of maps.
Keywords: Common maps, heterogeneous robot team, map matching, information-theoretic similarity metrics.
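As a hedged example of one information-theoretic similarity measure that could serve in such map matching, the Python sketch below estimates the mutual information between two aligned occupancy grids from their joint histogram. The abstract does not state which metrics the study actually used, so this is an illustration of the family, not the paper's method.

```python
# Mutual information between two aligned 2D maps, estimated from a
# joint histogram of cell values (e.g. occupancy probabilities).
import numpy as np

def mutual_information(map_a, map_b, bins=16):
    """map_a, map_b: aligned 2D arrays of the same shape."""
    joint, _, _ = np.histogram2d(map_a.ravel(), map_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint distribution
    px, py = pxy.sum(axis=1), pxy.sum(axis=0) # marginals
    nz = pxy > 0                              # avoid log(0)
    ratio = pxy[nz] / (px[:, None] * py[None, :])[nz]
    return float(np.sum(pxy[nz] * np.log(ratio)))
```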
396 A Complexity Measure for Java Bean based Software Components
Authors: Sandeep Khimta, Parvinder S. Sandhu, Amanpreet Singh Brar
Abstract:
Traditional software product and process metrics are neither suitable nor sufficient for measuring the complexity of software components, which is ultimately necessary for quality and productivity improvement within organizations adopting CBSE. Researchers have proposed a wide range of complexity metrics for software systems. However, these metrics are not sufficient for components and component-based systems, being restricted to module-oriented and object-oriented systems. This study proposes to measure the complexity of JavaBean software components as a reflection of their quality, so that a component can be adapted accordingly to make it more reusable. The proposed metric involves only the design issues of the component and does not consider packaging and deployment complexity. In this way, the complexity of software components can be kept within certain limits, which in turn helps in enhancing quality and productivity.
Keywords: JavaBean Components, Complexity, Metrics, Validation.
395 A Systematic Method for Performance Analysis of SOA Applications
Authors: Marzieh Asgarnezhad, Ramin Nasiri, Abdollah Shahidi
Abstract:
The successful implementation of Service-Oriented Architecture (SOA) is not confined to Information Technology systems and requires changes across the whole enterprise. In order to align IT and business, the enterprise requires adequate and measurable methods. The adoption of SOA creates new problems with regard to measuring and analysing performance. In fact, the enterprise should investigate to what extent the development of services will increase the value of the business. Every business needs to measure the extent of SOA adoption against the goals of the enterprise. Moreover, precise performance metrics and their combination with advanced evaluation methodologies should be defined as a solution. The aim of this paper is to present a systematic methodology for designing a measurement system at the technical and business levels, so that (1) measurement metrics are determined precisely, and (2) the results are analysed by mapping the identified metrics to the measurement tools.
Keywords: Service-oriented architecture, metrics, performance, evaluation.
394 A Critical Survey of Reusability Aspects for Component-Based Systems
Authors: Arun Sharma, Rajesh Kumar, P. S. Grover
Abstract:
The last decade has shown that the object-oriented concept by itself is not powerful enough to cope with the rapidly changing requirements of ongoing applications. Component-based systems achieve flexibility by clearly separating the stable parts of systems (i.e. the components) from the specification of their composition. In order to realize the reuse of components effectively in CBSD, it is necessary to measure the reusability of components. However, due to the black-box nature of components, where the source code is not available, it is difficult to use conventional metrics in component-based development, as these metrics require analysis of source code. In this paper, we survey a few existing component-based reusability metrics. These metrics give a broader view of a component's understandability, adaptability, and portability. The paper also describes the analysis, in terms of quality factors related to reusability, contained in an approach that aids significantly in assessing existing components for reusability.
Keywords: Components, Customizability, Reusability, Observability.
393 A Comparative Analysis of Fuzzy, Neuro-Fuzzy and Fuzzy-GA Based Approaches for Software Reusability Evaluation
Authors: Parvinder Singh Sandhu, Dalwinder Singh Salaria, Hardeep Singh
Abstract:
Software reusability is a primary attribute of software quality. There are metrics for identifying the quality of reusable components, but the function that makes use of these metrics to find the reusability of software components is still not clear. These metrics, if identified in the design phase or even in the coding phase, can help us to reduce rework by improving the quality of reuse of the component and hence improve productivity due to a probabilistic increase in the reuse level. In this paper, we have devised a framework of metrics that takes McCabe's Cyclomatic Complexity measure for complexity measurement, the Regularity metric, the Halstead Software Science indicator for volume, the Reuse Frequency metric and the Coupling metric values of the software component as input attributes and calculates the reusability of the software component. A comparative analysis of the fuzzy, neuro-fuzzy and fuzzy-GA approaches to evaluating the reusability of software components is performed, and the fuzzy-GA results outperform the other approaches. The developed reusability model has produced high-precision results, as expected by the human experts.
Keywords: Software Reusability, Software Metrics, Neural Networks, Genetic Algorithm, Fuzzy Logic.
392 Development of an Automatic Calibration Framework for Hydrologic Modelling Using Approximate Bayesian Computation
Authors: A. Chowdhury, P. Egodawatta, J. M. McGree, A. Goonetilleke
Abstract:
Hydrologic models are increasingly used as tools to predict stormwater quantity and quality from urban catchments. However, due to a range of practical issues, most models produce gross errors in simulating complex hydraulic and hydrologic systems. Difficulty in finding a robust approach for model calibration is one of the main issues. Though automatic calibration techniques are available, they are rarely used in common commercial hydraulic and hydrologic modelling software, e.g. MIKE URBAN. This is partly due to the need for a large number of parameters and large datasets in the calibration process. To overcome this practical issue, a framework for automatic calibration of a hydrologic model was developed on the R platform and is presented in this paper. The model was developed based on the time-area conceptualization. Four calibration parameters, namely initial loss, reduction factor, time of concentration and time-lag, were considered as the primary set of parameters. Using these parameters, automatic calibration was performed using Approximate Bayesian Computation (ABC). ABC is a simulation-based technique for performing Bayesian inference when the likelihood is intractable or computationally expensive to compute. To test its performance and usefulness, the technique was used to simulate three small catchments on the Gold Coast. For comparison, simulation outcomes for the same three catchments from the commercial modelling software MIKE URBAN were used. The graphical comparison shows strong agreement of the MIKE URBAN results within the upper and lower 95% credible intervals of the posterior predictions obtained via ABC. Statistical validation of the posterior runoff predictions using the coefficient of determination (CD), root mean square error (RMSE) and maximum error (ME) was found reasonable for the three study catchments. The main benefit of using ABC over MIKE URBAN is that ABC provides a posterior distribution for the runoff flow prediction, and therefore the associated uncertainty in predictions can be obtained. In contrast, MIKE URBAN provides just a point estimate. Based on the results of the analysis, it appears that the developed ABC framework performs well for automatic calibration.
Keywords: Automatic calibration framework, approximate Bayesian computation, hydrologic and hydraulic modelling, MIKE URBAN software, R platform.
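Although the paper's framework was built on the R platform, the core ABC rejection step is easy to sketch. The Python version below draws the four named parameters from illustrative priors, runs a placeholder simulate_runoff model, and keeps the draws whose simulated hydrograph falls within a tolerance of the observations; priors, distance measure, and tolerance are all assumptions.

```python
# ABC rejection sampling for the four time-area calibration parameters.
# simulate_runoff stands in for the hydrologic model; observed is the
# measured hydrograph as a numpy array. Priors/units are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def abc_calibrate(simulate_runoff, observed, n_draws=20_000, eps=0.1):
    accepted = []
    for _ in range(n_draws):
        theta = {
            "initial_loss": rng.uniform(0.0, 5.0),             # mm
            "reduction_factor": rng.uniform(0.0, 1.0),
            "time_of_concentration": rng.uniform(5.0, 120.0),  # min
            "time_lag": rng.uniform(0.0, 60.0),                # min
        }
        sim = simulate_runoff(**theta)
        # Accept when the simulated hydrograph is close to observations.
        dist = np.sqrt(np.mean((sim - observed) ** 2)) / observed.std()
        if dist < eps:
            accepted.append(theta)
    return accepted  # empirical posterior sample -> credible intervals
```

The accepted draws form the posterior sample from which the 95% credible intervals mentioned in the abstract can be computed, which is exactly the uncertainty information a point-estimate calibration cannot provide.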
391 Examining the Performance of Three Multiobjective Evolutionary Algorithms Based on Benchmarking Problems
Authors: Konstantinos Metaxiotis, Konstantinos Liagkouras
Abstract:
The objective of this study is to examine the performance of three well-known multiobjective evolutionary algorithms for solving optimization problems. The first algorithm is the Non-dominated Sorting Genetic Algorithm-II (NSGA-II), the second is the Strength Pareto Evolutionary Algorithm 2 (SPEA-2), and the third is the Multiobjective Evolutionary Algorithm based on Decomposition (MOEA/D). The examined multiobjective algorithms are analyzed and tested on the ZDT set of test functions using three performance metrics. The results indicate that NSGA-II performs better than the other two algorithms according to the three performance metrics.
Keywords: MOEAs, Multiobjective optimization, ZDT test functions, performance metrics.
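For readers unfamiliar with the benchmark, the Python sketch below defines the ZDT1 test function and one widely used performance metric, generational distance. The study does not name its three metrics here, so GD is only an example of the kind of measure involved.

```python
# ZDT1 test function and generational distance (mean distance from an
# obtained front to the true Pareto front).
import numpy as np

def zdt1(x):
    """x: array in [0,1]^n. Returns the two objectives (f1, f2)."""
    f1 = x[0]
    g = 1.0 + 9.0 * x[1:].mean()
    f2 = g * (1.0 - np.sqrt(f1 / g))
    return f1, f2

def generational_distance(front, true_front):
    """front, true_front: (k, 2) arrays of objective vectors."""
    d = np.linalg.norm(front[:, None, :] - true_front[None, :, :], axis=2)
    return d.min(axis=1).mean()

# True ZDT1 Pareto front: f2 = 1 - sqrt(f1), attained when g = 1.
f1 = np.linspace(0.0, 1.0, 500)
true_front = np.column_stack([f1, 1.0 - np.sqrt(f1)])
```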
390 Automated Process Quality Monitoring with Prediction of Fault Condition Using Measurement Data
Authors: Hyun-Woo Cho
Abstract:
Detection of incipient abnormal events is important for improving the safety and reliability of machine operations and reducing losses caused by failures. Improper set-up or alignment of parts often leads to severe problems in many machines. The construction of prediction models for predicting faulty conditions is quite essential in making decisions on when to perform machine maintenance. This paper presents a multivariate calibration monitoring approach based on the statistical analysis of machine measurement data. The calibration model is used to predict two faulty conditions from historical reference data. The approach utilizes genetic algorithm (GA) based variable selection, and we evaluate the predictive performance of several prediction methods using real data. The results show that the calibration model based on supervised probabilistic principal component analysis (SPPCA) yielded the best performance in this work. By adopting a proper variable selection scheme in calibration models, the prediction performance can be improved by excluding non-informative variables from the model building steps.
Keywords: Prediction, operation monitoring, on-line data, nonlinear statistical methods, empirical model.
389 Evaluation of Video Quality Metrics and Performance Comparison on Contents Taken from Most Commonly Used Devices
Authors: Pratik Dhabal Deo, Manoj P.
Abstract:
With the increasing number of social media users, the amount of video content available has also significantly increased. The number of smartphone users is currently at its peak, and many people increasingly use their smartphones as their main photography and recording devices. There have been many developments in the field of video quality assessment in recent years, and more research on various other aspects of video and image is being done. Datasets that contain a huge number of videos from different high-end devices make it difficult to analyze the performance of the metrics on content from the most used devices, even if they contain content taken in poor lighting conditions using lower-end devices. These devices face many distortions due to various factors, since the spectrum of content recorded on them is huge. In this paper, we present an analysis of objective Video Quality Assessment (VQA) metrics on content taken only from the most used devices and their performance on it, focusing on full-reference metrics. To carry out this research, we created a custom dataset containing a total of 90 videos taken from the three most commonly used devices: an Android smartphone, an iOS smartphone and a Digital Single-Lens Reflex (DSLR) camera. To the videos taken on each of these devices, the six most common types of distortion that users face were applied, in addition to the already existing H.264 compression, based on four reference videos. Each of the six applied distortions has three levels of degradation. The five most popular VQA metrics were evaluated on this dataset, and the highest and lowest values of each metric on each distortion were recorded. It was found that blur is the artifact on which most of the metrics did not perform well. To understand the results better, the amount of blur in the dataset was calculated, and an additional evaluation of the metrics was done using the High Efficiency Video Coding (HEVC) codec, the successor of H.264 compression, on the camera that proved to be the sharpest among the devices. The results show that as the resolution increases, the performance of the metrics tends to become more accurate. The best performing metric is VQM, with very few inconsistencies and inaccurate results when the compression applied is H.264; when the compression applied is HEVC, the Structural Similarity (SSIM) metric and Video Multimethod Assessment Fusion (VMAF) performed significantly better.
Keywords: Distortion, metrics, recording, frame rate, video quality assessment.
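Of the metrics named, SSIM is the easiest to reproduce; the sketch below computes a mean per-frame SSIM between a reference and a distorted video in Python. Reading frames through imageio (with its ffmpeg plugin) is an assumption of the sketch, and VQM and VMAF need their own tools (e.g. Netflix's vmaf), so they are not shown.

```python
# Mean per-frame SSIM between two videos of identical length/resolution.
import numpy as np
import imageio.v3 as iio
from skimage.color import rgb2gray
from skimage.metrics import structural_similarity

def mean_ssim(reference_path, distorted_path):
    scores = []
    for ref, dist in zip(iio.imiter(reference_path),
                         iio.imiter(distorted_path)):
        ref_g, dist_g = rgb2gray(ref), rgb2gray(dist)  # compare luma
        scores.append(structural_similarity(ref_g, dist_g, data_range=1.0))
    return float(np.mean(scores))
```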
388 A Metric-Set and Model Suggestion for Better Software Project Cost Estimation
Authors: Murat Ayyıldız, Oya Kalıpsız, Sırma Yavuz
Abstract:
Software project effort estimation is frequently seen as complex and expensive for individual software engineers. Software production is in a crisis: it suffers from excessive costs and is often out of control. It has been suggested that software production is out of control because we do not measure, and you cannot control what you cannot measure. During the last decade, a number of studies on cost estimation have been conducted. Metric-set selection has a vital role in software cost estimation studies, yet its importance has been ignored, especially in neural network based studies. In this study we explore the reasons for those disappointing results and implement different neural network models using an augmented set of new metrics. The results obtained are compared with previous studies that used traditional metrics. To be able to make comparisons, two types of data have been used. The first part of the data is taken from the Constructive Cost Model (COCOMO'81), which is commonly used in previous studies, and the second part is collected according to the new metrics at a leading international company in Turkey. The accuracy of the selected metrics and the data samples is verified using statistical techniques. The model presented here is based on the Multi-Layer Perceptron (MLP). Another difficulty associated with cost estimation studies is the fact that data collection requires time and care. To make a more thorough use of the samples collected, the k-fold cross validation method is also implemented. It is concluded that, as long as an accurate and quantifiable set of metrics is defined and measured correctly, neural networks can be applied in software cost estimation studies with success.
Keywords: Software Metrics, Software Cost Estimation, Neural Network.
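A sketch of the validation scheme named above follows: a Multi-Layer Perceptron scored under k-fold cross validation. The error criterion shown, MMRE (mean magnitude of relative error), is a common choice in cost estimation but is an assumption here, as are the network size and the X/y variables.

```python
# MLP effort estimation with k-fold cross validation, scored by MMRE.
# X: metric-set matrix (numpy array); y: actual effort per project.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def kfold_mmre(X, y, k=10):
    mmres = []
    for train, test in KFold(n_splits=k, shuffle=True, random_state=0).split(X):
        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(16,),
                                           max_iter=5000, random_state=0))
        model.fit(X[train], y[train])
        pred = model.predict(X[test])
        mmres.append(np.mean(np.abs(y[test] - pred) / y[test]))
    return float(np.mean(mmres))   # average relative error over folds
```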
387 A Genetic Algorithm Based Classification Approach for Finding Fault Prone Classes
Authors: Parvinder S. Sandhu, Satish Kumar Dhiman, Anmol Goyal
Abstract:
Fault-proneness of a software module is the probability that the module contains faults. A correlation exists between the fault-proneness of software and measurable attributes of the code (i.e. the static metrics) and of the testing (i.e. the dynamic metrics). Early detection of fault-prone software components enables verification experts to concentrate their time and resources on the problem areas of the software system under development. This paper introduces Genetic Algorithm based software fault prediction models built on Object-Oriented metrics. The contribution of this paper is that metric values of the JEdit open source software are used to generate rules for classifying software modules into the categories of faulty and non-faulty modules, and the rules are then empirically validated. The results show that the Genetic Algorithm approach can be used for finding fault proneness in object oriented software components.
Keywords: Genetic Algorithms, Software Fault, Classification, Object Oriented Metrics.
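The Python toy below illustrates the flavour of such rule evolution under stated assumptions: each GA individual is a vector of thresholds over the OO metrics, a module is flagged faulty when any metric exceeds its threshold, and fitness is training accuracy. The paper's actual rule encoding and GA operators may differ.

```python
# Toy GA evolving threshold rules for faulty / non-faulty classification.
# X: (n_modules, n_metrics) numpy array; y: boolean, True = faulty.
import numpy as np

rng = np.random.default_rng(0)

def classify(thresholds, X):
    return (X > thresholds).any(axis=1)          # True = predicted faulty

def evolve(X, y, pop_size=50, generations=100, mut_sigma=0.1):
    lo, hi = X.min(axis=0), X.max(axis=0)
    pop = rng.uniform(lo, hi, size=(pop_size, X.shape[1]))
    for _ in range(generations):
        fitness = np.array([(classify(t, X) == y).mean() for t in pop])
        # Tournament selection: keep the better of random pairs.
        i, j = rng.integers(pop_size, size=(2, pop_size))
        parents = np.where((fitness[i] > fitness[j])[:, None], pop[i], pop[j])
        # Uniform crossover with the reversed parent array, then mutation.
        mask = rng.random(parents.shape) < 0.5
        children = np.where(mask, parents, parents[::-1])
        children += rng.normal(0.0, mut_sigma * (hi - lo), children.shape)
        pop = np.clip(children, lo, hi)
    scores = [(classify(t, X) == y).mean() for t in pop]
    return pop[int(np.argmax(scores))]           # best rule found
```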
386 Achieving Success in NPD Projects
Authors: Ankush Agrawal, Nadia Bhuiyan
Abstract:
The new product development (NPD) literature emphasizes the importance of introducing new products on the market for continuing business success. New products are responsible for employment, economic growth, technological progress, and high standards of living. Therefore, the study of NPD and the processes through which they emerge is important. The goal of our research is to propose a framework of critical success factors, metrics, and tools and techniques for implementing metrics for each stage of the new product development (NPD) process. An extensive literature review was undertaken to investigate decades of studies on NPD success and how it can be achieved. These studies were scanned for common factors for firms that enjoyed success of new products on the market. The paper summarizes NPD success factors, suggests metrics that should be used to measure these factors, and proposes tools and techniques to make use of these metrics. This was done for each stage of the NPD process, and brought together in a framework that the authors propose should be followed for complex NPD projects. While many studies have been conducted on critical success factors for NPD, these studies tend to be fragmented and focus on one or a few phases of the NPD process.
Keywords: New product development, performance, critical success factors, framework.
385 The Influence of Audio on Perceived Quality of Segmentation
Authors: Silvio R. R. Sanches, Bianca C. Barbosa, Beatriz R. Brum, Cléber G. Corrêa
Abstract:
In order to evaluate the quality of a segmentation algorithm, researchers use subjective or objective metrics. Although subjective metrics are more accurate than objective ones, objective metrics do not require user feedback to test an algorithm; they require subjective experiments only during their development. Subjective experiments typically display to users some videos (generated from frames with segmentation errors) that simulate the environment of an application domain. This user feedback is crucial information for metric definition. In the subjective experiments applied to develop some state-of-the-art metrics used to test segmentation algorithms, the videos displayed during the experiments did not contain audio. Audio is an essential component in applications such as videoconferencing and augmented reality. If audio influences the user's perception, using only videos without audio in subjective experiments can compromise the efficiency of an objective metric generated using data from these experiments. This work aims to identify whether audio influences the user's perception of segmentation quality in background substitution applications with audio. The proposed approach used a subjective method based on formal video quality assessment methods. The results showed that audio influences the quality of segmentation perceived by a user.
Keywords: Background substitution, influence of audio, segmentation evaluation, segmentation quality.
384 Prediction of Reusability of Object Oriented Software Systems using Clustering Approach
Authors: Anju Shri, Parvinder S. Sandhu, Vikas Gupta, Sanyam Anand
Abstract:
In the literature, there are metrics for identifying the quality of reusable components, but the framework that makes use of these metrics to precisely predict the reusability of software components still needs to be worked out. These reusability metrics, if identified in the design phase or even in the coding phase, can help us to reduce rework by improving the quality of reuse of the software component and hence improve productivity due to a probabilistic increase in the reuse level. As the CK metric suite is the most widely used set of metrics for extracting the structural features of object oriented (OO) software, in this study the tuned CK metric suite, i.e. WMC, DIT, NOC, CBO and LCOM, is used to obtain the structural analysis of OO-based software components. An algorithm is proposed in which the tuned metric values of the OO software component are given as inputs to a K-Means clustering system, and a decision tree is formed under 10-fold cross validation of the data to evaluate the component in terms of a linguistic reusability value. The developed reusability model has produced high-precision results, as desired.
Keywords: CK-Metric, Decision Tree, K-Means, Reusability.
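A compact Python sketch of the described pipeline follows, under stated assumptions: three clusters stand in for linguistic reusability levels, K-Means groups the tuned CK vectors, and a decision tree trained on the cluster labels is checked with 10-fold cross validation.

```python
# K-Means clustering of tuned CK metric vectors, followed by a decision
# tree validated with 10-fold CV. The cluster count and the eventual
# mapping of clusters to {low, medium, high} reusability are assumptions.
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

def reusability_labels(X, n_levels=3):
    """X: rows of tuned (WMC, DIT, NOC, CBO, LCOM) values."""
    km = KMeans(n_clusters=n_levels, n_init=10, random_state=0).fit(X)
    labels = km.labels_                  # cluster id per component
    tree = DecisionTreeClassifier(random_state=0)
    scores = cross_val_score(tree, X, labels, cv=10)   # 10-fold CV
    return labels, scores.mean()
```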
383 Auto-Calibration and Optimization of Large-Scale Water Resources Systems
Authors: Arash Parehkar, S. Jamshid Mousavi, Shoubo Bayazidi, Vahid Karami, Laleh Shahidi, Arash Azaranfar, Ali Moridi, M. Shabakhti, Tayebeh Ariyan, Mitra Tofigh, Kaveh Masoumi, Alireza Motahari
Abstract:
Water resource systems modeling has been a constant challenge throughout human history. As methodological innovation evolves alongside computer science, researchers are likely to confront ever more complex and larger water resources systems due to new challenges regarding increased water demands, climate change and human interventions, socio-economic concerns, and environmental protection and sustainability. In this research, an automatic calibration scheme was applied to Gilan's large-scale water resource model using mathematical programming. The water resource model's calibration was developed in order to attune the unknown water return flows from demand sites in the complex Sefidroud irrigation network and other related areas. The calibration procedure was validated by comparing several past gauged river outflows from the system with model results. The calibration results are reasonable, providing a rational insight into the system. Subsequently, the unknown optimized parameters were used in a basin-scale linear optimization model able to evaluate the system's performance against a reduced-inflow scenario in the future. Results showed an acceptable match between predicted and observed outflows from the system at selected hydrometric stations. Moreover, an efficient operating policy was determined for the Sefidroud dam, leading to minimum water shortage in the reduced-inflow scenario.
Keywords: Auto-calibration, Gilan, Large-Scale Water Resources, Simulation.
382 Effect of Testing Device Calibration on Liquid Limit Assessment
Authors: M. O. Bayram, H. B. Gencdal, N. O. Fercan, B. Basbug
Abstract:
The liquid limit, which is used as a measure of soil strength, can be determined by the Casagrande and fall-cone testing methods. The two methods diverge from each other chiefly in terms of operator dependency. The Casagrande method, which is applied according to the ASTM D4318-17 standard, may give misleading results, especially if the calibration process is not performed well. In this study, to reveal the effect of calibration of the drop height and of the amount of soil paste placed in the Casagrande cup, a series of tests was carried out by the multipoint method as specified in the ASTM standard. The tests include the combinations of 6 mm, 8 mm, 10 mm, and 12 mm drop heights and under-filled, half-filled, and full-filled Casagrande cups with kaolin samples. It was observed that during successive tests the drop height of the cup deteriorated; hence the device was recalibrated before and after each test to ensure the accuracy of the results. Besides, the tests with under-filled and full-filled samples revealed lower liquid limit values at higher drop heights than at lower ones. For the half-filled samples, the liquid limit values did not change at all as the drop height increased, which demonstrates the purpose of the standard specifications.
Keywords: Calibration, Casagrande cup method, drop height, kaolin, liquid limit, placing form.
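For reference, the multipoint computation itself is simple. A minimal Python sketch under ASTM D4318 conventions fits the flow curve (water content versus the logarithm of blow count) and reads the liquid limit at 25 blows; the example data are invented.

```python
# Multipoint liquid-limit computation for the Casagrande device:
# linear flow curve in semi-log space, evaluated at N = 25 blows.
import numpy as np

def liquid_limit(blow_counts, water_contents_pct):
    """Multipoint method per ASTM D4318 conventions."""
    slope, intercept = np.polyfit(np.log10(blow_counts),
                                  water_contents_pct, 1)
    return slope * np.log10(25.0) + intercept   # LL at 25 blows

# Example: three trials bracketing 25 blows (invented values).
print(liquid_limit([18, 24, 31], [43.2, 41.5, 39.9]))
```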
381 A Robust Salient Region Extraction Based on Color and Texture Features
Authors: Mingxin Zhang, Zhaogan Lu, Junyi Shen
Abstract:
In current research reports, salient regions are usually defined as those regions that present the main meaningful or semantic contents. However, there are no uniform saliency metrics that describe the saliency of implicit image regions. Most common metrics take as salient those regions that have many abrupt changes or some unpredictable characteristics, but such metrics fail to detect salient, useful regions with flat textures. In fact, according to human semantic perception, color and texture distinctions are the main characteristics that distinguish different regions. Thus, we present a novel saliency metric coupled with color and texture features, and its corresponding salient region extraction methods. In order to evaluate the saliency values of implicit regions in an image, three main colors and multi-resolution Gabor features are used for the color and texture features, respectively. For each region, its saliency value is the total sum of its Euclidean distances to the other regions in the color and texture spaces. A specially synthesized image and several practical images with main salient regions are used to evaluate the performance of the proposed saliency metric against several common metrics, i.e., scale saliency, wavelet transform modulus maxima point density, and importance index based metrics. Experimental results verified that the proposed saliency metric achieves more robust performance than those common saliency metrics.
Keywords: salient regions, color and texture features, image segmentation, saliency metric
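The metric itself reduces to a pairwise-distance computation. The Python sketch below shows it under the assumption that each region's color and texture descriptor has already been extracted into a single feature vector; only the saliency scoring is shown.

```python
# Saliency of each region as the sum of its Euclidean distances to all
# other regions in the combined color+texture feature space.
import numpy as np

def region_saliency(features):
    """features: (n_regions, d) array of color+texture descriptors
    (e.g. three dominant colors plus multi-resolution Gabor responses)."""
    diff = features[:, None, :] - features[None, :, :]
    dist = np.linalg.norm(diff, axis=2)      # pairwise Euclidean distances
    return dist.sum(axis=1)                  # saliency score per region

# Regions whose score exceeds a chosen threshold are extracted as salient.
```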
380 Designing Software Quality Measurement System for Telecommunication Industry Using Object-Oriented Technique
Authors: Nor Fazlina Iryani Abdul Hamid, Mohamad Khatim Hasan
Abstract:
A number of software quality measurement systems have been implemented over the past few years, but none of them focuses on the telecommunication industry. The software quality measurement system for the telecommunication industry is a system that can calculate the quality value of measured software with a total focus on the telecommunication industry. Before designing the system, quality factors, quality attributes and quality metrics were identified based on a literature review and a survey. Then, using the identified quality factors, attributes and metrics, a quality model for the telecommunication industry was constructed. Each identified quality metric has its own formula. The quality value of the system is measured based on the quality metrics and aggregated by referring to the quality model, which classifies the quality level of the software based on the Net Satisfaction Index (NSI). The system was designed using an object-oriented approach in a web-based environment. The existence of a software quality measurement system is thus important to both developers and users in order to produce high-quality software products for the telecommunication industry.
Keywords: Software Quality, Quality Measurement, Object-oriented Approach, Net satisfaction Index.