Suitability of Requirements Abstraction Model (RAM) Requirements for High-Level System Testing
Authors: Naeem Muhammad, Yves Vandewoude, Yolande Berbers, Robert Feldt
Abstract:
The Requirements Abstraction Model (RAM) helps manage abstraction in requirements by organizing them at four levels (product, feature, function, and component). The RAM is adaptable and can be tailored to the needs of different organizations. Because software requirements are an important source of information for developing high-level tests, organizations considering adopting the RAM need to know how suitable RAM requirements are for developing such tests. To investigate this suitability, test cases were developed, analyzed, and graded for twenty randomly selected requirements. The requirements were selected from the requirements document of a Course Management System, a web-based software system that supports teachers and students in performing course-related tasks. This paper describes the results of the requirements document analysis. The results show that requirements at lower levels of the RAM are suitable for developing executable tests, whereas it is hard to develop such tests from requirements at higher levels.
Keywords: Market-driven requirements engineering, requirements abstraction model, requirements abstraction, system testing.
Digital Object Identifier (DOI): https://doi.org/10.5281/zenodo.1077251