Design Systems and the Need for a Usability Method: Assessing the Fitness of Components and Interaction Patterns in Design Systems Using Atmosphere Methodology

Authors: P. Johansson, S. Mardh


The present study proposes Atmosphere, a usability test method for assessing the fitness of components and interaction patterns in design systems. The method covers the user’s perception of the system’s components, the efficiency of the logic of its interaction patterns, perceived ease of use, and the user’s understanding of the intended outcome of interactions. These aspects are assessed by combining measures of first impression, visual affordance, and expectancy. The method was applied to a design system developed for the design of an electronic health record system, in a study involving 15 healthcare personnel. It could be concluded that the Atmosphere method provides tangible data that enable human-computer interaction practitioners to analyze and categorize components and patterns based on perceived usability, the success rate of identifying interactive components, and the success rate of understanding the intended outcome of components and interaction patterns.

Keywords: atomic design, atmosphere methodology, design system, expectancy testing, first impression testing, usability testing, visual affordance testing
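The abstract describes aggregating per-participant measures into per-component success rates (identification of interactive components, understanding of intended outcomes) alongside first-impression ratings. The sketch below is a minimal, hypothetical illustration of that kind of aggregation; it is not the authors' implementation, and all names, fields, and the 1-7 rating scale are assumptions.

```python
# Hypothetical sketch (not the authors' tooling): aggregating
# Atmosphere-style per-component measures into the summary figures
# the abstract describes. All identifiers and scales are illustrative.
from dataclasses import dataclass


@dataclass
class Trial:
    component: str
    identified_interactive: bool  # visual-affordance check passed
    outcome_understood: bool      # expectancy check passed
    first_impression: int         # illustrative 1-7 rating


def summarize(trials):
    """Per-component success rates and mean first-impression rating."""
    totals = {}
    for t in trials:
        s = totals.setdefault(
            t.component, {"n": 0, "ident": 0, "underst": 0, "fi": 0}
        )
        s["n"] += 1
        s["ident"] += t.identified_interactive
        s["underst"] += t.outcome_understood
        s["fi"] += t.first_impression
    return {
        c: {
            "identification_rate": s["ident"] / s["n"],
            "understanding_rate": s["underst"] / s["n"],
            "mean_first_impression": s["fi"] / s["n"],
        }
        for c, s in totals.items()
    }


trials = [
    Trial("primary-button", True, True, 6),
    Trial("primary-button", True, False, 5),
    Trial("date-picker", False, False, 3),
]
print(summarize(trials)["primary-button"]["identification_rate"])  # 1.0
```

A practitioner could then rank or categorize components by these rates, e.g. flagging any component whose identification rate falls below a chosen threshold for redesign.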


