Qualitative Research Coding & Mind Maps
“Coding is a heuristic (from the Greek, meaning “to discover”) – an exploratory problem-solving technique without specific formulas to follow.” – Richards & Morse, 2007, p. 137, via The Coding Manual for Qualitative Researchers

One of the immense privileges of curating the treasure trove of articles we receive for The Testing Planet is staying ahead of the curve of developments in the software testing craft. Some time before the Christmas break, I had the opportunity to review John Stevenson’s article on Qualitative Research Coding and Software Testing, and was immediately struck by how useful it would be in my own situation.

What I was looking for was a way of conducting UAT sessions using exploratory techniques. I wanted to build an approach that would help me to manage a large team of business users and record what they wanted to test. I also wanted to capture outcomes, issues, bugs and questions, while tracing coverage in some way.

I had experienced a couple of disappointments while trialling Google Drive, Team Foundation Server, Microsoft Test Manager, and Trello (combined with Google Drive) as test management tools for some user acceptance testing I was facilitating. They really hadn’t worked out as I had planned, and I was back at the drawing board. It was around this point that I received and perused John’s article. “Aha!” said the eager context-driven tester inside. “This is the approach you’ve been looking for!”
The Coding Pyramid (from Chris Hann’s Techniques & Tips for Qualitative Researchers)
We scrapped Trello and moved to a purely exploratory, Qualitative Coding/Memoing approach for our User Acceptance Testing. Following the concepts learned in John’s article, we started to track our testing in terms of user stories – sticking loosely to the areas we already knew needed covering, but allowing ourselves to be guided by our experience with both the technology and testing, and by the people who actually needed to use the system.

Level one – initial coding
Level one coding is, as the name suggests, the point at which you begin thinking about appropriate codes for your data. Our level one codes (in relation to the Coding Pyramid) ended up looking like the below, basically consisting of single-line testing missions with some codes indicating what areas were being covered by our testing:

As an EP admin I want to add assignments to an Education Programme and see who has completed those assignments (Initial code – level 1)
| assignments | education programme | delegate | contact |

As an EP admin I want to see whether a module has been completed successfully or not based on the related assignments (Initial code – level 1)
| education programme | events | booking | delegate | sessions | options | assignments |

As an EP admin I want to be able to create a new Education Programme in CRM, along with all of the associated modules, workshops, assignments etc. (Initial code – level 1)
| education programme | events | booking | delegate | sessions | options | assignments |

Hopefully the user stories themselves are self-explanatory, at least in their intent if not their implementation. The three missions above might form a single testing session spanning 2–3 hours and covering the (mainly functional) areas identified by the codes: assignments | education programme | delegate | contact | sessions | events | booking and so on.

During the testing we also identified other codes that I haven’t included in the examples, to keep them relatively straightforward. Para-functional aspects of the system like performance and security appeared elsewhere in my test plan. User experience (UX) was a code that appeared on a fairly regular basis though, and one that would emerge further into the process as a theme to be acted upon.

Level two – focused coding

Moving onto level two is where we started to bring mind mapping into play.
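Before any mapping, it can help to see the coded missions as plain data. Here is a minimal Python sketch – mission titles abbreviated, codes taken from the level-one examples above – that tallies how often each code appears across missions, which is one crude way of reading coverage:

```python
from collections import Counter

# Level-one coded missions: each is a single-line testing mission
# plus the codes (functional areas) the testing touched.
missions = [
    {"mission": "Add assignments to an Education Programme and see completions",
     "codes": ["assignments", "education programme", "delegate", "contact"]},
    {"mission": "See whether a module completed based on related assignments",
     "codes": ["education programme", "events", "booking", "delegate",
               "sessions", "options", "assignments"]},
    {"mission": "Create a new Education Programme in CRM with modules etc.",
     "codes": ["education programme", "events", "booking", "delegate",
               "sessions", "options", "assignments"]},
]

# Count how many missions touch each code - a rough coverage signal.
coverage = Counter(code for m in missions for code in m["codes"])

for code, count in coverage.most_common():
    print(f"{code}: {count}")
```

A tally like this is no substitute for the reflection described below, but it does hint at which areas are being visited repeatedly and which only once.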
It occurred to me that the codes could be mapped to branches on a mind map, and that we could then use the missions as indicators of what our coverage was looking like. Based on the coding example above, I started with something like this:
Codes and testing missions, mind mapped
You might observe that there are some unused codes (or branches) on the mind map. I made a decision to map each mission to whichever code best represented its main objective; otherwise the mind map would have become very large and riddled with duplicates. Taking this approach kept things simple, while still providing an indication of overall test coverage.

At this point in the process the objective is to reflect and to see what the codes identified during the testing are telling you; to consider whether there are emergent patterns, categories, subcategories and themes, and whether any of the connections between them lead to useful information for you and your team. As such the process is entirely fluid, with no right or wrong way to do things. It is important that you document your thinking though, and so we move on to level three.

Level three – thematic coding

Level three coding is where we really started to dig into the results of our reflections and – again referring to the coding pyramid – begin to “develop highly refined themes.” Having developed our initial coding framework, we continued to test using the same approach, gathering further data and applying codes as we did so, creating additional codes where needed. Based upon the codes we gathered during further testing, how we categorised them, and how we mapped our testing missions against those categories, we ended up with a mind map like the one below.
Categories started to emerge from our analysis of the corpus (the collated notes from our testing – I encouraged the use of Evernote for note-taking). These emerging categories were as follows:

• Whether or not we’re happy with the outcome of specific testing missions
• Whether further investigation is required
• Whether a mission is not yet complete (or failing)
• Whether a mission needs re-testing
• Whether there are some usability concerns
• Whether I’ve got any important information to add (notes)
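These categories amount to a small status vocabulary for missions. As an illustration only – the mission names below are hypothetical and the category labels are paraphrased from the list above – tagging each mission with a single category makes it easy to summarise where attention is needed:

```python
from collections import Counter

# Paraphrased category labels from the list above.
CATEGORIES = {
    "happy", "needs investigation", "incomplete",
    "needs re-testing", "usability concern", "note",
}

# Hypothetical missions, each tagged with one category.
mission_status = {
    "add assignments to a programme": "happy",
    "check module completion": "needs re-testing",
    "create a programme in CRM": "needs investigation",
    "enrol a delegate": "happy",
}

# Guard against mistyped category labels before summarising.
unknown = set(mission_status.values()) - CATEGORIES
assert not unknown, f"unknown categories: {unknown}"

summary = Counter(mission_status.values())
for category, count in sorted(summary.items()):
    print(f"{category}: {count}")
```

In practice the mind-mapping tool holds this information visually; the sketch simply shows the underlying idea of one status marker per mission.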
XMind provides a set of built-in icons that readily map to those categories, and more besides, as can be seen in the Key below.
This technique of mixing qualitative and quantitative coding approaches (though I didn’t realise it at the time) is referred to within the Qualitative Research literature as Magnitude Coding. “Sometimes words say it best; sometimes numbers do; and sometimes both can work in concert to compose a richer answer and corroborate each other.” – Johnny Saldana, The Coding Manual for Qualitative Researchers.

Referring back to the coding steps mentioned in John’s article (from Heather Ford’s Qualitative Codes and Coding):
1. Decide which types of coding are most relevant
2. Start coding!
3. Create a start list of codes
4. Generate categories (pattern codes)
5. Test these categories against new data (start with contrasting data early on!)
6. Write about categories/pattern codes in a memo to explain their significance
It became apparent that we’d covered off steps 1–4, and that we’d also more or less completed step 5 (in mind map form), having successfully mapped further test missions to the established categories. We were ready to start on step 6, or level four of the coding pyramid.

Level four – emerging themes

This is the level where one should reflect upon what the corpus (in this case our testing notes and resulting mind map) is telling us, and start looking to identify emerging properties and issues.
While creating the mind map above I came up with a list of issues, questions, concerns and a few bugs. I’ve had to sanitise the list a bit in order to publish this article, but it hopefully demonstrates the kind of ideas and actions that may occur to you as you go through the process.
I’ve labelled some of the notes for clarity (though I also do this as a matter of course). Note that the emerging user experience theme [UX] has been documented with a corresponding action. The missions associated with the “needs further testing” category also require some testing effort. This has been captured under the “assignments need further testing” umbrella #action.

Go code!

Clearly much of what I’ve written above is context-specific, but I think it’s a pretty flexible approach. You just need to follow the steps below to implement it:
• Start exploring
• Capture tests using the coding technique – “as a user I want to...”
• Codify the tests according to areas covered – delegates, events, bookings etc. in my context
• Create a mind map with branches according to the codes
• Map tests to branches
• Reflect upon what the mind map is telling you about coverage, quality, relationships, issues etc. – use icons for clarity
• Record the observations and ideas that occurred to you while following the process
• Reflect upon what you have learnt
• Repeat as required
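The steps above can be sketched in code too. Here is a toy Python sketch – all mission names, codes and statuses below are illustrative, not from the real project – that groups missions by their primary code, mirroring the one-branch-per-code mind map, and prints a simple coverage-and-status view:

```python
from collections import defaultdict

# (mission, primary code, status) per mission - illustrative data only.
missions = [
    ("book a delegate onto an event", "bookings", "happy"),
    ("cancel an event with existing bookings", "events", "needs further testing"),
    ("amend delegate contact details", "delegates", "happy"),
    ("create a recurring event", "events", "usability concern"),
]

# Map each mission to the branch for its primary code,
# mirroring one branch per code on the mind map.
branches = defaultdict(list)
for mission, primary_code, status in missions:
    branches[primary_code].append((mission, status))

# Reflect: which branches are thin, and which statuses cluster where?
for code, items in sorted(branches.items()):
    print(f"{code} ({len(items)} mission(s))")
    for mission, status in items:
        print(f"  - {mission} [{status}]")
```

A mind-mapping tool (or a spreadsheet) would normally hold this structure for you; the point of the sketch is just that codes become branches, missions become leaves, and statuses become icons.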
Of course, it doesn’t have to be a mind map – you could do all of this in a spreadsheet if you wanted to, but I think mind maps provide a much more visceral picture, particularly with the addition of icons, notes and, if you really want to, screenshots. They also make the work of moving things around – from code, to category, to theme – very easy.

It’s worth noting that the process of analysing your work can actually be much more complex than I’ve made it seem in this article. It should be a time of deep reflection, and I’ve only scratched the surface of the kind of information we could be drilling into. You may want to think about the following should you decide to implement this kind of approach:
• Your relationship with the users and the testing activity
• The overarching goals of the testing
• Your code choices and their definitions
• The relationships between your codes, categories and themes
• In-depth analysis of your emerging themes, concepts and theories
• Problems encountered during the testing
• Personal and ethical dilemmas you may have encountered during the testing
• Future direction(s) for the testing
If you’re really keen, you can go meta and memo on your memos, but I was some way away from that point at the time of writing. Extended memos can also be incorporated into a final report. In the process above, I would consider the mind map a kind of report in itself, though it may not be suitable for all audiences. My reflections, however, are quite likely to end up in a written report (a status update, for example) and for that reason tend to be kept brief. Your mileage may vary.
You may also find that this approach feels entirely natural, particularly when working in an agile environment where an iterative, exploratory approach to development and testing is the norm. In which case, this example and John’s earlier paper might serve as a useful framework to guide your thoughts.

Thanks to John Stevenson and Mike Talks for reviewing this article and providing valuable feedback.

Simon Knight works with teams of all shapes and sizes as a test lead, manager & facilitator, helping to deliver great software by building quality into every stage of the development process. Follow him on Twitter (@sjpknight) or read his blog http://sjpknight.com.