Hi folks, I am wondering what the best practices are for tracking QA handoffs/workflows for Scrum teams.
Like any typical team, our squad pushes changes through several QA handoffs:
- Code review
- Visual QA
- Functional QA
- UAT
These are currently set up as columns on our Scrum board, but changes often don't move through them linearly (waterfall-style); some steps run in parallel (specifically Visual and Functional QA). As a result, folks don't use the Jira board consistently, and it can be difficult to understand the status of things.
Is it against best practice to move steps that can run in parallel, like Visual QA and Functional QA, to sub-tasks? What are the risks in doing that?
While the workflow makes more sense to me in parallel, I fear we would then lose visibility into how long each status takes. Then again, if folks aren't using the board linearly anyway, we don't really have that data today either.
Thoughts?
Hi @Mara Julin - The first question I always ask is whether your sprint flow is designed for these activities to complete within the same sprint as development.
If the answer is yes, then sub-tasks may make sense, since they give you visibility into the various team members who may work against that story in a given sprint.
If the answer is no, I recommend separate issues for that activity, tied together with issue links. In this scenario, they can still be in the same sprint, but their sprint goals will be distinct from the dev team's. Sub-tasks would otherwise hold the story hostage until QA is complete.
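If you go the separate-issue route and want to cut down on the manual linking, it can be scripted against the Jira REST API. A minimal Python sketch, assuming the standard "Relates" link type; the base URL, credentials, and issue keys are placeholders:

```python
import requests

JIRA_URL = "https://your-site.example.com"   # placeholder base URL
AUTH = ("user@example.com", "api-token")     # placeholder credentials

def link_qa_to_story(qa_key, story_key, link_type="Relates"):
    """Create an issue link so the separate QA issue tracks back to the dev story."""
    resp = requests.post(
        f"{JIRA_URL}/rest/api/2/issueLink",
        json={
            "type": {"name": link_type},
            "inwardIssue": {"key": qa_key},
            "outwardIssue": {"key": story_key},
        },
        auth=AUTH,
    )
    resp.raise_for_status()

link_qa_to_story("QA-42", "ABC-123")  # placeholder issue keys
```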
UAT is an entirely different animal. I try to coordinate a "release sprint" devoted to it, where the team's primary sprint goal is addressing any critical issues that come out of UAT, with enough lower-priority items in the backlog to work on if UAT feedback is light.
Thank you for this! Yes, all of these activities would be expected to complete within the sprint. They are also attached to the epic or project ticket, since we can't launch without them; a sub-task holding back a story wouldn't be an issue because we can't release anyway.
My concern with moving to sub-tasks instead of statuses is that I'd ideally like to track how long each of these steps takes. I've seen various add-ons and automations that show time spent based on the QA statuses, and since sub-tasks follow a different workflow, we'd lose that. Do you have suggestions there?
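For reference, the measurement I'm after is roughly this, pulled straight from the issue changelog via the REST API. A rough Python sketch rather than a working tool; the base URL, credentials, and status names are placeholders:

```python
from datetime import datetime
import requests

JIRA_URL = "https://your-site.example.com"      # placeholder base URL
AUTH = ("user@example.com", "api-token")        # placeholder credentials
QA_STATUSES = {"Visual QA", "Functional QA"}    # the statuses we want to time

def time_in_qa_statuses(issue_key):
    """Sum how long an issue sat in each QA status, from its changelog."""
    resp = requests.get(
        f"{JIRA_URL}/rest/api/2/issue/{issue_key}",
        params={"expand": "changelog"},
        auth=AUTH,
    )
    resp.raise_for_status()
    histories = resp.json()["changelog"]["histories"]

    durations = {}   # status name -> total seconds spent there
    entered = {}     # status name -> datetime the issue entered it
    for history in sorted(histories, key=lambda h: h["created"]):
        when = datetime.strptime(history["created"], "%Y-%m-%dT%H:%M:%S.%f%z")
        for item in history["items"]:
            if item["field"] != "status":
                continue
            left, arrived = item["fromString"], item["toString"]
            if left in entered:   # close out the QA status we just left
                seconds = (when - entered.pop(left)).total_seconds()
                durations[left] = durations.get(left, 0) + seconds
            if arrived in QA_STATUSES:
                entered[arrived] = when
    return durations
```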
If you're using sub-tasks, you'll want to start with a simplified workflow, because splitting the steps out removes the need for all of the extra statuses. I typically go with something like To Do >> In Progress >> In Review >> Done.
From here, I would leverage a component or custom field to establish the type of sub-task so you can create filters specifically for QA sub-tasks. Then you can use any number of dashboard gadgets to get the information you need, like Resolution Time, to track QA performance.
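As a rough illustration of that filter idea: once a component marks the QA sub-tasks, something like the following could pull their resolution times via JQL and the REST API. A Python sketch; the component names, base URL, and credentials are placeholders for whatever you set up:

```python
from datetime import datetime
import requests

JIRA_URL = "https://your-site.example.com"   # placeholder base URL
AUTH = ("user@example.com", "api-token")     # placeholder credentials

# Placeholder component names; subTaskIssueTypes() is standard JQL.
JQL = ('issuetype in subTaskIssueTypes() '
       'AND component in ("Visual QA", "Functional QA") '
       'AND resolved >= -30d')

def parse_ts(ts):
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S.%f%z")

resp = requests.get(
    f"{JIRA_URL}/rest/api/2/search",
    params={"jql": JQL, "fields": "created,resolutiondate,components",
            "maxResults": 100},
    auth=AUTH,
)
resp.raise_for_status()

for issue in resp.json()["issues"]:
    fields = issue["fields"]
    elapsed = parse_ts(fields["resolutiondate"]) - parse_ts(fields["created"])
    days = elapsed.total_seconds() / 86400
    names = ", ".join(c["name"] for c in fields["components"])
    print(f'{issue["key"]} [{names}]: resolved in {days:.1f} days')
```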
Gotcha, okay. Now thinking through pointing effort: currently we point Functional QA to understand their capacity for a given week. I understand that you can put points on sub-tasks, but they aren't counted in the velocity report. Do they roll up to the parent task at all?
Story pointing for individuals, or even sub-teams, is not an agile best practice, which is why Jira does not support it.
To maximize efficiency, user story pointing should be based on the entire team's estimation (every contributing team member should have a say in estimation poker). Points are, by definition, about the "Story" and what it will take to achieve the definition of done for that story, whether that be development, QA, documentation, iterative releases from dev to staging, etc. I've found that tracking at the individual or sub-team level just doesn't work.
Culture aside, points were simply designed as a way to simplify and speed up how teams estimate issues. If poker is done properly, points are only relative to the other stories in the sprint; once the sprint has completed, they serve no value beyond the aggregate velocity, which provides a basic trend analysis during retro ("We were higher this sprint... great job everyone!"; "We were lower this sprint... oh yeah, Timmy was out sick a couple of days and we underestimated ABC-123").
Hi @Mara Julin ,
If you are open to using marketplace apps, I can propose another approach. To be completely transparent, we are the makers of the Checklist for Jira server app.
Alternatively, you could combine the Visual QA and Functional QA statuses into a single QA status. You would then use the checklist app and have it automatically populated with "Visual QA" and "Functional QA" items. Each item can have its own status (in progress, blocked, etc.).
This way, you can have both activities in parallel. You can even add a Checklist Workflow Validator to ensure that the checklist is completed before allowing the issue to move to the UAT status.
The only drawback is that it would be possible to see how much time the issue spent in the combined QA status, but not in each of the Visual QA and Functional QA states individually.
Hope that helps,