Rovo's Readiness Checker - not getting the Auditability check

Jacques
Contributor
March 21, 2025

Hey community,

I'm trying Rovo's Readiness Checker. It checks a particular work item (i.e. an issue) for completeness, clarity, auditability, and estimation.

Whatever I do, I never seem to get the "Auditability" check to turn green. I have linked Confluence pages with the relevant information as well as documentation on an external website, but the checker always comes up with some reason why it's not enough. Here are a few of the rationales it has used to fail the Auditability check across the various cycles I've tried:

The columns in the table below are as follows:

  1. Confluence/external links: the links I added to the issue, either to Confluence pages or to external websites.
  2. Emoji Rating Score: the icon the Readiness Checker adds in its result table.
  3. Rationale from Agent: the Checker's explanation of why Auditability didn't pass.
  4. What I expected: my comments on the result generated by the Checker.
| Confluence/external links | Emoji Rating Score | Rationale from Agent | What I expected |
| --- | --- | --- | --- |
| No links whatsoever to documentation or external content | 🔴 | No links to internal or external documentation are provided. | Makes sense. |
| Confluence links to documentation, including product description, definition of done, and user manual | 🔴 | No links to external documentation or key sources of information are provided. | Makes sense, although I'm not sure what kind of external documentation is necessary if this were an internal software development project with no external documentation available. |
| Confluence links to documentation, including product description, definition of done, and user manual, plus links to external documentation | 🟡 | While the issue is clear, it lacks links to external documentation or sources for further reference. | I do not understand this. I did add external documentation. |

I have tried this for hours, and there is no clear indication of what the Auditability check requires in order to turn green.

The Atlassian documentation (use-case or out-of-the-box agents) does not shed any light on the details of this agent.

Can anyone help? Does anyone have experience with this agent, or has anyone ever had all the checks turn green?

Thanks,

--- Jacques.

3 answers

1 accepted

1 vote
Answer accepted
devpartisan
Atlassian Team
May 21, 2025

@Jacques

I love what @Dr Valeri Colon _Connect Centric_ has suggested here in terms of "just read the instructions". That's the best way to understand what the agent is trying to do. Valeri was kind to call it "evolving logic"; I would say it comes down to poor agent instructions. Because that's an out-of-the-box agent, I could go so far as to say it's a bug. That said, I typically explain those agents as "off the rack": they should always be tailored to suit a customer's needs. They were written with a mix of intents, including helping others learn how to write agent instructions. For this particular case, I don't think "auditability" is a good criterion, because it doesn't line up with the practices I've learned about writing good user stories. Also, I'd call that "traceability", but maybe I'm old-school.

Despite my disagreement, let's look closer to understand how we can fix it. There's only one line in the instructions associated with Auditability: "Verify if key sources of information are linked to the issue, including internal and external documentation." As a human, I don't think I could score against that concept either. What's a key source? Is the mere presence of a link sufficient, when some links might not be the "source" of an issue? In contrast, the other "soft concepts" like completeness and clarity have examples that give the LLM enough clues to form an opinion about good vs. bad.

How can we fix it? I'd try one or more of the following.

  • Explain the concept of auditability. Defining terms can help when there are many possible meanings, or our meaning is unique to our company or domain.
  • Provide examples; either good or bad ones help, and both together are better.
  • Provide step-by-step logic for the assessment. In this case, I'd tune the instructions in "Generate Score" so they are less open to interpretation.
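
For example, the Auditability section could be rewritten along these lines (a hypothetical sketch I just drafted, not the shipped agent's text):

Auditability: Verify that the issue links to the key sources of information behind it. A "key source" is a document a reviewer would need to trace where the requirement came from and how "done" will be judged, such as a product description, a definition of done, or a user manual. Links may be internal (Confluence) or external (vendor docs, standards). Good example: the issue links a Confluence requirements page and the definition of done. Bad example: the issue has no links, or links only to unrelated pages. If the project has no external documentation, internal links alone are sufficient.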

I've been collecting some principles and practices for agent instructions (i.e. prompt engineering) in a GitHub repo. Feel free to use it as a jumping-off point:

https://github.com/ibuchanan/forge-rovo-metaprompting/tree/main/prompts

There's also a technique you could try based on the table you built. Like what I'm describing above, it's "meta-prompting", so you don't have to use Rovo to do it:

I'm trying to improve this prompt:

[agent instructions]

This is the result I got:

[actual agent result]

This is the result I expected:

[expected agent result]

How can I improve the prompt so it generates a result more like what I want?

Sometimes I do that across Rovo and a couple other LLMs as a kind of "set-based design" so I can combine the best parts of each.
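
To make the meta-prompting concrete with the Auditability case from the table above, the filled-in version might look like this (the "expected" line is my own guess at what a passing result would say, not anything the agent actually produces):

I'm trying to improve this prompt: "Verify if key sources of information are linked to the issue, including internal and external documentation."

This is the result I got: 🟡 "While the issue is clear, it lacks links to external documentation or sources for further reference."

This is the result I expected: 🟢 "Links to internal (Confluence) and external documentation are provided, covering the product description, definition of done, and user manual."

How can I improve the prompt so it generates a result more like what I want?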

0 votes
Dr Valeri Colon _Connect Centric_
Community Champion
May 16, 2025

Even with linked Confluence and external docs, Rovo may still flag auditability if the links don’t clearly support traceability or decision-making. The exact criteria aren't fully documented yet, so results can be inconsistent. 

Jacques
Contributor
May 19, 2025

Then the question arises: what's the use of an agent that doesn't provide consistent results? It's not improving my work... it's making it more stressful.

Dr Valeri Colon _Connect Centric_
Community Champion
May 19, 2025

Hi @Jacques, I received an email with your other comment. Ideally, yes, the person who wrote the question would be the one to accept the answer; unfortunately, people rarely do.

I'm attempting to mark off questions we have already addressed [see screenshot of my view] to (1) improve response time, (2) track questions to inform resource creation, and (3) because I'm about to direct participants from our enablement course to post their questions here. Apologies for deviating from the process.


Dr Valeri Colon _Connect Centric_
Community Champion
May 19, 2025

I hear you though—and you're not alone in feeling that way. Consistency is key when integrating tools into your workflow, and when expectations aren’t met, it can definitely add stress.

You can view the agent’s instructions by clicking on the three dots (…) next to the agent name, then "View Agent". If you’d like more control, you can also duplicate the agent and add your own custom instructions—for example, defining what “readiness” looks like in your specific context. Then you can test it and adjust as needed.
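
For instance, a duplicated agent's "Generate Score" step could spell out a deterministic rubric instead of leaving the scoring open to interpretation (a hypothetical rubric; adjust it to whatever "readiness" means in your context):

Score Auditability 🟢 if the issue links at least one document that covers the requirement's source and acceptance criteria (e.g. product description, definition of done, user manual). Score 🟡 if links exist but do not cover those sources. Score 🔴 if the issue has no links at all. Do not require external links when the project is internal-only.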

I believe the intent behind the agent is to yield consistent results, but since the underlying logic is still evolving, there's room for refinement—especially with community feedback like yours. Thank you for sharing your thoughts on this.

