Struggles with building a prompt and getting good output for Agents even with some help from ChatGPT

Laura Campbell
Community Champion
July 25, 2025

I'm hoping to get a bit of a discussion going about other people's experience building prompts and getting good output from Rovo agents.

I'm trying to build Rovo agents that are coaches, asking people for some input, probably referencing some internal playbooks or guidelines, and then giving them recommendations or giving feedback on their drafts.

I found the conversational experience for Rovo agents would over-summarize the conversation and drop key information, so I started using Rovo chat and asking it to generate a prompt that I would then copy and paste (a similar approach to the one I use when building a custom GPT).

However, Rovo chat doesn't seem to be able to output a prompt in markdown (or rather, it says "Here it is!" but the result isn't actually markdown 😭).

So, regardless, I am testing the same prompt side by side in a Rovo agent and a custom GPT: the Rovo agent spits out a long wall of recommendations, whereas the custom GPT asks for some input and waits before giving more recommendations.

I tried modifying the prompt to stipulate that it should wait for input before giving more instructions or recommendations, but I guess I need to improve how I phrased that.

My prompt instructions specifically reference two playbooks that are Confluence pages, but I did not end up limiting the knowledge the agent has access to. When I then tested the agent, it gave me an answer that completely ignored all the instructions in the prompt (asking questions, giving some feedback).

I've also noticed that ChatGPT can use emojis during its discussions, which helps signal moving from one step to another, or that a new topic or element is being discussed. I have yet to see an emoji in a Rovo agent chat. Has anyone gotten emojis in their output? Does it work if I add something like "Use emojis to signal changing topics" to the prompt?

Ideally I would like the coach, at the end, to help the user generate some Jira work items and a Confluence project overview page; I'm going to assume it's not a good idea to include that in the main prompt. Can I just add those as actions to my agent, or do I have to be more specific with a workflow, treating the output of the agent kind of like smart values that I would use to generate Jira work items?

Should I perhaps be splitting the prompt into multiple agents? Like Agent 1 asks some foundational questions, agent 2 asks what has been already done, agent 3 gives some feedback and improvement ideas, and agent 4 comes up with the final Jira work items?

1 answer

0 votes
Rebekka Heilmann (viadee)
Community Champion
July 27, 2025

Hi @Laura Campbell 

I've had similar experiences with Rovo not asking questions even though it's prompted to do so.

Conversation starters along the lines of "Give me information about a page. Ask me which page first." just don't work: Rovo *always* starts analyzing random content. And regardless of how the conversation evolves, Rovo *always* seems to re-analyze the original content rather than only reading new information. Yes, the whole chat history is sent as part of the request, but if the chat has moved on to something completely different, Rovo shouldn't just keep reading that initial page.

We are very annoyed by the orchestration layer that Atlassian has implemented: at the moment, they seem to mostly pick cheap models, resulting in bad and inconsistent results. I don't really get it from a marketing point of view: nobody really seems to be convinced by Rovo, so why should we start paying for it once they introduce the credits? Wouldn't it be much smarter to provide a GREAT solution (by using a good model all the time), collecting data about use cases, quality, etc., before then introducing new models to limit costs... anyway.

I am working with several of my AI colleagues on Rovo and so far nobody is happy with it. There are some great concepts, but they're executed badly...

 

To get back to your question: generally speaking, splitting up agents into single tasks makes sense, at least once we can connect agents in pipelines. In practice, I wouldn't know how to connect the four agents you're suggesting without the user having to manually copy and paste results into a new agent chat.
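To make that hand-off concrete, here is a minimal sketch of what such a pipeline amounts to once it can be automated. None of these function names are Rovo APIs; they're hypothetical stand-ins for the four agents, each taking the running context and adding its output, which today the user would copy-paste between chats:

```python
# Hypothetical sketch: four "agents" as plain functions passing a shared
# context dict down the line. This only illustrates the hand-off; it does
# not use any real Rovo API.

def ask_foundations(context):
    # Agent 1: record answers to the foundational questions.
    context["foundations"] = f"Goals clarified for: {context['topic']}"
    return context

def review_progress(context):
    # Agent 2: capture what has already been done.
    context["progress"] = "Existing work reviewed"
    return context

def give_feedback(context):
    # Agent 3: feedback based on the earlier stages' output.
    context["feedback"] = "Feedback based on foundations and progress"
    return context

def draft_work_items(context):
    # Agent 4: turn the accumulated context into draft Jira work items.
    context["work_items"] = [f"Task: follow up on {context['topic']}"]
    return context

def run_pipeline(topic):
    # Each stage receives the previous stage's full output -- the step
    # the user currently has to perform manually between agent chats.
    context = {"topic": topic}
    for stage in (ask_foundations, review_progress, give_feedback, draft_work_items):
        context = stage(context)
    return context

result = run_pipeline("onboarding playbook")
```

The point of the sketch is simply that each stage needs the *complete* output of the previous one; losing any field between stages is exactly the copy-paste failure mode described above.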
