
Prompt engineering is now a team sport

 

Hey Atlassian Community! 👋

I’m Shivi, a content designer at Atlassian, and I’ve been deep in the world of prompt engineering - writing, shaping, and refining prompts for Atlassian apps powered by Rovo and Atlassian Intelligence. This post is my humble attempt to demystify some aspects of prompt engineering and help anyone curious (or perhaps even a little confused!) about it understand what actually goes into it. Let’s dive right in!

 


What do we write prompts for?

First, let’s break down the basics, minus the jargon:

  • AI is the big umbrella term for all the smart stuff happening behind the scenes.

  • LLMs (Large Language Models) are just one kind of AI.

  • Models are the actual tools we’re writing prompts for - usually an LLM. Understanding what a model is good (or not so good) at helps us get the best responses possible.

What's "prompt engineering" exactly?

Prompt engineering (PE) is simply about writing clear instructions (prompts) to guide AI to respond in helpful, accurate, and meaningful ways. You can approach this in two ways:

  • First, writing prompts to use AI tools by crafting intuitive, simple, LLM-friendly instructions to get the best responses possible.

  • Second, engineering prompts to build AI tools, embedding prompts technically into products - a step that’s historically been owned and managed by developers or machine learning (ML) engineers.

How has my understanding of PE evolved?

I always thought this term had a strong technical aspect because:

  • "Prompt engineer" is still commonly seen as a tech role.

  • On many teams I know, only devs and ML engineers have managed PE end to end.

My knowledge of this whole thing was pretty half-baked, I must admit.

Who makes a good prompt engineer?

If you’re a language expert, you’re likely already one! 

I'm beginning to see that prompt engineering shouldn't be viewed as purely technical anymore. Even when prompts require technical embedding, the actual prompt-writing remains straightforward because it's heavily language-focused (after all, we're directly dealing with large language models). It’s strategic and user-centered. Anyone comfortable with clear language can craft effective prompts by collaborating closely with tech teams. And if words aren't your strong suit, content designers on your team are excellent allies who can help you find the right phrasing and tone.

How can teams across different roles approach prompting?

When crafting prompts, teams across content, product, UX, and technical roles should typically work together to:

  • Develop and test system prompts that clearly define the AI model’s persona, tone, and interaction guidelines.

  • Shape clear, intuitive user-facing prompts (such as conversation starters) that lead to useful AI responses.

  • Define fallback scenarios and clarifying interactions to help guide AI behavior during uncertain situations.

  • Identify and manage edge cases proactively, ensuring the model responds appropriately and effectively.

  • Continuously refine prompts based on feedback and real-world outcomes, ensuring ongoing improvement.
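To make the first and third bullets a little more concrete, here’s a minimal sketch of what a system prompt with a built-in fallback rule can look like, assuming a typical chat-style API where messages are passed as role/content pairs. The persona, tone, and fallback wording are illustrative placeholders, not Atlassian’s actual prompts.

```python
# A system prompt defines the model's persona, tone, and behaviour rules,
# including what to do when it's uncertain (a fallback interaction).
SYSTEM_PROMPT = (
    "You are a helpful assistant for a project-tracking app. "
    "Tone: friendly, concise, and professional. "
    "If you are not confident in an answer, say so and ask a "
    "clarifying question instead of guessing."
)

def build_messages(user_input: str) -> list[dict]:
    """Combine the system prompt with the user's message."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_input},
    ]

messages = build_messages("Summarise the open tasks in my project.")
```

Notice that the prompt itself is plain language - the “engineering” here is mostly deciding what the model should and shouldn’t do, which is exactly the kind of thinking content and UX folks already do.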

Each of us contributes something crucial that AI models often lack: user empathy, clarity, tone, and precision. Without that, even the most powerful LLMs will produce garbage.

How's this different from what tech teams do?

Tech teams handle the behind-the-scenes stuff - the coding and setup that make prompts actually appear where users see them. They help the AI "remember" conversations, add user details dynamically, select when prompts should pop up, and so on.
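As a rough sketch of that behind-the-scenes work: “remembering” a conversation usually means keeping a running message history, and adding user details dynamically often means filling a prompt template at runtime. The template and field names below are illustrative examples, not how any particular product is wired up.

```python
from string import Template

# A prompt template with placeholders that get filled in per user.
PROMPT_TEMPLATE = Template(
    "The user's name is $name and their role is $role. "
    "Answer their question in the context of project $project."
)

class Conversation:
    """Keeps the message history so the model can 'remember' the chat."""

    def __init__(self, system_prompt: str):
        self.history = [{"role": "system", "content": system_prompt}]

    def add_user_message(self, text: str) -> None:
        self.history.append({"role": "user", "content": text})

    def add_model_reply(self, text: str) -> None:
        self.history.append({"role": "assistant", "content": text})

# Inject user details dynamically, then start the conversation.
system = PROMPT_TEMPLATE.substitute(
    name="Priya", role="designer", project="Apollo"
)
chat = Conversation(system)
chat.add_user_message("What should I work on next?")
```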

Think of it this way: Tech teams build the car engine; non-tech teams steer the car exactly where users need it to go. If you’re good with language, you’re already equipped to engineer prompts. But if you need support, your in-house content designers are perfectly positioned to help you shape prompts that truly resonate. Both sides are equally essential for a smooth ride.

Interested in getting a bit more hands-on?

Good news - there's plenty of room to grow in the AI world, even if you aren’t a techie. Learning a few basics about how AI systems use your prompts (like APIs or backend stuff) can make you feel even more confident. And hey, if you're curious, nothing’s stopping you from trying out some simple code yourself.
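If you do want to peek under the hood, a first “hello world” can be as small as building the JSON payload a typical chat-completion API expects - no real API calls needed. The model name and field names here are generic examples, not a specific vendor’s API.

```python
import json

def make_payload(prompt: str, model: str = "example-model") -> str:
    """Build the JSON body a chat-style API would typically receive."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,  # lower values make replies more predictable
    }
    return json.dumps(payload)

body = make_payload("Rewrite this release note in a friendly tone.")
```

Once you can see that your carefully worded prompt is just one field in a request, the “backend stuff” stops feeling so mysterious.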

Bottom line: When it comes to prompt engineering, there’s no strict barrier anymore between content, product, UX, and tech - just lots of opportunities to explore, learn, and collaborate.

Non-tech folks, it’s your turn now!

What are you currently doing with prompt engineering to level up your work? Or, if you’re just getting started, what would your dream scenario look like?

Share your experiences, questions, or ideal vision for using prompt engineering in your role!

1 comment

Justin Townsend
Contributor
June 27, 2025

Hi @Shivi Sivasubramanian

I like your take on this and agree wholeheartedly. No one person will have the complete answer for the magical prompt that unlocks the LLM behaviour you need.

It's important to keep productivity and transparency in mind here: as teams try to "anticipate" each other's sentences and sentiment, attributing contribution remains really important. I guess that's where Atlassian Open DevOps tools can play a part.

Cheers,

Justin
