We want to move all the data in Field A (a custom field) to Field B (a system field). It looks to be in excess of 50,000 issues with data in Field A.
One plan I had considered was writing one base Automation Rule and then splitting it into pieces that run at roughly 15-minute intervals, each with its own JQL to limit the batch to closer to 2,000 issues.
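As a sketch of that batching approach, each copy of the rule could scope its JQL to a non-overlapping slice of the backlog, for example by created-date window (the project key, field name, and dates below are placeholders for whatever fits your data):

```
project = ABC AND "Field A" is not EMPTY
  AND created >= "2022-01-01" AND created < "2022-07-01"
ORDER BY created ASC
```

Shifting the date window in each rule copy keeps every batch under the per-rule issue limit without any batch overlapping another.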
There are other configurations (Screens, Boards, Dashboards, Automations) that I need to make as part of this switchover.
My thought is that making 50,000 edits is the biggest challenge of the project. If I could get those to fire overnight, when our users are not working and outside the bounds of other Automations, I could do the hardest part in the middle of the night (without actually being awake), and make my two sets of configuration changes the night before and first thing in the morning.
Thoughts? Has anyone done this? My backup plan is creating a number of import files, but that makes me nervous, and I'd likely have to sit at my computer twiddling my thumbs for long stretches, watching each import.
You would have to divide your rules up into batches of no more than 1,000 issues; if a rule processes more than 1,000 issues, it will be throttled. You can learn more about the service limits here.
I have done this in the past, when I had to update about 20k issues, and used JQL queries that each returned no more than 1,000 issues.
Hi @Alex Hall
Adding to Mikael's suggestions:
Here is the cloud version of the automation limits and packaging guidelines:
A rule like this needs caution regarding at least the issues-searched limit and the daily processing time limit.
First thing: I recommend adding a global rule to your instance, triggered on the Service Limit Breached trigger, which sends you an email as the limit approaches. With that, the site admins can proactively log in and disable the rule if something goes wrong. https://support.atlassian.com/cloud-automation/docs/jira-automation-triggers/#Service-limit-breached
It is probably a good idea to leave this rule in place at all times, including the smart values {{breachedSummary}} and {{breachedRules}} in the email alert, to watch for problems in usage and rule execution.
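As one possible shape for that alert, the email action's body could include both smart values from the trigger (the surrounding wording here is just a suggestion):

```
Automation service limit approaching or breached.

Summary: {{breachedSummary}}
Affected rules: {{breachedRules}}
```

That way the email tells you which limit is in trouble and which rules are responsible, without having to dig through the audit log first.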
Next, I recommend running a scheduled-trigger rule whose JQL tests for a missing value in Field B (i.e., the destination field). That way the rule stops finding issues once everything is updated, and it can then be disabled.
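For instance, the scheduled rule's JQL might look something like this (project key and field names are placeholders):

```
project = ABC AND "Field A" is not EMPTY AND "Field B" is EMPTY
ORDER BY key ASC
```

Once Field B is populated everywhere, this query returns nothing, so the rule naturally runs out of work.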
One key is the scheduling frequency to prevent throttling. To check this, I recommend:
Kind regards,
Bill
Thank you! This was incredibly helpful. I set up the Service Limit Breached rule, which hopefully we'll never see fire.
I love all your tips as well as @Mikael Sandberg's.
Running all 50,000 in one night sounds particularly difficult to orchestrate. I've built the rule and started triggering it on some smaller counts on closed projects. It hits the 1,000-issue limit, as you said it would.
I'm going to whittle away at this to see how small I can get the number without affecting active work, and see about a final set of JQLs to tackle only ~5,000 for the switchover. Spreading this out over a week should keep me from hitting the 60-minute processing time.