Jira Data Generator asks the user how many projects, issues, etc. to create and how many fields to populate for each issue. The problem is that the Data Generator selects which fields to populate for each issue individually, which produces an output where each issue in a project has a different set of fields populated.
This is far from a realistic use case. Even if a system has a few hundred projects and a few hundred custom fields, the issues within any given project usually use the same set of custom fields.
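To illustrate what I mean (this is not the plugin's actual code, just a sketch of the behaviour): the generator effectively draws a fresh random subset of custom fields for every single issue, so with enough issues every project ends up touching every custom field.

import random

# Not the real Data Generator code; this only illustrates the per-issue
# random field selection described above. With ~1400 fields and thousands
# of issues per project, practically every field gets used in every project.
ALL_CUSTOM_FIELDS = [f"customfield_{10000 + i}" for i in range(1400)]

def fields_for_issue(fields_per_issue=20):
    # A new random draw for every single issue
    return random.sample(ALL_CUSTOM_FIELDS, fields_per_issue)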
I am trying to create a large Jira instance to performance test my plugin. Atlassian's definition of a "large instance" (as of today) is 1,500+ projects, 1,400+ custom fields, 1M+ issues, and so on. By default, all custom fields created by the Data Generator have global contexts, but a production instance of this size would certainly have its custom field contexts narrowed down to the projects that actually use those fields.
After creating the required number of projects, custom fields, and issues, I tried the recently introduced custom field optimizer. But because custom fields are randomly selected per issue during data generation, every custom field ends up being used in every project, so the optimizer can't optimize anything.
This should be a problem for everybody, including Atlassian. How can I create a realistic large test instance with this data generator?
I tried to create an issue about this in the JDG project on ecosystem.atlassian.net, but I get an error saying "the default assignee does not have assignable user permission".
Long story short, I strongly believe the Data Generator should be updated to pick the given number of custom fields per project and populate that same set of custom fields on every issue in that project. Does anybody know of a workaround I can use? How do you create large instances?
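One possible workaround would be to skip the Data Generator for issue data and script issue creation directly against Jira's standard REST API, keeping one fixed set of custom fields per project. A rough sketch of that idea (base URL, credentials, project keys, field IDs, and counts are placeholders, and it assumes plain text custom fields):

import random
import requests

BASE_URL = "http://localhost:8080"      # placeholder Jira base URL
AUTH = ("admin", "admin")               # placeholder credentials
PROJECT_KEYS = ["PERF1", "PERF2"]       # placeholder project keys
ALL_CUSTOM_FIELDS = [f"customfield_{10000 + i}" for i in range(1400)]
FIELDS_PER_PROJECT = 20
ISSUES_PER_PROJECT = 1000

for project_key in PROJECT_KEYS:
    # Pick one fixed subset of custom fields for this project...
    project_fields = random.sample(ALL_CUSTOM_FIELDS, FIELDS_PER_PROJECT)
    for i in range(ISSUES_PER_PROJECT):
        fields = {
            "project": {"key": project_key},
            "summary": f"Generated issue {i} in {project_key}",
            "issuetype": {"name": "Task"},
        }
        # ...and populate that same subset on every issue in the project,
        # so the custom field optimizer can narrow contexts per project.
        for cf in project_fields:
            fields[cf] = f"value-{i}"   # assumes plain text custom fields
        response = requests.post(
            f"{BASE_URL}/rest/api/2/issue",
            json={"fields": fields},
            auth=AUTH,
        )
        response.raise_for_status()

This is slower than the Data Generator's direct database approach, so I haven't tried it at the 1M-issue scale yet.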
Hi,
Were you able to find a solution for this?
Not yet. To make things worse, Data Generator has a bug that prevents it from creating Security Level Schemes in new Jira versions.
Were you able to find a solution? I am in the same situation: I would like to use a large data set to create a test environment locally. At the moment there is a Jira performance framework that provides a large data set, but it has a bit of a learning curve when it comes to writing tests. If you found a solution and have a depersonalized data set you are willing to share, please let me know.