Pipelines - metrics over time

James Cheese
Contributor
October 8, 2024

Hello! I posted a question a while back on a related topic, and have done a bit of follow-up searching since... I couldn't find anything great to start with, so I figured I'd just throw out a discussion about it - tracking metrics over time.

We've got some obvious metrics that would be nice to capture and compare over the long term - e.g. to help us see the progress we're making working through old tech debt, in the form of fewer linter warnings, improved code coverage and the like...

We've got a couple of ways of doing this in mind so far:

  • Write a service for it
    • Obvious enough: put together something in your chosen FaaS provider that accepts and stores the metrics. Probably using something off-the-shelf on the reporting side, but the world's your oyster here.
    • Pros: behaviour that's exactly what you want, integration with preferred tools, pick a visualization
    • Cons: infrastructure cost, up-front cost in time, and the usual roll-your-own concerns - security, maintenance, etc.
  • Shell script to drop rows into a cloud provider
    • Use some standard DB provider hosted in the cloud (or a cloud-specific one like Azure Table Storage, your preference) and write a shell script that invokes a CLI tool to store the relevant data. Again probably using an off-the-shelf tool for reporting.
    • Pros: relatively limited up-front work, fewer moving parts, pick a visualization
    • Cons: "feels icky", mindset issues around data validation and injection
  • Third party tools
    • Simplest option here - pick up a third-party solution and run with it. Won't name any specifics here, but feel free to mention any particularly good options :)
    • Pros: easy to implement, likely going to be a higher quality solution
    • Cons: ongoing cost
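To make the second option concrete, here's a hedged sketch of what the shell-scripted approach could look like, assuming AWS DynamoDB and the aws CLI (since that came up as a candidate); the table, repo, and attribute names are all made up for illustration:

```shell
#!/bin/sh
# Sketch only: drop one row of build metrics into a (hypothetical)
# DynamoDB table from a CI step.

# Metric files as produced earlier in the build.
mkdir -p metrics
echo 10 > metrics/lintWarnings

# Build the DynamoDB item JSON in one place, so the quoting/validation
# worries live in a single function rather than scattered through the step.
build_item() {
  repo="$1"; commit="$2"; warnings="$3"
  printf '{"Repo":{"S":"%s"},"Commit":{"S":"%s"},"LintWarnings":{"N":"%s"}}' \
    "$repo" "$commit" "$warnings"
}

ITEM=$(build_item "my-repo" "abc123" "$(cat metrics/lintWarnings)")
echo "$ITEM"

# The actual insert (needs AWS credentials configured in the pipeline):
# aws dynamodb put-item --table-name CIBuildMetrics --item "$ITEM"
```

Centralising the JSON construction like this also goes some way towards the "data validation and injection" mindset concern - there's exactly one spot to sanitise inputs.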

Just wondering how much I've missed - is there something in Pipelines that can store this information? (I'd seen mention of Metrics, but not as something functional.) It'd be great to hear from anyone who's had some success, to get some ideas (and warnings of what not to do!)

For context - my old post about failing builds for a "no new eslint warnings" quality gate. We got there in the end, but the specific solution we used won't transfer to all types of metric:

https://community.atlassian.com/t5/Bitbucket-questions/How-can-we-fail-builds-to-prevent-linter-warnings-increasing/qaq-p/2577642

1 answer

0 votes
James Cheese
Contributor
October 10, 2024

Just to follow up on this: I think we're likely to go with a shell-scripted option - probably Azure Table Storage, since we already have the Azure CLI hooked up for these builds (though AWS DynamoDB looked like another perfectly good option). A rough script is below (though it's untested, so it probably won't work as-is!)

- step: &linter
    script:
      # Insert some scripts here that run a linter
      - mkdir -p metrics
      - echo 10 > metrics/lintWarnings
      - echo 0 > metrics/lintErrors
    artifacts:
      - metrics/**
- step: &metrics
    name: Store Code Metrics
    image: mcr.microsoft.com/azure-cli
    script:
      - az login --service-principal --username ${AZURE_APP_ID} --password ${AZURE_PASSWORD} --tenant ${AZURE_TENANT_ID}
      - az storage entity insert --account-name vlqrdevstorage --table-name CIBuildMetrics --entity PartitionKey=${BITBUCKET_REPO_SLUG} RowKey=${BITBUCKET_COMMIT} LintWarnings=$(cat metrics/lintWarnings) LintErrors=$(cat metrics/lintErrors)

That gives us a central drop-point for build info, which we can then link into either a reporting tool or a definitely-not-a-reporting-tool-but-it'll-do-for-now (i.e. Excel).

I'm aware the cross-step state handling isn't perfect - I've just left it in as an example.
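For the reporting side, here's a hedged sketch of pulling the rows back out of Table Storage for one repo (same made-up account/table names as above; the filter syntax is standard OData, and the filter-building is pulled into a function so it can be sanity-checked locally):

```shell
#!/bin/sh
# Sketch only: query stored CI metrics back out for reporting.

# Build an OData filter selecting all rows for one repo (one partition).
build_filter() {
  printf "PartitionKey eq '%s'" "$1"
}

FILTER=$(build_filter "my-repo")
echo "$FILTER"   # PartitionKey eq 'my-repo'

# The actual query (requires az login first):
# az storage entity query \
#   --account-name vlqrdevstorage \
#   --table-name CIBuildMetrics \
#   --filter "$FILTER"
```

The JSON that comes back can then be fed into whatever reporting tool (or spreadsheet) you've picked.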
