
Improve DAG validation for pipelines with hundreds of tasks #5421

Merged (1 commit into tektoncd:main on Sep 5, 2022)

Conversation

rafalbigaj
Contributor

@rafalbigaj rafalbigaj commented Sep 2, 2022

DAG validation rewritten using Kahn's algorithm to find cycles in task dependencies.

Original implementation, as pointed out in #5420, is the root cause of poor validation webhook performance, which fails at the default timeout (10s).
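The rewritten check can be illustrated with a minimal sketch of Kahn's algorithm: repeatedly remove tasks whose dependencies are all satisfied; any task that can never be removed sits on a cycle (or depends on one). Names here are hypothetical; the actual Tekton implementation is `findCyclesInDependencies` in pkg/reconciler/pipeline/dag/dag.go.

```go
package main

import (
	"fmt"
	"sort"
)

// findCycles returns the tasks that are on a dependency cycle or depend
// (transitively) on one, using Kahn's algorithm: peel off tasks with no
// remaining unresolved dependencies; whatever is left cannot be ordered.
func findCycles(deps map[string][]string) []string {
	remaining := map[string]int{}      // unresolved dependency count per task
	dependents := map[string][]string{} // reverse edges: dep -> tasks that need it
	for task, ds := range deps {
		if _, ok := remaining[task]; !ok {
			remaining[task] = 0
		}
		for _, d := range ds {
			remaining[task]++
			dependents[d] = append(dependents[d], task)
			if _, ok := remaining[d]; !ok {
				remaining[d] = 0
			}
		}
	}
	// Seed with tasks that have no dependencies at all.
	queue := []string{}
	for task, n := range remaining {
		if n == 0 {
			queue = append(queue, task)
		}
	}
	// Remove resolvable tasks one by one; each removal may unblock others.
	for len(queue) > 0 {
		t := queue[0]
		queue = queue[1:]
		for _, dep := range dependents[t] {
			remaining[dep]--
			if remaining[dep] == 0 {
				queue = append(queue, dep)
			}
		}
	}
	// Tasks never reaching zero are cyclic (or blocked by a cycle).
	var cyclic []string
	for task, n := range remaining {
		if n > 0 {
			cyclic = append(cyclic, task)
		}
	}
	sort.Strings(cyclic) // deterministic output
	return cyclic
}

func main() {
	// a -> b -> c is fine; d and e depend on each other (a cycle).
	deps := map[string][]string{
		"b": {"a"},
		"c": {"b"},
		"d": {"e"},
		"e": {"d"},
	}
	fmt.Println(findCycles(deps)) // prints: [d e]
}
```

Each task and each edge is visited a constant number of times, so the whole check is O(V+E), instead of re-walking the graph on every link addition as the original implementation did.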

Changes

Submitter Checklist

As the author of this PR, please check off the items in this checklist:

  • Has Tests included if any functionality added or changed
  • Follows the commit message standard
  • Meets the Tekton contributor standards (including
    functionality, content, code)
  • Has a kind label. You can add one by adding a comment on this PR that contains /kind <type>. Valid types are bug, cleanup, design, documentation, feature, flake, misc, question, tep
  • Release notes block below has been updated with any user facing changes (API changes, bug fixes, changes requiring upgrade notices or deprecation warnings)

/kind bug

Release Notes

bug fixes:
- https://github.com/tektoncd/pipeline/issues/5420 - Improve DAG validation for pipelines with hundreds of tasks (validation webhook performance)

@tekton-robot tekton-robot added kind/bug Categorizes issue or PR as related to a bug. do-not-merge/release-note-label-needed Indicates that a PR should not merge because it's missing one of the release note labels. labels Sep 2, 2022
@tekton-robot tekton-robot added needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Sep 2, 2022
@tekton-robot
Collaborator

Hi @rafalbigaj. Thanks for your PR.

I'm waiting for a tektoncd member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

pkg/reconciler/pipeline/dag/dag.go (resolved)
Comment on lines +142 to +147
if len(deps[dep]) == 0 {
independentTasks.Insert(dep)
}

I can see it's an optimization originating from the fact that no topo-sort order is actually built or printed ("independent" tasks that are also "final" ones are not even put into the independentTasks set).

pkg/reconciler/pipeline/dag/dag.go (resolved)
pkg/reconciler/pipeline/dag/dag.go (outdated, resolved)
@@ -549,6 +550,78 @@ func TestBuild_InvalidDAG(t *testing.T) {
}
}

func TestBuildGraphWithHundredsOfTasks_Success(t *testing.T) {


Maybe some performance check too?

Member

@pritidesai pritidesai Sep 2, 2022

yes please, so that we do not run into it in future, thanks!

Member

The problem with performance checks is that they can be inherently flaky because of the variable performance of the test nodes. If we do add performance checks, I would suggest, for now, only logging the execution time.
We can collect such timings for a while and then decide on an acceptable bar for the execution time.
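A minimal sketch of the "log only" approach suggested above; the helper name, graph shape, and task count are hypothetical, not the actual Tekton test code:

```go
package main

import (
	"fmt"
	"time"
)

// buildLinearDeps creates a simple dependency chain
// task-0 <- task-1 <- ... <- task-(n-1), just to have a large input to time.
func buildLinearDeps(n int) map[string][]string {
	deps := make(map[string][]string, n)
	for i := 1; i < n; i++ {
		deps[fmt.Sprintf("task-%d", i)] = []string{fmt.Sprintf("task-%d", i-1)}
	}
	return deps
}

func main() {
	start := time.Now()
	deps := buildLinearDeps(500)
	// ... the DAG validation code under test would run here ...
	elapsed := time.Since(start)

	// Log the duration instead of asserting a hard limit: hard limits
	// flake on slow CI nodes, while logged timings can be collected over
	// time before deciding on an acceptable bar.
	fmt.Printf("validated %d dependency links in %s\n", len(deps), elapsed)
}
```

In a real test this would be `t.Logf(...)` inside a `testing.T` function, so the timing shows up in verbose test output without ever failing the build.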

Member

@pritidesai pritidesai Sep 2, 2022

That's a great idea @afrittoli, where do we store those timings?

I was running the test you introduced in #3524 (a huge thanks to you for introducing such a test):

func buildPipelineStateWithLargeDepencyGraph(t *testing.T) PipelineRunState {

I ran it locally multiple times yesterday; it took 60 seconds 😲 (without this PR).

At the time this test was introduced, it took less than the default timeout of 30 seconds, based on the PR description (if I am reading it right):

This change adds a unit test that reproduces the issue in https://github.com/tektoncd/pipeline/issues/3521, which
used to fail (with timeout 30s) and now succeeds for pipelines of roughly
up to 120 tasks / 120 links. On my laptop, going beyond 120 tasks/links
takes longer than 30s, so I left the unit test at 80 to avoid
introducing a flaky test in CI. There is still work to do to improve
this further; some profiling / tracing work might help.

There have been many changes introduced since then; we really need a way (nightly performance tests) to flag when we introduce any delay.

Member

Either a nightly performance test or, as you are suggesting, collecting timings over time; I am fine with either option.

Member

Yes. TestBuildGraphWithHundredsOfTasks_Success is, in a way, a kind of performance test (like the one I added at the time), because if things slow down significantly, the tests will eventually time out.
If we collected test execution times and graphed them over time, or if we had a dedicated nightly performance test, we would be able to spot a change in execution time sooner than by just waiting for the tests to time out.

That is something we would need to set up as part of the CI infrastructure. Would you like to create an issue about that?

Member

yup, definitely. We had a PR from @guillaumerose, #4378, which didn't materialize, but at least most of us, including @vdemeester and @imjasonh, were on board with the idea of running nightly performance tests.

@vdemeester
Member

/ok-to-test

@tekton-robot tekton-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Sep 2, 2022
@tekton-robot
Collaborator

The following is the coverage report on the affected files.
Say /test pull-tekton-pipeline-go-coverage to re-run this coverage report

File Old Coverage New Coverage Delta
pkg/reconciler/pipeline/dag/dag.go 98.8% 97.9% -0.8

@tekton-robot
Collaborator

The following is the coverage report on the affected files.
Say /test pull-tekton-pipeline-go-coverage to re-run this coverage report

File Old Coverage New Coverage Delta
pkg/reconciler/pipeline/dag/dag.go 98.8% 99.0% 0.3



@Udiknedormin Udiknedormin left a comment


Looks ok to me now.

@@ -0,0 +1,21 @@
/*
Copyright 2019 The Tekton Authors
Member


2022?

@tekton-robot tekton-robot added release-note Denotes a PR that will be considered when it comes time to generate release notes. and removed do-not-merge/release-note-label-needed Indicates that a PR should not merge because it's missing one of the release note labels. labels Sep 2, 2022
@afrittoli
Member

Thanks @rafalbigaj for this! Do you have any numbers on the performance improvement with the new algorithm?
This change adds complexity to the code, which is fine as long as it is justified by the performance improvement.

DAG validation rewritten using Kahn's algorithm to find cycles in task dependencies.

Original implementation, as pointed out in tektoncd#5420, is the root cause of poor validation webhook performance, which fails at the default timeout (10s).
@rafalbigaj
Contributor Author

@afrittoli let me share, as an example, the results of the TestBuildGraphWithHundredsOfTasks_Success test under both implementations:

  • 0.04s - new implementation based on cycles detection using Kahn's algorithm (findCyclesInDependencies)
  • did not finish in 1h! - old implementation based on cycle detection on every link addition (lookForNode in linkPipelineTasks)

Benchmarked on: 2.3 GHz 8-Core Intel Core i9; 16 GB 2400 MHz DDR4

Member

@afrittoli afrittoli left a comment


Thanks @rafalbigaj for this. The code and test coverage look good to me.
Just one possible NIT, but nothing blocking for this PR.
/approve


// exports for tests

var FindCyclesInDependencies = findCyclesInDependencies
Member


We have a policy of testing exported functions only (generally), but I think in this case it makes sense to test findCyclesInDependencies directly!

NIT: I wonder if, instead of exporting for tests, we could have the test for findCyclesInDependencies in a dedicated test module in the dag package?
Not asking to change this yet; I'd like to see what others think, and we could also change this in a different PR if needed.

@tektoncd/core-maintainers wdyt?
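For reference, the standard Go convention for giving tests access to an unexported function without shipping the export is an `export_test.go` file inside the same package. The sketch below illustrates that convention; the file name and placement are an assumption, not the actual PR diff:

```go
// pkg/reconciler/pipeline/dag/export_test.go
//
// Files ending in _test.go are compiled only during `go test`, so this
// re-export is visible to the external dag_test package but never
// included in the release binary.
package dag

// Re-export the unexported function for black-box tests.
var FindCyclesInDependencies = findCyclesInDependencies
```

With this in place, the `var FindCyclesInDependencies = findCyclesInDependencies` line could move out of dag.go, keeping the production API surface unchanged.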

@tekton-robot
Collaborator

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: afrittoli, Udiknedormin

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@tekton-robot tekton-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Sep 2, 2022
@afrittoli afrittoli added this to the Pipelines v0.40 milestone Sep 2, 2022
@@ -163,9 +200,7 @@ func addLink(pt string, previousTask string, nodes map[string]*Node) error {
return fmt.Errorf("task %s depends on %s but %s wasn't present in Pipeline", pt, previousTask, previousTask)
}
next := nodes[pt]
if err := linkPipelineTasks(prev, next); err != nil {
Member


I had troubleshot up to this point; thanks for taking it further, appreciate your efforts 🙏

@@ -163,9 +200,7 @@ func addLink(pt string, previousTask string, nodes map[string]*Node) error {
return fmt.Errorf("task %s depends on %s but %s wasn't present in Pipeline", pt, previousTask, previousTask)
}
next := nodes[pt]
Member


This is holding a huge struct (object) in real-world applications: it is set to the entire pipelineTask specification, along with the list of params, when expressions, the entire spec when taskSpec is specified, etc. The pipelineTask specification is not required at this point, since all the dependencies (runAfter, task results in params and when expressions, and from) are calculated before calling dag.Build. All we need is the pipelineTask.Name, i.e. HashKey(). There is room for improvement in the future to avoid passing the entire blob around.

@afrittoli afrittoli added the needs-cherry-pick Indicates a PR needs to be cherry-pick to a release branch label Sep 2, 2022
@afrittoli
Member

@tektoncd/core-maintainers I added the "needs-cherry-pick" label for this item. While there was no explicit functional regression, performance has been gradually degrading since v0.36.x, and this PR significantly eases (solves) that. I would propose doing a new series of minor releases to include it.

@abayer
Contributor

abayer commented Sep 5, 2022

/lgtm

@tekton-robot tekton-robot added the lgtm Indicates that a PR is ready to be merged. label Sep 5, 2022
@tekton-robot tekton-robot merged commit 7b00bd2 into tektoncd:main Sep 5, 2022
@afrittoli
Member

/cherrypick release-v0.36.x

@tekton-robot
Collaborator

@afrittoli: #5421 failed to apply on top of branch "release-v0.36.x":

Applying: Improve DAG validation for pipelines with hundreds of tasks
Using index info to reconstruct a base tree...
A	pkg/apis/pipeline/v1/pipeline_validation_test.go
M	pkg/reconciler/pipeline/dag/dag_test.go
Falling back to patching base and 3-way merge...
Auto-merging pkg/reconciler/pipeline/dag/dag_test.go
Auto-merging pkg/apis/pipeline/v1beta1/pipeline_validation_test.go
CONFLICT (content): Merge conflict in pkg/apis/pipeline/v1beta1/pipeline_validation_test.go
error: Failed to merge in the changes.
hint: Use 'git am --show-current-patch=diff' to see the failed patch
Patch failed at 0001 Improve DAG validation for pipelines with hundreds of tasks
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".

In response to this:

/cherrypick release-v0.36.x

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@afrittoli
Member

/cherrypick release-v0.37.x

@tekton-robot
Collaborator

@afrittoli: new pull request created: #5430

In response to this:

/cherrypick release-v0.37.x

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@afrittoli
Member

/cherrypick release-v0.38.x

@afrittoli
Member

/cherrypick release-v0.39.x

@tekton-robot
Collaborator

@afrittoli: new pull request created: #5431

In response to this:

/cherrypick release-v0.38.x

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@tekton-robot
Collaborator

@afrittoli: new pull request created: #5432

In response to this:

/cherrypick release-v0.39.x

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Labels
approved Indicates a PR has been approved by an approver from all required OWNERS files. area/performance Issues or PRs that are related to performance aspects. kind/bug Categorizes issue or PR as related to a bug. lgtm Indicates that a PR is ready to be merged. needs-cherry-pick Indicates a PR needs to be cherry-pick to a release branch ok-to-test Indicates a non-member PR verified by an org member that is safe to test. release-note Denotes a PR that will be considered when it comes time to generate release notes. size/L Denotes a PR that changes 100-499 lines, ignoring generated files.
8 participants