To run tests:
Unit tests and build tests (those run by presubmits) run against your Pipelines clone:
```shell
# Unit tests
go test ./...

# Build tests
./test/presubmit-tests.sh --build-tests
```

E2E tests run test cases in your local Pipelines clone against the Pipelines installation on your current kube cluster. To ensure your local changes are reflected on your cluster, you must first build and install them with `ko apply -R -f ./config/`.
```shell
# Integration tests
go test -v -count=1 -tags=e2e -timeout=20m ./test

# Conformance tests
go test -v -count=1 -tags=conformance -timeout=10m ./test
```

By running the commands above, you start the tests against the cluster of the current-context in your local kubeconfig file (`~/.kube/config` by default) on your local machine.
Sometimes local tests pass but presubmit tests fail. One possible reason is a difference in running environments. The environments our presubmit tests use are stored in `./*.env` files. Specifically:

- `e2e-tests-kind-prow-alpha.env` for pull-tekton-pipeline-alpha-integration-tests
- `e2e-tests-kind-prow-beta.env` for pull-tekton-pipeline-beta-integration-tests (TODO: tektoncd#6048 Add permanent link after plumbing setup for prow)
- `e2e-tests-kind-prow.env` for pull-tekton-pipeline-integration-tests
Unit tests live side by side with the code they are testing and can be run with:

```shell
go test ./...
```

By default `go test` will not run the end to end tests, which need `-tags=e2e` to be enabled.
Kubernetes client-go provides a number of fake clients and objects for unit testing. The ones we are using are:
- Fake Kubernetes client: provides a fake REST interface to interact with the Kubernetes API
- Fake pipeline client: provides a fake REST PipelineClient interface to interact with Pipeline CRDs.
You can create a fake PipelineClient for the Controller under test like this:
```go
import (
    fakepipelineclientset "github.com/tektoncd/pipeline/pkg/client/clientset/versioned/fake"
)

pipelineClient := fakepipelineclientset.NewSimpleClientset()
```

This `pipelineClient` is initialized with no runtime objects. You can also initialize the client with Kubernetes objects and interact with them using `pipelineClient.Pipeline()`:
```go
import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

obj := &v1alpha1.PipelineRun{
    ObjectMeta: metav1.ObjectMeta{
        Name:      "name",
        Namespace: "namespace",
    },
    Spec: v1alpha1.PipelineRunSpec{
        PipelineRef: v1alpha1.PipelineRef{
            Name:       "test-pipeline",
            APIVersion: "a1",
        },
    },
}
pipelineClient := fakepipelineclientset.NewSimpleClientset(obj)
objs, err := pipelineClient.Pipeline().PipelineRuns("namespace").List(metav1.ListOptions{})

// You can verify if List was called in your test like this
action := pipelineClient.Actions()[0]
if action.GetVerb() != "list" {
    t.Errorf("expected list to be called, found %s", action.GetVerb())
}
```

To test the Controller of CRDs (CustomResourceDefinitions), you need to add the CRD to the informers so that the listers can get access.
For example, the following code sets up a fake client and informer for testing PipelineRun:

```go
pipelineClient := fakepipelineclientset.NewSimpleClientset()
sharedInformer := informers.NewSharedInformerFactory(pipelineClient, 0)
pipelineRunsInformer := sharedInformer.Pipeline().V1alpha1().PipelineRuns()

obj := &v1alpha1.PipelineRun{
    ObjectMeta: metav1.ObjectMeta{
        Name:      "name",
        Namespace: "namespace",
    },
    Spec: v1alpha1.PipelineRunSpec{
        PipelineRef: v1alpha1.PipelineRef{
            Name:       "test-pipeline",
            APIVersion: "a1",
        },
    },
}
pipelineRunsInformer.Informer().GetIndexer().Add(obj)
```

Environment variables used by end to end tests:
- `KO_DOCKER_REPO` - Set this to an image registry your tests can push images to.
- `GCP_SERVICE_ACCOUNT_KEY_PATH` - Tests that need to interact with GCS buckets will use the JSON credentials at this path to authenticate with GCS.
- `SYSTEM_NAMESPACE` - Set this to your Tekton deployment namespace, like `tekton-pipelines`. Without this setting, the E2E tests will use `knative-testing` as the default namespace.
- In the Kaniko e2e test, setting `GCP_SERVICE_ACCOUNT_KEY_PATH` to the path of a GCP service account JSON key which has permissions to push to the registry specified in `KO_DOCKER_REPO` will enable Kaniko to use those credentials when pushing an image.
- In the GCS taskrun test, the GCP service account JSON key file at the path `GCP_SERVICE_ACCOUNT_KEY_PATH`, if present, is used to generate a Kubernetes secret to access the GCS bucket.
- In the storage artifact bucket test, the `GCP_SERVICE_ACCOUNT_KEY_PATH` JSON key is used to create/delete a bucket which will be used for output-to-input linking by the `PipelineRun` controller.
To create a service account usable in the e2e tests:

```shell
PROJECT_ID=your-gcp-project
ACCOUNT_NAME=service-account-name

# gcloud configure project
gcloud config set project $PROJECT_ID

# create the service account
gcloud iam service-accounts create $ACCOUNT_NAME --display-name $ACCOUNT_NAME
EMAIL=$(gcloud iam service-accounts list | grep $ACCOUNT_NAME | awk '{print $2}')

# add the storage.admin policy to the account so it can push containers
gcloud projects add-iam-policy-binding $PROJECT_ID --member serviceAccount:$EMAIL --role roles/storage.admin

# create the JSON key
gcloud iam service-accounts keys create config.json --iam-account=$EMAIL

export GCP_SERVICE_ACCOUNT_KEY_PATH="$PWD/config.json"
export SYSTEM_NAMESPACE=tekton-pipelines
```

End to end tests live in this directory. To run these tests, you must provide `go` with `-tags=e2e`. By default the tests run against your current kubeconfig context, but you can change that and other settings with the flags:
```shell
go test -v -count=1 -tags=e2e -timeout=20m ./test
go test -v -count=1 -tags=e2e -timeout=20m ./test --kubeconfig ~/special/kubeconfig --cluster myspecialcluster
```

If the tests run against a cluster with a hardware architecture different from the local one (for instance, `go test` starts on amd64 while `--kubeconfig` points to an s390x Kubernetes cluster), use the `TEST_RUNTIME_ARCH` environment variable to specify the target hardware architecture (amd64, s390x, ppc64le, arm, arm64, etc.).
You can also use all of the flags defined in knative/pkg/test.
To include tests for Windows, you need to specify the `windows_e2e` build tag. For example:

```shell
go test -v -count=1 -tags=e2e,windows_e2e -timeout=20m ./test
```

Please note that in order to run Windows tests, there must be at least one Windows node available in the target Kubernetes cluster.
- By default the e2e tests run against the current cluster in `~/.kube/config` using the environment specified in your environment variables.
- Since these tests are fairly slow, running them with logging enabled is recommended (`-v`).
- Use `--logverbose` to see the verbose log output from tests as well as from k8s libraries.
- Using `-count=1` is the idiomatic way to disable test caching.
- The end to end tests take a long time to run, so a value like `-timeout=20m` can be useful depending on what you're running.
- TestKanikoTaskRun requires containers to run with the root user. Using `-skipRootUserTests=true` skips it.
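Putting the recommended flags together, a typical invocation might look like the following (illustrative; it requires a cluster reachable from your current kubeconfig context):

```shell
go test -v -count=1 -tags=e2e -timeout=20m ./test --logverbose -skipRootUserTests=true
```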
You can use test flags to control the environment your tests run against, i.e. override your environment variables:

```shell
go test -v -tags=e2e -count=1 ./test --kubeconfig ~/special/kubeconfig --cluster myspecialcluster
```

Tests importing github.com/tektoncd/pipeline/test recognize the flags added by knative/pkg/test.
Tests are run in a new random namespace prefixed with the word `arendelle-`. Unless you set the `TEST_KEEP_NAMESPACES` environment variable, these namespaces are automatically cleaned up after running the test.
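When debugging a failing test, keeping the generated namespace around for inspection can help. A sketch, assuming any non-empty value of the variable enables the behavior:

```shell
# Keep the arendelle-* test namespaces for post-run inspection
TEST_KEEP_NAMESPACES=1 go test -v -count=1 -tags=e2e -timeout=20m ./test -run ^TestTaskRun
```

Remember to delete the leftover namespaces yourself once you are done inspecting them.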
To run all the test cases whose names start with the same letters, e.g. TestTaskRun, use the `-run` flag with `go test`:

```shell
go test -v -tags=e2e -count=1 ./test -run ^TestTaskRun
```

To run the YAML e2e tests, run the following command:

```shell
go test -v -count=1 -tags=examples -timeout=20m ./test/
```

To limit the parallelism of tests, use `-parallel=n` where `n` is the number of tests to run in parallel.
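For example, to cap concurrency at four tests at a time:

```shell
go test -v -count=1 -tags=e2e -timeout=20m -parallel=4 ./test
```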
There are two scenarios in upgrade tests. One is to install the previous release, upgrade to the current release, and validate whether the Tekton pipeline works. The other is to install the previous release, create the pipelines and tasks, upgrade to the current release, and validate whether the Tekton pipeline works.
- Set up the cluster
  - Running against a fresh kind cluster
    - `export SKIP_INITIALIZE=true`
  - Running against a GKE cluster
    - `export PROJECT_ID=<my_gcp_project>`
    - install kubetest
- To run the upgrade tests, run the following command:

```shell
./test/e2e-tests-upgrade.sh
```

In the test dir you will find several libraries in the `test` package you can use in your tests.
This library exists partially in this directory and partially in
knative/pkg/test.
The libs in this dir can:

- `init_test.go` initializes anything needed globally by the tests
- Get access to client objects
- Generate random names
- Poll Pipeline resources

All integration tests must be marked with the `e2e` build constraint so that `go test ./...` can be used to run only the unit tests, i.e.:

```go
// +build e2e
```

To initialize client objects, use the command line flags which describe the environment:
```go
func setup(t *testing.T) *test.Clients {
    clients, err := test.NewClients(kubeconfig, cluster, namespaceName)
    if err != nil {
        t.Fatalf("Couldn't initialize clients: %v", err)
    }
    return clients
}
```

The `Clients` struct contains initialized clients for accessing:
- Kubernetes objects
- Pipelines

For example, to create a Pipeline:

```go
_, err = clients.v1PipelineClient.Pipelines.Create(test.Route(namespaceName, pipelineName))
```

And you can use the client to clean up resources created by your test (e.g. in your test cleanup):
```go
func tearDown(clients *test.Clients) {
    if clients != nil {
        clients.Delete([]string{routeName}, []string{configName})
    }
}
```

See clients.go.
You can use the function `GenerateName()` to append a random string to CRDs or anything else, so that your tests can use unique names each time they run.

```go
import "github.com/tektoncd/pipeline/pkg/names"

namespace := names.SimpleNameGenerator.GenerateName("arendelle")
```

After creating Pipeline resources or making changes to them, you will need to wait for the system to realize those changes. You can use polling methods to check that the resources reach the desired state.
The `WaitFor*` functions use the Kubernetes wait package. For polling they use `PollImmediate` behind the scenes. The callback function is a `ConditionFunc`, which returns a bool to indicate whether the function should stop, and an error to indicate whether there was an error.
For example, you can poll a TaskRun until it has a `Status.Condition`:

```go
err = WaitForTaskRunState(c, hwTaskRunName, func(tr *v1alpha1.TaskRun) (bool, error) {
    if len(tr.Status.Conditions) > 0 {
        return true, nil
    }
    return false, nil
}, "TaskRunHasCondition", v1Version)
```

Metrics will be emitted for these Wait methods tracking how long tests poll for.
Conformance tests live in this directory. These tests are used to check the API specs of Pipelines. To run these tests, you must provide `go` with `-tags=conformance`. By default, the tests run against your current kubeconfig context, but you can change that and other settings with flags, just like the end to end tests:

```shell
go test -v -count=1 -tags=conformance -timeout=10m ./test
go test -v -count=1 -tags=conformance -timeout=10m ./test --kubeconfig ~/special/kubeconfig --cluster myspecialcluster
```

The flags that can be set in conformance tests are exactly the same as the flags in end to end tests. Just note that the build tag should be `-tags=conformance`.
presubmit-tests.sh is the entry point for all tests run on presubmit by Prow.

You can run this locally with:

```shell
test/presubmit-tests.sh
test/presubmit-tests.sh --build-tests
test/presubmit-tests.sh --unit-tests
```

Prow is configured in the knative config.yaml in tektoncd/plumbing via the sections for tektoncd/pipeline.
The presubmit integration tests entrypoint will run:
- The integration tests
- A test of our example CRDs
When run using Prow, integration tests will try to get a new cluster using
boskos and
these hardcoded GKE projects,
which only
the tektoncd/plumbing OWNERS
have access to.
If you would like to run the integration tests against your cluster, you can use the current context in your kubeconfig, provide `KO_DOCKER_REPO` (as specified in the DEVELOPMENT.md), use e2e-tests.sh directly, and provide the `--run-tests` argument:

```shell
export KO_DOCKER_REPO=gcr.io/my_docker_repo
test/e2e-tests.sh --run-tests
```

Or you can set `$PROJECT_ID` to a GCP project and rely on kubetest to set up a cluster for you:

```shell
export PROJECT_ID=my_gcp_project
test/presubmit-tests.sh --integration-tests
```

Per-feature flag tests verify that combinations of feature flags work together correctly, ensuring that individual flags don't interfere with each other's functionality and that overall outcomes remain consistent. Per TEP0138, minimum end-to-end tests for stable features are utilized, mocking stable, beta, and alpha stability levels within different test environments.
To run these tests, you must provide `go` with `-tags=featureflags`. By default, the tests run against your current kubeconfig context, but you can change that and other settings with flags, just like the end to end tests:

```shell
go test -v -count=1 -tags=featureflags -timeout=60m ./test -run ^TestPerFeatureFlag
```

The flags that can be set in featureflags tests are exactly the same as the flags in end to end tests. Just note that the build tag should be `-tags=featureflags`.
The e2e test suite implements a categorization system that allows tests to run in parallel or serial mode, optimizing test execution time while ensuring safety for tests that modify shared cluster state.
Tests are categorized using comment annotations in the source code:
- Parallel tests: safe to run concurrently; don't modify shared cluster state
- Serial tests: must run sequentially because they modify ConfigMaps in `system.Namespace()`
The test runner (TestMain in init_test.go) parses these annotations and orchestrates test execution accordingly.
Tests use structured Go comments to declare their execution mode:

```go
// @test:execution=parallel
func TestMyParallelTest(t *testing.T) {
    // Test implementation
}

// @test:execution=serial
// @test:reason=modifies results-from field in feature-flags ConfigMap
// @test:tags=artifacts,featureflags,stateful
func TestMySerialTest(t *testing.T) {
    // Test implementation
}
```

Annotation fields:

- `@test:execution` - Required: either `parallel` or `serial`
- `@test:reason` - Recommended for serial tests: explains why serial execution is needed
- `@test:tags` - Optional: comma-separated tags for categorization and filtering
Use the `-category` flag to control which tests run:

```shell
# Run only parallel tests (fast, safe for concurrent execution)
go test -tags=e2e -category=parallel -timeout=20m ./test

# Run only serial tests (slower, sequential execution)
go test -tags=e2e -category=serial -timeout=20m ./test

# Run all tests with proper ordering (serial → parallel → unknown)
go test -tags=e2e -category=all -timeout=30m ./test

# Show test categorization without running tests
go test -tags=e2e -show-tests ./test
```

When running with `-category=all`, tests execute in this order:
- Serial tests run first, sequentially, with fail-fast behavior
- Parallel tests run next, concurrently
- Unknown tests (unannotated) run last
This ensures that tests modifying shared state complete before parallel execution begins.
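The ordering rule can be sketched as a simple ranking over execution modes. This is illustrative only; the real orchestration lives in init_test.go:

```go
package main

import (
	"fmt"
	"sort"
)

// rank implements the -category=all ordering described above:
// serial tests first, then parallel, then unannotated ("unknown").
func rank(mode string) int {
	switch mode {
	case "serial":
		return 0
	case "parallel":
		return 1
	default:
		return 2
	}
}

func main() {
	// Hypothetical test names, annotated with their execution modes.
	tests := []struct{ name, mode string }{
		{"TestWorkspace", "parallel"},
		{"TestFeatureFlags", "serial"},
		{"TestLegacy", ""}, // unannotated
	}
	sort.SliceStable(tests, func(i, j int) bool {
		return rank(tests[i].mode) < rank(tests[j].mode)
	})
	for _, tc := range tests {
		fmt.Println(tc.name)
	}
}
```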
A test MUST be marked serial if it:

- Modifies ConfigMaps in `system.Namespace()` (typically `tekton-pipelines`)
- Changes feature flags in the `feature-flags` ConfigMap
- Modifies any other shared cluster-wide configuration

Common examples:

- Tests changing the `enable-api-fields`, `results-from`, or `coschedule` feature flags
- Tests modifying `trusted-resources-verification-no-match-policy`
- Any test calling `updateConfigMap(ctx, c.KubeClient, system.Namespace(), ...)`
A test can be marked parallel if it:

- Only creates/modifies resources in its own test namespace
- Doesn't modify `system.Namespace()` ConfigMaps
- Can safely run concurrently with other tests
This includes most conformance tests, resolver tests, workspace tests, etc.
The categorization system is implemented via:

- TestMain (`init_test.go`): parses annotations from source files using Go's AST parser, categorizes tests, and routes execution based on the `-category` flag.
- Annotation Parser: scans `*_test.go` files, extracts `@test:*` comments, and builds a manifest of test metadata.
- Test Filtering: uses `flag.Set("test.run", pattern)` to filter which tests execute in each run.
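A minimal sketch of how such annotation parsing can work with Go's standard `go/parser` package. This is illustrative; the function name `parseExecutionModes` is hypothetical and not the actual init_test.go code:

```go
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
	"strings"
)

// parseExecutionModes scans Go source for top-level Test* functions and
// returns a map of test name -> execution mode declared via a
// "// @test:execution=..." doc comment ("" if unannotated).
func parseExecutionModes(src string) map[string]string {
	modes := map[string]string{}
	fset := token.NewFileSet()
	// ParseComments is required so doc comments are kept in the AST.
	f, err := parser.ParseFile(fset, "example_test.go", src, parser.ParseComments)
	if err != nil {
		return modes
	}
	for _, decl := range f.Decls {
		fn, ok := decl.(*ast.FuncDecl)
		if !ok || !strings.HasPrefix(fn.Name.Name, "Test") {
			continue
		}
		mode := ""
		if fn.Doc != nil {
			for _, c := range fn.Doc.List {
				line := strings.TrimSpace(strings.TrimPrefix(c.Text, "//"))
				if strings.HasPrefix(line, "@test:execution=") {
					mode = strings.TrimPrefix(line, "@test:execution=")
				}
			}
		}
		modes[fn.Name.Name] = mode
	}
	return modes
}

func main() {
	src := `package test

// @test:execution=serial
// @test:reason=modifies a shared ConfigMap
func TestSerialExample(t *testing.T) {}

// @test:execution=parallel
func TestParallelExample(t *testing.T) {}
`
	m := parseExecutionModes(src)
	fmt.Println(m["TestSerialExample"], m["TestParallelExample"])
}
```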