diff --git a/linkerd.io/content/2-edge/tasks/configuring-per-route-policy.md b/linkerd.io/content/2-edge/tasks/configuring-per-route-policy.md index 2c495e1f20..a62c218b47 100644 --- a/linkerd.io/content/2-edge/tasks/configuring-per-route-policy.md +++ b/linkerd.io/content/2-edge/tasks/configuring-per-route-policy.md @@ -30,7 +30,7 @@ haven't already done this. Inject and install the Books demo application: ```bash -$ kubectl create ns booksapp && \ +kubectl create ns booksapp && \ curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/booksapp.yml \ | linkerd inject - \ | kubectl -n booksapp apply -f - @@ -44,21 +44,21 @@ run in the `booksapp` namespace. Confirm that the Linkerd data plane was injected successfully: ```bash -$ linkerd check -n booksapp --proxy -o short +linkerd check -n booksapp --proxy -o short ``` You can take a quick look at all the components that were added to your cluster by running: ```bash -$ kubectl -n booksapp get all +kubectl -n booksapp get all ``` Once the rollout has completed successfully, you can access the app itself by port-forwarding `webapp` locally: ```bash -$ kubectl -n booksapp port-forward svc/webapp 7000 & +kubectl -n booksapp port-forward svc/webapp 7000 & ``` Open [http://localhost:7000/](http://localhost:7000/) in your browser to see the diff --git a/linkerd.io/content/2-edge/tasks/managing-egress-traffic.md b/linkerd.io/content/2-edge/tasks/managing-egress-traffic.md index a43eadb61a..5db5143577 100644 --- a/linkerd.io/content/2-edge/tasks/managing-egress-traffic.md +++ b/linkerd.io/content/2-edge/tasks/managing-egress-traffic.md @@ -235,7 +235,7 @@ Interestingly enough though, if we go back to our client shell and we try to initiate HTTPS traffic to the same service, it will not be allowed: ```bash -~ $ curl -v https://httpbin.org/get +$ curl -v https://httpbin.org/get curl: (35) TLS connect error: error:00000000:lib(0)::reason(0) ``` diff --git 
a/linkerd.io/content/2-edge/tasks/multicluster-using-statefulsets.md b/linkerd.io/content/2-edge/tasks/multicluster-using-statefulsets.md index 81969979a0..2aafd97635 100644 --- a/linkerd.io/content/2-edge/tasks/multicluster-using-statefulsets.md +++ b/linkerd.io/content/2-edge/tasks/multicluster-using-statefulsets.md @@ -48,8 +48,8 @@ The first step is to clone the demo repository on your local machine. ```sh # clone example repository -$ git clone git@github.com:linkerd/l2d-k3d-statefulset.git -$ cd l2d-k3d-statefulset +git clone git@github.com:linkerd/l2d-k3d-statefulset.git +cd l2d-k3d-statefulset ``` The second step consists of creating two `k3d` clusters named `east` and `west`, @@ -185,7 +185,7 @@ If we now curl one of these instances, we will get back a response. ```sh # exec'd on the pod -/ $ curl nginx-set-0.nginx-svc.default.svc.west.cluster.local +$ curl nginx-set-0.nginx-svc.default.svc.west.cluster.local " @@ -255,12 +255,12 @@ NAME READY STATUS RESTARTS AGE curl-56dc7d945d-96r6p 2/2 Running 0 23m # exec and curl -$ kubectl --context=k3d-east exec curl-56dc7d945d-96r6p -it -c curl -- sh +kubectl --context=k3d-east exec curl-56dc7d945d-96r6p -it -c curl -- sh # we want to curl the same hostname we see in the endpoints object above. # however, the service and cluster domain will now be different, since we # are in a different cluster. 
# -/ $ curl nginx-set-0.nginx-svc-k3d-west.default.svc.east.cluster.local +$ curl nginx-set-0.nginx-svc-k3d-west.default.svc.east.cluster.local diff --git a/linkerd.io/content/2-edge/tasks/restricting-access.md b/linkerd.io/content/2-edge/tasks/restricting-access.md index 5654518600..a5787cf354 100644 --- a/linkerd.io/content/2-edge/tasks/restricting-access.md +++ b/linkerd.io/content/2-edge/tasks/restricting-access.md @@ -68,7 +68,7 @@ of requests coming to the voting service and see that all incoming requests to the voting-grpc server are currently unauthorized: ```bash -> linkerd viz authz -n emojivoto deploy/voting +$ linkerd viz authz -n emojivoto deploy/voting ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 default default:all-unauthenticated default/all-unauthenticated 0.0rps 100.00% 0.1rps 1ms 1ms 1ms probe default:all-unauthenticated default/probe 0.0rps 100.00% 0.2rps 1ms 1ms 1ms @@ -112,7 +112,7 @@ the `linkerd viz auth` command queries over a time-window, you may see some UNAUTHORIZED requests displayed for a short amount of time. 
```bash -> linkerd viz authz -n emojivoto deploy/voting +$ linkerd viz authz -n emojivoto deploy/voting ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 default default:all-unauthenticated default/all-unauthenticated 0.0rps 100.00% 0.1rps 1ms 1ms 1ms probe default:all-unauthenticated default/probe 0.0rps 100.00% 0.2rps 1ms 1ms 1ms @@ -123,7 +123,7 @@ We can also test that request from other pods will be rejected by creating a `grpcurl` pod and attempting to access the Voting service from it: ```bash -> kubectl run grpcurl --rm -it --image=networld/grpcurl --restart=Never --command -- ./grpcurl -plaintext voting-svc.emojivoto:8080 emojivoto.v1.VotingService/VoteDog +$ kubectl run grpcurl --rm -it --image=networld/grpcurl --restart=Never --command -- ./grpcurl -plaintext voting-svc.emojivoto:8080 emojivoto.v1.VotingService/VoteDog Error invoking method "emojivoto.v1.VotingService/VoteDog": failed to query for service descriptor "emojivoto.v1.VotingService": rpc error: code = PermissionDenied desc = pod "grpcurl" deleted pod default/grpcurl terminated (Error) @@ -153,7 +153,7 @@ following logic when deciding whether to allow a request: We can set the default policy to `deny` using the `linkerd upgrade` command: ```bash -> linkerd upgrade --default-inbound-policy deny | kubectl apply -f - +linkerd upgrade --default-inbound-policy deny | kubectl apply -f - ``` Alternatively, default policies can be set on individual workloads or namespaces diff --git a/linkerd.io/content/2-edge/tasks/troubleshooting.md b/linkerd.io/content/2-edge/tasks/troubleshooting.md index baaa71e206..0aceee1234 100644 --- a/linkerd.io/content/2-edge/tasks/troubleshooting.md +++ b/linkerd.io/content/2-edge/tasks/troubleshooting.md @@ -1789,12 +1789,12 @@ Make sure that the `proxy-injector` is working correctly by running Ensure all the prometheus related resources are present and running correctly. 
```bash -❯ kubectl -n linkerd-viz get deploy,cm | grep prometheus +$ kubectl -n linkerd-viz get deploy,cm | grep prometheus deployment.apps/prometheus 1/1 1 1 3m18s configmap/prometheus-config 1 3m18s -❯ kubectl get clusterRoleBindings | grep prometheus +$ kubectl get clusterRoleBindings | grep prometheus linkerd-linkerd-viz-prometheus ClusterRole/linkerd-linkerd-viz-prometheus 3m37s -❯ kubectl get clusterRoles | grep prometheus +$ kubectl get clusterRoles | grep prometheus linkerd-linkerd-viz-prometheus 2021-02-26T06:03:11Zh ``` @@ -1810,7 +1810,7 @@ Example failure: Verify that the metrics API pod is running correctly ```bash -❯ kubectl -n linkerd-viz get pods +$ kubectl -n linkerd-viz get pods NAME READY STATUS RESTARTS AGE metrics-api-7bb8cb8489-cbq4m 2/2 Running 0 4m58s tap-injector-6b9bc6fc4-cgbr4 2/2 Running 0 4m56s @@ -2166,7 +2166,7 @@ Agent version: v0.4.4 To update to the latest version: ```bash -linkerd-buoyant install | kubectl apply -f - +$ linkerd-buoyant install | kubectl apply -f - ``` ### √ buoyant-cloud-agent Deployment is running a single pod diff --git a/linkerd.io/content/2.11/tasks/multicluster-using-statefulsets.md b/linkerd.io/content/2.11/tasks/multicluster-using-statefulsets.md index 9d8730b5b0..bf17be188e 100644 --- a/linkerd.io/content/2.11/tasks/multicluster-using-statefulsets.md +++ b/linkerd.io/content/2.11/tasks/multicluster-using-statefulsets.md @@ -48,8 +48,8 @@ The first step is to clone the demo repository on your local machine. ```sh # clone example repository -$ git clone git@github.com:mateiidavid/l2d-k3d-statefulset.git -$ cd l2d-k3d-statefulset +git clone git@github.com:mateiidavid/l2d-k3d-statefulset.git +cd l2d-k3d-statefulset ``` The second step consists of creating two `k3d` clusters named `east` and `west`, @@ -186,7 +186,7 @@ If we now curl one of these instances, we will get back a response. 
```sh # exec'd on the pod -/ $ curl nginx-set-0.nginx-svc.default.svc.west.cluster.local +$ curl nginx-set-0.nginx-svc.default.svc.west.cluster.local " @@ -261,7 +261,7 @@ $ kubectl --context=k3d-east exec pod curl-56dc7d945d-96r6p -it -c curl -- bin/s # however, the service and cluster domain will now be different, since we # are in a different cluster. # -/ $ curl nginx-set-0.nginx-svc-west.default.svc.east.cluster.local +$ curl nginx-set-0.nginx-svc-west.default.svc.east.cluster.local diff --git a/linkerd.io/content/2.12/tasks/configuring-per-route-policy.md b/linkerd.io/content/2.12/tasks/configuring-per-route-policy.md index 24606035be..8cada41de0 100644 --- a/linkerd.io/content/2.12/tasks/configuring-per-route-policy.md +++ b/linkerd.io/content/2.12/tasks/configuring-per-route-policy.md @@ -30,7 +30,7 @@ haven't already done this. Inject and install the Books demo application: ```bash -$ kubectl create ns booksapp && \ +kubectl create ns booksapp && \ curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/booksapp.yml \ | linkerd inject - \ | kubectl -n booksapp apply -f - @@ -44,21 +44,21 @@ run in the `booksapp` namespace. 
Confirm that the Linkerd data plane was injected successfully: ```bash -$ linkerd check -n booksapp --proxy -o short +linkerd check -n booksapp --proxy -o short ``` You can take a quick look at all the components that were added to your cluster by running: ```bash -$ kubectl -n booksapp get all +kubectl -n booksapp get all ``` Once the rollout has completed successfully, you can access the app itself by port-forwarding `webapp` locally: ```bash -$ kubectl -n booksapp port-forward svc/webapp 7000 & +kubectl -n booksapp port-forward svc/webapp 7000 & ``` Open [http://localhost:7000/](http://localhost:7000/) in your browser to see the diff --git a/linkerd.io/content/2.12/tasks/multicluster-using-statefulsets.md b/linkerd.io/content/2.12/tasks/multicluster-using-statefulsets.md index 9d8730b5b0..bf17be188e 100644 --- a/linkerd.io/content/2.12/tasks/multicluster-using-statefulsets.md +++ b/linkerd.io/content/2.12/tasks/multicluster-using-statefulsets.md @@ -48,8 +48,8 @@ The first step is to clone the demo repository on your local machine. ```sh # clone example repository -$ git clone git@github.com:mateiidavid/l2d-k3d-statefulset.git -$ cd l2d-k3d-statefulset +git clone git@github.com:mateiidavid/l2d-k3d-statefulset.git +cd l2d-k3d-statefulset ``` The second step consists of creating two `k3d` clusters named `east` and `west`, @@ -186,7 +186,7 @@ If we now curl one of these instances, we will get back a response. ```sh # exec'd on the pod -/ $ curl nginx-set-0.nginx-svc.default.svc.west.cluster.local +$ curl nginx-set-0.nginx-svc.default.svc.west.cluster.local " @@ -261,7 +261,7 @@ $ kubectl --context=k3d-east exec pod curl-56dc7d945d-96r6p -it -c curl -- bin/s # however, the service and cluster domain will now be different, since we # are in a different cluster. 
# -/ $ curl nginx-set-0.nginx-svc-west.default.svc.east.cluster.local +$ curl nginx-set-0.nginx-svc-west.default.svc.east.cluster.local diff --git a/linkerd.io/content/2.12/tasks/upgrade.md b/linkerd.io/content/2.12/tasks/upgrade.md index 7e608fd341..7edc1d56a9 100644 --- a/linkerd.io/content/2.12/tasks/upgrade.md +++ b/linkerd.io/content/2.12/tasks/upgrade.md @@ -375,7 +375,7 @@ remove the Helm release config for the old `linkerd2` chart (assuming you used the "Secret" storage backend, which is the default): ```bash -$ kubectl -n default delete secret \ +kubectl -n default delete secret \ --field-selector type=helm.sh/release.v1 \ -l name=linkerd,owner=helm ``` @@ -555,7 +555,7 @@ chart or installing the Linkerd-Viz chart. See below for a complete list of values which have moved. ```bash -helm repo update +helm repo up # Upgrade the control plane (this will remove viz components). helm upgrade linkerd2 linkerd/linkerd2 --reset-values -f values.yaml --atomic # Install the Linkerd-Viz extension to restore viz functionality. diff --git a/linkerd.io/content/2.13/tasks/configuring-per-route-policy.md b/linkerd.io/content/2.13/tasks/configuring-per-route-policy.md index 018f3a706a..c5cefa6861 100644 --- a/linkerd.io/content/2.13/tasks/configuring-per-route-policy.md +++ b/linkerd.io/content/2.13/tasks/configuring-per-route-policy.md @@ -30,7 +30,7 @@ haven't already done this. Inject and install the Books demo application: ```bash -$ kubectl create ns booksapp && \ +kubectl create ns booksapp && \ curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/booksapp.yml \ | linkerd inject - \ | kubectl -n booksapp apply -f - @@ -44,21 +44,21 @@ run in the `booksapp` namespace. 
Confirm that the Linkerd data plane was injected successfully: ```bash -$ linkerd check -n booksapp --proxy -o short +linkerd check -n booksapp --proxy -o short ``` You can take a quick look at all the components that were added to your cluster by running: ```bash -$ kubectl -n booksapp get all +kubectl -n booksapp get all ``` Once the rollout has completed successfully, you can access the app itself by port-forwarding `webapp` locally: ```bash -$ kubectl -n booksapp port-forward svc/webapp 7000 & +kubectl -n booksapp port-forward svc/webapp 7000 & ``` Open [http://localhost:7000/](http://localhost:7000/) in your browser to see the diff --git a/linkerd.io/content/2.13/tasks/multicluster-using-statefulsets.md b/linkerd.io/content/2.13/tasks/multicluster-using-statefulsets.md index 9d8730b5b0..bf17be188e 100644 --- a/linkerd.io/content/2.13/tasks/multicluster-using-statefulsets.md +++ b/linkerd.io/content/2.13/tasks/multicluster-using-statefulsets.md @@ -48,8 +48,8 @@ The first step is to clone the demo repository on your local machine. ```sh # clone example repository -$ git clone git@github.com:mateiidavid/l2d-k3d-statefulset.git -$ cd l2d-k3d-statefulset +git clone git@github.com:mateiidavid/l2d-k3d-statefulset.git +cd l2d-k3d-statefulset ``` The second step consists of creating two `k3d` clusters named `east` and `west`, @@ -186,7 +186,7 @@ If we now curl one of these instances, we will get back a response. ```sh # exec'd on the pod -/ $ curl nginx-set-0.nginx-svc.default.svc.west.cluster.local +$ curl nginx-set-0.nginx-svc.default.svc.west.cluster.local " @@ -261,7 +261,7 @@ $ kubectl --context=k3d-east exec pod curl-56dc7d945d-96r6p -it -c curl -- bin/s # however, the service and cluster domain will now be different, since we # are in a different cluster. 
# -/ $ curl nginx-set-0.nginx-svc-west.default.svc.east.cluster.local +$ curl nginx-set-0.nginx-svc-west.default.svc.east.cluster.local diff --git a/linkerd.io/content/2.13/tasks/troubleshooting.md b/linkerd.io/content/2.13/tasks/troubleshooting.md index 7ec6896a2d..7ed6aaf5d1 100644 --- a/linkerd.io/content/2.13/tasks/troubleshooting.md +++ b/linkerd.io/content/2.13/tasks/troubleshooting.md @@ -1721,12 +1721,12 @@ Make sure that the `proxy-injector` is working correctly by running Ensure all the prometheus related resources are present and running correctly. ```bash -❯ kubectl -n linkerd-viz get deploy,cm | grep prometheus +$ kubectl -n linkerd-viz get deploy,cm | grep prometheus deployment.apps/prometheus 1/1 1 1 3m18s configmap/prometheus-config 1 3m18s -❯ kubectl get clusterRoleBindings | grep prometheus +$ kubectl get clusterRoleBindings | grep prometheus linkerd-linkerd-viz-prometheus ClusterRole/linkerd-linkerd-viz-prometheus 3m37s -❯ kubectl get clusterRoles | grep prometheus +$ kubectl get clusterRoles | grep prometheus linkerd-linkerd-viz-prometheus 2021-02-26T06:03:11Zh ``` @@ -1742,7 +1742,7 @@ Example failure: Verify that the metrics API pod is running correctly ```bash -❯ kubectl -n linkerd-viz get pods +$ kubectl -n linkerd-viz get pods NAME READY STATUS RESTARTS AGE metrics-api-7bb8cb8489-cbq4m 2/2 Running 0 4m58s tap-injector-6b9bc6fc4-cgbr4 2/2 Running 0 4m56s diff --git a/linkerd.io/content/2.13/tasks/upgrade.md b/linkerd.io/content/2.13/tasks/upgrade.md index 08c3e70a35..c09193921d 100644 --- a/linkerd.io/content/2.13/tasks/upgrade.md +++ b/linkerd.io/content/2.13/tasks/upgrade.md @@ -388,7 +388,7 @@ remove the Helm release config for the old `linkerd2` chart (assuming you used the "Secret" storage backend, which is the default): ```bash -$ kubectl -n default delete secret \ +kubectl -n default delete secret \ --field-selector type=helm.sh/release.v1 \ -l name=linkerd,owner=helm ``` diff --git 
a/linkerd.io/content/2.14/tasks/configuring-per-route-policy.md b/linkerd.io/content/2.14/tasks/configuring-per-route-policy.md index a5c8b5c2ef..ea27641a9c 100644 --- a/linkerd.io/content/2.14/tasks/configuring-per-route-policy.md +++ b/linkerd.io/content/2.14/tasks/configuring-per-route-policy.md @@ -30,7 +30,7 @@ haven't already done this. Inject and install the Books demo application: ```bash -$ kubectl create ns booksapp && \ +kubectl create ns booksapp && \ curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/booksapp.yml \ | linkerd inject - \ | kubectl -n booksapp apply -f - @@ -44,21 +44,21 @@ run in the `booksapp` namespace. Confirm that the Linkerd data plane was injected successfully: ```bash -$ linkerd check -n booksapp --proxy -o short +linkerd check -n booksapp --proxy -o short ``` You can take a quick look at all the components that were added to your cluster by running: ```bash -$ kubectl -n booksapp get all +kubectl -n booksapp get all ``` Once the rollout has completed successfully, you can access the app itself by port-forwarding `webapp` locally: ```bash -$ kubectl -n booksapp port-forward svc/webapp 7000 & +kubectl -n booksapp port-forward svc/webapp 7000 & ``` Open [http://localhost:7000/](http://localhost:7000/) in your browser to see the diff --git a/linkerd.io/content/2.14/tasks/multicluster-using-statefulsets.md b/linkerd.io/content/2.14/tasks/multicluster-using-statefulsets.md index 9d8730b5b0..bf17be188e 100644 --- a/linkerd.io/content/2.14/tasks/multicluster-using-statefulsets.md +++ b/linkerd.io/content/2.14/tasks/multicluster-using-statefulsets.md @@ -48,8 +48,8 @@ The first step is to clone the demo repository on your local machine. 
```sh # clone example repository -$ git clone git@github.com:mateiidavid/l2d-k3d-statefulset.git -$ cd l2d-k3d-statefulset +git clone git@github.com:mateiidavid/l2d-k3d-statefulset.git +cd l2d-k3d-statefulset ``` The second step consists of creating two `k3d` clusters named `east` and `west`, @@ -186,7 +186,7 @@ If we now curl one of these instances, we will get back a response. ```sh # exec'd on the pod -/ $ curl nginx-set-0.nginx-svc.default.svc.west.cluster.local +$ curl nginx-set-0.nginx-svc.default.svc.west.cluster.local " @@ -261,7 +261,7 @@ $ kubectl --context=k3d-east exec pod curl-56dc7d945d-96r6p -it -c curl -- bin/s # however, the service and cluster domain will now be different, since we # are in a different cluster. # -/ $ curl nginx-set-0.nginx-svc-west.default.svc.east.cluster.local +$ curl nginx-set-0.nginx-svc-west.default.svc.east.cluster.local diff --git a/linkerd.io/content/2.14/tasks/restricting-access.md b/linkerd.io/content/2.14/tasks/restricting-access.md index 0b0b0c94b7..af25ce411e 100644 --- a/linkerd.io/content/2.14/tasks/restricting-access.md +++ b/linkerd.io/content/2.14/tasks/restricting-access.md @@ -68,7 +68,7 @@ of requests coming to the voting service and see that all incoming requests to the voting-grpc server are currently unauthorized: ```bash -> linkerd viz authz -n emojivoto deploy/voting +$ linkerd viz authz -n emojivoto deploy/voting ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 default default:all-unauthenticated default/all-unauthenticated 0.0rps 100.00% 0.1rps 1ms 1ms 1ms probe default:all-unauthenticated default/probe 0.0rps 100.00% 0.2rps 1ms 1ms 1ms @@ -112,7 +112,7 @@ the `linkerd viz auth` command queries over a time-window, you may see some UNAUTHORIZED requests displayed for a short amount of time. 
```bash -> linkerd viz authz -n emojivoto deploy/voting +$ linkerd viz authz -n emojivoto deploy/voting ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 default default:all-unauthenticated default/all-unauthenticated 0.0rps 100.00% 0.1rps 1ms 1ms 1ms probe default:all-unauthenticated default/probe 0.0rps 100.00% 0.2rps 1ms 1ms 1ms @@ -123,7 +123,7 @@ We can also test that request from other pods will be rejected by creating a `grpcurl` pod and attempting to access the Voting service from it: ```bash -> kubectl run grpcurl --rm -it --image=networld/grpcurl --restart=Never --command -- ./grpcurl -plaintext voting-svc.emojivoto:8080 emojivoto.v1.VotingService/VoteDog +$ kubectl run grpcurl --rm -it --image=networld/grpcurl --restart=Never --command -- ./grpcurl -plaintext voting-svc.emojivoto:8080 emojivoto.v1.VotingService/VoteDog Error invoking method "emojivoto.v1.VotingService/VoteDog": failed to query for service descriptor "emojivoto.v1.VotingService": rpc error: code = PermissionDenied desc = pod "grpcurl" deleted pod default/grpcurl terminated (Error) @@ -153,7 +153,7 @@ following logic when deciding whether to allow a request: We can set the default policy to `deny` using the `linkerd upgrade` command: ```bash -> linkerd upgrade --default-inbound-policy deny | kubectl apply -f - +linkerd upgrade --default-inbound-policy deny | kubectl apply -f - ``` Alternatively, default policies can be set on individual workloads or namespaces diff --git a/linkerd.io/content/2.14/tasks/troubleshooting.md b/linkerd.io/content/2.14/tasks/troubleshooting.md index 7ec6896a2d..7ed6aaf5d1 100644 --- a/linkerd.io/content/2.14/tasks/troubleshooting.md +++ b/linkerd.io/content/2.14/tasks/troubleshooting.md @@ -1721,12 +1721,12 @@ Make sure that the `proxy-injector` is working correctly by running Ensure all the prometheus related resources are present and running correctly. 
```bash -❯ kubectl -n linkerd-viz get deploy,cm | grep prometheus +$ kubectl -n linkerd-viz get deploy,cm | grep prometheus deployment.apps/prometheus 1/1 1 1 3m18s configmap/prometheus-config 1 3m18s -❯ kubectl get clusterRoleBindings | grep prometheus +$ kubectl get clusterRoleBindings | grep prometheus linkerd-linkerd-viz-prometheus ClusterRole/linkerd-linkerd-viz-prometheus 3m37s -❯ kubectl get clusterRoles | grep prometheus +$ kubectl get clusterRoles | grep prometheus linkerd-linkerd-viz-prometheus 2021-02-26T06:03:11Zh ``` @@ -1742,7 +1742,7 @@ Example failure: Verify that the metrics API pod is running correctly ```bash -❯ kubectl -n linkerd-viz get pods +$ kubectl -n linkerd-viz get pods NAME READY STATUS RESTARTS AGE metrics-api-7bb8cb8489-cbq4m 2/2 Running 0 4m58s tap-injector-6b9bc6fc4-cgbr4 2/2 Running 0 4m56s diff --git a/linkerd.io/content/2.14/tasks/upgrade.md b/linkerd.io/content/2.14/tasks/upgrade.md index 32f921e829..2b4761befb 100644 --- a/linkerd.io/content/2.14/tasks/upgrade.md +++ b/linkerd.io/content/2.14/tasks/upgrade.md @@ -402,7 +402,7 @@ remove the Helm release config for the old `linkerd2` chart (assuming you used the "Secret" storage backend, which is the default): ```bash -$ kubectl -n default delete secret \ +kubectl -n default delete secret \ --field-selector type=helm.sh/release.v1 \ -l name=linkerd,owner=helm ``` diff --git a/linkerd.io/content/2.15/tasks/configuring-per-route-policy.md b/linkerd.io/content/2.15/tasks/configuring-per-route-policy.md index a5c8b5c2ef..ea27641a9c 100644 --- a/linkerd.io/content/2.15/tasks/configuring-per-route-policy.md +++ b/linkerd.io/content/2.15/tasks/configuring-per-route-policy.md @@ -30,7 +30,7 @@ haven't already done this. 
Inject and install the Books demo application: ```bash -$ kubectl create ns booksapp && \ +kubectl create ns booksapp && \ curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/booksapp.yml \ | linkerd inject - \ | kubectl -n booksapp apply -f - @@ -44,21 +44,21 @@ run in the `booksapp` namespace. Confirm that the Linkerd data plane was injected successfully: ```bash -$ linkerd check -n booksapp --proxy -o short +linkerd check -n booksapp --proxy -o short ``` You can take a quick look at all the components that were added to your cluster by running: ```bash -$ kubectl -n booksapp get all +kubectl -n booksapp get all ``` Once the rollout has completed successfully, you can access the app itself by port-forwarding `webapp` locally: ```bash -$ kubectl -n booksapp port-forward svc/webapp 7000 & +kubectl -n booksapp port-forward svc/webapp 7000 & ``` Open [http://localhost:7000/](http://localhost:7000/) in your browser to see the diff --git a/linkerd.io/content/2.15/tasks/multicluster-using-statefulsets.md b/linkerd.io/content/2.15/tasks/multicluster-using-statefulsets.md index 9d8730b5b0..bf17be188e 100644 --- a/linkerd.io/content/2.15/tasks/multicluster-using-statefulsets.md +++ b/linkerd.io/content/2.15/tasks/multicluster-using-statefulsets.md @@ -48,8 +48,8 @@ The first step is to clone the demo repository on your local machine. ```sh # clone example repository -$ git clone git@github.com:mateiidavid/l2d-k3d-statefulset.git -$ cd l2d-k3d-statefulset +git clone git@github.com:mateiidavid/l2d-k3d-statefulset.git +cd l2d-k3d-statefulset ``` The second step consists of creating two `k3d` clusters named `east` and `west`, @@ -186,7 +186,7 @@ If we now curl one of these instances, we will get back a response. 
```sh # exec'd on the pod -/ $ curl nginx-set-0.nginx-svc.default.svc.west.cluster.local +$ curl nginx-set-0.nginx-svc.default.svc.west.cluster.local " @@ -261,7 +261,7 @@ $ kubectl --context=k3d-east exec pod curl-56dc7d945d-96r6p -it -c curl -- bin/s # however, the service and cluster domain will now be different, since we # are in a different cluster. # -/ $ curl nginx-set-0.nginx-svc-west.default.svc.east.cluster.local +$ curl nginx-set-0.nginx-svc-west.default.svc.east.cluster.local diff --git a/linkerd.io/content/2.15/tasks/restricting-access.md b/linkerd.io/content/2.15/tasks/restricting-access.md index 0b0b0c94b7..af25ce411e 100644 --- a/linkerd.io/content/2.15/tasks/restricting-access.md +++ b/linkerd.io/content/2.15/tasks/restricting-access.md @@ -68,7 +68,7 @@ of requests coming to the voting service and see that all incoming requests to the voting-grpc server are currently unauthorized: ```bash -> linkerd viz authz -n emojivoto deploy/voting +$ linkerd viz authz -n emojivoto deploy/voting ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 default default:all-unauthenticated default/all-unauthenticated 0.0rps 100.00% 0.1rps 1ms 1ms 1ms probe default:all-unauthenticated default/probe 0.0rps 100.00% 0.2rps 1ms 1ms 1ms @@ -112,7 +112,7 @@ the `linkerd viz auth` command queries over a time-window, you may see some UNAUTHORIZED requests displayed for a short amount of time. 
```bash -> linkerd viz authz -n emojivoto deploy/voting +$ linkerd viz authz -n emojivoto deploy/voting ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 default default:all-unauthenticated default/all-unauthenticated 0.0rps 100.00% 0.1rps 1ms 1ms 1ms probe default:all-unauthenticated default/probe 0.0rps 100.00% 0.2rps 1ms 1ms 1ms @@ -123,7 +123,7 @@ We can also test that request from other pods will be rejected by creating a `grpcurl` pod and attempting to access the Voting service from it: ```bash -> kubectl run grpcurl --rm -it --image=networld/grpcurl --restart=Never --command -- ./grpcurl -plaintext voting-svc.emojivoto:8080 emojivoto.v1.VotingService/VoteDog +$ kubectl run grpcurl --rm -it --image=networld/grpcurl --restart=Never --command -- ./grpcurl -plaintext voting-svc.emojivoto:8080 emojivoto.v1.VotingService/VoteDog Error invoking method "emojivoto.v1.VotingService/VoteDog": failed to query for service descriptor "emojivoto.v1.VotingService": rpc error: code = PermissionDenied desc = pod "grpcurl" deleted pod default/grpcurl terminated (Error) @@ -153,7 +153,7 @@ following logic when deciding whether to allow a request: We can set the default policy to `deny` using the `linkerd upgrade` command: ```bash -> linkerd upgrade --default-inbound-policy deny | kubectl apply -f - +linkerd upgrade --default-inbound-policy deny | kubectl apply -f - ``` Alternatively, default policies can be set on individual workloads or namespaces diff --git a/linkerd.io/content/2.15/tasks/troubleshooting.md b/linkerd.io/content/2.15/tasks/troubleshooting.md index bc58809cf8..250d30d24e 100644 --- a/linkerd.io/content/2.15/tasks/troubleshooting.md +++ b/linkerd.io/content/2.15/tasks/troubleshooting.md @@ -1736,12 +1736,12 @@ Make sure that the `proxy-injector` is working correctly by running Ensure all the prometheus related resources are present and running correctly. 
```bash -❯ kubectl -n linkerd-viz get deploy,cm | grep prometheus +$ kubectl -n linkerd-viz get deploy,cm | grep prometheus deployment.apps/prometheus 1/1 1 1 3m18s configmap/prometheus-config 1 3m18s -❯ kubectl get clusterRoleBindings | grep prometheus +$ kubectl get clusterRoleBindings | grep prometheus linkerd-linkerd-viz-prometheus ClusterRole/linkerd-linkerd-viz-prometheus 3m37s -❯ kubectl get clusterRoles | grep prometheus +$ kubectl get clusterRoles | grep prometheus linkerd-linkerd-viz-prometheus 2021-02-26T06:03:11Zh ``` @@ -1757,7 +1757,7 @@ Example failure: Verify that the metrics API pod is running correctly ```bash -❯ kubectl -n linkerd-viz get pods +$ kubectl -n linkerd-viz get pods NAME READY STATUS RESTARTS AGE metrics-api-7bb8cb8489-cbq4m 2/2 Running 0 4m58s tap-injector-6b9bc6fc4-cgbr4 2/2 Running 0 4m56s diff --git a/linkerd.io/content/2.15/tasks/upgrade.md b/linkerd.io/content/2.15/tasks/upgrade.md index 23547217a4..38cd784e47 100644 --- a/linkerd.io/content/2.15/tasks/upgrade.md +++ b/linkerd.io/content/2.15/tasks/upgrade.md @@ -464,7 +464,7 @@ remove the Helm release config for the old `linkerd2` chart (assuming you used the "Secret" storage backend, which is the default): ```bash -$ kubectl -n default delete secret \ +kubectl -n default delete secret \ --field-selector type=helm.sh/release.v1 \ -l name=linkerd,owner=helm ``` diff --git a/linkerd.io/content/2.16/tasks/configuring-per-route-policy.md b/linkerd.io/content/2.16/tasks/configuring-per-route-policy.md index a5c8b5c2ef..ea27641a9c 100644 --- a/linkerd.io/content/2.16/tasks/configuring-per-route-policy.md +++ b/linkerd.io/content/2.16/tasks/configuring-per-route-policy.md @@ -30,7 +30,7 @@ haven't already done this. 
Inject and install the Books demo application: ```bash -$ kubectl create ns booksapp && \ +kubectl create ns booksapp && \ curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/booksapp.yml \ | linkerd inject - \ | kubectl -n booksapp apply -f - @@ -44,21 +44,21 @@ run in the `booksapp` namespace. Confirm that the Linkerd data plane was injected successfully: ```bash -$ linkerd check -n booksapp --proxy -o short +linkerd check -n booksapp --proxy -o short ``` You can take a quick look at all the components that were added to your cluster by running: ```bash -$ kubectl -n booksapp get all +kubectl -n booksapp get all ``` Once the rollout has completed successfully, you can access the app itself by port-forwarding `webapp` locally: ```bash -$ kubectl -n booksapp port-forward svc/webapp 7000 & +kubectl -n booksapp port-forward svc/webapp 7000 & ``` Open [http://localhost:7000/](http://localhost:7000/) in your browser to see the diff --git a/linkerd.io/content/2.16/tasks/multicluster-using-statefulsets.md b/linkerd.io/content/2.16/tasks/multicluster-using-statefulsets.md index 912241d181..b64d8882da 100644 --- a/linkerd.io/content/2.16/tasks/multicluster-using-statefulsets.md +++ b/linkerd.io/content/2.16/tasks/multicluster-using-statefulsets.md @@ -48,8 +48,8 @@ The first step is to clone the demo repository on your local machine. ```sh # clone example repository -$ git clone git@github.com:mateiidavid/l2d-k3d-statefulset.git -$ cd l2d-k3d-statefulset +git clone git@github.com:mateiidavid/l2d-k3d-statefulset.git +cd l2d-k3d-statefulset ``` The second step consists of creating two `k3d` clusters named `east` and `west`, @@ -186,7 +186,7 @@ If we now curl one of these instances, we will get back a response. 
 ```sh
 # exec'd on the pod
-/ $ curl nginx-set-0.nginx-svc.default.svc.west.cluster.local
+$ curl nginx-set-0.nginx-svc.default.svc.west.cluster.local
 "
@@ -261,7 +261,7 @@ $ kubectl --context=k3d-east exec pod curl-56dc7d945d-96r6p -it -c curl -- bin/s
 # however, the service and cluster domain will now be different, since we
 # are in a different cluster.
 #
-/ $ curl nginx-set-0.nginx-svc-west.default.svc.east.cluster.local
+$ curl nginx-set-0.nginx-svc-west.default.svc.east.cluster.local
diff --git a/linkerd.io/content/2.16/tasks/restricting-access.md b/linkerd.io/content/2.16/tasks/restricting-access.md
index 5654518600..a5787cf354 100644
--- a/linkerd.io/content/2.16/tasks/restricting-access.md
+++ b/linkerd.io/content/2.16/tasks/restricting-access.md
@@ -68,7 +68,7 @@ of requests coming to the voting service and see that all incoming requests to
 the voting-grpc server are currently unauthorized:

 ```bash
-> linkerd viz authz -n emojivoto deploy/voting
+$ linkerd viz authz -n emojivoto deploy/voting
 ROUTE    SERVER                       AUTHORIZATION                UNAUTHORIZED  SUCCESS  RPS     LATENCY_P50  LATENCY_P95  LATENCY_P99
 default  default:all-unauthenticated  default/all-unauthenticated  0.0rps        100.00%  0.1rps  1ms          1ms          1ms
 probe    default:all-unauthenticated  default/probe                0.0rps        100.00%  0.2rps  1ms          1ms          1ms
@@ -112,7 +112,7 @@ the `linkerd viz auth` command queries over a time-window, you may see some
 UNAUTHORIZED requests displayed for a short amount of time.
 ```bash
-> linkerd viz authz -n emojivoto deploy/voting
+$ linkerd viz authz -n emojivoto deploy/voting
 ROUTE    SERVER                       AUTHORIZATION                UNAUTHORIZED  SUCCESS  RPS     LATENCY_P50  LATENCY_P95  LATENCY_P99
 default  default:all-unauthenticated  default/all-unauthenticated  0.0rps        100.00%  0.1rps  1ms          1ms          1ms
 probe    default:all-unauthenticated  default/probe                0.0rps        100.00%  0.2rps  1ms          1ms          1ms
@@ -123,7 +123,7 @@ We can also test that request from other pods will be rejected by creating a
 `grpcurl` pod and attempting to access the Voting service from it:

 ```bash
-> kubectl run grpcurl --rm -it --image=networld/grpcurl --restart=Never --command -- ./grpcurl -plaintext voting-svc.emojivoto:8080 emojivoto.v1.VotingService/VoteDog
+$ kubectl run grpcurl --rm -it --image=networld/grpcurl --restart=Never --command -- ./grpcurl -plaintext voting-svc.emojivoto:8080 emojivoto.v1.VotingService/VoteDog
 Error invoking method "emojivoto.v1.VotingService/VoteDog": failed to query for service descriptor "emojivoto.v1.VotingService": rpc error: code = PermissionDenied desc =
 pod "grpcurl" deleted
 pod default/grpcurl terminated (Error)
@@ -153,7 +153,7 @@ following logic when deciding whether to allow a request:

 We can set the default policy to `deny` using the `linkerd upgrade` command:

 ```bash
-> linkerd upgrade --default-inbound-policy deny | kubectl apply -f -
+linkerd upgrade --default-inbound-policy deny | kubectl apply -f -
 ```

 Alternatively, default policies can be set on individual workloads or namespaces
diff --git a/linkerd.io/content/2.16/tasks/troubleshooting.md b/linkerd.io/content/2.16/tasks/troubleshooting.md
index bc58809cf8..250d30d24e 100644
--- a/linkerd.io/content/2.16/tasks/troubleshooting.md
+++ b/linkerd.io/content/2.16/tasks/troubleshooting.md
@@ -1736,12 +1736,12 @@ Make sure that the `proxy-injector` is working correctly by running

 Ensure all the prometheus related resources are present and running correctly.
 ```bash
-❯ kubectl -n linkerd-viz get deploy,cm | grep prometheus
+$ kubectl -n linkerd-viz get deploy,cm | grep prometheus
 deployment.apps/prometheus    1/1    1    1    3m18s
 configmap/prometheus-config   1    3m18s
-❯ kubectl get clusterRoleBindings | grep prometheus
+$ kubectl get clusterRoleBindings | grep prometheus
 linkerd-linkerd-viz-prometheus   ClusterRole/linkerd-linkerd-viz-prometheus   3m37s
-❯ kubectl get clusterRoles | grep prometheus
+$ kubectl get clusterRoles | grep prometheus
 linkerd-linkerd-viz-prometheus   2021-02-26T06:03:11Zh
 ```
@@ -1757,7 +1757,7 @@ Example failure:

 Verify that the metrics API pod is running correctly

 ```bash
-❯ kubectl -n linkerd-viz get pods
+$ kubectl -n linkerd-viz get pods
 NAME                           READY   STATUS    RESTARTS   AGE
 metrics-api-7bb8cb8489-cbq4m   2/2     Running   0          4m58s
 tap-injector-6b9bc6fc4-cgbr4   2/2     Running   0          4m56s
diff --git a/linkerd.io/content/2.16/tasks/upgrade.md b/linkerd.io/content/2.16/tasks/upgrade.md
index 23547217a4..38cd784e47 100644
--- a/linkerd.io/content/2.16/tasks/upgrade.md
+++ b/linkerd.io/content/2.16/tasks/upgrade.md
@@ -464,7 +464,7 @@ remove the Helm release config for the old `linkerd2` chart (assuming you used
 the "Secret" storage backend, which is the default):

 ```bash
-$ kubectl -n default delete secret \
+kubectl -n default delete secret \
   --field-selector type=helm.sh/release.v1 \
   -l name=linkerd,owner=helm
 ```
diff --git a/linkerd.io/content/2.17/tasks/configuring-per-route-policy.md b/linkerd.io/content/2.17/tasks/configuring-per-route-policy.md
index a5c8b5c2ef..ea27641a9c 100644
--- a/linkerd.io/content/2.17/tasks/configuring-per-route-policy.md
+++ b/linkerd.io/content/2.17/tasks/configuring-per-route-policy.md
@@ -30,7 +30,7 @@ haven't already done this.
 Inject and install the Books demo application:

 ```bash
-$ kubectl create ns booksapp && \
+kubectl create ns booksapp && \
   curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/booksapp.yml \
   | linkerd inject - \
   | kubectl -n booksapp apply -f -
@@ -44,21 +44,21 @@ run in the `booksapp` namespace.

 Confirm that the Linkerd data plane was injected successfully:

 ```bash
-$ linkerd check -n booksapp --proxy -o short
+linkerd check -n booksapp --proxy -o short
 ```

 You can take a quick look at all the components that were added to your cluster
 by running:

 ```bash
-$ kubectl -n booksapp get all
+kubectl -n booksapp get all
 ```

 Once the rollout has completed successfully, you can access the app itself by
 port-forwarding `webapp` locally:

 ```bash
-$ kubectl -n booksapp port-forward svc/webapp 7000 &
+kubectl -n booksapp port-forward svc/webapp 7000 &
 ```

 Open [http://localhost:7000/](http://localhost:7000/) in your browser to see the
diff --git a/linkerd.io/content/2.17/tasks/managing-egress-traffic.md b/linkerd.io/content/2.17/tasks/managing-egress-traffic.md
index d77f290917..6571a4dd42 100644
--- a/linkerd.io/content/2.17/tasks/managing-egress-traffic.md
+++ b/linkerd.io/content/2.17/tasks/managing-egress-traffic.md
@@ -76,7 +76,7 @@ In a separate shell, you can use the Linkerd diagnostics command to
 visualize the traffic.
 ```bash
-linkerd dg proxy-metrics -n egress-test po/client | grep outbound_http_route_request_statuses_total
+$ linkerd dg proxy-metrics -n egress-test po/client | grep outbound_http_route_request_statuses_total

 outbound_http_route_request_statuses_total{
   parent_group="policy.linkerd.io",
@@ -190,7 +190,7 @@ Interestingly enough though, if we go back to our client shell and we try to
 initiate HTTPS traffic to the same service, it will not be allowed:

 ```bash
-~ $ curl -v https://httpbin.org/get
+$ curl -v https://httpbin.org/get
 curl: (35) TLS connect error: error:00000000:lib(0)::reason(0)
 ```
@@ -226,7 +226,7 @@ This fixes the problem and we can see HTTPS requests to the external service
 succeeding reflected in the metrics:

 ```bash
-linkerd dg proxy-metrics -n egress-test po/client | grep outbound_tls_route_open_total
+$ linkerd dg proxy-metrics -n egress-test po/client | grep outbound_tls_route_open_total

 outbound_tls_route_open_total{
   parent_group="policy.linkerd.io",
@@ -251,7 +251,7 @@ our client, we will see the proxy eagerly closing the connection because it is
 not forbidden by our current policy configuration:

 ```bash
-linkerd dg proxy-metrics -n egress-test po/client | grep outbound_tls_route_close_total
+$ linkerd dg proxy-metrics -n egress-test po/client | grep outbound_tls_route_close_total

 outbound_tls_route_close_total{
   parent_group="policy.linkerd.io",
diff --git a/linkerd.io/content/2.17/tasks/multicluster-using-statefulsets.md b/linkerd.io/content/2.17/tasks/multicluster-using-statefulsets.md
index 9d8730b5b0..bf17be188e 100644
--- a/linkerd.io/content/2.17/tasks/multicluster-using-statefulsets.md
+++ b/linkerd.io/content/2.17/tasks/multicluster-using-statefulsets.md
@@ -48,8 +48,8 @@ The first step is to clone the demo repository on your local machine.
 ```sh
 # clone example repository
-$ git clone git@github.com:mateiidavid/l2d-k3d-statefulset.git
-$ cd l2d-k3d-statefulset
+git clone git@github.com:mateiidavid/l2d-k3d-statefulset.git
+cd l2d-k3d-statefulset
 ```

 The second step consists of creating two `k3d` clusters named `east` and `west`,
@@ -186,7 +186,7 @@ If we now curl one of these instances, we will get back a response.

 ```sh
 # exec'd on the pod
-/ $ curl nginx-set-0.nginx-svc.default.svc.west.cluster.local
+$ curl nginx-set-0.nginx-svc.default.svc.west.cluster.local
 "
@@ -261,7 +261,7 @@ $ kubectl --context=k3d-east exec pod curl-56dc7d945d-96r6p -it -c curl -- bin/s
 # however, the service and cluster domain will now be different, since we
 # are in a different cluster.
 #
-/ $ curl nginx-set-0.nginx-svc-west.default.svc.east.cluster.local
+$ curl nginx-set-0.nginx-svc-west.default.svc.east.cluster.local
diff --git a/linkerd.io/content/2.17/tasks/restricting-access.md b/linkerd.io/content/2.17/tasks/restricting-access.md
index 5654518600..a5787cf354 100644
--- a/linkerd.io/content/2.17/tasks/restricting-access.md
+++ b/linkerd.io/content/2.17/tasks/restricting-access.md
@@ -68,7 +68,7 @@ of requests coming to the voting service and see that all incoming requests to
 the voting-grpc server are currently unauthorized:

 ```bash
-> linkerd viz authz -n emojivoto deploy/voting
+$ linkerd viz authz -n emojivoto deploy/voting
 ROUTE    SERVER                       AUTHORIZATION                UNAUTHORIZED  SUCCESS  RPS     LATENCY_P50  LATENCY_P95  LATENCY_P99
 default  default:all-unauthenticated  default/all-unauthenticated  0.0rps        100.00%  0.1rps  1ms          1ms          1ms
 probe    default:all-unauthenticated  default/probe                0.0rps        100.00%  0.2rps  1ms          1ms          1ms
@@ -112,7 +112,7 @@ the `linkerd viz auth` command queries over a time-window, you may see some
 UNAUTHORIZED requests displayed for a short amount of time.
 ```bash
-> linkerd viz authz -n emojivoto deploy/voting
+$ linkerd viz authz -n emojivoto deploy/voting
 ROUTE    SERVER                       AUTHORIZATION                UNAUTHORIZED  SUCCESS  RPS     LATENCY_P50  LATENCY_P95  LATENCY_P99
 default  default:all-unauthenticated  default/all-unauthenticated  0.0rps        100.00%  0.1rps  1ms          1ms          1ms
 probe    default:all-unauthenticated  default/probe                0.0rps        100.00%  0.2rps  1ms          1ms          1ms
@@ -123,7 +123,7 @@ We can also test that request from other pods will be rejected by creating a
 `grpcurl` pod and attempting to access the Voting service from it:

 ```bash
-> kubectl run grpcurl --rm -it --image=networld/grpcurl --restart=Never --command -- ./grpcurl -plaintext voting-svc.emojivoto:8080 emojivoto.v1.VotingService/VoteDog
+$ kubectl run grpcurl --rm -it --image=networld/grpcurl --restart=Never --command -- ./grpcurl -plaintext voting-svc.emojivoto:8080 emojivoto.v1.VotingService/VoteDog
 Error invoking method "emojivoto.v1.VotingService/VoteDog": failed to query for service descriptor "emojivoto.v1.VotingService": rpc error: code = PermissionDenied desc =
 pod "grpcurl" deleted
 pod default/grpcurl terminated (Error)
@@ -153,7 +153,7 @@ following logic when deciding whether to allow a request:

 We can set the default policy to `deny` using the `linkerd upgrade` command:

 ```bash
-> linkerd upgrade --default-inbound-policy deny | kubectl apply -f -
+linkerd upgrade --default-inbound-policy deny | kubectl apply -f -
 ```

 Alternatively, default policies can be set on individual workloads or namespaces
diff --git a/linkerd.io/content/2.17/tasks/troubleshooting.md b/linkerd.io/content/2.17/tasks/troubleshooting.md
index a9efbc7ec1..e5ff32e5d9 100644
--- a/linkerd.io/content/2.17/tasks/troubleshooting.md
+++ b/linkerd.io/content/2.17/tasks/troubleshooting.md
@@ -1761,12 +1761,12 @@ Make sure that the `proxy-injector` is working correctly by running

 Ensure all the prometheus related resources are present and running correctly.
 ```bash
-❯ kubectl -n linkerd-viz get deploy,cm | grep prometheus
+$ kubectl -n linkerd-viz get deploy,cm | grep prometheus
 deployment.apps/prometheus    1/1    1    1    3m18s
 configmap/prometheus-config   1    3m18s
-❯ kubectl get clusterRoleBindings | grep prometheus
+$ kubectl get clusterRoleBindings | grep prometheus
 linkerd-linkerd-viz-prometheus   ClusterRole/linkerd-linkerd-viz-prometheus   3m37s
-❯ kubectl get clusterRoles | grep prometheus
+$ kubectl get clusterRoles | grep prometheus
 linkerd-linkerd-viz-prometheus   2021-02-26T06:03:11Zh
 ```
@@ -1782,7 +1782,7 @@ Example failure:

 Verify that the metrics API pod is running correctly

 ```bash
-❯ kubectl -n linkerd-viz get pods
+$ kubectl -n linkerd-viz get pods
 NAME                           READY   STATUS    RESTARTS   AGE
 metrics-api-7bb8cb8489-cbq4m   2/2     Running   0          4m58s
 tap-injector-6b9bc6fc4-cgbr4   2/2     Running   0          4m56s
diff --git a/linkerd.io/content/2.17/tasks/upgrade.md b/linkerd.io/content/2.17/tasks/upgrade.md
index 23547217a4..38cd784e47 100644
--- a/linkerd.io/content/2.17/tasks/upgrade.md
+++ b/linkerd.io/content/2.17/tasks/upgrade.md
@@ -464,7 +464,7 @@ remove the Helm release config for the old `linkerd2` chart (assuming you used
 the "Secret" storage backend, which is the default):

 ```bash
-$ kubectl -n default delete secret \
+kubectl -n default delete secret \
   --field-selector type=helm.sh/release.v1 \
   -l name=linkerd,owner=helm
 ```
diff --git a/linkerd.io/content/2.18/tasks/configuring-per-route-policy.md b/linkerd.io/content/2.18/tasks/configuring-per-route-policy.md
index 011c10ff9e..1887c303ff 100644
--- a/linkerd.io/content/2.18/tasks/configuring-per-route-policy.md
+++ b/linkerd.io/content/2.18/tasks/configuring-per-route-policy.md
@@ -30,7 +30,7 @@ haven't already done this.
 Inject and install the Books demo application:

 ```bash
-$ kubectl create ns booksapp && \
+kubectl create ns booksapp && \
   curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/booksapp.yml \
   | linkerd inject - \
   | kubectl -n booksapp apply -f -
@@ -44,21 +44,21 @@ run in the `booksapp` namespace.

 Confirm that the Linkerd data plane was injected successfully:

 ```bash
-$ linkerd check -n booksapp --proxy -o short
+linkerd check -n booksapp --proxy -o short
 ```

 You can take a quick look at all the components that were added to your cluster
 by running:

 ```bash
-$ kubectl -n booksapp get all
+kubectl -n booksapp get all
 ```

 Once the rollout has completed successfully, you can access the app itself by
 port-forwarding `webapp` locally:

 ```bash
-$ kubectl -n booksapp port-forward svc/webapp 7000 &
+kubectl -n booksapp port-forward svc/webapp 7000 &
 ```

 Open [http://localhost:7000/](http://localhost:7000/) in your browser to see the
diff --git a/linkerd.io/content/2.18/tasks/managing-egress-traffic.md b/linkerd.io/content/2.18/tasks/managing-egress-traffic.md
index a43eadb61a..f828a92edf 100644
--- a/linkerd.io/content/2.18/tasks/managing-egress-traffic.md
+++ b/linkerd.io/content/2.18/tasks/managing-egress-traffic.md
@@ -77,7 +77,7 @@ In a separate shell, you can use the Linkerd diagnostics command to
 visualize the traffic.
 ```bash
-linkerd dg proxy-metrics -n egress-test po/client | grep outbound_http_route_request_statuses_total
+$ linkerd dg proxy-metrics -n egress-test po/client | grep outbound_http_route_request_statuses_total

 outbound_http_route_request_statuses_total{
   parent_group="policy.linkerd.io",
@@ -129,7 +129,7 @@ Hostname metrics can also be enabled cluster-wide through the values in

 ```bash
 # With a single value
-linkerd install --set proxy.metrics.hostnameLabels=true | kubectl apply -f -
+$ linkerd install --set proxy.metrics.hostnameLabels=true | kubectl apply -f -

 # Or ith a values.yaml file
 #
@@ -138,7 +138,7 @@ proxy:
   metrics:
     hostnameLabels: true

-linkerd install --values=values.yaml | kubectl apply -f -
+$ linkerd install --values=values.yaml | kubectl apply -f -
 ```

 {{< note >}}
@@ -235,7 +235,7 @@ Interestingly enough though, if we go back to our client shell and we try to
 initiate HTTPS traffic to the same service, it will not be allowed:

 ```bash
-~ $ curl -v https://httpbin.org/get
+$ curl -v https://httpbin.org/get
 curl: (35) TLS connect error: error:00000000:lib(0)::reason(0)
 ```
@@ -271,7 +271,7 @@ This fixes the problem and we can see HTTPS requests to the external service
 succeeding reflected in the metrics:

 ```bash
-linkerd dg proxy-metrics -n egress-test po/client | grep outbound_tls_route_open_total
+$ linkerd dg proxy-metrics -n egress-test po/client | grep outbound_tls_route_open_total

 outbound_tls_route_open_total{
   parent_group="policy.linkerd.io",
@@ -285,7 +285,7 @@ outbound_tls_route_open_total{
   route_namespace="egress-test",
   route_name="tls-egress",
   hostname="httpbin.org"
-} 2
+}
 ```

 This configuration allows traffic to `httpbin.org` only. In order to apply
@@ -296,7 +296,7 @@ our client, we will see the proxy eagerly closing the connection because it is
 not forbidden by our current policy configuration:

 ```bash
-linkerd dg proxy-metrics -n egress-test po/client | grep outbound_tls_route_close_total
+$ linkerd dg proxy-metrics -n egress-test po/client | grep outbound_tls_route_close_total

 outbound_tls_route_close_total{
   parent_group="policy.linkerd.io",
diff --git a/linkerd.io/content/2.18/tasks/multicluster-using-statefulsets.md b/linkerd.io/content/2.18/tasks/multicluster-using-statefulsets.md
index 81969979a0..2f5a04c073 100644
--- a/linkerd.io/content/2.18/tasks/multicluster-using-statefulsets.md
+++ b/linkerd.io/content/2.18/tasks/multicluster-using-statefulsets.md
@@ -48,8 +48,8 @@ The first step is to clone the demo repository on your local machine.

 ```sh
 # clone example repository
-$ git clone git@github.com:linkerd/l2d-k3d-statefulset.git
-$ cd l2d-k3d-statefulset
+git clone git@github.com:linkerd/l2d-k3d-statefulset.git
+cd l2d-k3d-statefulset
 ```

 The second step consists of creating two `k3d` clusters named `east` and `west`,
@@ -185,7 +185,7 @@ If we now curl one of these instances, we will get back a response.

 ```sh
 # exec'd on the pod
-/ $ curl nginx-set-0.nginx-svc.default.svc.west.cluster.local
+$ curl nginx-set-0.nginx-svc.default.svc.west.cluster.local
 "
@@ -260,7 +260,7 @@ $ kubectl --context=k3d-east exec curl-56dc7d945d-96r6p -it -c curl -- sh
 # however, the service and cluster domain will now be different, since we
 # are in a different cluster.
 #
-/ $ curl nginx-set-0.nginx-svc-k3d-west.default.svc.east.cluster.local
+$ curl nginx-set-0.nginx-svc-k3d-west.default.svc.east.cluster.local
diff --git a/linkerd.io/content/2.18/tasks/restricting-access.md b/linkerd.io/content/2.18/tasks/restricting-access.md
index 5654518600..a5787cf354 100644
--- a/linkerd.io/content/2.18/tasks/restricting-access.md
+++ b/linkerd.io/content/2.18/tasks/restricting-access.md
@@ -68,7 +68,7 @@ of requests coming to the voting service and see that all incoming requests to
 the voting-grpc server are currently unauthorized:

 ```bash
-> linkerd viz authz -n emojivoto deploy/voting
+$ linkerd viz authz -n emojivoto deploy/voting
 ROUTE    SERVER                       AUTHORIZATION                UNAUTHORIZED  SUCCESS  RPS     LATENCY_P50  LATENCY_P95  LATENCY_P99
 default  default:all-unauthenticated  default/all-unauthenticated  0.0rps        100.00%  0.1rps  1ms          1ms          1ms
 probe    default:all-unauthenticated  default/probe                0.0rps        100.00%  0.2rps  1ms          1ms          1ms
@@ -112,7 +112,7 @@ the `linkerd viz auth` command queries over a time-window, you may see some
 UNAUTHORIZED requests displayed for a short amount of time.
 ```bash
-> linkerd viz authz -n emojivoto deploy/voting
+$ linkerd viz authz -n emojivoto deploy/voting
 ROUTE    SERVER                       AUTHORIZATION                UNAUTHORIZED  SUCCESS  RPS     LATENCY_P50  LATENCY_P95  LATENCY_P99
 default  default:all-unauthenticated  default/all-unauthenticated  0.0rps        100.00%  0.1rps  1ms          1ms          1ms
 probe    default:all-unauthenticated  default/probe                0.0rps        100.00%  0.2rps  1ms          1ms          1ms
@@ -123,7 +123,7 @@ We can also test that request from other pods will be rejected by creating a
 `grpcurl` pod and attempting to access the Voting service from it:

 ```bash
-> kubectl run grpcurl --rm -it --image=networld/grpcurl --restart=Never --command -- ./grpcurl -plaintext voting-svc.emojivoto:8080 emojivoto.v1.VotingService/VoteDog
+$ kubectl run grpcurl --rm -it --image=networld/grpcurl --restart=Never --command -- ./grpcurl -plaintext voting-svc.emojivoto:8080 emojivoto.v1.VotingService/VoteDog
 Error invoking method "emojivoto.v1.VotingService/VoteDog": failed to query for service descriptor "emojivoto.v1.VotingService": rpc error: code = PermissionDenied desc =
 pod "grpcurl" deleted
 pod default/grpcurl terminated (Error)
@@ -153,7 +153,7 @@ following logic when deciding whether to allow a request:

 We can set the default policy to `deny` using the `linkerd upgrade` command:

 ```bash
-> linkerd upgrade --default-inbound-policy deny | kubectl apply -f -
+linkerd upgrade --default-inbound-policy deny | kubectl apply -f -
 ```

 Alternatively, default policies can be set on individual workloads or namespaces
diff --git a/linkerd.io/content/2.18/tasks/troubleshooting.md b/linkerd.io/content/2.18/tasks/troubleshooting.md
index ca2b5b104d..dfbc9a5bfe 100644
--- a/linkerd.io/content/2.18/tasks/troubleshooting.md
+++ b/linkerd.io/content/2.18/tasks/troubleshooting.md
@@ -1789,12 +1789,12 @@ Make sure that the `proxy-injector` is working correctly by running

 Ensure all the prometheus related resources are present and running correctly.
 ```bash
-❯ kubectl -n linkerd-viz get deploy,cm | grep prometheus
+$ kubectl -n linkerd-viz get deploy,cm | grep prometheus
 deployment.apps/prometheus    1/1    1    1    3m18s
 configmap/prometheus-config   1    3m18s
-❯ kubectl get clusterRoleBindings | grep prometheus
+$ kubectl get clusterRoleBindings | grep prometheus
 linkerd-linkerd-viz-prometheus   ClusterRole/linkerd-linkerd-viz-prometheus   3m37s
-❯ kubectl get clusterRoles | grep prometheus
+$ kubectl get clusterRoles | grep prometheus
 linkerd-linkerd-viz-prometheus   2021-02-26T06:03:11Zh
 ```
diff --git a/linkerd.io/content/2.19/tasks/configuring-per-route-policy.md b/linkerd.io/content/2.19/tasks/configuring-per-route-policy.md
index 011c10ff9e..1887c303ff 100644
--- a/linkerd.io/content/2.19/tasks/configuring-per-route-policy.md
+++ b/linkerd.io/content/2.19/tasks/configuring-per-route-policy.md
@@ -30,7 +30,7 @@ haven't already done this.

 Inject and install the Books demo application:

 ```bash
-$ kubectl create ns booksapp && \
+kubectl create ns booksapp && \
   curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/booksapp.yml \
   | linkerd inject - \
   | kubectl -n booksapp apply -f -
@@ -44,21 +44,21 @@ run in the `booksapp` namespace.
 Confirm that the Linkerd data plane was injected successfully:

 ```bash
-$ linkerd check -n booksapp --proxy -o short
+linkerd check -n booksapp --proxy -o short
 ```

 You can take a quick look at all the components that were added to your cluster
 by running:

 ```bash
-$ kubectl -n booksapp get all
+kubectl -n booksapp get all
 ```

 Once the rollout has completed successfully, you can access the app itself by
 port-forwarding `webapp` locally:

 ```bash
-$ kubectl -n booksapp port-forward svc/webapp 7000 &
+kubectl -n booksapp port-forward svc/webapp 7000 &
 ```

 Open [http://localhost:7000/](http://localhost:7000/) in your browser to see the
diff --git a/linkerd.io/content/2.19/tasks/managing-egress-traffic.md b/linkerd.io/content/2.19/tasks/managing-egress-traffic.md
index a43eadb61a..32e8baee9e 100644
--- a/linkerd.io/content/2.19/tasks/managing-egress-traffic.md
+++ b/linkerd.io/content/2.19/tasks/managing-egress-traffic.md
@@ -77,7 +77,7 @@ In a separate shell, you can use the Linkerd diagnostics command to
 visualize the traffic.
 ```bash
-linkerd dg proxy-metrics -n egress-test po/client | grep outbound_http_route_request_statuses_total
+$ linkerd dg proxy-metrics -n egress-test po/client | grep outbound_http_route_request_statuses_total

 outbound_http_route_request_statuses_total{
   parent_group="policy.linkerd.io",
@@ -129,7 +129,7 @@ Hostname metrics can also be enabled cluster-wide through the values in

 ```bash
 # With a single value
-linkerd install --set proxy.metrics.hostnameLabels=true | kubectl apply -f -
+$ linkerd install --set proxy.metrics.hostnameLabels=true | kubectl apply -f -

 # Or ith a values.yaml file
 #
@@ -138,7 +138,7 @@ proxy:
   metrics:
     hostnameLabels: true

-linkerd install --values=values.yaml | kubectl apply -f -
+$ linkerd install --values=values.yaml | kubectl apply -f -
 ```

 {{< note >}}
@@ -235,7 +235,7 @@ Interestingly enough though, if we go back to our client shell and we try to
 initiate HTTPS traffic to the same service, it will not be allowed:

 ```bash
-~ $ curl -v https://httpbin.org/get
+$ curl -v https://httpbin.org/get
 curl: (35) TLS connect error: error:00000000:lib(0)::reason(0)
 ```
@@ -271,7 +271,7 @@ This fixes the problem and we can see HTTPS requests to the external service
 succeeding reflected in the metrics:

 ```bash
-linkerd dg proxy-metrics -n egress-test po/client | grep outbound_tls_route_open_total
+$ linkerd dg proxy-metrics -n egress-test po/client | grep outbound_tls_route_open_total

 outbound_tls_route_open_total{
   parent_group="policy.linkerd.io",
@@ -296,7 +296,7 @@ our client, we will see the proxy eagerly closing the connection because it is
 not forbidden by our current policy configuration:

 ```bash
-linkerd dg proxy-metrics -n egress-test po/client | grep outbound_tls_route_close_total
+$ linkerd dg proxy-metrics -n egress-test po/client | grep outbound_tls_route_close_total

 outbound_tls_route_close_total{
   parent_group="policy.linkerd.io",
diff --git a/linkerd.io/content/2.19/tasks/multicluster-using-statefulsets.md b/linkerd.io/content/2.19/tasks/multicluster-using-statefulsets.md
index 81969979a0..2f5a04c073 100644
--- a/linkerd.io/content/2.19/tasks/multicluster-using-statefulsets.md
+++ b/linkerd.io/content/2.19/tasks/multicluster-using-statefulsets.md
@@ -48,8 +48,8 @@ The first step is to clone the demo repository on your local machine.

 ```sh
 # clone example repository
-$ git clone git@github.com:linkerd/l2d-k3d-statefulset.git
-$ cd l2d-k3d-statefulset
+git clone git@github.com:linkerd/l2d-k3d-statefulset.git
+cd l2d-k3d-statefulset
 ```

 The second step consists of creating two `k3d` clusters named `east` and `west`,
@@ -185,7 +185,7 @@ If we now curl one of these instances, we will get back a response.

 ```sh
 # exec'd on the pod
-/ $ curl nginx-set-0.nginx-svc.default.svc.west.cluster.local
+$ curl nginx-set-0.nginx-svc.default.svc.west.cluster.local
 "
@@ -260,7 +260,7 @@ $ kubectl --context=k3d-east exec curl-56dc7d945d-96r6p -it -c curl -- sh
 # however, the service and cluster domain will now be different, since we
 # are in a different cluster.
 #
-/ $ curl nginx-set-0.nginx-svc-k3d-west.default.svc.east.cluster.local
+$ curl nginx-set-0.nginx-svc-k3d-west.default.svc.east.cluster.local
diff --git a/linkerd.io/content/2.19/tasks/restricting-access.md b/linkerd.io/content/2.19/tasks/restricting-access.md
index 5654518600..a5787cf354 100644
--- a/linkerd.io/content/2.19/tasks/restricting-access.md
+++ b/linkerd.io/content/2.19/tasks/restricting-access.md
@@ -68,7 +68,7 @@ of requests coming to the voting service and see that all incoming requests to
 the voting-grpc server are currently unauthorized:

 ```bash
-> linkerd viz authz -n emojivoto deploy/voting
+$ linkerd viz authz -n emojivoto deploy/voting
 ROUTE    SERVER                       AUTHORIZATION                UNAUTHORIZED  SUCCESS  RPS     LATENCY_P50  LATENCY_P95  LATENCY_P99
 default  default:all-unauthenticated  default/all-unauthenticated  0.0rps        100.00%  0.1rps  1ms          1ms          1ms
 probe    default:all-unauthenticated  default/probe                0.0rps        100.00%  0.2rps  1ms          1ms          1ms
@@ -112,7 +112,7 @@ the `linkerd viz auth` command queries over a time-window, you may see some
 UNAUTHORIZED requests displayed for a short amount of time.
 ```bash
-> linkerd viz authz -n emojivoto deploy/voting
+$ linkerd viz authz -n emojivoto deploy/voting
 ROUTE    SERVER                       AUTHORIZATION                UNAUTHORIZED  SUCCESS  RPS     LATENCY_P50  LATENCY_P95  LATENCY_P99
 default  default:all-unauthenticated  default/all-unauthenticated  0.0rps        100.00%  0.1rps  1ms          1ms          1ms
 probe    default:all-unauthenticated  default/probe                0.0rps        100.00%  0.2rps  1ms          1ms          1ms
@@ -123,7 +123,7 @@ We can also test that request from other pods will be rejected by creating a
 `grpcurl` pod and attempting to access the Voting service from it:

 ```bash
-> kubectl run grpcurl --rm -it --image=networld/grpcurl --restart=Never --command -- ./grpcurl -plaintext voting-svc.emojivoto:8080 emojivoto.v1.VotingService/VoteDog
+$ kubectl run grpcurl --rm -it --image=networld/grpcurl --restart=Never --command -- ./grpcurl -plaintext voting-svc.emojivoto:8080 emojivoto.v1.VotingService/VoteDog
 Error invoking method "emojivoto.v1.VotingService/VoteDog": failed to query for service descriptor "emojivoto.v1.VotingService": rpc error: code = PermissionDenied desc =
 pod "grpcurl" deleted
 pod default/grpcurl terminated (Error)
@@ -153,7 +153,7 @@ following logic when deciding whether to allow a request:

 We can set the default policy to `deny` using the `linkerd upgrade` command:

 ```bash
-> linkerd upgrade --default-inbound-policy deny | kubectl apply -f -
+linkerd upgrade --default-inbound-policy deny | kubectl apply -f -
 ```

 Alternatively, default policies can be set on individual workloads or namespaces
diff --git a/linkerd.io/content/2.19/tasks/troubleshooting.md b/linkerd.io/content/2.19/tasks/troubleshooting.md
index baaa71e206..aae02ac2bb 100644
--- a/linkerd.io/content/2.19/tasks/troubleshooting.md
+++ b/linkerd.io/content/2.19/tasks/troubleshooting.md
@@ -1810,7 +1810,7 @@ Example failure:

 Verify that the metrics API pod is running correctly

 ```bash
-❯ kubectl -n linkerd-viz get pods
+$ kubectl -n linkerd-viz get pods
 NAME                           READY   STATUS    RESTARTS   AGE
 metrics-api-7bb8cb8489-cbq4m   2/2     Running   0          4m58s
 tap-injector-6b9bc6fc4-cgbr4   2/2     Running   0          4m56s
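
Every hunk in this patch applies the same convention: decorative shell prompts (`❯`, `>`, `~ $`, `/ $`) are either normalized to a plain `$ ` when the block also shows command output, or dropped entirely so the command is copy-pasteable. A minimal sed sketch of that normalization (a hypothetical helper, not part of this patch; it does not distinguish fenced code blocks from prose, so its output should be reviewed by hand rather than committed blindly):

```shell
#!/bin/sh
# normalize_prompts: rewrite non-standard shell prompts at the start of a
# line to a plain "$ " prefix, mirroring the edits made in this patch.
normalize_prompts() {
  sed \
    -e 's/^❯ /$ /' \
    -e 's/^~ \$ /$ /' \
    -e 's|^/ \$ |$ |'
}

# demo: the three prompt styles this patch replaces
printf '❯ kubectl get pods\n~ $ curl -v https://httpbin.org/get\n/ $ curl nginx-svc\n' \
  | normalize_prompts
```

Note that the `> ` prompt is deliberately left out of the sketch: rewriting `^> ` unconditionally would also mangle markdown blockquotes, which is presumably why the patch handles those lines case by case.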