
Kubernetes: Official Documentation

by Kubernetes Project

Tag: kubernetes
Pages: 3145
Format: markdown
Listed: March 16, 2026
Updated: March 16, 2026
Subscribers: 20

About

Official Kubernetes documentation covering cluster architecture, components, concepts, workloads, services, storage, configuration, security, administration, and API fundamentals.

646 Chapters
11684 Topics
3145 Pages

Preview

Kubernetes Documentation

Source: https://github.com/kubernetes/website/tree/main/content/en/docs


Concepts

<!-- overview -->

The Concepts section helps you learn about the parts of the Kubernetes system and the abstractions Kubernetes uses to represent your {{< glossary_tooltip text="cluster" term_id="cluster" length="all" >}}, and helps you obtain a deeper understanding of how Kubernetes works.

<!-- body -->

Cluster Architecture

A Kubernetes cluster consists of a control plane plus a set of worker machines, called nodes, that run containerized applications. Every cluster needs at least one worker node in order to run Pods.

The worker node(s) host the Pods that are the components of the application workload. The control plane manages the worker nodes and the Pods in the cluster. In production environments, the control plane usually runs across multiple computers and a cluster usually runs multiple nodes, providing fault-tolerance and high availability.

This document outlines the various components you need to have for a complete and working Kubernetes cluster.

{{< figure src="/images/docs/kubernetes-cluster-architecture.svg" alt="The control plane (kube-apiserver, etcd, kube-controller-manager, kube-scheduler) and several nodes. Each node is running a kubelet and kube-proxy." caption="Figure 1. Kubernetes cluster components." class="diagram-large" >}}

{{< details summary="About this architecture" >}} The diagram in Figure 1 presents an example reference architecture for a Kubernetes cluster. The actual distribution of components can vary based on specific cluster setups and requirements.

In the diagram, each node runs the kube-proxy component. You need a network proxy component on each node to ensure that the {{< glossary_tooltip text="Service" term_id="service">}} API and associated behaviors are available on your cluster network. However, some network plugins provide their own third-party implementation of proxying. When you use that kind of network plugin, the node does not need to run kube-proxy. {{< /details >}}

Control plane components

The control plane's components make global decisions about the cluster (for example, scheduling), as well as detecting and responding to cluster events (for example, starting up a new {{< glossary_tooltip text="pod" term_id="pod">}} when a Deployment's {{< glossary_tooltip text="replicas" term_id="replica" >}} field is unsatisfied).
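The replicas field that the control plane reconciles is part of an ordinary Deployment spec. A minimal sketch (the names and image below are placeholders, not taken from the documentation):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment      # placeholder name
spec:
  replicas: 3                   # the control plane starts new Pods until 3 match
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: web
        image: nginx:1.25       # placeholder image
```

If a Pod belonging to this Deployment fails, the controller manager notices that fewer than 3 replicas exist and asks the API server to create a replacement, which the scheduler then assigns to a node.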

Control plane components can be run on any machine in the cluster. However, for simplicity, setup scripts typically start all control plane components on the same machine, and do not run user containers on this machine. See Creating Highly Available clusters with kubeadm for an example control plane setup that runs across multiple machines.

kube-apiserver

{{< glossary_definition term_id="kube-apiserver" length="all" >}}

etcd

{{< glossary_definition term_id="etcd" length="all" >}}

kube-scheduler

{{< glossary_definition term_id="kube-scheduler" length="all" >}}

kube-controller-manager

{{< glossary_definition term_id="kube-controller-manager" length="all" >}}

There are many different types of controllers. Some examples are:

  • Node controller: Responsible for noticing and responding when nodes go down.
  • Job controller: Watches for Job objects that represent one-off tasks, then creates Pods to run those tasks to completion.
  • EndpointSlice controller: Populates EndpointSlice objects (to provide a link between Services and Pods).
  • ServiceAccount controller: Creates default ServiceAccounts for new namespaces.

The above is not an exhaustive list.
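To make one of those controllers concrete: the Job controller watches for objects like the following and creates a Pod to run the task to completion. A minimal sketch (the name and image are placeholders):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: one-off-task            # placeholder name
spec:
  backoffLimit: 4               # retries before the Job is marked failed
  template:
    spec:
      restartPolicy: Never      # a Job's Pod template must use Never or OnFailure
      containers:
      - name: task
        image: busybox:1.36     # placeholder image
        command: ["sh", "-c", "echo done"]
```

When the Pod exits successfully, the Job controller records the completion; on failure it creates replacement Pods until backoffLimit is exhausted.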


Table of Contents

Kubernetes Documentation
