Kubernetes can look intimidating because it’s an entire distributed system—API server, scheduler, controllers, networking, and a workload runtime—yet a “basic” cluster is very achievable in one sitting if you standardize the environment and keep your first goal narrow: a functional control plane, at least one worker node, a CNI (Container Network Interface) plugin for pod networking, and kubectl access from an admin workstation.
This how-to focuses on a fast, repeatable path that still reflects real-world operational needs. You’ll use kubeadm (Kubernetes’ upstream bootstrapping tool) on Ubuntu with containerd as the container runtime. You’ll initialize a control-plane node, join worker nodes, install a CNI plugin, and validate the cluster by deploying a simple application.
The steps are written for IT administrators and system engineers who need a cluster they can reason about. You’ll see where “minutes” really come from (mostly automation and good defaults) and where you should intentionally slow down (network CIDRs, versions, and access). By the end, you should have a clean baseline you can extend into production patterns.
What “basic Kubernetes cluster” means (and what it doesn’t)
A basic Kubernetes cluster in this guide includes:
- One control-plane node (formerly “master”) running the Kubernetes API server, scheduler, controller manager, and etcd.
- One or more worker nodes running kubelet and a container runtime.
- A CNI plugin providing pod-to-pod networking.
- kubectl configured for an administrator to interact with the cluster.
It does not include production-grade add-ons like an ingress controller, monitoring stack, externalized etcd, backup/restore automation, or a hardened PKI workflow. Those are important, but they’re also where cluster design branches quickly based on your environment. The point here is to create a stable foundation you can build on.
As you follow the setup, keep one operational principle in mind: Kubernetes is opinionated about immutability and declarative state. Your cluster is easier to operate if you avoid ad-hoc manual drift on nodes and instead rely on repeatable configuration.
Platform choices that keep the setup fast and predictable
Kubernetes can run in many places, but “minutes” depends on choosing a path with fewer moving parts. This guide assumes:
- Ubuntu Server 22.04 LTS or 24.04 LTS on all nodes.
- Nodes can reach each other over the network (no restrictive east-west firewalling without deliberate rules).
- You have root or sudo access.
- You can allocate at least 2 vCPU and 2–4 GB RAM for the control plane (more is better), and 1–2 vCPU and 2 GB RAM for workers.
For Kubernetes itself, kubeadm is the most direct upstream method. Managed Kubernetes (EKS/AKS/GKE) can be faster, but it changes the nature of the article: your job becomes “configure a service” rather than “set up a cluster.” If you need an on-prem cluster or you want to understand the mechanics, kubeadm is the right choice.
For the container runtime, Docker is no longer the default integration point for Kubernetes (Kubernetes removed dockershim). You can still use Docker via cri-dockerd, but containerd is the simplest supported runtime for kubeadm installs.
For pod networking, you need a CNI plugin. This guide uses Calico in examples because it’s widely used and straightforward for a first cluster. You could swap in Flannel, Cilium, or another plugin, but keep your first install standard.
Prerequisites: sizing, networking, and ports
Before running commands, confirm you can satisfy Kubernetes’ expectations. Many “it doesn’t work” moments trace back to basic networking and name resolution.
Node inventory and naming
Decide on a small inventory. For example:
- cp1 (control plane) — 10.0.10.10
- w1 (worker) — 10.0.10.11
- w2 (worker) — 10.0.10.12
Use stable hostnames, and ensure forward and reverse name resolution is consistent. Kubernetes doesn’t strictly require DNS for node-to-node, but it reduces confusion when you start reading logs and events.
If you don’t have internal DNS, you can use /etc/hosts for a lab cluster, as long as it’s consistent on every node.
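For example, a minimal /etc/hosts block for the inventory above (the hostnames and addresses are the example values, not requirements) can be appended on every node:
bash
cat <<'EOF' | sudo tee -a /etc/hosts
# lab cluster nodes (example addresses)
10.0.10.10 cp1
10.0.10.11 w1
10.0.10.12 w2
EOF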
Network CIDR planning (pod CIDR and service CIDR)
Kubernetes needs IP ranges that won’t collide with your existing network:
- Pod CIDR: the address space used for pod IPs (assigned by the CNI plugin). Common: 192.168.0.0/16 or 10.244.0.0/16.
- Service CIDR: the virtual IP range for Kubernetes Services (ClusterIP). Common: 10.96.0.0/12.
The default service CIDR used by kubeadm is typically 10.96.0.0/12. You can keep it unless it overlaps with your corporate routing. The pod CIDR must match what your CNI plugin expects, or you must configure the plugin accordingly.
Plan this now, because changing these ranges later is non-trivial.
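A rough sanity check before committing: list the routes your nodes already know about and confirm the candidate ranges don't appear in them. This only matches literal prefixes from the example CIDRs, so treat it as a hint rather than proof:
bash
# Rough overlap check against the example pod/service ranges
ip route | grep -E '192\.168\.|10\.244\.|10\.96\.' \
  || echo "no existing routes match the example pod/service CIDRs"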
Required ports and firewall expectations
At a minimum, nodes must communicate on Kubernetes control and overlay networking ports. If you’re in a restricted environment, ensure:
- Workers can reach the control plane API server on TCP 6443.
- Control plane components can communicate internally (kubeadm config covers typical ports).
- Your CNI plugin’s requirements are satisfied (Calico uses BGP in some modes, and encapsulation protocols depending on configuration).
If you’re building a lab, you can keep host firewalls permissive initially and tighten later.
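When you do tighten them, the rules below are a hedged starting point for a kubeadm cluster with Calico, using ufw; confirm the exact port list against the Kubernetes and Calico documentation for your versions:
bash
# Control-plane node
sudo ufw allow 6443/tcp        # Kubernetes API server
sudo ufw allow 2379:2380/tcp   # etcd client and peer
sudo ufw allow 10250/tcp       # kubelet API
sudo ufw allow 10257/tcp       # kube-controller-manager
sudo ufw allow 10259/tcp       # kube-scheduler
# Worker nodes
sudo ufw allow 10250/tcp       # kubelet API
sudo ufw allow 30000:32767/tcp # NodePort services
# Calico, depending on mode
sudo ufw allow 179/tcp         # BGP
sudo ufw allow 4789/udp        # VXLAN encapsulation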
Real-world scenario 1: a fast lab cluster for change testing
A common reason to set up a basic Kubernetes cluster quickly is to test changes safely: a new NGINX config, a patched application container, or a Helm chart update. In many IT shops, waiting weeks for shared environments slows everything down.
In this scenario, you want a cluster that resembles production primitives (nodes, CNI, kubelet behavior) without production-grade hardening. The approach in this guide fits well: you can stand it up, run validation workloads, and tear it down or rebuild it with minimal state.
Keeping that use case in mind helps you avoid over-engineering early. You’re optimizing for repeatability and clarity.
Prepare Ubuntu on all nodes
The fastest kubeadm installs succeed because the nodes are prepared consistently. Do the following on every node (control plane and workers).
Set hostname and ensure time sync
Set a meaningful hostname (example for cp1):
sudo hostnamectl set-hostname cp1
Ensure time synchronization is running. On Ubuntu, systemd-timesyncd is often enabled by default. Verify:
bash
timedatectl status
Time drift breaks TLS validation and can cause confusing errors when nodes join.
Disable swap
Kubernetes expects swap to be disabled unless you’ve configured kubelet for swap support (a relatively new and intentionally managed setting). For a basic cluster, disable it.
Turn off swap immediately:
bash
sudo swapoff -a
Prevent swap from re-enabling on reboot by commenting out swap entries in /etc/fstab.
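One common way to do that in a single step (the sed pattern assumes standard fstab formatting; review the file afterward):
bash
# Comment out any active swap entry and keep a backup of the original file
sudo sed -ri.bak 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab
grep -n swap /etc/fstab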
Load kernel modules and sysctl settings
Kubernetes networking relies on bridged traffic being visible to iptables/nftables and on IP forwarding.
Load modules:
bash
cat <<'EOF' | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
Apply sysctl settings:
bash
cat <<'EOF' | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
These settings are foundational for CNI plugins and kube-proxy.
Install and configure containerd (all nodes)
Containerd is a CNCF container runtime that Kubernetes can use via CRI (Container Runtime Interface). The important operational detail is that Kubernetes expects the runtime to use the same cgroup driver as kubelet, and on modern Linux that typically means systemd cgroups.
Install containerd from Ubuntu packages:
bash
sudo apt-get update
sudo apt-get install -y containerd
Generate a default config and enable systemd cgroups:
bash
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml >/dev/null
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
Restart and enable containerd:
bash
sudo systemctl restart containerd
sudo systemctl enable containerd
If you skip the systemd cgroup alignment, you may still get a cluster, but kubelet can behave inconsistently under pressure and you may see warnings or node readiness issues in some environments.
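A quick check that the runtime is healthy and the cgroup change took effect:
bash
systemctl is-active containerd                       # expect: active
grep -n 'SystemdCgroup' /etc/containerd/config.toml  # expect: SystemdCgroup = true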
Install Kubernetes packages (kubeadm, kubelet, kubectl)
Kubernetes packages should be installed from the official Kubernetes repositories rather than relying on potentially stale distro versions.
Because Kubernetes packaging details can vary by release, follow the current upstream guidance for your Ubuntu version and Kubernetes version. The key operational practices are:
- Pin a known Kubernetes version rather than “latest,” especially in business environments.
- Install kubelet and kubeadm on all nodes.
- Install kubectl on the node where you'll administer the cluster (often the control plane, or an admin workstation).
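For reference, this is a sketch of the upstream apt repository setup as documented at the time of writing; the pkgs.k8s.io path embeds a minor version (v1.30 here is only an example), so substitute the version you're pinning and confirm the steps against the current install docs:
bash
# Add the upstream Kubernetes apt repository (example minor version: v1.30)
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key \
  | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' \
  | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl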
After installing, hold the packages to prevent accidental upgrades:
bash
sudo apt-mark hold kubelet kubeadm kubectl
If you maintain clusters long-term, you’ll later remove the hold intentionally during planned upgrades.
Initialize the control plane with kubeadm
Once the OS and runtime are consistent, you can bootstrap the cluster. On the control-plane node (cp1), run kubeadm init. This step generates certificates, writes kubeconfig files, creates static pod manifests for control-plane components, and bootstraps etcd.
Choose an API endpoint strategy
For a single control-plane node, the API endpoint can simply be the node’s IP address. For anything beyond a lab, you typically place a load balancer or virtual IP in front of multiple control-plane nodes.
For a basic single-node control plane, you can do:
- --apiserver-advertise-address=<cp1-ip> so the API server binds the correct interface.
- --pod-network-cidr=<cidr> matching your CNI plugin configuration.
Example (adjust IP and CIDR):
bash
sudo kubeadm init \
--apiserver-advertise-address=10.0.10.10 \
--pod-network-cidr=192.168.0.0/16
Kubeadm will print a kubeadm join command at the end. Save it; it contains a bootstrap token and the CA certificate hash.
Configure kubectl access
Kubeadm writes the admin kubeconfig to /etc/kubernetes/admin.conf. To use kubectl as a non-root user:
bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Verify connectivity:
bash
kubectl get nodes
At this point, the control-plane node will often show NotReady because you have not installed a CNI plugin yet. That’s expected.
Install a CNI plugin (Calico example)
Kubernetes does not provide pod networking by itself. The control plane can start, but nodes stay NotReady and CoreDNS remains Pending until a CNI plugin provides pod networking.
Calico can be installed via a standard manifest. The key is ensuring the pod CIDR you chose in kubeadm init matches Calico’s configuration.
Apply the Calico manifest (the example below references the master branch for brevity; for anything you intend to keep, pin the versioned manifest URL from the Calico documentation instead):
bash
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/master/manifests/calico.yaml
Then watch system pods come up:
bash
kubectl get pods -n kube-system -w
Once Calico is running, the control-plane node should become Ready:
bash
kubectl get nodes
This transition—from NotReady to Ready—is a useful early signal that container runtime, kubelet, control-plane components, and pod networking are working together.
Join worker nodes to the cluster
On each worker node, run the kubeadm join command that kubeadm printed during initialization. It will look like:
bash
sudo kubeadm join 10.0.10.10:6443 \
--token <token> \
--discovery-token-ca-cert-hash sha256:<hash>
After joining, return to the control-plane node and confirm the workers register and become ready:
bash
kubectl get nodes
Kubernetes node readiness depends on kubelet health and networking. With containerd configured and Calico running, nodes typically become Ready quickly.
If you lost the join command
If you didn’t save the join command, you can create a new token on the control-plane node:
bash
kubeadm token create --print-join-command
This prints a fresh join command you can run on worker nodes.
Real-world scenario 2: a small on-prem cluster for internal tools
Many organizations start Kubernetes by moving a few internal tools—GitLab Runner, internal APIs, or a metrics exporter—off ad-hoc VMs and into a cluster that standardizes deployment and scaling. The driver is usually operational consistency: one deployment workflow, one place to manage secrets, and a clearer separation between “node maintenance” and “app configuration.”
A basic kubeadm cluster is often the first step. Even if you later migrate to a managed service, you’ll benefit from understanding:
- How kubeadm init maps to control-plane components.
- Why a CNI plugin is mandatory.
- How node join and certificates actually work.
That knowledge makes you faster when incidents happen, because you can distinguish “Kubernetes control plane” failures from “app deployment” issues.
Validate the cluster: core checks that confirm health
Now that the cluster is up, validation should do more than “kubectl works.” You want to prove scheduling, DNS, service routing, and basic pod-to-pod connectivity.
Check kube-system components
Run:
bash
kubectl get pods -n kube-system
You should see, at minimum:
- coredns pods running
- kube-proxy on each node
- Calico components (e.g., calico-node as a DaemonSet)
- Control-plane pods (API server, controller manager, scheduler, etcd) on the control plane
These are not just “background noise.” For example, CoreDNS is what makes service discovery work. If CoreDNS isn’t healthy, your apps can run but won’t resolve service names reliably.
Verify DNS resolution from a test pod
Create a temporary pod and query DNS:
bash
kubectl run -it --rm dns-test --image=busybox:1.36 --restart=Never -- sh
Inside the pod:
sh
nslookup kubernetes.default.svc.cluster.local
Exit the shell to remove the pod.
Verify cross-node networking
If you have multiple nodes, verify pod-to-pod connectivity across nodes by deploying a small DaemonSet or two pods on different nodes. A simple approach is to deploy two pods and ping between them, but note that some images don’t include ping. You can use busybox if your cluster allows ICMP.
Alternatively, validate service connectivity (often more relevant than ICMP).
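A minimal sketch of the cross-node check, assuming two workers named w1 and w2 (as in the example inventory) and that ICMP is permitted between pods:
bash
# Pin one busybox pod to each worker, then ping pod-to-pod across nodes
kubectl run ping-a --image=busybox:1.36 --restart=Never \
  --overrides='{"apiVersion":"v1","spec":{"nodeName":"w1"}}' -- sleep 3600
kubectl run ping-b --image=busybox:1.36 --restart=Never \
  --overrides='{"apiVersion":"v1","spec":{"nodeName":"w2"}}' -- sleep 3600
kubectl wait --for=condition=Ready pod/ping-a pod/ping-b --timeout=120s
B_IP=$(kubectl get pod ping-b -o jsonpath='{.status.podIP}')
kubectl exec ping-a -- ping -c 3 "$B_IP"
kubectl delete pod ping-a ping-b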
Deploy a small application to confirm scheduling and service routing
A basic deployment and service validates the “day 1” developer experience: can workloads schedule, and can they be reached inside the cluster?
Create a namespace to keep test resources isolated:
bash
kubectl create namespace demo
Deploy a simple HTTP server (nginx) and expose it as a ClusterIP service:
bash
cat <<'EOF' | kubectl apply -n demo -f -
apiVersion: apps/v1
kind: Deployment
metadata:
name: web
spec:
replicas: 2
selector:
matchLabels:
app: web
template:
metadata:
labels:
app: web
spec:
containers:
- name: nginx
image: nginx:1.25
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: web
spec:
selector:
app: web
ports:
- port: 80
targetPort: 80
EOF
Confirm pods and service:
bash
kubectl get pods -n demo -o wide
kubectl get svc -n demo
Now access it from inside the cluster using a temporary curl pod:
bash
kubectl run -n demo -it --rm curl --image=curlimages/curl:8.5.0 --restart=Never -- \
curl -sS http://web
If this works, you’ve validated:
- Scheduler placement
- Container runtime execution
- CNI pod networking
- kube-proxy (or equivalent) service routing
- CoreDNS service name resolution (if you used the service name)
This is a more meaningful validation than simply seeing Ready nodes.
Make the cluster usable: taints, scheduling, and basic hygiene
With a single control-plane node, kubeadm typically applies a taint to prevent workloads from being scheduled on it. This is a best practice for production, but for a minimal lab you may want to allow scheduling on the control plane—especially if you only have one node.
Allow scheduling on the control-plane node (optional)
If you are building a single-node cluster (or you want the control plane to run workloads), remove the taint:
bash
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
Do this only if you understand the trade-off: control-plane resources compete with workloads. For a lab, that’s often acceptable.
Enable kubectl bash completion (optional but practical)
This doesn’t change cluster behavior, but it makes administrators faster:
bash
sudo apt-get install -y bash-completion
echo 'source <(kubectl completion bash)' >>~/.bashrc
Operationally, anything that reduces command errors reduces time spent diagnosing self-inflicted problems.
Real-world scenario 3: a branch office cluster with limited staff
Consider a small branch office that needs a local application due to latency constraints—inventory scanning, a print pipeline, or a local API used by kiosks. The site has minimal IT staffing and you want a platform that can be rebuilt quickly after a hardware failure.
A “basic” kubeadm cluster is attractive here because:
- You can document the exact node prep and bootstrap steps.
- You can rebuild a failed worker and rejoin it without rearchitecting the application.
- You can keep the control plane small and stable.
This scenario also highlights why it’s worth making a few careful decisions early, even if your cluster is simple. Picking non-overlapping CIDRs and pinning versions means your rebuild procedures remain consistent, and you avoid surprises when a node OS update changes kernel networking behavior.
Understand what kubeadm changed on your nodes
One reason kubeadm is popular is that it makes Kubernetes feel “installable,” but it’s still valuable to know what it did so you can audit and maintain the cluster.
On the control-plane node, kubeadm:
- Writes manifests under /etc/kubernetes/manifests/ for static pods (API server, controller manager, scheduler, etcd). Kubelet watches this directory and ensures those pods run.
- Places kubeconfigs under /etc/kubernetes/ (including admin.conf).
- Configures certificates under /etc/kubernetes/pki/.
On workers, kubeadm:
- Configures kubelet to talk to the control plane.
- Establishes client certificates for node authentication.
Knowing these paths helps when you need to verify configuration drift, rotate certificates, or understand why a control-plane component is restarting.
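A quick way to see those artifacts on the control-plane node, including certificate lifetimes, which kubeadm can report directly:
bash
ls /etc/kubernetes/manifests/          # static pod manifests
sudo ls /etc/kubernetes/pki/           # cluster CA and component certificates
sudo kubeadm certs check-expiration    # per-certificate expiry report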
Basic security posture for a “minutes” cluster
Even in a lab, you should avoid unnecessary exposure. A basic posture doesn’t mean “insecure,” it means “minimal but intentional.”
Protect the API server endpoint
Your API server listens on TCP 6443. If the control-plane node is reachable from untrusted networks, restrict inbound access at the network level. Kubernetes authentication is strong, but reducing the attack surface is still standard practice.
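For example, if administrators and nodes live on a known subnet (10.0.10.0/24 here is carried over from the example inventory, not a requirement), a host-level rule on the control plane might look like:
bash
# Allow API access only from the trusted subnet, then deny everything else on 6443
sudo ufw allow from 10.0.10.0/24 to any port 6443 proto tcp
sudo ufw deny 6443/tcp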
Treat kubeconfig as a credential
$HOME/.kube/config (or /etc/kubernetes/admin.conf) is effectively admin access. Store it accordingly, don’t email it, and don’t bake it into images. If you administer from your workstation, copy the config securely and lock down file permissions.
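A sketch of doing that over SSH, with hostnames and paths taken from the example environment:
bash
# Copy the admin kubeconfig from cp1 and restrict it to your user
mkdir -p ~/.kube
scp cp1:.kube/config ~/.kube/config-lab
chmod 600 ~/.kube/config-lab
export KUBECONFIG=~/.kube/config-lab
kubectl get nodes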
Avoid running everything as cluster-admin
For a first cluster, you often use the admin kubeconfig to bootstrap, but operationally you’ll want separate, least-privilege identities for CI/CD and read-only users. Kubernetes uses RBAC (role-based access control) to define permissions. Even a basic setup benefits from creating a namespace-scoped role for routine operations.
Here’s a minimal example to grant a user (represented by a certificate subject or external identity) read access in the demo namespace:
bash
cat <<'EOF' | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: demo-read
namespace: demo
rules:
- apiGroups: ["", "apps", "batch"]
resources: ["pods", "services", "deployments", "replicasets", "jobs", "configmaps"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: demo-read-binding
namespace: demo
subjects:
- kind: User
name: demo-reader
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: Role
name: demo-read
apiGroup: rbac.authorization.k8s.io
EOF
In production you’d integrate with an identity provider (OIDC) or manage cert issuance properly, but the key concept is to start thinking in roles and namespaces early.
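You can sanity-check the binding without any identity tooling by asking the API server what that user would be allowed to do:
bash
kubectl auth can-i list pods -n demo --as demo-reader     # expect: yes
kubectl auth can-i delete pods -n demo --as demo-reader   # expect: no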
Operational checks you should automate early
Once the cluster is running, the next “minutes saved” come from standard checks you can run repeatedly. Even if you don’t deploy a full monitoring stack yet, you can script basic health verification.
Quick status snapshot
This snapshot is useful after every change:
bash
kubectl get nodes -o wide
kubectl get pods -A
kubectl get events -A --sort-by=.lastTimestamp | tail -n 25
Events provide near-term diagnostic information without having to jump into logs immediately. For example, image pull errors, scheduling failures, and readiness probe failures show up here.
Verify cluster DNS and service connectivity periodically
A short script can validate DNS and service routing. For example:
bash
cat <<'EOF' > ./cluster-smoke-test.sh
#!/usr/bin/env bash
set -euo pipefail
kubectl get nodes
kubectl get pods -n kube-system | grep -E 'coredns|calico|kube-proxy'
kubectl create ns smoke >/dev/null 2>&1 || true
kubectl -n smoke apply -f - <<'MAN'
apiVersion: v1
kind: Pod
metadata:
name: curl
spec:
restartPolicy: Never
containers:
- name: curl
image: curlimages/curl:8.5.0
command: ["sh", "-c", "sleep 3600"]
MAN
kubectl -n smoke wait --for=condition=Ready pod/curl --timeout=120s
kubectl -n smoke exec curl -- sh -c 'nslookup kubernetes.default.svc.cluster.local'
kubectl -n smoke delete pod curl --ignore-not-found
EOF
chmod +x ./cluster-smoke-test.sh
./cluster-smoke-test.sh
This kind of smoke test is especially helpful in the branch-office scenario, where you want an operator with limited Kubernetes background to have a deterministic “is it basically working?” workflow.
Optional: install a metrics baseline (without overbuilding)
Even basic clusters benefit from resource visibility. The Kubernetes Metrics Server provides CPU/memory metrics to kubectl top. It’s not a full monitoring solution, but it helps you validate scheduling pressure and right-size nodes.
Install Metrics Server using the official manifest for your Kubernetes version (URLs and required flags can change).
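As a hedged sketch, the project publishes a components.yaml with each release; confirm the URL and any extra arguments (for example, lab clusters with self-signed kubelet certificates often need --kubelet-insecure-tls) against the Metrics Server documentation:
bash
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
After installation, verify: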
bash
kubectl top nodes
kubectl top pods -A
If you’re in an environment with strict TLS requirements between nodes and the metrics server, you may need to configure it carefully; the point here is not to force it in, but to acknowledge that “basic operations” quickly require at least some metrics.
Node maintenance basics: what to do before you reboot or patch
Once you’ve created a cluster, routine OS patching and reboots become part of the lifecycle. Kubernetes provides safe patterns to reduce disruption.
Cordon and drain a node
Before rebooting a worker, prevent new pods from scheduling and evict existing pods:
bash
kubectl cordon w1
kubectl drain w1 --ignore-daemonsets --delete-emptydir-data
After maintenance:
bash
kubectl uncordon w1
These commands are fundamental for system engineers: they let you treat Kubernetes nodes like maintainable infrastructure rather than pets.
Control-plane node maintenance
For a single control-plane cluster, draining the control plane can cause more disruption because the API server itself is on that node. In a lab, you may accept downtime; in production you’d build multiple control-plane nodes and a stable endpoint.
Understanding that difference early helps you communicate realistic expectations: “minutes to set up” does not imply “zero downtime upgrades” without additional architecture.
A clear mental model of the moving parts (so you can extend safely)
At this stage you can deploy workloads, but it’s worth connecting the components you touched:
- kubeadm bootstraps the cluster and writes static pod manifests for control-plane components.
- kubelet runs on every node, talks to the API server, and ensures pods run.
- containerd pulls images and runs containers.
- CNI plugin (Calico) assigns pod IPs and provides routing/encapsulation.
- CoreDNS provides service discovery.
- kube-proxy (or an equivalent dataplane in some CNIs) implements Service VIP routing.
This mental model matters when you add features. For example, installing an ingress controller is not “just an app.” It depends on service routing working, and it often depends on node ports or a load balancer integration. Similarly, adding network policies depends on your CNI plugin supporting and enforcing them.
Common extensions once the basic cluster is stable (kept intentionally brief)
After you set up a basic Kubernetes cluster, the next choices depend on your environment. Rather than prescribing one stack, it helps to understand the categories:
You typically add an ingress layer (NGINX Ingress, HAProxy Ingress, or a Gateway API implementation) when you need north-south traffic into services. You add storage (CSI drivers) when you need persistent volumes. You add a GitOps or CI/CD workflow when you want consistent rollouts. You add monitoring and log aggregation when uptime and MTTR matter.
Because each of those can become a full design discussion, the practical takeaway is this: keep your baseline cluster clean and validated, then add one capability at a time, re-running the same smoke tests after each change.
Full command flow recap (for speed and repeatability)
If you want the “minutes” experience after you’ve read the details, you should aim to automate node prep and then run a short, consistent sequence.
On all nodes:
- Disable swap
- Apply sysctl and kernel modules
- Install and configure containerd with systemd cgroups
- Install kubeadm/kubelet (and kubectl where needed)
On control plane:
- kubeadm init with chosen advertise address and pod CIDR
- Configure kubectl
- Install CNI plugin
On workers:
- kubeadm join ...
Then validate:
- kubectl get nodes
- kubectl get pods -n kube-system
- Deploy a small workload and test service access
This recap is not a substitute for the reasoning above; it’s the operational checklist you can turn into an internal runbook or automation script.