I think the t2.medium-or-larger requirement for masters is real, but it comes from the default kube-system pod set, which requests just a bit over one whole CPU. I run a non-HA, single-node kube cluster for my Jenkins on a t2.medium, and after kube-system I still have enough room left over to spin up my single Rails pod with a Postgres sidecar as a Jenkins JNLP slave (no k8s minions required; look up “kubectl taint” for how to remove the dedicated-master taint).
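For reference, this is the sort of command I mean. On newer clusters the taint key may be `node-role.kubernetes.io/control-plane` rather than `.../master`, so check what your node actually carries first:

```shell
# See which taints the master currently has
kubectl describe node <node-name> | grep -A2 Taints

# Remove the master taint from all nodes so regular pods can schedule there
# (the trailing "-" means "remove this taint")
kubectl taint nodes --all node-role.kubernetes.io/master-
```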
I did have to pay attention to resource requests, though: assigning what seemed like a reasonable amount of CPU in my own pods’ requests put me right up against the wall.
I just looked through my (unmodified) kube-system namespace and it’s requesting 1030 millicores, or just a bit more than one core. That’s your t2.medium right there! Ask for fewer cores and you can probably get by with fewer cores. The API server requests the most at 250m, with the controller manager and etcd coming in just behind at 200m each, and a few other kube-system pods requesting between 10m and 150m each.
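You can check the same arithmetic on your own cluster. The per-pod numbers below are illustrative (matching the figures I saw, not guaranteed defaults for your version); in practice you’d pull them with something like `kubectl get pods -n kube-system -o jsonpath='{range .items[*].spec.containers[*]}{.resources.requests.cpu}{"\n"}{end}'` and sum them:

```python
def to_millicores(cpu: str) -> int:
    """Parse a Kubernetes CPU quantity like '250m' or '1' into millicores."""
    return int(cpu[:-1]) if cpu.endswith("m") else int(float(cpu) * 1000)

# Illustrative kube-system requests: apiserver, controller-manager, etcd,
# plus a handful of smaller pods in the 10m-150m range.
requests = ["250m", "200m", "200m", "150m", "100m", "100m", "20m", "10m"]

total = sum(to_millicores(r) for r in requests)
print(f"{total}m total")  # 1030m: just over one core, i.e. your t2.medium
```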
I suspect that if you cut the larger of these CPU requests in half, keeping your own workloads in mind, you could tune a small HA cluster to run on a couple of t2.small k8s masters.
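If you want to try that, on a kubeadm-built cluster the control-plane components are static pods whose manifests live in /etc/kubernetes/manifests on the master, and the kubelet restarts them when a file changes. A sketch of the edit for the API server (the 125m figure is just my “cut it in half” guess, not a value I’ve validated):

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - name: kube-apiserver
    resources:
      requests:
        cpu: 125m   # default was 250m; halved per the suggestion above
```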
I haven’t tried this though, so you’d do it at your own risk!