So do people containerize databases in production these days? I thought a couple of years ago DBs were the classic example of applications that don't belong in containers.
Depends on the scale. Something small is fine to keep in a container. If you want to push performance to the limits, you'll definitely run your DBMS outside a container.
To host it in an orchestrator, your cluster has to be more available than your DB.
You want three nines of availability for your DBs, maybe more.
Then you need four nines for your cluster/orchestrator.
If your team can build that cluster, it makes more sense to put everything under one roof than to develop a whole new infrastructure with the same level of reliability or more.
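To put concrete numbers on the nines being thrown around here, a quick sanity check (assuming a flat 365-day year; leap days shift the figures slightly):

```python
# Downtime budgets for "N nines" availability, assuming a 365-day year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes(availability: float) -> float:
    """Allowed downtime per year for a given availability fraction."""
    return (1 - availability) * MINUTES_PER_YEAR

print(f"3 nines (99.9%):  {downtime_minutes(0.999):.1f} min/yr")   # ~525.6 min (~8.8 h)
print(f"4 nines (99.99%): {downtime_minutes(0.9999):.1f} min/yr")  # ~52.6 min

# If the DB can only be up while the cluster is up, availabilities multiply:
combined = 0.9999 * 0.999  # cluster at 4 nines, DB at 3 nines
print(f"combined: {combined:.5f}")  # ~0.99890, still roughly 3 nines overall
```

The product in the last line is the crux of the argument: a serial dependency on the cluster can only subtract from the DB's availability, which is why the parent wants the cluster a nine better than the DB.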
This is a persistent myth that is just flat out wrong. Your k8s cluster orchestrator does not need to be online very often at all. The kube-proxies will gladly continue proxying traffic based on the last state they know. Your containers will keep running. Hiccups, or outright outages, in the kube-apiserver do not cause downtime, unless you are using certain terrible, awful, no good, very bad proxies within the cluster (Istio, Linkerd).
Your CONTROL PLANE going down doesn't immediately cause outages.
But if your workloads stop and can't be restarted on the same node, you've got a degradation if not an outage.
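The split both commenters are describing is visible in an ordinary Pod spec (a minimal sketch; the pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pg-example          # hypothetical name
spec:
  restartPolicy: Always     # enforced by the kubelet locally: a crashed
                            # container is restarted on the SAME node even
                            # while the API server is unreachable
  containers:
  - name: postgres
    image: postgres:16
```

What the kubelet cannot do on its own is reschedule: if the node itself dies, moving the workload to another node needs the scheduler and controllers, i.e. the control plane. That's the degradation-vs-outage boundary the comment above draws.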
I also would like to know this, I was just told that databases should be outside the cluster a couple days ago by someone with a decade of K8s experience.
Generally, yes.
Unless you have a dedicated team to do the stuff for you.
Crunchy Data is a good starting point
Supports Kubernetes and Bitnami images now too. :)
Relevant if you didn't already see it: https://news.ycombinator.com/item?id=44608856
Fuck them