Our List Of Kubernetes Best Practices — From The Real World
Kubernetes (K8s) is rapidly being adopted and scaled despite its notorious complexity. Kubernetes adoption is inevitable, yet how you will deploy it is still up for grabs. Will K8s be rapidly imposed upon you by trillion-dollar cloud providers pushing to remake global IT infrastructure in their image? Or will adoption take place more slowly, with a rising tide of cloud-native technology surrounding, but never flooding, remaining islands of client-server and mainframe IT? Will Kubernetes-based, multicloud container development platforms help overcome barriers to creating meaningful multicloud integrations? Or will your K8s implementation become cluttered with cloud-specific dependencies just to keep your apps running within targeted service-level agreements? Our Best Practices: Kubernetes report provides some important insights into these and other matters. But ours isn't the only list in town.
There are hundreds, if not thousands, of lists of best practices for Kubernetes. Given its open source origins, the minimalist design of core K8s, and the varying needs for operators, add-ons, proxies, sidecars, custom resource definitions, etc., it is little surprise that the community has risen to the occasion. Where can you find this information?
- The Cloud Native Computing Foundation, the de facto Kubernetes HQ, hosts multiple documents of best practices for various use cases.
- Vendors propose many K8s best practices of their own, often conveniently aligned with their particular products. Beware: Although these can be useful, much of the content is vendor success stories thinly disguised as best practices.
- You can also consult Twitter for some rather hilarious examples of Kubernetes worst practices.
To add to this list, our own advice in Best Practices: Kubernetes was built from interviews and inquiries with major enterprise Kubernetes users. Our goal was to give early Kubernetes adopters real advice from those who came before them, so they can get started with the basics. Although there's something to be learned from upbeat customer case studies, the users we spoke with dispensed with the happy talk and shared what it means to wrestle with Kubernetes from testing into production, and to keep it up and running. A few takeaways from this report:
- DIY K8s isn’t easy. For some organizations, the sheer technical novelty of Kubernetes and cloud native will point them toward low-stakes deployments where delays and downtime are tolerable. In other cases, an enterprise taking on a sweeping IT modernization program may opt to take Kubernetes into the heart of the environment, with expectations of big benefits in the near term. A few well-resourced business and government users can wade into Kubernetes open source and build their own environment from scratch.
- You’ll need to develop at least some Kubernetes expertise. You can sidestep some complexity by using prebuilt Kubernetes operators, curated in an exchange initiated by Red Hat. Nevertheless, you’ll need to dedicate namespaces for databases and configure the relevant components, including ConfigMaps, Secrets, and Services, as well as implement persistent volume management (the first sketch after this list illustrates the idea).
- Identify where Kubernetes security needs to be enhanced. Your team will need to evaluate in detail where the defaults fall short. While Kubernetes is designed to expect transport layer security, some K8s implementations carry unsecured traffic anyway; it's up to you to figure out where (the second sketch after this list shows one cheap place to start looking).
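To make the second takeaway concrete, here is a minimal sketch, using the official Kubernetes Python client, of the kind of per-database setup described above: a dedicated namespace plus the ConfigMap, Secret, Service, and persistent volume claim a database typically needs. The names, labels, and sizes (db-prod, postgres, 10Gi, and so on) are illustrative assumptions, not recommendations from the report.

```python
# Sketch: dedicate a namespace to a database and create its supporting objects.
# Requires the official client: pip install kubernetes
# All names and sizes below (db-prod, postgres, 10Gi, etc.) are illustrative only.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in-cluster
core = client.CoreV1Api()

ns = "db-prod"  # hypothetical namespace dedicated to the database

# Dedicated namespace for the database workload
core.create_namespace(client.V1Namespace(metadata=client.V1ObjectMeta(name=ns)))

# Non-sensitive configuration
core.create_namespaced_config_map(ns, client.V1ConfigMap(
    metadata=client.V1ObjectMeta(name="postgres-config"),
    data={"POSTGRES_DB": "appdb"},
))

# Credentials kept out of the ConfigMap
core.create_namespaced_secret(ns, client.V1Secret(
    metadata=client.V1ObjectMeta(name="postgres-credentials"),
    string_data={"POSTGRES_PASSWORD": "change-me"},
))

# Stable in-cluster endpoint for the database pods
core.create_namespaced_service(ns, client.V1Service(
    metadata=client.V1ObjectMeta(name="postgres"),
    spec=client.V1ServiceSpec(
        selector={"app": "postgres"},
        ports=[client.V1ServicePort(port=5432, target_port=5432)],
    ),
))

# Persistent volume claim so the data survives pod restarts
core.create_namespaced_persistent_volume_claim(ns, client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="postgres-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
))
```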
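For the third takeaway, one cheap place to start looking for unsecured traffic is Ingress resources that terminate no TLS. The sketch below, again using the Python client and assuming cluster access via kubeconfig, flags only that single case; it is a starting point for the evaluation the report describes, not a complete security audit.

```python
# Sketch: flag Ingress hosts that are not covered by any TLS block.
# Catches only one kind of unsecured traffic; a starting point, not an audit.
from kubernetes import client, config

config.load_kube_config()
net = client.NetworkingV1Api()

for ing in net.list_ingress_for_all_namespaces().items:
    tls_hosts = {h for t in (ing.spec.tls or []) for h in (t.hosts or [])}
    rule_hosts = {r.host for r in (ing.spec.rules or []) if r.host}
    uncovered = rule_hosts - tls_hosts
    if uncovered:
        print(f"{ing.metadata.namespace}/{ing.metadata.name}: no TLS for {sorted(uncovered)}")
```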
For more of this users’-eye view of real-life Kubernetes, read the report, or schedule a meeting to discuss.