These days I’m working at a client, creating workflows for their state-of-the-art private cloud platform. It’s really quite nice: internal customers can use a web portal to request machines, which are then customized with Puppet runs and workflows that install additional software and perform custom tasks like registering the machine in a CMDB. All this is ideal for running legacy workloads like SQL databases.
Other offerings include ‘PaaS’ workloads for running business applications; for example, developers can request ‘Scale Out’ application servers, meaning two Linux VMs with Tomcat installed behind a load balancer.
The most popular offering by far is the large VM with a preinstalled Docker engine. In fact, they are so popular you might wonder why.
Is it because Developers have an intrinsic desire to create and run Docker containers? Naively, the current hype around containerization in general, and Docker in particular, could indeed be explained that way.
However, if you know Developers a bit, you know that what they really want is to push their code into production every day.
To get to this ideal state, modern development teams adopt Agile, Scrum, and Continuous Delivery. Sadly, the latter in particular usually fails to deliver to production in enterprise IT, giving rise to the ‘waterscrumfall’ phenomenon: Continuous Delivery fails to break through the massive ITIL wall constructed by IT Ops to make sure no changes get through and uptime is guaranteed.
So guess what happens when your Dev/Business teams request the largest possible deployment of a Docker blueprint?
Yep, you’ve just created a massive hole in your precious wall. Your CMDB fills up with ‘Docker machine’ entries, and you’ve lost all visibility into what really runs where on your infrastructure.
Docker in production is a problem masquerading as a solution.
Does this mean containers are evil? Not at all. Containers are the ideal shipping vehicles for code. You just don’t want anyone to schedule them manually and directly. In fact, you don’t even want to create or expose the raw containers, but rather keep them internal to your platform.
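The difference between manual and platform-managed scheduling can be made concrete. The post doesn’t name an orchestrator, so the following is purely illustrative: assuming a Kubernetes-style platform, a developer pushes a declarative spec like this, and the platform, not a human with SSH access, decides where the containers actually run:

```yaml
# Illustrative sketch only: a declarative deployment spec for a
# Kubernetes-style orchestrator. The developer declares WHAT should
# run; the platform decides WHERE it runs. All names below
# (example-app, registry.internal) are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 2                  # the platform keeps two copies running
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: registry.internal/example-app:1.0   # hypothetical image
          ports:
            - containerPort: 8080
```

Contrast this with `docker run` on a pet VM: the spec above lives in version control, Ops keeps visibility of what runs where, and Dev owns the application the spec describes.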
So how do you reap the benefits of containers, stay in control of your infrastructure, and satisfy the needs of your Developers and Business, all at the same time? With a real DevOps enablement platform: a platform that makes clear who is responsible for what - Ops for platform availability, Dev for application availability - and that enables Developers to just push their code.