So I’ve spent all Labor Day weekend playing with Deis, and I have to say I am very impressed with the concept, but I am not yet sure it’s ready for my needs. However, we do plan on engaging with Gabriel, so hopefully he can clear up some of the issues I’ve been having. For those of you who are new to Deis, I highly suggest you keep an eye on this project.
The installation is pretty basic. Simply go over to the Deis git repo, visit the contrib/ec2 directory, and follow the directions. You will have a working cluster inside EC2 in no time. At the time of this post I was testing on 0.11.0. Here are the steps I took and the issues I found along the way.
Simple Python Application
First I needed a simple application that I could use for testing purposes, so I rolled a quick and dirty Python app that simply reads a few environment variables. The code can be found here:
This application simply requires that two environment variables are passed.
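A minimal sketch of what such an app looks like is below. The variable names `APP_MESSAGE` and `APP_TARGET` are my illustration, not necessarily the names the original app uses:

```python
import os

def build_greeting(env=os.environ):
    # Hypothetical variable names -- the real app just reads two
    # environment variables and uses their values.
    message = env.get("APP_MESSAGE", "hello")
    target = env.get("APP_TARGET", "world")
    return f"{message}, {target}!"

if __name__ == "__main__":
    print(build_greeting())
```

In Deis you would set these with `deis config:set APP_MESSAGE=... APP_TARGET=... -a hello-world` so the values are injected into each container’s environment.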
Deploy To Deis
Once you have a working Deis cluster inside of your AWS VPC, all you need to do is make sure you have registered as an admin user.
deis register http://deis.yourdomain.com
Next, you need to make sure you have a cluster created.
deis clusters:create dev deis.yourdomain.com --hosts=x.x.x.x --auth ~/.ssh/deis
Once you have a working cluster, you can go ahead and actually create an application.
deis apps:create hello-world --cluster=dev
This will add a git remote named ‘deis’, which allows you to simply push your new Dockerfile and its contents right into Deis. This is a feature that I absolutely love!
Simply use git to deploy your app to Deis:
git push deis master
Scaling Containers In Deis
Next, you probably need to understand how to actually scale up a set of containers in Deis.
deis ps:scale cmd=5 -a hello-world
You should then see output like the following:
=== hello-world Processes
cmd.1 up (v2)
cmd.2 up (v2)
cmd.3 up (v2)
cmd.4 up (v2)
cmd.5 up (v2)
Pushing A New Release
One of the things I love the most about Deis is how easy it is to push out a new release using simple git commands. For instance, if I were to update my code, I simply need to run the following commands:
git commit -m "my latest commit" -a
git push deis master
This pushes the Dockerfile out to the Deis server for building; Deis then pushes the finished Docker image to an internal registry, which holds copies of all the Docker images used in your environment. My only issue with this setup is that, as new containers are being created, users of your website will be served both the old version of the code and the new version.
Issues I’ve Encountered
So while I love the concept of Deis, I have encountered quite a few problems. I found workarounds for all of them, but the operational overhead makes me question how viable a solution this is for me at this time. I say that with the caveat that I haven’t actually spoken to Gabriel yet; I do plan to run these issues by him and get his input. This also isn’t an indictment of the product, and I still have high hopes that Deis is really going to have a big impact on the PaaS market.
Registry Pull Contention
I am not exactly sure whether this is an issue with how Deis creates the registry or an actual limitation of the Docker registry, but it seems as if only a single ‘unit’ can pull down a given Docker image at a time.
Repository 10.x.x.x:5000/hello-world already being pulled by another client. Waiting.
The problem this presents is that when scaling up an application with multiple ‘processes’, it takes quite a while to deploy all the new ‘units’, since each one waits its turn. Imagine having 20 or even 100 ‘processes’ for a given application; waiting on each ‘unit’ serially can take a very long time.
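A quick back-of-the-envelope sketch shows why this hurts. The 60-second per-pull figure below is my assumption for illustration, not a measured number:

```python
# Assumed per-unit image pull time, in seconds (not a measurement).
PULL_SECONDS = 60

def serial_deploy_time(units, pull_seconds=PULL_SECONDS):
    # Each unit waits for the previous pull to finish,
    # so total time grows linearly with unit count.
    return units * pull_seconds

def parallel_deploy_time(units, pull_seconds=PULL_SECONDS):
    # If every unit could pull concurrently, the whole
    # rollout would take roughly one pull, regardless of count.
    return pull_seconds

for units in (5, 20, 100):
    print(units, serial_deploy_time(units), parallel_deploy_time(units))
```

At 100 units, that is the difference between a rollout measured in minutes and one measured in hours.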
No True A/B Deployments
Deis doesn’t do what I would consider true A/B deployments. While working at Pearson, our developers created a tool called ‘Thalassa’ which handled true A/B deployments. The code can be found here:
This allowed for a new set of nodes to be deployed, smoke tested, and then traffic almost instantaneously was migrated to the new set of nodes running newer code.
What Deis does is slightly different. As new ‘units’ are created, they are immediately added to the pool, creating a temporary situation where both the old code and the new code are served up to end users. Only once all the new ‘units’ have been created does Deis delete the old ‘units’, taking the old code out of rotation. As long as your company’s development practices are mature enough to handle this, it isn’t a huge deal, but you must decouple deployments from feature releases (which, as we all know, is one key ideal in DevOps methodologies).
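To make the difference concrete, here is a toy simulation (my own illustration, not Deis code) of a rolling replacement. Until the last unit is replaced, the live pool contains a mix of versions, which is exactly the window where users can hit either codebase:

```python
def rolling_deploy(pool, new_version):
    """Replace units one at a time, yielding the set of live
    versions after each step. This mirrors the rolling behavior
    described above; it is not an actual Deis API."""
    pool = list(pool)
    for i in range(len(pool)):
        pool[i] = new_version
        yield set(pool)

pool = ["v1"] * 4
steps = list(rolling_deploy(pool, "v2"))
# Every intermediate step serves {"v1", "v2"};
# only the final step serves {"v2"} alone.
```

With a true A/B deployment, by contrast, the entire v2 pool is built and smoke tested off to the side, and traffic flips over in a single step.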
Zombie Units
This is the area that causes me the most concern. One issue I’ve run into when scaling down ‘processes’ for a given application, or when deploying new versions of an existing application, is that ‘zombie units’ can sometimes exist. What this means is that Deis reports that a deployment is finished; however, old ‘units’ running old code still exist. This has required me to use fleetctl to discover the old units and terminate them.
The concern here is that it requires a certain level of operational overhead to verify that no ‘zombie units’ are left behind after a scale-down or a new release of software.
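As a rough sketch of that cleanup, here is a small Python helper that parses `fleetctl list-units` output and flags units belonging to an older release. The output shape and the `app_vN.proc.N.service` naming convention are assumptions based on my cluster; adjust the pattern to match yours, then feed each stale name to `fleetctl destroy <unit>`:

```python
import re

def find_stale_units(list_units_output, app, current_version):
    """Return unit names for `app` whose release number is older
    than `current_version`. Assumes Deis-style unit names such as
    'hello-world_v2.cmd.1.service' (an assumption; verify against
    your own fleetctl output)."""
    stale = []
    pattern = re.compile(rf"^({re.escape(app)}_v(\d+)\.\S+)")
    for line in list_units_output.splitlines():
        match = pattern.match(line.strip())
        if match and int(match.group(2)) < current_version:
            stale.append(match.group(1))
    return stale

# Hypothetical sample of `fleetctl list-units` output:
sample = """\
hello-world_v2.cmd.1.service  abc123.../10.0.0.1  active  running
hello-world_v3.cmd.1.service  def456.../10.0.0.2  active  running
"""
print(find_stale_units(sample, "hello-world", 3))
```

Even a small script like this only papers over the problem; ideally Deis itself would guarantee the old units are gone before reporting the deployment finished.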
So in conclusion, I think the concept behind Deis is a great one. Designing self-service infrastructure is something I’ve spent the last three years working on, and having designed a fully self-service, multi-tenant, centrally managed AWS environment at Pearson has led me to investigate how not to reinvent the wheel in my new position. Deis looks hopeful, and maybe I’ll make the time to contribute to the project versus writing blog posts offering up my criticism 🙂