When you write C code, you compile it once and then copy the resulting binary to different servers. That way you know the same object is running everywhere. The same idea applies when you combine Infrastructure as Code with Immutable Infrastructure.
Infrastructure as Code is one of the core concepts of modern server management. You define the state of your servers, and the resources on them, in a set of text files. When the configuration management tool runs, the server is set up as if by magic: software is installed, configuration files are created, and permissions are granted, among other things. This method works quite well, but it has some problems. I will describe two of them.
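The declarative model behind these tools can be sketched in a few lines: the tool compares the desired state against the actual state and computes the actions needed to converge. A minimal sketch in Python (the package names and the `installed` set are illustrative, not tied to any real tool):

```python
# Minimal sketch of declarative convergence: compute the difference
# between desired and actual state, and derive the actions to apply.
def converge(desired: set[str], installed: set[str]) -> list[str]:
    actions = [f"install {pkg}" for pkg in sorted(desired - installed)]
    actions += [f"remove {pkg}" for pkg in sorted(installed - desired)]
    return actions

# Hypothetical example: nginx is missing, apache2 is no longer declared.
print(converge({"nginx", "git"}, {"git", "apache2"}))
# ['install nginx', 'remove apache2']
```

Real tools are far richer, of course, but the core loop is the same: declare the end state, let the tool work out the transitions.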
First is the management of state transitions over time. One simple example I’ve faced recently: if you change the location of several files, you have to add a declaration to delete them from the old location. When servers run for a long time, those state transitions pile up, and at some point you lose track of them. If you are managing different environments, you also risk creating differences between them.
Second, provisioning is slow. If you need to provision quickly, because there was an issue or because you need to scale, you have to wait until the server is built.
The solution to these problems is to start managing servers as artifacts, as Immutable Infrastructure. With this method, you build the servers all the time: after each commit to the application code or the infrastructure code, you use tools to bake an image and provision it. This image is then used in the different environments and when you scale up. If you need more environments, or more copies of the servers to handle high load, you just copy the image or artifact.
With immutable servers, state transitions disappear, because you are rebuilding all the time. After each commit, you create an image, deploy it, and test it. Once it is tested, you know the object can be copied N times and it will work as expected.
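Tying each image to the commit that produced it makes the artifact traceable back to the exact code state. A common convention, sketched here with illustrative names, is to tag the image with a shortened commit hash:

```python
# Sketch: derive an immutable image tag from the commit that triggered
# the build. The app name and hash below are hypothetical.
def image_tag(app: str, commit_sha: str, length: int = 12) -> str:
    return f"{app}:{commit_sha[:length]}"

tag = image_tag("webapp", "f4ca9d1b2e8a7c3d5f6a0b1c2d3e4f5a6b7c8d9e")
# 'webapp:f4ca9d1b2e8a'
```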
There are two approaches to Immutable Infrastructure:
Dynamic golden images
In dynamic golden images, each time you commit code you create an instance in your cloud provider, provision it, and then create an image from it. This image is the artifact you deploy to the different environments. In the case of autoscaling, you need to update the autoscaling configuration to use the new image.
Containers
With containers, the concept is similar, but they are lighter and start faster. You provision an image, which is published to a registry. Then you deploy it to run your application.
In both cases, the whole process should be automated end-to-end and be part of your Continuous Delivery pipeline. Your Continuous Integration server should detect changes in the code repositories (application and/or infrastructure) and run a set of scripts to build and publish your images. For deployment there are different options. One of them is blue-green deployment, which launches the new images and, once they are up, stops the old ones. More details about this would require a new article.
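The blue-green idea can be sketched as a simple swap: launch the new version alongside the old one, check its health, then retire the old version only if the new one is up. The `healthy` callable here is a stand-in for a real probe, such as hitting a health endpoint on the new instances:

```python
# Sketch of a blue-green swap. `healthy` stands in for a real health
# probe; the image tags are illustrative.
def blue_green_deploy(running: str, new_image: str, healthy) -> str:
    # Launch the new image next to the one currently serving traffic.
    if not healthy(new_image):
        # The new version failed its checks: keep the old one running.
        return running
    # The new version is up: switch traffic to it and stop the old one.
    return new_image

active = blue_green_deploy("webapp:aaa1111", "webapp:bbb2222",
                           healthy=lambda image: True)
# active == 'webapp:bbb2222'
```

The key property is that the old version keeps serving until the new one has proven itself, so a bad build never takes the service down.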
To wrap up the idea: servers should be managed as artifacts. It doesn’t matter which technology you choose. Bake the images, save them somewhere, and deploy.