Application Runtime as Code
Virtualisation revolutionised the way we use datacentres. By applying virtualisation, we could increase the density of compute realised in a unit of space, i.e., a server rack. It also laid the foundation for what was to come next – containerisation. Containerisation is conceptually similar to virtualisation; it differs only in how the end goal is achieved. Many popular articles delve into the differences at length. For the purpose of this dispatch, a simpler explanation suffices: virtualisation accomplishes its goal by virtualising the entire machine stack, from the OS down to the interfaces to hardware such as the hard disk and network interface. Containerisation, on the other hand, virtualises only the runtime required for the application; everything else is left to the host OS to manage for the container.
Between the era of virtualisation hitting its peak and containerisation gaining popularity, the “as Code” concept also evolved. What virtualisation missed – the ability to express the virtual unit as code – became central to shaping containerisation’s evolution.
Key terms
As we developers build more of the world around us using code, we need to recognise some key terms used in the context of containerisation.
Let us first bust the greatest myth: Docker IS NOT containerisation, nor is it a container. Docker is one tool that has been the flag bearer of containerisation as a technology, but it is not the notion of containerisation itself. Popular alternatives such as Podman and rkt exist in the Linux ecosystem.
The next biggest myth: containerisation is not Infrastructure as Code. Containerisation has enabled Infrastructure as Code, but equating IaC with containerisation is, we believe, going overboard.
Container is the first term that comes to mind when talking of containerisation. It is the representation of the runtime that an application requires to function.
Image is a template used to build a container. It defines what tools, libraries and components are required to put the container together so that the application functions. For example, an operating system is the vanilla image with which anyone working with containers will start. Customising that OS further with code is the crux of “coding our way through”.
These two terms are universal across container platforms. We will use Docker to delve deeper, so a few more terms relevant in the Docker context will also come in handy.
Registry, or more aptly a Docker registry, is a directory of all available images. The popular one, connected by default, is Docker Hub. However, anyone can build their own private or public registries where images are stored.
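To make the registry idea concrete, here is a hedged sketch of the typical interactions, assuming Docker is installed and you are logged in to the target registry; the registry host myregistry.example.com and the image names are hypothetical placeholders, not part of the sample project.

```shell
# Fetch an image from a public registry (here, Microsoft's)
docker pull mcr.microsoft.com/dotnet/sdk:6.0

# Re-tag a local image for a hypothetical private registry, then push it
docker tag myapp:latest myregistry.example.com/team/myapp:1.0
docker push myregistry.example.com/team/myapp:1.0
```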
Daemon, also known as the Docker daemon, is the workhorse that does the heavy lifting to deliver the containerisation promise. It is a server that listens for API requests and performs the operations by working with the host OS. Thus, you will need a different daemon for each host OS. We use the term host OS in the same way it is used in virtualisation concepts.
Client, and yes, known as the Docker client, is the command-line tool the developer works with to build the image, interact with the registry and, certainly, with the daemon to instantiate applications in containers.
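The client/daemon split is easy to see for yourself. A quick, hedged illustration, assuming Docker is installed (exact output varies by installation):

```shell
# Prints separate Client and Server (daemon) sections, showing the two
# components the terms above describe
docker version

# Summarises the daemon's view of images, containers and storage
docker info
```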
Now that we have familiarised ourselves with the key terms, let us dive in.
Our focus in this dispatch is on coding our application runtime. Thus, we can pick any working application and will not focus on building one. If you are interested in building your own, feel free to do so; a console application that emits Hello World would also do. In this context it really does not matter. Picking a functional application, however, will help us widen our horizon with containers, as more advanced concepts – like interacting with storage and networking – can be picked up quickly.
We pick a well-established reference repository for this purpose – eShopOnWeb. This is a sample repository from Microsoft that demonstrates its .NET 6 capabilities.
One of the critical components in containerising an application is the Dockerfile. This is a file that resides in the root folder of the application. It instructs the Docker client on what to do to build the image. The Docker client then issues these as commands to the Docker daemon to accomplish the final outcome: a container with the application running within it.
If you have already cloned the code, you will notice there are multiple folders. We focus specifically on src/Web/Dockerfile, which carries the instructions required to build our application’s image.
Here is the content of that Dockerfile –
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /app
COPY *.sln .
COPY . .
WORKDIR /app/src/Web
RUN dotnet restore
RUN dotnet publish -c Release -o out

FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS runtime
WORKDIR /app
COPY --from=build /app/src/Web/out ./
ENTRYPOINT ["dotnet", "Web.dll"]
A quick summary of what we have done here: the target image is built in two stages. In both stages we connect to a public repository of Microsoft and initialise a different base image that contains the dependencies required for the app. We then execute some OS commands to copy the application we have written into the container. Finally, the last line gets the app running, so that whenever a container is created from this image the app is available immediately.
The FROM keyword is used in a Dockerfile to indicate the base image from which our application’s image should be built. In both stages we connect to Microsoft’s public Docker repository. In the first stage we use the .NET 6 SDK image, which comes with the OS and build tooling. In the second stage we use the ASP.NET Core 6 image as the runtime.
If you are wondering what kind of OS is used for our application: at this juncture you do not get to specify it, which works towards your productivity, since that is already handled by the base image you pulled from Microsoft’s public Docker image repository.
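If you do want a say in the underlying OS, the image tag is where it is expressed. A hedged sketch follows; the OS-suffixed tags shown are examples of variants Microsoft publishes, but you should check the registry for the tags actually available to you.

```dockerfile
# Alpine Linux variant of the ASP.NET Core 6 runtime image
FROM mcr.microsoft.com/dotnet/aspnet:6.0-alpine AS runtime

# Ubuntu 20.04 (Focal) variant, as an alternative
# FROM mcr.microsoft.com/dotnet/aspnet:6.0-focal AS runtime
```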
We use the WORKDIR instruction to set the directory that will be used by the subsequent commands. Bear in mind that the OS on which Docker runs has an impact on the path and folder names used in this file. If you are wondering why, in the build stage, we set the working directory to /app/src/Web, we recommend you follow along as-is; the concept will become clearer in our next dispatch, where Docker Compose is discussed.
The COPY command does exactly as it says. The first argument is the source and the second is the destination. The command acts recursively: if there are sub-directories, they are part of the copy too, and no star symbol is required for that. The COPY in the runtime stage is peculiar. It does not copy anything from the developer machine; rather, it copies the output of an earlier stage in the process. The --from argument points to the name of that stage, followed by the folder location within it. The second argument carries the same meaning as before, i.e., the destination.
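The variants above can be sketched side by side; the file and directory names here are hypothetical, chosen only to illustrate the forms.

```dockerfile
# Single file from the build context into the image
COPY app.config /app/

# Directory from the build context, copied recursively - no wildcard needed
COPY src/ /app/src/

# From a named earlier stage of this same build, not from the host machine
COPY --from=build /app/out ./
```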
The RUN keyword, as the name suggests, executes a command in the shell of the image while it is being built. Thus, the argument to RUN is usually a command known to the base OS of the image. In the example above we use it to restore and publish the dotnet app; these are standard dotnet commands to build and publish an application.
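Because each RUN executes at build time and produces an image layer, a common idiom is to chain related commands into one RUN. A hedged sketch for a Debian-based image (not part of the eShopOnWeb Dockerfile; curl is just an example package):

```dockerfile
# One layer: update the package index, install, then clean up the index cache
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*
```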
The last instruction is interesting. ENTRYPOINT sets the default command to execute when the container is instantiated; thus, when the container is started, the application is immediately available for use. If you omit this line, the application will not start on its own – you would have to supply a command yourself when running the container. That said, such entrypoint-less Docker images are great for distributing libraries or for use as base images for other containers.
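The difference shows up at run time. A hedged sketch, assuming the image built from the Dockerfile above has been tagged eshopweb (the tag name is our assumption):

```shell
# With ENTRYPOINT baked into the image, this alone starts the app
docker run eshopweb

# Overriding (or supplying, for an entrypoint-less image) the command yourself
docker run --entrypoint dotnet eshopweb Web.dll
```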
Now, to experience the fruit of the labour, you have to run the docker build and docker run commands in sequence. In the next dispatch we will pick up the reins from here and show how Docker Compose can be used to take this step further in a more automated manner.
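As a hedged sketch of that sequence: run it from the repository root, since the Dockerfile’s COPY paths resolve against the build context. The image tag eshopweb and the host port 8080 are our assumptions; the ASP.NET Core 6 base image listens on port 80 inside the container by default.

```shell
# Build the image using the Web project's Dockerfile, with the repo root as context
docker build -t eshopweb -f src/Web/Dockerfile .

# Run it detached, mapping host port 8080 to the container's port 80
docker run -d -p 8080:80 eshopweb

# The app should then respond on http://localhost:8080
```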