Squeezing more from containers

Virtualization of the compute layer was revolutionary for its time. Fast forward five years, and containers took that to an entirely new level. The pace of change after that is the stuff of history, best read from a Wikipedia entry. The theme of that era was horizontal expansion. Kubernetes exploded the number of containers, and as the numbers grew, orchestrating the communication between them became critical. The era before that was all about expanding vertically: more virtual servers per physical server. Did you realize back then that it was horizontal expansion that fuelled the cloud? Vertical expansion is currently fuelling the golden era of AI development and adoption.
We are intentionally flipping between vertical and horizontal expansion to build our narrative. At this point, let us replace the word expansion with its synonym, scaling. Neither scaling approach is Thor's hammer. Just as Thor's hammer is a weapon that helps fight many villains and troubles across the universe, scaling helps us fight many computing and business challenges. But it is not the case that either one of them solves all the challenges. Horizontal scaling works well in some scenarios and typically solves the challenges experienced in customer-facing retail solutions. The same type of scaling, however, introduces unmanageable complexity in high-performance computing scenarios. Some solutions even benefit hugely from horizontal scaling, yet the cost of maintenance (read: taming complexity) can put it out of favour; if you have fewer commercial constraints, vertical scaling could fit the bill.
We found ourselves in one such situation in a recent engagement. We had been supporting the lifecycle of a line-of-business application for a large enterprise. They had recently revamped their business structure and needed a change to the way access was defined within the application. Earlier, the granularity was at the module level; now access checks were needed on individual blocks of information displayed on the screen, turning up the granularity of the permission check. The labor of writing code that performs the check at that granularity could not be optimized much further, since those if-else blocks have to appear at the relevant places in the codebase (see the sketch below). But we could think ahead and build a framework to deal with changes to the permissions behind such access definitions. The framework we had in mind was covered earlier in one of our dispatches: reimagining authorization using the Open Policy Agent (OPA).
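Before we get to OPA, here is a minimal, hypothetical sketch of the block-level checks mentioned above; the names (CanView, the block identifiers) are ours for illustration and not from the actual codebase:

// Hypothetical C# sketch: with module-level granularity one gate sufficed;
// with block-level granularity every block of information on the screen
// needs its own check.
var permissions = new HashSet<string> { "View:OrderHistory" };

bool CanView(string block) => permissions.Contains($"View:{block}");

if (CanView("OrderHistory"))      // block-level check
    Console.WriteLine("render order history");
if (CanView("CreditLimit"))       // another block, another if-else
    Console.WriteLine("render credit limit");
else
    Console.WriteLine("hide credit limit");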
If you recall, OPA uses a domain-specific language called Rego. In that dispatch, we demonstrated how Rego can be used to describe permissions in the form of rules. Let us take a quick look back at that here:
default allow = false

allow = true {
    input.method == "POST"
    input.path == ["AssignUserToRole"]
    input.subject.roles[_] == "Administrator"
}

Now this rule is to be evaluated by the web app at runtime. That is achieved by invoking an HTTP endpoint on OPA and passing a JSON payload that contains the necessary data, in this case the roles of the user. We could let OPA run in daemon mode in one container and the web app in another. If you are a fan of applying microservices everywhere, you would have gone down this path already.
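For reference, here is a minimal sketch of what that runtime check could look like from the .NET web app, using OPA's REST data API. We assume the rule above lives in a package named authz and that OPA listens on its default port 8181; both names are ours for illustration, not from the original policy.

using System.Net.Http.Json;

// Sketch: ask OPA whether the current request is allowed.
// Assumes the policy is loaded under package "authz" and OPA runs on its
// default port 8181 (both assumptions for illustration).
var opa = new HttpClient { BaseAddress = new Uri("http://localhost:8181") };

var payload = new
{
    input = new
    {
        method = "POST",
        path = new[] { "AssignUserToRole" },
        subject = new { roles = new[] { "Administrator" } }
    }
};

// POST /v1/data/<package>/<rule> is OPA's standard query endpoint.
var response = await opa.PostAsJsonAsync("/v1/data/authz/allow", payload);
var verdict = await response.Content.ReadFromJsonAsync<OpaResult>();
bool allowed = verdict?.Result ?? false;

record OpaResult(bool Result);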
Stop for a moment and think. Microservices are good where they are necessary. Is that true for this context? We thought it was not. Here are our reasons:
1. If we run OPA as a daemon in its own container, we will have to pay a performance tax.
2. By making OPA a service, i.e., a separate container, we are inviting other services to consume it as is.
Performance tax
Notice the increase in granularity. With finer granularity, the volume of HTTP requests flowing from the web app to OPA is higher: a screen that once made one module-level check may now make a dozen block-level checks, and each cross-container hop adds network latency that pulls down the overall throughput of the system. The alternative, you ask? Why not put OPA inside the container hosting the web app? We will circle back to the how of that in a while.
OPA as a service
OPA is a general-purpose policy agent. It can help us in many scenarios where policy evaluation is required, so the temptation to make it a shared, separate service is very high. However, some policies are merely functional while others are critical to security. The level of control one must put on policy servers that handle the security of the system is far greater than on those that do not.
Sidecar without overheads
Sidecar is a popular design pattern used when deploying containers. There are two ways to deploy one: spawn it as a process, or run it as a separate container. By our reasoning above, we ruled out the container deployment. We wanted to avoid the performance tax, for one, and to avoid an accidental spider-web of dependencies on the OPA service just because it existed. How many times have we seen this in an enterprise, where a service is consumed "just because it existed"?
This is a cautious step, and we are putting it on the table as an option. Remember, this approach is no Thor's hammer that can be applied everywhere. Now the next big question: how do we do that in Docker?
Dockerfile
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build-env
WORKDIR /App
COPY . ./
RUN dotnet restore
RUN dotnet publish -c Release -o out
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /App
COPY --from=build-env /App/out .
ENTRYPOINT ["dotnet", "webapp.dll"]
The above Dockerfile is a simplified one; we have kept it minimal to focus on the key point. To introduce the sidecar, we bring about a small change: instead of launching the web app directly, the entrypoint runs a small PowerShell startup script (call it start.ps1) that leverages the Start-Job cmdlet to spawn OPA in the background before starting the web app.
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build-env
WORKDIR /App
COPY . ./
RUN dotnet restore
RUN dotnet publish -c Release -o out

FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /App
COPY --from=build-env /App/out .
# Assumption: the opa binary and PowerShell (pwsh) have been added to this image (install steps omitted).
COPY start.ps1 .
ENTRYPOINT ["pwsh", "-File", "start.ps1"]

And start.ps1 itself (the job name OpaServer is our placeholder):

# Spawn OPA as a background job, then run the web app in the foreground.
Start-Job -Name OpaServer -ScriptBlock {
    opa run --server https://yoursecuredomain.com/policybundle/policy.tar.gz
}
dotnet webapp.dll
PowerShell allows you to run a job that spawns a process which can listen on ports. Note that this process will not be accessible from outside the container; only the web app's port is mapped when the container is started with the docker run command (for example, docker run -p 8080:80 webapp publishes the web app while OPA's default port 8181 stays internal). The web app communicates with the OPA server over a localhost address, since within the container the OPA service is directly reachable. This eliminates the cross-container network trips and makes policy evaluation very fast. The HTTPS endpoint from which OPA downloads the policy bundle must be reachable from the container; you could secure it further with signature validation. From an infrastructure point of view, all of that can be done within the script block above.
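One practical wrinkle worth noting: Start-Job returns immediately, so the web app can come up before the OPA server has finished booting. Here is a hedged sketch of a readiness wait the web app could perform at startup, using OPA's built-in /health endpoint; the retry count and delay are arbitrary illustrative values.

using System.Net.Http;

// Sketch: poll OPA's /health endpoint until the sidecar process is ready.
using var http = new HttpClient();
for (var attempt = 0; attempt < 20; attempt++)
{
    try
    {
        var res = await http.GetAsync("http://localhost:8181/health");
        if (res.IsSuccessStatusCode) break;   // OPA is up; start serving
    }
    catch (HttpRequestException)
    {
        // OPA has not bound the port yet; fall through to the delay
    }
    await Task.Delay(500);
}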
With the above Dockerfile we have effectively run two processes in one container, with less overhead and more room to absorb the change request the customer demanded.