SecShift: Analysis and Conception of Traffic Security for the OpenShift Platform


This post summarizes the results gathered during my diploma thesis, SecShift. It is a continuation of my previous work, Tencrypt. The full PDFs of my thesis (98 pages) and the corresponding presentation (24 slides) are available online.

Are you deploying distributed applications in a public cloud environment – for example OpenShift or Kubernetes? Have you ever wondered how your network traffic could be better secured in foreign infrastructure? If yes, this post is for you!

SecShift proposes an approach to network encryption in shared cloud platforms by implementing an application-agnostic, transparent and distributed encryption overlay for each platform tenant. The original question was simple: is it possible to establish a trusted traffic encryption network among the deployed applications of the same project? Based on the OpenShift platform, which in turn is based on Kubernetes and Docker, the best points of integration were evaluated. The following list of steps best describes the work process behind SecShift:

  1. Define the idea and the overall goal of a solution.
  2. Look into the technical details, achieving a better understanding of the platform.
  3. Find a fitting problem domain which constrains the environment in which further tasks are executed.
  4. Perform threat analysis and threat modelling.
  5. Research existing work and technologies to avoid overlapping solutions.
  6. Generate a detailed design for a solution, with coverage of related aspects.
  7. Create a proof of concept (code) and an evaluation (theory).
  8. Finish the process with a conclusion and a collection of ideas for future work.

A visual and practical demonstration shows best what SecShift does. The five-minute video below presents SecShift in action. Please be aware that the file is 40 MB in size, in case it might drain your mobile data. You might also wonder why the interface wg0 is used between compute nodes – I have written about this node-to-node encrypted network mesh in another post.

The source code of SecShift is also available online in my GitHub repo bitkeks/secshift. Now that you have seen the result, let's have a look at some key parts of the thesis – and if you wish to know more, but don't have time to read the full thesis, the presentation provides a more complete picture!

OpenShift technology stack

First, more detailed knowledge about the existing technology stack of OpenShift must be gathered. For this, multiple layers are examined: the Linux kernel, Docker, Kubernetes and OpenShift on top. The graphic below shows the composition, with each layer and the features it introduces.

Technology layers of OpenShift

The most important insight from this task is the identification of networking endpoints and of how pods use shared Linux namespaces for the network interaction of a group of containers. Network namespaces, automated networking, pods, users/projects and overlay networking are the main focus of the later examination and implementation.
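
To make the shared-namespace insight more tangible, here is a minimal sketch (not taken from the thesis code) of how a process can join a pod's network namespace by resolving the sandbox PID through the Docker API and calling setns on /proc/&lt;pid&gt;/ns/net; the container name is purely illustrative:

```python
import ctypes
import os

import docker  # pip install docker; talks to the local Docker daemon

CLONE_NEWNET = 0x40000000  # constant from <sched.h>
libc = ctypes.CDLL("libc.so.6", use_errno=True)

def enter_pod_netns(container_name: str) -> None:
    """Join the network namespace shared by all containers of a pod."""
    # Every container of a pod references the same network sandbox, so the
    # PID of any member container (or of the "pause" container) works here.
    pid = docker.from_env().containers.get(container_name).attrs["State"]["Pid"]
    fd = os.open(f"/proc/{pid}/ns/net", os.O_RDONLY)
    try:
        if libc.setns(fd, CLONE_NEWNET) != 0:
            raise OSError(ctypes.get_errno(), "setns failed")
    finally:
        os.close(fd)

if __name__ == "__main__":
    enter_pod_netns("my-pod-sandbox")   # hypothetical container name
    os.execvp("ip", ["ip", "address"])  # now lists the pod's interfaces, e.g. eth0
```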

Problem domain and threats

The third and fourth steps then went into theory. This includes the creation of a so-called problem domain, which basically means narrowing down the environment in which further examination takes place. An abstract network topology of a generic OpenShift cluster served as a base. Since SecShift focuses on traffic encryption, the topology could be stripped of components which do not come into contact with any exchange paths of deployed applications, for example the storage. With the shaped topology, the affected traffic routes were identified and marked with weakness IDs (W-n), as shown in the picture below.

Topology of OpenShift and its threats

The weaknesses were also grouped into responsibility groups (Rn), to outline related components and to limit the scope. R1 for example already has existing security features called secured routes. Responsibility group R2 is then the final focus of SecShift, with R3 as part of one design proposal.

For threat modelling, the STRIDE methodology was chosen. STRIDE is a mnemonic for six categories of threats and the security goals they violate. A full list with more detailed descriptions can be found in the thesis' appendix:

  1. Spoofing violates authenticity
  2. Tampering violates integrity
  3. Repudiation violates non-repudiation
  4. Information disclosure violates confidentiality
  5. Denial of service violates availability
  6. Elevation of privilege violates authorisation

Each weakness was analyzed for possible attack vectors according to STRIDE, combined with a ranking from low (one or two threats) to high (five or six). The result showed that W-5 and W-6 are both susceptible to threats in all six categories! An improvement is certainly needed – and SecShift is a practical approach towards one.
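
As a small illustration of this ranking step, here is a sketch; only the W-5/W-6 result is taken from the thesis, while the remaining entries and the middle band are placeholders:

```python
# Map each weakness to the STRIDE categories that apply to it,
# then derive the coarse ranking described above.
STRIDE = {"S", "T", "R", "I", "D", "E"}

threats = {
    "W-5": {"S", "T", "R", "I", "D", "E"},  # all six categories (thesis result)
    "W-6": {"S", "T", "R", "I", "D", "E"},  # all six categories (thesis result)
    "W-1": {"T", "I"},                      # hypothetical example entry
}

def rank(categories: set) -> str:
    """One or two applicable categories rank low, five or six rank high."""
    n = len(categories & STRIDE)
    if n <= 2:
        return "low"
    if n >= 5:
        return "high"
    return "medium"

for weakness, cats in threats.items():
    print(weakness, rank(cats))
```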

Existing technology and research

Of course, SecShift is not the first research project in cloud security, let alone in network traffic security. The field is extremely diverse! As a summary of my findings, let me give you a list of some projects that are referenced in the thesis:

  • tcpcrypt by Andrea Bittau et al.
  • Performance Analysis of VPN Gateways by Maximilian Pudelko
  • Zero Trust Networking as created by John Kindervag and expanded in Zero Trust Networks: Building Trusted Systems in Untrusted Networks by Evan Gilman and Doug Barth
  • The Kubernetes extensions Istio and Envoy Proxy, Cilium and Wormhole
  • WireGuard and other VPN stacks
  • Memory isolation stacks like Intel SGX, Amazon Firecracker and OpenStack's Kata Containers

As always, more can be found in the corresponding chapter in the thesis.

SecShift: topology, daemons and connections

In the demo you have seen the functionality SecShift provides. But how does it work? As shown in the topology below, SecShift introduces two new components: the SecShift tenant node daemon (STNd) and the pod daemon (SPd). In practice, only the STNd is executed as a system daemon; it then starts the pod daemons.

Topology of SecShift

One STNd is started for each project on each node. By querying the pods and services APIs (for IP addresses and metadata) as well as the local Docker daemon (for information about the namespaces), the STNd gathers a list of the pods on its host, switches into each pod's namespace and runs one SPd per local pod.
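
The discovery step could look roughly like the following sketch, assuming the official kubernetes and docker Python clients; the project and node names as well as the container label used for the lookup are assumptions, and the actual STNd implementation in the repo may differ:

```python
from kubernetes import client, config

import docker

def discover_local_pods(project: str, node_name: str) -> list[dict]:
    """List this node's pods of one project, with the PID needed to enter
    each pod's network namespace (as the STNd does before starting SPds)."""
    config.load_kube_config()          # or load_incluster_config() inside the cluster
    core = client.CoreV1Api()
    docker_client = docker.from_env()

    pods = core.list_namespaced_pod(
        namespace=project,
        field_selector=f"spec.nodeName={node_name}",
    )

    result = []
    for pod in pods.items:
        # Hypothetical lookup of the pod's sandbox container via a kubelet label;
        # the real mapping between pod and Docker container may differ.
        containers = docker_client.containers.list(
            filters={"label": f"io.kubernetes.pod.name={pod.metadata.name}"}
        )
        if containers:
            result.append({
                "name": pod.metadata.name,
                "pod_ip": pod.status.pod_ip,
                "netns_pid": containers[0].attrs["State"]["Pid"],
            })
    return result

print(discover_local_pods("myproject", "node-1.example.com"))
```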

The SPd is then responsible for setting up the WireGuard interface with a new secret key and IP address, supplying the STNd with the public key and maintaining the list of peers as provided by the STNd. Each STNd in the meshed network provides the connected STNds with its list of pods, including the public keys.
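
As an illustration of the SPd's tasks, here is a sketch built on the standard ip and wg command-line tools, meant to run inside the pod's network namespace; the interface name wg0, the key file path and the port are assumptions, and the key handling is simplified:

```python
import subprocess

def sh(*cmd: str) -> str:
    """Run a command and return its trimmed stdout."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout.strip()

def setup_wireguard(overlay_ip: str, listen_port: int = 51820) -> str:
    """Create the pod's wg0 interface and return the public key for the STNd."""
    private_key = sh("wg", "genkey")
    public_key = subprocess.run(["wg", "pubkey"], input=private_key,
                                capture_output=True, text=True, check=True).stdout.strip()

    sh("ip", "link", "add", "wg0", "type", "wireguard")
    sh("ip", "address", "add", overlay_ip, "dev", "wg0")
    with open("/tmp/wg0.key", "w") as f:   # key storage simplified for the sketch
        f.write(private_key)
    sh("wg", "set", "wg0", "listen-port", str(listen_port), "private-key", "/tmp/wg0.key")
    sh("ip", "link", "set", "wg0", "up")
    return public_key

def add_peer(public_key: str, overlay_ip: str, endpoint: str) -> None:
    """Install one peer as announced by the STNd."""
    sh("wg", "set", "wg0", "peer", public_key,
       "allowed-ips", f"{overlay_ip}/32", "endpoint", endpoint)
```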

SecShift key exchange

Since a project's pods are distributed over the nodes of the platform, these daemons need a way to communicate with each other – and this is where the secrets API is utilized! The design uses a secret, provided by the Kubernetes API, which is only accessible via a project token. In this secret, each daemon stores where it runs and on which port it is listening for peers. Authentication and authorisation are therefore enforced by the token's scope (improving the daemons' security is future work).
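
A peer announcement through the secrets API could be sketched as follows, using the kubernetes Python client; the secret name and the JSON layout of the entries are assumptions, not the format used in the thesis:

```python
import base64
import json

from kubernetes import client, config

SECRET_NAME = "secshift-peers"   # hypothetical name of the shared project secret

def announce(project: str, node_name: str, host: str, port: int) -> None:
    """Publish this STNd's endpoint in the project-scoped secret."""
    config.load_incluster_config()   # authenticates with the project token
    core = client.CoreV1Api()
    entry = base64.b64encode(json.dumps({"host": host, "port": port}).encode()).decode()
    core.patch_namespaced_secret(
        name=SECRET_NAME,
        namespace=project,
        body={"data": {node_name: entry}},
    )

def read_peers(project: str) -> dict:
    """Return the endpoints announced by all STNds of the project."""
    config.load_incluster_config()
    secret = client.CoreV1Api().read_namespaced_secret(SECRET_NAME, project)
    return {node: json.loads(base64.b64decode(value))
            for node, value in (secret.data or {}).items()}
```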

SecShift daemon peer announcement

I've named this design the hybrid design. It is still very dependent on the central API for the exchange of daemon connection metadata. In an extended version, the decentralised design, the secrets API is dropped and replaced by a third daemon: the SecShift multiplexer daemon (SMd). One instance of it runs on each node and serves as the connection broker between the STNds across nodes. You can find the topology in the thesis.

Design choices and future variants

During the creation of SecShift, a lot of design choices had to be made. Different topics, for example the encryption key exchange mechanism, allowed for different solutions. See the next diagram for an overview.

SecShift design choices, alternatives and variants

A drastic improvement to SecShift would, for example, be the deployment of the daemons as Kubernetes DaemonSets. Using defined networking interfaces like the Container Network Interface (CNI) might also be a promising path, given that it would abstract away the direct modification of Linux namespaces.
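
For the DaemonSet variant, a sketch with the kubernetes Python client might look like this; the image name, the privileged security context and the use of the host PID namespace are assumptions about what such a deployment would need:

```python
from kubernetes import client, config

def create_stnd_daemonset(project: str) -> None:
    """Sketch: run one STNd per node via a DaemonSet instead of a manual start."""
    config.load_kube_config()
    labels = {"app": "secshift-stnd"}
    ds = client.V1DaemonSet(
        metadata=client.V1ObjectMeta(name="secshift-stnd", namespace=project),
        spec=client.V1DaemonSetSpec(
            selector=client.V1LabelSelector(match_labels=labels),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels=labels),
                spec=client.V1PodSpec(
                    host_pid=True,  # needed to enter other pods' namespaces
                    containers=[client.V1Container(
                        name="stnd",
                        image="bitkeks/secshift-stnd:latest",  # hypothetical image
                        security_context=client.V1SecurityContext(privileged=True),
                    )],
                ),
            ),
        ),
    )
    client.AppsV1Api().create_namespaced_daemon_set(namespace=project, body=ds)
```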

Revisited: DNS proxying

The last topic for this post is DNS modification. As you might have read in my article about Tencrypt, DNS is a central feature of Kubernetes and OpenShift with regard to the implementation of services. Services are platform-internal hostnames which resolve to virtual IP addresses, which in turn route packets towards applications in pods according to load balancing algorithms. In OpenShift this is done via NAT rules in iptables.

This approach breaks SecShift's assumption that pods of one project are directly connected to each other. Implementing peering through these abstracted IP addresses is difficult and possibly unreliable: how do you know which peer you are talking to? To solve this problem with services, I opted to re-use the approach chosen in Tencrypt: modifying DNS replies within pods. The numbers in the following flow graph specify the order of execution.

DNS proxy flow graph

Bypassing iptables is of course only a temporary solution. Nevertheless, I really like the idea of pulling the load balancing for remote services into the pod namespace, especially since later versions of OpenShift could potentially drop IPv4 NAT.
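
To make the DNS proxy idea more concrete, here is a minimal sketch of a pod-local resolver that answers queries for known service hostnames with a peer pod IP and forwards everything else to the platform resolver; the dnslib dependency, the upstream address and the hardcoded mapping are assumptions, and the actual SecShift implementation differs in detail:

```python
import socket

from dnslib import A, DNSRecord, QTYPE, RR  # pip install dnslib

UPSTREAM = ("172.30.0.2", 53)  # hypothetical address of the platform resolver
# Hypothetical mapping from service hostnames to directly reachable peer pod IPs.
SERVICE_TO_POD = {"backend.myproject.svc.cluster.local.": "10.128.2.15"}

def serve(listen=("127.0.0.1", 5353)):
    """Answer service lookups with a pod IP, forward everything else upstream."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(listen)
    while True:
        data, client = sock.recvfrom(4096)
        query = DNSRecord.parse(data)
        qname = str(query.q.qname)
        if query.q.qtype == QTYPE.A and qname in SERVICE_TO_POD:
            reply = query.reply()
            reply.add_answer(RR(qname, QTYPE.A, rdata=A(SERVICE_TO_POD[qname]), ttl=30))
            sock.sendto(reply.pack(), client)
        else:
            # Relay the query to the platform resolver and return its answer unchanged.
            sock.sendto(query.send(*UPSTREAM), client)

if __name__ == "__main__":
    serve()
```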

Conclusion

The work covered many more topics than presented in this blog post. For example, the evaluation of performance and functionality is left out entirely. I also used the so-called Systems and software Quality Requirements and Evaluation (SQuaRE) catalogue, an ISO standard providing a list of characteristics for software evaluation. Additionally, the first chapter provides extensive technical insight into Linux (namespaces, cgroups), Docker, Kubernetes and OpenShift.

In the end, SecShift worked as planned. But it is not only one technical solution to a problem; it is a process – as is security in general! Many, many aspects of this work could lead to different results if changed, so I can only recommend that everyone interested in traffic security pick up the process and run it again. You might find new threats, might be able to mitigate weaknesses differently, and possibly improve the architecture. Feedback is very welcome!