Container Networking Connectivity Models

Application containerization, or lightweight process virtualization, is a method for deploying multiple isolated applications on a single host, all sharing the same operating system (OS). By using application containers, you can run many containerized processes or apps on one machine, each process giving the user the illusion that it runs on a dedicated OS.


In this blog post, we'll give an introduction to this kind of application packaging, covering the main differences between virtual machines (VMs) and containers and describing the basics of container networking models.


Containers vs Virtual Machines

 

VMs were a huge milestone in the IT industry, enabling a big step forward in hardware efficiency. Hypervisors abstract the VMs from the physical hardware underneath, allowing resources to be assigned to VMs on demand. Every time you need a VM, you have to install a brand-new copy of the preferred OS, just as if it were installed on dedicated hardware. Modern hypervisors support resource over-subscription, but that is out of the scope of this introduction to containers.


Containers can be seen as an evolution of VMs for some environments. Instead of virtualizing the physical hardware, the container engine provides an abstraction of the machine's OS to the containerized applications. That kind of virtualization provides equivalent resource isolation with some additional benefits, such as better physical resource efficiency and increased application portability. Each container packages application code, runtime, system tools, libraries, and settings, and runs as an isolated process in user space.



Containers offer many advantages over VMs. They are lighter, smaller in size, and much faster in daily operation:


  • Lightweight: containers require much less compute resources compared to VMs.
  • Scalable: higher resource efficiency, because the number of containers one physical machine can support is orders of magnitude larger than the number of VMs a hypervisor can handle.
  • Fast: booting a container is faster than booting a VM, and creating a container takes less time than creating a brand-new VM.
  • Portable: containers package code and dependencies together, making them very suitable for dynamic development environments with lean development practices.
  • Flexible: containers work on bare metal, cloud providers, VMs, and so on.
  • Isolated: isolation provides security advantages and a smoother transition through the software life cycle.


VMs and containers rely on two different kinds of abstraction: the former run over abstractions at the physical (hardware) layer, while the latter run over abstractions at the application (OS) layer. These two kinds of abstraction can also be combined, in the sense that it's common to run containers inside VMs.


Container Networking

 

We’ll include here a description of the most common types of container connectivity. Each containerization tool may provide different ways to deliver that connectivity, so this is not an exhaustive list. The intent here is to provide some base cases to build upon towards more complex internetworking designs.


Linux containers are based on the namespace and cgroup subsystems (lightweight process virtualization). By default, newly created containers are isolated from each other: each container runs in its own namespaces and has an isolated filesystem. Also by default, a newly created container has no way to communicate with containers on other machines. That’s why container networking is an important part of service delivery.
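To get a feel for what this isolation looks like at the OS level, here is a minimal sketch (not Docker-specific; it assumes a Linux host with the iproute2 tools and root privileges) that creates a bare network namespace by hand:

# Create an empty network namespace by hand
$ sudo ip netns add demo

# Inside the new namespace only an isolated loopback interface exists,
# so there is no connectivity to the host or to other namespaces yet
$ sudo ip netns exec demo ip addr

# Remove the example namespace
$ sudo ip netns delete demo

Container engines automate exactly this kind of plumbing, adding virtual Ethernet pairs, bridges, and routes on top so that containers can actually reach each other.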

Connectivity models

Let’s start with a brief overview of the container connectivity models. For this section, I’ll assume that you have a working Docker setup ready on your system.


By default, the system creates three connectivity modes for us. We can list the existing container networks on the current Docker host by typing the following command:

 

$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
157716241d1d        bridge              bridge              local
9ab30453a4a9        host                host                local
6f8fb6ce5b4f        none                null                local


Docker uses bridge mode as the default connectivity model for new containers:

 

$ docker network inspect bridge
<...>
    "Name": "bridge",
    "Subnet": "172.17.0.0/16",
    "com.docker.network.bridge.name": "docker0",
    "com.docker.network.driver.mtu": "1500"


In case we need a different connectivity model for our application, we can use the network driver that best suits our needs. With the info command we can retrieve a list of the available network plugins in the system:

 

$ docker info
Plugins:
 Network: null bridge host ipvlan macvlan overlay
<...>


This is a brief description of the available plugins in the system:


  • Null/None: the container receives its own network stack, but without external connectivity. Only a loopback interface is available, for testing purposes. This mode is useful for isolating containers.
  • Bridge: this is the default container connectivity model when working with Docker (docker0). The bridge acts like a virtual switch interconnecting every container on the same machine. External connectivity requires NAT.
  • Host: host networking eliminates the need for NAT, improving networking performance. In this mode, the container has access to all of the host’s network interfaces. The drawback of this connectivity model is that it may suffer from port conflicts.
  • Underlay: underlay drivers expose host interfaces directly to containers. This is a simpler model than the bridged counterpart, removing the port-mapping overhead. MACvlan and IPvlan are two common underlay network drivers; both allow the creation of multiple virtual network subinterfaces behind the host’s physical interface. IPvlan shares the same MAC address across all the containers on the same host, so you need to evaluate whether your application needs every container to expose its own MAC address before choosing one or the other (see the short usage sketch after this list).
  • Overlay: overlay connectivity models are used to interconnect containers across containerization hosts. Tunneling technologies allow your network to span many hosts; VXLAN is one of the common technologies available to provide this kind of interconnection.
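Below is a minimal usage sketch for some of these drivers; the alpine image, the parent interface eth0, and the addressing are illustrative assumptions about your environment, not requirements:

# Null/None: a fully isolated container with only a loopback interface
$ docker run --rm --network none alpine ip addr

# Host: the container shares the host's network stack (no NAT, no port mapping)
$ docker run --rm --network host alpine ip addr

# Underlay (MACvlan): create a network bound to the host interface eth0,
# then attach a container directly to that segment
$ docker network create -d macvlan \
    --subnet=192.168.0.0/24 --gateway=192.168.0.1 \
    -o parent=eth0 macvlan_net
$ docker run --rm --network macvlan_net alpine ip addr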



Driver      External connectivity       Encapsulation     Use case                                                   Traffic pattern
Null        No external connectivity    None              Tests, isolation                                           None
Bridge      NAT                         IP                Default connectivity for intra-host communication          N-S
Host        Host gateway                IP                Need full networking control or full performance           N-S
Underlay    Underlay gateway            Single            Containers needing direct underlay connectivity            N-S & W-E
Overlay     No external connectivity    Double (VXLAN)    Container connectivity across different hosts or clouds    W-E



Usually, containerization engines are flexible enough to suit your application’s networking needs. In a software project, the choice of one connectivity model over another may require some time and an analysis of the pros and cons of each approach. For mid- to large-sized projects, it is not rare for the selected connectivity model to evolve over time. That’s why it’s a good idea to start as simple and light as possible and evolve towards a more complex model as your project grows and scalability problems potentially start to arise.

Swarm

In this section we’re going to show a simple example of how you can create an overlay network to interconnect containers hosted on different machines, even when they are deployed on different cloud providers.


Overlay networks have some advantages related to scalability and flexibility, but the increase in east-west traffic between containerized applications means that additional security mechanisms must be in place to ensure the security and visibility of that kind of traffic.


The following list summarizes the steps of the configuration example:


  1. Create a new Swarm Master Node.
  2. Add a new worker to the Swarm.
  3. Create an attachable overlay network, secured on both the control and data planes (based on VXLAN interconnection technology).
  4. Create containers on different hosts and join them to the overlay network.
  5. Test connectivity between the containers.


First, let’s create a new Swarm Master Node, advertising the corresponding listening IP address:


[node1] $ docker swarm init --advertise-addr=192.168.0.5
Swarm initialized: current node (t9y1s3kmvaf99esidbxj8nqz1) is now a manager.
<...>


As a second step, let’s add a new worker to the Swarm. Of course, you can add more workers on demand, but for simplicity we are going to add only one additional worker here. To do so, run the following command on the second host, using the token printed by the Master Node:


[node2] $ docker swarm join --token S...zetqxy8 192.168.0.5:2377
This node joined a swarm as a worker.
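Optionally, you can check from the Master Node that both members are part of the Swarm:

# Run on the manager: node1 (Leader) and node2 (worker) should both be listed as Ready
[node1] $ docker node ls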


Now it’s time to create a new attachable and secure overlay network. The --attachable flag allows standalone containers to connect to the network, and --opt encrypted enables data-plane (VXLAN) encryption; Swarm control-plane traffic is encrypted by default.


[node1] $ docker network create --driver=overlay --attachable --opt encrypted mi-red
6dwgda86kpbffe153uqu2xeco
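If you want to double-check the result, the new network should now show up with the overlay driver and swarm scope. Note that a worker only instantiates the overlay network locally once a container on it attaches to the network:

[node1] $ docker network ls --filter driver=overlay
[node1] $ docker network inspect mi-red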


Create a container on each host and attach it to the overlay network:


[node1] $ docker run -it --name debian-node-1 --network mi-red debian
root@0b3a39784cd3:/# ip a
<...>
13: eth0@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1424 qdisc noqueue state UP group default
    link/ether 02:42:0a:00:00:05 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.0.5/24 brd 10.0.0.255 scope global eth0
       valid_lft forever preferred_lft forever
<...>

[node2] $ docker run -it --name debian-node-2 --network mi-red debian

root@1e630784d238:/# ip a
<...>

13: eth0@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1424 qdisc noqueue state UP group default
    link/ether 02:42:0a:00:00:04 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.0.4/24 brd 10.0.0.255 scope global eth0
       valid_lft forever preferred_lft forever
<...>


Check connectivity:


[node1] root@0b3a39784cd3:/# ping -c2 10.0.0.4
PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.285 ms
64 bytes from 10.0.0.4: icmp_seq=2 ttl=64 time=0.156 ms


Note that the MTU is adjusted to a value lower than 1500 bytes (1424 in this example) to accommodate the VXLAN header and the data-plane encryption overhead.
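Finally, once you are done testing, you can tear everything down. A minimal cleanup sketch could look like this:

# Remove the test containers first, then the overlay network, then dismantle the Swarm
[node1] $ docker rm -f debian-node-1
[node2] $ docker rm -f debian-node-2
[node2] $ docker swarm leave
[node1] $ docker network rm mi-red
[node1] $ docker swarm leave --force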


Summary

 

Container network design is not really any different from classical network design. It faces the same challenges, and the solutions to them can be found in the well-known classical literature, each one with its benefits and trade-offs.


What containers bring to the table is a new model for application automation and deployment. Containerization tools provide a wide range of networking configuration options to design your own user-defined networks. It’s up to you to select the option that best suits your application’s networking needs.


After this brief explanation, we’re ready to dig deeper into more complex network designs with components used in real production deployments, such as load balancers, web servers, databases, security elements, and message queues interacting together. Perhaps this will be covered in a future post.


Thank you very much for reading!