To make Service Layer networking functional, a worker-level tunnel proxy is transparently instantiated
as part of the Oakestra network component. The following picture shows what happens in a worker node
with the IPv4 implementation of the Network Manager component. Note that
IPv4 and IPv6 work identically; for simplicity, we will stick to IPv4 addresses in this walkthrough.
Let’s assume that we have two worker nodes, namely Node 1 and Node 2, each executing two containers.
The containers are instantiated and managed by the Node Engine, while
the Network Manager creates a network namespace for each container (the cloud surrounding the container),
enabling the Virtual Layer abstraction.
The Service Layer abstraction is realized hierarchically with a mechanism of route interest registration and
proxy translation. This section details the proxy translation that allows transparent
conversion of Service IPs into Namespace IPs, therefore enabling transparent Virtual Layer ↔ Service Layer conversion.
Following the example mentioned above, suppose we deployed services X1 and X3 using the following deployment descriptor.
Therefore, we register two services with the platform: X.default.X1.default and X.default.X3.default.
At deployment time, we request 2 instances of X1 (X.default.X1.default.0 and X.default.X1.default.1)
and one instance of X3 (X.default.X3.default.0). The scheduling decision places the instances as shown in
the picture.
From the deployment descriptor, we asked the platform to provision the Service IP 10.30.1.30 to X3 with Round
Robin policy. Therefore, X1 will use this address to perform load-balanced requests toward X3.
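Since the full descriptor is not reproduced here, below is a minimal sketch of what it could contain. The field names follow Oakestra's SLA descriptor format, but the exact content is an assumption for this walkthrough; only the Round Robin Service IP 10.30.1.30 assigned to X3 comes from the text.

```json
{
  "sla_version": "v2.0",
  "applications": [
    {
      "application_name": "X",
      "application_namespace": "default",
      "microservices": [
        { "microservice_name": "X1", "microservice_namespace": "default" },
        {
          "microservice_name": "X3",
          "microservice_namespace": "default",
          "addresses": { "rr_ip": "10.30.1.30" }
        }
      ]
    }
  ]
}
```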
X1 performs a GET request using the Service IP 10.30.1.30. The default gateway for the 10.0.0.0/8 subnetwork
is the ProxyTUN component of the Network Manager. The request will be directed there.
From an L4 perspective, the packet will look somewhat like this:
The from IP is the Virtual Layer IP, i.e., the Namespace IP of the container. This Namespace IP is assigned to the
veth device used to connect the container namespace to the virtual bridge in the system namespace.
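Concretely, the request described above can be sketched as follows. The Service IP 10.30.1.30 comes from the deployment descriptor; the source Namespace IP and the ports are assumed values for illustration only.

```python
# Sketch of the request as seen at L4, before any proxy translation.
request = {
    "from": ("10.19.1.1", 41000),  # Namespace IP of X1's container (assumed)
    "to": ("10.30.1.30", 80),      # Round Robin Service IP of X3
    "proto": "TCP",
}
```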
When the proxy receives the request packet, it does not yet have an active conversion entry in its cache. This results
in a cache miss. On a cache miss, the proxy tunnel fetches the information required for the conversion from the Environment
Manager. This component keeps track of the services deployed internally on the worker node, as well as the relevant
services deployed on other worker nodes.
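The cache-miss path can be sketched as follows. All names here (`ProxyCache`, the resolver callback) are illustrative stand-ins, not the actual Network Manager API.

```python
# Minimal sketch of the proxy's conversion cache with a cache-miss
# fallback to the Environment Manager.
class ProxyCache:
    def __init__(self, resolver):
        self._entries = {}         # Service IP -> conversion entry
        self._resolver = resolver  # stands in for the Environment Manager

    def resolve(self, service_ip):
        entry = self._entries.get(service_ip)
        if entry is None:                       # cache miss
            entry = self._resolver(service_ip)  # ask the Environment Manager
            self._entries[service_ip] = entry   # fill the cache
        return entry

# Usage: the lambda stands in for the Environment Manager query.
cache = ProxyCache(lambda ip: {"instances": ["10.21.0.1"], "policy": "RR"})
entry = cache.resolve("10.30.1.30")  # first lookup: miss, falls back
```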
This is an example of the Conversion Table maintained by the Environment Manager at this moment.
Each entry of the table keeps the cross-layer information of a service, including the physical layer address and
port, the virtual layer address, and all the service layer addresses. As the number of records is limited, the table
only keeps track of the services currently deployed on this machine. No interest in external services has been
recorded so far.
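A single Conversion Table record could be modeled like this for the local instance X.default.X1.default.0. Only the record structure follows the description above; every concrete address and port is an assumed value for the example.

```python
# Illustrative Conversion Table entry: cross-layer information for one
# locally deployed instance. All concrete values are assumptions.
entry = {
    "job_name": "X.default.X1.default",
    "instance": 0,
    "physical": {"node_ip": "192.0.2.10", "port": 50011},   # assumed
    "virtual": {"namespace_ip": "10.19.1.1"},               # assumed
    "service_ips": [{"ip": "10.30.0.10", "policy": "RR"}],  # assumed
}
```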
Looking up the address 10.30.1.30 in the Conversion Table therefore results in a table miss.
The Environment Manager is then forced to ask the cluster orchestrator for an entry that enables the conversion of
this address.
This operation is called a table query and serves a double purpose: (i) a hierarchical lookup to fetch the
required information; (ii) if the information exists, an interest in that information is registered, so that
any update, such as a service migration or service scaling, results in an update for that table entry.
This is one of the building blocks of the proposed abstraction, and it is detailed in the Interest Registration section.
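The double purpose of the table query can be sketched as follows. The class and method names are illustrative, not the Oakestra implementation.

```python
# Sketch of the table query: a hierarchical lookup that also registers
# interest, so later updates (migration, scaling) are pushed back down.
class ClusterServiceTable:
    def __init__(self, records):
        self._records = records  # Service IP -> list of instance names
        self._interests = {}     # Service IP -> set of interested workers

    def table_query(self, service_ip, worker_id):
        instances = self._records.get(service_ip)
        if instances is not None:
            # Purpose 2: register interest, so this worker is notified
            # whenever the entry changes.
            self._interests.setdefault(service_ip, set()).add(worker_id)
        return instances  # Purpose 1: the lookup result

cluster = ClusterServiceTable({"10.30.1.30": ["X.default.X3.default.0"]})
result = cluster.table_query("10.30.1.30", worker_id="node-1")
```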
The cluster resolved the Service IP 10.30.1.30 into a table entry describing only X.default.X3.default.0
(apparently, no other instances are in the system yet).
The Environment Manager can now answer the proxy with the virtual layer and physical layer addresses that resolve
the previous cache miss, together with the balancing policy metadata associated with the address. In this case, the
response will look like this:
Service IP conversion - from: 10.30.1.30 to 10.21.0.1
Given the resolution details, the proxy uses the balancing policy information to
pick an instance from the instance list and adds an entry to the conversion list.
In this example, the entry will look like this:
Notice how the proxy also replaces the from address with the Instance IP of X1.
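The pick-and-rewrite just described can be sketched in code. Only the Service IP 10.30.1.30 and the Namespace IP 10.21.0.1 come from the text; the other addresses are assumed values for this walkthrough.

```python
import itertools

# Sketch of the proxy's translation: pick one instance with Round Robin
# and rewrite both addresses of the packet.
x3_instances = itertools.cycle(["10.21.0.1"])  # single X3 instance so far

def translate(packet, x1_instance_ip):
    chosen = next(x3_instances)  # Round Robin pick
    return {
        "from": x1_instance_ip,  # Namespace IP replaced by X1's Instance IP
        "to": chosen,            # Service IP replaced by the Namespace IP
    }

translated = translate({"from": "10.19.1.1", "to": "10.30.1.30"}, "10.30.0.10")
```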
In abstract terms, the proxy converts the incoming packet from the form
(Namespace IP of X1 → Service IP of X3)
to
(Instance IP of X1 → Namespace IP of the chosen X3 instance).
The conversion just shown is the key to enabling transparent Service Layer abstraction.
In this step, the proxy tunnel uses the Physical layer information to create a tunnel between Node 1
and Node 2 and forward the packet to the destination machine’s Network Manager.
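The forwarding step can be sketched as an encapsulation of the translated packet toward the destination node's Network Manager. The JSON encapsulation and the tunnel port here are stand-ins; Oakestra's actual tunnel protocol is not reproduced.

```python
import json
import socket

# Sketch: wrap the translated packet and send it over a UDP tunnel to
# the destination node's Network Manager. Addresses and port are assumed.
def forward(packet, node_ip, tunnel_port):
    payload = json.dumps(packet).encode()  # stand-in for real encapsulation
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (node_ip, tunnel_port))
    return len(payload)
```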