Proxy Conversion

To make the Service Layer networking functional, a worker-level tunnel proxy is transparently instantiated as part of the Oakestra network component. The following picture shows an example of what happens in a worker node, based on the IPv4 implementation of the Network Manager component. IPv4 and IPv6 work identically; for simplicity, we stick to IPv4 addresses in this walkthrough.

Example

Let’s assume that we have two worker nodes, namely Node 1 and Node 2, each executing two containers. The containers are instantiated and managed by the Node Engine, while the Network Manager creates a network namespace for each container (the cloud surrounding the container in the picture), enabling the Virtual Layer abstraction.

Figure: Node 1 runs instances X.default.X1.default.0 and X.default.X1.default.1; Node 2 runs Y.prod.Y1.default.0 and X.default.X3.default.0 behind an Edge Proxy. Each node's Net Manager contains an Environment Manager, a ProxyTUN, and a Conversion Table. (1) X1 sends a request to http://10.30.1.30:30443/api/hello; (2) the ProxyTUN incurs a cache miss; (3) it queries the Conversion Table; (4) the table is updated; (5) the Service IP is converted (10.30.1.30 > 10.21.0.1); (6) the packet is tunneled via UDP to 131.1.21.5:55301; (7) the request is delivered to X3 as http://10.21.0.1:30443/api/hello.
The Service Layer abstraction is realized hierarchically through a mechanism of route-interest registration and proxy translation. This section details the proxy translation that transparently converts Service IPs into Namespace IPs, thus enabling the Virtual Layer ↔ Service Layer conversion.
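The conversion step can be illustrated with a small sketch. The following Python snippet is illustrative only (the actual Network Manager is not written this way, and all names here are hypothetical): it models a worker-level conversion table that maps a Round Robin Service IP to the Namespace IPs of the service instances and picks the next instance on each lookup.

```python
import itertools

class ConversionTable:
    """Hypothetical model of the worker-level conversion table."""

    def __init__(self):
        # Service IP -> iterator cycling over instance Namespace IPs
        self._routes = {}

    def update(self, service_ip, namespace_ips):
        # Corresponds to updating the table after a successful query
        self._routes[service_ip] = itertools.cycle(namespace_ips)

    def convert(self, service_ip):
        # Service IP -> Namespace IP, Round Robin over the instances
        if service_ip not in self._routes:
            raise KeyError(f"cache miss for {service_ip}: query the cluster")
        return next(self._routes[service_ip])

table = ConversionTable()
# X3 has a single instance in the example; with two, requests would alternate
table.update("10.30.1.30", ["10.21.0.1"])
print(table.convert("10.30.1.30"))  # -> 10.21.0.1
```

With two or more Namespace IPs registered for the same Service IP, successive `convert` calls rotate through the instances, which is the essence of the Round Robin balancing policy.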

Following the example above, suppose we deployed services X1 and X3 using the following deployment descriptor.

Deployment descriptor for services X1 and X3:
{
  "sla_version" : "v2.0",
  "customerID" : "Admin",
  "applications" : [
    {
      "applicationID" : "",
      "application_name" : "X",
      "application_namespace" : "default",
      "application_desc" : "X application",
      "microservices" : [
        {
          "microserviceID": "",
          "microservice_name": "X1",
          "microservice_namespace": "default",
          "virtualization": "container",
          "code": "docker.io/X/X1",
          "addresses": {
            "rr_ip": "10.30.0.1",
            "rr_ip_v6": "fdff:2000::1"
          }
        },
        {
          "microserviceID": "",
          "microservice_name": "X3",
          "microservice_namespace": "default",
          "virtualization": "container",
          "code": "docker.io/X/X3",
          "addresses": {
            "rr_ip": "10.30.1.30",
            "rr_ip_v6": "fdff:2000::30"
          }
        }
      ]
    }
  ]
}

Therefore, we register two services into the platform, X.default.X1.default and X.default.X3.default. At deployment time, we request two instances of X1 (X.default.X1.default.0 and X.default.X1.default.1) and one instance of X3 (X.default.X3.default.0). The scheduling decision places the instances as shown in the picture. In the deployment descriptor, we asked the platform to provision the Service IP 10.30.1.30 for X3 with a Round Robin balancing policy. Therefore, X1 uses this address to perform load-balanced requests toward X3.
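The fully qualified service names used throughout this page follow the pattern app_name.app_namespace.microservice_name.microservice_namespace, with the instance number appended at runtime. A short sketch of how such names can be derived from a descriptor (a hypothetical helper, not part of the platform code):

```python
def service_names(descriptor):
    """Build <app>.<app_ns>.<svc>.<svc_ns> names from an SLA descriptor."""
    names = []
    for app in descriptor["applications"]:
        for svc in app["microservices"]:
            names.append(".".join([
                app["application_name"],
                app["application_namespace"],
                svc["microservice_name"],
                svc["microservice_namespace"],
            ]))
    return names

# Minimal subset of the descriptor shown above
descriptor = {
    "applications": [{
        "application_name": "X",
        "application_namespace": "default",
        "microservices": [
            {"microservice_name": "X1", "microservice_namespace": "default"},
            {"microservice_name": "X3", "microservice_namespace": "default"},
        ],
    }]
}
print(service_names(descriptor))  # -> ['X.default.X1.default', 'X.default.X3.default']
```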


Interest Registration

Here we show a sequence diagram of how a table query and an interest registration work in the worker-cluster-root hierarchy.

Sequence diagram: Service X1 issues a request to http://10.30.1.30. The ProxyTUN performs a local cache check; on a cache miss, it sends a resolution request to the Environment Manager. The Environment Manager performs a local table check; on a table miss, it sends a table query for 10.30.1.30 to the Cluster Service Manager and registers interest in X.default.X3.default. The Cluster Service Manager performs its own table check; on a miss, it forwards the table query and the interest registration to the Root Service Manager. The Root Service Manager returns the X.default.X3.default table, which propagates back down to the Environment Manager, and the ProxyTUN finally receives the resolved IP.
  • The Environment Manager keeps the interest subscriptions for 10 seconds.
  • If a route is not used for more than 10 seconds, the interest is removed and the corresponding table entry is cleared.
  • A cluster maintains an interest as long as at least one of its worker nodes is interested in that route.
  • A worker node is ALWAYS subscribed to the interests regarding the instances deployed locally.
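The expiry rules above can be sketched as follows. This is illustrative Python with an injectable clock so the 10-second timeout is easy to exercise; it is not the actual Environment Manager code, and all names are hypothetical.

```python
INTEREST_TIMEOUT = 10  # seconds

class InterestTable:
    """Hypothetical model of the worker-level interest subscriptions."""

    def __init__(self, clock):
        self._clock = clock   # callable returning the current time in seconds
        self._last_used = {}  # route name -> last-use timestamp
        self._local = set()   # routes of instances deployed on this worker

    def register_local(self, route):
        # A worker is always subscribed to its locally deployed instances
        self._local.add(route)

    def use(self, route):
        self._last_used[route] = self._clock()

    def expire(self):
        # Drop interests (and their table entries) unused for > 10 seconds;
        # local routes never expire
        now = self._clock()
        for route, last in list(self._last_used.items()):
            if route not in self._local and now - last > INTEREST_TIMEOUT:
                del self._last_used[route]

    def interested(self, route):
        return route in self._local or route in self._last_used

now = [0.0]
table = InterestTable(clock=lambda: now[0])
table.register_local("X.default.X1.default")
table.use("X.default.X3.default")
now[0] = 11.0  # 11 seconds pass without the route being used
table.expire()
print(table.interested("X.default.X3.default"))  # -> False
print(table.interested("X.default.X1.default"))  # -> True
```

At cluster level the same idea applies one step up: the cluster keeps a route alive as long as at least one of its workers still holds an interest in it.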