Unikernel Deployment

Oakestra supports deploying unikernels built with Unikraft.

What do you need?

It’s simple. All you need to start deploying unikernels is:

  1. A unikernel .tar.gz hosted in a location accessible to the Oakestra worker nodes
  2. A service deployment descriptor
  3. A worker machine with the KVM runtime enabled

Packaging your first Unikraft image

Once you build your first Unikernel using Unikraft, you can create a tarball of the kernel image and the necessary files.

myunikernel.tar.gz
|-- kernel
`-- files1/
    |-- ...
    `-- ...

Your unikernel tarball MUST contain a file named kernel, which is the unikernel image itself. Additionally, it CAN contain a files1 folder, which is mounted by Oakestra as a kernel filesystem at runtime. This folder contains all the additional files required by the unikernel to run.
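For reference, assuming your Unikraft build output has already been renamed to kernel and the additional files live in a files1 directory, such a tarball can be created with a standard tar command:

tar -czvf myunikernel.tar.gz kernel files1/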

Looking for an example?

Check out the Unikraft Nginx walkthrough in the Application Catalogue

Once you’ve packaged your unikernel, you can upload it to a location accessible to Oakestra, such as a web server or a cloud storage service.
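If you just want to test things locally, any static file server will do. For example, a throwaway Python HTTP server started in the directory containing the tarball (the port is arbitrary, and the worker nodes must be able to reach this machine):

python3 -m http.server 8080
# The tarball is then reachable at http://<host address>:8080/myunikernel.tar.gz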

Creating a Service Deployment Descriptor for your Unikernel

Unikernels can be deployed to Oakestra like any other service. You need to create a service deployment descriptor that describes the unikernel service you want to deploy.

Here is an example of an Nginx server built with Unikraft:
unikernel-nginx.json
{
  "sla_version": "v2.0",
  "customerID": "Admin",
  "applications": [
    {
      "applicationID": "",
      "application_name": "nginx",
      "application_namespace": "test",
      "application_desc": "Simple demo of a Unikraft Nginx server",
      "microservices": [
        {
          "microserviceID": "",
          "microservice_name": "nginx",
          "microservice_namespace": "nginx",
          "virtualization": "unikernel",
          "cmd": [
            ""
          ],
          "memory": 400,
          "vcpus": 1,
          "vgpus": 0,
          "vtpus": 0,
          "bandwidth_in": 0,
          "bandwidth_out": 0,
          "port": "9000:80",
          "storage": 0,
          "code": "https://github.com/oakestra/oakestra/releases/download/alpha-v0.4.301/nginx_amd64.tar.gz",
          "arch": [
            "amd64"
          ],
          "state": "",
          "added_files": []
        }
      ]
    }
  ]
}

What’s important to note in this SLA?

  • The virtualization field is set to unikernel
  • The code field contains the URL of the unikernel tarball. This URL must be reachable from the Oakestra worker nodes; here, the Oakestra release page hosts the tarball as an example. You can verify reachability as shown right after this list.
  • The arch field specifies the architecture of the unikernel. In this case, it’s amd64.
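As mentioned in the code bullet above, a quick way to confirm that the tarball URL is usable is to fetch it manually from a worker node (the output path below is just an example):

curl -fSL -o /tmp/nginx_amd64.tar.gz \
  https://github.com/oakestra/oakestra/releases/download/alpha-v0.4.301/nginx_amd64.tar.gz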

The rest of the fields are similar to the ones used for container services. You can then deploy the application as usual using the Oakestra Dashboard, the Oakestra CLI, or the Oakestra APIs.
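If you prefer the APIs, the sketch below posts the descriptor to the root orchestrator with curl. The port, endpoint paths, default credentials, and response field name are assumptions about a typical Oakestra setup and may differ between releases, so verify them against your deployment's API documentation:

# Assumptions: System Manager on port 10000, /api/auth/login and
# /api/application endpoints, default Admin credentials, and a token
# returned in a "token" field. Check your release's API docs first.
TOKEN=$(curl -s -X POST http://<Root Orchestrator IP>:10000/api/auth/login \
  -H "Content-Type: application/json" \
  -d '{"username":"Admin","password":"Admin"}' | jq -r '.token')
curl -X POST http://<Root Orchestrator IP>:10000/api/application \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOKEN" \
  --data @unikernel-nginx.json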

Caution

To deploy your unikernels, you need at least one worker node with the unikernel runtime enabled in one of your clusters.

Enable a KVM Runtime for an Oakestra Worker Node

If your node supports nested virtualization and you have KVM installed, you can enable the KVM runtime for your Oakestra worker node.
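Before enabling it, you can sanity-check that the node actually exposes hardware virtualization and the KVM device. These are standard Linux checks, not Oakestra-specific commands:

egrep -c '(vmx|svm)' /proc/cpuinfo   # a count greater than 0 means VT-x/AMD-V is available
ls -l /dev/kvm                       # should exist once the KVM modules are loaded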

How to install KVM?

For example, if your worker node runs an Ubuntu distro, you can follow this guide to install KVM.

If your infrastructure fulfills all the requirements, all you need to do is:

1: Stop your worker node (if already running)

sudo NodeEngine stop

2: Enable unikernel runtime

sudo NodeEngine config virtualization unikernel on

3: Restart your worker node

sudo NodeEngine -a <Cluster Orchestrator IP> -d

Did you know?

You can check the list of enabled runtimes using:

sudo NodeEngine config virtualization

You can turn off any virtualization runtime using:

sudo NodeEngine config virtualization <unikernel/container> off