revise reproducible build project: only care about binary installation

Hannes Mehnert 2022-03-01 23:03:39 +01:00
parent 3304df3a66
commit 055a05b8f4
2 changed files with 112 additions and 93 deletions


title: Projects
---
# Robur Reproducible Builds
In 2021, we at [Robur](https://robur.coop/) worked towards easing the deployment of reproducible MirageOS applications. The work has been funded by the European Union under the [Next Generation Internet (NGI Pointer) initiative](https://pointer.ngi.eu/). The result is [online](https://builds.robur.coop).
The overall goal is to push MirageOS into production in a trustworthy way. We worked on reproducible builds for [opam](https://opam.ocaml.org) packages and [MirageOS](https://mirageos.org) - with the infrastructure being reproducible itself. Reproducible builds are crucial for supply chain security: everyone can reproduce the exact same binary by using the same sources and environment. Without reproducible builds, we would not publish binaries.
Reproducible builds are also great for fleet management: by inspecting the hash of the binary that is executed, we can figure out which versions of which packages are in the unikernel - and suggest updates if newer builds are available or if a used package has a security flaw in that version -- `albatross-client-local update my-unikernel` is everything needed for an update.
Several ready-to-use MirageOS unikernels are built on a daily basis - ranging from [authoritative DNS servers](https://builds.robur.coop/job/dns-primary-git/) ([secondary](https://builds.robur.coop/job/dns-secondary/), [let's encrypt DNS solver](https://builds.robur.coop/job/dns-letsencrypt-secondary/)), [DNS-and-DHCP service (similar to dnsmasq)](https://builds.robur.coop/job/dnsvizor/), [TLS reverse proxy](https://builds.robur.coop/job/tlstunnel/), [Unipi - a web server that delivers content from a git repository](https://builds.robur.coop/job/unipi/), [DNS resolver](https://builds.robur.coop/job/dns-resolver/), [CalDAV server](https://builds.robur.coop/job/caldav/), and of course your own MirageOS unikernel.
[read more](/Projects/Reproducible_builds)

title: Robur Reproducible Builds
---
In 2021, we at [Robur](https://robur.coop/) worked towards easing the deployment of reproducible MirageOS applications. The work has been funded by the European Union under the [Next Generation Internet (NGI Pointer) initiative](https://pointer.ngi.eu/). The result is [online](https://builds.robur.coop).
The overall goal is to push MirageOS into production in a trustworthy way. We worked on reproducible builds for [opam](https://opam.ocaml.org) packages and [MirageOS](https://mirageos.org) - with the infrastructure being reproducible itself. Reproducible builds are crucial for supply chain security: everyone can reproduce the exact same binary by using the same sources and environment. Without reproducible builds, we would not publish binaries.
Reproducible builds are also great for fleet management: by inspecting the hash of the binary that is executed, we can figure out which versions of which packages are in the unikernel - and suggest updates if newer builds are available or if a used package has a security flaw in that version -- `albatross-client-local update my-unikernel` is everything needed for an update.
In the following, we'll explain two scenarios in more detail: how to deploy MirageOS unikernels using the infrastructure we provide, and how to bootstrap and run the infrastructure yourself. Afterwards we briefly describe how to reproduce a package, and what our core packages are and how they relate to each other.
Several ready-to-use MirageOS unikernels are built on a daily basis - ranging from [authoritative DNS servers](https://builds.robur.coop/job/dns-primary-git/) ([secondary](https://builds.robur.coop/job/dns-secondary/), [let's encrypt DNS solver](https://builds.robur.coop/job/dns-letsencrypt-secondary/)), [DNS-and-DHCP service (similar to dnsmasq)](https://builds.robur.coop/job/dnsvizor/), [TLS reverse proxy](https://builds.robur.coop/job/tlstunnel/), [Unipi - a web server that delivers content from a git repository](https://builds.robur.coop/job/unipi/), [DNS resolver](https://builds.robur.coop/job/dns-resolver/), [CalDAV server](https://builds.robur.coop/job/caldav/), and of course your own MirageOS unikernel.
## Brief Robur and MirageOS introduction
[MirageOS](https://mirageos.org) is an operating system, developed in OCaml, which produces unikernels. A unikernel serves a single purpose and is a single process, i.e. it contains only the dependencies it really needs. For example, an OpenVPN endpoint includes neither persistent storage (block device, file system) nor user management. MirageOS unikernels are developed in [OCaml](https://ocaml.org), a statically typed and type-safe programming language - which avoids common pitfalls (spatial and temporal memory safety issues) from the ground up.
[Robur](https://robur.coop) is a collective that develops MirageOS and OCaml software under open-source licenses. It was started in 2017, and is part of the non-profit company [Center for the Cultivation of Technology](https://techcultivation.org). We have received funding from several projects ([Prototype Fund](https://prototypefund.de), [NGI Pointer](https://pointer.ngi.eu)), donations, and commercial contracts.
## Deploying MirageOS unikernels
To run a MirageOS unikernel on your laptop or computer with virtualization extensions (VT-x with KVM on Linux, bhyve on FreeBSD), you first have to install the `solo5-hvt` and `albatross` packages. Afterwards you need to set up a virtual network switch (a bridge interface) where your unikernels communicate, and configure forwarding.
### Host system package installation
For Debian and Ubuntu systems, we provide package repositories. Browse the [dists](https://apt.robur.coop/dists) folder for one matching your distribution, and add it to `/etc/apt/sources.list`:
```
$ wget -q -O - https://apt.robur.coop/gpg.pub | apt-key add - # adds the GnuPG public key
$ echo "deb https://apt.robur.coop ubuntu-20.04 main" >> /etc/apt/sources.list # replace ubuntu-20.04 with e.g. debian-10 on a debian buster machine
$ apt update
$ apt install solo5-hvt albatross
```
On FreeBSD:
```
$ fetch -o /usr/local/etc/pkg/robur.pub https://pkg.robur.coop/repo.pub # download RSA public key
$ echo 'robur: {
url: "https://pkg.robur.coop/${ABI}",
mirror_type: "srv",
signature_type: "pubkey",
pubkey: "/usr/local/etc/pkg/robur.pub",
enabled: yes
}' > /usr/local/etc/pkg/robur.conf # check https://pkg.robur.coop for the available ABIs
$ pkg update
$ pkg install solo5-hvt albatross
```
For other distributions and systems we do not (yet?) provide binary packages. You can compile and install them using [opam](https://opam.ocaml.org) (`opam install solo5 albatross`). Get in touch if you're keen on adding some other distribution to our reproducible build infrastructure.
There is no configuration needed. Start the `albatross_console` and the `albatross_daemon` services (via `systemctl daemon-reload ; systemctl start albatross_daemon` on Linux or `service albatross_daemon start` on FreeBSD). Executing `albatross-client-local info` should return success (exit code 0) and list no running unikernels. You may need to be in the albatross group, or change the permissions of the Unix domain socket (`/run/albatross/util/vmmd.sock` on Linux, `/var/run/albatross/util/vmmd.sock` on FreeBSD).
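On a Linux system with systemd, the first start could look like this (a sketch following the description above; the `albatross` group and the socket path depend on how your distribution packages albatross):
```
$ systemctl daemon-reload                    # pick up the newly installed unit files
$ systemctl start albatross_console albatross_daemon
$ albatross-client-local info                # should exit with code 0 and list no unikernels
$ ls -l /run/albatross/util/vmmd.sock        # inspect ownership and permissions if you get a permission error
$ usermod -aG albatross $USER                # as root: optionally add yourself to the albatross group (log in again afterwards)
```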
To check that albatross works, get the latest hello world unikernel and run it:
```
$ wget https://builds.robur.coop/job/hello/build/latest/bin/hello.hvt
$ albatross-client-local console my-hello-unikernel & # this is sent to the background since it waits and displays the console of the unikernel named "my-hello-unikernel"
$ albatross-client-local create my-hello-unikernel hello.hvt # this returns once the unikernel image has been transmitted to the albatross daemon
$ albatross-client-local create --arg='--hello="Hello, my unikernel"' my-hello-unikernel hello.hvt # executes the same unikernel, but passes the boot parameter "--hello"
$ fg # back to albatross-client-local console
$ Ctrl-C # kill that process
```
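If the hello unikernel is still running, you can list and remove it again with the same client (a short sketch; `destroy` is also used further below):
```
$ albatross-client-local info                        # lists unikernels known to albatross
$ albatross-client-local destroy my-hello-unikernel  # stops and removes the unikernel
```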
Voila, we have a working albatross installation. Albatross also supports a remote client (using a TLS handshake) `albatross-client-bistro`, monitoring of unikernels (`albatross_stat` and `albatross_influx` services), and a TLS endpoint (via inetd with `albatross-tls-inetd`).
### Network for your unikernel
Next we want to set up networking for our unikernels. We use a so-called "bridge" interface for this, which is a virtual network switch to which "tap" interfaces (layer 2 ethernet devices) are connected. A MirageOS unikernel uses tap interfaces for communication. We give our bridge the name "service" (for monitoring and management, for example, you may want to set up another bridge "management").
If you're using a network manager that is capable of setting up bridge interfaces, use it to create the bridge.
If not, on Linux you can add the following to `/etc/network/interfaces` (the reason for adding a dummy interface to the bridge is that otherwise Linux uses the MAC address of the first connected tap interface, which leads to rather confusing issues):
```
auto service
# Host-only bridge
iface service inet manual
    up ip link add service-master address 02:00:00:00:00:01 type dummy
    up ip link set dev service-master up
    up ip link add service type bridge
    up ip link set dev service-master master service
    up ip addr add 10.0.42.1/24 dev service
    up ip link set dev service up
    down ip link del service
    down ip link del service-master
```
On FreeBSD, add the following to `/etc/rc.conf`:
```
cloned_interfaces="bridge0"
ifconfig_bridge0_name="service"
ifconfig_service="inet 10.0.42.1/24"
```
Afterwards, either restart your system or re-run the network service scripts to get the bridge set up on the running system.
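For example (a sketch; assuming `ifupdown` manages `/etc/network/interfaces` on Linux - adjust this to your network manager):
```
$ ifup service            # Linux: create and bring up the bridge defined above
$ service netif restart   # FreeBSD: re-run the rc network scripts (briefly reconfigures all interfaces)
```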
To check that the networking works, get the latest static website unikernel and run it:
```
$ wget https://builds.robur.coop/job/static-website/build/latest/bin/https.hvt
$ albatross-client-local console my-website & # this is sent to the background since it waits and displays the console of the unikernel named "my-website"
$ albatross-client-local create --net=service --arg='--ipv4=10.0.42.2/24' my-website https.hvt # this returns once the unikernel image has been transmitted to the albatross daemon
$ ping 10.0.42.2 # should receive answers
$ open http://10.0.42.2 # in your browser - also https://10.0.42.2 (you'll get a certificate warning)
$ wget http://10.0.42.2/ # should download the Hello Mirage world!
$ wget --no-check-certificate https://10.0.42.2/ # should also download the Hello Mirage world!
$ fg # back to albatross-client-local console
$ Ctrl-C # kill that process
$ albatross-client-local destroy my-website # kills the unikernel
```
When you've reached this point, you have successfully launched a MirageOS unikernel and are able to communicate with it from your computer. This uses the OCaml networking stack and the host's bridge interface.
## Routing and Internet
Your unikernel may want to communicate not only with your host, but also with the Internet. The other way around is also important (the Internet wants to talk with your unikernel).
There are several options, depending on your setup:
- Your unikernel will be masqueraded (using [NAT](https://en.wikipedia.org/wiki/Network_address_translation)) - some ports may be forwarded to the unikernel,
- Your computer has several public IP addresses and there is an external router (put the ethernet device with the cable attached onto the bridge),
- Your computer acts as a router for a subnet.
### NAT
With NAT, your unikernel won't be reachable from the outside (unless you additionally forward ports). You'll need to:
- enable IPv4 forwarding
- add a firewall rule
On Linux:
```
$ echo "1" > /proc/sys/net/ipv4/ip_forward # enables IP forwarding
$ iptables -t nat -A POSTROUTING -o enp0s20f0 -j MASQUERADE # replace enp0s20f0 with your network interface
```
On FreeBSD:
```
$ echo 'gateway_enable="YES"' >> /etc/rc.conf # enables IP forwarding
$ echo 'pf_enable="YES"' >> /etc/rc.conf # enables the packet filter
$ echo "nat pass on em0 inet from 10.0.42.0/24 to any -> (em0)" >> /etc/pf.conf # replace em0 with your ethernet interface
```
### Public IP addresses
To put your unikernels on the same network as your host system, add that external network interface to the bridge:
On Linux, add `up ip link set dev enp0s20f0 master service` in `/etc/network/interfaces` (replace enp0s20f0 with your ethernet interface).
On FreeBSD, add `ifconfig_service="addm em0"` to `/etc/rc.conf` (replace em0 with your ethernet interface).
### Router
Enable IPv4 forwarding (as shown for NAT above), and set up one IP address of the routed subnet on the bridge (replacing the 10.0.42.1/24 above).
## Unikernel execution
Let's test that your unikernels have access to the Internet by using the [traceroute](https://hannes.robur.coop/Posts/Traceroute) unikernel:
```
$ wget https://builds.robur.coop/job/traceroute/build/latest/bin/traceroute.hvt
$ albatross-client-local console my-traceroute & # this is sent to the background since it waits and displays the console of the unikernel named "my-traceroute"
$ albatross-client-local create --net=service --arg='--ipv4=10.0.42.2/24' --arg='--ipv4-gateway=10.0.42.1' my-traceroute traceroute.hvt # the IP configuration depends on your setup, use your public IP address and actual router IP if you've set them up
$ fg # back to albatross-client-local console
$ Ctrl-C # kill that process
```
That's it. Albatross has more features, such as block devices, multiple bridges (for management, private networks, ...), restarting a unikernel if it exited with a specific exit code, and assignment of a unikernel to a specific CPU. It also has remote command execution and resource limits (you can allow your friends to execute a number of unikernels with limited memory and block storage, accessing only some of your bridges). There is a daemon to collect metrics and report them to Grafana (via Telegraf and Influx). MirageOS unikernels also support IPv6, you're not limited to legacy IP.
You can also use `albatross-client-local update` to ensure you're running the latest unikernel - it checks https://builds.robur.coop for the job and suggests an update if a newer binary is available.
We document the commands here; you can always execute them with `--help` to see the man page.
## For someone who wants to build and run MirageOS unikernels
The fundamental tools for building in a reproducible way are orb and builder. On some distributions we provide binary packages ([orb](https://builds.robur.coop/job/orb/), [builder](https://builds.robur.coop/job/builder/)) that you can use. On other distributions you'll need to bootstrap them from source:
- To build in a reproducible way, we developed orb, which is written in OCaml. It is an opam package available at https://github.com/roburio/orb (installation via `opam pin add orb https://github.com/roburio/orb.git`) - once you have OCaml and [opam](https://opam.ocaml.org) installed.
- To build builder, `opam install builder` is all you need to do. `opam install builder-web` will install the latest version of builder-web.
### Setup builder
Builder provides a systemd service (builder) that you should start. There is also a builder-worker service that executes the worker process in a docker container. Check the URLs and configuration in the systemd service files, modify them if necessary using `systemctl edit --full builder-worker.service`, and start them. The provided builder-worker.service builds for Ubuntu 20.04 as of this writing.
For FreeBSD, rc scripts and an example jail.conf (and a shell script to launch it) are provided. Setting up a jail is documented in the README (using poudriere).
### Setup builder-web
Builder-web needs an initial database, an initial user, and also comes with a service script. Use the `builder-db migrate` command to create an initial database, and `builder-db user-add --unrestricted my_user` to create a privileged user `my_user`. Set up your builder to use reproducible packages from builder-web and to upload results there (by setting `--upload https://my_user:my_password@builds.robur.coop/upload`).
### Schedule an orb job
The command `builder-client info` should output the schedule, queues, and running builds. To schedule a daily build, run `builder-client orb-build traceroute traceroute-hvt`. This creates a new job named traceroute, picks up the job template (`/etc/builder/orb-build.template.PLATFORM`), and schedules the job to your worker in order to build the opam package traceroute-hvt.
## Reproducing builds
From a build on https://builds.robur.coop, note the operating system and distribution that was used for it. Go to the specific build and download the "system-packages" file -- these are the exact versions of the host system packages that were used during the build. Make sure they're installed (version variance may lead to non-reproducibility; orb and builder are not needed for a manual rebuild).
Download the build-environment file, which contains all environment variables that were set during the build. Set these, and only these, in your shell.
Install opam (at least version 2.1). Then download the opam-switch file, which includes all opam files and dependencies (including the OCaml compiler). Execute `opam switch import opam-switch --switch reproduced-unikernel`, which creates a fresh opam switch and installs the unikernel in it. The binary ends up in the switch prefix, at `bin/unikernel.hvt`.
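Put together, a manual rebuild could look roughly like this (a sketch; the artifact names are the ones described above, while the switch name `reproduced-unikernel` and the exact commands may need adjusting to your setup):
```
$ # 1. install the host packages listed in the downloaded system-packages file
$ # 2. set exactly the environment variables from the downloaded build-environment file
$ # 3. recreate the switch from the downloaded opam-switch file (needs opam >= 2.1)
$ opam switch import opam-switch --switch reproduced-unikernel
$ sha256sum "$(opam var prefix --switch=reproduced-unikernel)"/bin/*.hvt # compare with the checksum of the original build (use sha256 on FreeBSD)
```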
## Core software components in more detail
### [orb](https://github.com/roburio/orb)
The Opam Reproducible Builder uses the opam libraries to conduct a build of an opam package using any opam repositories. It collects system packages, environment variables, and a full and frozen opam switch export. These artifacts contain the build information and can be used to reproduce the exact same binary.
### [builder](https://github.com/roburio/builder/)
Builder is a suite of three executables: builder-server, builder-worker and builder-client. Together they periodically run scheduled jobs which execute orb, collecting build artifacts and information used for reproducing the build. The builder-worker is executed in a container or jailed environment, and communicates via TCP with the builder-server. The result of the build can be uploaded to builder-web or stored in the file system.
### [builder-web](https://github.com/roburio/builder-web)
Builder-web is a web interface for viewing and downloading builds and build artifacts created by builder jobs. The binary checksums can be viewed and the build inputs (opam packages, environment variables, system packages) can be compared across builds.
It uses [dream](https://github.com/aantron/dream) with sqlite3 as the backend database. The database schema evolved over time; we developed migration and rollback tooling to update our live database.
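For example, to check a downloaded unikernel image against the published build, compare its digest with the checksum shown on the build page (a sketch; `sha256sum` on Linux, `sha256` on FreeBSD):
```
$ wget https://builds.robur.coop/job/hello/build/latest/bin/hello.hvt
$ sha256sum hello.hvt   # must match the checksum displayed for this build on builds.robur.coop
```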
### [albatross](https://github.com/roburio/albatross)
Albatross is an orchestration system for MirageOS unikernels. It manages system resources (tap interfaces, virtual block devices) that can be passed to unikernels. It reads the console output of a unikernel and provides it via a TCP stream. It also offers remote access via TLS, where, apart from inspecting the running state, new unikernels can be uploaded. Albatross integrates with builder-web to look up running unikernels by their hash and optionally update the unikernel binary.
### [solo5](https://github.com/solo5/solo5)
Solo5 is the tender - the application that runs in the host system as a user process, consuming the system resources, and delegating them to the unikernel. This is a pretty small binary with a tiny API between host and unikernel. [A great solo5 overview talk (FOSDEM 2019)](https://archive.fosdem.org/2019/schedule/event/solo5_unikernels/).
## Future
We have enhancements and more features planned for the future. At the same time we are looking for feedback on the reproducible build and unikernel deployment system (from a security perspective, from a devops perspective, etc.). We are also keen to collaborate and would take new people on board.
- Improving the web UI on https://builds.robur.coop/. If you're interested, please get in touch, we have funding available.
- Supporting more distributions: tell us your favourite distribution and how to build a package, then we can integrate that into our reproducible builds infrastructure.
- Supporting spt - the sandboxed process tender - to run unikernels without a hypervisor.
- Data analytics: which system package updates or opam package releases result in variance of the binaries - did the release of an opam package increase or decrease the overall build times?
- Functional and performance tests of the unikernels: for each build, conduct basic functional and performance tests, and graph the results in the output. This also includes data analytics: did the release of an opam package increase or decrease the performance of unikernels?
- Whole system performance analysis with memory profiling, and how to integrate this into a running unikernel.
- MirageOS 4.0 support.
- Metrics and logging collection and dynamic adjustment of metrics and log levels.
- DNS resolver unikernel, still missing DNSSEC support.
Interested? Get in touch with us.