---
title: Projects
---
# Robur Reproducible Builds
Over the past year we in [Robur](https://robur.coop/) have been working towards easing the deployment of reproducible MirageOS applications. The work has been funded by the European Union under the [Next Generation Internet (NGI Pointer) initiative](https://pointer.ngi.eu/). The result is [available as a website](https://builds.robur.coop).
The overall goal is to push MirageOS into production in a trustworthy way. We worked on reproducible builds for MirageOS - with the infrastructure being reproducible itself. A handful of core packages are hosted there (described below in this article), alongside several ready-to-use MirageOS unikernels: [authoritative DNS servers](https://builds.robur.coop/job/dns-primary-git/) ([secondary](https://builds.robur.coop/job/dns-secondary/), [let's encrypt DNS solver](https://builds.robur.coop/job/dns-letsencrypt-secondary/)), a [DNS-and-DHCP service (similar to dnsmasq)](https://builds.robur.coop/job/dnsvizor/), a [TLS reverse proxy](https://builds.robur.coop/job/tlstunnel/), [Unipi - a web server that delivers content from a git repository](https://builds.robur.coop/job/unipi/), a [DNS resolver](https://builds.robur.coop/job/dns-resolver/), a [CalDAV server](https://builds.robur.coop/job/caldav/), and of course your own MirageOS unikernel.
Reproducible builds are crucial for supply chain security: everyone can reproduce the exact same binary (by using the same sources and environment). Without reproducible builds we would not publish binaries.
Reproducible builds are also great for fleet management: by inspecting the hash of the binary that is executed, we can figure out which versions of which packages are in the unikernel - and suggest updates if newer builds are available or if one of the packages has a security flaw in that version. `albatross-client-local update my-unikernel` is all that is needed for an update.
In the following, we'll explain two scenarios in more detail: how to deploy MirageOS unikernels using the infrastructure we provide, and how to bootstrap and run the infrastructure yourself. Afterwards we briefly describe how to reproduce a package, and what our core packages are and how they relate to each other.
## For someone who wants to run MirageOS unikernels
To run a MirageOS unikernel on your laptop or computer with virtualization extensions (VT-x, as used by KVM/bhyve), first install solo5-hvt as a [package](https://builds.robur.coop/job/solo5-hvt/) (pick the one that fits your distribution), and [albatross](https://builds.robur.coop/job/albatross/).
No configuration is needed; you should start the `albatross_console` and `albatross_daemon` services (via `systemctl daemon-reload ; systemctl start albatross_daemon` on Linux, or `service albatross_daemon start` on FreeBSD). Executing `albatross-client-local info` should return success (exit code 0) and list no running unikernels. You may need to be in the albatross group, or change the permissions of the Unix domain socket (`vmmd.sock` in `/run/albatross/util/` on Linux, `/var/run/albatross/util/` on FreeBSD).
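As a quick reference, the initial setup on Linux boils down to the following commands (a sketch using the service names above; on FreeBSD, use `service albatross_daemon start` instead):
```
# reload unit files and start the albatross services
systemctl daemon-reload
systemctl start albatross_console albatross_daemon
# sanity check: should exit with code 0 and list no unikernels yet
albatross-client-local info
```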
### Network setup
To set up networking, you need a bridge interface, usually named `service`, that albatross will use for unikernels. To provide network connectivity to that bridge interface, you can either use NAT, forward public IP addresses there, provide a gateway that tunnels via VPN, or add your network interface to the bridge. In the following, we describe the setup in detail on Linux. Get in touch with us if you're interested in other platforms.
Bridge setup on Linux in `/etc/network/interfaces`:
```
auto service
# Host-only bridge
iface service inet manual
up ip link add service-master address 02:00:00:00:00:01 type dummy
up ip link set dev service-master up
up ip link add service type bridge
up ip link set dev service-master master service
up ip link set dev service up
down ip link del service
down ip link del service-master
```
#### Routing of a subnet
If your host system acts as a router for a network, enable IPv4 forwarding (`echo "1" > /proc/sys/net/ipv4/ip_forward`), and set up that IP address on the bridge (`up ip addr add 192.168.0.1/24 dev service`).
#### Physical network interface with IP address space
To put your unikernels on the same network as your host system, add that external network interface to the bridge: `up ip link set dev enp0s20f0 master service`.
#### NAT (no public IP address, e.g. for testing on your laptop)
Set up a private network on the `service` bridge (`up ip addr add 192.168.0.1/24 dev service`), enable IPv4 forwarding (`echo "1" > /proc/sys/net/ipv4/ip_forward`), and add a firewall rule (`iptables -t nat -A POSTROUTING -o enp0s20f0 -j MASQUERADE`).
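Put together, a NAT setup looks roughly like this (a sketch; `enp0s20f0` stands in for your external interface name):
```
# private network on the bridge used by the unikernels
ip addr add 192.168.0.1/24 dev service
# allow the host to forward IPv4 packets
echo "1" > /proc/sys/net/ipv4/ip_forward
# masquerade unikernel traffic leaving via the external interface
iptables -t nat -A POSTROUTING -o enp0s20f0 -j MASQUERADE
```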
### Unikernel execution
Download the [traceroute](https://builds.robur.coop/job/traceroute/) unikernel ([direct link to the unikernel image](https://builds.robur.coop/job/traceroute/build/latest/f/bin/traceroute.hvt)) and run it via albatross: in one shell, observe the console output (`albatross-client-local console traceroute`); in a second shell, create the unikernel (`albatross-client-local create --net=service traceroute traceroute.hvt --arg='--ipv4=192.168.0.2/24' --arg='--ipv4-gateway=192.168.0.1'`).
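The two shells side by side (addresses as in the NAT example above):
```
# shell 1: attach to the unikernel's console output
albatross-client-local console traceroute

# shell 2: create the unikernel on the service bridge
albatross-client-local create --net=service traceroute traceroute.hvt \
  --arg='--ipv4=192.168.0.2/24' --arg='--ipv4-gateway=192.168.0.1'
```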
That's it. Albatross has more features, such as block devices, multiple bridges (for management, private networks, ...), restart on certain exit codes, and pinning to a specific CPU. It also supports remote command execution and resource limits (you can allow your friends to execute U unikernels with M MB memory and B MB block devices accessing your bridges A and B). There is a daemon to collect metrics and report them to Telegraf (to push them into Influx and view them in nice Grafana dashboards). MirageOS unikernels also support IPv6; you're not limited to legacy IP.
You can also use `albatross-client-local update` to ensure you're running the latest unikernel - it checks https://builds.robur.coop for the job and suggests an update if a newer binary is available.
## For someone who wants to build and run MirageOS unikernels
The fundamental tools for building in a reproducible way are orb and builder. On some distributions we provide binary packages ([orb](https://builds.robur.coop/job/orb/), [builder](https://builds.robur.coop/job/builder/)) that you can use. On other distributions you'll need to bootstrap them from source:
- To build in a reproducible way, we developed orb, which is written in OCaml. It is an opam package available at https://github.com/roburio/orb; once you have OCaml and [opam](https://opam.ocaml.org) installed, install it via `opam pin add orb https://github.com/roburio/orb.git`.
- To build builder, `opam install builder` is all you need to do. `opam install builder-web` will install the latest version of builder-web.
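A minimal bootstrap sketch, assuming an initialized opam (2.x) installation:
```
# pin and install orb from its git repository
opam pin add orb https://github.com/roburio/orb.git
# install builder and the builder-web interface
opam install builder builder-web
```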
### Setup builder
On Linux:
Builder provides a systemd service (builder) that you should start. There is also a builder-worker service that executes the worker process in a Docker container. Check the URLs and configuration in the systemd service files, modify them if necessary using `systemctl edit --full builder-worker.service`, and start them. The provided builder-worker.service script builds for Ubuntu 20.04 as of writing.
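A sketch of these steps, assuming the default unit names from our packages:
```
# adjust URLs and configuration of the worker, if necessary
systemctl edit --full builder-worker.service
# enable and start both services
systemctl enable --now builder.service builder-worker.service
```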
On FreeBSD:
For FreeBSD, rc scripts and an example jail.conf (and shell script to launch) are provided. Setting up a jail is documented in the README (using poudriere).
### Setup builder-web
Builder-web needs an initial database and an initial user, and it also has a service script. Use the `builder-db migrate` command to create an initial database, and `builder-db user-add --unrestricted my_user` to create a privileged user `my_user`. Set up your builder to use reproducible packages from builder-web and upload results there (by passing `--upload https://my_user:my_password@builds.robur.coop/upload`).
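In command form (`my_user` is a placeholder; the `--upload` URL from above goes into your builder's configuration):
```
# create the initial database
builder-db migrate
# create a privileged user
builder-db user-add --unrestricted my_user
```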
### Schedule an orb job
The command `builder-client info` should output the schedule, queues, and running builds. To schedule a daily build, run `builder-client orb-build traceroute traceroute-hvt`. This creates a new job named traceroute, picks up the job template (`/etc/builder/orb-build.template.PLATFORM`), and schedules the job on your worker to build the opam package traceroute-hvt.
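For example:
```
# inspect schedule, queues, and running builds
builder-client info
# schedule a daily build of the opam package traceroute-hvt, as job "traceroute"
builder-client orb-build traceroute traceroute-hvt
```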
All commands are documented; you can always execute them with `--help` to see the man page.
## Reproducing builds
From a build on https://builds.robur.coop, select an operating system and distribution that has been used for a build. Go to the specific build and download the "system-packages" file -- these are the exact versions of host system packages that were used during the build. Make sure they're installed (version variance may lead to non-reproducibility; orb and builder themselves are not needed for a manual rebuild).
Download the build-environment file, which contains all environment variables that were set during the build. Set these, and only these, in your shell.
Install opam (at least version 2.1). Then download the opam-switch file, which includes all opam files and dependencies (including the OCaml compiler). Execute `opam switch import opam-switch --switch reproduced-unikernel`, which creates a fresh opam switch and installs the unikernel there. The binary will be located in `opam switch prefix`/bin/unikernel.hvt.
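The full rebuild boils down to roughly this (a sketch, assuming the system packages from the system-packages file are already installed and the downloaded artifact files are in the current directory):
```
# export the recorded environment (assuming one VAR=value per line)
export $(cat build-environment)
# import the frozen switch; this rebuilds and installs the unikernel
opam switch import opam-switch --switch reproduced-unikernel
# the rebuilt binary ends up in the switch's bin/ directory;
# compare its hash against the one published on builds.robur.coop
```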
## Core software components in more detail
### [orb](https://github.com/roburio/orb)
The Opam Reproducible Builder uses the opam libraries to conduct a build of an opam package using any opam repositories. It collects system packages, environment variables, and a full and frozen opam switch export. These artifacts contain the build information and can be used to reproduce the exact same binary.
### [builder](https://github.com/roburio/builder/)
Builder is a suite of three executables: builder-server, builder-worker and builder-client. Together they periodically run scheduled jobs which execute orb, collecting build artifacts and information used for reproducing the build. The builder-worker is executed in a container or jailed environment, and communicates via TCP with the builder-server. The result of the build can be uploaded to builder-web or stored in the file system.
### [builder-web](https://github.com/roburio/builder-web)
Builder-web is a web interface for viewing and downloading builds and build artifacts created by builder jobs. The binary checksums can be viewed and the build inputs (opam packages, environment variables, system packages) can be compared across builds.
It uses [dream](https://github.com/aantron/dream) with sqlite3 as the backend database. The database schema evolved over time; we developed migration and rollback tooling to update our live database.
### [albatross](https://github.com/roburio/albatross)
Albatross is an orchestration system for MirageOS unikernels. It manages system resources (tap interfaces, virtual block devices) that can be passed to the unikernels. It reads the console output of a unikernel and provides it via a TCP stream. It also provides remote access via TLS, which allows not only inspecting the running unikernels but also uploading new ones. Albatross integrates with builder-web to look up running unikernels by their hash and optionally update the unikernel binary.
### [solo5](https://github.com/solo5/solo5)
Solo5 is the tender - the application that runs in the host system as a user process, consuming the system resources, and delegating them to the unikernel. This is a pretty small binary with a tiny API between host and unikernel. [A great solo5 overview talk (FOSDEM 2019)](https://archive.fosdem.org/2019/schedule/event/solo5_unikernels/).
## Future
We have enhancements and more features planned for the future. At the same time we are looking for feedback on the reproducible build and unikernel deployment system (from a security perspective, from a devops perspective, etc.). We are also keen to collaborate and would take new people on board.
- Improving the web UI on https://builds.robur.coop/. If you're interested, please get in touch, we have funding available.
- Supporting more distributions: tell us your favourite distribution and how to build a package, then we can integrate that into our reproducible builds infrastructure.
- Supporting spt - the sandboxed process tender - to run unikernels without a hypervisor.
- Data analytics: which system packages updates or opam package releases result in variance of the binaries - did the release of an opam package increase or decrease the overall build times?
- Functional and performance tests of the unikernels: for each build, conduct basic functional and performance tests, and graph the results in the output. This also includes data analytics: did the release of an opam package increase or decrease the performance of unikernels?
- Whole system performance analysis with memory profiling, and how to integrate this into a running unikernel.
- MirageOS 4.0 support.
- Metrics and logging collection and dynamic adjustment of metrics and log levels.
- A DNS resolver unikernel; DNSSEC support is still missing.
Interested? Get in touch with us.
# Bitcoin Piñata
The [Bitcoin Piñata](http://ownme.ipredator.se) is a transparent [bug bounty](https://en.wikipedia.org/wiki/Bug_bounty_program): it holds the private key for a bitcoin wallet. It is a [MirageOS unikernel](/Our%20Work/Technology-Employed#MirageOS) designed to test our TLS and all underlying transport implementations.
Its open communication channels are HTTP and HTTPS, and a TLS client and TLS server endpoint, all written in [OCaml](/Our%20Work/Technology-Employed#OCaml). The cryptographic material for TLS is generated on startup in the Piñata and is supposed to never leave it. However, if an attacker manages to establish a mutually authenticated (using certificates) TLS channel, the private key to the bitcoin wallet is transmitted over this channel, and the attacker gains access to the bait (the bitcoins).
The project was [launched](https://mirage.io/announcing-bitcoin-pinata) on February 10th, 2015. At that time, friends from the IPredator project lent us 10 bitcoins (back then worth ~2000 EUR) as the bait. By 2018 no one had successfully cracked the Piñata, and the bitcoins, by this point worth ~200 000 EUR, were repurposed for other projects. However, the project remains live, with a small amount of bitcoin in it, for anyone wishing to try to crack it.
[Hannes Mehnert](/About%20Us/Team) and David Kaloper-Meršinjak designed the Bitcoin Piñata to attract security professionals to look into our [TLS](https://en.wikipedia.org/wiki/Transport_Layer_Security) stack, developed purely in OCaml since early 2014.
#### More technical information:
On startup, the Piñata generates its certificate authority on the fly, including certificates and private keys; this means that only the Piñata itself contains private keys which can authenticate successfully.
The codebase is [thousands of lines of code smaller](/Our%20Work/Technology-Employed#MirageOS) than equivalent implementations, and we did not use many external libraries; those we had to use, we read carefully to avoid security issues.
The attack surface is any part of the unikernel (or its deployment): anything that allows you to obtain a valid certificate (signed by the cryptographic material which shouldn't leave the Piñata), to read the memory location where the private key to the bitcoin wallet is stored, or to exploit a flaw in any software layer (OCaml runtime, virtual network device, TCP/IP stack, TLS library, X.509 validation, or elsewhere) - or anything else.
By using a Bitcoin wallet, the Piñata is a transparent bug bounty. Everybody can observe (by looking into the blockchain) whether it has been compromised and the money has been transferred to another wallet. It is also self-serving: when an attacker discovers a flaw, they don't need to fill out any forms to retrieve the bounty; instead, they can take the wallet, no questions asked.
The source code of the Piñata is [open source](https://github.com/mirleft/btc-pinata) and even the running binary (without the private bitcoin wallet key) is published in the git repository.
Further links about the Bitcoin Piñata:
- [Statistics after 5 months](https://mirage.io/blog/bitcoin-pinata-results)
- [Post about whacking the pinata](https://somerandomidiot.com/blog/2018/04/17/whacking-the-bitcoin-pinata/)
- [Evaluation 3 years later](https://hannes.nqsb.io/Posts/Pinata)
- [Usenix security research paper on TLS stack](https://usenix15.nqsb.io)
# CalDAV Server
CalDAV is a protocol for synchronizing calendars. Our goal was to develop a calendar server that is robust and less complex to configure and maintain than other services, allowing more people to run their own digital infrastructure.
The CalDAV server project began in 2017 when [Stefanie Schirmer and Hannes Mehnert](/About%20Us/Team) got a grant from [The Prototype Fund](https://prototypefund.de) for developing a CalDAV server (RFC 4791) over a period of 6 months. After that funding period, [Tarides](https://tarides.com) sponsored us further, allowing us to achieve the current status of the CalDAV server.
Currently all basic features of a calendar are implemented - for example, "please add this event to the calendar" and "modify the weekly meeting from now on to start half an hour later" - and it is tested with a variety of CalDAV clients.
[calendar.robur.coop](https://calendar.robur.coop) is a live test server with the [calDavZAP](https://www.inf-it.com/open-source/clients/caldavzap/) web user interface; try it with any user and any password (first come, first served). It persists to a remote git repository.
Our CalDAV server has a very small codebase, which provides a number of security benefits. It stores all data in git, so there is a history of changes: data can easily be exported and converted to and from other formats, and if a client behaves badly (e.g. by removing entries it cannot deal with), this can be tracked and reverted.
We would like to develop the CalDAV server further, adding notifications about updates and invitations sent via Mail (as described in [RFC 6638](https://tools.ietf.org/html/rfc6638)). We also aim to integrate the related protocol CardDAV (address book), which could be integrated into the same unikernel.
If you are interested in supporting further work on the CalDAV server through a [donation](/Donate), with a grant, or require additional features to be implemented to accommodate your project's needs, please [get in touch with us](/Contact)!
#### More technical information:
Based on existing HTTP and REST libraries, we developed a WebDAV library adequate for CalDAV (e.g. no locks, but ACLs and reports). The REST library needed some extensions to support the additional HTTP verbs used in WebDAV. We also developed an iCalendar (.ics, RFC 5545) decoder and encoder.
The CalDAV unikernel uses the developed libraries to serve requests from clients, which may be "what is the last update to the calendar?", adding events, or changing the times of events. Access control is done via metadata attached to the resource (i.e. only user Tim may access Tim's calendar; groups are possible). User and group data (principals, in WebDAV lingo) are stored in the same storage as the calendars. For enrolling new users and groups, we provide HTTP endpoints. The administrator password is provided as a boot argument.
The storage of our CalDAV unikernel can use any [mirage-kv](https://github.com/mirage/mirage-kv) implementation, be it an in-memory store (not too useful for a real calendar since it is not persistent), a UNIX file system (if the unikernel runs as a UNIX process), or a remote git repository (which we recommend).
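Deployment then works like any other unikernel under albatross. A sketch (the argument names and URL here are illustrative - consult the unikernel's `--help` for the actual boot arguments):
```
# create the caldav unikernel on the service bridge, persisting to a
# remote git repository (illustrative URL and argument names)
albatross-client-local create --net=service caldav caldav.hvt \
  --arg='--remote=https://user:token@git.example.org/calendar-data.git' \
  --arg='--admin-password=use-a-strong-password'
```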
The CalDAV server is open source, with the code available on GitHub for the [server](https://github.com/roburio/caldav) and [calendar](https://github.com/roburio/icalendar).
# DNS
The Domain Name System is used like a phone book for the internet - it translates human-memorable domain names (e.g. robur.coop) to machine-routable IP addresses (e.g. 198.167.222.215) and other records, such as where email should be sent. DNS is a fault-tolerant, hierarchical, decentralized key-value store with caching. DNS has been deployed on the Internet since 1987.
On the one side, the authoritative server, which has delegated responsibility for a domain, provides the mapping information (i.e. that a certain IP address is the right one for a certain domain); on the other side, a resolver figures out which server to query for each request a client makes.
Since 2017 we have developed a DNS server, resolver, and client as a spare-time project. They serve different purposes in our ecosystem: the server is used by domains such as nqsb.io and robur.coop as an authoritative server; we use a caching resolver for our bi-annual hack retreats in Marrakesh; and the client is used by any MirageOS unikernel that needs to resolve domain names.
When developing this project we carefully considered which elements were strictly required and have ensured a minimal codebase, providing for better security and ease of use.
In mid-August 2019, our DNS implementation replaced the existing, but incomplete and barely maintained, [OCaml](/Our%20Work/Technology-Employed#OCaml) implementation. It is released in the opam repository.
A specific use case for this project is to combine a DNS resolver with a local zone (where it acts as a server) and a DHCP server - a protocol used for dynamic IP address configuration - into a single service. We recently received confirmation of a grant from NLnet, via the EU's Next Generation Internet initiative, to develop such a service based on our DNS library.
If you are interested in supporting this work, or knowing more about it please [get in touch with us](/Contact).
#### More technical information:
Our DNS implementation handles extensions such as dynamic updates, notifications on changes, authentication of update requests, and zone transfers (all useful for provisioning X.509 certificates via letsencrypt.org).
Using expressive types in OCaml, we can ensure that for a given query type the reply record has a specific shape: querying an address record (A) results in a set of IPv4 addresses. Encoding such invariants in the type system reduces the boilerplate (or unsafe code) present in other implementations.
The DNS code is available on GitHub for the [library](https://github.com/mirage/ocaml-dns) and [unikernels](https://github.com/roburio/unikernels).
# Firewall
Online security is vital, and an important way to improve a network's or computer's security is a firewall that can block unwanted traffic going in and out. We are developing a lightweight firewall in the secure programming language [OCaml](/Our%20Work/Technology-Employed#OCaml) that is specifically designed to run on [QubesOS](https://www.qubes-os.org/), a security-oriented operating system.
The Qubes-Mirage-Firewall is based on a project by Thomas Leonard. Implemented as a [MirageOS unikernel](/Our%20Work/Technology-Employed#MirageOS), it is more lightweight and has a smaller codebase than the original Linux-based firewall in QubesOS, which has security and usability benefits.
In 2019, we received a grant from [The Prototype Fund](https://prototypefund.de) for 6 months work on this project. We would like to continue developing it further and if you wish to support this project with a [donation](/Donate), grant, or just want to hear more about the project please [get in touch with us](/Contact).
#### More technical information:
Thus far we have refactored the NAT library, made the firewall react to rule changes from the Qubes interface by reading them from QubesDB, and added a DNS client so that rules can be written in a human-friendly way with domain names, and not just as computer-friendly IP addresses.
The smaller unikernel uses less memory, which is a scarce resource in QubesOS since Qubes runs many virtual machines. As opposed to the first prototype, it can react to rule changes without the need to rebuild and reboot the firewall VM.
The code for the firewall is on [GitHub](https://github.com/yomimono/qubes-mirage-firewall/), but still on several branches since we recently reworked the DNS parts.
# OpenPGP
OpenPGP is a widely used encryption standard, used to encrypt text, files, and emails, amongst other things.
Robur is implementing OpenPGP in OCaml, for use in MirageOS and any other compatible platform or software that is looking for OpenPGP written in a [secure language](/Our%20Work/Technology-Employed#OCaml).
This work is funded through donations and is still an ongoing project, which means that it may not currently possess all the features required for various use-cases. Currently our implementation can sign, verify, compress, encrypt and decrypt.
You can assist us in implementing more of the OpenPGP protocol through a [donation](/Donate). If you are interested in hearing more about the project, require additional features to be implemented to accommodate your project's needs, or are able to assist with a grant, please [get in touch with us](/Contact)!
#### More technical information:
Robur maintains a partial, opinionated implementation of OpenPGP version 4 (RFC 4880) and the related standards, written in OCaml and compatible with MirageOS.
The software consists of a library and various UNIX tools that make use of it, and can be used to interact with systems that currently use GnuPG or other OpenPGP implementations for file encryption or signature verification. Notably, it can be used from within MirageOS applications without having to bundle a C implementation. The UNIX binaries are separated from the library so that your applications can use the library directly - unlike GnuPG or libgpgme, whose API translates to repeated executions of the gpg2 binary and parsing of its textual output.
Currently we have implemented signing/verification and encryption/decryption, but there is no support for elliptic curve cryptography. Decompression of ZLIB streams is supported through a pure OCaml library called decompress. While some things are implemented with a streaming API, many operations use an in-memory buffer, which introduces memory constraints on the files handled (this is an area where there is definitely room for improvement).
The software is available [on GitHub](https://github.com/roburio/ocaml-openpgp).
# OpenVPN
OpenVPN is a virtual private network protocol that started from a single implementation developed in C, without any specification document. Over time, flaws were found in the implementation, which led to further revisions. Several extensions were also developed to cope with other needs.
This history means that OpenVPN as a whole has a number of flaws and is overly complex due to revisions upon revisions. We implemented only the most recent protocol version and require the current key derivation and authentication method.
We started developing it from scratch in [OCaml](/Our%20Work/Technology-Employed#OCaml), using existing cryptographic libraries and parsers. This approach allowed us to take design decisions that have security benefits, and our codebase is minimal. We strive for configuration-file compatibility, so our OCaml OpenVPN can be a drop-in replacement.
We began this work in 2018 with a grant from [The Prototype Fund](https://prototypefund.de). Whilst the code is available on [GitHub](https://github.com/roburio/openvpn), we have not released it yet as it needs further work (in terms of testing and performance evaluation).
If you are interested in supporting further work on our OpenVPN implementation through a [donation](/Donate), with a grant, or just want to hear more about the project please [get in touch with us](/Contact)!
#### More technical information:
Our main goal is a client implementation as a MirageOS unikernel (either forwarding all traffic to a single IP address or NATing a local network via the OpenVPN tunnel), but we also developed a UNIX client which configures a tap device on the host and adjusts the host's routing table accordingly. We extended our protocol implementation with a server as well. Testing is done against existing OpenVPN servers.
Our implementation has stronger security promises since we do not implement old, brittle protocol versions. In addition, the NAT unikernel fails hard: if the tunnel is down, all packets are dropped (instead of being sent unencrypted). We do not support questionable configuration options, and we have safe defaults for the configuration.
# Solo5
Solo5 is a sandboxed (separated) execution environment for unikernels (a.k.a. library operating systems), including but not limited to MirageOS unikernels. Conceptually, it provides a common interface between the unikernel and various host operating systems or hypervisors.
Solo5's interfaces are designed in a way that allows us to easily port them to run on different host operating systems or hypervisors. Implementations of Solo5 run on Linux, FreeBSD, OpenBSD, the Muen Separation Kernel and the Genode Operating System Framework.
Currently, Solo5 is ready for use by early adopters, and we plan to continue developing it further. If you are interested in supporting this development with a [donation](/Donate) or want to hear more about the project, please [get in touch with us](/Contact).
#### More technical information:
Solo5 features include:
* a sandboxed environment (CPU, memory) to execute native code in
* clock interfaces to access system time
* network interfaces to allow the unikernel to communicate with the outside world
* block storage interfaces to allow the unikernel to store persistent data
* an output-only "console" interface for logging and debugging
Solo5 has been designed to isolate the unikernel as much as possible from its host, while making the best of the available isolation technologies that the host hardware, operating system or hypervisor provide, such as hardware-assisted virtualization, system call filtering and so on. Interfaces are intentionally designed to treat the unikernel as a "static system". In other words, the unikernel must declare its intent to make use of host resources (such as memory, network or storage) up front, and can not gain access to further host resources at run time.
Compared to existing technologies, such as traditional virtualization using KVM/QEMU, VMWare, crosvm and so on, Solo5 is several orders of magnitude smaller (around 10,000 lines of C) and is tailored to running unikernels in a legacy-free and minimalist fashion.
Our goal for Solo5 is to enable the use of unikernel technology to build hybrid, disaggregated systems where the designer/developer can choose which components are untrusted or security-sensitive and "split them out" from the monolithic host system. At the same time the developer can continue to use existing, familiar, technology as the base or "control plane" for the overall system design/deployment, or mix and match traditional applications and unikernels as appropriate.
The software is available [on GitHub](https://github.com/solo5).