From 5cf66b4a7b940daa0dc61478d5ccc92243650582 Mon Sep 17 00:00:00 2001
From: Canopy bot
Date: Mon, 20 Nov 2023 16:55:40 +0000
Subject: [PATCH] updated from main (commit 8db22fe1520f3d446bf95759a0278f196ee0bf49)

---
 About                  |   6 +-
 Posts/ARP              |  12 +--
 Posts/Albatross        |  38 ++++----
 Posts/BadRecordMac     |   4 +-
 Posts/BottomUp         |  10 +-
 Posts/Conex            |  20 ++--
 Posts/DNS              |  14 +--
 Posts/Deploy           |  10 +-
 Posts/DnsServer        |  28 +++---
 Posts/EC               |  20 ++--
 Posts/Functoria        |  10 +-
 Posts/Jackline         |  20 ++--
 Posts/Maintainers      |  10 +-
 Posts/Monitoring       |   8 +-
 Posts/NGI              |  34 +++----
 Posts/OCaml            |   8 +-
 Posts/OpamMirror       |  12 +--
 Posts/OperatingSystem  |   8 +-
 Posts/Pinata           |   6 +-
 Posts/ReproducibleOPAM |  12 +--
 Posts/Solo5            |  10 +-
 Posts/Summer2019       |  14 +--
 Posts/Syslog           |   8 +-
 Posts/Traceroute       |   2 +-
 Posts/VMM              |  12 +--
 Posts/X50907           |  18 ++--
 Posts/nqsbWebsite      |  22 ++---
 atom                   | 202 ++++++++++++++++++++---------------
 28 files changed, 289 insertions(+), 289 deletions(-)

diff --git a/About b/About
index 5fa98b7..18d938c 100644
--- a/About
+++ b/About
@@ -1,5 +1,5 @@
-About

About

Written by hannes
Classified under: overviewmyselfbackground
Published: 2016-04-01 (last updated: 2021-11-19)

What is a "full stack engineer"?

+About

About

Written by hannes
Classified under: overviewmyselfbackground
Published: 2016-04-01 (last updated: 2021-11-19)

What is a "full stack engineer"?

Analysing the word literally, we should start with silicon and some electrons, maybe a soldering iron, and build everything all the way up to our favourite communication system.

@@ -12,7 +12,7 @@ case you're interested in trustworthiness of hardware.

simple to set up, secure, decentralised infrastructure. We're not there yet, which also means I've plenty of projects :).

I will write about my projects, which cover topics on various software layers.

-

Myself

+

Myself

I'm Hannes Mehnert, a hacker (in the original sense of the word), 3X years old. In my spare time, I'm not only a hacker, but also a barista. I like to travel and repair my recumbent
@@ -81,7 +81,7 @@ another post.

fast enough ("Reassuring, because our blanket performance statement 'OCaml delivers at least 50% of the performance of a decent C compiler' is not invalidated :-)" Xavier Leroy), and the community is sufficiently large.

-

Me on the intertubes

+

Me on the intertubes

You can find me on twitter and on GitHub.

The data of this blog is stored in a git repository.

diff --git a/Posts/ARP b/Posts/ARP
index edc489b..198f10d 100644
--- a/Posts/ARP
+++ b/Posts/ARP
@@ -1,17 +1,17 @@
-Re-engineering ARP

Re-engineering ARP

Written by hannes
Classified under: mirageosprotocol
Published: 2016-07-12 (last updated: 2021-11-19)

What is ARP?

+Re-engineering ARP

Re-engineering ARP

Written by hannes
Classified under: mirageosprotocol
Published: 2016-07-12 (last updated: 2021-11-19)

What is ARP?

ARP is the Address Resolution Protocol, widely used in legacy IP networks (which support only IPv4). It is responsible for translating an IPv4 address to an Ethernet address. It is strictly more general, abstracting over protocol and hardware addresses. It is basically DNS (the domain name system) on a different layer.

ARP is link-local: ARP frames are not routed into other networks, all stay in the same broadcast domain. Thus there is no need for a hop limit (time-to-live). A reverse lookup mechanism (hardware address to protocol) is also available, named reverse ARP ;).

I will focus on ARP in this article, as it is widely used to translate IPv4 addresses into Ethernet addresses. There are two operations in ARP: request and response. A request is usually broadcast to all hosts (by setting the destination to the broadcast Ethernet address, ff:ff:ff:ff:ff:ff), while a reply is sent via unicast (to the host which requested that information).

The frame format is pretty straightforward: 2 bytes hardware address type, 2 bytes protocol type, 1 byte length for both types, 2 bytes operation, followed by source addresses (hardware and protocol), and target addresses. In total 28 bytes, considering 48 bit Ethernet addresses and 32 bit IPv4 addresses.
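
To make the layout concrete, here is a minimal decoding sketch in OCaml using the cstruct library; the offsets follow the description above, but the function and its error handling are illustrative, not the actual arp library API.

type op = Request | Reply

(* Decode a 28 byte ARP frame for Ethernet/IPv4 (Cstruct.length is Cstruct.len
   in older cstruct releases). *)
let decode buf =
  if Cstruct.length buf < 28 then Error "frame too short"
  else
    let htype = Cstruct.BE.get_uint16 buf 0   (* 1 = Ethernet *)
    and ptype = Cstruct.BE.get_uint16 buf 2   (* 0x0800 = IPv4 *)
    and hlen = Cstruct.get_uint8 buf 4        (* 6 byte hardware address *)
    and plen = Cstruct.get_uint8 buf 5        (* 4 byte protocol address *)
    and oper = Cstruct.BE.get_uint16 buf 6 in
    if htype <> 1 || ptype <> 0x0800 || hlen <> 6 || plen <> 4 then
      Error "unsupported hardware or protocol type"
    else
      match oper with
      | 1 | 2 ->
        let op = if oper = 1 then Request else Reply
        and sha = Cstruct.sub buf 8 6      (* sender hardware address *)
        and spa = Cstruct.sub buf 14 4     (* sender protocol address *)
        and tha = Cstruct.sub buf 18 6     (* target hardware address *)
        and tpa = Cstruct.sub buf 24 4 in  (* target protocol address *)
        Ok (op, sha, spa, tha, tpa)
      | _ -> Error "unknown ARP operation"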

It was initially specified in RFC 826, but reading through RFC 1122 (requirements for Internet Hosts - Communication layer), and maybe the newer RFC 5227 (IPv4 address conflict detection) does not hurt.

On UNIX systems, you can investigate your arp table, also called arp cache, using the arp command line utility.

-

Protocol logic

+

Protocol logic

Let us look at what our ARP handler actually needs to do. Translating IPv4 addresses to Ethernet addresses, sure, but where does it learn new information?

First of all, our ARP handler needs to know its own IPv4 address and its Ethernet address. It will even broadcast them on startup, so-called gratuitous ARP. The purpose of this is to inform all other hosts on the same network that we are here now. And if another host, let's name it barf, has the same IPv4 address, some sort of conflict resolution needs to happen (otherwise all hosts on the network are confused about whether to send packets to us or to barf).

Once initialisation is over, our ARP handler needs to wait for ARP requests from other hosts on the network, and if they are addressed to our IPv4 address, issue a reply. The other event which might happen is that a user wants to send an IPv4 packet to another host on the network. In this case, we either already have the Ethernet address in our cache, or we need to send an ARP request to the network and wait for a reply. Since packets might get lost, we actually need to retry sending ARP requests until a limit is reached. To keep the cache at a reasonable size, old entries should be dropped if unused. Also, the Ethernet address of a host may change, due to hardware replacement or failover.

That's it. Pretty straightforward.

-

Design

+

Design

Back in 2008, together with Andreas Bogk, we just used a hash table and installed expiration and retransmission timers when needed. Certainly, timers sometimes needed to be cancelled, and testing the code was cumbersome. It was only 250 lines of Dylan code plus some wire format definition.

Nowadays, after some years of doing formal verification and typed functional programming, I try to have effects, including mutable state, isolated and explicitly annotated. The code should not contain surprises, but be straightforward to understand. The core protocol logic should not be convoluted with side effects; rather, a small wrapper around it should handle them. Once this is achieved, testing is straightforward. If the fashion of the asynchronous task library changes (likely with OCaml multicore), the core logic can be reused. It can also be repurposed to run as a test oracle. You can read more marketing of this style in our Usenix Security paper.

My proposed style and hash tables are not good friends, since hash tables in OCaml are imperative structures. Instead, a Map (documentation) is a functional data structure for associating keys with values. Its underlying data structure is a balanced binary tree.
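
A small example of the functional flavour, using the stdlib Map directly (string keys stand in for the real address types): adding a binding returns a new map and leaves the old one untouched, which is exactly what makes the pure protocol core easy to test.

module IpMap = Map.Make (String)   (* keys would be Ipaddr.V4.t in real code *)

let () =
  let cache = IpMap.empty in
  let cache' = IpMap.add "10.0.42.2" "52:54:00:12:34:56" cache in
  (* the original map is unchanged, the new one has the binding *)
  assert (IpMap.find_opt "10.0.42.2" cache = None);
  assert (IpMap.find_opt "10.0.42.2" cache' = Some "52:54:00:12:34:56")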

@@ -25,15 +25,15 @@
  • Query: a query for an IPv4 address using some state leads to a successor state, and either an immediate answer with the Ethernet address, or an ARP request to be sent and waiting for an answer, or just waiting for an answer in case another task has already requested that IPv4 address. Since we don't want to convolute the protocol core with tasks, we'll let the effectful layer decide how to achieve that by abstracting over some alpha to store, and requiring a merge : alpha option -> alpha function.
  • -

    Excursion: security

    +

    Excursion: security

    ARP is a link-local protocol, thus attackers have to have access to the same link-layer: either a cable in the same switch or hub, or in the same wireless network (if you're into modern technology).

    A very common attack vector for protocols is the so-called person-in-the-middle attack, where the attacker sits between you and the remote host. An attacker can achieve this using ARP spoofing: if they can convince your computer that the attacker is the gateway, your computer will send all packets to the attacker, who either forwards them to the remote host, or modifies them, or drops them.

    ARP does not employ any security mechanism, it is more a question of receiving the first answer (depending on the implementation). A common countermeasure is to manually fill the cache with the gateway statically. This only needs updates if the gateway is replaced, or gets a new network card.

    Denial of service attacks are also possible using ARP: if the implementation preserves all replies, the cache might expand immensely. This sometimes happens in switch hardware, which has a limited cache; once it is full, the switch goes into hub mode. This means all frames are broadcast on all ports, which enables an attacker to passively sniff all traffic in the local network.

    One denial of service attack vector is due to choosing a hash table as the underlying store. Its hash function should be collision-resistant, one way, and its output should be of fixed length. A good choice would be a cryptographic hash function (like SHA-256), but these are too expensive and thus rarely used for hash tables. Denial of Service via Algorithmic Complexity Attacks and Efficient Denial of Service Attacks on Web Application Platforms are worth studying. If you expose your hash function to user input (and don't use a private seed), you might accidentally widen your attack surface.

    -

    Back to our design

    +

    Back to our design

    To mitigate person-in-the-middle attacks, we provide an API to add static entries, which are never overwritten by network input. While our own IPv4 addresses are advertised if a matching ARP request was received, other static entries are not advertised (neither are dynamic entries). We only insert entries into our cache if we have an outstanding request or already an entry. To provide low latency, just before a dynamic entry would time out, we send another request for this IPv4 address to the network.
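
    A sketch of how such a cache could distinguish static from dynamic entries and apply the insertion rule above; the types and the handle_reply function are hypothetical (using the ipaddr and macaddr packages), not the actual implementation.

    module IpMap = Map.Make (Ipaddr.V4)

    type entry =
      | Static of Macaddr.t              (* added via the API, never replaced *)
      | Dynamic of Macaddr.t * int64     (* learned from the network, with expiry *)

    (* Insert a reply only if no static entry exists, and we either already have
       an entry or an outstanding request for that IPv4 address. *)
    let handle_reply cache ~outstanding ~now ~timeout ip mac =
      let expiry = Int64.add now timeout in
      match IpMap.find_opt ip cache with
      | Some (Static _) -> cache
      | Some (Dynamic _) -> IpMap.add ip (Dynamic (mac, expiry)) cache
      | None when outstanding ip -> IpMap.add ip (Dynamic (mac, expiry)) cache
      | None -> cache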

    -

    Implementation

    +

    Implementation

    I have the source, its documentation, a test suite and a coverage report online.

    The implementation of the core logic still fits in less than 250 lines of code. Fewer than 100 more lines are needed for decoding and encoding byte buffers, and another 140 lines to implement the Mirage ARP interface. Tests are available which cover the protocol logic and decoding/encoding at 100%.

    The effectful layer is underspecified (especially regarding conflicts: what happens if there is an outstanding request for an IPv4 address and I add a static entry for this?). There is an implementation based on hash tables, which I used to benchmark a bit.

diff --git a/Posts/Albatross b/Posts/Albatross
index 802f087..630a4d1 100644
--- a/Posts/Albatross
+++ b/Posts/Albatross
@@ -1,8 +1,8 @@
 Deploying reproducible unikernels with albatross

    Deploying reproducible unikernels with albatross

    Written by hannes
    Classified under: mirageosdeployment
    Published: 2022-11-17 (last updated: 2023-05-16)

    EDIT (2023-05-16): Updated with albatross release version 2.0.0.

    -

    Deploying MirageOS unikernels

    +

    Deploying MirageOS unikernels

    More than five years ago, I posted how to deploy MirageOS unikernels. My motivation to work on this topic is that I'm convinced of the reduced complexity, improved security, and more sustainable resource footprint of MirageOS unikernels, and want to ease their deployment. More than one year ago, I described how to deploy reproducible unikernels.

    -

    Albatross

    +

    Albatross

    In recent months we worked hard on the underlying infrastructure: albatross. Albatross is the orchestration system for MirageOS unikernels that use solo5 with hvt or spt tender. It deals with three tasks:

    • unikernel creation (destruction, restart)
    @@ -13,50 +13,50 @@

    An addition to the above is dealing with multiple tenants on the same machine: remote management of your unikernel fleet via TLS, and resource policies.

    -

    History

    +

    History

    The initial commit of albatross was in May 2017. Back then it replaced the shell scripts and manual scp of unikernel images to the server. Over time it evolved and adapted to new environments. Initially a solo5 unikernel would only know of a single network interface; these days there can be multiple, distinguished by name. Initially there was no support for block devices. Only FreeBSD was supported in the early days. Nowadays we build daily packages for Debian, Ubuntu, FreeBSD, and have support for NixOS, and the client side is supported on macOS as well.

    -

    ASN.1

    +

    ASN.1

    The communication format between the albatross daemons and clients was changed multiple times. I'm glad that albatross uses ASN.1 as its communication format, which makes extension with optional fields easy, and also allows a "choice" (the sum type) to be untagged (the binary encoding is the same as without the choice type). Thus adding a choice to an existing grammar, while preserving the old behaviour in the default (untagged) case, is a decent solution.

    So, if you care about backward and forward compatibility, as we do -- since we may be in control of which albatross servers are deployed on our machines, but not which albatross versions the clients are using -- it may be wise to look into ASN.1. Recent efforts (JSON with schema, ...) may solve similar issues, but ASN.1 is also very compact.
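
    To illustrate the extensibility point only (this is not the albatross grammar), a sketch with the asn1-combinators package: a field added later is optional, so messages from old clients, which omit it, still decode. The function names here are from memory and may differ between versions of that library - treat it as an assumption, not a reference.

    (* A command with a required name and an optional, later-added bridge field. *)
    let command =
      Asn.S.(sequence2
               (required ~label:"name" utf8_string)
               (optional ~label:"bridge" utf8_string))

    let codec = Asn.codec Asn.der command

    let encode name bridge = Asn.encode codec (name, bridge)

    let decode buf =
      match Asn.decode codec buf with
      | Ok ((name, bridge), _leftover) -> Some (name, bridge)
      | Error _ -> None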

    -

    What resources does a unikernel need?

    +

    What resources does a unikernel need?

    A unikernel is just an operating system for a single service; there can't be much it needs.

    -

    Name

    +

    Name

    So, first of all a unikernel has a name, or a handle. This is useful for reporting statistics, but also to specify which console output you're interested in. The name is a string of printable ASCII characters (plus dash '-' and dot '.'), with a length of up to 64 characters - so yes, you can use a UUID if you like.
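
    A quick sketch of that constraint as a checker (reading "printable ASCII" conservatively as alphanumeric plus dash and dot); this is hypothetical validation code, not taken from albatross. String.for_all needs OCaml >= 4.13.

    let valid_name name =
      let ok_char c =
        (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') ||
        (c >= '0' && c <= '9') || c = '-' || c = '.'
      in
      String.length name > 0 && String.length name <= 64
      && String.for_all ok_char name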

    -

    Memory

    +

    Memory

    Another resource is the amount of memory assigned to the unikernel. This is specified in megabytes (as solo5 does), with the range being 10 (below that not even a hello world wants to start) to 1024.

    -

    Arguments

    +

    Arguments

    Of course, you can pass boot parameters to the unikernel via albatross. Albatross doesn't impose any restrictions here, but the lower levels may.

    -

    CPU

    +

    CPU

    Due to multiple tenants and side channel attacks, it looked like a good idea right from the beginning to restrict each unikernel to a specific CPU. This way, one tenant may use CPU 5, and another CPU 9 - and they will not starve each other (best to make sure that these CPUs are in different packages). So, albatross takes a number as the CPU, and executes the solo5 tender within taskset/cpuset.

    -

    Fail behaviour

    +

    Fail behaviour

    In normal operations, exceptional behaviour may occur. I have to admit that I've seen MirageOS unikernels that suffer from not freeing all the memory they have allocated. To avoid having to get up at 4 AM just to start the unikernel that went out of memory, there's the possibility to restart the unikernel when it exits. You can even specify on which exit codes it should be restarted (the exit code is the only piece of information we have from the outside about what caused the exit). This feature was implemented in October 2019, and has been very precious since then. :)
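
    The decision itself is tiny; a hypothetical sketch (names invented) of restarting only on configured exit codes:

    (* Restart if no exit codes were configured, or if the actual exit code is
       among the configured ones. *)
    let should_restart ~configured exit_code =
      match configured with
      | [] -> true
      | codes -> List.mem exit_code codes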

    -

    Network

    +

    Network

    This becomes a bit more complex: a MirageOS unikernel can have network interfaces, and solo5 specifies a so-called manifest with a list of these (name and type, and type is so far always basic). Then, on the actual server, there are bridges (virtual switches) configured. Now, these may have the same name, or may need to be mapped. And of course, the unikernel expects a tap interface that is connected to such a bridge, not the bridge itself. Thus, albatross creates tap devices, attaches these to the respective bridges, and takes care of cleaning them up on teardown. The albatross client verifies that for each network interface in the manifest, there is a command-line argument specified (--net service:my_bridge or just --net service if the bridge is named service). The tap interface name is not really of interest to the user, and will not be exposed.

    -

    Block devices

    +

    Block devices

    On the host system, it's just a file, and passed to the unikernel. There's the need to be able to create one, dump it, and ensure that each file is only used by one unikernel. That's all that is there.

    -

    Metrics

    +

    Metrics

    Everyone likes graphs, over time, showing how much traffic or CPU or memory or whatever has been used by your service. Some of these statistics are only available in the host system, and it is also crucial for development purposes to compare whether the bytes sent in the unikernel sum up to the same on the host system's tap interface.

    The albatross-stats daemon collects metrics from three sources: network interfaces, getrusage (of a child process), VMM debug counters (to count VM exits etc.). Since the recent 1.5.3, albatross-stats now connects at startup to the albatross-daemon and then retrieves the information which unikernels are up and running, and starts periodically collecting data in memory.

    Other clients, be it a dump in your console window, a write into an rrd file (good old MRTG times), or a push to influx, can use the stats data to correlate and better analyse what is happening in the grand scheme of things. This helped a lot when running several unikernels with different opam package sets to figure out which opam packages hold on to memory over time.

    As a side note, if you make the unikernel name also available in the unikernel, it can tag its own metrics with the same identifier, and you can correlate high-level events (such as amount of HTTP requests) with low-level things "allocated more memory" or "consumed a lot of CPU".

    -

    Console

    +

    Console

    There's not much to say about the console, just that the albatross-console daemon is running with low privileges, and reading from a FIFO that the unikernel writes to. It never writes anything to disk, but keeps the last 1000 lines in memory, available from a client asking for it.

    -

    The daemons

    +

    The daemons

    So, the main albatross-daemon runs with superuser privileges to create virtual machines, and opens a Unix domain socket to which the clients and other daemons connect. The other daemons are executed with normal user privileges, and never write anything to disk.

    The albatross-daemon keeps state about the running unikernels, and if it is restarted, the unikernels are started again. It may be worth mentioning that this sometimes led to headaches (due to data being dumped to disk, and the old format always having to be supported), but it was also a huge relief to not have to re-create all the unikernels just because albatross-daemon was killed.

    -

    Remote management

    +

    Remote management

    There's one more daemon program: albatross-tls-endpoint. It accepts clients via a remote TCP connection, and establishes a mutually authenticated TLS handshake. Once done, the command is forwarded to the respective Unix domain socket, and the reply is sent back to the client.

    The daemon itself has an X.509 certificate to authenticate, but the client is requested to show its certificate chain as well. This by now requires TLS 1.3, so the client certificates are sent over the encrypted channel.

    A step back: an X.509 certificate contains a public key and a signature from one level up. When the server knows the root (or certificate authority (CA)) certificate, it can follow the chain and verify that the leaf certificate is valid. Additionally, an X.509 certificate is an ASN.1 structure with some fixed fields, but it also contains extensions, a key-value store where the keys are object identifiers and the values are key-dependent data. Also note that this key-value store is cryptographically signed.

    Albatross uses an object identifier, assigned under Camelus Dromedarius (MirageOS, 1.3.6.1.4.1.49836.42), to encode the command to be executed. This means that once the TLS handshake is established, the command to be executed has already been transferred.

    In the leaf certificate, there may be the "create unikernel" command with the unikernel image, its boot parameters, and other resources. Or a "read the console of my unikernel" command. In the intermediate certificates (from root to leaf), resource policies are encoded (this path may only have X unikernels running with a total of Y MB memory, and Z MB of block storage, using CPUs A and B, accessing bridges C and D). From the root downwards these policies may only decrease. When a unikernel is to be created (or other commands are executed), the policies are verified to hold. If they do not, an error is reported.
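
    A sketch of the "may only decrease" rule for a single resource (memory, in MB), checked along the chain from root to leaf; this is illustrative and not the albatross policy type.

    (* chain_policies lists the memory limit of each certificate, root first.
       Every step may only tighten the limit, and the requested memory must fit
       the leaf policy. *)
    let policy_ok ~chain_policies ~requested_memory =
      let rec monotone = function
        | a :: (b :: _ as rest) -> b <= a && monotone rest
        | _ -> true
      in
      match List.rev chain_policies with
      | [] -> false
      | leaf :: _ -> monotone chain_policies && requested_memory <= leaf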

    -

    Fleet management

    +

    Fleet management

    Of course it is perfectly fine to deploy your locally compiled unikernel to your albatross server - go for it. But in terms of "what is actually running here?" and "does this unikernel need to be updated because some opam package had a security issue?", this is not optimal.

    Since we provide daily reproducible builds with the current HEAD of the main opam-repository, and these unikernels have no configuration embedded (but take everything as boot parameters), we just deploy them. They come with the information about which opam packages contributed to the binary, which environment variables were set, and which system packages were installed in which versions.

    For us, the result of reproducible builds means: we have a hash of a unikernel image that we can look up in our build infrastructure, and check whether there is a newer image for the same job. And if there is, we provide a diff between the packages that contributed to the currently running unikernel and the new image. That is what the albatross-client update command is all about.

    Of course, your mileage may vary and you want automated deployments where each git commit triggers recompilation and redeployment. The downside would be that sometimes only dependencies are updated and you've to cope with that.

    There is a client, albatross-client, which depending on its arguments either connects to a local Unix domain socket, or to a remote albatross instance via TCP and TLS, or outputs a certificate signing request for later usage. Data, such as the unikernel ELF image, is compressed within certificates.

    -

    Installation

    +

    Installation

    For Debian and Ubuntu systems, we provide package repositories. Browse the dists folder for one matching your distribution, and add it to /etc/apt/sources.list:

    $ wget -q -O /etc/apt/trusted.gpg.d/apt.robur.coop.gpg https://apt.robur.coop/gpg.pub
     $ echo "deb https://apt.robur.coop ubuntu-20.04 main" >> /etc/apt/sources.list # replace ubuntu-20.04 with e.g. debian-11 on a debian buster machine
    @@ -77,7 +77,7 @@ $ pkg install solo5 albatross
     

    Please ensure you have at least version 2.0.0 of albatross installed.

    For other distributions and systems we do not (yet?) provide binary packages. You can compile and install them using opam (opam install solo5 albatross). Get in touch if you're keen on adding some other distribution to our reproducible build infrastructure.

    -

    Conclusion

    +

    Conclusion

    After five years of development and operating albatross, feel free to get it and try it out. Or read the code, discuss issues and shortcomings with us - either at the issue tracker or via eMail.

    Please reach out to us (at team AT robur DOT coop) if you have feedback and suggestions. We are a non-profit company, and rely on donations for doing our work - everyone can contribute.

    \ No newline at end of file
    diff --git a/Posts/BadRecordMac b/Posts/BadRecordMac
    index 9c79dc2..ef9cb98 100644
    --- a/Posts/BadRecordMac
    +++ b/Posts/BadRecordMac
    @@ -1,5 +1,5 @@
    -Catch the bug, walking through the stack

    Catch the bug, walking through the stack

    Written by hannes
    Classified under: mirageossecurity
    Published: 2016-05-03 (last updated: 2021-11-19)

    BAD RECORD MAC

    +Catch the bug, walking through the stack

    Catch the bug, walking through the stack

    Written by hannes
    Classified under: mirageossecurity
    Published: 2016-05-03 (last updated: 2021-11-19)

    BAD RECORD MAC

    Roughly 2 weeks ago, Engil informed me that a TLS alert pops up in his browser sometimes when he reads this website. His browser reported that the message authentication code was wrong. From RFC 5246: This message is always fatal and should never be observed in communication between proper implementations (except when messages were corrupted in the network).

    I tried hard, but could not reproduce, but was very worried and was eager to find the root cause (some little fear remained that it was in our TLS stack). I setup this website with some TLS-level tracing (extending the code from our TLS handshake server). We tried to reproduce the issue with traces and packet captures (both on client and server side) in place from our computer labs office with no success. Later, Engil tried from his home and after 45MB of wire data, ran into this issue. Finally, evidence! Isolating the TCP flow with the alert resulted in just about 200KB of packet capture data (TLS ASCII trace around 650KB).

    encrypted alert

    @@ -43,7 +43,7 @@

    Certainly, interfacing the outside world is complex. The mirage-block-xen library uses a similar protocol to access block devices. From a brief look, that library seems to be safe (using 64bit identifiers).

    I'm interested in feedback, either via twitter or via eMail.

    -

    Other updates in the MirageOS ecosystem

    +

    Other updates in the MirageOS ecosystem

    • Canopy uses a map instead of a hashtable, and tags now contain a list of tags (PR here), both thanks to voila! I also use the new CSS from Engil
    •
    diff --git a/Posts/BottomUp b/Posts/BottomUp
    index d0a4b9d..d90a1d7 100644
    --- a/Posts/BottomUp
    +++ b/Posts/BottomUp
    @@ -1,6 +1,6 @@
     Counting Bytes

      Counting Bytes

      Written by hannes
      Classified under: mirageosbackground
      Published: 2016-06-11 (last updated: 2021-11-19)

      I was busy writing code, text, and talks, and also spent a week without Internet, where I ground and brewed 15kg of espresso.

      -

      Size of a MirageOS unikernel

      +

      Size of a MirageOS unikernel

      There have been lots of claims and myths around the concrete size of MirageOS unikernels. In this article I'll apply some measurements which overapproximate the binary sizes. The tools used for the visualisations are available online, and soon hopefully upstreamed into the mirage tool. This article uses mirage-2.9.0 (which might be outdated at the time of reading).

      Let us start with a very minimal unikernel, consisting of a unikernel.ml:

      module Main (C: V1_LWT.CONSOLE) = struct
      @@ -36,24 +36,24 @@ PKGS   = functoria lwt mirage-clock-unix mirage-console mirage-logs mirage-types
       21964 mirage/mirage-runtime.a
       

      This still does not sum up to 2.8MB since we're missing the transitive dependencies.

      -

      Visualising recursive dependencies

      +

      Visualising recursive dependencies

      Let's use a different approach: first recursively find all dependencies. We do this by using ocamlfind to read META files which contain a list of dependent libraries in their requires line. As input we use LIBS from the Makefile snippet above. The code (OCaml script) is available here. The colour scheme is red for pieces of the OCaml distribution, yellow for input packages, and orange for the dependencies.
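
      The recursive walk itself is short when done via findlib's OCaml API; a sketch follows, assuming Findlib.init and Findlib.package_deep_ancestors behave as documented (the linked script additionally emits the graph).

      (* Resolve the transitive closure of ocamlfind packages, i.e. follow the
         requires lines of the META files. *)
      let transitive_deps pkgs =
        Findlib.init ();
        (* empty predicate list; use ["byte"] or ["native"] for variant-specific deps *)
        Findlib.package_deep_ancestors [] pkgs

      let () =
        transitive_deps [ "mirage-console"; "mirage-clock-unix" ]
        |> List.iter print_endline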

      This is the UNIX version only; the Xen version looks similar (but worth mentioning).

      You can spot at the right that mirage-bootvar uses re, which provoked me to open a PR, but Jon Ludlam already had a nicer PR which is now merged (and a new release is in preparation).

      -

      Counting bytes

      +

      Counting bytes

      While a dependency graph gives a big picture of the libraries composing a MirageOS unikernel, we also want to know how many bytes they contribute to the unikernel. The dependency graph only contains the OCaml-level dependencies, but MirageOS has in addition to that a pkg-config universe of the libraries written in C (such as mini-os, openlibm, ...).

      We overapproximate the sizes here by assuming that a linker simply concatenates all required object files. This is not true, since the sum of all objects is empirically about a factor of two larger than the actual size of the unikernel.

      I developed a pie chart visualisation, but a friend of mine reminded me that such a chart is pretty useless for the human brain to compare slices. I spent some more time developing a treemap visualisation to satisfy the brain. The implemented algorithm is based on squarified treemaps, but does not use implicit mutable state. In addition, the provided script parses common linker flags (-o -L -l) and collects the arguments to be linked in. It can be passed to ocamlopt as the C linker; more instructions are at the end of treemap.ml (which should be cleaned up and integrated into the mirage tool, as mentioned earlier).

      As mentioned above, this is an overapproximation. The libgcc.a is only needed on Xen (see this comment), I have not yet tracked down why there is a libasmrun.a and a libxenasmrun.a.

      -

      More complex examples

      +

      More complex examples

      Besides the hello world, I used the same tools on our BTC Piñata.

      -

      Conclusion

      +

      Conclusion

      OCaml does not yet do dead code elimination, but there is a PR based on the flambda middle-end which does so. I haven't yet investigated numbers using that branch.

      Those counting statistics could go into more detail (e.g. using nm to count the sizes of concrete symbols - which opens the possibility to see which symbols are present in the objects, but not in the final binary). Also, collecting the numbers for each module in a library would be great to have. In the end, it would be great to easily spot the source fragments which are responsible for a huge binary size (and getting rid of them).

      I'm interested in feedback, either via
      diff --git a/Posts/Conex b/Posts/Conex
      index 37e052d..038794a 100644
      --- a/Posts/Conex
      +++ b/Posts/Conex
      @@ -57,7 +57,7 @@ authenticity. There are different kinds of resources:

      Modifications to identities and authorisations need to be approved by a quorum of janitors; package index and release files can be modified either by an authorised id or by a quorum of janitors.

      -

      Documentation

      +

      Documentation

      API documentation is available online, also a coverage report.

      @@ -74,7 +74,7 @@ adapted to the opam repository.

      The TUF spec has a good overview of attacks and threat model, both of which are shared by conex.

      -

      What's missing

      +

      What's missing

      • See issue 7 for a laundry list
      •
      @@ -89,7 +89,7 @@ has a good overview of attacks and threat model, both of which are shared by con
      • Integration into release management systems
      -

      Getting started

      +

      Getting started

      At the moment, our opam repository does not include any metadata needed for signing. We're in a bootstrap phase: we need you to generate a keypair, claim your packages, and approve your releases.

      @@ -107,7 +107,7 @@ membership and authorised packages were inferred correctly.

      their public key and starts approving releases (and old ones after careful checking that the build script, patches, and tarball checksum are valid). Each resource can be approved in multiple versions at the same time.

      -

      Installation

      +

      Installation

      TODO: remove clone once PR 8494 is merged.

      $ git clone -b auth https://github.com/hannesm/opam-repository.git repo
       $ opam install conex
      @@ -119,7 +119,7 @@ opam file format.  This means you can always manually modify them (but be careful,
       modifications need to increment counters, add checksums, and be signed).  Conex
       does not deal with git, you have to manually git add files and open pull
       requests.

      -

      Author enrollment

      +

      Author enrollment

      For the opam repository, we will use GitHub ids as conex ids. Thus, your conex id and your GitHub id should match up.

      repo$ conex_author init --repo ~/repo --id hannesm
      @@ -176,7 +176,7 @@ repo$ git diff //abbreviated output
       +  ]
       +]
       
      -

      Status

      +

      Status

      If you have a single identity and contribute to a single signed opam repository, you don't need to specify --id or --repo from now on.

      The status subcommand presents an author-specific view on the repository. It
      @@ -199,7 +199,7 @@ authorised to modify (inferred as described
      authorised for. The --id argument presents you with a view of another author, or from a team perspective. The positional argument is a prefix matching on package names (leave empty for all).

      -

      Resource approval

      +

      Resource approval

      Each resource needs to be approved individually. Each author has a local queue for to-be-signed resources, which is extended with authorisation, init, key, release, and team (all have a --dry-run flag). The queue can be
      @@ -256,7 +256,7 @@ repo$ git add packages/arp/arp.0.1.1/release packages/arp/package
      repo$ git commit -m "hannesm key enrollment and some fixes" id packages

      Now push this to your fork, and open a PR on opam-repository!

      -

      Editing a package

      +

      Editing a package

      If you need to modify a released package, you modify the opam file (as before, e.g. introducing a conflict with a dependency), and then approve the modifications. After your local modifications, conex_author status will
      @@ -285,7 +285,7 @@ added it to your queue. The sign command signed the approved resou
      repo$ git commit -m "fixed broken arp package" id packages
      -

      Janitor tools

      +

      Janitor tools

      Janitors need to approve teams, keys, accounts, and authorisations.

      To approve resources which are already in the repository on disk, the key subcommand queues approval of keys and accounts of the provided author:

      @@ -301,7 +301,7 @@ authorisations and teams.

      approved by you. Similar for the key and team subcommands, which also accept all.

      Don't forget to conex_author sign afterwards (or yes | conex_author sign).

      -

      Verification

      +

      Verification

      The two command line utilities, conex_verify_openssl and conex_verify_nocrypto, contain the same logic and accept the same command line arguments.

      For bootstrapping purposes (nocrypto is an opam package with dependencies),
      diff --git a/Posts/DNS b/Posts/DNS
      index 05e9a45..abb1a24 100644
      --- a/Posts/DNS
      +++ b/Posts/DNS
      @@ -1,5 +1,5 @@
      -My 2018 contains robur and starts with re-engineering DNS

      My 2018 contains robur and starts with re-engineering DNS

      Written by hannes
      Classified under: mirageosprotocol
      Published: 2018-01-11 (last updated: 2021-11-19)

      2018

      +My 2018 contains robur and starts with re-engineering DNS

      My 2018 contains robur and starts with re-engineering DNS

      Written by hannes
      Classified under: mirageosprotocol
      Published: 2018-01-11 (last updated: 2021-11-19)

      2018

      At the end of 2017, I resigned from my PostDoc position at University of Cambridge (in the rems project). Early December 2017 I organised the 4th MirageOS hack
      @@ -46,7 +46,7 @@ coverage is not high enough, and the lack of documentation). I appreciate early adopters, please let me know if you find any issues or find a use case which is not straightforward to solve. This won't be the last article about DNS this year - persistent storage, resolver, let's encrypt support are still missing.

      -

      What is DNS?

      +

      What is DNS?

      The domain name system is a core Internet protocol, which translates domain names to IP addresses. A domain name is easier to memorise for human beings than an IP address. DNS is hierarchical and
      @@ -79,7 +79,7 @@ transported via TCP, and even via TLS over UDP or TCP. If a DNS packet transferred via UDP is larger than 512 bytes, it is cut at the 512 byte mark, and a bit in its header is set. The receiver can decide whether to use the 512 bytes of information, or to throw it away and attempt a TCP connection.

      -

      DNS packet

      +

      DNS packet

      The packet encoding starts with a 16bit identifier followed by a 16bit header (containing operation, flags, status code), and four counters, each 16bit, specifying the amount of resource records in the body: questions, answers,
      @@ -123,7 +123,7 @@ payload is a 16bit priority and a domain name. A CNAME record is an alias to another domain name. These days, there are even records to specify the certificate authority authorisation (CAA) records containing a flag (critical), a tag ("issue") and a value ("letsencrypt.org").
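
      Going back to the fixed 12-byte header described at the start of this section (identifier, flags, four counters), here is a decoding sketch using cstruct; it is illustrative only and not the ocaml-dns wire parser.

      type header = {
        id : int; flags : int;
        questions : int; answers : int; authority : int; additional : int;
      }

      let decode_header buf =
        if Cstruct.length buf < 12 then Error "DNS header too short"
        else
          Ok { id = Cstruct.BE.get_uint16 buf 0;
               flags = Cstruct.BE.get_uint16 buf 2;   (* QR, opcode, AA, TC, RD, RA, rcode *)
               questions = Cstruct.BE.get_uint16 buf 4;
               answers = Cstruct.BE.get_uint16 buf 6;
               authority = Cstruct.BE.get_uint16 buf 8;
               additional = Cstruct.BE.get_uint16 buf 10 }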

      -

      Server

      +

      Server

      The operation of a DNS server is to listen for a request and serve a reply. Data to be served can be canonically encoded (the RFC describes the format) in a zone file. Apart from insecurity in DNS server implementations, another attack
      @@ -146,7 +146,7 @@ take time logarithmic in the size of the map").

      type. The found resource records are sent as answer, which also includes the question and authority information (NS records of the zone) and additional glue records (IP addresses of names mentioned earlier in the same zone).

      -

      Dns_map

      +

      Dns_map

      The data structure contains resource record types as keys, and a collection of matching resource records as values. In OCaml the value type must be homogeneous - using a normal sum type leads to an unnecessary unpacking step
      @@ -228,7 +228,7 @@ let get : type a. a Key.t -> t -> a = fun k t ->

      This helps me to programmatically retrieve tightly typed values from the cache, which is important when code depends on concrete values (i.e. when there are domain names, look these up as well and add them as additional records). Look into server/dns_server.ml
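
      The trick, in miniature: a GADT key ties each record type to the type of its stored value, so a lookup returns precisely typed data without an unpacking step. This toy version (string-based, two keys) is far simpler than Dns_map, but shows the shape of get.

      module Key = struct
        type _ t =
          | A : string list t            (* A records: IPv4 addresses, as strings here *)
          | Mx : (int * string) list t   (* MX records: priority and mail server name *)
      end

      type binding = B : 'a Key.t * 'a -> binding
      type t = binding list

      let add key value (t : t) : t = B (key, value) :: t

      let get : type a. a Key.t -> t -> a option = fun k t ->
        let rec go : binding list -> a option = function
          | [] -> None
          | B (k', v) :: rest ->
            (match k, k' with
             | Key.A, Key.A -> Some v
             | Key.Mx, Key.Mx -> Some v
             | _ -> go rest)
        in
        go t

      With m = add Key.A ["127.0.0.1"] [], get Key.A m evaluates to Some ["127.0.0.1"] and get Key.Mx m to None, both with their precise types.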

      -

      Dynamic updates, notifications, and authentication

      +

      Dynamic updates, notifications, and authentication

      Dynamic updates specify in-protocol record updates (supported for example by nsupdate from ISC bind-tools); notifications are used by primary servers
      @@ -241,7 +241,7 @@ all protocol extensions, there is no need to use out-of-protocol solutions.

      and includes a dependency upon an authenticator (implemented using the nocrypto library, and ptime).

      -

      Deployment and Let's Encrypt

      +

      Deployment and Let's Encrypt

      To deploy servers without much persistent data, an authentication schema is hardcoded in the dns-server: shared secrets are also stored as DNS entries (DNSKEY), and _transfer.zone, _update.zone, and _key-management.zone names
      diff --git a/Posts/Deploy b/Posts/Deploy
      index 15ce4cb..15e3bee 100644
      --- a/Posts/Deploy
      +++ b/Posts/Deploy
      @@ -1,16 +1,16 @@
      -Deploying binary MirageOS unikernels

      Deploying binary MirageOS unikernels

      Written by hannes
      Classified under: mirageosdeployment
      Published: 2021-06-30 (last updated: 2021-11-15)

      Introduction

      +Deploying binary MirageOS unikernels

      Deploying binary MirageOS unikernels

      Written by hannes
      Classified under: mirageosdeployment
      Published: 2021-06-30 (last updated: 2021-11-15)

      Introduction

      MirageOS development focus has been largely on tooling and the developer experience, but to accomplish our goal of "getting MirageOS into production", we need to lower the barrier. For us this means releasing binary unikernels. As described earlier, we received a grant for "Deploying MirageOS" from NGI Pointer to work on the required infrastructure. This is joint work with Reynir.

      At builds.robur.coop we provide binary unikernel images (and supplementary software). Doing binary releases of MirageOS unikernels is challenging in two aspects: firstly, to be useful for everyone, a binary unikernel should not contain any configuration (such as private keys, certificates, etc.). Secondly, the binaries should be reproducible. This is crucial for security; everyone can reproduce the exact same binary and verify that our build service used only the sources. No malware or backdoors included.

      This post describes how you can deploy MirageOS unikernels without compiling them from source, then dives into the two issues outlined above - configuration and reproducibility - and finally describes how to set up your own reproducible build infrastructure for MirageOS, and how to bootstrap it.

      -

      Deploying MirageOS unikernels from binary

      +

      Deploying MirageOS unikernels from binary

      To execute a MirageOS unikernel, apart from a hypervisor (Xen/KVM/Muen), a tender (responsible for allocating host system resources and passing these to the unikernel) is needed. Using virtio, this is conventionally done with qemu on Linux, but its code size (and attack surface) is huge. For MirageOS, we develop Solo5, a minimal tender. It supports hvt - hardware virtualization (Linux KVM, FreeBSD BHyve, OpenBSD VMM), spt - sandboxed process (a tight seccomp ruleset (only a handful of system calls allowed, no hardware virtualization needed), Linux only). Apart from that, muen (a hypervisor developed in Ada), virtio (for some cloud deployments), and xen (PVHv2 or Qubes 4.0) - read more. We deploy our unikernels as hvt with FreeBSD BHyve as hypervisor.

      On builds.robur.coop, next to the unikernel images, solo5-hvt packages are provided - download the binary and install it. A NixOS package is already available - please note that soon packaging will be much easier (and we will work on packages merged into distributions).

      When the tender is installed, download a unikernel image (e.g. the traceroute described in an earlier post), and execute it:

      $ solo5-hvt --net:service=tap0 -- traceroute.hvt --ipv4=10.0.42.2/24 --ipv4-gateway=10.0.42.1
       

      If you plan to orchestrate MirageOS unikernels, you may be interested in albatross - we provide binary packages as well for albatross. An upcoming post will go into further details of how to setup albatross.

      -

      MirageOS configuration

      +

      MirageOS configuration

      A MirageOS unikernel has a specific purpose - composed of OCaml libraries - selected at compile time, which allows embedding only the required pieces. This reduces the attack surface drastically. At the same time, to be widely useful to multiple organisations, no configuration data should be embedded in the unikernel.

      Early MirageOS unikernels such as mirage-www embed content (blog posts, ..) and TLS certificates and private keys in the binary (using crunch). The Qubes firewall (read the blog post by Thomas for more information) used to include the firewall rules in the binary until v0.6; since v0.7 the rules are read dynamically from QubesDB. This is a big usability improvement.

      We have several possibilities to provide configuration information in MirageOS. On the one hand there are boot parameters (which can be pre-filled at development time, and further refined at configuration time, but those passed at boot time take precedence). Boot parameters have a length limitation.

      @@ -18,7 +18,7 @@

      Several other unikernels, such as this website and our CalDAV server, store the content in a remote git repository. The git URI and credentials (private key seed, host key fingerprint) are passed via boot parameter.

      Finally, another option that we take advantage of is to introduce a post-link step that rewrites the binary to embed configuration. The tool caravan developed by Romain that does this rewrite is used by our openvpn router (binary).

      In the future, some configuration information - such as monitoring system, syslog sink, IP addresses - may be done via DHCP on one of the private network interfaces - this would mean that the DHCP server has some global configuration option, and the unikernels no longer require that many boot parameters. Another option we want to investigate is where the tender shares a file as read-only memory-mapped region from the host system to the guest system - but this is tricky considering all targets above (especially virtio and muen).

      -

      Behind the scenes: reproducible builds

      +

      Behind the scenes: reproducible builds

      To provide a high level of assurance and trust, if you distribute binaries in 2021, you should have a recipe for how they can be reproduced in a bit-by-bit identical way. This way, different organisations can run builders and rebuilders, and a user can decide to only use a binary if it has been reproduced by multiple organisations in different jurisdictions using different physical machines - to avoid malware being embedded in the binary.

      For a reproduction to be successful, you need to collect the checksums of all sources that contributed to the build, together with other things (host system packages, environment variables, etc.). Of course, you can record the entire OS and sources as a tarball (or file system snapshot) and distribute that - but this may be suboptimal in terms of bandwidth requirements.

      With opam, we already have precise tracking which opam packages are used, and since opam 2.1 the opam switch export includes extra-files (patches) and records the VCS version. Based on this functionality, orb, an alternative command line application using the opam-client library, can be used to collect (a) the switch export, (b) host system packages, and (c) the environment variables. Only required environment variables are kept, all others are unset while conducting a build. The only required environment variables are PATH (sanitized with an allow list, /bin, /sbin, with /usr, /usr/local, and /opt prefixes), and HOME. To enable Debian's apt to install packages, DEBIAN_FRONTEND is set to noninteractive. The SWITCH_PATH is recorded to allow orb to use the same path during a rebuild. The SOURCE_DATE_EPOCH is set to enable tools that record a timestamp to use a static one. The OS* variables are only used for recording the host OS and version.
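
      For the PATH sanitisation specifically, a hypothetical sketch of the allow-list idea (the prefixes are the ones named above; the function itself is not orb's code, and String.starts_with needs OCaml >= 4.13):

      let allowed_prefixes = [ "/bin"; "/sbin"; "/usr"; "/usr/local"; "/opt" ]

      (* Keep only PATH entries that live under one of the allowed prefixes. *)
      let sanitize_path path =
        String.split_on_char ':' path
        |> List.filter (fun dir ->
               List.exists (fun p -> String.starts_with ~prefix:p dir) allowed_prefixes)
        |> String.concat ":"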

      @@ -51,7 +51,7 @@

    These tools are themselves reproducible, and built on a daily basis. The infrastructure executing the build jobs installs the most recent packages of orb and builder before conducting a build. This means that our build infrastructure is reproducible as well, and uses the latest code when it is released.

    -

    Conclusion

    +

    Conclusion

    Thanks to NGI funding we now have reproducible MirageOS binary builds available at builds.robur.coop. The underlying infrastructure is reproducible, available for multiple platforms (Ubuntu using docker, FreeBSD using jails), and can be easily bootstrapped from source (once you have OCaml and opam working, getting builder and orb should be easy). All components are open source software, mostly with permissive licenses.

    We also have an index over the sha-256 checksums of binaries - in case you find a running unikernel image and forgot which exact packages were used, you can do a reverse lookup.

    We are aware that the web interface can be improved (PRs welcome). We will also work on the rebuilder setup and run some rebuilds.

    diff --git a/Posts/DnsServer b/Posts/DnsServer
    index 5c701c3..ae578cd 100644
    --- a/Posts/DnsServer
    +++ b/Posts/DnsServer
    @@ -1,13 +1,13 @@
    -Deploying authoritative OCaml-DNS servers as MirageOS unikernels

    Deploying authoritative OCaml-DNS servers as MirageOS unikernels

    Written by hannes
    Classified under: mirageosprotocoldeployment
    Published: 2019-12-23 (last updated: 2023-03-02)

    Goal

    +Deploying authoritative OCaml-DNS servers as MirageOS unikernels

    Deploying authoritative OCaml-DNS servers as MirageOS unikernels

    Written by hannes
    Classified under: mirageosprotocoldeployment
    Published: 2019-12-23 (last updated: 2023-03-02)

    Goal

    Have your domain served by OCaml-DNS authoritative name servers. Data is stored in a git remote, and let's encrypt certificates can be requested via DNS. This software has been deployed for more than two years for several domains such as nqsb.io and robur.coop. This post presents the authoritative server side and the certificate library of the OCaml-DNS implementation formerly known as µDNS.

    -

    Prerequisites

    +

    Prerequisites

    You need to own a domain, and be able to delegate the name service to your own servers. You also need two spare public IPv4 addresses (in different /24 networks) for your name servers. A git server or remote repository reachable via git over ssh. Servers which support solo5 guests, and have the corresponding tender installed. A computer with opam (>= 2.0.0) installed.

    -

    Data preparation

    +

    Data preparation

    Figure out a way to get the DNS entries of your domain in a "master file format", i.e. what bind uses.

    This is a master file for the mirage domain, defining $ORIGIN to avoid typing the domain name after each hostname (use @ if you need the domain name only; if you need to refer to a hostname in a different domain end it with a dot (.), i.e. ns2.foo.com.). The default time to live $TTL is an hour (3600 seconds). The zone contains a start of authority (SOA) record containing the nameserver, hostmaster, serial, refresh, retry, expiry, and minimum.
    @@ -21,7 +21,7 @@ ns1 A 127.0.0.1
    www A 1.1.1.1
    git-repo> git add mirage && git commit -m initial && git push
    -

    Installation

    +

    Installation

    On your development machine, you need to install various OCaml packages. You don't need privileged access if common tools (C compiler, make, libgmp) are already installed. We assume you have opam installed.

    Let's create a fresh switch for the DNS journey:

    $ opam init
    @@ -31,7 +31,7 @@ $ opam switch create udns 4.14.1
     $ eval `opam env` #sets some environment variables
     

    The last command sets environment variables in your current shell session; please use the same shell for the commands that follow (or run eval $(opam env) in another shell and proceed in there - the output of opam switch should point to udns).

    -

    Validation of our zonefile

    +

    Validation of our zonefile

    First let's check that OCaml-DNS can parse our zonefile:

    $ opam install dns-cli #installs ~/.opam/udns/bin/ozone and other binaries
     $ ozone <git-repo>/mirage # see ozone --help
    @@ -39,7 +39,7 @@ successfully checked zone
     

    Great. Error reporting is not great, but line numbers are indicated (ozone: zone parse problem at line 3: syntax error), lexer and parser are lex/yacc style (PRs welcome).

    FWIW, ozone accepts --old <filename> to check whether an update from the old zone to the new one is fine. This can be used as a pre-commit hook in your git repository to avoid bad parse states in your name servers.

    -

    Getting the primary up

    +

    Getting the primary up

    The next step is to compile the primary server and run it to serve the domain data. Since the git-via-ssh client is not yet released, we need to add a custom opam repository to this switch.

    # get the `mirage` application via opam
     $ opam install lwt mirage
    @@ -70,7 +70,7 @@ $ dig any mirage @127.0.0.1
     # a DNS packet printout with all records available for mirage
     

    That's exciting, the DNS server serving answers from a remote git repository.

    -

    Securing the git access with ssh

    +

    Securing the git access with ssh

    Let's authenticate the access by using ssh, so we feel ready to push data there as well. The primary-git unikernel already includes an experimental ssh client; all we need to do is set up credentials - in the following, an RSA keypair and the server fingerprint.

    # collect the RSA host key fingerprint
     $ ssh-keyscan <git-server> > /tmp/git-server-public-keys
    @@ -91,7 +91,7 @@ $ ./primary-git --authenticator=SHA256:a5kkkuo7MwTBkW+HDt4km0gGPUAX0y1bFcPMXKxBa
     # started up, you can try the host and dig commands from above if you like
     

    To wrap up, we now have a primary authoritative name server for our zone running as a Unix process, which clones a remote git repository via ssh on startup and then serves it.

    -

    Authenticated data updates

    +

    Authenticated data updates

    Our remote git repository is the source of truth; if you need to add a DNS entry to the zone, you git pull, edit the zone file, remember to increase the serial in the SOA line, run ozone, git commit and push to the repository.

    So, the primary-git needs to be informed of git pushes. This requires a communication channel from the git server (or somewhere else, e.g. your laptop) to the DNS server. I prefer in-protocol solutions over adding yet another protocol stack, no way my DNS server will talk HTTP REST.

    The DNS protocol has an extension for notifications of zone changes (as a DNS packet), usually used between the primary and secondary servers. The primary-git accepts these notify requests (i.e. bends the standard slightly), and upon receipt pulls the remote git repository and serves the fresh zone files. Since a git pull may be rather excessive in terms of CPU cycles and network bandwidth, only authenticated notifications are accepted.

    @@ -117,7 +117,7 @@ $ onotify 127.0.0.1 mirage --key=personal._update.mirage:SHA256:kJJqipaQHQWqZL31
    # further changes to the hmac secrets don't require a restart anymore, a notify packet is sufficient :D

    Ok, this onotify command line could be setup as a git post-commit hook, or run manually after each manual git push.

    -

    Secondary

    +

    Secondary

    It's time to figure out how to integrate the secondary name server. An already existing bind or something else that accepts notifications and issues zone transfers with hmac-sha256 secrets should work out of the box. If you encounter interoperability issues, please get in touch with me.

    The secondary unikernel is available from another git repository:

    # get the secondary sources
    @@ -130,7 +130,7 @@ $ cd dns-secondary
     $ make
     $ ./dist/secondary
     
    -

    IP addresses and routing

    +

    IP addresses and routing

    Both primary and secondary serve the data on the DNS port (53) on UDP and TCP. To run both on the same machine and bind them to different IP addresses, we'll use a layer 2 network (ethernet frames) with a host system software switch (bridge interface service), and the unikernels as virtual machines (or seccomp-sandboxed) via the solo5 backend. Using xen is possible as well. As IP address range we'll use 10.0.42.0/24, and the host system uses 10.0.42.1.

    The primary git needs connectivity to the remote git repository, thus on a laptop in a private network we need network address translation (NAT) from the bridge where the unikernels speak to the Internet where the git repository resides.

    # on FreeBSD:
    @@ -156,7 +156,7 @@ tap1
     # add them to the bridge
     $ ifconfig service addm tap0 addm tap1
     
    -

    Primary and secondary setup

    +

    Primary and secondary setup

    Let's update our zone slightly to reflect the IP changes.

    git-repo> cat mirage
     $ORIGIN mirage.
    @@ -208,7 +208,7 @@ $ dig foo.mirage @10.0.42.2 # primary
     $ dig foo.mirage @10.0.42.3 # secondary got notified and transferred the zone
     

    You can also check the behaviour when restarting either of the VMs, whenever the primary is available the zone is synchronised. If the primary is down, the secondary still serves the zone. When the secondary is started while the primary is down, it won't serve any data until the primary is online (the secondary polls periodically, the primary sends notifies on startup).

    -

    Dynamic data updates via DNS, pushed to git

    +

    Dynamic data updates via DNS, pushed to git

    DNS is a rich protocol, and it also has builtin updates that are supported by OCaml DNS, again authenticated with hmac-sha256 and shared secrets. Bind provides the command-line utility nsupdate to send these update packets; a simple oupdate unix utility is available as well (i.e. for integration of dynamic DNS clients). You know the drill: add a shared secret to the primary, git push, notify the primary, and voila, we can dynamically update in-protocol. An update received by the primary this way will trigger a git push to the remote git repository, and notifications to the secondary servers as described above.

    # being lazy, I reuse the key above
     $ oupdate 10.0.42.2 personal._update.mirage:SHA256:kJJqipaQHQWqZL31Raar6uPnepGFIdtpjkXot9rv2xg= my-other.mirage 1.2.3.4
    @@ -223,7 +223,7 @@ $ dig my-other.mirage @10.0.42.2
     $ dig my-other.mirage @10.0.42.3
     

So we can deploy further oupdate (or nsupdate) clients, distribute hmac secrets, and have the DNS zone updated. The source of truth is still the git repository, to which the primary-git pushes. Merge conflicts and the timing of pushes are not yet dealt with. They are unlikely to happen since the primary is notified on pushes and should have up-to-date data in storage. Sorry, I'm unsure about the error semantics, try it yourself.

    -

    Let's encrypt!

    +

    Let's encrypt!

Let's encrypt is a certificate authority (CA), whose certificate is shipped as a trust anchor in web browsers. They specified the automated certificate management environment (ACME) protocol, used to get X.509 certificates for your services. In the protocol, a certificate signing request (public key and hostname) is sent to the let's encrypt servers, which send a challenge to prove ownership of the hostnames. One widely-used way to solve this challenge is running a web server, another is to serve it as a text record from the authoritative DNS server.

Since I avoid persistent storage when possible, and also don't want to integrate an HTTP client stack into the primary server, I developed a third unikernel that acts as a (hidden) secondary server, performs the tedious HTTP communication with let's encrypt servers, and stores all data in the public DNS zone.

    For encoding of certificates, the DANE working group specified TLSA records in DNS. They are quadruples of usage, selector, matching type, and ASN.1 DER-encoded material. We set usage to 3 (domain-issued certificate), matching type to 0 (no hash), and selector to 0 (full certificate) or 255 (private usage) for certificate signing requests. The interaction is as follows:

    @@ -268,7 +268,7 @@ $ ocertify 10.0.42.2 foo.mirage

For actual testing with let's encrypt servers you need to have the primary and secondary deployed on your remote hosts, and your domain needs to be delegated to these servers. Good luck. And ensure you have a backup of your git repository.

As fine print, while this tutorial was about the mirage zone, you can stick any number of zones into the git repository. If you use a _keys file (without any domain prefix), you can configure hmac secrets for all zones, i.e. something to use in your let's encrypt unikernel and secondary unikernel. Dynamic addition of zones is supported: just create a new zone file and notify the primary; the secondary will be notified and pick it up. The primary responds to a signed SOA query for the root zone (i.e. requested by the secondary) with a non-authoritative SOA response, and additionally sends notifications for all zones of the primary.

    -

    Conclusion and thanks

    +

    Conclusion and thanks

    This tutorial presented how to use the OCaml DNS based unikernels to run authoritative name servers for your domain, using a git repository as the source of truth, dynamic authenticated updates, and let's encrypt certificate issuing.

There are further steps to take, such as monitoring (mirage configure --monitoring), which uses a second network interface for reporting syslog and metrics to telegraf / influx / grafana. Some DNS features are still missing, most prominently DNSSec.

I'd like to thank all the people involved in this software stack; it would not exist without other key components, including git, mirage-crypto, awa-ssh, solo5, mirage, ocaml-letsencrypt, and more.

    diff --git a/Posts/EC b/Posts/EC index c8b4a18..60f21e6 100644 --- a/Posts/EC +++ b/Posts/EC @@ -1,7 +1,7 @@ -Cryptography updates in OCaml and MirageOS

    Cryptography updates in OCaml and MirageOS

    Written by hannes
    Classified under: mirageossecuritytls
    Published: 2021-04-23 (last updated: 2021-11-19)

    Introduction

    +Cryptography updates in OCaml and MirageOS

    Cryptography updates in OCaml and MirageOS

    Written by hannes
    Classified under: mirageossecuritytls
    Published: 2021-04-23 (last updated: 2021-11-19)

    Introduction

TL;DR: mirage-crypto-ec, with x509 0.12.0 and tls 0.13.0, provides fast and secure elliptic curve support in OCaml and MirageOS - using the verified fiat-crypto stack (Coq to OCaml to executable, which generates C code that is interfaced by OCaml). In x509, a long-standing issue (countryName encoding) has been fixed, and the archive (PKCS 12) format is now supported, in addition to EC keys. In tls, ECDH key exchanges are supported, as well as ECDSA and EdDSA certificates.

    -

    Elliptic curve cryptography

    +

    Elliptic curve cryptography

    Since May 2020, our OCaml-TLS stack supports TLS 1.3 (since tls version 0.12.0 on opam).

    TLS 1.3 requires elliptic curve cryptography - which was not available in mirage-crypto (the maintained fork of nocrypto).

    There are two major uses of elliptic curves: key exchange (ECDH) for establishing a shared secret over an insecure channel, and digital signature (ECDSA) for authentication, integrity, and non-repudiation. (Please note that the construction of digital signatures on Edwards curves (Curve25519, Ed448) is called EdDSA instead of ECDSA.)
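To make the distinction concrete, here is a rough, illustrative sketch of the two interface shapes - these module types are made up for illustration and are not the actual mirage-crypto-ec signatures:

    (* illustrative only: invented module types, not the actual mirage-crypto-ec API *)
    module type DH = sig
      type secret
      val gen_key : unit -> secret * string                    (* secret and public share *)
      val key_exchange : secret -> string -> (string, string) result
      (* combine our secret with the peer's public share to derive the shared secret *)
    end

    module type DSA = sig
      type priv
      type pub
      val generate : unit -> priv * pub
      val sign : key:priv -> string -> string                  (* sign a message digest *)
      val verify : key:pub -> signature:string -> string -> bool
    end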

    @@ -9,37 +9,37 @@

In addition, to use the code in MirageOS, it should be boring C code: no heap allocations, only a very small number of C library functions used -- the code needs to be compiled in an environment with nolibc.

Two projects set out to solve the issue from the ground up: fiat-crypto and hacl-star. Their approach is to use a proof system (Coq or F*) to verify that the code executes in constant time, independent of the data input. Both projects provide C code as the output of their proof systems.

For our initial TLS 1.3 stack, Clément, Nathan and Etienne developed fiat-p256 and hacl_x5519. Both were one-shot interfaces for a narrow use case (ECDH for NIST P-256 and X25519), worked well for their purpose, and allowed us to gather some experience on the development side.

    -

    Changed requirements

    +

    Changed requirements

Revisiting our cryptography stack from the elliptic curve perspective had several reasons: on the one hand, the customer project NetHSM asked about the feasibility of ECDSA/EdDSA for various elliptic curves; on the other hand, DNSSec uses elliptic curve cryptography (ECDSA), and wireguard relies on elliptic curve cryptography as well. The number of X.509 certificates using elliptic curves is increasing, and we don't want to leave our TLS stack in a state where it can barely talk to a growing number of services on the Internet.

Looking at hacl-star, their support is limited to P-256 and Curve25519; any new curve requires writing F*. Another issue with hacl-star is C code quality: their C code neither compiles with older C compilers (found on Oracle Linux 7 / CentOS 7), nor when enabling all warnings (> 150 are generated). We consider the C compiler a useful resource to figure out undefined behaviour (and other problems), and when shipping C code we ensure that it compiles with -Wall -Wextra -Wpedantic --std=c99 -Werror. The hacl project ships a bunch of header files and helper functions to work on all platforms, which is a clunky ifdef desert. The hacl approach is to generate a whole algorithm solution: from arithmetic primitives, group operations, up to the cryptographic protocol - everything included.

In contrast, fiat-crypto is a Coq development, which as part of compilation (proof verification) generates executables (via OCaml code extraction from Coq). These executables are used to generate modular arithmetic (as C code) given a curve description. The generated C code is highly portable, independent of platform (word size is taken as input) - it only requires a <stdint.h>, and compiles with all warnings enabled (once a minor PR got merged). Supporting a new curve is simple: generate the arithmetic code using fiat-crypto with the new curve description. The downside is that group operations and the protocol need to be implemented elsewhere (and are not part of the proven code) - gladly this is pretty straightforward to do, especially in high-level languages.

    -

    Working with fiat-crypto

    +

    Working with fiat-crypto

    As mentioned, our initial fiat-p256 binding provided ECDH for the NIST P-256 curve. Also, BoringSSL uses fiat-crypto for ECDH, and developed the code for group operations and cryptographic protocol on top of it.

The work needed was (a) ECDSA support and (b) supporting more curves (let's focus on NIST curves). For ECDSA, the algorithm requires modular arithmetic in the field of the group order (in addition to the prime). We generate these primitives with fiat-crypto (named npYYY_AA) - that required a small fix in decoding hex. Fiat-crypto also provides inversion since late October 2020 (paper) - which allowed us to reduce the code base taken from BoringSSL. The ECDSA protocol was easy to implement in OCaml using the generated arithmetic.
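As a reminder of why the group order shows up (textbook ECDSA, not the exact code): the point arithmetic is done modulo the field prime p, while the signature values live modulo the group order n:

$$ r = (k \cdot G)_x \bmod n, \qquad s = k^{-1}\,\big(H(m) + r\,d\big) \bmod n $$

Verification recomputes $(H(m)\,s^{-1} \cdot G + r\,s^{-1} \cdot Q)_x$ and checks it against $r$ modulo $n$ - so both mod-p and mod-n arithmetic are required.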

Addressing the issue of more curves was also easy to achieve: the C code (group operations) consists of macros that are instantiated for each curve - the OCaml code consists of functors that are applied to each curve description.

Thanks to the test vectors (as structured data) from wycheproof (and again thanks to Etienne, Nathan, and Clément for their OCaml code decoding them), I feel confident that our elliptic curve code works as desired.

What was left was X25519 and Ed25519 - dropping the hacl dependency entirely felt appealing (less C code to maintain from fewer projects). This turned out to require more C code, which we took from BoringSSL. It may be desirable to reduce the imported C code, or to wait until a project on top of fiat-crypto which provides proven cryptographic protocols is in a usable state.

To avoid performance degradation, I distilled some X25519 benchmarks; it turns out that the fiat-crypto and hacl performance is very similar.

    -

    Achievements

    +

    Achievements

The new opam package mirage-crypto-ec is released, which includes the C code generated by fiat-crypto (including inversion), point operations from BoringSSL, and some OCaml code for invoking these functions, doing bounds checks, and checking whether points are on the curve. The OCaml code consists of functors that take the curve description (consisting of parameters, C function names, byte length of values) and provide Diffie-Hellman (Dh) and digital signature algorithm (Dsa) modules. The nonce for ECDSA is computed deterministically, as suggested by RFC 6979, to avoid private key leakage.

The development included the NIST curves, removing blinding (since we use operations that are verified to be constant-time), adding missing length checks (reported by Greg), curve25519, a fix for signatures that do not span the entire byte size (discovered while adapting X.509), and a fix for X25519 when the input has an offset <> 0. It works on x86 and arm, both 32 and 64 bit (checked by CI). The development was partially sponsored by Nitrokey.

    What is left to do, apart from further security reviews, is performance improvements, Ed448/X448 support, and investigating deterministic k for P521. Pull requests are welcome.

    When you use the code, and encounter any issues, please report them.

    -

    Layer up - X.509 now with ECDSA / EdDSA and PKCS 12 support, and a long-standing issue fixed

    +

    Layer up - X.509 now with ECDSA / EdDSA and PKCS 12 support, and a long-standing issue fixed

    With the sign and verify primitives, the next step is to interoperate with other tools that generate and use these public and private keys. This consists of serialisation to and deserialisation from common data formats (ASN.1 DER and PEM encoding), and support for handling X.509 certificates with elliptic curve keys. Since X.509 0.12.0, it supports EC private and public keys, including certificate validation and issuance.

Releasing X.509 also included going through the issue tracker and attempting to solve existing issues. This time, the issue "country name is encoded as UTF8String, while the RFC demands PrintableString" - filed more than 5 years ago by Reynir, re-reported by Petter in early 2017, and again by Vadim in late 2020 - was fixed by Vadim.

    Another long-standing pull request was support for PKCS 12, the archive format for certificate and private key bundles. This has been developed and merged. PKCS 12 is a widely used and old format (e.g. when importing / exporting cryptographic material in your browser, used by OpenVPN, ...). Its specification uses RC2 and 3DES (see this nice article), which are the default algorithms used by openssl pkcs12.

    -

    One more layer up - TLS

    +

    One more layer up - TLS

In TLS we are finally able to use ECDSA (and EdDSA) certificates and private keys; this resulted in a slightly more complex configuration - the constraints between supported groups, signature algorithms, ciphersuites, and certificates are intricate:

    The ciphersuite (in TLS before 1.3) specifies which key exchange mechanism to use, but also which signature algorithm to use (RSA/ECDSA). The supported groups client hello extension specifies which elliptic curves are supported by the client. The signature algorithm hello extension (TLS 1.2 and above) specifies the signature algorithm. In the end, at load time the TLS configuration is validated and groups, ciphersuites, and signature algorithms are condensed depending on configured server certificates. At session initiation time, once the client reports what it supports, these parameters are further cut down to eventually find some suitable cryptographic parameters for this session.
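Conceptually, this condensing and cutting down is just repeated intersection of what is configured with what the peer offers; a toy sketch of the idea (not the actual OCaml-TLS code):

    (* toy sketch of negotiation as repeated intersection; not the OCaml-TLS code *)
    let intersect configured offered =
      List.filter (fun x -> List.mem x offered) configured

    let negotiate ~configured_groups ~configured_sigalgs ~client_groups ~client_sigalgs =
      match intersect configured_groups client_groups,
            intersect configured_sigalgs client_sigalgs with
      | group :: _, sigalg :: _ -> Some (group, sigalg)  (* first mutually supported pair *)
      | _ -> None                                        (* no common parameters: abort *)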

From the user perspective, earlier the certificate bundle and private key were a pair of X509.Certificate.t list and Mirage_crypto_pk.Rsa.priv; now the second component is an X509.Private_key.t - all provided constructors have been updated (notably X509_lwt.private_of_pems and Tls_mirage.X509.certificate).

    -

    Finally, conduit and mirage

    +

    Finally, conduit and mirage

    Thanks to Romain, conduit* 4.0.0 was released which supports the modified API of X.509 and TLS. Romain also developed patches and released mirage 3.10.3 which supports the above mentioned work.

    -

    Conclusion

    +

    Conclusion

    Elliptic curve cryptography is now available in OCaml using verified cryptographic primitives from the fiat-crypto project - opam install mirage-crypto-ec. X.509 since 0.12.0 and TLS since 0.13.0 and MirageOS since 3.10.3 support this new development which gives rise to smaller EC keys. Our old bindings, fiat-p256 and hacl_x25519 have been archived and will no longer be maintained.

Thanks to everyone involved on this journey: reporting issues, sponsoring parts of the work, helping with integration, developing initial prototypes, and keeping me motivated to continue until the release was done.

    In the future, it may be possible to remove zarith and gmp from the dependency chain, and provide EC-only TLS servers and clients for MirageOS. The benefit will be much less C code (libgmp-freestanding.a is 1.5MB in size) in our trusted code base.

    Another potential project that is very close now is a certificate authority developed in MirageOS - now that EC keys, PKCS 12, revocation lists, ... are implemented.

    -

    Footer

    +

    If you want to support our work on MirageOS unikernels, please donate to robur. I'm interested in feedback, either via twitter, hannesm@mastodon.social or via eMail.

    \ No newline at end of file diff --git a/Posts/Functoria b/Posts/Functoria index ea75dd9..174706e 100644 --- a/Posts/Functoria +++ b/Posts/Functoria @@ -1,12 +1,12 @@ Configuration DSL step-by-step

    Configuration DSL step-by-step

    Written by hannes
    Classified under: mirageosbackground
    Published: 2016-05-10 (last updated: 2021-11-19)

    Sorry for being late again with this article, I had other ones planned, but am not yet satisfied with content and code, will have to wait another week.

    -

    MirageOS configuration

    +

    MirageOS configuration

As described in an earlier post, MirageOS is a library operating system which generates single address space custom kernels (so called unikernels) for each application. The application code is (mostly) independent of the used backend. To achieve this, the language which expresses the configuration of a MirageOS unikernel is rather complex, and has to deal with package dependencies, setup of layers (network stack starting at the (virtual) ethernet device, or sockets), logging, tracing.

The abstraction over concrete implementation of e.g. the network stack is done by providing a module signature in the mirage-types package. The socket-based network stack, the tap device based network stack, and the Xen virtual network device based network stack implement this signature (depending on other module signatures). The unikernel contains code which applies those dependent modules to instantiate a custom-tailored network stack for the specific configuration. A developer should only describe what their requirements are; the user who wants to deploy it should provide the concrete configuration. And the developer should not need to manually instantiate the network stack for all possible configurations - this is what the mirage tool should embed.

Initially, MirageOS contained an ad hoc system which relied on concatenation of strings representing OCaml code. This turned out to be error-prone. In 2015 Drup developed Functoria, a domain-specific language (DSL) to organize functor applications, primarily for MirageOS. It has been introduced in a blog post. It is not limited to MirageOS (although this is the primary user right now).

    Functoria has been included in MirageOS since its 2.7.0 release at the end of February 2016. Functoria provides support for command line arguments which can then either be passed at configuration time or at boot time to the unikernel (such as IP address configuration) using the cmdliner library underneath (and includes dynamic man pages, help, sensible command line parsing, and even visualisation (mirage describe) of the configuration and data dependencies).

I won't go into details about command line arguments here; please have a look at the functoria blog post in case you're interested. Instead, I'll describe how to define a Functoria device which inserts content as code at configuration time into a MirageOS unikernel (running here, source). Using this approach, no external data (using crunch or a file system image) is needed, while the content can still be modified using markdown. Also, no markdown to HTML converter is needed at runtime, but this step is completely done at compile time (the result is a small (still too large) unikernel, 4.6MB).
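Before looking at the actual device, here is the idea in isolation: render the markdown once at configuration time and emit the result as an OCaml module. A standalone sketch of that step (file and module names are hypothetical; this is not the actual config.ml device):

    (* standalone sketch, not the actual functoria device: render markdown once
       and generate an OCaml module embedding the resulting HTML *)
    let read_file path =
      let ic = open_in path in
      let n = in_channel_length ic in
      let s = really_input_string ic n in
      close_in ic;
      s

    let () =
      (* run the omd command-line tool at configuration time *)
      ignore (Sys.command "omd data/content.md > content.html");
      let html = read_file "content.html" in
      let oc = open_out "content.ml" in
      (* the unikernel then just uses Content.html, no rendering at runtime *)
      Printf.fprintf oc "let html = {html|%s|html}\n" html;
      close_out oc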

    -

    Unikernel

    +

    Unikernel

    Similar to my nqsb.io website post, this unikernel only has a single resource and thus does not need to do any parsing (or even call read). The main function is start:

    let start stack _ =
       S.listen_tcpv4 stack ~port:80 (serve rendered) ;
    @@ -44,7 +44,7 @@ let header = http_header
     

This puts together the pieces we need for a simple HTML site. This unikernel does not have any external dependencies, if we assume that the mirage toolchain, the types, and the network implementation are already provided (the latter two are implicitly added by the mirage tool depending on the configuration; the first you'll have to install manually: opam install mirage).

    But wait, where do Style and Content come from? There are no ml modules in the repository. Instead, there is a content.md and style.css in the data subdirectory.

    -

    Configuration

    +

    Configuration

    We use the builtin configuration time magic of functoria to translate these into OCaml modules, in such a way that our unikernel does not need to embed code to render markdown to HTML and carry along a markdown data file.

    Inside of config.ml, let's look again at the bottom:

    let () =
    @@ -97,11 +97,11 @@ end
     

    Functoria uses classes internally, and we extend the base_configurable class, which extends configurable with some sensible defaults.

    The important bits are what actually happens during configure and clean: execution of some shell commands (echo, omd, and rm) using the functoria application builder interface. Some information is as well exposed via the Functoria_info module.

    -

    Wrapup

    +

    Wrapup

    We walked through the configuration magic of MirageOS, which is a domain-specific language designed for MirageOS demands. We can run arbitrary commands at compile time, and do not need to escape into external files, such as Makefile or shell scripts, but can embed them in our config.ml.

    I'm interested in feedback, either via twitter or via eMail.

    -

    Other updates in the MirageOS ecosystem

    +

    Other updates in the MirageOS ecosystem

    • now using Html5.P.print instead of string concatenation, as suggested by Drup (both on nqsb.io and in Canopy)
    • diff --git a/Posts/Jackline b/Posts/Jackline index 2d238b1..580e6f7 100644 --- a/Posts/Jackline +++ b/Posts/Jackline @@ -27,7 +27,7 @@ not happy with the code base, but neverthelss consider it to be a successful project: dozens of friends are using it (no exact numbers), I got contributions from other people (more than 25 commits from more than 8 individuals), I use it on a daily basis for lots of personal communication.

      -

      What is XMPP?

      +

      What is XMPP?

      The eXtensible Messaging and Presence Protocol (previously known as Jabber) describes (these days as RFC 6120) a communication protocol based on XML fragments, which enables near real-time @@ -45,7 +45,7 @@ Facebook chat used to be based on XMPP. Those big companies wanted something "more usable" (where they're more in control, reliable message delivery via caching in the server and mandatory delivery receipts, multiple devices all getting the same messages), and thus moved away from the open standard.

      -

      XMPP Security

      +

      XMPP Security

Authentication is done via a TLS channel (where your client should authenticate the server), and via SASL, with which the server authenticates your client. I investigated in 2008 (in German)

      -

      XMPP client landscape

      +

      XMPP client landscape

      See wikipedia for an extensive comparison (which does not mention jackline :P).

      A more opinionated analysis is that you were free to choose between C - where @@ -97,7 +97,7 @@ take our work away! :)) - also named garbage collection, often used together with automated bounds checking -- this doesn't mean that you're safe - there are still logical flaws, and integer overflows (and funny things which happen at resource starvation), etc.

      -

      Goals and non-goals

      +

      Goals and non-goals

      My upfront motivation was to write and use an XMPP client tailored to my needs. I personally don't use many graphical applications (coding in emacs, mail via thunderbird, firefox, mplayer, mupdf), but stick mostly to terminal @@ -153,7 +153,7 @@ worked roughly one day a week on jackline.

      slime project (watch Luke's presentation from 2013) - if there's someone complaining about an issue, fix it within 10 minutes and ask them to update. This only works if each user compiles the git version anyways.

      -

      User interface

      +

      User interface

      other screenshot

      Stated goal is minimalistic. No heavy use of colours. Visibility on both black and white background (btw, as a Unix process there is no way to find @@ -181,11 +181,11 @@ messages are displayed.

      7bit ASCII version (I output some unicode characters for layout). Recently I added support to customise the colours. I tried to ensure it looks fine on both black and white background.

      -

      Code

      +

      Code

      Initially I targeted GTK with OCaml, but that excursion only lasted two weeks, when I switched to a lambda-term terminal interface.

      -

      UI

      +

      UI

      The lambda-term interface survived for a good year (until 7th Feb 2016), when I started to use notty - developed by David - using a decent unicode library.

      @@ -195,7 +195,7 @@ the symbol 茶 gets two characters width (see screenshot at top of page), and th how many characters are already written on the terminal.

I recommend looking into notty if you want to do terminal graphics in OCaml!

      -

      Application logic and state

      +

      Application logic and state

      Stepping back, an XMPP client reacts to two input sources: the user input (including terminal resize), and network input (or failure). The output is a screen (80x25 characters) image. Each input event can trigger output events on @@ -233,7 +233,7 @@ need to), configurable colours, tab completions for nicknames and commands, per-user input history, emacs keybindings. It even works with the XMPP gateway provided by slack (some startup doing a centralised groupchat with picture embedding and animated cats).

      -

      Road ahead

      +

      Road ahead

      Common feature requests are: omemo support, IRC support, support for multiple accounts @@ -317,7 +317,7 @@ to the user interface, others are specific to XMPP and/or OTR): a new transport similar minimal UI features and colours).

      -

      Conclusion

      +

      Conclusion

      Jackline started as a procrastination project, and still is one. I only develop on jackline if I enjoy it. I'm not scared to try new approaches in jackline, and either reverting them or rewriting some chunks of code again. It is a diff --git a/Posts/Maintainers b/Posts/Maintainers index e2f7194..e4acfd9 100644 --- a/Posts/Maintainers +++ b/Posts/Maintainers @@ -9,7 +9,7 @@ a list of strings (usually a mail address).

The data sources we correlate are the maintainers entry in the opam file, and who actually committed to the opam repository. This is inspired by some GitHub discussion.

      -

      GitHub id and email address

      +

      GitHub id and email address

      For simplicity, since conex uses any (unique) identifier for authors, and the opam repository is hosted on GitHub, we use a GitHub id as author identifier. Maintainer information is an email address, thus we need a mapping between them.

we ignore PRs from GitHub organisations PR.ignore_github, where com PR.ignore_pr are picked from a different author (manually), bad mail addresses, and Jeremy's mail address (it is added to too many GitHub ids otherwise). The goal is to have a single GitHub id for each email address. 329 authors with 416 mail addresses are mapped.

      -

      Maintainer in opam

      +

      Maintainer in opam

      As mentioned, lots of packages contain a maintainers entry. In maintainers we extract the mail addresses of the most recently released opam @@ -34,7 +34,7 @@ field (such as mirage and xapi-project ;). We're open for suggestions to extend this massaging to the needs. Additionally, the contact at ocamlpro mail address was used for all packages before the maintainers entry was introduced (based on a discussion with Louis Gesbert). 132 packages with empty maintainers.

      -

      Fitness

      +

      Fitness

      Combining these two data sources, we hoped to find a strict small set of whom to authorise for which package. Turns out some people use different mail addresses for git commits and opam maintainer entries, which are be easily @@ -62,14 +62,14 @@ mirage -> MagnusS dbuenzli djs55 hannesm hnrgrgr jonludlam mato mor1 pgj pqwy ocsigen -> balat benozol dbuenzli hhugo hnrgrgr jpdeplaix mfp pveber scjung slegrand45 smondet vasilisp xapi-project -> dbuenzli djs55 euanh mcclurmc rdicosmo simonjbeaumont yomimono -

      Alternative approach: GitHub urls

      +

      Alternative approach: GitHub urls

An alternative approach (attempted earlier), working only for GitHub hosted projects, is to authorise the use of the user part of the GitHub repository URL. Results after filtering GitHub organisations are not yet satisfactory (only 56 packages with no maintainer; see the output repo). This approach completely ignores the manually written maintainer field.

      -

      Conclusion

      +

      Conclusion

Manually maintained metadata is easily out of date, and not very useful. But combining automatically created metadata with manually curated metadata, plus some manual tweaking, leads to reasonable data.

      diff --git a/Posts/Monitoring b/Posts/Monitoring index eea1886..94b6a21 100644 --- a/Posts/Monitoring +++ b/Posts/Monitoring @@ -1,14 +1,14 @@ -All your metrics belong to influx

      All your metrics belong to influx

      Written by hannes
      Published: 2022-03-08 (last updated: 2023-05-16)

      Introduction to monitoring

      +All your metrics belong to influx

      All your metrics belong to influx

      Written by hannes
      Published: 2022-03-08 (last updated: 2023-05-16)

      Introduction to monitoring

      At robur we use a range of MirageOS unikernels. Recently, we worked on improving the operations story thereof. One part is shipping binaries using our reproducible builds infrastructure. Another part is, once deployed we want to observe what is going on.

I first got in touch with monitoring - collecting and graphing metrics - with MRTG and munin - and the simple network management protocol SNMP. From the whole system perspective, I find it crucial that the monitoring part of a system does not add pressure. This favours a push-based design, where reporting is done at the discretion of the system.

The rise of monitoring systems where graphs are generated dynamically (such as Grafana) and can be programmed (with a query language) by the operator is very neat: it allows putting metrics in relation after they have been recorded - thus if there's a thesis about why something went berserk, you can graph the collected data from the past and prove or disprove the thesis.

      -

      Monitoring a MirageOS unikernel

      +

      Monitoring a MirageOS unikernel

From the operational perspective, taking security into account, either the data should be authenticated and integrity-protected, or transmitted on a private network. We chose the latter: there's a private network interface used only for monitoring. Access to that network is only granted to the unikernels and the metrics collector.

For MirageOS unikernels, we use the metrics library - whose design shares the idea of logs: only if a reporter is registered is work performed. We use the Influx line protocol via TCP to report via Telegraf to InfluxDB. But due to the design of metrics, other reporters can be developed and used -- prometheus, SNMP, your-other-favourite are all possible.
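The key property - no registered reporter means (almost) no work - can be illustrated with a tiny sketch; this is hypothetical code to show the idea, not the actual metrics API:

    (* tiny sketch of the "no reporter, no work" idea; not the actual metrics API *)
    let reporter : (string -> unit) option ref = ref None

    let set_reporter r = reporter := Some r

    (* [add] takes a thunk, so the possibly expensive data point is only
       computed when someone is actually listening *)
    let add (data : unit -> string) =
      match !reporter with
      | None -> ()
      | Some report -> report (data ())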

      Apart from monitoring metrics, we use the same network interface for logging via syslog. Since the logs library separates the log message generation (in the OCaml libraries) from the reporting, we developed logs-syslog, which registers a log reporter sending each log message to a syslog sink.

We developed a small library for metrics reporting of a MirageOS unikernel into the monitoring-experiments package - which also allows dynamically adjusting the log level and disabling or enabling metrics sources.

      -

      Required components

      +

      Required components

      Install from your operating system the packages providing telegraf, influxdb, and grafana.

      Setup telegraf to contain a socket listener:

      [[inputs.socket_listener]]
      @@ -18,7 +18,7 @@
       

      Use a unikernel that reports to Influx (below the heading "Unikernels (with metrics reported to Influx)" on builds.robur.coop) and provide --monitor=192.168.42.14 as boot parameter. Conventionally, these unikernels expect a second network interface (on the "management" bridge) where telegraf (and a syslog sink) are running. You'll need to pass --net=management and --arg='--management-ipv4=192.168.42.x/24' to albatross-client.

Albatross provides an albatross-influx daemon that reports information from the host system about the unikernels to influx. Start it with --influx=192.168.42.14.

      -

      Adding monitoring to your unikernel

      +

      Adding monitoring to your unikernel

      If you want to extend your own unikernel with metrics, follow along these lines.

An example is the dns-primary-git unikernel, where on the branch future we have a single commit ahead of main that adds monitoring. The difference is in the unikernel configuration and the main entry point. See the binary builds in contrast to the non-monitoring builds.

In config, four new command line arguments are added: --monitor=IP, --monitor-adjust=PORT, --syslog=IP, and --name=STRING. In addition, the package monitoring-experiments is required, and a second network interface management_stack using the prefix management is required and passed to the unikernel. Since the syslog reporter requires a console (to report when logging fails), a console is also passed to the unikernel. Each reported metric includes a tag vm=<name> that can be used to distinguish several unikernels reporting to the same InfluxDB.

      diff --git a/Posts/NGI b/Posts/NGI index 23553fd..939b2c7 100644 --- a/Posts/NGI +++ b/Posts/NGI @@ -1,34 +1,34 @@ -The road ahead for MirageOS in 2021

      The road ahead for MirageOS in 2021

      Written by hannes
      Classified under: mirageos
      Published: 2021-01-25 (last updated: 2021-11-19)

      Introduction

      +The road ahead for MirageOS in 2021

      The road ahead for MirageOS in 2021

      Written by hannes
      Classified under: mirageos
      Published: 2021-01-25 (last updated: 2021-11-19)

      Introduction

      2020 was an intense year. I hope you're healthy and keep being healthy. I am privileged (as lots of software engineers and academics are) to be able to work from home during the pandemic. Let's not forget people in less privileged situations, and let’s try to give them as much practical, psychological and financial support as we can these days. And as much joy as possible to everyone around :)

I cancelled the autumn MirageOS retreat due to the pandemic. Instead I collected donations for our hosts in Marrakech - they were very happy to receive our financial support; they had a difficult year, since their income is based on tourism. I hope that in autumn 2021 we'll have an on-site retreat again.

      For 2021, we (at robur) got a grant from the EU (via NGI pointer) for "Deploying MirageOS" (more details below), and another grant from OCaml software foundation for securing the opam supply chain (using conex). Some long-awaited releases for MirageOS libraries, namely a ssh implementation and a rewrite of our git implementation have already been published.

      With my MirageOS view, 2020 was a pretty successful year, where we managed to add more features, fixed lots of bugs, and paved the road ahead. I want to thank OCamlLabs for funding work on MirageOS maintenance.

      -

      Recap 2020

      +

      Recap 2020

Here is a very subjective random collection of accomplishments in 2020, where I was involved to some degree.

      -

      NetHSM

      +

      NetHSM

      NetHSM is a hardware security module in software. It is a product that uses MirageOS for security, and is based on the muen separation kernel. We at robur were heavily involved in this product. It already has been security audited by an external team. You can pre-order it from Nitrokey.

      -

      TLS 1.3

      +

      TLS 1.3

Dating back to 2016, at the TRON (TLS 1.3 Ready or NOt), we developed a first draft of a TLS 1.3 implementation of OCaml-TLS. Finally in May 2020 we got our act together, including ECC (ECDH P256 from fiat-crypto, X25519 from hacl) and testing with tlsfuzzer, and released tls 0.12.0 with TLS 1.3 support. Later we added ECC ciphersuites to TLS version 1.2, implemented ChaCha20/Poly1305, and fixed an interoperability issue with Go's implementation.

Mirage-crypto provides the underlying cryptographic primitives, initially released in March 2020 as a fork of nocrypto -- huge thanks to pqwy for his great work. Mirage-crypto detects CPU features at runtime (thanks to Julow) (bugfix for bswap), uses constant time modular exponentiation (powm_sec) and hardens against Lenstra's CRT attack, supports compilation on Windows (thanks to avsm), async entropy harvesting (thanks to seliopou), 32 bit support, chacha20/poly1305 (thanks to abeaumont), cross-compilation (thanks to EduardoRFS) and various bug fixes, even memory leaks (thanks to talex5 for reporting several of these issues), and RSA interoperability (thanks to psafont for investigation and mattjbray for reporting). This library feels very mature now - being used by multiple stakeholders, and lots of issues have been fixed in 2020.

      -

      Qubes Firewall

      +

      Qubes Firewall

      The MirageOS based Qubes firewall is the most widely used MirageOS unikernel. And it got major updates: in May Steffi announced her and Mindy's work on improving it for Qubes 4.0 - including dynamic firewall rules via QubesDB. Thanks to prototypefund for sponsoring.

      In October 2020, we released Mirage 3.9 with PVH virtualization mode (thanks to mato). There's still a memory leak to be investigated and fixed.

      -

      IPv6

      +

      IPv6

      In December, with Mirage 3.10 we got the IPv6 code up and running. Now MirageOS unikernels have a dual stack available, besides IPv4-only and IPv6-only network stacks. Thanks to nojb for the initial code and MagnusS.

      Turns out this blog, but also robur services, are now available via IPv6 :)

      -

      Albatross

      +

      Albatross

      Also in December, I pushed an initial release of albatross, a unikernel orchestration system with remote access. Deploy your unikernel via a TLS handshake -- the unikernel image is embedded in the TLS client certificates.

      Thanks to reynir for statistics support on Linux and improvements of the systemd service scripts. Also thanks to cfcs for the initial Linux port.

      -

      CA certs

      +

      CA certs

For several years I postponed the problem of how to actually use the operating system trust anchors for OCaml-TLS connections. Thanks to emillon for initial code, there are now ca-certs and ca-certs-nss opam packages (see release announcement) which fill this gap.

      -

      Unikernels

      +

      Unikernels

      I developed several useful unikernels in 2020, and also pushed a unikernel gallery to the Mirage website:

      -

      Traceroute in MirageOS

      +

      Traceroute in MirageOS

      I already wrote about traceroute which traces the routing to a given remote host.

      -

      Unipi - static website hosting

      +

      Unipi - static website hosting

Unipi is a static site webserver which retrieves the content from a remote git repository. It supports let's encrypt certificate provisioning and dynamic updates via a webhook executed for every push.

      -

      TLSTunnel - TLS demultiplexing

      +

      TLSTunnel - TLS demultiplexing

      The physical machine this blog and other robur infrastructure runs on has been relocated from Sweden to Germany mid-December. Thanks to UPS! Fewer IPv4 addresses are available in the new data center, which motivated me to develop tlstunnel.

      The new behaviour is as follows (see the monitoring branch):

        @@ -43,9 +43,9 @@
      • setting up a new service is very straightforward: only the new name needs to be registered with tlstunnel together with the TCP backend, and everything will just work
      -

      2021

      +

      2021

      The year started with a release of awa, a SSH implementation in OCaml (thanks to haesbaert for initial code). This was followed by a git 3.0 release (thanks to dinosaure).

      -

      Deploying MirageOS - NGI Pointer

      +

      Deploying MirageOS - NGI Pointer

      For 2021 we at robur received funding from the EU (via NGI pointer) for "Deploying MirageOS", which boils down into three parts:

      • reproducible binary releases of MirageOS unikernels, @@ -58,10 +58,10 @@

        Of course this will all be available open source. Please get in touch via eMail (team aT robur dot coop) if you're eager to integrate MirageOS unikernels into your infrastructure.

        We discovered at an initial meeting with an infrastructure provider that a DNS resolver is of interest - even more now that dnsmasq suffered from dnspooq. We are already working on an implementation of DNSSec.

        MirageOS unikernels are binary reproducible, and infrastructure tools are available. We are working hard on a web interface (and REST API - think of it as "Docker Hub for MirageOS unikernels"), and more tooling to verify reproducibility.

        -

        Conex - securing the supply chain

        +

        Conex - securing the supply chain

Another grant, from the OCSF, is to continue development and deployment of conex - to bring trust into the opam-repository. This is a great combination with the reproducible build efforts, and will bring much more trust into retrieving OCaml packages and using MirageOS unikernels.

        -

        MirageOS 4.0

        +

        MirageOS 4.0

Mirage so far still uses ocamlbuild and ocamlfind for compiling the virtual machine binary. But the switch to dune is close, a lot of effort has been put in. This will make the developer experience of MirageOS much smoother, with a per-unikernel monorepo workflow where you can push your changes to the individual libraries.

        -

        Footer

        +

        If you want to support our work on MirageOS unikernels, please donate to robur. I'm interested in feedback, either via twitter, hannesm@mastodon.social or via eMail.

      \ No newline at end of file diff --git a/Posts/OCaml b/Posts/OCaml index e7335f1..a7cdf0c 100644 --- a/Posts/OCaml +++ b/Posts/OCaml @@ -1,5 +1,5 @@ -Why OCaml

      Why OCaml

      Written by hannes
      Classified under: overviewbackground
      Published: 2016-04-17 (last updated: 2021-11-19)

      Programming

      +Why OCaml

      Why OCaml

      Written by hannes
      Classified under: overviewbackground
      Published: 2016-04-17 (last updated: 2021-11-19)

      Programming

For me, programming is fun. I enjoy doing it, every single second. All the way from designing over experimenting to debugging why it does not do what I want. In the end, the computer is dumb and executes only what you (or code from @@ -17,7 +17,7 @@ allows a developer to encode invariants, without the need to defer to runtime assertions. Type systems differ in their expressive power (dependent typing is the hot research area at the moment). Tooling depends purely on the community size; natural selection lets the useful tools prevail (community size gives inertia to other factors: demand for libraries, package manager, activity on stack overflow, etc.).

      -

      Why OCaml?

      +

      Why OCaml?

      As already mentioned in other articles here, it is a combination of sufficiently large community, runtime stability and performance, modularity, @@ -50,7 +50,7 @@ code might end up in an error state (common for parsers which process input from the network), return a variant type as value (type ('a, 'b) result = Ok of 'a | Error of 'b). That way, the caller has to handle both the success and failure case explicitly.
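A tiny example of this style - a hypothetical parser, not taken from any particular library - where the caller cannot forget the error case:

    (* hypothetical parser returning a result: the caller must handle both cases *)
    let parse_port (s : string) : (int, string) result =
      match int_of_string_opt s with
      | Some p when p >= 0 && p < 65536 -> Ok p
      | Some _ -> Error "port out of range"
      | None -> Error "not a number"

    let () =
      match parse_port "8080" with
      | Ok p -> Printf.printf "listening on %d\n" p
      | Error msg -> prerr_endline ("invalid port: " ^ msg)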

      -

      Where to start?

      +

      Where to start?

The OCaml website contains a variety of tutorials and examples, including introductory @@ -105,7 +105,7 @@ which watches lots of MirageOS-related repositories.

      specification and implementation. I'm interested in feedback, either via twitter or via eMail.

      -

      Other updates in the MirageOS ecosystem

      +

      Other updates in the MirageOS ecosystem

      • Canopy now sends out appropriate content type HTTP headers
      • diff --git a/Posts/OpamMirror b/Posts/OpamMirror index 74c4fb4..2dd4848 100644 --- a/Posts/OpamMirror +++ b/Posts/OpamMirror @@ -1,22 +1,22 @@ Mirroring the opam repository and all tarballs

        Mirroring the opam repository and all tarballs

        Written by hannes
        Classified under: mirageosdeploymentopam
        Published: 2022-09-29 (last updated: 2022-10-11)

        We at robur developed opam-mirror in the last month and run a public opam mirror at https://opam.robur.coop (updated hourly).

        -

        What is opam and why should I care?

        +

        What is opam and why should I care?

        Opam is the OCaml package manager (also used by other projects such as coq). It is a source based system: the so-called repository contains the metadata (url to source tarballs, build dependencies, author, homepage, development repository) of all packages. The main repository is hosted on GitHub as ocaml/opam-repository, where authors of OCaml software can contribute (as pull request) their latest releases.

When opening a pull request, automated systems attempt to build not only the newly released package on various platforms and OCaml versions, but also all reverse dependencies, and also with dependencies at the lowest allowed version numbers. That's crucial since neither has semantic versioning been adopted across the OCaml ecosystem (which is tricky - for example, due to local opens, any newly introduced binding can lead to a major version bump), nor do many people add upper bounds on dependencies when releasing a package (nobody is keen to state "my package will not work with cmdliner in version 1.2.0").

        So, the opam-repository holds the metadata of lots of OCaml packages (around 4000 at the moment this article was written) with lots of versions (in total 25000) that have been released. It is used by the opam client to figure out which packages to install or upgrade (using a solver that takes the version bounds into consideration).

        Of course, opam can use other repositories (overlays) or forks thereof. So nothing stops you from using any other opam repository. The url to the source code of each package may be a tarball, or a git repository or other version control systems.

The vast majority of opam packages released to the opam-repository include a link to the source tarball and a cryptographic hash of the tarball. This is crucial for security (under the assumption the opam-repository has been downloaded from a trustworthy source - check back later this year for updates on conex). At the moment, there are some weak spots with respect to security: md5 is still allowed, and the hash and the tarball are downloaded from the same server: anyone who is in control of that server can inject arbitrary malicious data. As outlined above, we're working on infrastructure which fixes the latter issue.

        -

        How does the opam client work?

        +

        How does the opam client work?

Opam, after initialisation, downloads the index.tar.gz from https://opam.ocaml.org/index.tar.gz, and uses this as the local opam universe. An opam install cmdliner will resolve the dependencies, and download all required tarballs. The download is first tried from the cache, and if that fails, the URL in the package file is used. The download from the cache uses the base url, appends the archive-mirror, followed by the hash algorithm, the first two characters of the hash of the tarball, and the hex encoded hash of the archive, i.e. for cmdliner 1.1.1 which specifies its sha512: https://opam.ocaml.org/cache/sha512/54/5478ad833da254b5587b3746e3a8493e66e867a081ac0f653a901cc8a7d944f66e4387592215ce25d939be76f281c4785702f54d4a74b1700bc8838a62255c9e.
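That path construction is plain string concatenation; a small sketch (not the actual opam client code), using the cmdliner hash from above:

    (* sketch of the cache URL scheme: base / cache / algorithm / first two hex chars / full hash *)
    let cache_url ~base ~hash_algo ~hex_hash =
      String.concat "/"
        [ base; "cache"; hash_algo; String.sub hex_hash 0 2; hex_hash ]

    let () =
      let sha512 = "5478ad833da254b5587b3746e3a8493e66e867a081ac0f653a901cc8a7d944f66e4387592215ce25d939be76f281c4785702f54d4a74b1700bc8838a62255c9e" in
      print_endline (cache_url ~base:"https://opam.ocaml.org" ~hash_algo:"sha512" ~hex_hash:sha512)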

        -

        How does the opam repository work?

        +

        How does the opam repository work?

        According to DNS, opam.ocaml.org is a machine at amazon. It likely, apart from the website, uses opam admin index periodically to create the index tarball and the cache. There's an observable delay between a package merge in the opam-repository and when it shows up at opam.ocaml.org. Recently, there was a reported downtime.

Apart from being a single point of failure, if you're compiling a lot of opam projects (e.g. a continuous integration / continuous build system), it makes sense from a network usage (and thus sustainability) perspective to move the cache closer to where you need the source archives. We're also organising the MirageOS hack retreats in a northern African country with poor connectivity - so if you gather two dozen camels you better bring your opam repository cache with you to reduce the bandwidth usage (NB: this requires at the moment cooperation of all participants to configure their default opam repository accordingly).

        -

        Re-developing "opam admin create" as MirageOS unikernel

        +

        Re-developing "opam admin create" as MirageOS unikernel

Due to the need for a local opam cache at our reproducible build infrastructure and at the retreats, we decided to develop opam-mirror as a MirageOS unikernel. Apart from being a useful showcase for persistent storage (that won't fit into memory), and having fun while developing it, our aim was to reduce our time spent on system administration (the opam admin index approach is only one part of the story: it needs a Unix system and a webserver next to it - plus remote access for doing software updates - which has quite some attack surface).

        Another reason for re-developing the functionality was that the opam code (what opam admin index actually does) is part of the opam source code, which totals to 50_000 lines of code -- looking up whether one or all checksums are verified before adding the tarball to the cache, was rather tricky.

        In earlier years, we avoided persistent storage and block devices in MirageOS (by embedding it into the source code with crunch, or using a remote git repository), but recent development, e.g. of chamelon sparked some interest in actually using file systems and figuring out whether MirageOS is ready in that area. A month ago we started the opam-mirror project.

        Opam-mirror takes a remote repository URL, and downloads all referenced archives. It serves as a cache and opam-repository - and does periodic updates from the remote repository. The idea is to validate all available checksums and store the tarballs only once, and store overlays (as maps) from the other hash algorithms.
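The overlay idea can be pictured as follows (hypothetical sketch, not the actual opam-mirror code): archives are stored once under their SHA256, and the other digests are merely pointers to that key.

    (* hypothetical sketch of the overlay maps: store once, map other digests to SHA256 *)
    module SM = Map.Make (String)

    type store = {
      archives : string SM.t;   (* sha256 (hex) -> tarball contents *)
      md5 : string SM.t;        (* md5 (hex)    -> sha256 (hex) *)
      sha512 : string SM.t;     (* sha512 (hex) -> sha256 (hex) *)
    }

    let lookup store = function
      | `SHA256 h -> SM.find_opt h store.archives
      | `MD5 h -> Option.bind (SM.find_opt h store.md5) (fun s -> SM.find_opt s store.archives)
      | `SHA512 h -> Option.bind (SM.find_opt h store.sha512) (fun s -> SM.find_opt s store.archives)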

        -

        Code development and improvements

        +

        Code development and improvements

Initially, our plan was to use ocaml-git for pulling the repository, chamelon for persistent storage, and httpaf as web server. With ocaml-tar's recent support of gzip we should be all set, and done within a few days.

There is already a gap in the above plan: which http client to use - in the best case something similar to our http-lwt-client - in MirageOS: it should support HTTP 1.1 and HTTP 2, TLS (with certificate validation), and use happy-eyeballs to seamlessly support both IPv6 and legacy IPv4. Of course it should follow redirects, without that we won't get far in the current Internet.

        On the path (over the last month), we fixed file descriptor leaks (memory leaks) in paf -- which is used as a runtime for httpaf and h2.

        @@ -26,7 +26,7 @@

Neither the git state nor the maps are suitable for tar's append-only semantics, and we didn't want to investigate yet another file system - fat may just work fine, but the code looks slightly bitrotted, and the reported issues and inactivity don't make this package very trustworthy from our point of view. Instead, we developed mirage-block-partition to partition a block device into two. Then we just store the maps and the git state at the end - the end of a tar archive is 2 blocks of zeroes, so stuff at the far end isn't considered by any tooling. Extending the tar archive is also possible, only the maps and git state need to be moved to the end (or recomputed). As the file system, we developed oneffs, which stores a single value on the block device.

We observed a high memory usage, since each requested archive was first read from the block device into memory, and then sent out. Thanks to Pierre Alain's recent enhancements of the mirage-kv API, there is a get_partial, which we use to read the archive chunk-wise and send it via HTTP. Now, the memory usage is around 20MB (the git repository and the generated tarball are kept in memory).
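The serving loop is conceptually a simple chunked read; a sketch with stand-in functions (the exact get_partial signature is not reproduced here):

    (* sketch of chunk-wise serving with stand-in functions, not the mirage-kv API *)
    let stream ~size ~read_chunk ~send =
      let chunk = 65536 in
      let rec go off =
        if off < size then begin
          let len = min chunk (size - off) in
          send (read_chunk ~offset:off ~length:len);  (* read one slice, send it out *)
          go (off + len)
        end
      in
      go 0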

What is next? Downloading and writing to the tar archive could be done chunk-wise as well; also dumping and restoring the git state is quite CPU intensive, we would like to improve that. Adding the TLS frontend (currently done on our site by our TLS termination proxy tlstunnel) similar to how unipi does it, including let's encrypt provisioning -- should be straightforward (drop us a note if you'd be interested in that feature).

        -

        Conclusion

        +

        Conclusion

        To conclude, we managed within a month to develop this opam-mirror cache from scratch. It has a reasonable footprint (CPU and memory-wise), is easy to maintain and easy to update - if you want to use it, we also provide reproducible binaries for solo5-hvt. You can use our opam mirror with opam repository set-url default https://opam.robur.coop (revert to the other with opam repository set-url default https://opam.ocaml.org) or use it as a backup with opam repository add robur --rank 2 https://opam.robur.coop.

        Please reach out to us (at team AT robur DOT coop) if you have feedback and suggestions. We are a non-profit company, and rely on donations for doing our work - everyone can contribute.

        \ No newline at end of file diff --git a/Posts/OperatingSystem b/Posts/OperatingSystem index 2011799..72c064d 100644 --- a/Posts/OperatingSystem +++ b/Posts/OperatingSystem @@ -1,6 +1,6 @@ Operating systems

        Operating systems

        Written by hannes
        Published: 2016-04-09 (last updated: 2021-11-19)

        Sorry to be late with this entry, but I had to fix some issues.

        -

        What is an operating system?

        +

        What is an operating system?

        Wikipedia says: "An operating system (OS) is system software that manages computer hardware and software resources and provides common services for computer programs." Great. In other terms, it is an abstraction layer. @@ -39,7 +39,7 @@ the MMU).

This ominous "cloud" uses hypervisors on a huge number of physical machines, and executes off-the-shelf operating systems as virtual machines on top. Accounting is done by resource usage (time, bandwidth, storage).

        -

        From scratch

        +

        From scratch

Ok, now we have hypervisors which already deal with memory and scheduling. Why should we have the very same functionality again in the (general purpose) operating system running as a virtual machine?

        @@ -55,7 +55,7 @@ a strange urge to spend some time on Dylan, which is really weird..."

        TCP/IP stack in Dylan), and as mentioned earlier I went into formal methods and mechanised proofs of full functional correctness properties.

        -

        MirageOS

        +

        MirageOS

        At the end of 2013, David pointed me to MirageOS, an operating system developed from scratch in the functional and statically typed language OCaml. I've not @@ -131,7 +131,7 @@ access control.

I hope I gave some insight into what the purpose of an operating system is, and how MirageOS fits into the picture. I'm interested in feedback, either via twitter or via eMail.

        -

        Other updates in the MirageOS ecosystem

        +

        Other updates in the MirageOS ecosystem

        • this website is based on Canopy, the content is stored as markdown in a git repository
        • diff --git a/Posts/Pinata b/Posts/Pinata index 4407c05..9ac8b7e 100644 --- a/Posts/Pinata +++ b/Posts/Pinata @@ -1,5 +1,5 @@ -The Bitcoin Piñata - no candy for you

          The Bitcoin Piñata - no candy for you

          Written by hannes
          Classified under: mirageossecuritybitcoin
          Published: 2018-04-18 (last updated: 2021-11-19)

          History

          +The Bitcoin Piñata - no candy for you

          The Bitcoin Piñata - no candy for you

          Written by hannes
          Classified under: mirageossecuritybitcoin
          Published: 2018-04-18 (last updated: 2021-11-19)

          History

          On February 10th 2015 David Kaloper-Meršinjak and Hannes Mehnert launched (read also Amir's description) our bug bounty @@ -14,7 +14,7 @@ Piñata.

          The 10 Bitcoin in the Piñata were fluctuating in price over time, at peak worth 165000€.

          From the start of the Piñata project, we published the source code, the virtual machine image, and the versions of the used libraries in a git repository. Everybody could develop their exploits locally before launching them against our Piñata. The Piñata provides TLS endpoints, which require private keys and certificates. These are generated by the Piñata at startup, and the secret for the Bitcoin wallet is provided as a command line argument.

Initially the Piñata was deployed on a Linux/Xen machine, later it was migrated to a FreeBSD host using BHyve and VirtIO with solo5, and in December 2017 it was migrated to native BHyve (using ukvm-bin and solo5). We also changed the Piñata code to accommodate updates, such as the MirageOS 3.0 release, and the discontinuation of floating point numbers for timestamps (asn1-combinators 0.2.0, x509 0.6.0, tls 0.9.0).

          -

          Motivation

          +

          Motivation

          We built the Piñata for many purposes: to attract security professionals to evaluate our from-scratch developed TLS stack, to gather empirical data for our Usenix Security 15 paper, and as an improvement to current bug bounty programs.

          Most bug bounty programs require communication via forms and long wait times for human experts to evaluate the potential bug. This evaluation is subjective, @@ -27,7 +27,7 @@ to read arbitrary memory. Everyone can track transactions of the blockchain and see if the wallet still contains the bounty.

Of course, the Piñata can't prove that our stack is secure, and it can't prove that access to the wallet is actually contained inside. But trust us, it is!

          -

          Observations

          +

          Observations

I still remember vividly the first nights in February 2015, being so nervous that I woke up every two hours and checked the blockchain. Did the Piñata still have the Bitcoins? I was familiar with the code of the Piñata and was afraid there might be a bug which allows bypassing authentication or leaking the private key. So far, this doesn't seem to be the case.

          In April 2016 we stumbled upon an information disclosure in the virtual network device driver for Xen in MirageOS. Given enough diff --git a/Posts/ReproducibleOPAM b/Posts/ReproducibleOPAM index 62bd83b..3269b4b 100644 --- a/Posts/ReproducibleOPAM +++ b/Posts/ReproducibleOPAM @@ -1,19 +1,19 @@ -Reproducible MirageOS unikernel builds

          Reproducible MirageOS unikernel builds

          Written by hannes
          Published: 2019-12-16 (last updated: 2021-11-19)

          Reproducible builds summit

          +Reproducible MirageOS unikernel builds

          Reproducible MirageOS unikernel builds

          Written by hannes
          Published: 2019-12-16 (last updated: 2021-11-19)

          Reproducible builds summit

I'm just back from the Reproducible builds summit 2019. In 2018, several people developing OCaml and opam and MirageOS attended the Reproducible builds summit in Paris. The notes from last year on opam reproducibility and MirageOS reproducibility are online. After last year's workshop, Raja started developing the opam reproducibility builder orb, which I extended at and after this year's summit. This year, before and after the facilitated summit, there were hacking days, which allowed further interaction with participants, writing some code, and conducting experiments. I again had an exciting time at the summit and hacking days, thanks to our hosts, organisers, and all participants.

          -

          Goal

          +

          Goal

Stepping back a bit, let's first look at the goal of reproducible builds: when compiling source code multiple times, the produced binaries should be identical. It should be sufficient if the binaries are behaviourally equal, but this is pretty hard to check. It is much easier to check bit-wise identity of binaries, and this relaxes the burden on the checker -- checking for reproducibility is reduced to computing the hash of the binaries. Let's stick to the bit-wise identical binary definition, which also means software developers have to avoid non-determinism during compilation in their toolchains, dependent libraries, and developed code.

A checklist of potential sources of non-determinism has been written up by the reproducible builds project. Examples include recording the build timestamp into the binary, and the ordering of code and embedded data. The reproducible builds project also developed disorderfs for testing reproducibility and diffoscope for comparing binaries with file-dependent readers, falling back to objdump and hexdump. A giant test infrastructure with lots of variations between the builds, mostly using Debian, has been set up over the years.

Reproducibility is a precondition for trustworthy binaries. See why does it matter. If there are no instructions on how to get from the published sources to the exact binary, why should anyone trust and use the binary which claims to be the result of the sources? It may as well contain different code, including a backdoor, bitcoin mining code, or code outputting the wrong results for specific inputs, etc. Reproducibility does not imply the software is free of security issues or backdoors, but instead of an audit of the binary - which is tedious and rarely done - the source code can be audited - but the toolchain (compiler, linker, ..) used for compilation needs to be taken into account, i.e. trusted or audited to not be malicious. I will only ever publish binaries if they are reproducible.

          My main interest at the summit was to enhance existing tooling and conduct some experiments about the reproducibility of MirageOS unikernels -- a unikernel is a statically linked ELF binary to be run as Unix process or virtual machine. MirageOS heavily uses OCaml and opam, the OCaml package manager, and is an opam package itself. Thus, checking reproducibility of a MirageOS unikernel is the same problem as checking reproducibility of an opam package.

          -

          Reproducible builds with opam

          +

          Reproducible builds with opam

Testing for reproducibility is achieved by taking the sources and compiling them twice independently. Afterwards the equality of the resulting binaries can be checked. In trivial projects, the sources are just a single file, or originate from a single tarball. In OCaml, opam uses a community repository where OCaml developers publish their package releases, but it can also use custom repositories, and in addition pin packages to git remotes (a url including branch or commit), or a directory on the local filesystem. Manually tracking and updating all dependent packages of a MirageOS unikernel is not feasible: our hello-world compiled for hvt (kvm/BHyve) already has 79 opam dependencies, including the OCaml compiler which is distributed as an opam package. The unikernel serving this website depends on 175 opam packages.

Conceptually there should be two tools: the initial builder, which takes the latest opam packages which do not conflict, and exports the exact package versions used during the build, as well as hashes of the binaries. The other tool is a rebuilder, which imports the export, conducts a build, and outputs the hashes of the produced binaries.

Opam has the concept of a switch, which is an environment where a package set is installed. Switches are independent of each other, and can already be exported and imported. Unfortunately the export is incomplete: if a package includes additional patches as part of the repository -- sometimes needed for fixing releases where the actual author or maintainer of a package responds slowly -- these patches do not end up in the export. Also, if a package is pinned to a git branch, the branch appears in the export, but this may change over time by pushing more commits or even force-pushing to that branch. In PR #4040 (under discussion and review), also developed during the summit, I propose to embed the additional files as base64 encoded values in the opam file. To solve the latter issue, I modified the export mechanism to embed the git commit hash (PR #4055), and to avoid sources from a local directory or sources which do not have a checksum.

So the opam export contains the information required to gather the exact same sources and build instructions of the opam packages. If the opam repository were self-contained (i.e. did not depend on any other tools), this would be sufficient. But opam does not run in thin air; it requires some system utilities such as /bin/sh, sed, a GNU make, commonly git, a C compiler, a linker, an assembler. Since opam is available on various operating systems, the plugin depext handles host system dependencies: e.g. if your opam package requires gmp to be installed, the required package name differs slightly depending on the host system or distribution, take a look at conf-gmp. This also means opam has rather good information about both the opam dependencies and the host system dependencies for each package. Please note that the host system packages used during compilation are not yet recorded (i.e. which gmp package was installed and used during the build, only that a gmp package has to be installed). The base utilities mentioned above (C compiler, linker, shell) are also not recorded yet.
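As a small illustration (assuming the depext plugin is installed; the package name conf-gmp is the example from above), resolving the host system dependency looks roughly like this:

$ opam depext conf-gmp    # installs the distribution's gmp package required by conf-gmp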

          Operating system information available in opam (such as architecture, distribution, version), which in some cases maps to exact base utilities, is recorded in the build-environment, a separate artifact. The environment variable SOURCE_DATE_EPOCH, used for communicating the same timestamp when software is required to record a timestamp into the resulting binary, is also captured in the build environment.

Additional environment variables may be captured or used by opam packages to produce different output. To avoid this, both the initial builder and the rebuilder are run with minimal environment variables: only PATH (normalised to a whitelist of /bin, /usr/bin, /usr/local/bin and /opt/bin) and HOME are defined. Missing information at the moment includes CPU features: some libraries (gmp?, nocrypto) emit different code depending on the available CPU features.
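Orb sets up this minimal environment itself; purely as an illustration of the idea, a manual approximation for a single invocation could look like this (the package name is a placeholder):

$ env -i PATH=/bin:/usr/bin:/usr/local/bin:/opt/bin HOME=$HOME orb build my-package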

          -

          Tooling

          +

          Tooling

          TL;DR: A build builds an opam package, and outputs .opam-switch, .build-hashes.N, and .build-environment.N. A rebuild uses these artifacts as input, builds the package and outputs another .build-hashes.M and .build-environment.M.

          The command-line utility orb can be installed and used:

          $ opam pin add orb git+https://github.com/hannesm/orb.git#active
          @@ -22,7 +22,7 @@ $ orb build --twice --keep-build-dir --diffoscope <your-favourite-opam-packag
           

It provides two subcommands, build and rebuild. The build command takes a list of local opam repositories to take opam packages from (--repos, defaults to default), a compiler (either a variant --compiler=4.09.0+flambda, a version --compiler=4.06.0, or a pin to a local development version --compiler-pin=~/ocaml), and optionally an existing switch --use-switch. It creates a switch, builds the packages, and emits the opam export, hashes of all files installed by these packages, and the build environment. The flag --keep-build retains the build products; opam's --keep-build-dir in addition keeps temporary build products and generated source code. If --twice is provided, a rebuild (described next) is executed after the initial build.
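Combining the flags described above (the package name albatross is only an example; pick your favourite opam package), an initial build that immediately rebuilds and compares could be invoked roughly like this:

$ orb build --repos=default --compiler=4.09.0 --twice --keep-build-dir --diffoscope albatross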

          The rebuild command takes a directory with the opam export and build environment to build the opam package. It first compares the build-environment with the host system, sets the SOURCE_DATE_EPOCH and switch location accordingly and executes the import. Once the build is finished, it compares the hashes of the resulting files with the previous run. On divergence, if build directories were kept in the previous build, and if diffoscope is available and --diffoscope was provided, diffoscope is run on the diverging files. If --keep-build-dir was provided as well, diff -ur can be used to compare the temporary build and sources, including build logs.

The builds are run in parallel, as opam does by default; this parallelism did not lead to different binaries in my experiments.

          -

          Results and discussion

          +

          Results and discussion

          All MirageOS unikernels I have deployed are reproducible \o/. Also, several binaries such as orb itself, opam, solo5-hvt, and all albatross utilities are reproducible.

The unikernels range from hello world over web servers (e.g. this blog, getting its data on startup via a git clone into memory) and authoritative DNS servers to a CalDAV server. They vary in size between 79 and 200 opam packages, resulting in ELF binaries between 2 MB and 16 MB (including debug symbols). The unikernel opam repository contains some reproducible unikernels used for testing. Some work-in-progress enhancements are needed to achieve this:

          At the moment, the opam package of a MirageOS unikernel is automatically generated by mirage configure, but only used for tracking opam dependencies. I worked on mirage PR #1022 to extend the generated opam package with build and install instructions.

          @@ -31,7 +31,7 @@ $ orb build --twice --keep-build-dir --diffoscope <your-favourite-opam-packag

Functoria, a tool used to configure MirageOS devices and their dependencies, can emit a list of opam packages which were required to build the unikernel. This uses opam list --required-by --installed --rec <pkgs>, which relies on the cudf graph (thanks to Raja for the explanation) and during the rebuild drops some packages. PR #189 avoids this by not using the --rec argument, but manually computing the fixpoint.

Certainly, the choice of environment variables, and whether to vary them (as debian does) or to not define them (or normalise them) while building, is arguable. Since MirageOS supports neither time zones nor internationalisation, there is no need to solve this issue prematurely. On a related note, even with different locale settings, MirageOS unikernels are reproducible apart from an issue in ocamlgraph #90 embedding the output of date, which is different depending on LANG and locale (LC_*) settings.

Prior art in reproducible MirageOS unikernels is the mirage-qubes-firewall, which has been reproducible since early 2017. Their approach is different: they build in a docker container with the opam repository pinned to an exact git commit.

          -

          Further work

          +

          Further work

I only tested a certain subset of opam packages and MirageOS unikernels, mainly on a single machine (my laptop) running FreeBSD, and am happy if others test the reproducibility of their OCaml programs with the tools provided. There could also be CI machines rebuilding opam packages and reporting results to a central repository. I'm pretty sure there are more reproducibility issues in the opam ecosystem. I developed a reproducible testing opam repository with opam packages that do not depend on OCaml, mainly for further tooling development. Some tests were also conducted on a Debian system with the same result. The variations, apart from build time, were a different user and different locale settings.

          As mentioned above, more environment, such as the CPU features, and external system packages, should be captured in the build environment.

When comparing OCaml libraries, some output files (cmt / cmti / cma / cmxa) are not deterministic, but contain minimal divergences whose root cause I was not able to spot. It would be great to fix this, likely in the OCaml compiler distribution. Since the final result, the binary I'm interested in, is not affected by non-identical intermediate build products, I hope someone (you?) is interested in improving on this side. OCaml bytecode output also seems to be non-deterministic. There is a discussion on the coq issue tracker which may be related.
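To dig into such a divergence, the diffoscope tool mentioned earlier can be pointed at the two intermediate files kept from two builds (the paths here are placeholders):

$ diffoscope build-1/_build/default/src/foo.cmt build-2/_build/default/src/foo.cmt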

          diff --git a/Posts/Solo5 b/Posts/Solo5 index 4a8275e..2e06863 100644 --- a/Posts/Solo5 +++ b/Posts/Solo5 @@ -7,14 +7,14 @@
        • Update (2017-02-23): no more extra remotes, Mirage3 is released!
        -

        What?

        +

        What?

As described earlier, MirageOS is a library operating system developed in OCaml. The code size is already pretty small; deployments are so far either a UNIX binary or a Xen virtual machine.

Xen is a hypervisor, providing concrete device drivers for the actual hardware of a physical machine, memory management, scheduling, etc. The initial release of Xen was in 2003; since then the code size and code complexity of Xen have been growing. It also has various mechanisms for virtualisation, hardware-assisted ones and purely software-based ones, some where the guest operating system needs to cooperate and others where it does not.

Since 2005, Intel CPUs (as well as AMD CPUs) provide hardware assistance for virtualisation (the VT-x extension); since 2008 extended page tables (EPT) have been available, which allow a guest to safely access the MMU. Those features gave rise to much smaller hypervisors, such as KVM (mainly Linux), bhyve (FreeBSD), xhyve (MacOSX), and vmm (OpenBSD), which do not need to emulate the MMU and other things in software. The boot sequence in those hypervisors uses kexec or multiboot, instead of doing all the 16 bit, 32 bit, 64 bit mode changes manually.

MirageOS initially targeted only Xen; in 2015 there was a port to rumpkernel (a modularised NetBSD), and in 2016 solo5 emerged, on which MirageOS can run as well. Solo5 comes in two shapes, either as ukvm on top of KVM, or as a multiboot image using virtio interfaces (block and network, plus a serial console). Solo5 is only ~1000 lines of code (plus dlmalloc), and ISC-licensed.

A recent paper describes the advantages of a tiny virtual machine monitor in detail, namely no more VENOM-like security issues since no legacy hardware is emulated. Also, each virtual machine monitor can be customised to the unikernel running on top of it: if the unikernel does not need a block device, the monitor shouldn't contain code for it. The idea is to have one customised monitor for each unikernel.

While lots of people seem to like KVM and Linux, I still prefer FreeBSD, its jails, and nowadays bhyve. Thanks to various cleanups of the solo5 code base, I finally found some time to look into porting solo5 to FreeBSD/bhyve. It runs and can output to the console.

        -

        How?

        +

        How?

These instructions are still slightly bumpy. If you have a FreeBSD system with bhyve (I use FreeBSD-CURRENT), and OCaml and opam (>=1.2.2) installed, it is pretty straightforward to get solo5 running. First, I'd suggest using a fresh opam switch in case you work on other OCaml projects: opam switch -A 4.04.0 solo5 (followed by eval `opam config env` to set up some environment variables).

        You need some software from the ports: devel/pkgconf, devel/gmake, devel/binutils, and sysutils/grub2-bhyve.

        An opam install mirage mirage-logs solo5-kernel-virtio mirage-bootvar-solo5 mirage-solo5 should provide you with a basic set of libraries.
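Put together as a shell sketch (the pkg package names are assumptions derived from the ports paths above; the opam commands are the ones from the text):

$ pkg install pkgconf gmake binutils grub2-bhyve
$ opam switch -A 4.04.0 solo5
$ eval `opam config env`
$ opam install mirage mirage-logs solo5-kernel-virtio mirage-bootvar-solo5 mirage-solo5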

        @@ -48,7 +48,7 @@ Kernel done. Goodbye!

The network and TLS stacks work as well (tested 30th October).

        -

        Open issues

        +

        Open issues

• I'm not happy to require ld from the ports (the one in base does not produce sensible binaries; related to -z max-page-size=0x1000)
        • @@ -57,10 +57,10 @@ Goodbye!
        • Debugging via gdb should be doable somehow, bhyve has some support for gdb, but it is unclear to me what I need to do to enter the debugger (busy looping in the VM and a gdb remote to the port opened by bhyve does not work).
        -

        Conclusion

        +

        Conclusion

        I managed to get solo5 to work with bhyve. I even use clang instead of gcc and don't need to link libgcc.a. :) It is great to see further development in hypervisors and virtual machine monitors. Especially thanks to Martin Lucina for getting things sorted.

        I'm interested in feedback, either via twitter or via eMail.

        -

        Other updates in the MirageOS ecosystem

        +

        Other updates in the MirageOS ecosystem

There were some busy times; several pull requests are still waiting to be merged (e.g. some cosmetics in mirage as preconditions for treemaps and dependency diagrams). I proposed to use sleep_ns : int64 -> unit io instead of sleep : float -> unit io (nobody wants floating point numbers); there is also an RFC for random, and Matt Gray proposed to get rid of CLOCK (and have a PCLOCK and an MCLOCK instead). Soon there will be a major MirageOS release which breaks all the previous unikernels! :)

        \ No newline at end of file diff --git a/Posts/Summer2019 b/Posts/Summer2019 index 54a7b8d..52eefaa 100644 --- a/Posts/Summer2019 +++ b/Posts/Summer2019 @@ -1,12 +1,12 @@ -Summer 2019

        Summer 2019

        Written by hannes
        Published: 2019-07-08 (last updated: 2021-11-19)

        Working at robur

        +Summer 2019

        Summer 2019

        Written by hannes
        Published: 2019-07-08 (last updated: 2021-11-19)

        Working at robur

As announced previously, I started to work at robur early 2018. We're a collective of five people, distributed around Europe and the US, with the goal of deploying MirageOS unikernels. We do this by developing bespoke MirageOS unikernels which provide useful services, and by deploying them for ourselves. We also develop new libraries and enhance existing ones and other components of MirageOS. Example unikernels include our website which uses Canopy, a CalDAV server that stores entries in a git remote, and DNS servers (the latter two are further described below).

Robur is part of the non-profit company Center for the Cultivation of Technology, which manages the legal and administrative side for us. We're ourselves responsible for acquiring funding to pay ourselves reasonable salaries. We received funding for CalDAV from prototypefund and further funding from Tarides, for TLS 1.3 from OCaml Labs; we security-audited an OCaml codebase, and received donations, also in the form of Bitcoins. We're looking for further funded collaborations and also contracting; mail us at team@robur.io. Please donate (tax-deductible in the EU), so we can accomplish our goal of putting robust and sustainable MirageOS unikernels into production, replacing insecure legacy systems that emit tons of CO2.

        -

        Deploying MirageOS unikernels

        +

        Deploying MirageOS unikernels

While several examples have been running for years (the MirageOS website, Bitcoin Piñata, TLS demo server, etc.), and some shell-scripts for cloud providers are floating around, deployment is not (yet) streamlined.

Service deployment is complex: you have to consider its configuration, exfiltration of logs and metrics, provisioning with valid key material (TLS certificate, hmac shared secret) and authenticators (CA certificate, ssh key fingerprint). Instead of requiring millions of lines of code for orchestration (such as Kubernetes), creating the images (docker), or provisioning (ansible), why not minimise the required configuration and dependencies?

Earlier in this blog I introduced Albatross, which, in an enhanced version, serves as our deployment platform on a physical machine (running 15 unikernels at the moment); I won't discuss it in more detail in this article.

        -

        CalDAV

        +

        CalDAV

In 2018, Steffi and I developed a CalDAV server. Since November 2018 we have a test installation for robur, initially running as a Unix process on a virtual machine and persisting data to files on the disk. In mid-June 2019 we migrated it to a MirageOS unikernel; thanks to great efforts in git and irmin, unikernels can push to a remote git repository. We extended the ssh library with an ssh client and use this in git. This also means our CalDAV server is completely immutable (it does not carry state across reboots, apart from the data in the remote repository) and does not have persistent state in the form of a block device. Its configuration is mainly done at compile time by the selection of libraries (syslog, monitoring, ...), and boot arguments passed to the unikernel at startup.

We monitored the resource usage when migrating our CalDAV server from a Unix process to a MirageOS unikernel. The unikernel size is just below 10MB. The workload is some clients communicating with the server on a regular basis. We use Grafana with an influx time series database to monitor virtual machines. Data is collected on the host system (rusage sysctl, kinfo_mem sysctl, ifdata sysctl, vm_get_stats BHyve statistics), and our unikernels these days emit further metrics (mostly counters: gc statistics, malloc statistics, tcp sessions, http requests and status codes).

        @@ -14,18 +14,18 @@

        A MirageOS unikernel, apart from a smaller attack surface, indeed uses fewer resources and actually emits less CO2 than the same service on a Unix virtual machine. So we're doing something good for the environment! :)

        Our calendar server contains at the moment 63 events, the git repository had around 500 commits in the past month: nearly all of them from the CalDAV server itself when a client modified data via CalDAV, and two manual commits: the initial data imported from the file system, and one commit for fixing a bug of the encoder in our icalendar library.

Our CalDAV implementation is very basic: scheduling and adding attendees (which requires sending out eMail) are not supported. But it works well for us; we have individual calendars and a shared one which everyone can write to. On the client side we use macOS and iOS iCalendar, Android DAVdroid, and Thunderbird. If you'd like to try our CalDAV server, have a look at our installation instructions. Please report any issues you find or if you struggle with the installation.

        -

        DNS

        +

        DNS

There has been more work on our DNS implementation, now here. We included a DNS client library, and some example unikernels are available. They also require our opam repository overlay. Please report issues if you run into trouble while experimenting with them.

Most prominent is primary-git, a unikernel which acts as a primary authoritative DNS server (UDP and TCP). On startup, it fetches a remote git repository that contains zone files and shared hmac secrets. The zones are served, and secondary servers are notified with the respective serial numbers of the zones, authenticated using TSIG with the shared secrets. The primary server provides dynamic in-protocol updates of DNS resource records (nsupdate), and after successful authentication pushes the change to the remote git. To change a zone, you can just edit the zonefile and push to the git remote - with the proper pre- and post-commit-hooks, an authenticated notify is sent to the primary server which then pulls the git remote.
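A rough sketch of that zone-editing workflow (the repository URL, zone name, and hook setup are placeholders; the notify hook is the one described above):

$ git clone git@git.example.org:zones.git && cd zones
$ $EDITOR example.org        # add or change records, bump the serial
$ git commit -am "add www A record"
$ git push                   # a commit/receive hook sends the authenticated notify to the primary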

Another noteworthy unikernel is letsencrypt, which acts as a secondary server; whenever a TLSA record with a custom type (0xFF) and a DER-encoded certificate signing request is observed, it requests a signature from let's encrypt by solving the DNS challenge. The certificate is pushed to the DNS server as a TLSA record as well. The DNS implementation provides ocertify and dns-mirage-certify which use the above mechanism to retrieve valid let's encrypt certificates. The caller (unikernel or Unix command-line utility) either takes a private key directly or generates one from a (provided) seed, and generates a certificate signing request. It then looks in DNS for a certificate which is still valid and matches the public key and the hostname. If such a certificate is not present, the certificate signing request is pushed to DNS (via the nsupdate protocol), authenticated using TSIG with a given secret. This way our public facing unikernels (website, this blog, TLS demo server, ..) block on startup until they get a certificate via DNS - we avoid embedding the certificate into the unikernel image.

        -

        Monitoring

        +

        Monitoring

We like to gather statistics about the resource usage of our unikernels to find potential bottlenecks and observe memory leaks ;) The base for the setup is the metrics library, which is similar in design to the logs library: libraries use the core to gather metrics. A different aspect is the reporter, which is globally registered and responsible for exfiltrating the data via its favourite protocol. If no reporter is registered, the work overhead is negligible.

This is a dashboard which combines statistics gathered from the host system and various metrics from the MirageOS unikernel. The monitoring branch of our opam repository overlay is used together with monitoring-experiments. The logs errors counter (middle right) was caused by the icalendar parser which tried to parse its own badly emitted ics (the bug is now fixed; the dashboard is from last month).

        -

        OCaml libraries

        +

        OCaml libraries

        The domain-name library was developed to handle RFC 1035 domain names and host names. It initially was part of the DNS code, but is now freestanding to be used in other core libraries (such as ipaddr) with a small dependency footprint.

The GADT map is a normal OCaml Map structure, but takes key-dependent value types by using a GADT. This library was also part of DNS, but is more broadly useful: we already use it in our icalendar library (the data format for calendar entries in CalDAV), our OpenVPN configuration parser uses it as well, and so does x509 - which got reworked quite a bit recently (release pending), and which has preliminary PKCS12 support (which deserves its own article). TLS 1.3 is available on a branch, but is not yet merged. More work is underway, hopefully with sufficient time to write more articles about it.

        -

        Conclusion

        +

        Conclusion

More projects are happening as we speak; it takes time to upstream all the changes, such as monitoring, new core libraries, getting our DNS implementation released, pushing Conex into production, and more features such as DNSSEC, ...

        I'm interested in feedback, either via twitter hannesm@mastodon.social or via eMail.

        \ No newline at end of file diff --git a/Posts/Syslog b/Posts/Syslog index 1a060fb..317e83d 100644 --- a/Posts/Syslog +++ b/Posts/Syslog @@ -2,7 +2,7 @@ Exfiltrating log data using syslog

        Exfiltrating log data using syslog

        Written by hannes
        Classified under: mirageosprotocollogging
        Published: 2016-11-05 (last updated: 2021-11-19)

        It has been a while since my last entry... I've been busy working on too many projects in parallel, and was also travelling on several continents. I hope to get back to a biweekly cycle.

        -

        What is syslog?

        +

        What is syslog?

        According to Wikipedia, syslog is a standard for message logging. Syslog permits separation of the software which generates, stores, reports, and analyses the message. A syslog message contains @@ -59,7 +59,7 @@ into your /var/log/messages.

        This is a good first step, but we want more: on the one side integration into MirageOS, and a more reliable log stream (what about authentication and encryption?). I'll cover both topics in the rest of this article.

        -

        MirageOS integration

        +

        MirageOS integration

        Since Mirage3, syslog is integrated (see documentation). Some additions to your config.ml are needed, see ns @@ -77,7 +77,7 @@ let () = foreign ~deps:[abstract logger] ... -

        Reliable syslog

        +

        Reliable syslog

        The old BSD syslog RFC is obsoleted by RFC 5424, which describes a new wire format, and also a transport over TCP, and TLS in @@ -119,7 +119,7 @@ interface (also TLS). The code size is below 500 lines in total.

        -

        MirageOS syslog in production

        +

        MirageOS syslog in production

As collector I use syslog-ng, which is capable of receiving both the new and the old syslog messages on all three transports. The configuration snippet for a BSD syslog TLS collector is as follows:

        diff --git a/Posts/Traceroute b/Posts/Traceroute index 60f356a..495464d 100644 --- a/Posts/Traceroute +++ b/Posts/Traceroute @@ -1,5 +1,5 @@ -Traceroute

        Traceroute

        Written by hannes
        Classified under: mirageosprotocol
        Published: 2020-06-24 (last updated: 2021-11-19)

        Traceroute

        +Traceroute

        Traceroute

        Written by hannes
        Classified under: mirageosprotocol
        Published: 2020-06-24 (last updated: 2021-11-19)

        Traceroute

        Is a diagnostic utility which displays the route and measures transit delays of packets across an Internet protocol (IP) network.

        $ doas solo5-hvt --net:service=tap0 -- traceroute.hvt --ipv4=10.0.42.2/24 --ipv4-gateway=10.0.42.1 --host=198.167.222.207
        diff --git a/Posts/VMM b/Posts/VMM
        index a9a8f5b..e203a81 100644
        --- a/Posts/VMM
        +++ b/Posts/VMM
        @@ -1,6 +1,6 @@
         
         Albatross - provisioning, deploying, managing, and monitoring virtual machines

        Albatross - provisioning, deploying, managing, and monitoring virtual machines

        Written by hannes
        Published: 2017-07-10 (last updated: 2023-05-16)

        EDIT (2023-05-16): Please take a look at the updated article.

        -

        How to deploy unikernels?

        +

        How to deploy unikernels?

        MirageOS has a pretty good story on how to compose your OCaml libraries into a virtual machine image. The mirage command line utility contains all the knowledge about which backend requires which library. This enables it to write a @@ -37,7 +37,7 @@ waits for (mutually!) authenticated network connections, and provides the desired commands; to create a new virtual machine, to acquire a block device of a given size, to destroy a virtual machine, to stream the console output of a virtual machine.

        -

        System design

        +

        System design

        The system bears minimalistic characteristics. The single interface to the outside world is a TLS stream over TCP. Internally, there is a family of processes, one of which has superuser privileges, communicating via unix domain @@ -102,7 +102,7 @@ are destroyed.

The maximum size of a virtual machine image embedded into an X.509 certificate transferred over TLS is 2 ^ 24 - 1 bytes, roughly 16 MB. If this turns out not to be sufficient, compression may help. Or staging of deployment.

        -

        An example

        +

        An example

Instructions on how to set up vmmd and the certificate authority are in the README file of the albatross git repository. Here is some (stripped) terminal output:

        @@ -190,7 +190,7 @@ virtual machine is started or not:

        > vmm_client cacert.pem hello.bundle hello.key localhost:1025
         success VM started
         
        -

        Sharing is caring

        +

        Sharing is caring

        Deploying unikernels is now easier for myself on my physical machine. That's fine. Another aspect comes for free by reusing X.509: further delegation (and limiting thereof). Within a delegation certificate, the basic constraints @@ -207,7 +207,7 @@ and vmmd will only start up to 2 virtual machines using 2GB of memory in total issued delegation (using a revocation certificate described above) to free up some resources for herself. I don't need to interact when Alice or Dan share their delegated resources further.

        -

        Security

        +

        Security

There are several security properties preserved by vmmd, such as that the virtual machine image is never transmitted in the clear. Only properly authenticated clients can create, destroy, or gather statistics of their virtual machines.

        @@ -244,7 +244,7 @@ stored in a memory-backed file system. A virtual machine with a lots of disk operation may only delay or starve revocation list updates - if this turns out to be a problem, the solution may be to use separate physical block devices for the revocation lists and virtual block devices for clients.

        -

        Conclusion

        +

        Conclusion

I showed a minimalistic system to provision, deploy, and manage virtual machine images. It also allows delegating resources (CPU, disk, ..) further. I'm pretty satisfied with the security properties of the system.

        diff --git a/Posts/X50907 b/Posts/X50907 index c2cc310..b7d3faf 100644 --- a/Posts/X50907 +++ b/Posts/X50907 @@ -1,7 +1,7 @@ -X509 0.7

        X509 0.7

        Written by hannes
        Classified under: mirageossecuritytls
        Published: 2019-08-15 (last updated: 2021-11-19)

        Cryptographic material

        +X509 0.7

        X509 0.7

        Written by hannes
        Classified under: mirageossecuritytls
        Published: 2019-08-15 (last updated: 2021-11-19)

        Cryptographic material

Once a private and public key pair is generated (it doesn't matter whether it is plain RSA, DSA, or ECC on any curve), this is fine from a scientific point of view, and can already be used for authenticating and encrypting. From a practical point of view, the public parts need to be exchanged and verified (usually via a fingerprint or hash thereof). This leads to the struggle of how to encode this cryptographic material, and how to embed an identity (or multiple), capabilities, and other information into it. X.509 is a standard to solve this encoding and embedding, and provides more functionality, such as establishing chains of trust and revocation of invalidated or compromised material. X.509 uses certificates, which contain the public key and additional information (in an extensible key-value store), and are signed by an issuer: either by the private key corresponding to the public key - a so-called self-signed certificate - or by a different private key, an authority one step up the chain. A rather long, but very good introduction to certificates by Mike Malone is available here.

        -

        OCaml ecosystem evolving

        +

        OCaml ecosystem evolving

More than 5 years ago David Kaloper and I released the initial ocaml-x509 package as part of our TLS stack, which contained code for decoding and encoding certificates, and path validation of a certificate chain (as described in RFC 5280). The validation logic and the decoder/encoder, based on the ASN.1 grammar specified in the RFC and implemented using David's asn1-combinators library, changed a lot over time.

The OCaml ecosystem evolved over the years, which led to some changes:

          @@ -28,27 +28,27 @@
        • Usage of the alcotest unit testing framework (instead of oUnit).
        -

        More use cases for X.509

        +

        More use cases for X.509

Initially, we designed and used ocaml-x509 for providing TLS server endpoints and validation in TLS clients - mostly on the public web, where each operating system ships a set of ~100 trust anchors to validate any web server certificate against. But once you have an X.509 implementation, every authentication problem can be solved by applying it.

        -

        Authentication with path building

        +

        Authentication with path building

It turns out that the trust anchor sets are not equal across operating systems and versions, thus some web servers serve sets, instead of chains, of certificates - as described in RFC 4158, where the client implementation needs to build valid paths and accept a connection if any path can be validated. The path building was initially (in 0.5.2) slightly wrong, but quickly fixed in 0.5.3.

        -

        Fingerprint authentication

        +

        Fingerprint authentication

The chain of trust validation is useful for the open web, where you as a software developer don't know which remote endpoints your software will ever connect to - as long as the remote has a certificate signed (via intermediates) by any of the trust anchors. In the early days, before let's encrypt was launched and embedded as a trust anchor (or cross-signed by already deployed trust anchors), operators needed to pay for a certificate - a business model where some CAs did not bother to check the authenticity of a certificate signing request, which led to random people owning valid certificates for microsoft.com or google.com.

Instead of using the set of trust anchors, the fingerprint of the server certificate, or preferably the fingerprint of the public key of the certificate, can be used for authentication, as optionally done for some years in jackline, an XMPP client. Support for this certificate / public key pinning was added in x509 0.2.1 / 0.5.0.

        -

        Certificate signing requests

        +

        Certificate signing requests

Until x509 0.4.0 there was no support for generating certificate signing requests (CSR), as defined in PKCS 10, which are self-signed blobs containing a public key, an identity, and possibly extensions. Such a CSR is sent to the certificate authority, and after validation of ownership of the identity and paying a fee, the certificate is issued. Let's encrypt specified the ACME protocol which automates the proof of ownership: they provide an HTTP API for requesting a challenge, providing the response (the proof of ownership) via HTTP or DNS, and then allow the submission of a CSR and downloading the signed certificate. The ocaml-x509 library provides operations for creating such a CSR, and also for signing a CSR to generate a certificate.

        Mindy developed the command-line utility certify which uses these operations from the ocaml-x509 library and acts as a swiss-army knife purely in OCaml for these required operations.

        Maker developed a let's encrypt library which implements the above mentioned ACME protocol for provisioning CSR to certificates, also using our ocaml-x509 library.

To complete the required certificate authority functionality, in x509 0.6.0 certificate revocation lists, both validation and signing, were implemented.

        -

        Deploying unikernels

        +

        Deploying unikernels

As described in another post, I developed albatross, an orchestration system for MirageOS unikernels. This uses ASN.1 for internal socket communication and allows remote management via a TLS connection which is mutually authenticated with an X.509 client certificate. To encrypt the X.509 client certificate, first a TLS handshake where the server authenticates itself to the client is established, and over that connection another TLS handshake is established where the client certificate is requested. Note that this mechanism can be dropped with TLS 1.3, since there the certificates are transmitted over an already encrypted channel.

The client certificate already contains the command to execute remotely - as a custom extension, be it "show me the console output", "destroy the unikernel with name = YYY", or "deploy the included unikernel image". The advantage is that the commands are already authenticated, and there is no need for developing an ad-hoc protocol on top of the TLS session. The resource limits, assigned by the authority, are also part of the certificate chain - i.e. the number of unikernels, access to network bridges, available accumulated memory, and accumulated size for block devices are constrained by the certificate chain presented to the server, and by the currently running unikernels. The names of the chain are used for access control - if Alice and Bob have intermediate certificates from the same CA, neither Alice may manage Bob's unikernels, nor Bob may manage Alice's unikernels. I have been using albatross in production for 2.5 years on two physical machines with ~20 unikernels in total (multiple users, multiple administrative domains); it runs stably and is much nicer to deal with than scp and custom hacked shell scripts.

        -

        Why 0.7?

        +

        Why 0.7?

        There are still some missing pieces in our ocaml-x509 implementation, namely modern ECC certificates (depending on elliptic curve primitives not yet available in OCaml), RSA-PSS signing (should be straightforward), PKCS 12 (there is a pull request, but this should wait until asn1-combinators supports the ANY defined BY construct to cleanup the code), ... Once these features are supported, the library should likely be named PKCS since it supports more than X.509, and released as 1.0.

The 0.7 release series moved a lot of modules and function names around, thus it is a major breaking release. By using a map instead of lists for extensions, GeneralName, ..., the API was further revised - the invariant that each extension key (an ASN.1 object identifier) may occur at most once is now enforced. By not leaking exceptions through the public interface, the API is easier to use safely - see let's encrypt, openvpn, certify, tls, capnp, albatross.

I intended in 0.7.0 to have much more precise types, especially for the SubjectAlternativeName (SAN) extension that uses a GeneralName, but it turns out the GeneralName is also used for NameConstraints (NC) in a different way -- an IP in SAN is an IPv4 or IPv6 address, in NC it is the IP/netmask; DNS is a domain name in SAN, in NC it is a name starting with a leading dot (i.e. ".example.com"), which is not a valid domain name. In 0.7.1, based on a bug report, I had to revert these variants and use less precise types.

        -

        Conclusion

        +

        Conclusion

        The work on X.509 was sponsored by OCaml Labs. You can support our work at robur by a donation, which we will use to work on our OCaml and MirageOS projects. You can also reach out to us to realize commercial products.

        I'm interested in feedback, either via twitter hannesm@mastodon.social or via eMail.

        \ No newline at end of file diff --git a/Posts/nqsbWebsite b/Posts/nqsbWebsite index 1c60e9e..f9bca02 100644 --- a/Posts/nqsbWebsite +++ b/Posts/nqsbWebsite @@ -1,12 +1,12 @@ -Fitting the things together

        Fitting the things together

        Written by hannes
        Classified under: mirageoshttptlsprotocol
        Published: 2016-04-24 (last updated: 2021-11-19)

        Task

        +Fitting the things together

        Fitting the things together

        Written by hannes
        Classified under: mirageoshttptlsprotocol
        Published: 2016-04-24 (last updated: 2021-11-19)

        Task

        Our task is to build a small unikernel which provides a project website. On our way we will wade through various layers using code examples. The website itself contains a few paragraphs of text, some link lists, and our published papers in pdf form.

Spoiler alert: the final result can be seen here, the full code here.

        -

        A first idea

        +

        A first idea

        We could go all the way to use conduit for wrapping connections, and mirage-http (using cohttp, a very lightweight HTTP server). We'd just need to write routing code which in the end reads from a virtual file system, and some HTML and CSS for the actual site.

Turns out, the conduit library is already 1.7 MB in size and depends on 34 libraries; cohttp is another 3.7 MB with 40 dependent libraries. Both libraries are actively developed; combined, there were 25 releases within the last year.

        -

        Plan

        +

        Plan

        Let me state our demands more clearly:

        • easy to maintain @@ -17,7 +17,7 @@ Both libraries are actively developed, combined there were 25 releases within th

To achieve easy maintenance we keep build and run time dependencies small, and use a single virtual machine image to ease deployment. We try to develop only a little new code. Our general approach to performance is to do as little work as we can on each request, and to precompute as much as we can at compile time or once at startup.

        -

        HTML code

        +

        HTML code

        From the tyxml description: "Tyxml provides a set of combinators to build Html5 and Svg documents. These combinators use the OCaml type-system to ensure the validity of the generated Html5 and Svg." A tutorial is available.

You can plug elements (or attributes) inside each other only if the HTML specification allows this (no <body> inside of a <body>). A simple example, which can be rendered, is a div with pcdata inside.

        If you use utop (as interactive read-eval-print-loop), you first need to load tyxml by #require "tyxml".

        @@ -51,7 +51,7 @@ Our full page source (CSS embedding is done via a string, no fancy types there ( (body [ mycontent ]) ; Cstruct.of_string @@ Buffer.contents buf
        -

        Binary data

        +

        Binary data

There are various ways to embed binary data into MirageOS:

        • connect an external (FAT) disk image; upside: works for large data, independent, can be shared with other systems; downside: an extra file to distribute onto the production machine, lots of code (block storage and file system access) which can contain directory traversals and other issues @@ -86,7 +86,7 @@ let start kv =

          The funny >>= syntax notes that something is doing input/output, which might be blocking or interrupted or failing. It composes effects using an imperative style (semicolon in other languages, another term is monadic bind). The Page.render function is described above, and is pure, thus no >>=.

          We now have all the resources we wanted available inside our MirageOS unikernel. There is some cost during configuration (converting binary into code), and startup (concatenating lists, lookups, rendering HTML into string representation).

          -

          Building a HTTP response

          +

          Building a HTTP response

HTTP consists of headers and data; we already have the data. An HTTP header consists of an initial status line (HTTP/1.1 200 OK), and a list of key-value pairs, each of the form key + ": " + value + "\r\n" (+ is string concatenation). The header is separated from the data with "\r\n\r\n":

          let http_header ~status xs =
             let headers = List.map (fun (k, v) -> k ^ ": " ^ v) xs in
          @@ -98,7 +98,7 @@ let header content_type =
           

          We also know statically (at compile time) which headers to send: content-type should be text/html for our main page, and application/pdf for the pdf files. The status code 200 is used in HTTP to signal that the request is successful. We can combine the headers and the data during startup, because our single communication channel is HTTP, and thus we don't need access to the data or headers separately (support for HTTP caching etc. is out of scope).

We are now finished with the response side of HTTP, and can emit three different resources. Now, we need to handle incoming HTTP requests and dispatch them to the resource. Let's first take a brief detour to HTTPS (and thus TLS).

          -

          Security via HTTPS

          +

          Security via HTTPS

          Transport layer security is a protocol on top of TCP providing an end-to-end encrypted and authenticated channel. In our setting, our web server has a certificate and a private key to authenticate itself to the clients.

A certificate is a token containing a public key, a name, a validity period, and a signature from the authority which issued the certificate. The authority is crucial here: this infrastructure only works if the client trusts the public key of the authority (and thus can verify their signature on our certificate). I used let's encrypt (actually the letsencrypt.sh client; it would be great to have one natively in OCaml) to get a signed certificate, which is widely accepted by web browsers.

          The MirageOS interface for TLS is that it takes a FLOW (byte stream, e.g. TCP) and provides a FLOW. Libraries can be written to be agnostic whether they use a TCP stream or a TLS session to carry data.

          @@ -117,14 +117,14 @@ let header content_type =

The server_of_flow is provided by the Mirage TLS layer. It can fail if the client and server do not share a common protocol version or ciphersuite, or if one of the sides does not behave protocol-compliant.

          To wrap up, we managed to listen on the HTTPS port and establish TLS sessions for each incoming request. We have our resources available, and now need to dispatch the request onto the resource.

          -

          HTTP request handling

          +

          HTTP request handling

HTTP is a string-based protocol; the first line of the request contains the method and the resource which the client wants to access, e.g. GET / HTTP/1.1. We could read a single line from the client, and cut the part between GET and HTTP/1.1 to deliver the resource.

          This would either need some (brittle?) string processing, or a full-blown HTTP library on our side. "I'm sorry Dave, I can't do that". There is no way we'll do string processing on data received from the network for this.

Looking a bit deeper into TLS, there is a specification for server name indication (SNI) from 2003. The main purpose is to run multiple TLS services on a single IP(v4) address. The client indicates in the TLS handshake which server name it wants to talk to. According to Wikipedia, this extension is widely enough deployed.

          During the TLS handshake there is already some server name information exposed, and we have a very small set of available resources. Thanks to let's encrypt, generating certificates is easy and free of cost.

          And, if we're down to a single resource, we can use the same technique used by David in the BTC Piñata: just send the resource back without waiting for a request.

          -

          Putting it all together

          +

          Putting it all together

          What we need is a hostname for each resource, and certificates and private keys for them, or a single certificate with all hostnames as alternative names.

Our TLS library supports selecting a certificate chain based on the requested name (look here). The following snippet is a setup to use the nqsb.io certificate chain by default (if no SNI is provided, or none matches), and also have a usenix15 and a tron certificate chain.

          let start stack keys kv =
          @@ -171,7 +171,7 @@ the BTC Piñata: just send the resource
             S.listen stack
           

That's it. The nqsb.io unikernel contains slightly more code to log to a console, and to redirect requests on port 80 (HTTP) to port 443 (by signaling a 301 Moved Permanently HTTP status code).

          -

          Conclusion

          +

          Conclusion

A comparison using Firefox's builtin network diagnostics shows that the wait before receiving data is minimal (3 ms, sometimes even 0 ms).

          We do not render HTML for each request, we do not splice data together, we don't even read the client request. And I'm sure we can improve the performance even more by profiling.

          @@ -180,7 +180,7 @@ the BTC Piñata: just send the resource

          For a start in MirageOS unikernels, look into our mirage-skeleton project, and into the /dev/winter presentation by Matt Gray.

          I'm interested in feedback, either via twitter or via eMail.

          -

          Other updates in the MirageOS ecosystem

          +

          Other updates in the MirageOS ecosystem

          • Canopy improvements: no bower anymore, HTTP caching support (via etags), listings now include dates, dates are now in big-endian (y-m-d)
          • diff --git a/atom b/atom index 4b708c8..1bf21b2 100644 --- a/atom +++ b/atom @@ -1,8 +1,8 @@ -urn:uuid:981361ca-e71d-4997-a52c-baeee78e4156full stack engineer2023-05-16T17:21:47-00:00<p>fleet management for MirageOS unikernels using a mutually authenticated TLS handshake</p> +urn:uuid:981361ca-e71d-4997-a52c-baeee78e4156full stack engineer2023-05-16T17:21:47-00:00<p>fleet management for MirageOS unikernels using a mutually authenticated TLS handshake</p> 2022-11-17T12:41:11-00:00<p>EDIT (2023-05-16): Updated with albatross release version 2.0.0.</p> -<h2>Deploying MirageOS unikernels</h2> +<h2 id="deploying-mirageos-unikernels">Deploying MirageOS unikernels</h2> <p>More than five years ago, I posted <a href="/Posts/VMM">how to deploy MirageOS unikernels</a>. My motivation to work on this topic is that I'm convinced of reduced complexity, improved security, and more sustainable resource footprint of MirageOS unikernels, and want to ease deployment thereof. More than one year ago, I described <a href="/Posts/Deploy">how to deploy reproducible unikernels</a>.</p> -<h2>Albatross</h2> +<h2 id="albatross">Albatross</h2> <p>In recent months we worked hard on the underlying infrastructure: <a href="https://github.com/roburio/albatross">albatross</a>. Albatross is the orchestration system for MirageOS unikernels that use solo5 with <a href="https://github.com/Solo5/solo5/blob/master/docs/architecture.md">hvt or spt tender</a>. It deals with three tasks:</p> <ul> <li>unikernel creation (destroyal, restart) @@ -13,50 +13,50 @@ </li> </ul> <p>An addition to the above is dealing with multiple tenants on the same machine: remote management of your unikernel fleet via TLS, and resource policies.</p> -<h2>History</h2> +<h2 id="history">History</h2> <p>The initial commit of albatross was in May 2017. Back then it replaced the shell scripts and manual <code>scp</code> of unikernel images to the server. Over time it evolved and adapted to new environments. Initially a solo5 unikernel would only know of a single network interface, these days there can be multiple distinguished by name. Initially there was no support for block devices. Only FreeBSD was supported in the early days. Nowadays we built daily packages for Debian, Ubuntu, FreeBSD, and have support for NixOS, and the client side is supported on macOS as well.</p> -<h3>ASN.1</h3> +<h3 id="asn.1">ASN.1</h3> <p>The communication format between the albatross daemons and clients was changed multiple times. I'm glad that albatross uses ASN.1 as communication format, which makes extension with optional fields easy, and also allows &quot;choice&quot; (the sum type) to be not tagged (the binary is the same as no choice type), thus adding choice to an existing grammar, and preserving the old in the default (untagged) case is a decent solution.</p> <p>So, if you care about backward and forward compatibility, as we do, since we may be in control of which albatross servers are deployed on our machine, but not what albatross versions the clients are using -- it may be wise to look into ASN.1. Recent efforts (json with schema, ...) may solve similar issues, but ASN.1 is as well very tiny in size.</p> -<h2>What resources does a unikernel need?</h2> +<h2 id="what-resources-does-a-unikernel-need">What resources does a unikernel need?</h2> <p>A unikernel is just an operating system for a single service, there can't be much it can need.</p> -<h3>Name</h3> +<h3 id="name">Name</h3> <p>So, first of all a unikernel has a name, or a handle. 
This is useful for reporting statistics, but also to specify which console output you're interested in. The name is a string with printable ASCII characters (and dash '-' and dot '.'), with a length up to 64 characters - so yes, you can use an UUID if you like.</p> -<h3>Memory</h3> +<h3 id="memory">Memory</h3> <p>Another resource is the amount of memory assigned to the unikernel. This is specified in megabyte (as solo5 does), with the range being 10 (below not even a hello world wants to start) to 1024.</p> -<h3>Arguments</h3> +<h3 id="arguments">Arguments</h3> <p>Of course, you can pass via albatross boot parameters to the unikernel. Albatross doesn't impose any restrictions here, but the lower levels may.</p> -<h3>CPU</h3> +<h3 id="cpu">CPU</h3> <p>Due to multiple tenants, and side channel attacks, it looked right at the beginning like a good idea to restrict each unikernel to a specific CPU. This way, one tenant may use CPU 5, and another CPU 9 - and they'll not starve each other (best to make sure that these CPUs are in different packages). So, albatross takes a number as the CPU, and executes the solo5 tender within <code>taskset</code>/<code>cpuset</code>.</p> -<h3>Fail behaviour</h3> +<h3 id="fail-behaviour">Fail behaviour</h3> <p>In normal operations, exceptional behaviour may occur. I have to admit that I've seen MirageOS unikernels that suffer from not freeing all the memory they have allocated. To avoid having to get up at 4 AM just to start the unikernel that went out of memory, there's the possibility to restart the unikernel when it exited. You can even specify on which exit codes it should be restarted (the exit code is the only piece of information we have from the outside what caused the exit). This feature was implemented in October 2019, and has been very precious since then. :)</p> -<h3>Network</h3> +<h3 id="network">Network</h3> <p>This becomes a bit more complex: a MirageOS unikernel can have network interfaces, and solo5 specifies a so-called manifest with a list of these (name and type, and type is so far always basic). Then, on the actual server there are bridges (virtual switches) configured. Now, these may have the same name, or may need to be mapped. And of course, the unikernel expects a tap interface that is connected to such a bridge, not the bridge itself. Thus, albatross creates tap devices, attaches these to the respective bridges, and takes care about cleaning them up on teardown. The albatross client verifies that for each network interface in the manifest, there is a command-line argument specified (<code>--net service:my_bridge</code> or just <code>--net service</code> if the bridge is named service). The tap interface name is not really of interest to the user, and will not be exposed.</p> -<h3>Block devices</h3> +<h3 id="block-devices">Block devices</h3> <p>On the host system, it's just a file, and passed to the unikernel. There's the need to be able to create one, dump it, and ensure that each file is only used by one unikernel. That's all that is there.</p> -<h2>Metrics</h2> +<h2 id="metrics">Metrics</h2> <p>Everyone likes graphs, over time, showing how much traffic or CPU or memory or whatever has been used by your service. 
Some of these statistics are only available in the host system, and it is also crucial for development purposes to compare whether the bytes sent in the unikernel sum up to the same on the host system's tap interface.</p> <p>The albatross-stats daemon collects metrics from three sources: network interfaces, getrusage (of a child process), VMM debug counters (to count VM exits etc.). Since the recent 1.5.3, albatross-stats now connects at startup to the albatross-daemon and then retrieves the information which unikernels are up and running, and starts periodically collecting data in memory.</p> <p>Other clients, being it a dump on your console window, a write into an rrd file (good old MRTG times), or a push to influx, can use the stats data to correlate and better analyse what is happening on the grand scale of things. This helped a lot by running several unikernels with different opam package sets to figure out which opam packages leave their hands on memory over time.</p> <p>As a side note, if you make the unikernel name also available in the unikernel, it can tag its own metrics with the same identifier, and you can correlate high-level events (such as amount of HTTP requests) with low-level things &quot;allocated more memory&quot; or &quot;consumed a lot of CPU&quot;.</p> -<h2>Console</h2> +<h2 id="console">Console</h2> <p>There's not much to say about the console, just that the albatross-console daemon is running with low privileges, and reading from a FIFO that the unikernel writes to. It never writes anything to disk, but keeps the last 1000 lines in memory, available from a client asking for it.</p> -<h2>The daemons</h2> +<h2 id="the-daemons">The daemons</h2> <p>So, the main albatross-daemon runs with superuser privileges to create virtual machines, and opens a unix domain socket where the clients and other daemons are connecting to. The other daemons are executed with normal user privileges, and never write anything to disk.</p> <p>The albatross-daemon keeps state about the running unikernels, and if it is restarted, the unikernels are started again. Maybe worth to mention that this lead sometimes to headaches (due to data being dumped to disk, and the old format should always be supported), but was also a huge relief to not have to care about creating all the unikernels just because albatross-daemon was killed.</p> -<h2>Remote management</h2> +<h2 id="remote-management">Remote management</h2> <p>There's one more daemon program: albatross-tls-endpoint. It accepts clients via a remote TCP connection, and establish a mutual-authenticated TLS handshake. When done, the command is forwarded to the respective Unix domain socket, and the reply is sent back to the client.</p> <p>The daemon itself has a X.509 certificate to authenticate, but the client is requested to show its certificate chain as well. This by now requires TLS 1.3, so the client certificates are sent over the encrypted channel.</p> <p>A step back, X.509 certificate contains a public key and a signature from one level up. When the server knows about the root (or certificate authority (CA)) certificate, and following the chain can verify that the leaf certificate is valid. Additionally, a X.509 certificate is a ASN.1 structure with some fixed fields, but also contains extensions, a key-value store where the keys are object identifiers, and the values are key-dependent data. 
Also note that this key-value store is cryptographically signed.</p> <p>Albatross uses the object identifier, assigned to Camelus Dromedarius (MirageOS - 1.3.6.1.4.1.49836.42) to encode the command to be executed. This means that once the TLS handshake is established, the command to be executed is already transferred.</p> <p>In the leaf certificate, there may be the &quot;create unikernel&quot; command with the unikernel image, it's boot parameters, and other resources. Or a &quot;read the console of my unikernel&quot;. In the intermediate certificates (from root to leaf), resource policies are encoded (this path may only have X unikernels running with a total of Y MB memory, and Z MB of block storage, using CPUs A and B, accessing bridges C and D). From the root downwards these policies may only decrease. When a unikernel should be created (or other commands are executed), the policies are verified to hold. If they do not, an error is reported.</p> -<h2>Fleet management</h2> +<h2 id="fleet-management">Fleet management</h2> <p>Of course it is very fine to create your locally compiled unikernel to your albatross server, go for it. But in terms of &quot;what is actually running here?&quot; and &quot;does this unikernel need to be updated because some opam package had a security issues?&quot;, this is not optimal.</p> <p>Since we provide <a href="https://builds.robur.coop">daily reproducible builds</a> with the current HEAD of the main opam-repository, and these unikernels have no configuration embedded (but take everything as boot parameters), we just deploy them. They come with the information what opam packages contributed to the binary, which environment variables were set, and which system packages were installed with which versions.</p> <p>The whole result of reproducible builds for us means: we have a hash of a unikernel image that we can lookup in our build infrastructure, and take a look whether there is a newer image for the same job. And if there is, we provide a diff between the packages contributed to the currently running unikernel and the new image. That is what the albatross-client update command is all about.</p> <p>Of course, your mileage may vary and you want automated deployments where each git commit triggers recompilation and redeployment. The downside would be that sometimes only dependencies are updated and you've to cope with that.</p> <p>There is a client <code>albatross-client</code>, depending on arguments either connects to a local Unix domain socket, or to a remote albatross instance via TCP and TLS, or outputs a certificate signing request for later usage. Data, such as the unikernel ELF image, is compressed in certificates.</p> -<h2>Installation</h2> +<h2 id="installation">Installation</h2> <p>For Debian and Ubuntu systems, we provide package repositories. Browse the dists folder for one matching your distribution, and add it to <code>/etc/apt/sources.list</code>:</p> <pre><code>$ wget -q -O /etc/apt/trusted.gpg.d/apt.robur.coop.gpg https://apt.robur.coop/gpg.pub $ echo &quot;deb https://apt.robur.coop ubuntu-20.04 main&quot; &gt;&gt; /etc/apt/sources.list # replace ubuntu-20.04 with e.g. debian-11 on a debian buster machine @@ -77,28 +77,28 @@ $ pkg install solo5 albatross </code></pre> <p>Please ensure to have at least version 2.0.0 of albatross installed.</p> <p>For other distributions and systems we do not (yet?) provide binary packages. You can compile and install them using opam (<code>opam install solo5 albatross</code>). 
Get in touch if you're keen on adding some other distribution to our reproducible build infrastructure.</p> -<h2>Conclusion</h2> +<h2 id="conclusion">Conclusion</h2> <p>After five years of development and operating albatross, feel free to get it and try it out. Or read the code, discuss issues and shortcomings with us - either at the issue tracker or via eMail.</p> <p>Please reach out to us (at team AT robur DOT coop) if you have feedback and suggestions. We are a non-profit company, and rely on <a href="https://robur.coop/Donate">donations</a> for doing our work - everyone can contribute.</p> -urn:uuid:1f354218-e8c3-5136-a2ca-c88f3c2878d8Deploying reproducible unikernels with albatross2023-05-16T17:21:47-00:00hannes<p>Re-developing an opam cache from scratch, as a MirageOS unikernel</p> +urn:uuid:1f354218-e8c3-5136-a2ca-c88f3c2878d8Deploying reproducible unikernels with albatross2023-05-16T17:21:47-00:00hannes<p>Re-developing an opam cache from scratch, as a MirageOS unikernel</p> 2022-09-29T13:04:14-00:00<p>We at <a href="https://robur.coop">robur</a> developed <a href="https://git.robur.io/robur/opam-mirror">opam-mirror</a> in the last month and run a public opam mirror at https://opam.robur.coop (updated hourly).</p> -<h1>What is opam and why should I care?</h1> +<h1 id="what-is-opam-and-why-should-i-care">What is opam and why should I care?</h1> <p><a href="https://opam.ocaml.org">Opam</a> is the OCaml package manager (also used by other projects such as <a href="https://coq.inria.fr">coq</a>). It is a source based system: the so-called repository contains the metadata (url to source tarballs, build dependencies, author, homepage, development repository) of all packages. The main repository is hosted on GitHub as <a href="https://github.com/ocaml/opam-repository">ocaml/opam-repository</a>, where authors of OCaml software can contribute (as pull request) their latest releases.</p> <p>When opening a pull request, automated systems attempt to build not only the newly released package on various platforms and OCaml versions, but also all reverse dependencies, and also with dependencies with the lowest allowed version numbers. That's crucial since neither semantic versioning has been adapted across the OCaml ecosystem (which is tricky, for example due to local opens any newly introduced binding will lead to a major version bump), neither do many people add upper bounds of dependencies when releasing a package (nobody is keen to state &quot;my package will not work with <a href="https://erratique.ch/software/cmdliner">cmdliner</a> in version 1.2.0&quot;).</p> <p>So, the opam-repository holds the metadata of lots of OCaml packages (around 4000 at the moment this article was written) with lots of versions (in total 25000) that have been released. It is used by the opam client to figure out which packages to install or upgrade (using a solver that takes the version bounds into consideration).</p> <p>Of course, opam can use other repositories (overlays) or forks thereof. So nothing stops you from using any other opam repository. The url to the source code of each package may be a tarball, or a git repository or other version control systems.</p> <p>The vast majority of opam packages released to the opam-repository include a link to the source tarball and a cryptographic hash of the tarball. This is crucial for security (under the assumption the opam-repository has been downloaded from a trustworthy source - check back later this year for updates on <a href="/Posts/Conex">conex</a>). 
At the moment, there are some weak spots in respect to security: md5 is still allowed, and the hash and the tarball are downloaded from the same server: anyone who is in control of that server can inject arbitrary malicious data. As outlined above, we're working on infrastructure which fixes the latter issue.</p> -<h1>How does the opam client work?</h1> +<h1 id="how-does-the-opam-client-work">How does the opam client work?</h1> <p>Opam, after initialisation, downloads the <code>index.tar.gz</code> from <code>https://opam.ocaml.org/index.tar.gz</code>, and uses this as the local opam universe. An <code>opam install cmdliner</code> will resolve the dependencies, and download all required tarballs. The download is first tried from the cache, and if that failed, the URL in the package file is used. The download from the cache uses the base url, appends the archive-mirror, followed by the hash algorithm, the first two characters of the has of the tarball, and the hex encoded hash of the archive, i.e. for cmdliner 1.1.1 which specifies its sha512: <code>https://opam.ocaml.org/cache/sha512/54/5478ad833da254b5587b3746e3a8493e66e867a081ac0f653a901cc8a7d944f66e4387592215ce25d939be76f281c4785702f54d4a74b1700bc8838a62255c9e</code>.</p> -<h1>How does the opam repository work?</h1> +<h1 id="how-does-the-opam-repository-work">How does the opam repository work?</h1> <p>According to DNS, opam.ocaml.org is a machine at amazon. It likely, apart from the website, uses <code>opam admin index</code> periodically to create the index tarball and the cache. There's an observable delay between a package merge in the opam-repository and when it shows up at opam.ocaml.org. Recently, there was <a href="https://discuss.ocaml.org/t/opam-ocaml-org-is-currently-down-is-that-where-indices-are-kept-still/">a reported downtime</a>.</p> <p>Apart from being a single point of failure, if you're compiling a lot of opam projects (e.g. a continuous integration / continuous build system), it makes sense from a network usage (and thus sustainability perspective) to move the cache closer to where you need the source archives. We're also organising the MirageOS <a href="http://retreat.mirage.io">hack retreats</a> in a northern African country with poor connectivity - so if you gather two dozen camels you better bring your opam repository cache with you to reduce the bandwidth usage (NB: this requires at the moment cooperation of all participants to configure their default opam repository accordingly).</p> -<h1>Re-developing &quot;opam admin create&quot; as MirageOS unikernel</h1> +<h1 id="re-developing-opam-admin-create-as-mirageos-unikernel">Re-developing &quot;opam admin create&quot; as MirageOS unikernel</h1> <p>The need for a local opam cache at our <a href="https://builds.robur.coop">reproducible build infrastructure</a> and the retreats, we decided to develop <a href="https://git.robur.io/robur/opam-mirror">opam-mirror</a> as a <a href="https://mirage.io">MirageOS unikernel</a>. 
Apart from a useful showcase using persistent storage (that won't fit into memory), and having fun while developing it, our aim was to reduce our time spent on system administration (the <code>opam admin index</code> is only one part of the story, it needs a Unix system and a webserver next to it - plus remote access for doing software updates - which has quite some attack surface.</p> <p>Another reason for re-developing the functionality was that the opam code (what opam admin index actually does) is part of the opam source code, which totals to 50_000 lines of code -- looking up whether one or all checksums are verified before adding the tarball to the cache, was rather tricky.</p> <p>In earlier years, we avoided persistent storage and block devices in MirageOS (by embedding it into the source code with <a href="https://github.com/mirage/ocaml-crunch">crunch</a>, or using a remote git repository), but recent development, e.g. of <a href="https://somerandomidiot.com/blog/2022/03/04/chamelon/">chamelon</a> sparked some interest in actually using file systems and figuring out whether MirageOS is ready in that area. A month ago we started the opam-mirror project.</p> <p>Opam-mirror takes a remote repository URL, and downloads all referenced archives. It serves as a cache and opam-repository - and does periodic updates from the remote repository. The idea is to validate all available checksums and store the tarballs only once, and store overlays (as maps) from the other hash algorithms.</p> -<h1>Code development and improvements</h1> +<h1 id="code-development-and-improvements">Code development and improvements</h1> <p>Initially, our plan was to use <a href="https://github.com/mirage/ocaml-git">ocaml-git</a> for pulling the repository, <a href="https://github.com/yomimono/chamelon">chamelon</a> for persistent storage, and <a href="https://github.com/inhabitedtype/httpaf">httpaf</a> as web server. With <a href="https://github.com/mirage/ocaml-tar">ocaml-tar</a> recent support of <a href="https://github.com/mirage/ocaml-tar/pull/88">gzip</a> we should be all set, and done within a few days.</p> <p>There is already a gap in the above plan: which http client to use - in the best case something similar to our <a href="https://github.com/roburio/http-lwt-client">http-lwt-client</a> - in MirageOS: it should support HTTP 1.1 and HTTP 2, TLS (with certificate validation), and using <a href="https://github.com/roburio/happy-eyeballs">happy-eyeballs</a> to seemlessly support both IPv6 and legacy IPv4. Of course it should follow redirect, without that we won't get far in the current Internet.</p> <p>On the path (over the last month), we fixed file descriptor leaks (memory leaks) in <a href="https://github.com/dinosaure/paf-le-chien">paf</a> -- which is used as a runtime for httpaf and h2.</p> @@ -108,20 +108,20 @@ $ pkg install solo5 albatross <p>Since neither git state nor the maps are suitable for tar's append-only semantics, and we didn't want to investigate yet another file system - such as <a href="https://github.com/mirage/ocaml-fat">fat</a> may just work fine, but the code looks slightly bitrot, and the reported issues and non-activity doesn't make this package very trustworthy from our point of view. Instead, we developed <a href="https://github.com/reynir/mirage-block-partition">mirage-block-partition</a> to partition a block device into two. 
Then we just store the maps and the git state at the end - the end of a tar archive is 2 blocks of zeroes, so stuff at the far end aren't considered by any tooling. Extending the tar archive is also possible, only the maps and git state needs to be moved to the end (or recomputed). As file system, we developed <a href="https://git.robur.io/reynir/oneffs">oneffs</a> which stores a single value on the block device.</p> <p>We observed a high memory usage, since each requested archive was first read from the block device into memory, and then sent out. Thanks to Pierre Alains <a href="https://github.com/mirage/mirage-kv/pull/28">recent enhancements</a> of the mirage-kv API, there is a <code>get_partial</code>, that we use to chunk-wise read the archive and send it via HTTP. Now, the memory usage is around 20MB (the git repository and the generated tarball are kept in memory).</p> <p>What is next? Downloading and writing to the tar archive could be done chunk-wise as well; also dumping and restoring the git state is quite CPU intensive, we would like to improve that. Adding the TLS frontend (currently done on our site by our TLS termination proxy <a href="https://github.com/roburio/tlstunnel">tlstunnel</a>) similar to how <a href="https://github.com/roburio/unipi">unipi</a> does it, including let's encrypt provisioning -- should be straightforward (drop us a note if you'd be interesting in that feature).</p> -<h1>Conclusion</h1> +<h1 id="conclusion">Conclusion</h1> <p>To conclude, we managed within a month to develop this opam-mirror cache from scratch. It has a reasonable footprint (CPU and memory-wise), is easy to maintain and easy to update - if you want to use it, we also provide <a href="https://builds.robur.coop/job/opam-mirror">reproducible binaries</a> for solo5-hvt. You can use our opam mirror with <code>opam repository set-url default https://opam.robur.coop</code> (revert to the other with <code>opam repository set-url default https://opam.ocaml.org</code>) or use it as a backup with <code>opam repository add robur --rank 2 https://opam.robur.coop</code>.</p> <p>Please reach out to us (at team AT robur DOT coop) if you have feedback and suggestions. We are a non-profit company, and rely on <a href="https://robur.coop/Donate">donations</a> for doing our work - everyone can contribute.</p> -urn:uuid:0dbd251f-32c7-57bd-8e8f-7392c0833a09Mirroring the opam repository and all tarballs2022-10-11T12:14:07-00:00hannes<p>How to monitor your MirageOS unikernel with albatross and monitoring-experiments</p> -2022-03-08T11:26:31-00:00<h1>Introduction to monitoring</h1> +urn:uuid:0dbd251f-32c7-57bd-8e8f-7392c0833a09Mirroring the opam repository and all tarballs2022-10-11T12:14:07-00:00hannes<p>How to monitor your MirageOS unikernel with albatross and monitoring-experiments</p> +2022-03-08T11:26:31-00:00<h1 id="introduction-to-monitoring">Introduction to monitoring</h1> <p>At <a href="https://robur.coop">robur</a> we use a range of MirageOS unikernels. Recently, we worked on improving the operations story thereof. One part is shipping binaries using our <a href="https://builds.robur.coop">reproducible builds infrastructure</a>. 
Another part is, once deployed we want to observe what is going on.</p> <p>I first got into touch with monitoring - collecting and graphing metrics - with <a href="https://oss.oetiker.ch/mrtg/">MRTG</a> and <a href="https://munin-monitoring.org/">munin</a> - and the simple network management protocol <a href="https://en.wikipedia.org/wiki/Simple_Network_Management_Protocol">SNMP</a>. From the whole system perspective, I find it crucial that the monitoring part of a system does not add pressure. This favours a push-based design, where reporting is done at the disposition of the system.</p> <p>The rise of monitoring where graphs are done dynamically (such as <a href="https://grafana.com/">Grafana</a>) and can be programmed (with a query language) by the operator are very neat, it allows to put metrics in relation after they have been recorded - thus if there's a thesis why something went berserk, you can graph the collected data from the past and prove or disprove the thesis.</p> -<h1>Monitoring a MirageOS unikernel</h1> +<h1 id="monitoring-a-mirageos-unikernel">Monitoring a MirageOS unikernel</h1> <p>From the operational perspective, taking security into account - either the data should be authenticated and integrity-protected, or being transmitted on a private network. We chose the latter, there's a private network interface only for monitoring. Access to that network is only granted to the unikernels and metrics collector.</p> <p>For MirageOS unikernels, we use the <a href="https://github.com/mirage/metrics">metrics</a> library - which design shares the idea of <a href="https://erratique.ch/software/logs">logs</a> that only if there's a reporter registered, work is performed. We use the Influx line protocol via TCP to report via <a href="https://www.influxdata.com/time-series-platform/telegraf/">Telegraf</a> to <a href="https://www.influxdata.com/">InfluxDB</a>. But due to the design of <a href="https://github.com/mirage/metrics">metrics</a>, other reporters can be developed and used -- prometheus, SNMP, your-other-favourite are all possible.</p> <p>Apart from monitoring metrics, we use the same network interface for logging via syslog. Since the logs library separates the log message generation (in the OCaml libraries) from the reporting, we developed <a href="https://github.com/hannesm/logs-syslog">logs-syslog</a>, which registers a log reporter sending each log message to a syslog sink.</p> <p>We developed a small library for metrics reporting of a MirageOS unikernel into the <a href="https://github.com/roburio/monitoring-experiments">monitoring-experiments</a> package - which also allows to dynamically adjust log level and disable or enable metrics sources.</p> -<h2>Required components</h2> +<h2 id="required-components">Required components</h2> <p>Install from your operating system the packages providing telegraf, influxdb, and grafana.</p> <p>Setup telegraf to contain a socket listener:</p> <pre><code>[[inputs.socket_listener]] @@ -131,7 +131,7 @@ $ pkg install solo5 albatross </code></pre> <p>Use a unikernel that reports to Influx (below the heading &quot;Unikernels (with metrics reported to Influx)&quot; on <a href="https://builds.robur.coop">builds.robur.coop</a>) and provide <code>--monitor=192.168.42.14</code> as boot parameter. Conventionally, these unikernels expect a second network interface (on the &quot;management&quot; bridge) where telegraf (and a syslog sink) are running. 
You'll need to pass <code>--net=management</code> and <code>--arg='--management-ipv4=192.168.42.x/24'</code> to albatross-client.</p> <p>Albatross provides a <code>albatross-influx</code> daemon that reports information from the host system about the unikernels to influx. Start it with <code>--influx=192.168.42.14</code>.</p> -<h2>Adding monitoring to your unikernel</h2> +<h2 id="adding-monitoring-to-your-unikernel">Adding monitoring to your unikernel</h2> <p>If you want to extend your own unikernel with metrics, follow along these lines.</p> <p>An example is the <a href="https://github.com/roburio/dns-primary-git">dns-primary-git</a> unikernel, where on the branch <code>future</code> we have a single commit ahead of main that adds monitoring. The difference is in the unikernel configuration and the main entry point. See the <a href="https://builds.robur.coop/job/dns-primary-git-monitoring/build/latest/">binary builts</a> in contrast to the <a href="https://builds.robur.coop/job/dns-primary-git/build/latest/">non-monitoring builts</a>.</p> <p>In config, three new command line arguments are added: <code>--monitor=IP</code>, <code>--monitor-adjust=PORT</code> <code>--syslog=IP</code> and <code>--name=STRING</code>. In addition, the package <code>monitoring-experiments</code> is required. And a second network interface <code>management_stack</code> using the prefix <code>management</code> is required and passed to the unikernel. Since the syslog reporter requires a console (to report when logging fails), also a console is passed to the unikernel. Each reported metrics includes a tag <code>vm=&lt;name&gt;</code> that can be used to distinguish several unikernels reporting to the same InfluxDB.</p> @@ -225,19 +225,19 @@ _stack.V4V6) (_ : sig end) (Management : Mirage_stack.V4V6) = struct <p>With this, your unikernel will report metrics using the influx protocol to 192.168.42.14 on port 8094 (every 10 seconds), and syslog messages via UDP to 192.168.0.10 (port 514). You should see your InfluxDB getting filled and syslog server receiving messages.</p> <p>When you configure <a href="https://grafana.com/docs/grafana/latest/getting-started/getting-started-influxdb/">Grafana to use InfluxDB</a>, you'll be able to see the data in the data sources.</p> <p>Please reach out to us (at team AT robur DOT coop) if you have feedback and suggestions.</p> -urn:uuid:b8f1fa5b-d8dd-5a54-a9e4-064b9dcd053eAll your metrics belong to influx2023-05-16T17:21:47-00:00hannes<p>Finally, we provide reproducible binary MirageOS unikernels together with packages to reproduce them and setup your own builder</p> -2021-06-30T13:13:37-00:00<h2>Introduction</h2> +urn:uuid:b8f1fa5b-d8dd-5a54-a9e4-064b9dcd053eAll your metrics belong to influx2023-05-16T17:21:47-00:00hannes<p>Finally, we provide reproducible binary MirageOS unikernels together with packages to reproduce them and setup your own builder</p> +2021-06-30T13:13:37-00:00<h2 id="introduction">Introduction</h2> <p>MirageOS development focus has been a lot on tooling and the developer experience, but to accomplish <a href="https://robur.coop">our</a> goal to &quot;get MirageOS into production&quot;, we need to lower the barrier. This means for us to release binary unikernels. As described <a href="/Posts/NGI">earlier</a>, we received a grant for &quot;Deploying MirageOS&quot; from <a href="https://pointer.ngi.eu">NGI Pointer</a> to work on the required infrastructure. 
This is joint work with <a href="https://reynir.dk/">Reynir</a>.</p> <p>We provide at <a href="https://builds.robur.coop">builds.robur.coop</a> binary unikernel images (and supplementary software). Doing binary releases of MirageOS unikernels is challenging in two aspects: firstly to be useful for everyone, a binary unikernel should not contain any configuration (such as private keys, certificates, etc.). Secondly, the binaries should be <a href="https://reproducible-builds.org">reproducible</a>. This is crucial for security; everyone can reproduce the exact same binary and verify that our build service did only use the sources. No malware or backdoors included.</p> <p>This post describes how you can deploy MirageOS unikernels without compiling it from source, then dives into the two issues outlined above - configuration and reproducibility - and finally describes how to setup your own reproducible build infrastructure for MirageOS, and how to bootstrap it.</p> -<h2>Deploying MirageOS unikernels from binary</h2> +<h2 id="deploying-mirageos-unikernels-from-binary">Deploying MirageOS unikernels from binary</h2> <p>To execute a MirageOS unikernel, apart from a hypervisor (Xen/KVM/Muen), a tender (responsible for allocating host system resources and passing these to the unikernel) is needed. Using virtio, this is conventionally done with qemu on Linux, but its code size (and attack surface) is huge. For MirageOS, we develop <a href="https://github.com/solo5/solo5">Solo5</a>, a minimal tender. It supports <em>hvt</em> - hardware virtualization (Linux KVM, FreeBSD BHyve, OpenBSD VMM), <em>spt</em> - sandboxed process (a tight seccomp ruleset (only a handful of system calls allowed, no hardware virtualization needed), Linux only). Apart from that, <a href="https://muen.sk"><em>muen</em></a> (a hypervisor developed in Ada), <em>virtio</em> (for some cloud deployments), and <em>xen</em> (PVHv2 or Qubes 4.0) - <a href="https://github.com/Solo5/solo5/blob/master/docs/building.md">read more</a>. We deploy our unikernels as hvt with FreeBSD BHyve as hypervisor.</p> <p>On <a href="https://builds.robur.coop">builds.robur.coop</a>, next to the unikernel images, <a href="https://builds.robur.coop/job/solo5-hvt/"><em>solo5-hvt</em> packages</a> are provided - download the binary and install it. A <a href="https://github.com/NixOS/nixpkgs/tree/master/pkgs/os-specific/solo5">NixOS package</a> is already available - please note that <a href="https://github.com/Solo5/solo5/pull/494">soon</a> packaging will be much easier (and we will work on packages merged into distributions).</p> <p>When the tender is installed, download a unikernel image (e.g. the <a href="https://builds.robur.coop/job/traceroute/build/latest/">traceroute</a> described in <a href="/Posts/Traceroute">an earlier post</a>), and execute it:</p> <pre><code>$ solo5-hvt --net:service=tap0 -- traceroute.hvt --ipv4=10.0.42.2/24 --ipv4-gateway=10.0.42.1 </code></pre> <p>If you plan to orchestrate MirageOS unikernels, you may be interested in <a href="https://github.com/roburio/albatross">albatross</a> - we provide <a href="https://builds.robur.coop/job/albatross/">binary packages as well for albatross</a>. An upcoming post will go into further details of how to setup albatross.</p> -<h2>MirageOS configuration</h2> +<h2 id="mirageos-configuration">MirageOS configuration</h2> <p>A MirageOS unikernel has a specific purpose - composed of OCaml libraries - selected at compile time, which allows to only embed the required pieces. 
This reduces the attack surface drastically. At the same time, to be widely useful to multiple organisations, no configuration data must be embedded into the unikernel.</p> <p>Early MirageOS unikernels such as <a href="https://github.com/mirage/mirage-www">mirage-www</a> embed content (blog posts, ..) and TLS certificates and private keys in the binary (using <a href="https://github.com/mirage/ocaml-crunch">crunch</a>). The <a href="https://github.com/mirage/qubes-mirage-firewall">Qubes firewall</a> (read the <a href="http://roscidus.com/blog/blog/2016/01/01/a-unikernel-firewall-for-qubesos/">blog post by Thomas</a> for more information) used to include the firewall rules until <a href="https://github.com/mirage/qubes-mirage-firewall/releases/tag/v0.6">v0.6</a> in the binary, since <a href="https://github.com/mirage/qubes-mirage-firewall/tree/v0.7">v0.7</a> the rules are read dynamically from QubesDB. This is big usability improvement.</p> <p>We have several possibilities to provide configuration information in MirageOS, on the one hand via boot parameters (can be pre-filled at development time, and further refined at configuration time, but those passed at boot time take precedence). Boot parameters have a length limitation.</p> @@ -245,7 +245,7 @@ _stack.V4V6) (_ : sig end) (Management : Mirage_stack.V4V6) = struct <p>Several other unikernels, such as <a href="https://github.com/Engil/Canopy">this website</a> and <a href="https://github.com/roburio/caldav">our CalDAV server</a>, store the content in a remote git repository. The git URI and credentials (private key seed, host key fingerprint) are passed via boot parameter.</p> <p>Finally, another option that we take advantage of is to introduce a post-link step that rewrites the binary to embed configuration. The tool <a href="https://github.com/dinosaure/caravan">caravan</a> developed by Romain that does this rewrite is used by our <a href="https://github.com/roburio/openvpn/tree/robur/mirage-router">openvpn router</a> (<a href="https://builds.robur.coop/job/openvpn-router/build/latest/">binary</a>).</p> <p>In the future, some configuration information - such as monitoring system, syslog sink, IP addresses - may be done via DHCP on one of the private network interfaces - this would mean that the DHCP server has some global configuration option, and the unikernels no longer require that many boot parameters. Another option we want to investigate is where the tender shares a file as read-only memory-mapped region from the host system to the guest system - but this is tricky considering all targets above (especially virtio and muen).</p> -<h2>Behind the scenes: reproducible builds</h2> +<h2 id="behind-the-scenes-reproducible-builds">Behind the scenes: reproducible builds</h2> <p>To provide a high level of assurance and trust, if you distribute binaries in 2021, you should have a recipe how they can be reproduced in a bit-by-bit identical way. This way, different organisations can run builders and rebuilders, and a user can decide to only use a binary if it has been reproduced by multiple organisations in different jurisdictions using different physical machines - to avoid malware being embedded in the binary.</p> <p>For a reproduction to be successful, you need to collect the checksums of all sources that contributed to the built, together with other things (host system packages, environment variables, etc.). 
Of course, you can record the entire OS and sources as a tarball (or file system snapshot) and distribute that - but this may be suboptimal in terms of bandwidth requirements.</p> <p>With opam, we already have precise tracking which opam packages are used, and since opam 2.1 the <code>opam switch export</code> includes <a href="https://github.com/ocaml/opam/pull/4040">extra-files (patches)</a> and <a href="https://github.com/ocaml/opam/pull/4055">records the VCS version</a>. Based on this functionality, <a href="https://github.com/roburio/orb">orb</a>, an alternative command line application using the opam-client library, can be used to collect (a) the switch export, (b) host system packages, and (c) the environment variables. Only required environment variables are kept, all others are unset while conducting a build. The only required environment variables are <code>PATH</code> (sanitized with an allow list, <code>/bin</code>, <code>/sbin</code>, with <code>/usr</code>, <code>/usr/local</code>, and <code>/opt</code> prefixes), and <code>HOME</code>. To enable Debian's <code>apt</code> to install packages, <code>DEBIAN_FRONTEND</code> is set to <code>noninteractive</code>. The <code>SWITCH_PATH</code> is recorded to allow orb to use the same path during a rebuild. The <code>SOURCE_DATE_EPOCH</code> is set to enable tools that record a timestamp to use a static one. The <code>OS*</code> variables are only used for recording the host OS and version.</p> @@ -278,15 +278,15 @@ _stack.V4V6) (_ : sig end) (Management : Mirage_stack.V4V6) = struct </li> </ul> <p>These tools are themselves reproducible, and built on a daily basis. The infrastructure executing the build jobs installs the most recent packages of orb and builder before conducting a build. This means that our build infrastructure is reproducible as well, and uses the latest code when it is released.</p> -<h2>Conclusion</h2> +<h2 id="conclusion">Conclusion</h2> <p>Thanks to NGI funding we now have reproducible MirageOS binary builds available at <a href="https://builds.robur.coop">builds.robur.coop</a>. The underlying infrastructure is reproducible, available for multiple platforms (Ubuntu using docker, FreeBSD using jails), and can be easily bootstrapped from source (once you have OCaml and opam working, getting builder and orb should be easy). All components are open source software, mostly with permissive licenses.</p> <p>We also have an index over sha-256 checksum of binaries - in the case you find a running unikernel image where you forgot which exact packages were used, you can do a reverse lookup.</p> <p>We are aware that the web interface can be improved (PRs welcome). 
We will also work on the rebuilder setup and run some rebuilds.</p> <p>Please reach out to us (at team AT robur DOT coop) if you have feedback and suggestions.</p> -urn:uuid:331831d8-6093-5dd7-9164-445afff953cbDeploying binary MirageOS unikernels2021-11-15T11:17:23-00:00hannes<p>Elliptic curves (ECDSA/ECDH) are supported in a maintainable and secure way.</p> -2021-04-23T13:33:06-00:00<h2>Introduction</h2> +urn:uuid:331831d8-6093-5dd7-9164-445afff953cbDeploying binary MirageOS unikernels2021-11-15T11:17:23-00:00hannes<p>Elliptic curves (ECDSA/ECDH) are supported in a maintainable and secure way.</p> +2021-04-23T13:33:06-00:00<h2 id="introduction">Introduction</h2> <p>Tl;DR: mirage-crypto-ec, with x509 0.12.0, and tls 0.13.0, provide fast and secure elliptic curve support in OCaml and MirageOS - using the verified <a href="https://github.com/mit-plv/fiat-crypto/">fiat-crypto</a> stack (Coq to OCaml to executable which generates C code that is interfaced by OCaml). In x509, a long standing issue (countryName encoding), and archive (PKCS 12) format is now supported, in addition to EC keys. In tls, ECDH key exchanges are supported, and ECDSA and EdDSA certificates.</p> -<h2>Elliptic curve cryptography</h2> +<h2 id="elliptic-curve-cryptography">Elliptic curve cryptography</h2> <p><a href="https://mirage.io/blog/tls-1-3-mirageos">Since May 2020</a>, our <a href="https://usenix15.nqsb.io">OCaml-TLS</a> stack supports TLS 1.3 (since tls version 0.12.0 on opam).</p> <p>TLS 1.3 requires elliptic curve cryptography - which was not available in <a href="https://github.com/mirage/mirage-crypto">mirage-crypto</a> (the maintained fork of <a href="https://github.com/mirleft/ocaml-nocrypto">nocrypto</a>).</p> <p>There are two major uses of elliptic curves: <a href="https://en.wikipedia.org/wiki/Elliptic-curve_Diffie%E2%80%93Hellman">key exchange (ECDH)</a> for establishing a shared secret over an insecure channel, and <a href="https://en.wikipedia.org/wiki/Elliptic_Curve_Digital_Signature_Algorithm">digital signature (ECDSA)</a> for authentication, integrity, and non-repudiation. (Please note that the construction of digital signatures on Edwards curves (Curve25519, Ed448) is called EdDSA instead of ECDSA.)</p> @@ -294,70 +294,70 @@ _stack.V4V6) (_ : sig end) (Management : Mirage_stack.V4V6) = struct <p>In addition, to use the code in MirageOS, it should be boring C code: no heap allocations, only using a very small amount of C library functions -- the code needs to be compiled in an environment with <a href="https://github.com/mirage/ocaml-freestanding/tree/v0.6.4/nolibc">nolibc</a>.</p> <p>Two projects started in semantics, to solve the issue from the grounds up: <a href="https://github.com/mit-plv/fiat-crypto/">fiat-crypto</a> and <a href="https://github.com/project-everest/hacl-star/">hacl-star</a>: their approach is to use a proof system (<a href="https://coq.inria.fr">Coq</a> or <a href="https://www.fstar-lang.org/">F*</a> to verify that the code executes in constant time, not depending on data input. Both projects provide as output of their proof systems C code.</p> <p>For our initial TLS 1.3 stack, <a href="https://github.com/pascutto/">Clément</a>, <a href="https://github.com/NathanReb/">Nathan</a> and <a href="https://github.com/emillon/">Etienne</a> developed <a href="https://github.com/mirage/fiat">fiat-p256</a> and <a href="https://github.com/mirage/hacl">hacl_x5519</a>. 
Both were one-shot interfaces for a narrow use case (ECDH for NIST P-256 and X25519), worked well for their purpose, and allowed to gather some experience from the development side.</p> -<h3>Changed requirements</h3> +<h3 id="changed-requirements">Changed requirements</h3> <p>Revisiting our cryptography stack with the elliptic curve perspective had several reasons, on the one side the customer project <a href="https://www.nitrokey.com/products/nethsm">NetHSM</a> asked for feasibility of ECDSA/EdDSA for various elliptic curves, on the other side <a href="https://github.com/mirage/ocaml-dns/pull/251">DNSSec</a> uses elliptic curve cryptography (ECDSA), and also <a href="https://www.wireguard.com/">wireguard</a> relies on elliptic curve cryptography. The number of X.509 certificates using elliptic curves is increasing, and we don't want to leave our TLS stack in a state where it can barely talk to a growing number of services on the Internet.</p> <p>Looking at <a href="https://github.com/project-everest/hacl-star/"><em>hacl-star</em></a>, their <a href="https://hacl-star.github.io/Supported.html">support</a> is limited to P-256 and Curve25519, any new curve requires writing F*. Another issue with hacl-star is C code quality: their C code does neither <a href="https://github.com/mirage/hacl/issues/46">compile with older C compilers (found on Oracle Linux 7 / CentOS 7)</a>, nor when enabling all warnings (&gt; 150 are generated). We consider the C compiler as useful resource to figure out undefined behaviour (and other problems), and when shipping C code we ensure that it compiles with <code>-Wall -Wextra -Wpedantic --std=c99 -Werror</code>. The hacl project <a href="https://github.com/mirage/hacl/tree/master/src/kremlin">ships</a> a bunch of header files and helper functions to work on all platforms, which is a clunky <code>ifdef</code> desert. The hacl approach is to generate a whole algorithm solution: from arithmetic primitives, group operations, up to cryptographic protocol - everything included.</p> <p>In contrast, <a href="https://github.com/mit-plv/fiat-crypto/"><em>fiat-crypto</em></a> is a Coq development, which as part of compilation (proof verification) generates executables (via OCaml code extraction from Coq). These executables are used to generate modular arithmetic (as C code) given a curve description. The <a href="https://github.com/mirage/mirage-crypto/tree/main/ec/native">generated C code</a> is highly portable, independent of platform (word size is taken as input) - it only requires a <code>&lt;stdint.h&gt;</code>, and compiles with all warnings enabled (once <a href="https://github.com/mit-plv/fiat-crypto/pull/906">a minor PR</a> got merged). Supporting a new curve is simple: generate the arithmetic code using fiat-crypto with the new curve description. The downside is that group operations and protocol needs to implemented elsewhere (and is not part of the proven code) - gladly this is pretty straightforward to do, especially in high-level languages.</p> -<h3>Working with fiat-crypto</h3> +<h3 id="working-with-fiat-crypto">Working with fiat-crypto</h3> <p>As mentioned, our initial <a href="https://github.com/mirage/fiat">fiat-p256</a> binding provided ECDH for the NIST P-256 curve. Also, BoringSSL uses fiat-crypto for ECDH, and developed the code for group operations and cryptographic protocol on top of it.</p> <p>The work needed was (a) ECDSA support and (b) supporting more curves (let's focus on NIST curves). 
For ECDSA, the algorithm requires modular arithmetics in the field of the group order (in addition to the prime). We generate these primitives with fiat-crypto (named <code>npYYY_AA</code>) - that required <a href="https://github.com/mit-plv/fiat-crypto/commit/e31a36d5f1b20134e67ccc5339d88f0ff3cb0f86">a small fix in decoding hex</a>. Fiat-crypto also provides inversion <a href="https://github.com/mit-plv/fiat-crypto/pull/670">since late October 2020</a>, <a href="https://eprint.iacr.org/2021/549">paper</a> - which allowed to reduce our code base taken from BoringSSL. The ECDSA protocol was easy to implement in OCaml using the generated arithmetics.</p> <p>Addressing the issue of more curves was also easy to achieve, the C code (group operations) are macros that are instantiated for each curve - the OCaml code are functors that are applied with each curve description.</p> <p>Thanks to the test vectors (as structured data) from <a href="https://github.com/google/wycheproof/">wycheproof</a> (and again thanks to Etienne, Nathan, and Clément for their OCaml code decodin them), I feel confident that our elliptic curve code works as desired.</p> <p>What was left is X25519 and Ed25519 - dropping the hacl dependency entirely felt appealing (less C code to maintain from fewer projects). This turned out to require more C code, which we took from BoringSSL. It may be desirable to reduce the imported C code, or to wait until a project on top of fiat-crypto which provides proven cryptographic protocols is in a usable state.</p> <p>To avoid performance degradation, I distilled some <a href="https://github.com/mirage/mirage-crypto/pull/107#issuecomment-799701703">X25519 benchmarks</a>, turns out the fiat-crypto and hacl performance is very similar.</p> -<h3>Achievements</h3> +<h3 id="achievements">Achievements</h3> <p>The new opam package <a href="https://mirage.github.io/mirage-crypto/doc/mirage-crypto-ec/Mirage_crypto_ec/index.html">mirage-crypto-ec</a> is released, which includes the C code generated by fiat-crypto (including <a href="https://github.com/mit-plv/fiat-crypto/pull/670">inversion</a>), <a href="https://github.com/mirage/mirage-crypto/blob/main/ec/native/point_operations.h">point operations</a> from BoringSSL, and some <a href="https://github.com/mirage/mirage-crypto/blob/main/ec/mirage_crypto_ec.ml">OCaml code</a> for invoking these functions and doing bounds checks, and whether points are on the curve. The OCaml code are some functors that take the curve description (consisting of parameters, C function names, byte length of value) and provide Diffie-Hellman (Dh) and digital signature algorithm (Dsa) modules. 
The nonce for ECDSA is computed deterministically, as suggested by <a href="https://tools.ietf.org/html/rfc6979">RFC 6979</a>, to avoid private key leakage.</p> <p>The code has been developed in <a href="https://github.com/mirage/mirage-crypto/pull/101">NIST curves</a>, <a href="https://github.com/mirage/mirage-crypto/pull/106">removing blinding</a> (since we use operations that are verified to be constant-time), <a href="https://github.com/mirage/mirage-crypto/pull/108">added missing length checks</a> (reported by <a href="https://github.com/greg42">Greg</a>), <a href="https://github.com/mirage/mirage-crypto/pull/107">curve25519</a>, <a href="https://github.com/mirage/mirage-crypto/pull/117">a fix for signatures that do not span the entire byte size (discovered while adapting X.509)</a>, <a href="https://github.com/mirage/mirage-crypto/pull/118">fix X25519 when the input has offset &lt;&gt; 0</a>. It works on x86 and arm, both 32 and 64 bit (checked by CI). The development was partially sponsored by Nitrokey.</p> <p>What is left to do, apart from further security reviews, is <a href="https://github.com/mirage/mirage-crypto/issues/109">performance improvements</a>, <a href="https://github.com/mirage/mirage-crypto/issues/112">Ed448/X448 support</a>, and <a href="https://github.com/mirage/mirage-crypto/issues/105">investigating deterministic k for P521</a>. Pull requests are welcome.</p> <p>When you use the code, and encounter any issues, please <a href="https://github.com/mirage/mirage-crypto/issues">report them</a>.</p> -<h2>Layer up - X.509 now with ECDSA / EdDSA and PKCS 12 support, and a long-standing issue fixed</h2> +<h2 id="layer-up---x.509-now-with-ecdsa-eddsa-and-pkcs-12-support-and-a-long-standing-issue-fixed">Layer up - X.509 now with ECDSA / EdDSA and PKCS 12 support, and a long-standing issue fixed</h2> <p>With the sign and verify primitives, the next step is to interoperate with other tools that generate and use these public and private keys. This consists of serialisation to and deserialisation from common data formats (ASN.1 DER and PEM encoding), and support for handling X.509 certificates with elliptic curve keys. Since X.509 0.12.0, it supports EC private and public keys, including certificate validation and issuance.</p> <p>Releasing X.509 also included to go through the issue tracker and attempt to solve the existing issues. This time, the <a href="https://github.com/mirleft/ocaml-x509/issues/69">&quot;country name is encoded as UTF8String, while RFC demands PrintableString&quot;</a> filed more than 5 years ago by <a href="https://github.com/reynir">Reynir</a>, re-reported by <a href="https://github.com/paurkedal">Petter</a> in early 2017, and again by <a href="https://github.com/NightBlues">Vadim</a> in late 2020, <a href="https://github.com/mirleft/ocaml-x509/pull/140">was fixed by Vadim</a>.</p> <p>Another long-standing pull request was support for <a href="https://tools.ietf.org/html/rfc7292">PKCS 12</a>, the archive format for certificate and private key bundles. This has <a href="https://github.com/mirleft/ocaml-x509/pull/114">been developed and merged</a>. PKCS 12 is a widely used and old format (e.g. when importing / exporting cryptographic material in your browser, used by OpenVPN, ...). 
Its specification uses RC2 and 3DES (see <a href="https://unmitigatedrisk.com/?p=654">this nice article</a>), which are the default algorithms used by <code>openssl pkcs12</code>.</p> -<h2>One more layer up - TLS</h2> +<h2 id="one-more-layer-up---tls">One more layer up - TLS</h2> <p>In TLS we are finally able to use ECDSA (and EdDSA) certificates and private keys, this resulted in slightly more complex configuration - the constraints between supported groups, signature algorithms, ciphersuite, and certificates are intricate:</p> <p>The ciphersuite (in TLS before 1.3) specifies which key exchange mechanism to use, but also which signature algorithm to use (RSA/ECDSA). The supported groups client hello extension specifies which elliptic curves are supported by the client. The signature algorithm hello extension (TLS 1.2 and above) specifies the signature algorithm. In the end, at load time the TLS configuration is validated and groups, ciphersuites, and signature algorithms are condensed depending on configured server certificates. At session initiation time, once the client reports what it supports, these parameters are further cut down to eventually find some suitable cryptographic parameters for this session.</p> <p>From the user perspective, earlier the certificate bundle and private key was a pair of <code>X509.Certificate.t list</code> and <code>Mirage_crypto_pk.Rsa.priv</code>, now the second part is a <code>X509.Private_key.t</code> - all provided constructors have been updates (notably <code>X509_lwt.private_of_pems</code> and <code>Tls_mirage.X509.certificate</code>).</p> -<h2>Finally, conduit and mirage</h2> +<h2 id="finally-conduit-and-mirage">Finally, conduit and mirage</h2> <p>Thanks to <a href="https://github.com/dinosaure">Romain</a>, conduit* 4.0.0 was released which supports the modified API of X.509 and TLS. Romain also developed patches and released mirage 3.10.3 which supports the above mentioned work.</p> -<h2>Conclusion</h2> +<h2 id="conclusion">Conclusion</h2> <p>Elliptic curve cryptography is now available in OCaml using verified cryptographic primitives from the fiat-crypto project - <code>opam install mirage-crypto-ec</code>. X.509 since 0.12.0 and TLS since 0.13.0 and MirageOS since 3.10.3 support this new development which gives rise to smaller EC keys. Our old bindings, fiat-p256 and hacl_x25519 have been archived and will no longer be maintained.</p> <p>Thanks to everyone involved on this journey: reporting issues, sponsoring parts of the work, helping with integration, developing initial prototypes, and keep motivating me to continue this until the release is done.</p> <p>In the future, it may be possible to remove zarith and gmp from the dependency chain, and provide EC-only TLS servers and clients for MirageOS. The benefit will be much less C code (libgmp-freestanding.a is 1.5MB in size) in our trusted code base.</p> <p>Another potential project that is very close now is a certificate authority developed in MirageOS - now that EC keys, PKCS 12, revocation lists, ... are implemented.</p> -<h2>Footer</h2> +<h2 id="footer">Footer</h2> <p>If you want to support our work on MirageOS unikernels, please <a href="https://robur.coop/Donate">donate to robur</a>. 
I'm interested in feedback, either via <a href="https://twitter.com/h4nnes">twitter</a>, <a href="https://mastodon.social/@hannesm">hannesm@mastodon.social</a> or via eMail.</p> -urn:uuid:16427713-5da1-50cd-b17c-ca5b5cca431dCryptography updates in OCaml and MirageOS2021-11-19T18:04:52-00:00hannes<p>Home office, MirageOS unikernels, 2020 recap, 2021 tbd</p> -2021-01-25T12:45:54-00:00<h2>Introduction</h2> +urn:uuid:16427713-5da1-50cd-b17c-ca5b5cca431dCryptography updates in OCaml and MirageOS2021-11-19T18:04:52-00:00hannes<p>Home office, MirageOS unikernels, 2020 recap, 2021 tbd</p> +2021-01-25T12:45:54-00:00<h2 id="introduction">Introduction</h2> <p>2020 was an intense year. I hope you're healthy and keep being healthy. I am privileged (as lots of software engineers and academics are) to be able to work from home during the pandemic. Let's not forget people in less privileged situations, and let’s try to give them as much practical, psychological and financial support as we can these days. And as much joy as possible to everyone around :)</p> <p>I cancelled the autumn MirageOS retreat due to the pandemic. Instead I collected donations for our hosts in Marrakech - they were very happy to receive our financial support, since they had a difficult year, since their income is based on tourism. I hope that in autumn 2021 we'll have an on-site retreat again.</p> <p>For 2021, we (at <a href="https://robur.coop">robur</a>) got a grant from the EU (via <a href="https://pointer.ngi.eu">NGI pointer</a>) for &quot;Deploying MirageOS&quot; (more details below), and another grant from <a href="https://ocaml-sf.org">OCaml software foundation</a> for securing the opam supply chain (using <a href="https://github.com/hannesm/conex">conex</a>). Some long-awaited releases for MirageOS libraries, namely a <a href="https://discuss.ocaml.org/t/ann-first-release-of-awa-ssh">ssh implementation</a> and a rewrite of our <a href="https://discuss.ocaml.org/t/ann-release-of-ocaml-git-v3-0-duff-encore-decompress-etc/">git implementation</a> have already been published.</p> <p>With my MirageOS view, 2020 was a pretty successful year, where we managed to add more features, fixed lots of bugs, and paved the road ahead. I want to thank <a href="https://ocamllabs.io/">OCamlLabs</a> for funding work on MirageOS maintenance.</p> -<h2>Recap 2020</h2> +<h2 id="recap-2020">Recap 2020</h2> <p>Here is a very subjective random collection of accomplishments in 2020, where I was involved with some degree.</p> -<h3>NetHSM</h3> +<h3 id="nethsm">NetHSM</h3> <p><a href="https://www.nitrokey.com/products/nethsm">NetHSM</a> is a hardware security module in software. It is a product that uses MirageOS for security, and is based on the <a href="https://muen.sk">muen</a> separation kernel. We at <a href="https://robur.coop">robur</a> were heavily involved in this product. It already has been security audited by an external team. You can pre-order it from Nitrokey.</p> -<h3>TLS 1.3</h3> +<h3 id="tls-1.3">TLS 1.3</h3> <p>Dating back to 2016, at the <a href="https://www.ndss-symposium.org/ndss2016/tron-workshop-programme/">TRON</a> (TLS 1.3 Ready or NOt), we developed a first draft of a 1.3 implementation of <a href="https://github.com/mirleft/ocaml-tls">OCaml-TLS</a>. 
Finally in May 2020 we got our act together, including ECC (ECDH P256 from <a href="https://github.com/mit-plv/fiat-crypto/">fiat-crypto</a>, X25519 from <a href="https://project-everest.github.io/">hacl</a>) and testing with <a href="https://github.com/tlsfuzzer/tlsfuzzer">tlsfuzzer</a>, and release tls 0.12.0 with TLS 1.3 support. Later we added <a href="https://github.com/mirleft/ocaml-tls/pull/414">ECC ciphersuites to TLS version 1.2</a>, implemented <a href="https://github.com/mirleft/ocaml-tls/pull/414">ChaCha20/Poly1305</a>, and fixed an <a href="https://github.com/mirleft/ocaml-tls/pull/424">interoperability issue with Go's implementation</a>.</p> <p><a href="https://github.com/mirage/mirage-crypto">Mirage-crypto</a> provides the underlying cryptographic primitives, initially released in March 2020 as a fork of <a href="https://github.com/mirleft/ocaml-nocrypto">nocrypto</a> -- huge thanks to <a href="https://github.com/pqwy">pqwy</a> for his great work. Mirage-crypto detects <a href="https://github.com/mirage/mirage-crypto/pull/53">CPU features at runtime</a> (thanks to <a href="https://github.com/Julow">Julow</a>) (<a href="https://github.com/mirage/mirage-crypto/pull/96">bugfix for bswap</a>), using constant time modular exponentation (powm_sec) and hardens against Lenstra's CRT attack, supports <a href="https://github.com/mirage/mirage-crypto/pull/39">compilation on Windows</a> (thanks to <a href="https://github.com/avsm">avsm</a>), <a href="https://github.com/mirage/mirage-crypto/pull/90">async entropy harvesting</a> (thanks to <a href="https://github.com/seliopou">seliopou</a>), <a href="https://github.com/mirage/mirage-crypto/pull/65">32 bit support</a>, <a href="https://github.com/mirage/mirage-crypto/pull/72">chacha20/poly1305</a> (thanks to <a href="https://github.com/abeaumont">abeaumont</a>), <a href="https://github.com/mirage/mirage-crypto/pull/84">cross-compilation</a> (thanks to <a href="https://github.com/EduardoRFS">EduardoRFS</a>) and <a href="https://github.com/mirage/mirage-crypto/pull/78">various</a> <a href="https://github.com/mirage/mirage-crypto/pull/81">bug</a> <a href="https://github.com/mirage/mirage-crypto/pull/83">fixes</a>, even <a href="https://github.com/mirage/mirage-crypto/pull/95">memory leak</a> (thanks to <a href="https://github.com/talex5">talex5</a> for reporting several of these issues), and <a href="https://github.com/mirage/mirage-crypto/pull/99">RSA</a> <a href="https://github.com/mirage/mirage-crypto/pull/100">interoperability</a> (thanks to <a href="https://github.com/psafont">psafont</a> for investigation and <a href="https://github.com/mattjbray">mattjbray</a> for reporting). This library feels very mature now - being used by multiple stakeholders, and lots of issues have been fixed in 2020.</p> -<h3>Qubes Firewall</h3> +<h3 id="qubes-firewall">Qubes Firewall</h3> <p>The <a href="https://github.com/mirage/qubes-mirage-firewall/">MirageOS based Qubes firewall</a> is the most widely used MirageOS unikernel. And it got major updates: in May <a href="https://github.com/linse">Steffi</a> <a href="https://groups.google.com/g/qubes-users/c/Xzplmkjwa5Y">announced</a> her and <a href="https://github.com/yomimono">Mindy's</a> work on improving it for Qubes 4.0 - including <a href="https://www.qubes-os.org/doc/vm-interface/#firewall-rules-in-4x">dynamic firewall rules via QubesDB</a>. 
Thanks to <a href="https://prototypefund.de/project/portable-firewall-fuer-qubesos/">prototypefund</a> for sponsoring.</p> <p>In October 2020, we released <a href="https://mirage.io/blog/announcing-mirage-39-release">Mirage 3.9</a> with PVH virtualization mode (thanks to <a href="https://github.com/mato">mato</a>). There's still a <a href="https://github.com/mirage/qubes-mirage-firewall/issues/120">memory leak</a> to be investigated and fixed.</p> -<h3>IPv6</h3> +<h3 id="ipv6">IPv6</h3> <p>In December, with <a href="https://mirage.io/blog/announcing-mirage-310-release">Mirage 3.10</a> we got the IPv6 code up and running. Now MirageOS unikernels have a dual stack available, besides IPv4-only and IPv6-only network stacks. Thanks to <a href="https://github.com/nojb">nojb</a> for the initial code and <a href="https://github.com/MagnusS">MagnusS</a>.</p> <p>Turns out this blog, but also robur services, are now available via IPv6 :)</p> -<h3>Albatross</h3> +<h3 id="albatross">Albatross</h3> <p>Also in December, I pushed an initial release of <a href="https://github.com/roburio/albatross">albatross</a>, a unikernel orchestration system with remote access. <em>Deploy your unikernel via a TLS handshake -- the unikernel image is embedded in the TLS client certificates.</em></p> <p>Thanks to <a href="https://github.com/reynir">reynir</a> for statistics support on Linux and improvements of the systemd service scripts. Also thanks to <a href="https://github.com/cfcs">cfcs</a> for the initial Linux port.</p> -<h3>CA certs</h3> +<h3 id="ca-certs">CA certs</h3> <p>For several years I postponed the problem of how to actually use the operating system trust anchors for OCaml-TLS connections. Thanks to <a href="https://github.com/emillon">emillon</a> for initial code, there are now <a href="https://github.com/mirage/ca-certs">ca-certs</a> and <a href="https://github.com/mirage/ca-certs-nss">ca-certs-nss</a> opam packages (see <a href="https://discuss.ocaml.org/t/ann-ca-certs-and-ca-certs-nss">release announcement</a>) which fills this gap.</p> -<h2>Unikernels</h2> +<h2 id="unikernels">Unikernels</h2> <p>I developed several useful unikernels in 2020, and also pushed <a href="https://mirage.io/wiki/gallery">a unikernel gallery</a> to the Mirage website:</p> -<h3>Traceroute in MirageOS</h3> +<h3 id="traceroute-in-mirageos">Traceroute in MirageOS</h3> <p>I already wrote about <a href="/Posts/Traceroute">traceroute</a> which traces the routing to a given remote host.</p> -<h3>Unipi - static website hosting</h3> +<h3 id="unipi---static-website-hosting">Unipi - static website hosting</h3> <p><a href="https://github.com/roburio/unipi">Unipi</a> is a static site webserver which retrieves the content from a remote git repository. Let's encrypt certificate provisioning and dynamic updates via a webhook to be executed for every push.</p> -<h4>TLSTunnel - TLS demultiplexing</h4> +<h4 id="tlstunnel---tls-demultiplexing">TLSTunnel - TLS demultiplexing</h4> <p>The physical machine this blog and other robur infrastructure runs on has been relocated from Sweden to Germany mid-December. Thanks to UPS! 
Fewer IPv4 addresses are available in the new data center, which motivated me to develop <a href="https://github.com/roburio/tlstunnel">tlstunnel</a>.</p> <p>The new behaviour is as follows (see the <code>monitoring</code> branch):</p> <ul> @@ -372,9 +372,9 @@ _stack.V4V6) (_ : sig end) (Management : Mirage_stack.V4V6) = struct <li>setting up a new service is very straightforward: only the new name needs to be registered with tlstunnel together with the TCP backend, and everything will just work </li> </ul> -<h2>2021</h2> +<h2 id="section">2021</h2> <p>The year started with a release of <a href="https://discuss.ocaml.org/t/ann-first-release-of-awa-ssh">awa</a>, a SSH implementation in OCaml (thanks to <a href="https://github.com/haesbaert">haesbaert</a> for initial code). This was followed by a <a href="https://discuss.ocaml.org/t/ann-release-of-ocaml-git-v3-0-duff-encore-decompress-etc/">git 3.0 release</a> (thanks to <a href="https://github.com/dinosaure">dinosaure</a>).</p> -<h3>Deploying MirageOS - NGI Pointer</h3> +<h3 id="deploying-mirageos---ngi-pointer">Deploying MirageOS - NGI Pointer</h3> <p>For 2021 we at robur received funding from the EU (via <a href="https://pointer.ngi.eu/">NGI pointer</a>) for &quot;Deploying MirageOS&quot;, which boils down into three parts:</p> <ul> <li>reproducible binary releases of MirageOS unikernels, @@ -387,14 +387,14 @@ _stack.V4V6) (_ : sig end) (Management : Mirage_stack.V4V6) = struct <p>Of course this will all be available open source. Please get in touch via eMail (team aT robur dot coop) if you're eager to integrate MirageOS unikernels into your infrastructure.</p> <p>We discovered at an initial meeting with an infrastructure provider that a DNS resolver is of interest - even more now that dnsmasq suffered from <a href="https://www.jsof-tech.com/wp-content/uploads/2021/01/DNSpooq_Technical-Whitepaper.pdf">dnspooq</a>. We are already working on an <a href="https://github.com/mirage/ocaml-dns/pull/251">implementation of DNSSec</a>.</p> <p>MirageOS unikernels are binary reproducible, and <a href="https://github.com/rjbou/orb/pull/1">infrastructure tools are available</a>. We are working hard on a web interface (and REST API - think of it as &quot;Docker Hub for MirageOS unikernels&quot;), and more tooling to verify reproducibility.</p> -<h3>Conex - securing the supply chain</h3> +<h3 id="conex---securing-the-supply-chain">Conex - securing the supply chain</h3> <p>Another funding from the <a href="http://ocaml-sf.org/">OCSF</a> is to continue development and deploy <a href="https://github.com/hannesm/conex">conex</a> - to bring trust into opam-repository. This is a great combination with the reproducible build efforts, and will bring much more trust into retrieving OCaml packages and using MirageOS unikernels.</p> -<h3>MirageOS 4.0</h3> +<h3 id="mirageos-4.0">MirageOS 4.0</h3> <p>Mirage so far still uses ocamlbuild and ocamlfind for compiling the virtual machine binary. But the switch to dune is <a href="https://github.com/mirage/mirage/issues/1195">close</a>, a lot of effort has been done. This will make the developer experience of MirageOS much more smooth, with a per-unikernel monorepo workflow where you can push your changes to the individual libraries.</p> -<h2>Footer</h2> +<h2 id="footer">Footer</h2> <p>If you want to support our work on MirageOS unikernels, please <a href="https://robur.coop/Donate">donate to robur</a>. 
I'm interested in feedback, either via <a href="https://twitter.com/h4nnes">twitter</a>, <a href="https://mastodon.social/@hannesm">hannesm@mastodon.social</a> or via eMail.</p>
-urn:uuid:bc7675a5-47d0-5ce1-970c-01ed07fdf404The road ahead for MirageOS in 20212021-11-19T18:04:52-00:00hannes<p>A MirageOS unikernel which traces the path between itself and a remote host.</p>
-2020-06-24T10:38:10-00:00<h2>Traceroute</h2>
+urn:uuid:bc7675a5-47d0-5ce1-970c-01ed07fdf404The road ahead for MirageOS in 20212021-11-19T18:04:52-00:00hannes<p>A MirageOS unikernel which traces the path between itself and a remote host.</p>
+2020-06-24T10:38:10-00:00<h2 id="traceroute">Traceroute</h2>
 <p>Is a diagnostic utility which displays the route and measures transit delays of packets across an Internet protocol (IP) network.</p>
 <pre><code class="language-bash">$ doas solo5-hvt --net:service=tap0 -- traceroute.hvt --ipv4=10.0.42.2/24 --ipv4-gateway=10.0.42.1 --host=198.167.222.207
@@ -738,16 +738,16 @@ $ solo5-hvt --net:service=tap0 -- traceroute.hvt ...
 <p>If you develop enhancements you'd like to share, please send a pull request to the git repository.</p>
 <p>The motivation for this traceroute unikernel came from a conversation with <a href="https://twitter.com/networkservice">Aaron</a> and <a href="https://github.com/phaer">Paul</a>, who contributed several patches to the IP stack which pass the ttl through.</p>
 <p>If you want to support our work on MirageOS unikernels, please <a href="https://robur.coop/Donate">donate to robur</a>. I'm interested in feedback, either via <a href="https://twitter.com/h4nnes">twitter</a>, <a href="https://mastodon.social/@hannesm">hannesm@mastodon.social</a> or via eMail.</p>
-urn:uuid:ed3036f6-83d2-5e80-b3da-4ccbedb5ae9eTraceroute2021-11-19T18:04:52-00:00hannes<p>A tutorial how to deploy authoritative name servers, let's encrypt, and updating entries from unix services.</p>
-2019-12-23T21:30:53-00:00<h2>Goal</h2>
+urn:uuid:ed3036f6-83d2-5e80-b3da-4ccbedb5ae9eTraceroute2021-11-19T18:04:52-00:00hannes<p>A tutorial how to deploy authoritative name servers, let's encrypt, and updating entries from unix services.</p>
+2019-12-23T21:30:53-00:00<h2 id="goal">Goal</h2>
 <p>Have your domain served by OCaml-DNS authoritative name servers. Data is stored in a git remote, and let's encrypt certificates can be requested via DNS. This software has been deployed for more than two years for several domains such as <code>nqsb.io</code> and <code>robur.coop</code>. This post presents the authoritative server side and the certificate library of the OCaml-DNS implementation formerly known as <a href="/Posts/DNS">µDNS</a>.</p>
-<h2>Prerequisites</h2>
+<h2 id="prerequisites">Prerequisites</h2>
 <p>You need to own a domain, and be able to delegate the name service to your own servers. You also need two spare public IPv4 addresses (in different /24 networks) for your name servers. A git server or remote repository reachable via git over ssh. Servers which support <a href="https://github.com/solo5/solo5">solo5</a> guests, and have the corresponding tender installed. A computer with <a href="https://opam.ocaml.org">opam</a> (&gt;= 2.0.0) installed.</p>
-<h2>Data preparation</h2>
+<h2 id="data-preparation">Data preparation</h2>
 <p>Figure out a way to get the DNS entries of your domain in a <a href="https://tools.ietf.org/html/rfc1034">&quot;master file format&quot;</a>, i.e. 
what bind uses.</p>
 <p>This is a master file for the <code>mirage</code> domain, defining <code>$ORIGIN</code> to avoid typing the domain name after each hostname (use <code>@</code> if you need the domain name only; if you need to refer to a hostname in a different domain, end it with a dot (<code>.</code>), i.e. <code>ns2.foo.com.</code>). The default time to live <code>$TTL</code> is an hour (3600 seconds).
The zone contains a <a href="https://tools.ietf.org/html/rfc1035#section-3.3.13">start of authority (<code>SOA</code>) record</a> containing the nameserver, hostmaster, serial, refresh, retry, expiry, and minimum.
@@ -761,7 +761,7 @@ ns1 A 127.0.0.1
 www A 1.1.1.1
 git-repo&gt; git add mirage &amp;&amp; git commit -m initial &amp;&amp; git push
 </code></pre>
-<h2>Installation</h2>
+<h2 id="installation">Installation</h2>
 <p>On your development machine, you need to install various OCaml packages. You don't need privileged access if common tools (a C compiler, make, libgmp) are already installed. We assume you have <code>opam</code> installed.</p>
 <p>Let's create a fresh <code>switch</code> for the DNS journey:</p>
 <pre><code class="language-shell">$ opam init
@@ -771,7 +771,7 @@ $ opam switch create udns 4.14.1
 $ eval `opam env` #sets some environment variables
 </code></pre>
 <p>The last command sets environment variables in your current shell session; please use the same shell for the following commands (or run <code>eval $(opam env)</code> in another shell and proceed in there - the output of <code>opam switch</code> should point to <code>udns</code>).</p>
-<h3>Validation of our zonefile</h3>
+<h3 id="validation-of-our-zonefile">Validation of our zonefile</h3>
 <p>First let's check that OCaml-DNS can parse our zonefile:</p>
 <pre><code class="language-shell">$ opam install dns-cli #installs ~/.opam/udns/bin/ozone and other binaries
$ ozone &lt;git-repo&gt;/mirage # see ozone --help
successfully checked zone
</code></pre>
 <p>Great. Error reporting is not great, but line numbers are indicated (<code>ozone: zone parse problem at line 3: syntax error</code>), <a href="https://github.com/mirage/ocaml-dns/tree/v4.2.0/zone">lexer and parser are lex/yacc style</a> (PRs welcome).</p>
 <p>FWIW, <code>ozone</code> accepts <code>--old &lt;filename&gt;</code> to check whether an update from the old zone to the new one is fine. This can be used as a <a href="https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks">pre-commit hook</a> in your git repository to avoid bad parse states in your name servers.</p>
-<h3>Getting the primary up</h3>
+<h3 id="getting-the-primary-up">Getting the primary up</h3>
 <p>The next step is to compile the primary server and run it to serve the domain data. Since the git-via-ssh client is not yet released, we need to add a custom opam repository to this switch.</p>
 <pre><code class="language-shell"># get the `mirage` application via opam
$ opam install lwt mirage
@@ -810,7 +810,7 @@ $ dig any mirage @127.0.0.1
 # a DNS packet printout with all records available for mirage
 </code></pre>
 <p>That's exciting: the DNS server is serving answers from a remote git repository.</p>
-<h3>Securing the git access with ssh</h3>
+<h3 id="securing-the-git-access-with-ssh">Securing the git access with ssh</h3>
 <p>Let's authenticate the access by using ssh, so we feel ready to push data there as well. 
The primary-git unikernel already includes an experimental <a href="https://github.com/haesbaert/awa-ssh">ssh client</a>, all we need to do is setting up credentials - in the following a RSA keypair and the server fingerprint.</p> <pre><code class="language-shell"># collect the RSA host key fingerprint $ ssh-keyscan &lt;git-server&gt; &gt; /tmp/git-server-public-keys @@ -831,7 +831,7 @@ $ ./primary-git --authenticator=SHA256:a5kkkuo7MwTBkW+HDt4km0gGPUAX0y1bFcPMXKxBa # started up, you can try the host and dig commands from above if you like </code></pre> <p>To wrap up, we now have a primary authoritative name server for our zone running as Unix process, which clones a remote git repository via ssh on startup and then serves it.</p> -<h3>Authenticated data updates</h3> +<h3 id="authenticated-data-updates">Authenticated data updates</h3> <p>Our remote git repository is the source of truth, if you need to add a DNS entry to the zone, you git pull, edit the zone file, remember to increase the serial in the SOA line, run <code>ozone</code>, git commit and push to the repository.</p> <p>So, the <code>primary-git</code> needs to be informed of git pushes. This requires a communication channel from the git server (or somewhere else, e.g. your laptop) to the DNS server. I prefer in-protocol solutions over adding yet another protocol stack, no way my DNS server will talk HTTP REST.</p> <p>The DNS protocol has an extension for <a href="https://tools.ietf.org/html/rfc1996">notifications of zone changes</a> (as a DNS packet), usually used between the primary and secondary servers. The <code>primary-git</code> accepts these notify requests (i.e. bends the standard slightly), and upon receival pulls the remote git repository, and serves the fresh zone files. Since a git pull may be rather excessive in terms of CPU cycles and network bandwidth, only authenticated notifications are accepted.</p> @@ -857,7 +857,7 @@ $ onotify 127.0.0.1 mirage --key=personal._update.mirage:SHA256:kJJqipaQHQWqZL31 # further changes to the hmac secrets don't require a restart anymore, a notify packet is sufficient :D </code></pre> <p>Ok, this onotify command line could be setup as a git post-commit hook, or run manually after each manual git push.</p> -<h3>Secondary</h3> +<h3 id="secondary">Secondary</h3> <p>It's time to figure out how to integrate the secondary name server. An already existing bind or something else that accepts notifications and issues zone transfers with hmac-sha256 secrets should work out of the box. If you encounter interoperability issues, please get in touch with me.</p> <p>The <code>secondary</code> unikernel is available from another git repository:</p> <pre><code class="language-shell"># get the secondary sources @@ -870,7 +870,7 @@ $ cd dns-secondary $ make $ ./dist/secondary </code></pre> -<h3>IP addresses and routing</h3> +<h3 id="ip-addresses-and-routing">IP addresses and routing</h3> <p>Both primary and secondary serve the data on the DNS port (53) on UDP and TCP. To run both on the same machine and bind them to different IP addresses, we'll use a layer 2 network (ethernet frames) with a host system software switch (bridge interface <code>service</code>), the unikernels as virtual machines (or seccomp-sandboxed) via the <a href="https://github.com/solo5/solo5">solo5</a> backend. Using xen is possible as well. 
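<p>If your host system runs Linux rather than FreeBSD (whose commands are shown below), a rough sketch of the same bridge and tap setup with iproute2 looks like this - interface names are assumptions:</p>
<pre><code class="language-shell"># create the bridge the unikernels will attach to
$ ip link add name service type bridge
$ ip link set service up
# one tap interface per unikernel, enslaved to the bridge
$ ip tuntap add tap0 mode tap
$ ip link set tap0 master service
$ ip link set tap0 up
</code></pre>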
As IP address range we'll use 10.0.42.0/24, and the host system uses the 10.0.42.1.</p> <p>The primary git needs connectivity to the remote git repository, thus on a laptop in a private network we need network address translation (NAT) from the bridge where the unikernels speak to the Internet where the git repository resides.</p> <pre><code class="language-shell"># on FreeBSD: @@ -896,7 +896,7 @@ tap1 # add them to the bridge $ ifconfig service addm tap0 addm tap1 </code></pre> -<h3>Primary and secondary setup</h3> +<h3 id="primary-and-secondary-setup">Primary and secondary setup</h3> <p>Let's update our zone slightly to reflect the IP changes.</p> <pre><code class="language-shell">git-repo&gt; cat mirage $ORIGIN mirage. @@ -948,7 +948,7 @@ $ dig foo.mirage @10.0.42.2 # primary $ dig foo.mirage @10.0.42.3 # secondary got notified and transferred the zone </code></pre> <p>You can also check the behaviour when restarting either of the VMs, whenever the primary is available the zone is synchronised. If the primary is down, the secondary still serves the zone. When the secondary is started while the primary is down, it won't serve any data until the primary is online (the secondary polls periodically, the primary sends notifies on startup).</p> -<h3>Dynamic data updates via DNS, pushed to git</h3> +<h3 id="dynamic-data-updates-via-dns-pushed-to-git">Dynamic data updates via DNS, pushed to git</h3> <p>DNS is a rich protocol, and it also has builtin <a href="https://tools.ietf.org/html/rfc2136">updates</a> that are supported by OCaml DNS, again authenticated with hmac-sha256 and shared secrets. Bind provides the command-line utility <code>nsupdate</code> to send these update packets, a simple <code>oupdate</code> unix utility is available as well (i.e. for integration of dynamic DNS clients). You know the drill, add a shared secret to the primary, git push, notify the primary, and voila we can dynamically in-protocol update. An update received by the primary via this way will trigger a git push to the remote git repository, and notifications to the secondary servers as described above.</p> <pre><code class="language-shell"># being lazy, I reuse the key above $ oupdate 10.0.42.2 personal._update.mirage:SHA256:kJJqipaQHQWqZL31Raar6uPnepGFIdtpjkXot9rv2xg= my-other.mirage 1.2.3.4 @@ -963,7 +963,7 @@ $ dig my-other.mirage @10.0.42.2 $ dig my-other.mirage @10.0.42.3 </code></pre> <p>So we can deploy further <code>oupdate</code> (or <code>nsupdate</code>) clients, distribute hmac secrets, and have the DNS zone updated. The source of truth is still the git repository, where the primary-git pushes to. Merge conflicts and timing of pushes is not yet dealt with. They are unlikely to happen since the primary is notified on pushes and should have up-to-date data in storage. Sorry, I'm unsure about the error semantics, try it yourself.</p> -<h3>Let's encrypt!</h3> +<h3 id="lets-encrypt">Let's encrypt!</h3> <p><a href="https://letsencrypt.org/">Let's encrypt</a> is a certificate authority (CA), which certificate is shipped as trust anchor in web browsers. They specified a protocol for <a href="https://tools.ietf.org/html/draft-ietf-acme-acme-05">automated certificate management environment (ACME)</a>, used to get X509 certificates for your services. In the protocol, a certificate signing request (publickey and hostname) is sent to let's encrypt servers, which sends a challenge to proof the ownership of the hostnames. 
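<p>To make the DNS-based variant described next concrete: once provisioned, the challenge response is simply a TXT record under a well-known label, which can be inspected with dig (hostname and token below are made up):</p>
<pre><code class="language-shell">$ dig +short TXT _acme-challenge.foo.mirage @10.0.42.2
&quot;gfj9Xq...Rg85nM&quot;
</code></pre>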
One widely-used way to solve this challenge is running a web server, another is to serve it as text record from the authoritative DNS server.</p> <p>Since I avoid persistent storage when possible, and also don't want to integrate a HTTP client stack in the primary server, I developed a third unikernel that acts as (hidden) secondary server, performs the tedious HTTP communication with let's encrypt servers, and stores all data in the public DNS zone.</p> <p>For encoding of certificates, the DANE working group specified <a href="https://tools.ietf.org/html/rfc6698.html#section-7.1">TLSA</a> records in DNS. They are quadruples of usage, selector, matching type, and ASN.1 DER-encoded material. We set usage to 3 (domain-issued certificate), matching type to 0 (no hash), and selector to 0 (full certificate) or 255 (private usage) for certificate signing requests. The interaction is as follows:</p> @@ -1008,27 +1008,27 @@ $ ocertify 10.0.42.2 foo.mirage </code></pre> <p>For actual testing with let's encrypt servers you need to have the primary and secondary deployed on your remote hosts, and your domain needs to be delegated to these servers. Good luck. And ensure you have backup your git repository.</p> <p>As fine print, while this tutorial was about the <code>mirage</code> zone, you can stick any number of zones into the git repository. If you use a <code>_keys</code> file (without any domain prefix), you can configure hmac secrets for all zones, i.e. something to use in your let's encrypt unikernel and secondary unikernel. Dynamic addition of zones is supported, just create a new zonefile and notify the primary, the secondary will be notified and pick it up. The primary responds to a signed SOA for the root zone (i.e. requested by the secondary) with the SOA response (not authoritative), and additionally notifications for all domains of the primary.</p> -<h3>Conclusion and thanks</h3> +<h3 id="conclusion-and-thanks">Conclusion and thanks</h3> <p>This tutorial presented how to use the OCaml DNS based unikernels to run authoritative name servers for your domain, using a git repository as the source of truth, dynamic authenticated updates, and let's encrypt certificate issuing.</p> <p>There are further steps to take, such as monitoring (<code>mirage configure --monitoring</code>), which use a second network interface for reporting syslog and metrics to telegraf / influx / grafana. Some DNS features are still missing, most prominently DNSSec.</p> <p>I'd like to thank all people involved in this software stack, without other key components, including <a href="https://github.com/mirage/ocaml-git">git</a>, <a href="https://github.com/mirage/mirage-crypto">mirage-crypto</a>, <a href="https://github.com/mirage/awa-ssh">awa-ssh</a>, <a href="https://github.com/solo5/sol5">solo5</a>, <a href="https://github.com/mirage/mirage">mirage</a>, <a href="https://github.com/mmaker/ocaml-letsencrypt">ocaml-letsencrypt</a>, and more.</p> <p>If you want to support our work on MirageOS unikernels, please <a href="https://robur.coop/Donate">donate to robur</a>. 
I'm interested in feedback, either via <a href="https://twitter.com/h4nnes">twitter</a>, <a href="https://mastodon.social/@hannesm">hannesm@mastodon.social</a> or via eMail.</p> -urn:uuid:e3d4fd9e-e379-5c86-838e-46034ddd435dDeploying authoritative OCaml-DNS servers as MirageOS unikernels2023-03-02T17:20:44-00:00hannes<p>MirageOS unikernels are reproducible :)</p> -2019-12-16T18:29:30-00:00<h2>Reproducible builds summit</h2> +urn:uuid:e3d4fd9e-e379-5c86-838e-46034ddd435dDeploying authoritative OCaml-DNS servers as MirageOS unikernels2023-03-02T17:20:44-00:00hannes<p>MirageOS unikernels are reproducible :)</p> +2019-12-16T18:29:30-00:00<h2 id="reproducible-builds-summit">Reproducible builds summit</h2> <p>I'm just back from the <a href="https://reproducible-builds.org/events/Marrakesh2019/">Reproducible builds summit 2019</a>. In 2018, several people developing <a href="https://ocaml.org">OCaml</a> and <a href="https://opam.ocaml.org">opam</a> and <a href="https://mirage.io">MirageOS</a>, attended <a href="https://reproducible-builds.org/events/paris2018/">the Reproducible builds summit in Paris</a>. The notes from last year on <a href="https://reproducible-builds.org/events/paris2018/report/#Toc11410_331763073">opam reproducibility</a> and <a href="https://reproducible-builds.org/events/paris2018/report/#Toc11681_331763073">MirageOS reproducibility</a> are online. After last years workshop, Raja started developing the opam reproducibilty builder <a href="https://github.com/rjbou/orb">orb</a>, which I extended at and after this years summit. This year before and after the facilitated summit there were hacking days, which allowed further interaction with participants, writing some code and conduct experiments. I had this year again an exciting time at the summit and hacking days, thanks to our hosts, organisers, and all participants.</p> -<h2>Goal</h2> +<h2 id="goal">Goal</h2> <p>Stepping back a bit, first look on the <a href="https://reproducible-builds.org/">goal of reproducible builds</a>: when compiling source code multiple times, the produced binaries should be identical. It should be sufficient if the binaries are behaviourally equal, but this is pretty hard to check. It is much easier to check <strong>bit-wise identity of binaries</strong>, and relaxes the burden on the checker -- checking for reproducibility is reduced to computing the hash of the binaries. Let's stick to the bit-wise identical binary definition, which also means software developers have to avoid non-determinism during compilation in their toolchains, dependent libraries, and developed code.</p> <p>A <a href="https://reproducible-builds.org/docs/test-bench/">checklist</a> of potential things leading to non-determinism has been written up by the reproducible builds project. Examples include recording the build timestamp into the binary, ordering of code and embedded data. The reproducible builds project also developed <a href="https://packages.debian.org/sid/disorderfs">disorderfs</a> for testing reproducibility and <a href="https://diffoscope.org/">diffoscope</a> for comparing binaries with file-dependent readers, falling back to <code>objdump</code> and <code>hexdump</code>. A giant <a href="https://tests.reproducible-builds.org/">test infrastructure</a> with <a href="https://tests.reproducible-builds.org/debian/index_variations.html">lots of variations</a> between the builds, mostly using Debian, has been setup over the years.</p> <p>Reproducibility is a precondition for trustworthy binaries. 
See <a href="https://reproducible-builds.org/#why-does-it-matter">why does it matter</a>. If there are no instructions on how to get from the published sources to the exact binary, why should anyone trust and use the binary which claims to be the result of the sources? It may as well contain different code, including a backdoor, bitcoin mining code, outputting the wrong results for specific inputs, etc. Reproducibility does not imply the software is free of security issues or backdoors, but instead of an audit of the binary - which is tedious and rarely done - the source code can be audited - but the toolchain (compiler, linker, ..) used for compilation needs to be taken into account, i.e. trusted or audited to not be malicious. <strong>I will only ever publish binaries if they are reproducible</strong>.</p>
 <p>My main interest at the summit was to enhance existing tooling and conduct some experiments about the reproducibility of <a href="https://mirage.io">MirageOS unikernels</a> -- a unikernel is a statically linked ELF binary to be run as a Unix process or <a href="https://github.com/solo5/solo5">virtual machine</a>. MirageOS heavily uses <a href="https://ocaml.org">OCaml</a> and <a href="https://opam.ocaml.org">opam</a>, the OCaml package manager, and is an opam package itself. Thus, <em>checking reproducibility of a MirageOS unikernel is the same problem as checking reproducibility of an opam package</em>.</p>
-<h2>Reproducible builds with opam</h2>
+<h2 id="reproducible-builds-with-opam">Reproducible builds with opam</h2>
 <p>Testing for reproducibility is achieved by taking the sources and compiling them twice independently. Afterwards the equality of the resulting binaries can be checked. In trivial projects, the sources are just a single file, or originate from a single tarball. In OCaml, opam uses <a href="https://github.com/ocaml/opam-repository">a community repository</a> where OCaml developers publish their package releases, but can also use custom repositories, and in addition pin packages to git remotes (url including branch or commit), or a directory on the local filesystem. Manually tracking and updating all dependent packages of a MirageOS unikernel is not feasible: our hello-world compiled for hvt (kvm/BHyve) already has 79 opam dependencies, including the OCaml compiler which is distributed as an opam package. The unikernel serving this website depends on 175 opam packages.</p>
 <p>Conceptually there should be two tools: the <em>initial builder</em>, which takes the latest opam packages which do not conflict, and exports the exact package versions used during the build, as well as hashes of binaries. The other tool is a <em>rebuilder</em>, which imports the export, conducts a build, and outputs the hashes of the produced binaries.</p>
 <p>Opam has the concept of a <code>switch</code>, which is an environment where a package set is installed. Switches are independent of each other, and can already be exported and imported. Unfortunately the export is incomplete: if a package includes additional patches as part of the repository -- sometimes needed for fixing releases where the actual author or maintainer of a package responds slowly -- neither these packages nor the patches end up in the export. Also, if a package is pinned to a git branch, the branch appears in the export, but this may change over time by pushing more commits or even force-pushing to that branch. 
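<p>To illustrate why this matters, compare a branch pin with a pin to an exact commit - only the latter identifies the sources unambiguously (package name, URL and hash are made up):</p>
<pre><code class="language-shell"># a branch pin - the sources may change whenever the branch moves
$ opam pin add mypkg git+https://example.org/mypkg.git#main
# a commit pin - the sources are fixed by the commit hash
$ opam pin add mypkg git+https://example.org/mypkg.git#3a7e1f0
</code></pre>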
In <a href="https://github.com/ocaml/opam/pull/4040">PR #4040</a> (under discussion and review), also developed during the summit, I propose to embed the additional files as base64 encoded values in the opam file. To solve the latter issue, I modified the export mechanism to <a href="https://github.com/ocaml/opam/pull/4055">embed the git commit hash (PR #4055)</a>, and avoid sources from a local directory and which do not have a checksum.</p> <p>So the opam export contains the information required to gather the exact same sources and build instructions of the opam packages. If the opam repository would be self-contained (i.e. not depend on any other tools), this would be sufficient. But opam does not run in thin air, it requires some system utilities such as <code>/bin/sh</code>, <code>sed</code>, a GNU make, commonly <code>git</code>, a C compiler, a linker, an assembler. Since opam is available on various operating systems, the plugin <code>depext</code> handles host system dependencies, e.g. if your opam package requires <code>gmp</code> to be installed, this requires slightly different names depending on host system or distribution, take a look at <a href="https://github.com/ocaml/opam-repository/blob/master/packages/conf-gmp/conf-gmp.1/opam">conf-gmp</a>. This also means, opam has rather good information about both the opam dependencies and the host system dependencies for each package. Please note that the host system packages used during compilation are not yet recorded (i.e. which <code>gmp</code> package was installed and used during the build, only that a <code>gmp</code> package has to be installed). The base utilities mentioned above (C compiler, linker, shell) are also not recorded yet.</p> <p>Operating system information available in opam (such as architecture, distribution, version), which in some cases maps to exact base utilities, is recorded in the build-environment, a separate artifact. The environment variable <a href="https://reproducible-builds.org/specs/source-date-epoch/"><code>SOURCE_DATE_EPOCH</code></a>, used for communicating the same timestamp when software is required to record a timestamp into the resulting binary, is also captured in the build environment.</p> <p>Additional environment variables may be captured or used by opam packages to produce different output. To avoid this, both the initial builder and the rebuilder are run with minimal environment variables: only <code>PATH</code> (normalised to a whitelist of <code>/bin</code>, <code>/usr/bin</code>, <code>/usr/local/bin</code> and <code>/opt/bin</code>) and <code>HOME</code> are defined. Missing information at the moment includes CPU features: some libraries (gmp?, nocrypto) emit different code depending on the CPU feature.</p> -<h2>Tooling</h2> +<h2 id="tooling">Tooling</h2> <p><em>TL;DR: A <strong>build</strong> builds an opam package, and outputs <code>.opam-switch</code>, <code>.build-hashes.N</code>, and <code>.build-environment.N</code>. A <strong>rebuild</strong> uses these artifacts as input, builds the package and outputs another <code>.build-hashes.M</code> and <code>.build-environment.M</code>.</em></p> <p>The command-line utility <code>orb</code> can be installed and used:</p> <pre><code class="language-sh">$ opam pin add orb git+https://github.com/hannesm/orb.git#active @@ -1037,7 +1037,7 @@ $ orb build --twice --keep-build-dir --diffoscope &lt;your-favourite-opam-pa <p>It provides two subcommands <code>build</code> and <code>rebuild</code>. 
The <code>build</code> command takes a list of local opam <code>--repos</code> where to take opam packages from (defaults to <code>default</code>), a compiler (either a variant <code>--compiler=4.09.0+flambda</code>, a version <code>--compiler=4.06.0</code>, or a pin to a local development version <code>--compiler-pin=~/ocaml</code>), and optionally an existing switch <code>--use-switch</code>. It creates a switch, builds the packages, and emits the opam export, hashes of all files installed by these packages, and the build environment. The flags <code>--keep-build</code> retains the build products, opam's <code>--keep-build-dir</code> in addition temporary build products and generated source code. If <code>--twice</code> is provided, a rebuild (described next) is executed after the initial build.</p> <p>The <code>rebuild</code> command takes a directory with the opam export and build environment to build the opam package. It first compares the build-environment with the host system, sets the <code>SOURCE_DATE_EPOCH</code> and switch location accordingly and executes the import. Once the build is finished, it compares the hashes of the resulting files with the previous run. On divergence, if build directories were kept in the previous build, and if diffoscope is available and <code>--diffoscope</code> was provided, diffoscope is run on the diverging files. If <code>--keep-build-dir</code> was provided as well, <code>diff -ur</code> can be used to compare the temporary build and sources, including build logs.</p> <p>The builds are run in parallel, as opam does, this parallelism does not lead to different binaries in my experiments.</p> -<h2>Results and discussion</h2> +<h2 id="results-and-discussion">Results and discussion</h2> <p><strong>All MirageOS unikernels I have deployed are reproducible \o/</strong>. Also, several binaries such as <code>orb</code> itself, <code>opam</code>, <code>solo5-hvt</code>, and all <code>albatross</code> utilities are reproducible.</p> <p>The unikernel range from hello world, web servers (e.g. this blog, getting its data on startup via a git clone to memory), authoritative DNS servers, CalDAV server. They vary in size between 79 and 200 opam packages, resulting in 2MB - 16MB big ELF binaries (including debug symbols). The <a href="https://github.com/roburio/reproducible-unikernel-repo">unikernel opam repository</a> contains some reproducible unikernels used for testing. Some work-in-progress enhancements are needed to achieve this:</p> <p>At the moment, the opam package of a MirageOS unikernel is automatically generated by <code>mirage configure</code>, but only used for tracking opam dependencies. I worked on <a href="https://github.com/mirage/mirage/pull/1022">mirage PR #1022</a> to extend the generated opam package with build and install instructions.</p> @@ -1046,7 +1046,7 @@ $ orb build --twice --keep-build-dir --diffoscope &lt;your-favourite-opam-pa <p>In functoria, a tool used to configure MirageOS devices and their dependencies, can emit a list of opam packages which were required to build the unikernel. This uses <code>opam list --required-by --installed --rec &lt;pkgs&gt;</code>, which uses the cudf graph (<a href="https://github.com/mirage/functoria/pull/189#issuecomment-566696426">thanks to Raja for explanation</a>), that is during the rebuild dropping some packages. 
The <a href="https://github.com/mirage/functoria/pull/189">PR #189</a> avoids by not using the <code>--rec</code> argument, but manually computing the fixpoint.</p> <p>Certainly, the choice of environment variables, and whether to vary them (as <a href="https://tests.reproducible-builds.org/debian/index_variations.html">debian does</a>) or to not define them (or normalise) while building, is arguably. Since MirageOS does neither support time zone nor internationalisation, there is no need to prematurely solving this issue. On related note, even with different locale settings, MirageOS unikernels are reproducible apart from an <a href="https://github.com/backtracking/ocamlgraph/pull/90">issue in ocamlgraph #90</a> embedding the output of <a href="https://pubs.opengroup.org/onlinepubs/9699919799/utilities/date.html"><code>date</code></a>, which is different depending on <code>LANG</code> and locale (<code>LC_*</code>) settings.</p> <p>Prior art in reproducible MirageOS unikernels is the <a href="https://github.com/mirage/qubes-mirage-firewall/">mirage-qubes-firewall</a>. Since <a href="https://github.com/mirage/qubes-mirage-firewall/commit/07ff3d61477383860216c69869a1ffee59145e45">early 2017</a> it is reproducible. Their approach is different by building in a docker container with the opam repository pinned to an exact git commit.</p> -<h2>Further work</h2> +<h2 id="further-work">Further work</h2> <p>I only tested a certain subset of opam packages and MirageOS unikernels, mainly on a single machine (my laptop) running FreeBSD, and am happy if others will test reproducibility of their OCaml programs with the tools provided. There could as well be CI machines rebuilding opam packages and reporting results to a central repository. I'm pretty sure there are more reproducibility issues in the opam ecosystem. I developed an <a href="https://github.com/roburio/reproducible-testing-repo">reproducible testing opam repository</a> with opam packages that do not depend on OCaml, mainly for further tooling development. Some tests were also conducted on a Debian system with the same result. The variations, apart from build time, were using a different user, and different locale settings.</p> <p>As mentioned above, more environment, such as the CPU features, and external system packages, should be captured in the build environment.</p> <p>When comparing OCaml libraries, some output files (cmt / cmti / cma / cmxa) are not deterministic, but contain minimal diverge where I was not able to spot the root cause. It would be great to fix this, likely in the OCaml compiler distribution. Since the final result, the binary I'm interested in, is not affected by non-identical intermediate build products, I hope someone (you?) is interested in improving on this side. OCaml bytecode output also seems to be non-deterministic. There is <a href="https://github.com/coq/coq/issues/11229">a discussion on the coq issue tracker</a> which may be related.</p> @@ -1055,10 +1055,10 @@ $ orb build --twice --keep-build-dir --diffoscope &lt;your-favourite-opam-pa <p>What was fun was to compare the unikernel when built on Linux with gcc against a built on FreeBSD with clang and lld - spoiler: they emit debug sections with different dwarf versions, it is pretty big. 
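<p>A quick way to check whether such a divergence is confined to the debug information is to strip the DWARF sections and compare the hashes again - a sketch with placeholder file names:</p>
<pre><code class="language-shell">$ objcopy --strip-debug unikernel-linux.hvt stripped-linux.hvt
$ objcopy --strip-debug unikernel-freebsd.hvt stripped-freebsd.hvt
$ sha256sum stripped-linux.hvt stripped-freebsd.hvt
</code></pre>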
Other fun differences were between OCaml compiler versions: the difference between minor versions (4.08.0 vs 4.08.1) is pretty small (~100kB as human-readable output), while the difference between major version (4.08.1 vs 4.09.0) is rather big (~900kB as human-readable diff).</p> <p>An item on my list for the future is to distribute the opam export, build hashes and build environment artifacts in a authenticated way. I want to integrate this as <a href="https://in-toto.io/">in-toto</a> style into <a href="https://github.com/hannesm/conex">conex</a>, my not-yet-deployed implementation of <a href="https://theupdateframework.github.io/">tuf</a> for opam that needs further development and a test installation, hopefully in 2020.</p> <p>If you want to support our work on MirageOS unikernels, please <a href="https://robur.coop/Donate">donate to robur</a>. I'm interested in feedback, either via <a href="https://twitter.com/h4nnes">twitter</a>, <a href="https://mastodon.social/@hannesm">hannesm@mastodon.social</a> or via eMail.</p> -urn:uuid:09922d6b-56c8-595d-8086-5aef9656cbc4Reproducible MirageOS unikernel builds2021-11-19T18:04:52-00:00hannes<p>Five years since ocaml-x509 initial release, it has been reworked and used more widely</p> -2019-08-15T11:21:30-00:00<h2>Cryptographic material</h2> +urn:uuid:09922d6b-56c8-595d-8086-5aef9656cbc4Reproducible MirageOS unikernel builds2021-11-19T18:04:52-00:00hannes<p>Five years since ocaml-x509 initial release, it has been reworked and used more widely</p> +2019-08-15T11:21:30-00:00<h2 id="cryptographic-material">Cryptographic material</h2> <p>Once a private and public key pair is generated (doesn't matter whether it is plain RSA, DSA, ECC on any curve), this is fine from a scientific point of view, and can already be used for authenticating and encrypting. From a practical point of view, the public parts need to be exchanged and verified (usually a fingerprint or hash thereof). This leads to the struggle how to encode this cryptographic material, and how to embed an identity (or multiple), capabilities, and other information into it. <a href="https://en.wikipedia.org/wiki/X.509">X.509</a> is a standard to solve this encoding and embedding, and provides more functionality, such as establishing chains of trust and revocation of invalidated or compromised material. X.509 uses certificates, which contain the public key, and additional information (in a extensible key-value store), and are signed by an issuer, either the private key corresponding to the public key - a so-called self-signed certificate - or by a different private key, an authority one step up the chain. A rather long, but very good introduction to certificates by Mike Malone is <a href="https://smallstep.com/blog/everything-pki.html">available here</a>.</p> -<h2>OCaml ecosystem evolving</h2> +<h2 id="ocaml-ecosystem-evolving">OCaml ecosystem evolving</h2> <p>More than 5 years ago David Kaloper and I <a href="https://mirage.io/blog/introducing-x509">released the initial ocaml-x509</a> package as part of our <a href="https://nqsb.io">TLS stack</a>, which contained code for decoding and encoding certificates, and path validation of a certificate chain (as described in <a href="https://tools.ietf.org/html/rfc6125">RFC 5280</a>). 
The validation logic and the decoder/encoder, based on the ASN.1 grammar specified in the RFC and implemented using David's <a href="https://github.com/mirleft/ocaml-asn1-combinators">asn1-combinators</a> library, changed much over time.</p>
 <p>The OCaml ecosystem evolved over the years, which led to some changes:</p>
 <ul>
@@ -1085,27 +1085,27 @@ $ orb build --twice --keep-build-dir --diffoscope &lt;your-favourite-opam-pa
 <li>Usage of the <a href="https://github.com/mirage/alcotest">alcotest</a> unit testing framework (instead of oUnit).
 </li>
 </ul>
-<h2>More use cases for X.509</h2>
+<h2 id="more-use-cases-for-x.509">More use cases for X.509</h2>
 <p>Initially, we designed and used ocaml-x509 for providing TLS server endpoints and validation in TLS clients - mostly on the public web, where each operating system ships a set of ~100 trust anchors to validate any web server certificate against. But once you have an X.509 implementation, every authentication problem can be solved by applying it.</p>
-<h3>Authentication with path building</h3>
+<h3 id="authentication-with-path-building">Authentication with path building</h3>
 <p>It turns out that the trust anchor sets are not equal across operating systems and versions, thus some web servers serve sets, instead of chains, of certificates - as described in <a href="https://tools.ietf.org/html/rfc4158">RFC 4158</a>, where the client implementation needs to build valid paths and accept a connection if any path can be validated. The path building was initially (in 0.5.2) slightly wrong, but fixed quickly in <a href="https://github.com/mirleft/ocaml-x509/commit/1a1476308d24bdcc49d45c4cd9ef539ca57461d2">0.5.3</a>.</p>
-<h3>Fingerprint authentication</h3>
+<h3 id="fingerprint-authentication">Fingerprint authentication</h3>
 <p>The chain of trust validation is useful for the open web, where you as a software developer don't know which remote endpoints your software will ever connect to - as long as the remote has a certificate signed (via intermediates) by any of the trust anchors. In the early days, before <a href="https://letsencrypt.org/">let's encrypt</a> was launched and embedded as trust anchors (or cross-signed by already deployed trust anchors), operators needed to pay for a certificate - a business model where some CAs did not bother to check the authenticity of a certificate signing request, which led to random people owning valid certificates for microsoft.com or google.com.</p>
 <p>Instead of using the set of trust anchors, the fingerprint of the server certificate, or preferably the fingerprint of the public key of the certificate, can be used for authentication, as optionally done for some years in <a href="https://github.com/hannesm/jackline/commit/a1e6f3159be1e45e6b690845e1b29366c41239a2">jackline</a>, an XMPP client. Support for this certificate / public key pinning was added in x509 0.2.1 / 0.5.0.</p>
-<h3>Certificate signing requests</h3>
+<h3 id="certificate-signing-requests">Certificate signing requests</h3>
 <p>Until x509 0.4.0 there was no support for generating certificate signing requests (CSR), as defined in PKCS 10, which are self-signed blobs containing a public key, an identity, and possibly extensions. Such a CSR is sent to the certificate authority, and after validation of ownership of the identity and paying a fee, the certificate is issued. 
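<p>For comparison, creating and inspecting such a PKCS 10 blob with openssl looks roughly like this (key and subject are placeholders; the ocaml-x509 API and the certify tool mentioned below offer the same operations):</p>
<pre><code class="language-shell"># create a fresh RSA key and a CSR for a single hostname
$ openssl req -new -newkey rsa:2048 -nodes -keyout example.key -subj &quot;/CN=example.org&quot; -out example.csr
# show the identity, public key and requested extensions
$ openssl req -in example.csr -noout -text
</code></pre>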
Let's encrypt specified the ACME protocol which automates the proof of ownership: they provide a HTTP API for requesting a challenge, providing the response (the proof of ownership) via HTTP or DNS, and then allow the submission of a CSR and downloading the signed certificate. The ocaml-x509 library provides operations for creating such a CSR, and also for signing a CSR to generate a certificate.</p> <p>Mindy developed the command-line utility <a href="https://github.com/yomimono/ocaml-certify/">certify</a> which uses these operations from the ocaml-x509 library and acts as a swiss-army knife purely in OCaml for these required operations.</p> <p>Maker developed a <a href="https://github.com/mmaker/ocaml-letsencrypt">let's encrypt library</a> which implements the above mentioned ACME protocol for provisioning CSR to certificates, also using our ocaml-x509 library.</p> <p>To complete the required certificate authority functionality, in x509 0.6.0 certificate revocation lists, both validation and signing, was implemented.</p> -<h3>Deploying unikernels</h3> +<h3 id="deploying-unikernels">Deploying unikernels</h3> <p>As <a href="/Posts/VMM">described in another post</a>, I developed <a href="https://github.com/hannesm/albatross">albatross</a>, an orchestration system for MirageOS unikernels. This uses ASN.1 for internal socket communication and allows remote management via a TLS connection which is mutually authenticated with a X.509 client certificate. To encrypt the X.509 client certificate, first a TLS handshake where the server authenticates itself to the client is established, and over that connection another TLS handshake is established where the client certificate is requested. Note that this mechanism can be dropped with TLS 1.3, since there the certificates are transmitted over an already encrypted channel.</p> <p>The client certificate already contains the command to execute remotely - as a custom extension, being it &quot;show me the console output&quot;, or &quot;destroy the unikernel with name = YYY&quot;, or &quot;deploy the included unikernel image&quot;. The advantage is that the commands are already authenticated, and there is no need for developing an ad-hoc protocol on top of the TLS session. The resource limits, assigned by the authority, are also part of the certificate chain - i.e. the number of unikernels, access to network bridges, available accumulated memory, accumulated size for block devices, are constrained by the certificate chain presented to the server, and currently running unikernels. The names of the chain are used for access control - if Alice and Bob have intermediate certificates from the same CA, neither Alice may manage Bob's unikernels, nor Bob may manage Alice's unikernels. I'm using albatross since 2.5 years in production on two physical machines with ~20 unikernels total (multiple users, multiple administrative domains), and it works stable and is much nicer to deal with than <code>scp</code> and custom hacked shell scripts.</p> -<h2>Why 0.7?</h2> +<h2 id="why-0.7">Why 0.7?</h2> <p>There are still some missing pieces in our ocaml-x509 implementation, namely modern ECC certificates (depending on elliptic curve primitives not yet available in OCaml), RSA-PSS signing (should be straightforward), PKCS 12 (there is a <a href="https://github.com/mirleft/ocaml-x509/pull/114">pull request</a>, but this should wait until asn1-combinators supports the <code>ANY defined BY</code> construct to cleanup the code), ... 
Once these features are supported, the library should likely be renamed to PKCS, since it supports more than X.509, and be released as 1.0.</p>
 <p>The 0.7 release series moved a lot of modules and function names around, thus it is a major breaking release. By using a map instead of lists for extensions, GeneralName, ..., the API was further revised - invariants that each extension key (an ASN.1 object identifier) may occur at most once are now enforced. By not leaking exceptions through the public interface, the API is easier to use safely - see <a href="https://github.com/mmaker/ocaml-letsencrypt/commit/dc53518f46310f384c9526b1d96a8e8f815a09c7">let's encrypt</a>, <a href="https://git.robur.io/?p=openvpn.git;a=commitdiff;h=929c53116c1438ba1214f53df7506d32da566ccc">openvpn</a>, <a href="https://github.com/yomimono/ocaml-certify/pull/17">certify</a>, <a href="https://github.com/mirleft/ocaml-tls/pull/394">tls</a>, <a href="https://github.com/mirage/capnp-rpc/pull/158">capnp</a>, <a href="https://github.com/hannesm/albatross/commit/50ed6a8d1ead169b3e322aaccb469e870ad72acc">albatross</a>.</p>
 <p>I intended in 0.7.0 to have much more precise types, especially for the SubjectAlternativeName (SAN) extension that uses a GeneralName, but it turns out that GeneralName is also used for NameConstraints (NC) in a different way -- an IP in SAN is an IPv4 or IPv6 address, in NC it is the IP/netmask; DNS is a domain name in SAN, in NC it is a name starting with a leading dot (i.e. &quot;.example.com&quot;), which is not a valid domain name. In 0.7.1, based on a bug report, I had to revert these variants and use less precise types.</p>
-<h2>Conclusion</h2>
+<h2 id="conclusion">Conclusion</h2>
 <p>The work on X.509 was sponsored by <a href="http://ocamllabs.io/">OCaml Labs</a>. You can support our work at robur with a <a href="https://robur.io/Donate">donation</a>, which we will use to work on our OCaml and MirageOS projects. You can also reach out to us to realize commercial products.</p>
 <p>I'm interested in feedback, either via <strike><a href="https://twitter.com/h4nnes">twitter</a></strike> <a href="https://mastodon.social/@hannesm">hannesm@mastodon.social</a> or via eMail.</p>
 urn:uuid:f2cf2a6a-8eef-5c2c-be03-d81a1bf0f066X509 0.72021-11-19T18:04:52-00:00hannes
\ No newline at end of file