
About

Written by hannes
Classified under: overview, myself, background
Published: 2016-04-01 (last updated: 2021-11-19)

What is a "full stack engineer"?

Analysing the term literally, we should start with silicon and some electrons, maybe a soldering iron, and build everything all the way up to our favourite communication system.

While I know how to solder, I don't plan to write about hardware here. I'll assume that off-the-shelf hardware (arm/amd64) is available and trustworthy. Read the Intel x86 considered harmful paper in case you're interested in the trustworthiness of hardware.

My current obsession is to enable people to take back control over their data: simple to set up, secure, decentralised infrastructure. We're not there yet, which also means I have plenty of projects :).


I will write about my projects, which cover topics on various software layers.


Myself

I'm Hannes Mehnert, a hacker (in the original sense of the word), 3X years old. In my spare time, I'm not only a hacker, but also a barista. I like to travel and to repair my recumbent bicycle.

Back in 199X, my family bought a PC. It came with MS-DOS installed; I also remember Windows 3.1 (likely on a later computer). This didn't really hook me on computers, but over the years friends and I started to modify some computer games (e.g. the text of Civilization). I first encountered programming in high school around 1995: Borland's Turbo Pascal (which chased me for several years).

Fast forwarding a bit, I learned about the operating system Linux (starting with SUSE 6.4) and got hooked on UNIX (by providing basic network services such as NFS/YP/Samba). In 2000 I joined the Chaos Computer Club. Over the years I learned various things, from Linux kernel modifications to Perl, PHP, and basic networking and security. I have used FreeBSD since 4.5, and run FreeBSD-CURRENT on my laptop. I helped to reverse engineer and analyse the security of a voting computer in the Netherlands, and of some art installations in Berlin and Paris. At several annual Chaos Communication Congresses I helped to set up the network (backbone, access layer, wireless, network services such as DHCP/DNS), struggling with Cisco hardware from their demo pool, and, amongst others, with HP, Force10, Lucent, and Juniper equipment.

In the early 200X I started to program in Dylan, a LISP dialect (dynamic, multiple inheritance, object-oriented), which even resulted in a TCP/IP implementation, including a wireshark-like GTK-based user interface with a shell similar to IOS for configuring the stack.

I got excited about programming languages and type theory (thanks to Types and Programming Languages, an excellent book); a key event for me was the international conference on functional programming (ICFP). I wondered what a gradually typed Dylan would look like, which led to my master's thesis. Gradual typing is the idea of evolving untyped programs into typed ones, where runtime type errors must be in the dynamic part. To me, this sounded like a great idea: start with some random code, and add types later. My result was not too convincing (too slow, unsound type system). Another problem with Dylan is that the community is very small, without sufficient time and energy to maintain the self-hosted compiler(s) and the graphical IDE.

During my studies I met Peter Sestoft. After half a year off in New Zealand (working on formalising some type systems), I did a PhD in the ambitious research project "Tools and methods for scalable software verification", where we mechanised proofs of the functional correctness of imperative code (PIs: Peter and Lars Birkedal). The idea was great and the project was fun, but we ended up with 3000 lines of proof script for a 100-line Java program. The Java program was taken off the shelf, refactored several times, and most of its shared mutable state was removed. The proof script was written in Coq, using our higher-order separation logic.

I concluded two things: formal verification is hard, and it is usually not applicable to off-the-shelf software. Since we have to rewrite the software anyway, why not do it in a declarative way?

Some artefacts from that time are still around: an Eclipse plugin for Coq, and I also started (with David) the idris-mode for emacs. Idris is a dependently typed programming language (you can express richer types), actively being researched (I would not consider it production ready yet; it needs more work on a faster runtime and on libraries).

After I finished my PhD, I decided to slack off for some time to make decent espresso. I ended up spending the winter (beginning of 2014) in Mirleft, Morocco. A good friend of mine pointed me to MirageOS, a clean-slate operating system written in the high-level language OCaml. I got hooked pretty fast; after some experience with LISP machines, I had imagined a modern OS written in a single functional programming language.

From summer 2014 until the end of 2017 I worked as a postdoctoral researcher at the University of Cambridge (in the rigorous engineering of mainstream systems project) with Peter Sewell. I primarily worked on TLS, MirageOS, opam signing, and network semantics. In 2018 I relocated back to Berlin and have been working on robur.

MirageOS had various bits and pieces in place, including infrastructure for building and testing (and a neat self-hosted website). A big gap was security: no access control, no secure sockets layer, nothing. This will be the topic of another post.

OCaml is used academically and commercially, compiles to native code (arm/amd64/likely more), is fast enough ("Reassuring, because our blanket performance statement 'OCaml delivers at least 50% of the performance of a decent C compiler' is not invalidated :-)" Xavier Leroy), and the community is sufficiently large.


Me on the intertubes

You can find me on twitter and on GitHub.


The data of this blog is stored in a git repository.


Re-engineering ARP

Written by hannes
Classified under: mirageos, protocol
Published: 2016-07-12 (last updated: 2021-11-19)

What is ARP?

ARP is the Address Resolution Protocol, widely used in legacy IP networks (which support only IPv4). It is responsible for translating an IPv4 address into an Ethernet address. The protocol is actually more general, abstracting over protocol and hardware addresses. It is basically DNS (the domain name system) on a different layer.

ARP is link-local: ARP frames are not routed into other networks, they all stay in the same broadcast domain. Thus there is no need for a hop limit (time-to-live). A reverse lookup mechanism (hardware address to protocol address) is also available, named reverse ARP ;).

I will focus here on ARP as it is widely used: to translate IPv4 addresses into Ethernet addresses. There are two operations in ARP: request and reply. A request is usually broadcast to all hosts (by setting the destination to the broadcast Ethernet address, ff:ff:ff:ff:ff:ff), while a reply is sent via unicast (to the host which requested that information).

The frame format is pretty straightforward: 2 bytes hardware address type, 2 bytes protocol type, 1 byte length for each of the two address types, 2 bytes operation, followed by the source addresses (hardware and protocol) and the target addresses. In total 28 bytes, considering 48-bit Ethernet addresses and 32-bit IPv4 addresses.
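As a hedged sketch of that wire format (using only the OCaml standard library; the real MirageOS code uses Cstruct, and all names here are made up for illustration), decoding such a 28-byte Ethernet/IPv4 ARP payload could look like this:

```ocaml
(* Illustrative decoder for the fixed 28-byte ARP payload described
   above; these names are not the actual mirage-arp API. *)

let get_uint16 buf off =
  (Char.code buf.[off] lsl 8) lor Char.code buf.[off + 1]

type arp = {
  htype : int;                  (* 1 = Ethernet *)
  ptype : int;                  (* 0x0800 = IPv4 *)
  op : int;                     (* 1 = request, 2 = reply *)
  sha : string; spa : string;   (* source hardware / protocol address *)
  tha : string; tpa : string;   (* target hardware / protocol address *)
}

let decode buf =
  if String.length buf < 28 then None
  else
    let hlen = Char.code buf.[4] and plen = Char.code buf.[5] in
    (* only 48-bit Ethernet and 32-bit IPv4 addresses handled here *)
    if hlen <> 6 || plen <> 4 then None
    else Some {
      htype = get_uint16 buf 0;
      ptype = get_uint16 buf 2;
      op = get_uint16 buf 6;
      sha = String.sub buf 8 6;  spa = String.sub buf 14 4;
      tha = String.sub buf 18 6; tpa = String.sub buf 24 4;
    }
```

Encoding is the same layout in reverse; anything shorter than 28 bytes, or with unexpected address lengths, is rejected.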


It was initially specified in RFC 826, but reading through RFC 1122 (requirements for Internet Hosts - Communication layer), and maybe the newer RFC 5227 (IPv4 address conflict detection) does not hurt.


On UNIX systems, you can investigate your arp table, also called arp cache, using the arp command line utility.


Protocol logic

Let us look at what our ARP handler actually needs to do: translating IPv4 addresses to Ethernet addresses. But where does it learn new information?

First of all, our ARP handler needs to know its own IPv4 address and its Ethernet address. It will even broadcast them on startup, in a so-called gratuitous ARP. The purpose of this is to inform all other hosts on the same network that we are here now. And if another host, let's name it barf, has the same IPv4 address, some sort of conflict resolution needs to happen (otherwise all hosts on the network are confused about whether to send packets to us or to barf).

Once initialisation is over, our ARP handler needs to wait for ARP requests from other hosts on the network and, if they are addressed to our IPv4 address, issue a reply. The other event which might happen is that a user wants to send an IPv4 packet to another host on the network. In this case, we either already have the Ethernet address in our cache, or we need to send an ARP request to the network and wait for a reply. Since packets might get lost, we actually need to retry sending ARP requests until a limit is reached. To keep the cache at a reasonable size, old entries should be dropped if unused. Also, the Ethernet address of a host may change, due to hardware replacement or failover.


That's it. Pretty straightforward.


Design

Back in 2008, together with Andreas Bogk, we just used a hash table and installed expiration and retransmission timers when needed. Certainly, timers sometimes needed to be cancelled, and testing the code was cumbersome. It was only 250 lines of Dylan code plus some wire format definitions.

Nowadays, after some years of doing formal verification and typed functional programming, I try to have effects, including mutable state, isolated and explicitly annotated. The code should not contain surprises, but be straightforward to understand. The core protocol logic should not be convoluted with side effects; a small wrapper around it should handle them. Once this is achieved, testing is straightforward. If the fashion of the asynchronous task library changes (likely with OCaml multicore), the core logic can be reused. It can also be repurposed to run as a test oracle. You can read more marketing of this style in our Usenix Security paper.


My proposed style and hash tables are not good friends, since hash tables in OCaml are imperative structures. Instead, a Map (documentation) is a functional data structure for associating keys with values. Its underlying data structure is a balanced binary tree.
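For illustration, a minimal sketch of such a functional cache (keys and values are plain strings here, not the real mirage-arp types); every operation returns a new map, and the old one stays valid:

```ocaml
(* A purely functional ARP-style cache built on the stdlib Map. *)
module M = Map.Make (String)

let cache = M.empty
let cache' = M.add "10.0.0.1" "de:ad:be:ef:00:01" cache

(* Lookup returns an option instead of raising Not_found. *)
let lookup ip cache = M.find_opt ip cache
```

Because `M.add` does not mutate, the core logic can hand out old and new states freely, which is exactly what makes the protocol core easy to test.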


Our ARP handler certainly has some state, at least its IPv4 address, its Ethernet address, and the map containing entries.


We have to deal with the various effects mentioned earlier:

  • Network: we provide a function taking a state and a packet, transforming them into a successor state, potential output on the network, and potentially tasks to wake up which are awaiting the MAC address.
  • Timer: we rely on an external periodic event calling our function tick, which transforms a state into a successor state, a list of ARP requests to be sent out (retransmissions), and a list of tasks to be informed that a timeout occurred.
  • Query: a query for an IPv4 address using some state leads to a successor state, and either an immediate answer with the Ethernet address, an ARP request to be sent while we wait for an answer, or just waiting in case another task has already requested that IPv4 address. Since we don't want to convolute the protocol core with tasks, we let the effectful layer decide how to achieve that by abstracting over some alpha to store, and requiring a merge : alpha option -> alpha function.
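To make the shape of this concrete, here is a hypothetical signature for such a pure core; the names and types are illustrative and do not match the actual mirage-arp API:

```ocaml
(* Result of a query: answer now, send a request and wait, or just wait. *)
type ('a, 'packet) qres =
  | Mac of string                 (* cache hit: answer immediately *)
  | RequestWait of 'packet * 'a   (* send request, register a waiter *)
  | Wait of 'a                    (* a request is already outstanding *)

module type ARP_CORE = sig
  type 'a t                       (* protocol state, 'a = waiting tasks *)
  type packet

  (* network input: successor state, optional reply, tasks to wake up *)
  val input : 'a t -> packet -> 'a t * packet option * 'a list

  (* periodic tick: retransmissions to send, tasks whose request timed out *)
  val tick : 'a t -> 'a t * packet list * 'a list

  (* query; merge lets the effectful layer combine waiters for one address *)
  val query :
    'a t -> string -> merge:('a option -> 'a) -> 'a t * ('a, packet) qres
end
```

The effectful wrapper instantiates `'a` with whatever its task library uses (a condition variable, a list of Lwt wakeners, ...), while the core never touches it.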

Excursion: security

ARP is a link-local protocol, so attackers need access to the same link layer: either a cable on the same switch or hub, or the same wireless network (if you're into modern technology).

A very common attack vector for protocols is the so-called person in the middle attack, where the attacker sits between you and the remote host. An attacker can achieve this using ARP spoofing: if they can convince your computer that the attacker is the gateway, your computer will send all packets to the attacker, who either forwards them to the remote host, or modifies them, or drops them.

ARP does not employ any security mechanism; it is more a question of which answer is received first (depending on the implementation). A common countermeasure is to statically fill the cache with the gateway entry. This only needs updates if the gateway is replaced, or gets a new network card.

Denial of service attacks are also possible using ARP: if the implementation preserves all replies, the cache might expand immensely. This sometimes happens in switch hardware, which has a limited cache; once it is full, the switch goes into hub mode, broadcasting all frames on all ports. This enables an attacker to passively sniff all traffic in the local network.

One denial of service attack vector comes from choosing a hash table as the underlying store. Its hash function should be collision-resistant, one-way, and its output should be of fixed length. A good choice would be a cryptographic hash function (like SHA-256), but these are too expensive and thus rarely used for hash tables. Denial of Service via Algorithmic Complexity Attacks and Efficient Denial of Service Attacks on Web Application Platforms are worth studying. If you expose your hash function to user input (and don't use a private seed), you might accidentally widen your attack surface.
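For what it's worth, OCaml's standard library already offers a mitigation along these lines: hash tables can be created with a random seed, which makes it much harder for an attacker to precompute colliding keys. A tiny sketch (the addresses are made-up strings):

```ocaml
(* ~random:true seeds this table's hash function randomly at creation,
   so an attacker cannot predict which keys collide. *)
let cache : (string, string) Hashtbl.t = Hashtbl.create ~random:true 16
let () = Hashtbl.replace cache "10.0.0.1" "de:ad:be:ef:00:01"
```

This only defends against precomputed collisions; the cache-size and first-answer issues discussed above need separate countermeasures.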


Back to our design

To mitigate person in the middle attacks, we provide an API to add static entries, which are never overwritten by network input. While our own IPv4 addresses are advertised if a matching ARP request is received, other static entries are not advertised (neither are dynamic entries). We only insert entries into our cache if we have an outstanding request or already an entry. To provide low latency, just before a dynamic entry would time out, we send another request for its IPv4 address to the network.


Implementation


I have the source, its documentation, a test suite and a coverage report online.

The implementation of the core logic still fits in less than 250 lines of code. Just under 100 more lines are needed for decoding and encoding byte buffers, and another 140 lines to implement the Mirage ARP interface. Tests are available which cover the protocol logic and decoding/encoding at 100%.

The effectful layer is underspecified (especially regarding conflicts: what happens if there is an outstanding request for an IPv4 address and I add a static entry for it?). There is an implementation based on hash tables, which I used for a bit of benchmarking.

Correctness aside, the performance should be in the same ballpark. I am mainly interested in how much input can be processed, be it invalid input, random valid input, random requests, random replies, or a mix of all of the above plus some valid requests which should be answered. I ran the tests in two modes: one with accelerated time (where a minute passes in a second) to increase the pressure on the cache (named fast), and one in real time. The results are in the table below (bigger numbers are better). It shows that neither approach is slower by design (of course there is still room for improvement).

| Test          | Hashtable |    fast |     Map |    fast |
| ------------- | --------- | ------- | ------- | ------- |
| invalid       |   2813076 | 2810684 | 2806899 | 2835905 |
| valid         |   1126805 | 1320737 | 1770123 | 1785630 |
| request       |   2059550 | 2044507 | 2109540 | 2119289 |
| replies       |   1293293 | 1313405 | 1432225 | 1449860 |
| mixed         |   2158481 | 2191617 | 2196092 | 2213530 |
| queries       |     42058 |   45258 |   44803 |   44379 |

I ran each benchmark 3 times on a single core (using cpuset -l 3 to pin it to one specific core) and picked the best set of results. The measure is the number of packets processed over 5 seconds, using the Mirage ARP API. The full source code is in the bench subdirectory. As always, take benchmarks with a grain of salt: everybody will always find the right parameters for their microbenchmarks.


There was even a bug in the MirageOS ARP code: its definition of gratuitous ARP is wrong.

I'm interested in feedback, either via twitter or via eMail.


Catch the bug, walking through the stack

Written by hannes
Classified under: mirageos, security
Published: 2016-05-03 (last updated: 2021-11-19)

BAD RECORD MAC


Roughly 2 weeks ago, Engil informed me that a TLS alert pops up in his browser sometimes when he reads this website. His browser reported that the message authentication code was wrong. From RFC 5246: This message is always fatal and should never be observed in communication between proper implementations (except when messages were corrupted in the network).

I tried hard but could not reproduce it; still, I was very worried and eager to find the root cause (some little fear remained that it was in our TLS stack). I set up this website with some TLS-level tracing (extending the code from our TLS handshake server). We tried to reproduce the issue, with traces and packet captures in place (on both the client and server side), from our computer labs office, with no success. Later, Engil tried from his home and, after 45MB of wire data, ran into the issue. Finally, evidence! Isolating the TCP flow with the alert resulted in just about 200KB of packet capture data (the TLS ASCII trace is around 650KB).


encrypted alert

What is happening on the wire? After some data is successfully transferred, at some point the client sends an encrypted alert (see above). The TLS session used an RSA key exchange, and I could decrypt the TLS stream with Wireshark, which revealed that the alert was indeed a bad record MAC. Wireshark's "follow SSL stream" showed all client requests, but not all server responses. The TLS-level trace from the server showed properly encrypted data. I tried to spot the TCP payload which caused the bad record MAC, starting from the alert in the client capture (the offending TCP frame should be closely before the alert).


client TCP frame

There is plaintext data which looks like an HTTP request in the TCP frame sent by the server to the client? WTF? This should never happen! The same TCP frame on the server side looked even more strange: it had an invalid checksum.


server TCP frame


What do we have so far? We spotted some plaintext data in a TCP frame which is part of a TLS session. The TCP checksum is invalid.


This at least explains why we were not able to reproduce from our office: usually, TCP frames with invalid checksums are dropped by the receiving TCP stack, and the sender will retransmit TCP frames which have not been acknowledged by the recipient. However, this mechanism only works if the checksums haven't been changed by a well-meaning middleman to be correct! Our traces are from a client behind a router doing network address translation, which has to recompute the TCP checksum because it modifies destination IP address and port. It seems like this specific router does not validate the TCP checksum before recomputing it, so it replaced the invalid TCP checksum with a valid one.

The next steps are: what did the TLS layer intend to send? And why was a TCP frame with an invalid checksum emitted?


Looking into the TLS trace, the TCP payload in question should have started with the following data:

0000  0B C9 E5 F3 C5 32 43 6F  53 68 ED 42 F8 67 DA 8B  .....2Co Sh.B.g..
0010  17 87 AB EA 3F EC 99 D4  F3 38 88 E6 E3 07 D5 6E  ....?... .8.....n
0020  94 9A 81 AF DD 76 E2 7C  6F 2A C6 98 BA 70 1A AD  .....v.| o*...p..
0030  95 5E 13 B0 F7 A3 8C 25  6B 3D 59 CE 30 EC 56 B8  .^.....% k=Y.0.V.
0040  0E B9 E7 20 80 FA F1 AC  78 52 66 1E F1 F8 CC 0D  ........ xRf.....
0050  6C CD F0 0B E4 AD DA BA  40 55 D7 40 7C 56 32 EE  l....... @U.@|V2.
0060  9D 0B A8 DE 0D 1B 0A 1F  45 F1 A8 69 3A C3 4B 47  ........ E..i:.KG
0070  45 6D 7F A6 1D B7 0F 43  C4 D0 8C CF 52 77 9F 06  Em.....C ....Rw..
0080  59 31 E0 9D B2 B5 34 BD  A4 4B 3F 02 2E 56 B9 A9  Y1....4. .K?..V..
0090  95 38 FD AD 4A D6 35 E4  66 86 6E 03 AF 2C C9 00  .8..J.5. f.n..,..

The Ethernet, IP, and TCP headers are 54 bytes in total, thus we have to compare starting at offset 0x0036 in the screenshot above. The first 74 bytes (up to 0x007F in the screenshot, 0x0049 in the text dump) are very much the same, but then they diverge (for another 700 bytes).

I manually computed the TCP checksum using the TCP/IP payload from the TLS trace, and it matches the one reported as invalid. Thus, a big relief: both the TLS and the TCP/IP stack have used the correct data. Our memory disclosure issue must occur after the TCP checksum is computed.

As mentioned earlier I'm still using mirage-net-xen release 1.4.1.


Communication with the Xen hypervisor is done via shared memory. The memory is allocated by mirage-net-xen, which then grants access to the hypervisor using Xen grant tables. The TX protocol is implemented here in mirage-net-xen, which includes allocation of a ring buffer. The TX protocol also has implementations for writing requests and waiting for responses, both of which are identified using a 16bit integer. When a response has arrived from the hypervisor, the respective page is returned into the pool of shared pages, to be reused by the next packet to be transmitted.

Instead of using a whole page (4096 bytes) per request/response, each page is split into two blocks (since the most common MTU for Ethernet is 1500 bytes). The identifier in use is the grant reference, which may be unique per page, but not per block.
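A toy model (not the actual mirage-net-xen code) of why an identifier that is unique per page but shared by two blocks is dangerous: freeing by that identifier on the first response releases both blocks, even though the second may still be queued for transmission:

```ocaml
(* In-flight blocks tagged with a hypothetical page id (grant reference).
   free_by_id models returning blocks to the free pool when a response
   carrying that id arrives. *)
let free_by_id id inflight =
  List.partition (fun (id', _block) -> id' = id) inflight

let () =
  let inflight = [ (42, "block A"); (42, "block B"); (43, "block C") ] in
  let freed, still_inflight = free_by_id 42 inflight in
  (* both halves of page 42 are "freed", although block B was not sent *)
  assert (List.length freed = 2);
  assert (still_inflight = [ (43, "block C") ])
```

With a per-block identifier (e.g. page id plus block offset), only the block the response refers to would be released.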


Thus, when two blocks are requested to be sent, the first polled response will immediately release both into the list of free blocks. When another packet is sent, the block still waiting to be sent in the ringbuffer can be reused. This leads to corrupt data being sent.


The fix was already done back in December to the master branch of mirage-net-xen, and has now been backported to the 1.4 branch. In addition, a patch to avoid collisions on the receiving side has been applied to both branches (and released in versions 1.4.2 resp. 1.6.1).


What can we learn from this? Read the interface documentation (if there is any), and make sure unique identifiers are really unique. Think about the lifecycle of pieces of memory. Investigation of high level bugs pays off, you might find some subtle error on a different layer. There is no perfect security, and code only gets better if more people read and understand it.

The issue has been in mirage-net-xen since its initial release, but it only occurred under load, and thanks to reliable protocols the corrupted frames were silently discarded (an invalid TCP checksum leads to a dropped frame and retransmission of its payload).


We have seen plain data in a TLS encrypted stream. The plain data was intended to be sent to the dom0 for logging access to the webserver. The same code is used in our Piñata, thus it could have been yours (although I tried hard and couldn't get the Piñata to leak data).


Certainly, interfacing the outside world is complex. The mirage-block-xen library uses a similar protocol to access block devices. From a brief look, that library seems to be safe (using 64bit identifiers).

I'm interested in feedback, either via twitter or via eMail.


Other updates in the MirageOS ecosystem


Counting Bytes

Written by hannes
Classified under: mirageos, background
Published: 2016-06-11 (last updated: 2021-11-19)

I was busy writing code, text, and talks, and also spent a week without Internet, where I ground and brewed 15kg of espresso.


Size of a MirageOS unikernel


There have been lots of claims and myths around the concrete size of MirageOS unikernels. In this article I'll apply some measurements which overapproximate the binary sizes. The tools used for the visualisations are available online, and soon hopefully upstreamed into the mirage tool. This article uses mirage-2.9.0 (which might be outdated at the time of reading).


Let us start with a very minimal unikernel, consisting of a unikernel.ml:

module Main (C: V1_LWT.CONSOLE) = struct
  let start c = C.log_s c "hello world"
end

and the following config.ml:

open Mirage

let () =
  register "console" [
    foreign "Unikernel.Main" (console @-> job) $ default_console
  ]

If we mirage configure --unix and mirage build, we end up (at least on a 64bit FreeBSD-11 system with OCaml 4.02.3) with a 2.8MB main.native, dynamically linked against libthr, libm and libc (ldd ftw), or a 4.5MB Xen virtual image (built on a 64bit Linux computer).


In the _build directory, we can find some object files and their byte sizes:

 7144 key_gen.o
14568 main.o
 3552 unikernel.o

These do not sum up to 2.8MB ;)

We did not specify any dependencies ourselves, thus all bits have been injected automatically by the mirage tool. Let us dig a bit deeper into what we actually used. mirage configure generates a Makefile which includes the dependent OCaml libraries, and the packages which are used:

LIBS   = -pkgs functoria.runtime, mirage-clock-unix, mirage-console.unix, mirage-logs, mirage-types.lwt, mirage-unix, mirage.runtime
PKGS   = functoria lwt mirage-clock-unix mirage-console mirage-logs mirage-types mirage-types-lwt mirage-unix

I explained bits of our configuration DSL Functoria earlier. The mirage-clock device is automatically injected by mirage, providing an implementation of the CLOCK device. We use a mirage-console device, to which we print the hello world. Since mirage-2.9.0, the logging library (and its reporter, mirage-logs) is automatically injected as well, and it actually uses the clock. Also, the mirage type signatures are required. The mirage-unix package contains a sleep, a main, and provides the argument vector argv (all symbols in the OS module).


Looking into the archive files of those libraries, we end up with ~92KB (NB mirage-types only contains types, and thus no runtime data):

15268 functoria/functoria-runtime.a
 3194 mirage-clock-unix/mirage-clock.a
12514 mirage-console/mirage_console_unix.a
24532 mirage-logs/mirage_logs.a
14244 mirage-unix/OS.a
21964 mirage/mirage-runtime.a

This still does not sum up to 2.8MB since we're missing the transitive dependencies.


Visualising recursive dependencies

Let's use a different approach: first, recursively find all dependencies. We do this by using ocamlfind to read META files, which contain a list of dependent libraries in their requires line. As input we use LIBS from the Makefile snippet above. The code (an OCaml script) is available here. The colour scheme is red for pieces of the OCaml distribution, yellow for input packages, and orange for the dependencies.
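The heart of such a script is a transitive closure over the requires entries. A self-contained sketch (the dependency table below is made up for illustration; the real script reads it from META files via ocamlfind):

```ocaml
(* Hypothetical "requires" table, as ocamlfind would report it. *)
let requires = [
  "mirage-console.unix", [ "mirage-console"; "lwt" ];
  "mirage-console", [ "mirage-types" ];
  "lwt", [];
  "mirage-types", [];
]

(* Accumulate each package once, then recurse into its dependencies. *)
let rec closure acc pkg =
  if List.mem pkg acc then acc
  else
    let deps = try List.assoc pkg requires with Not_found -> [] in
    List.fold_left closure (pkg :: acc) deps

let all = closure [] "mirage-console.unix"
```

Running the closure over all LIBS entries and drawing an edge per requires pair yields exactly the dependency graph shown below.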


This is the UNIX version only; the Xen version looks similar (but is still worth mentioning).

You can spot on the right that mirage-bootvar uses re, which provoked me to open a PR, but Jon Ludlam had already opened a nicer PR, which is now merged (and a new release is in preparation).


Counting bytes

While a dependency graph gives the big picture of the libraries a MirageOS unikernel is composed of, we also want to know how many bytes they contribute to the unikernel. The dependency graph only contains the OCaml-level dependencies, but MirageOS additionally has a pkg-config universe of libraries written in C (such as mini-os, openlibm, ...).

We overapproximate the sizes here by assuming that a linker simply concatenates all required object files. This is not true: empirically, the sum of all objects is about a factor of two larger than the actual size of the unikernel.

I developed a pie chart visualisation, but a friend of mine reminded me that such a chart is pretty useless for comparing slices with the human brain. I spent some more time developing a treemap visualisation to satisfy the brain. The implemented algorithm is based on squarified treemaps, but does not use implicit mutable state. In addition, the provided script parses common linker flags (-o -L -l) and collects the arguments to be linked in. It can be passed to ocamlopt as the C linker; more instructions are at the end of treemap.ml (which should be cleaned up and integrated into the mirage tool, as mentioned earlier).


As mentioned above, this is an overapproximation. The libgcc.a is only needed on Xen (see this comment); I have not yet tracked down why there are both a libasmrun.a and a libxenasmrun.a.


More complex examples


Besides the hello world, I used the same tools on our BTC Piñata.


Conclusion


OCaml does not yet do dead code elimination, but there is a PR based on the flambda middle-end which does so. I haven't yet investigated numbers using that branch.

These counting statistics could go into more detail (e.g. using nm to count the sizes of concrete symbols, which opens the possibility to see which symbols are present in the objects but not in the final binary). Also, collecting the numbers for each module in a library would be great to have. In the end, it would be great to easily spot the source fragments which are responsible for a huge binary size (and to get rid of them).

I'm interested in feedback, either via twitter or via eMail.


Conex, establish trust in community repositories

Written by hannes
Published: 2017-02-16 (last updated: 2021-11-19)

Less than two years after the initial proposal, we're happy to present conex 0.9.2. Please note that this is still work in progress, to be deployed with opam 2.0 and the opam repository.


screenshot

Conex is a library to verify and attest to the release integrity and authenticity of a community repository through the use of cryptographic signatures.

Packages are collected in a community repository to provide an index and allow cross-references. Authors submit their packages to the repository, which is curated by a team of janitors. Information about a package stored in the repository includes: license, author, releases, their dependencies, build instructions, url, and tarball checksum. When someone publishes a new package, the janitors integrate it into the repository if it compiles and passes some validity checks. For example, its name must not be misleading, nor may it be too general.

Janitors keep an eye on the repository and fix emergent failures. A new compiler release, or a release of a package on which other packages depend, might break the compilation of a package. Janitors usually fix these problems by adding a patch to the build script, or by introducing a version constraint in the repository.

Conex ensures that every release of each package has been approved by its author or a quorum of janitors. A conex-aware client initially verifies the repository using janitor key fingerprints as its trust anchor. Afterwards, the on-disk repository is trusted, and every update is verified (as a patch) individually. This incremental verification is accomplished by ensuring that all resources the patch modifies result in a valid repository with sufficient approvals. Additionally, monotonicity is preserved by embedding counters in each resource, and enforcing a counter increment after modification. This mechanism avoids rollback attacks, where an attacker presents you with an old version of the repository.
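The counter rule can be sketched in a few lines (the field names are illustrative, not conex's actual types):

```ocaml
(* A resource carries a counter; an update is only acceptable if it
   refers to the same resource and strictly increments the counter.
   This is what rules out replaying an older, previously valid copy. *)
type resource = { name : string; counter : int64 }

let valid_update ~old ~updated =
  old.name = updated.name
  && Int64.compare updated.counter old.counter > 0
```

A rolled-back resource would carry a counter less than or equal to the one already on disk, and so would be rejected.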

+
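The counter check above can be sketched in a few lines. This is an illustrative toy, not the conex API; the type and function names are made up for the example:

```ocaml
(* Illustrative sketch (not the conex API): an update to a resource is
   only accepted if its embedded counter strictly increased, which is
   what rules out replaying an older version of the repository. *)
type resource = { name : string ; counter : int64 }

let accept_update ~current ~proposed =
  if String.equal current.name proposed.name
     && Int64.compare proposed.counter current.counter > 0
  then Ok proposed
  else Error `Rollback

let () =
  let v1 = { name = "arp" ; counter = 1L } in
  let v2 = { name = "arp" ; counter = 2L } in
  assert (accept_update ~current:v1 ~proposed:v2 = Ok v2);
  (* replaying the old version is rejected *)
  assert (accept_update ~current:v2 ~proposed:v1 = Error `Rollback)
```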

A timestamping service (not yet implemented) will periodically approve a global view of the verified repository, together with a timestamp. This is then used by the client to prevent mix-and-match attacks, where an attacker mixes some old packages and some new ones. The client is also able to detect freeze attacks, since at least every day there should be a new signature from the timestamping service.

The trust is rooted in digital signatures by package authors. The server which hosts the repository does not need to be trusted. Neither does the host serving the release tarballs.

If a single janitor were powerful enough to approve a key for any author, compromising one janitor would be sufficient to enroll new identities, modify dependencies, build scripts, etc. In conex, a quorum of janitors (let's say 3) has to approve such changes. This is different from current workflows, where a single janitor with access to the repository can merge fixes.

Conex adds metadata, in the form of resources, to the repository to ensure integrity and authenticity. There are different kinds of resources:

  • Authors, consisting of a unique identifier, public key(s), and accounts.
  • Teams, sharing the same namespace as authors, containing a set of members.
  • Authorisations, one for each package, describing which identities are authorised for the package.
  • Package indexes, one for each package, listing all releases.
  • Releases, one for each release, listing checksums of all data files.

Modifications to identities and authorisations need to be approved by a quorum of janitors; package index and release files can be modified either by an authorised author or by a quorum of janitors.
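The quorum rule can be sketched as a small predicate. This is a hypothetical helper for illustration only (the names are not from the conex code base):

```ocaml
(* Hypothetical helper mirroring the rule above: a change counts as
   approved once distinct janitors reaching the quorum have signed it. *)
let quorum_reached ~quorum ~janitors approvals =
  let valid =
    List.sort_uniq compare
      (List.filter (fun id -> List.mem id janitors) approvals)
  in
  List.length valid >= quorum

let () =
  let janitors = [ "a"; "b"; "c"; "d" ] in
  assert (quorum_reached ~quorum:3 ~janitors [ "a"; "b"; "c" ]);
  (* duplicate approvals from one janitor do not help *)
  assert (not (quorum_reached ~quorum:3 ~janitors [ "a"; "a"; "b" ]))
```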

Documentation

API documentation is available online, as well as a coverage report.

We presented an abstract at OCaml 2016 about an earlier design.

Another article on an earlier design (from 2015) is also available.

Conex is inspired by the update framework, especially their CCS 2010 paper, and adapted to the opam repository.

The TUF spec has a good overview of attacks and threat model, both of which are shared by conex.

What's missing

  • See issue 7 for a laundry list
  • Timestamping service
  • Key revocation and rollover
  • Tool to approve a PR (for janitors)
  • Camelus-like opam-repository check bot
  • Integration into release management systems

Getting started

At the moment, our opam repository does not include any metadata needed for signing. We're in a bootstrap phase: we need you to generate a keypair, claim your packages, and approve your releases.

We cannot verify the main opam repository yet, but opam2 has built-in support for a repository validation command, which should call out to conex_verify (there is a --nostrict flag for the impatient). There is also an example repository which uses the opam validation command.

To reduce the manual work, we analysed 7000 PRs of the opam repository from the last 4.5 years (more details here). This resulted in an educated guess as to who the people modifying each package are, which we use as the basis for whom to authorise for which packages. Please check with conex_author status below whether your team membership and authorised packages were inferred correctly.

Each individual author - you - needs to generate their private key, submit their public key, and start approving releases (including old ones, after carefully checking that the build script, patches, and tarball checksum are valid). Each resource can be approved in multiple versions at the same time.

Installation

TODO: remove clone once PR 8494 is merged.

$ git clone -b auth https://github.com/hannesm/opam-repository.git repo
$ opam install conex
$ cd repo

This will install conex, namely the command line utilities conex_author and conex_verify_nocrypto/conex_verify_openssl. All files read and written by conex are in the usual opam file format. This means you can always modify them manually (but be careful: modifications need to increment counters, add checksums, and be signed). Conex does not deal with git; you have to manually git add files and open pull requests.

Author enrollment

For the opam repository, we will use GitHub ids as conex ids. Thus, your conex id and your GitHub id should match up.

repo$ conex_author init --repo ~/repo --id hannesm
Created keypair hannesm.  Join teams, claim your packages, sign your approved resources and open a PR :)

This attempts to parse ~/repo/id/hannesm, and errors if it is a team or an author with a public key. Otherwise it generates a keypair, writes the private part as home.hannes.repo.hannesm.private (the absolute path separated by dots, followed by your id, and private - if you move your repository, rename your private key) into ~/.conex/, and the checksums of the public part and your accounts into ~/repo/id/hannesm. See conex_author help init for more options (especially the additional verbosity flag -v can be helpful).
repo$ git status -s
 M id/hannesm

repo$ git diff //abbreviated output
-  ["counter" 0x0]
+  ["counter" 0x1]

-  ["resources" []]
+  [
+    "resources"
+    [
+      [
+        ["typ" "key"]
+        ["name" "hannesm"]
+        ["index" 0x1]
+        ["digest" ["SHA256" "ht9ztjjDwWwD/id6LSVi7nKqVyCHQuQu9ORpr8Zo2aY="]]
+      ]
+      [
+        ["typ" "account"]
+        ["name" "hannesm"]
+        ["index" 0x2]
+        ["digest" ["SHA256" "aCsktJ5M9PI6T+m1NIQtuIFYILFkqoHKwBxwvuzpuzg="]]
+      ]
+
+keys: [
+  [
+    [
+      "RSA"
+      """
+-----BEGIN PUBLIC KEY-----
+MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAyUhArwt4XcxLanARyH9S
...
+9KQdg6QnLsQh/j74QKLOZacCAwEAAQ==
+-----END PUBLIC KEY-----"""
+      0x58A3419F
+    ]
+    [
+      0x58A79A1D
+      "RSA-PSS-SHA256"
+      "HqqicsDx4hG9pFM5E7"
+    ]
+  ]
+]

Status

If you have a single identity and contribute to a single signed opam repository, you don't need to specify --id or --repo from now on.

The status subcommand presents an author-specific view on the repository. It lists your own public keys, team memberships, queued resources, and authorised packages.

The opam repository is in a transitional state; we explicitly pass --quorum 0, which means that every checksum is valid (approved by a quorum of 0 janitors).

repo$ conex_author status --quorum 0 arp
author hannesm #1 (created 0) verified 3 resources, 0 queued
4096 bit RSA key created 1487094175 approved, SHA256: ht9ztjjDwWwD/id6LSVi7nKqVyCHQuQu9ORpr8Zo2aY=
account GitHub hannesm approved
account email hannes@mehnert.org approved
package arp authorisation approved
conex_author: [ERROR] package index arp was not found in repository

This shows your key material and accounts, team membership, and packages you are authorised to modify (inferred as described here).

The --noteam argument limits the package list to only those you are personally authorised for. The --id argument presents you with the view of another author, or from a team perspective. The positional argument is a prefix match on package names (leave it empty for all).

Resource approval

Each resource needs to be approved individually. Each author has a local queue for to-be-signed resources, which is extended by the authorisation, init, key, release, and team subcommands (all have a --dry-run flag). The queue can be dropped using conex_author reset. Shown below is conex_author sign, which lets you interactively approve queued resources and cryptographically signs your approved resources afterwards.

The output of conex_author status listed an authorisation for conf-gsl, which I don't feel responsible for. Let's drop my privileges:

repo$ conex_author authorisation conf-gsl --remove -m hannesm
modified authorisation and added resource to your queue.

I checked my arp release carefully (checksums of tarballs are correct, opam files do not execute arbitrary shell code, etc.), and approve this package and its single release:

repo$ conex_author release arp
conex_author.native: [WARNING] package index arp was not found in repository
conex_author.native: [WARNING] release arp.0.1.1 was not found in repository
wrote release and added resources to your queue.

Once you have finished joining and leaving teams (using the team subcommand), claiming packages (using the authorisation subcommand), and approving releases (using the release subcommand), you have to cryptographically sign your queued resource modifications:

repo$ conex_author sign
release arp.0.1.1 #1 (created 1487269425)
[descr: SHA256: aCsNvcj3cBKO0GESWG4r3AzoUEnI0pHGSyEDYNPouoE=;
opam: SHA256: nqy6lD1UP+kXj3+oPXLt2VMUIENEuHMVlVaG2V4z3p0=;
url: SHA256: FaUPievda6cEMjNkWdi0kGVK7t6EpWGfQ4q2NTSTcy0=]
approved (yes/No)?
package arp #1 (created 1487269425) [arp.0.1.1]
approved (yes/No)?y
authorisation conf-gsl #1 (created 0) empty
approved (yes/No)?y
wrote hannesm to disk

repo$ conex_author status --quorum 0 arp
author hannesm #1 (created 0) verified 7 resources, 0 queued
4096 bit RSA key created 1487094175 approved, SHA256: ht9ztjjDwWwD/id6LSVi7nKqVyCHQuQu9ORpr8Zo2aY=
account GitHub hannesm approved
account email hannes@mehnert.org approved
package arp authorisation approved package index approved
release arp.0.1.1: approved

If you now modify anything in packages/arp (add subdirectories, modify opam, etc.), this will not be automatically approved (see below for how to do that).

You need to manually git add some created files.

repo$ git status -s
 M id/hannesm
 M packages/conf-gsl/authorisation
?? packages/arp/arp.0.1.1/release
?? packages/arp/package

repo$ git add packages/arp/arp.0.1.1/release packages/arp/package
repo$ git commit -m "hannesm key enrollment and some fixes" id packages

Now push this to your fork, and open a PR on opam-repository!


Editing a package

If you need to modify a released package, you modify the opam file (as before, e.g. introducing a conflict with a dependency), and then approve the modifications. After your local modifications, conex_author status will complain:

repo$ conex_author status arp --quorum 0
package arp authorisation approved package index approved
release arp.0.1.1: checksums for arp.0.1.1 differ, missing on disk: empty, missing in checksums file: empty, checksums differ: [have opam: SHA256: QSGUU9HdPOrwoRs6XJka4cZpd8h+8NN1Auu5IMN8ew4= want opam: SHA256: nqy6lD1UP+kXj3+oPXLt2VMUIENEuHMVlVaG2V4z3p0=]

repo$ conex_author release arp.0.1.1
released and added resources to your resource list.

repo$ conex_author sign
release arp.0.1.1 #1 (created 1487269943)
[descr: SHA256: aCsNvcj3cBKO0GESWG4r3AzoUEnI0pHGSyEDYNPouoE=;
opam: SHA256: QSGUU9HdPOrwoRs6XJka4cZpd8h+8NN1Auu5IMN8ew4=;
url: SHA256: FaUPievda6cEMjNkWdi0kGVK7t6EpWGfQ4q2NTSTcy0=]
approved (yes/No)? y
wrote hannesm to disk

The release subcommand recomputed the checksums, incremented the counter, and added the result to your queue. The sign command signed the approved resource.

repo$ git status -s
 M id/hannesm
 M packages/arp/arp.0.1.1/opam
 M packages/arp/arp.0.1.1/package

repo$ git commit -m "fixed broken arp package" id packages

Janitor tools


Janitors need to approve teams, keys, accounts, and authorisations.

To approve resources which are already in the repository on disk, the key subcommand queues approval of the keys and accounts of the provided author:

repo$ conex_author key avsm
added keys and accounts to your resource list.

The authorisation and team subcommands behave similarly for authorisations and teams.

Bulk operations are supported as well:

conex_author authorisation all

This will approve all authorisations of the repository which are not yet approved by you. The same applies to the key and team subcommands, which also accept all.

Don't forget to conex_author sign afterwards (or yes | conex_author sign).


Verification

The two command line utilities, conex_verify_openssl and conex_verify_nocrypto, contain the same logic and accept the same command line arguments.

For bootstrapping purposes (nocrypto is an opam package with dependencies), conex_verify_openssl relies on the openssl command line tool (version 1.0.0 and above) for digest computation and verification of the RSA-PSS signatures.

The goal is to use the hooks provided by opam2, but before we have signatures we cannot enable them.

See the example repository for initial verification experiments, and opam2 integration.

I'm interested in feedback; please open an issue on the conex repository. This article itself is stored as Markdown in a different repository.


My 2018 contains robur and starts with re-engineering DNS

Written by hannes
Classified under: mirageosprotocol
Published: 2018-01-11 (last updated: 2021-11-19)

2018

At the end of 2017, I resigned from my PostDoc position at the University of Cambridge (in the REMS project). In early December 2017 I organised the 4th MirageOS hack retreat, with which I'm very satisfied. In March 2018 the 5th retreat will happen (please sign up!).

In 2018 I moved to Berlin and started to work for the (non-profit) Center for the Cultivation of Technology with our robur.io project: "At robur, we build performant bespoke minimal operating systems for high-assurance services". robur is only possible thanks to generous donations in autumn 2017, enthusiastic collaborators, supportive friends, and a motivated community - thanks to all. We will receive funding from the prototypefund to work on a CalDAV server implementation in OCaml targeting MirageOS. We're still looking for donations and further funding, please get in touch. Apart from CalDAV, I want to start the year by finishing several projects which I discovered on my hard drive. This includes DNS, opam signing, TCP, ... . My personal goal for 2018 is to develop a flexible mirage deploy, because after configuring and building a unikernel, I want to get it smoothly up and running (spoiler: I already use albatross in production).

To kick off this year (3% of 2018 is already used), I'll talk in more detail about µDNS, an opinionated from-scratch re-engineered DNS library, which I've been using since Christmas 2017 in production for ns.nqsb.io and ns.robur.io. The development started in March 2017, and continued over several evenings and long weekends. My initial motivation was to implement a recursive resolver to run on my laptop. I had a working prototype in use on my laptop for 4 months in the summer of 2017, but that code was not in good shape, so I went down the rabbit hole and (re)wrote a server (and learned more about GADTs). A configurable resolver usually needs a server as local overlay anyway. Furthermore, dynamic updates are standardised, and thus a configuration interface exists inside the protocol, even with hmac signatures for authentication! Coincidentally, I started to solve another issue, namely automated management of Let's Encrypt certificates (see this branch for an initial hack). On my journey, I also reported a cache poisoning vulnerability, which was fixed in Docker for Windows.

But let's get started with some content. Please keep in mind that while the code is publicly available, it is not yet released (mainly because the test coverage is not high enough, and documentation is lacking). I appreciate early adopters; please let me know if you find any issues or a use case which is not straightforward to solve. This won't be the last article about DNS this year - persistent storage, the resolver, and Let's Encrypt support are still missing.

What is DNS?

The domain name system is a core Internet protocol which translates domain names to IP addresses. A domain name is easier for human beings to memorise than an IP address. DNS is hierarchical and decentralised. It was initially "specified" in Nov 1987 in RFC 1034 and RFC 1035. Nowadays it spans more than 20 technical RFCs, 10 security related ones, 5 best current practices, and another 10 informational. The basic encoding and mechanisms did not change.

On the Internet, there is a set of root servers (administrated by IANA) which provide the information about which name servers are authoritative for which top level domain (such as ".com"). These in turn provide the information about which name servers are responsible for which second level domain name (such as "example.com"), and so on. There are at least two name servers for each domain name, in separate networks - in case one is unavailable, the other can be reached.

The building blocks of DNS are: the resolver - either a stub (gethostbyname provided by your C library) or a caching forwarding resolver (at your ISP), which sends DNS packets to another resolver, or a recursive resolver which, once seeded with the root servers, finds out the IP address of a requested domain name - and the authoritative servers, which reply to requests for their configured domains.

To get some terminology: a DNS client sends a query, consisting of a domain name and a query type, and expects a set of answers, which are called resource records and contain: name, time to live, type, and data. The resolver iteratively requests resource records from authoritative servers until the requested domain name is resolved or fails (name does not exist, server failure, server offline).

DNS usually uses UDP as transport, which is not reliable and is limited to 512 bytes of payload on the Internet (due to various middleboxes). DNS can also be transported via TCP, and even via TLS over UDP or TCP. If a DNS packet transferred via UDP is larger than 512 bytes, it is cut at the 512 byte mark, and a bit in its header is set. The receiver can decide whether to use the 512 bytes of information, or to throw them away and attempt a TCP connection.

DNS packet

The packet encoding starts with a 16bit identifier, followed by a 16bit header (containing operation, flags, status code), and four counters, each 16bit, specifying the number of resource records in the body: questions, answers, authority records, and additional records. The header starts with one bit for the operation (query or response), four bits of opcode, various flags (recursion, authoritative, truncation, ...), and the last four bits encode the response code.
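The fixed 12-byte layout described above can be decoded with a handful of bit operations. This is a rough sketch under my reading of RFC 1035, not the µDNS decoder:

```ocaml
(* Sketch of decoding the fixed 12-byte DNS header: identifier, flag
   word, then four 16bit counts. Not the µDNS implementation. *)
let u16 s off = (Char.code s.[off] lsl 8) lor Char.code s.[off + 1]

type header = {
  id : int ; query : bool ; opcode : int ; truncated : bool ;
  rcode : int ; questions : int ; answers : int ;
  authority : int ; additional : int ;
}

let decode_header s =
  if String.length s < 12 then Error "short packet"
  else
    let flags = u16 s 2 in
    Ok { id = u16 s 0 ;
         query = flags land 0x8000 = 0 ;        (* QR bit *)
         opcode = (flags lsr 11) land 0xF ;
         truncated = flags land 0x0200 <> 0 ;   (* TC bit *)
         rcode = flags land 0xF ;
         questions = u16 s 4 ; answers = u16 s 6 ;
         authority = u16 s 8 ; additional = u16 s 10 }
```

For example, the 12 bytes `12 34 81 80 00 01 00 02 00 00 00 00` decode as a response (QR set) with id 0x1234, one question, and two answers.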

A question consists of a domain name, a query type, and a query class. A resource record additionally contains a 32bit time to live, a length, and the data.

Each domain name is a case sensitive string of up to 255 bytes, separated by '.' into labels of up to 63 bytes each. A label is either encoded by its length followed by the content, or by an offset to the start of a label earlier in the current DNS frame (poor man's compression). Care must be taken during decoding to avoid cycles in offsets. Common operations on domain names - equality, ordering, and also whether some domain name is a subdomain of another domain name - should be efficient. My initial representation was naïvely a list of strings; now it is an array of strings in reverse order. This speeds up common operations by a factor of 5 (see test/bench.ml).
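The effect of the reversed representation can be illustrated in a few lines (a simplified sketch, not the Dns_name module; it ignores case folding and length limits):

```ocaml
(* Sketch of the representation described above: keeping labels in
   reverse order ("ns.robur.io" becomes [|"io"; "robur"; "ns"|]) turns
   the subdomain test into a cheap prefix comparison. *)
let of_string s = Array.of_list (List.rev (String.split_on_char '.' s))

let is_subdomain ~sub ~of_ =
  let ls = Array.length sub and lo = Array.length of_ in
  let rec eq i = i >= lo || (String.equal sub.(i) of_.(i) && eq (i + 1)) in
  lo <= ls && eq 0

let () =
  assert (is_subdomain ~sub:(of_string "ns.robur.io") ~of_:(of_string "robur.io"));
  assert (not (is_subdomain ~sub:(of_string "robur.io") ~of_:(of_string "ns.robur.io")))
```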

The only class really used is IN (for Internet), as mentioned in RFC 6895. Various query types (MD, MF, MB, MG, MR, NULL, AFSDB, ...) are barely or never used. There is no need to convolute the implementation and its API with these legacy options (if you have a use case and see those in the wild, please tell me).

My packet decoder handles decompression, only allows valid internet domain names, and may return a partial parse - to use as many resource records of a truncated packet as possible. No exceptions are raised; the parser uses monadic-style error handling. Since label decompression requires the parser to know absolute offsets, the original buffer and the offset are manually passed around at all times, instead of using smaller views on the buffer. The decoder does not allow gaps, where the outer resource data length specifies a byte length which is not completely consumed by the specific resource data subparser (an A record must always consume exactly four bytes). Failing to check this can provide a way to exfiltrate data without being noticed.
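The "no gaps" rule can be sketched for the simplest case, an A record. This is an illustrative toy parser, not the µDNS subparser:

```ocaml
(* Sketch of the "no gaps" rule above: the subparser for an A record
   must consume exactly the declared rdlength (four bytes for an IPv4
   address); any leftover bytes are rejected rather than skipped. *)
let parse_a buf ~off ~rdlength =
  if rdlength <> 4 then Error "rdlength of an A record must be 4"
  else if String.length buf < off + 4 then Error "truncated A record"
  else
    Ok (Printf.sprintf "%d.%d.%d.%d"
          (Char.code buf.[off]) (Char.code buf.[off + 1])
          (Char.code buf.[off + 2]) (Char.code buf.[off + 3]))

let () =
  assert (parse_a "\x0a\x00\x2a\x02" ~off:0 ~rdlength:4 = Ok "10.0.42.2");
  (* a 5-byte rdata would leave an unchecked gap - reject it *)
  assert (parse_a "\x0a\x00\x2a\x02\x00" ~off:0 ~rdlength:5
          = Error "rdlength of an A record must be 4")
```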

Each zone (a served domain name) contains a SOA "start of authority" entry, which includes the primary nameserver name, the hostmaster's email address (both encoded as domain names), a serial number of the zone, and refresh, retry, expiry, and minimum intervals (all encoded as 32bit unsigned numbers, in seconds). Common resource records include A, whose payload is a 32bit IPv4 address. A nameserver (NS) record carries a domain name as payload. A mail exchange (MX) record's payload is a 16bit priority and a domain name. A CNAME record is an alias to another domain name. These days, there are even records to specify certificate authority authorisation (CAA), containing a flag (critical), a tag ("issue"), and a value ("letsencrypt.org").

Server

The operation of a DNS server is to listen for requests and serve replies. The data to be served can be canonically encoded in a zone file (the RFC describes the format). Apart from insecurities in DNS server implementations, another attack vector are amplification attacks, where an attacker crafts a small UDP frame with a fake source IP address, and the server answers with a large response to that address, which may lead to a DoS attack. Various mitigations exist, including rate limiting, serving large replies only via TCP, ...

Internally, the zone file data is stored in a tree (module Dns_trie implementation), where each node contains two maps: sub, whose keys are labels and whose values are subtrees, and dns_map (module Dns_map), whose keys are resource record types and whose values are the resource records. Both use the OCaml Map ("also known as finite maps or dictionaries, given a total ordering function over the keys. All operations over maps are purely applicative (no side-effects). The implementation uses balanced binary trees, and therefore searching and insertion take time logarithmic in the size of the map").

The server looks up the queried name, and in the returned Dns_map the queried type. The resource records found are sent as the answer, which also includes the question, authority information (the NS records of the zone), and additional glue records (IP addresses of names mentioned earlier in the same zone).

Dns_map

Dns_map is the data structure with resource record types as keys and a collection of matching resource records as values. In OCaml the value type must be homogeneous - using a normal sum type leads to an unnecessary unpacking step (or lacking type information):

let lookup_ns t =
  match Map.find NS t with
  | None -> Error `NotFound
  | Some (NS nameservers) -> Ok nameservers
  | Some _ -> Error `NotFound

Instead, in my current rewrite I use generalized algebraic data types (read the OCaml manual, Mads Hartmann's blog post about use cases for GADTs, and Andreas Garnæs' post about using GADTs for GraphQL type modifiers) to preserve the relation between key and value (an A record has a list of IPv4 addresses and a ttl as its value) - similar to hmap, but different: a closed key-value mapping (the GADT), no int for each key, and no mutable state. Thanks to Justus Matthiesen for helping me with GADTs and this code. Look into the interface and implementation.

(* an ordering relation, I dislike using int for that *)
module Order = struct
  type (_,_) t =
    | Lt : ('a, 'b) t
    | Eq : ('a, 'a) t
    | Gt : ('a, 'b) t
end

module Key = struct
  (* The key and its value type *)
  type _ t =
    | Soa : (int32 * Dns_packet.soa) t
    | A : (int32 * Ipaddr.V4.t list) t
    | Ns : (int32 * Dns_name.DomSet.t) t
    | Cname : (int32 * Dns_name.t) t

  (* we need a total order on our keys *)
  let compare : type a b. a t -> b t -> (a, b) Order.t = fun t t' ->
    let open Order in
    match t, t' with
    | Cname, Cname -> Eq | Cname, _ -> Lt | _, Cname -> Gt
    | Ns, Ns -> Eq | Ns, _ -> Lt | _, Ns -> Gt
    | Soa, Soa -> Eq | Soa, _ -> Lt | _, Soa -> Gt
    | A, A -> Eq
end

type 'a key = 'a Key.t

(* our OCaml Map with an encapsulated constructor as key *)
type k = K : 'a key -> k
module M = Map.Make(struct
    type t = k
    (* the price I pay for not using int as three-state value *)
    let compare (K a) (K b) = match Key.compare a b with
      | Order.Lt -> -1
      | Order.Eq -> 0
      | Order.Gt -> 1
  end)

(* v contains a key and value pair, wrapped by a single constructor *)
type v = V : 'a key * 'a -> v

(* t is the main type of a Dns_map, used by clients *)
type t = v M.t

(* retrieve a typed value out of the store *)
let get : type a. a Key.t -> t -> a = fun k t ->
  match M.find (K k) t with
  | V (k', v) ->
    (* this comparison is superfluous, just for the types *)
    match Key.compare k k' with
    | Order.Eq -> v
    | _ -> assert false

This helps me to programmatically retrieve tightly typed values from the cache, which is important when code depends on concrete values (i.e. when there are domain names, look these up as well and add them as additional records). Look into server/dns_server.ml.

Dynamic updates, notifications, and authentication

Dynamic updates specify in-protocol record updates (supported for example by nsupdate from ISC bind-tools); notifications are used by primary servers to notify secondary servers about updates, which then initiate a zone transfer to retrieve up-to-date data. Shared hmac secrets are used to ensure that a transaction (update, zone transfer) was authorised. These are all protocol extensions; there is no need for out-of-protocol solutions.

The server logic for update and zone transfer frames is slightly more complex, and includes a dependency on an authenticator (implemented using the nocrypto library, and ptime).

Deployment and Let's Encrypt

To deploy servers without much persistent data, an authentication schema is hardcoded in the dns-server: shared secrets are stored as DNS entries (DNSKEY), and the _transfer.zone, _update.zone, and _key-management.zone names are introduced to encode the permissions. A _transfer key also needs to encode the IP address of the primary (to know where to request zone transfers) and the secondary IP (to know where to send notifications).

Please have a look at ns.robur.io and the examples for more details. The shared secrets are provided as boot parameters of the unikernel.

I hacked maker's ocaml-letsencrypt library to use µDNS and send update frames to the given IP address. I have already used this to have Let's Encrypt issue various certificates for my domains.

There is no persistent storage of updates yet, but this can be realised by implementing a secondary (which is notified on updates) that writes every new zone to persistent storage (e.g. disk or git). I also plan to have an automated Let's Encrypt certificate unikernel which listens for certificate signing requests and stores signed certificates in DNS. Luckily the year has only started and there's plenty of time left.

I'm interested in feedback, either via twitter, hannesm@mastodon.social, or via eMail.

Deploying binary MirageOS unikernels

Written by hannes
Classified under: mirageosdeployment
Published: 2021-06-30 (last updated: 2021-11-15)

Introduction

MirageOS development has focused a lot on tooling and the developer experience, but to accomplish our goal to "get MirageOS into production", we need to lower the barrier. For us this means releasing binary unikernels. As described earlier, we received a grant for "Deploying MirageOS" from NGI Pointer to work on the required infrastructure. This is joint work with Reynir.

At builds.robur.coop we provide binary unikernel images (and supplementary software). Doing binary releases of MirageOS unikernels is challenging in two respects: firstly, to be useful for everyone, a binary unikernel should not contain any configuration (such as private keys, certificates, etc.). Secondly, the binaries should be reproducible. This is crucial for security: everyone can reproduce the exact same binary and verify that our build service only used the sources - no malware or backdoors included.

This post describes how you can deploy MirageOS unikernels without compiling them from source, then dives into the two issues outlined above - configuration and reproducibility - and finally describes how to set up your own reproducible build infrastructure for MirageOS, and how to bootstrap it.

Deploying MirageOS unikernels from binary

To execute a MirageOS unikernel, apart from a hypervisor (Xen/KVM/Muen), a tender (responsible for allocating host system resources and passing them to the unikernel) is needed. Using virtio, this is conventionally done with qemu on Linux, but its code size (and attack surface) is huge. For MirageOS, we develop Solo5, a minimal tender. It supports hvt - hardware virtualization (Linux KVM, FreeBSD bhyve, OpenBSD VMM) - and spt - sandboxed process (a tight seccomp ruleset: only a handful of system calls allowed, no hardware virtualization needed; Linux only). Apart from that, there are muen (a hypervisor developed in Ada), virtio (for some cloud deployments), and xen (PVHv2 or Qubes 4.0) - read more. We deploy our unikernels as hvt with FreeBSD bhyve as the hypervisor.

On builds.robur.coop, next to the unikernel images, solo5-hvt packages are provided - download the binary and install it. A NixOS package is already available - please note that packaging will soon become much easier (and we will work on getting packages merged into distributions).

Once the tender is installed, download a unikernel image (e.g. the traceroute unikernel described in an earlier post), and execute it:

$ solo5-hvt --net:service=tap0 -- traceroute.hvt --ipv4=10.0.42.2/24 --ipv4-gateway=10.0.42.1

If you plan to orchestrate MirageOS unikernels, you may be interested in albatross - we provide binary packages for albatross as well. An upcoming post will go into further details of how to set up albatross.

MirageOS configuration

+

A MirageOS unikernel has a specific purpose - composed of OCaml libraries - selected at compile time, which allows to only embed the required pieces. This reduces the attack surface drastically. At the same time, to be widely useful to multiple organisations, no configuration data must be embedded into the unikernel.

+

Early MirageOS unikernels such as mirage-www embed content (blog posts, ..) and TLS certificates and private keys in the binary (using crunch). The Qubes firewall (read the blog post by Thomas for more information) used to include the firewall rules until v0.6 in the binary, since v0.7 the rules are read dynamically from QubesDB. This is big usability improvement.

+

We have several possibilities to provide configuration information in MirageOS, on the one hand via boot parameters (can be pre-filled at development time, and further refined at configuration time, but those passed at boot time take precedence). Boot parameters have a length limitation.

+

Another option is to use a block device - the TLS reverse proxy stores its configuration there, modifiable via a TCP control socket (authenticated using a shared hmac secret).

Several other unikernels, such as this website and our CalDAV server, store the content in a remote git repository. The git URI and credentials (private key seed, host key fingerprint) are passed via boot parameters.

Finally, another option that we take advantage of is a post-link step that rewrites the binary to embed configuration. The tool caravan, developed by Romain, does this rewrite; it is used by our openvpn router (binary).

In the future, some configuration information - such as the monitoring system, syslog sink, and IP addresses - may be provided via DHCP on one of the private network interfaces. This would mean that the DHCP server holds some global configuration, and the unikernels no longer require that many boot parameters. Another option we want to investigate is the tender sharing a file as a read-only memory-mapped region from the host system to the guest system - but this is tricky considering all the targets above (especially virtio and muen).

Behind the scenes: reproducible builds


To provide a high level of assurance and trust, if you distribute binaries in 2021, you should have a recipe for how they can be reproduced in a bit-by-bit identical way. This way, different organisations can run builders and rebuilders, and a user can decide to only use a binary if it has been reproduced by multiple organisations in different jurisdictions using different physical machines - to avoid malware being embedded in the binary.

For a reproduction to be successful, you need to collect the checksums of all sources that contributed to the build, together with other things (host system packages, environment variables, etc.). Of course, you could record the entire OS and sources as a tarball (or file system snapshot) and distribute that - but this may be suboptimal in terms of bandwidth requirements.

With opam, we already have precise tracking which opam packages are used, and since opam 2.1 the opam switch export includes extra-files (patches) and records the VCS version. Based on this functionality, orb, an alternative command line application using the opam-client library, can be used to collect (a) the switch export, (b) host system packages, and (c) the environment variables. Only required environment variables are kept, all others are unset while conducting a build. The only required environment variables are PATH (sanitized with an allow list, /bin, /sbin, with /usr, /usr/local, and /opt prefixes), and HOME. To enable Debian's apt to install packages, DEBIAN_FRONTEND is set to noninteractive. The SWITCH_PATH is recorded to allow orb to use the same path during a rebuild. The SOURCE_DATE_EPOCH is set to enable tools that record a timestamp to use a static one. The OS* variables are only used for recording the host OS and version.
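
The effect of this sanitisation can be illustrated with env -i - a sketch of the idea, not orb's actual implementation (the variable values below are made up):

```shell
# run a command with only the variables an orb-style build would keep;
# everything else from the caller's environment is invisible to it
env -i \
  PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin \
  HOME="$HOME" \
  SOURCE_DATE_EPOCH=1618531200 \
  DEBIAN_FRONTEND=noninteractive \
  sh -c 'env | sort'
```

Compiler wrappers, locale settings, and other stray variables from the developer's shell thus cannot leak into the build, which removes a whole class of irreproducibility.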


The goal of reproducible builds can certainly be achieved in several ways, including storing all sources and used executables in a huge tarball (or docker container) that is preserved for rebuilders. The questions of a minimal trusted computing base, and of how such a container could be rebuilt from sources in a reproducible way, remain open.

The opam-repository is a community repository to which packages are released on a daily basis by many OCaml developers. Package dependencies usually only use lower bounds of other packages, and the continuous integration system of the opam repository takes care that upon API changes all reverse dependencies include the right upper bounds. Using the head commit of opam-repository usually leads to a working package universe.

For our MirageOS unikernels, we don't want to stay behind with ancient versions of libraries. That's why our automated builds run on a daily basis against the head commit of opam-repository. Since our unikernels are not part of the main opam repository (they include configuration information such as which target to use, e.g. hvt), and we occasionally use development versions of opam packages, we use the unikernel-repo as an overlay.

If no dependency got a new release, the resulting binary has the same checksum. If a dependency has a new release, this is picked up, and eventually the checksum changes.

Each unikernel (and non-unikernel) job (e.g. dns-primary) outputs some artifacts:

  • the binary image (in bin/: unikernel image or OS package)
  • the build-environment containing the environment variables used for this build
  • the system-packages containing all packages installed on the host system
  • the opam-switch that contains all opam packages, including git commit or tarball with checksum, and potentially extra patches, used for this build
  • a job script and console output

To reproduce such a build, you need the same operating system (OS, OS_FAMILY, OS_DISTRIBUTION, OS_VERSION in build-environment) and the same set of system packages; then you can run orb rebuild, which sets the environment variables and installs the opam packages from the opam-switch.
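
A successful reproduction then boils down to a bit-by-bit comparison of the two artifacts. A minimal sketch (the function name and example paths are made up for illustration):

```shell
# compare two independently built artifacts bit-by-bit
reproduced () {
  if cmp -s "$1" "$2"; then
    echo "reproduced"
  else
    echo "mismatch - compare build-environment, system-packages and opam-switch"
  fi
}

# e.g.: reproduced bin/unikernel.hvt rebuilt/bin/unikernel.hvt
```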

You can browse the different builds, and if there are checksum changes, you can browse to a diff between the opam switches to reason whether the checksum change was intentional (e.g. here the checksum of the unikernel changed when the x509 library was updated).


The opam reproducible build infrastructure is driven by orb and builder.

These tools are themselves reproducible, and built on a daily basis. The infrastructure executing the build jobs installs the most recent packages of orb and builder before conducting a build. This means that our build infrastructure is reproducible as well, and uses the latest code when it is released.


Conclusion


Thanks to NGI funding we now have reproducible MirageOS binary builds available at builds.robur.coop. The underlying infrastructure is reproducible, available for multiple platforms (Ubuntu using docker, FreeBSD using jails), and can be easily bootstrapped from source (once you have OCaml and opam working, getting builder and orb should be easy). All components are open source software, mostly with permissive licenses.


We also have an index over SHA-256 checksums of binaries - in case you find a running unikernel image and forgot which exact packages were used, you can do a reverse lookup.

We are aware that the web interface can be improved (PRs welcome). We will also work on the rebuilder setup and run some rebuilds.


Please reach out to us (at team AT robur DOT coop) if you have feedback and suggestions.


Deploying authoritative OCaml-DNS servers as MirageOS unikernels

Written by hannes
Classified under: mirageosprotocoldeployment
Published: 2019-12-23 (last updated: 2021-11-19)

Goal


Have your domain served by OCaml-DNS authoritative name servers. Data is stored in a git remote, and let's encrypt certificates can be requested via DNS. This software has been deployed for more than two years for several domains, such as nqsb.io and robur.coop. This post presents the authoritative server side, and the certificate library, of the OCaml-DNS implementation formerly known as µDNS.

Prerequisites


You need:

  • a domain you own, and the ability to delegate its name service to your own servers
  • two spare public IPv4 addresses (in different /24 networks) for your name servers
  • a git server or remote repository reachable via git over ssh
  • servers which support solo5 guests, with the corresponding tender installed
  • a computer with opam (>= 2.0.0) installed

Data preparation


Figure out a way to get the DNS entries of your domain in a "master file format", i.e. what bind uses.


This is a master file for the mirage domain. It defines $ORIGIN to avoid typing the domain name after each hostname (use @ if you need the domain name only; to refer to a hostname in a different domain, end it with a dot (.), i.e. ns2.foo.com.). The default time to live $TTL is an hour (3600 seconds). The zone contains a start of authority (SOA) record containing the nameserver, hostmaster, serial, refresh, retry, expiry, and minimum. Also, a single name server (NS) record ns1 is specified, with an accompanying address (A) record pointing to its IPv4 address.

git-repo> cat mirage
$ORIGIN mirage.
$TTL 3600
@	SOA	ns1	hostmaster	1	86400	7200	1048576	3600
@	NS	ns1
ns1     A       127.0.0.1
www	A	1.1.1.1
git-repo> git add mirage && git commit -m initial && git push

Installation


On your development machine, you need to install various OCaml packages. You don't need privileged access if common tools (C compiler, make, libgmp) are already installed. We assume you have opam installed.

Let's create a fresh switch for the DNS journey:

$ opam init
$ opam update
$ opam switch create udns 4.09.0
# waiting a bit, a fresh OCaml compiler is getting bootstrapped
$ eval `opam env` # sets some environment variables

The last command sets environment variables in your current shell session; please use the same shell for the following commands (or run eval $(opam env) in another shell and proceed there - the output of opam switch should point to udns).

Validation of our zonefile


First let's check that OCaml-DNS can parse our zonefile:

$ opam install dns-cli # installs ~/.opam/udns/bin/ozone and other binaries
$ ozone <git-repo>/mirage # see ozone --help
successfully checked zone

Great. Error reporting is not perfect, but line numbers are indicated (e.g. ozone: zone parse problem at line 3: syntax error); the lexer and parser are lex/yacc style (PRs welcome).

FWIW, ozone accepts --old <filename> to check whether an update from the old zone to the new one is fine. This can be used as a pre-commit hook in your git repository to avoid bad parse states in your name servers.
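
Such a pre-commit hook could look as follows - a sketch only: it assumes zone files live at the repository root, and that ozone --old takes the old file followed by the new one, as described above:

```shell
#!/bin/sh
# hypothetical .git/hooks/pre-commit: validate staged zone files with ozone
set -e
for zone in $(git diff --cached --name-only --diff-filter=ACM); do
  if git cat-file -e HEAD:"$zone" 2>/dev/null; then
    # the file existed before: also check the old -> new transition
    git show HEAD:"$zone" > /tmp/old-zone
    ozone --old /tmp/old-zone "$zone"
  else
    ozone "$zone"
  fi
done
```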

Getting the primary up


The next step is to compile the primary server and run it to serve the domain data. Since the git-via-ssh client is not yet released, we need to add a custom opam repository to this switch.

# git via ssh is not yet released, but this opam repository contains the branch information
$ opam repo add git-ssh git+https://github.com/roburio/git-ssh-dns-mirage3-repo.git
# get the `mirage` application via opam
$ opam install lwt mirage

# get the source code of the unikernels
$ git clone -b future https://github.com/roburio/unikernels.git
$ cd unikernels/primary-git

# let's build the server first as a unix application
$ mirage configure --prng fortuna # --no-depext if you have all system dependencies
$ make depend
$ make

# run it
$ ./primary_git
# starts a unix process which clones https://github.com/roburio/udns.git
# attempts to parse the data as zone files, and fails on parse error
$ ./primary_git --remote=https://my-public-git-repository
# this should fail with EACCES since the DNS server tries to listen on port 53,
# which requires a privileged user, i.e. su, sudo or doas
$ sudo ./primary_git --remote=https://my-public-git-repository
# leave it running, run the following programs in a different shell

# test it
$ host ns1.mirage 127.0.0.1
ns1.mirage has address 127.0.0.1
$ dig any mirage @127.0.0.1
# a DNS packet printout with all records available for mirage

That's exciting: a DNS server serving answers from a remote git repository.

Securing the git access with ssh


Let's authenticate the access using ssh, so we feel ready to push data there as well. The primary-git unikernel already includes an experimental ssh client; all we need to do is set up credentials - in the following, an RSA keypair and the server fingerprint.

# collect the RSA host key fingerprint
$ ssh-keyscan <git-server> > /tmp/git-server-public-keys
$ ssh-keygen -l -E sha256 -f /tmp/git-server-public-keys | grep RSA
2048 SHA256:a5kkkuo7MwTBkW+HDt4km0gGPUAX0y1bFcPMXKxBaD0 <git-server> (RSA)
# we're interested in the SHA256:yyy only

# generate a ssh keypair
$ awa_gen_key # installed by the make depend step above in ~/.opam/udns/bin
seed is pIKflD07VT2W9XpDvqntcmEW3OKlwZL62ak1EZ0m
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC5b2cSSkZ5/MAu7pM6iJLOaX9tJsfA8DB1RI34Zygw6FA0y8iisbqGCv6Z94ZxreGATwSVvrpqGo5p0rsKs+6gQnMCU1+sOC4PRlxy6XKgj0YXvAZcQuxwmVQlBHshuq0CraMK9FASupGrSO8/dW30Kqy1wmd/IrqW9J1Cnw+qf0C/VEhIbo7btlpzlYpJLuZboTvEk1h67lx1ZRw9bSPuLjj665yO8d0caVIkPp6vDX20EsgITdg+cFjWzVtOciy4ETLFiKkDnuzHzoQ4EL8bUtjN02UpvX2qankONywXhzYYqu65+edSpogx2TuWFDJFPHgcyO/ZIMoluXGNgQlP awa@awa.local
# please run your own awa_gen_key, don't use the numbers above

The public key is in standard OpenSSH format and needs to be added to the list of accepted keys on your server - the exact steps depend on your git server. If you're running your own with gitosis, add it as a new public key file and grant that key access to the data repository. If you use gitlab or github, you may want to create a new user account and add the generated key to it.

The private key is not displayed, only the seed required to re-generate it when using the same random number generator - in our case fortuna, implemented by nocrypto and used by both awa_gen_key and primary_git. The seed is provided as a command-line argument when starting primary_git:

# execute with git over ssh, authenticator from ssh-keyscan, seed from awa_gen_key
$ ./primary_git --authenticator=SHA256:a5kkkuo7MwTBkW+HDt4km0gGPUAX0y1bFcPMXKxBaD0 --seed=pIKflD07VT2W9XpDvqntcmEW3OKlwZL62ak1EZ0m --remote=ssh://git@<git-server>/repo-name.git
# started up, you can try the host and dig commands from above if you like

To wrap up, we now have a primary authoritative name server for our zone running as a Unix process, which clones a remote git repository via ssh on startup and then serves the zone.

Authenticated data updates


Our remote git repository is the source of truth. If you need to add a DNS entry to the zone, you git pull, edit the zone file (remember to increase the serial in the SOA line), run ozone, git commit, and push to the repository.

So, the primary_git needs to be informed of git pushes. This requires a communication channel from the git server (or somewhere else, e.g. your laptop) to the DNS server. I prefer in-protocol solutions over adding yet another protocol stack - no way my DNS server will talk HTTP REST.

The DNS protocol has an extension for notifications of zone changes (as a DNS packet), usually used between the primary and secondary servers. The primary_git accepts these notify requests (i.e. bends the standard slightly), and upon receipt pulls the remote git repository and serves the fresh zone files. Since a git pull may be rather expensive in terms of CPU cycles and network bandwidth, only authenticated notifications are accepted.

Another extension of the DNS protocol specifies authentication (DNS TSIG): transaction signatures on DNS packets, including a timestamp and fudge to avoid replay attacks. As key material, hmac secrets distributed to both communication endpoints are used.

To recap, the primary server is configured with command line parameters (for the remote repository url and ssh credentials), and serves data from zonefiles. If the secrets were provided via the command line, a restart would be necessary to add or remove keys. If put into the zonefile, they would be publicly served on request. So instead, we'll use another file, still in zone file format, named after the zone with a ._keys suffix: the mirage._keys file contains keys for the mirage zone. All files ending in ._keys are parsed with the normal parser, but put into an authentication store instead of the domain data store (which is served publicly).

For encoding hmac secrets into DNS zone file format, the DNSKEY format is used (designed for DNSsec). The bind software comes with dnssec-keygen and tsig-keygen to generate DNSKEY output: flags is 0, protocol is 3, and algorithm identifier for SHA256 is 163 (SHA384 164, SHA512 165). This is reused by the OCaml DNS library. The key material itself is base64 encoded.


Access control and naming of keys follows the DNS domain name hierarchy - a key has the form name._operation.domain, and has access granted to domain and all subdomains of it. Two operations are supported: update and transfer. In the future there may be a dedicated notify operation, for now we'll use update. The name part is ignored for the update operation.


Since we now embed secret information in the git repository, it is a good idea to restrict access to it, i.e. make it private and not publicly cloneable or viewable. Let's generate a first hmac secret and send a notify:

$ dd if=/dev/random bs=1 count=32 | b64encode -
begin-base64 644 -
kJJqipaQHQWqZL31Raar6uPnepGFIdtpjkXot9rv2xg=
====
[..]
git-repo> echo "personal._update.mirage. DNSKEY 0 3 163 kJJqipaQHQWqZL31Raar6uPnepGFIdtpjkXot9rv2xg=" > mirage._keys
git-repo> git add mirage._keys && git commit -m "add hmac secret" && git push

# now we need to restart the primary git to get the git repository with the key
$ ./primary_git --seed=... # arguments from above, remote git, host key fingerprint, private key seed

# now test that a notify results in a git pull
$ onotify 127.0.0.1 mirage --key=personal._update.mirage:SHA256:kJJqipaQHQWqZL31Raar6uPnepGFIdtpjkXot9rv2xg=
# onotify was installed by dns-cli in ~/.opam/udns/bin/onotify, see --help for options
# further changes to the hmac secrets don't require a restart anymore, a notify packet is sufficient :D

Ok, this onotify command could be set up as a git post-commit hook, or run manually after each git push.

Secondary


It's time to figure out how to integrate the secondary name server. An already existing bind or something else that accepts notifications and issues zone transfers with hmac-sha256 secrets should work out of the box. If you encounter interoperability issues, please get in touch with me.


The secondary subdirectory of the cloned unikernels repository is another unikernel that acts as a secondary server. Its only command line argument is a list of hmac secrets used for authenticating that the received data originates from the primary server. Data is initially transferred by a full zone transfer (AXFR); later updates (upon refresh timer or a notify request sent by the primary) use incremental transfer (IXFR). Zone transfer requests and data are again authenticated with transaction signatures.

A convenience of OCaml DNS is that transfer key names matter: they are of the form <primary-ip>.<secondary-ip>._transfer.domain, i.e. 1.1.1.1.2.2.2.2._transfer.mirage if the primary server is 1.1.1.1 and the secondary 2.2.2.2. Encoding the IP addresses in the name allows both parties to start the communication: the secondary starts by requesting a SOA for all domains for which keys are provided on the command line, and if an authoritative SOA answer is received, the AXFR is triggered. The primary server emits notification requests on startup, and then on every zone change (i.e. via git pull), to all secondary IP addresses in transfer keys present for the specific zone, in addition to the notifications to the NS records in the zone.

$ cd ../secondary
$ mirage configure --prng fortuna
# make depend should not be needed since all packages are already installed by the primary-git
$ make
$ ./secondary
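
The naming convention above can be sketched as a tiny helper (the function is illustrative, not part of OCaml DNS):

```shell
# derive the transfer key name from primary IP, secondary IP, and zone
transfer_key_name () {
  echo "$1.$2._transfer.$3"
}

transfer_key_name 10.0.42.2 10.0.42.3 mirage
# prints: 10.0.42.2.10.0.42.3._transfer.mirage
```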

IP addresses and routing


Both primary and secondary serve the data on the DNS port (53) on UDP and TCP. To run both on the same machine and bind them to different IP addresses, we'll use a layer 2 network (ethernet frames) with a host system software switch (bridge interface service), running the unikernels as virtual machines (or seccomp-sandboxed processes) via the solo5 backend. Using xen is possible as well. As IP address range we'll use 10.0.42.0/24, and the host system uses 10.0.42.1.

The primary git needs connectivity to the remote git repository, thus on a laptop in a private network we need network address translation (NAT) from the bridge where the unikernels speak to the Internet where the git repository resides.

# on FreeBSD:
# configure NAT with pf, you need to have forwarding enabled
$ sysctl net.inet.ip.forwarding=1
$ echo 'nat pass on wlan0 inet from 10.0.42.0/24 to any -> (wlan0)' >> /etc/pf.conf
$ service pf restart

# make tap interfaces UP on open()
$ sysctl net.link.tap.up_on_open=1

# bridge creation, naming, and IP setup
$ ifconfig bridge create
bridge0
$ ifconfig bridge0 name service
$ ifconfig service 10.0.42.1/24

# two tap interfaces for our unikernels
$ ifconfig tap create
tap0
$ ifconfig tap create
tap1
# add them to the bridge
$ ifconfig service addm tap0 addm tap1

Primary and secondary setup


Let's update our zone slightly to reflect the IP changes.

git-repo> cat mirage
$ORIGIN mirage.
$TTL 3600
@	SOA	ns1	hostmaster	2	86400	7200	1048576	3600
@	NS	ns1
@	NS	ns2
ns1     A       10.0.42.2
ns2	A	10.0.42.3

# we also need an additional transfer key
git-repo> cat mirage._keys
personal._update.mirage. DNSKEY 0 3 163 kJJqipaQHQWqZL31Raar6uPnepGFIdtpjkXot9rv2xg=
10.0.42.2.10.0.42.3._transfer.mirage. DNSKEY 0 3 163 cDK6sKyvlt8UBerZlmxuD84ih2KookJGDagJlLVNo20=
git-repo> git commit -m "updates" . && git push

Ok, the git repository is ready, now we need to compile the unikernels for the virtualisation target (see other targets for further information).

# back to primary
$ cd ../primary-git
$ mirage configure -t hvt --prng fortuna # or e.g. -t spt (and solo5-spt below)
# installs backend-specific opam packages, recompiles some
$ make depend
$ make
[...]
$ solo5-hvt --net:service=tap0 -- primary_git.hvt --ipv4=10.0.42.2/24 --ipv4-gateway=10.0.42.1 --seed=.. --authenticator=.. --remote=ssh+git://...
# should now run as a virtual machine (kvm, bhyve), and clone the git repository
$ dig any mirage @10.0.42.2
# should reply with the SOA and NS records, and also the name server address records in the additional section

# secondary
$ cd ../secondary
$ mirage configure -t hvt --prng fortuna
$ make
$ solo5-hvt --net:service=tap1 -- secondary.hvt --ipv4=10.0.42.3/24 --keys=10.0.42.2.10.0.42.3._transfer.mirage:SHA256:cDK6sKyvlt8UBerZlmxuD84ih2KookJGDagJlLVNo20=
# an ipv4-gateway is not needed in this setup, but in a real deployment later
# it should start up and transfer the mirage zone from the primary

$ dig any mirage @10.0.42.3
# should now output the same information as from 10.0.42.2

# testing an update and propagation
# edit mirage zone, add a new record and increment the serial number
git-repo> echo "foo A 127.0.0.1" >> mirage
git-repo> vi mirage   # <- increment the serial
git-repo> git commit -m 'add foo' . && git push
$ onotify 10.0.42.2 mirage --key=personal._update.mirage:SHA256:kJJqipaQHQWqZL31Raar6uPnepGFIdtpjkXot9rv2xg=

# now check that it worked
$ dig foo.mirage @10.0.42.2 # primary
$ dig foo.mirage @10.0.42.3 # secondary got notified and transferred the zone

You can also check the behaviour when restarting either of the VMs: whenever the primary is available, the zone is synchronised. If the primary is down, the secondary still serves the zone. If the secondary is started while the primary is down, it won't serve any data until the primary is online (the secondary polls periodically; the primary sends notifies on startup).

Dynamic data updates via DNS, pushed to git


DNS is a rich protocol, and it also has builtin updates supported by OCaml DNS, again authenticated with hmac-sha256 and shared secrets. Bind provides the command-line utility nsupdate to send these update packets; a simple oupdate unix utility is available as well (i.e. for integration of dynamic DNS clients). You know the drill: add a shared secret to the primary, git push, notify the primary, and voilà, we can update dynamically in-protocol. An update received by the primary this way will trigger a git push to the remote git repository, and notifications to the secondary servers as described above.

# being lazy, I reuse the key above
$ oupdate 10.0.42.2 personal._update.mirage:SHA256:kJJqipaQHQWqZL31Raar6uPnepGFIdtpjkXot9rv2xg= my-other.mirage 1.2.3.4

# let's observe the remote git
git-repo> git pull
# there should be a new commit generated by the primary
git-repo> git log

# test it, should return 1.2.3.4
$ dig my-other.mirage @10.0.42.2
$ dig my-other.mirage @10.0.42.3

So we can deploy further oupdate (or nsupdate) clients, distribute hmac secrets, and have the DNS zone updated. The source of truth is still the git repository, which the primary-git pushes to. Merge conflicts and timing of pushes are not yet dealt with; they are unlikely to happen since the primary is notified on pushes and should have up-to-date data in storage. Sorry, I'm unsure about the error semantics - try it yourself.

Let's encrypt!


Let's encrypt is a certificate authority (CA) whose certificate is shipped as a trust anchor in web browsers. They specified a protocol for an automated certificate management environment (ACME), used to get X.509 certificates for your services. In the protocol, a certificate signing request (public key and hostname) is sent to the let's encrypt servers, which send back a challenge to prove ownership of the hostnames. One widely-used way to solve this challenge is running a web server; another is to serve it as a text record from the authoritative DNS server.

Since I avoid persistent storage when possible, and also don't want to integrate an HTTP client stack in the primary server, I developed a third unikernel that acts as a (hidden) secondary server, performs the tedious HTTP communication with the let's encrypt servers, and stores all data in the public DNS zone.

For the encoding of certificates, the DANE working group specified TLSA records in DNS. They are quadruples of usage, selector, matching type, and ASN.1 DER-encoded material. We set usage to 3 (domain-issued certificate), matching type to 0 (no hash), and selector to 0 (full certificate) or 255 (private usage, for certificate signing requests). The interaction is as follows:

  1. Primary, secondary, and let's encrypt unikernels are running
  2. A service (ocertify, unikernels/certificate, or the dns-certify.mirage library) demands a TLS certificate, and has a hmac-secret for the primary DNS
  3. The service generates a certificate signing request with the desired hostname(s), and performs an nsupdate with TLSA 255
  4. The primary accepts the update, pushes the new zone to git, and sends notifies to the secondary and let's encrypt unikernels, which (incrementally) transfer the zone
  5. The let's encrypt unikernel notices, while transferring the zone, a signing request without a certificate, and starts the HTTP interaction with let's encrypt
  6. The let's encrypt unikernel solves the challenge, and sends the response as an update of a TXT record to the primary nameserver
  7. The primary pushes the TXT record to git, and notifies the secondaries (which transfer the zone)
  8. The let's encrypt servers request the TXT record from either or both authoritative name servers
  9. The let's encrypt unikernel polls for the issued certificate and sends an update to the primary (TLSA 0)
  10. The primary pushes the certificate to git, and notifies the secondaries (which transfer the zone)
  11. The service polls the TLSA records for the hostname, and uses the certificate upon retrieval

Note that neither the signing request nor the certificate contain private key material, thus it is fine to serve them publicly. Please also note that the service first polls DNS for a certificate for the hostname; if a valid (start and end date) certificate using the same public key is present, this certificate is used and steps 3-10 are not executed.

The let's encrypt unikernel does not serve anything; it is a reactive system which acts upon notification from the primary. Thus, it can be executed in a private address space (behind a NAT). Since the OCaml DNS server stack needs to push notifications to it, it preserves all incoming signed SOA requests as candidates for notifications on update. The let's encrypt unikernel ensures it always has a connection to the primary to receive notifications.

# getting let's encrypt up and running
$ cd ../lets-encrypt
$ mirage configure -t hvt --prng fortuna
$ make depend
$ make

# run it
$ solo5-hvt --net:service=tap2 -- letsencrypt.hvt --keys=...

# test it
$ ocertify 10.0.42.2 foo.mirage

For actual testing with the let's encrypt servers you need to have the primary and secondary deployed on your remote hosts, and your domain needs to be delegated to these servers. Good luck. And ensure you have a backup of your git repository.

As fine print: while this tutorial was about the mirage zone, you can stick any number of zones into the git repository. If you use a _keys file (without any domain prefix), you can configure hmac secrets valid for all zones, i.e. something to use in your let's encrypt unikernel and secondary unikernel. Dynamic addition of zones is supported: just create a new zonefile and notify the primary; the secondary will be notified and pick it up. The primary responds to a signed SOA for the root zone (i.e. requested by the secondary) with a SOA response (not authoritative), and additionally notifications for all domains of the primary.

Conclusion and thanks


This tutorial presented how to use the OCaml DNS based unikernels to run authoritative name servers for your domain, using a git repository as the source of truth, dynamic authenticated updates, and let's encrypt certificate issuing.


There are further steps to take, such as monitoring -- have a look at the monitoring branch of the opam repository above, and the future-robur branch of the unikernels repository above, which use a second network interface for reporting syslog and metrics to telegraf / influx / grafana. Some DNS features are still missing, most prominently DNSSec.


I'd like to thank all the people involved in this software stack; it would not exist without key components including git, irmin 2.0, nocrypto, awa-ssh, cohttp, solo5, mirage, ocaml-letsencrypt, and more.

If you want to support our work on MirageOS unikernels, please donate to robur. I'm interested in feedback, either via twitter, hannesm@mastodon.social or via eMail.


Cryptography updates in OCaml and MirageOS

Written by hannes
Classified under: mirageossecuritytls
Published: 2021-04-23 (last updated: 2021-11-19)

Introduction


TL;DR: mirage-crypto-ec, together with x509 0.12.0 and tls 0.13.0, provides fast and secure elliptic curve support in OCaml and MirageOS - using the verified fiat-crypto stack (Coq to OCaml to an executable which generates C code that is interfaced by OCaml). In x509, a long-standing issue (countryName encoding) was fixed, and the archive (PKCS 12) format is now supported, in addition to EC keys. In tls, ECDH key exchanges are supported, as well as ECDSA and EdDSA certificates.

Elliptic curve cryptography


Since May 2020, our OCaml-TLS stack supports TLS 1.3 (since tls version 0.12.0 on opam).


TLS 1.3 requires elliptic curve cryptography - which was not available in mirage-crypto (the maintained fork of nocrypto).


There are two major uses of elliptic curves: key exchange (ECDH) for establishing a shared secret over an insecure channel, and digital signature (ECDSA) for authentication, integrity, and non-repudiation. (Please note that the construction of digital signatures on Edwards curves (Curve25519, Ed448) is called EdDSA instead of ECDSA.)


Elliptic curve cryptography is vulnerable to various timing attacks - have a read of the overview article on ECDSA. When implementing elliptic curve cryptography, it is best to avoid these known attacks. Fortunately, there are some projects which address these issues by construction.

In addition, to use the code in MirageOS, it should be boring C code: no heap allocations, and only a small number of C library functions used -- the code needs to compile in an environment with nolibc.

Two projects set out to solve the issue from the ground up: fiat-crypto and hacl-star. Their approach is to use a proof system (Coq or F* respectively) to verify that the code executes in constant time, independent of the input data. Both projects emit C code as the output of their proof systems.

For our initial TLS 1.3 stack, Clément, Nathan and Etienne developed fiat-p256 and hacl_x25519. Both were one-shot interfaces for a narrow use case (ECDH for NIST P-256 and X25519); they worked well for their purpose, and allowed us to gather some experience on the development side.

Changed requirements


Revisiting our cryptography stack from the elliptic curve perspective had several motivations: on the one hand, the customer project NetHSM asked about the feasibility of ECDSA/EdDSA for various elliptic curves; on the other hand, DNSSEC uses elliptic curve cryptography (ECDSA), and WireGuard relies on elliptic curve cryptography as well. The number of X.509 certificates using elliptic curves is increasing, and we don't want to leave our TLS stack in a state where it can barely talk to a growing number of services on the Internet.

Looking at hacl-star, their support is limited to P-256 and Curve25519; any new curve requires writing F*. Another issue with hacl-star is C code quality: their C code compiles neither with older C compilers (found on Oracle Linux 7 / CentOS 7), nor with all warnings enabled (> 150 are generated). We consider the C compiler a useful resource for finding undefined behaviour (and other problems), and when shipping C code we ensure that it compiles with -Wall -Wextra -Wpedantic --std=c99 -Werror. The hacl project ships a bunch of header files and helper functions to work on all platforms, which is a clunky ifdef desert. The hacl approach is to generate a whole algorithm solution: from arithmetic primitives, through group operations, up to the cryptographic protocol - everything included.

In contrast, fiat-crypto is a Coq development which, as part of compilation (proof verification), generates executables (via OCaml code extraction from Coq). These executables are used to generate modular arithmetic (as C code) given a curve description. The generated C code is highly portable and platform-independent (the word size is taken as input) - it only requires <stdint.h>, and compiles with all warnings enabled (once a minor PR got merged). Supporting a new curve is simple: generate the arithmetic code using fiat-crypto with the new curve description. The downside is that the group operations and the protocol need to be implemented elsewhere (and are not part of the proven code) - fortunately this is pretty straightforward to do, especially in high-level languages.

Working with fiat-crypto


As mentioned, our initial fiat-p256 binding provided ECDH for the NIST P-256 curve. Also, BoringSSL uses fiat-crypto for ECDH, and developed the code for group operations and cryptographic protocol on top of it.


The work needed was (a) ECDSA support and (b) support for more curves (let's focus on the NIST curves). For ECDSA, the algorithm requires modular arithmetic modulo the group order (in addition to the field prime). We generate these primitives with fiat-crypto (named npYYY_AA) - that required a small fix in decoding hex. Fiat-crypto also provides inversion since late October 2020, paper - which allowed us to reduce the amount of code taken from BoringSSL. The ECDSA protocol was easy to implement in OCaml using the generated arithmetic.
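The inversion primitive can be illustrated with a naive modular inverse via the extended Euclidean algorithm - a variable-time sketch of mine, unlike the constant-time verified fiat-crypto output:

```ocaml
(* Naive modular inverse via the extended Euclidean algorithm. The
   fiat-crypto generated code computes inversion in constant time;
   this variable-time version only illustrates the arithmetic. *)
let invmod a m =
  let rec ext_gcd a b =
    if b = 0 then (a, 1, 0)
    else
      let g, x, y = ext_gcd b (a mod b) in
      (g, y, x - (a / b) * y)
  in
  let g, x, _ = ext_gcd (a mod m) m in
  if g <> 1 then None else Some (((x mod m) + m) mod m)

(* 3 * 5 = 15 = 1 mod 7 *)
let () = assert (invmod 3 7 = Some 5)
```

In ECDSA, such an inverse (of the nonce, modulo the group order) is needed for every signature, which is why having it inside the proven code base matters.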

Addressing the issue of more curves was also easy to achieve: the C code (group operations) consists of macros that are instantiated for each curve, and the OCaml code consists of functors that are applied to each curve description.

Thanks to the test vectors (as structured data) from wycheproof (and again thanks to Etienne, Nathan, and Clément for their OCaml code decoding them), I feel confident that our elliptic curve code works as desired.

What was left were X25519 and Ed25519 - dropping the hacl dependency entirely felt appealing (less C code to maintain from fewer projects). This turned out to require more C code, which we took from BoringSSL. It may be desirable to reduce the imported C code, or to wait until a project on top of fiat-crypto providing proven cryptographic protocols is in a usable state.

To avoid performance degradation, I distilled some X25519 benchmarks; it turns out that fiat-crypto and hacl performance are very similar.

Achievements


The new opam package mirage-crypto-ec has been released. It includes the C code generated by fiat-crypto (including inversion), point operations from BoringSSL, and some OCaml code for invoking these functions, doing bounds checks, and verifying that points are on the curve. The OCaml code consists of functors that take a curve description (consisting of parameters, C function names, and the byte length of values) and provide Diffie-Hellman (Dh) and digital signature algorithm (Dsa) modules. The nonce for ECDSA is computed deterministically, as suggested by RFC 6979, to avoid private key leakage.
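A hypothetical sketch of that functor shape - the names are illustrative, not the actual mirage-crypto-ec interface:

```ocaml
(* Hypothetical sketch of the functor shape: a curve description module
   is turned into protocol modules. The names are illustrative, not the
   actual mirage-crypto-ec interface. *)
module type Curve = sig
  val name : string         (* e.g. "p256" *)
  val byte_length : int     (* byte length of field elements *)
end

module Make_dh (C : Curve) = struct
  (* the real library calls the fiat-crypto generated C primitives
     here; this sketch only exposes the shape *)
  let secret_byte_length = C.byte_length
  let description = "ECDH over " ^ C.name
end

module P256 = struct
  let name = "p256"
  let byte_length = 32
end

module P256_dh = Make_dh (P256)
```

The appeal of this design is that adding a curve means writing one small description module and applying the existing functors, mirroring how the C macros are instantiated per curve.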

The development included support for the NIST curves, removal of blinding (since we use operations that are verified to be constant-time), missing length checks (reported by Greg), curve25519 support, a fix for signatures that do not span the entire byte size (discovered while adapting X.509), and a fix for X25519 when the input has an offset <> 0. It works on x86 and arm, both 32 and 64 bit (checked by CI). The development was partially sponsored by Nitrokey.

What is left to do, apart from further security reviews, are performance improvements, Ed448/X448 support, and investigating deterministic k for P-521. Pull requests are welcome.

When you use the code and encounter any issues, please report them.

Layer up - X.509 now with ECDSA / EdDSA and PKCS 12 support, and a long-standing issue fixed


With the sign and verify primitives in place, the next step is to interoperate with other tools that generate and use these public and private keys. This consists of serialisation to and deserialisation from common data formats (ASN.1 DER and PEM encoding), and support for handling X.509 certificates with elliptic curve keys. Since version 0.12.0, X.509 supports EC private and public keys, including certificate validation and issuance.

Releasing X.509 also meant going through the issue tracker and attempting to solve the existing issues. This time, the "country name is encoded as UTF8String, while the RFC demands PrintableString" issue, filed more than 5 years ago by Reynir, re-reported by Petter in early 2017, and again by Vadim in late 2020, was fixed by Vadim.

Another long-standing pull request was support for PKCS 12, the archive format for certificate and private key bundles. This has been developed and merged. PKCS 12 is a widely used and old format (e.g. when importing / exporting cryptographic material in your browser, used by OpenVPN, ...). Its specification uses RC2 and 3DES (see this nice article), which are the default algorithms used by openssl pkcs12.


One more layer up - TLS


In TLS we are finally able to use ECDSA (and EdDSA) certificates and private keys. This resulted in a slightly more complex configuration - the constraints between supported groups, signature algorithms, ciphersuites, and certificates are intricate:

The ciphersuite (in TLS before 1.3) specifies which key exchange mechanism to use, but also which signature algorithm to use (RSA/ECDSA). The supported groups client hello extension specifies which elliptic curves are supported by the client. The signature algorithms hello extension (TLS 1.2 and above) specifies the supported signature algorithms. In the end, at load time the TLS configuration is validated, and groups, ciphersuites, and signature algorithms are condensed depending on the configured server certificates. At session initiation time, once the client reports what it supports, these parameters are further cut down to eventually find suitable cryptographic parameters for the session.
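The condensing step can be sketched as an order-preserving intersection - illustrative names, not the ocaml-tls API:

```ocaml
(* Sketch of the "condensing" step: intersect what the configuration
   supports with what the client advertises, preserving the server's
   preference order. Illustrative names, not the ocaml-tls API. *)
let condense ~configured ~client_hello =
  List.filter (fun x -> List.mem x client_hello) configured

let () =
  assert (condense ~configured:[ "x25519"; "p256"; "p384" ]
                   ~client_hello:[ "p256"; "x25519" ]
          = [ "x25519"; "p256" ])
```

The same shape applies successively to groups, ciphersuites, and signature algorithms; if any of the intersections ends up empty, no handshake parameters exist and the session fails.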

From the user perspective: earlier, the certificate bundle and private key were a pair of X509.Certificate.t list and Mirage_crypto_pk.Rsa.priv; now the second component is an X509.Private_key.t - all provided constructors have been updated (notably X509_lwt.private_of_pems and Tls_mirage.X509.certificate).

Finally, conduit and mirage


Thanks to Romain, conduit* 4.0.0 was released which supports the modified API of X.509 and TLS. Romain also developed patches and released mirage 3.10.3 which supports the above mentioned work.


Conclusion


Elliptic curve cryptography is now available in OCaml using verified cryptographic primitives from the fiat-crypto project - opam install mirage-crypto-ec. X.509 since 0.12.0, TLS since 0.13.0, and MirageOS since 3.10.3 support this new development, which gives rise to smaller EC keys. Our old bindings, fiat-p256 and hacl_x25519, have been archived and will no longer be maintained.

Thanks to everyone involved on this journey: reporting issues, sponsoring parts of the work, helping with integration, developing initial prototypes, and keeping me motivated to continue until the release was done.

In the future, it may be possible to remove zarith and gmp from the dependency chain, and provide EC-only TLS servers and clients for MirageOS. The benefit will be much less C code (libgmp-freestanding.a is 1.5MB in size) in our trusted code base.


Another potential project that is very close now is a certificate authority developed in MirageOS - now that EC keys, PKCS 12, revocation lists, ... are implemented.


Footer


If you want to support our work on MirageOS unikernels, please donate to robur. I'm interested in feedback, either via twitter, hannesm@mastodon.social or via eMail.

Configuration DSL step-by-step

Configuration DSL step-by-step

Written by hannes
Classified under: mirageos, background
Published: 2016-05-10 (last updated: 2021-11-19)

Sorry for being late again with this article; I had other ones planned, but am not yet satisfied with their content and code, so they will have to wait another week.

MirageOS configuration


As described in an earlier post, MirageOS is a library operating system which generates single address space custom kernels (so-called unikernels) for each application. The application code is (mostly) independent of the backend used. To achieve this, the language which expresses the configuration of a MirageOS unikernel is rather complex, and has to deal with package dependencies, setup of layers (a network stack starting at the (virtual) ethernet device, or sockets), logging, and tracing.

The abstraction over the concrete implementation of e.g. the network stack is done by providing a module signature in the mirage-types package. The socket-based network stack, the tap device based network stack, and the Xen virtual network device based network stack implement this signature (depending on other module signatures). The unikernel contains code which applies those dependent modules to instantiate a custom-tailored network stack for the specific configuration. A developer should only describe what their requirements are; the user who wants to deploy it should provide the concrete configuration. The developer should not need to manually instantiate the network stack for all possible configurations - this is what the mirage tool should embed.

Initially, MirageOS contained an ad-hoc system which relied on concatenating strings of OCaml code. This turned out to be error-prone. In 2015 Drup developed Functoria, a domain-specific language (DSL) to organise functor applications, primarily for MirageOS. It was introduced in a blog post. It is not limited to MirageOS (although this is the primary user right now).

Functoria has been included in MirageOS since its 2.7.0 release at the end of February 2016. Functoria provides support for command line arguments which can then either be passed at configuration time or at boot time to the unikernel (such as IP address configuration) using the cmdliner library underneath (and includes dynamic man pages, help, sensible command line parsing, and even visualisation (mirage describe) of the configuration and data dependencies).


I won't go into details about command line arguments here; please have a look at the functoria blog post in case you're interested. Instead, I'll describe how to define a Functoria device which inserts content as code at configuration time into a MirageOS unikernel (running here, source). Using this approach, no external data (using crunch or a file system image) is needed, while the content can still be modified using markdown. Also, no markdown to HTML converter is needed at runtime; this step is done entirely at compile time (the result is a small (still too large) unikernel, 4.6MB).

Unikernel


Similar to my nqsb.io website post, this unikernel only has a single resource and thus does not need to do any parsing (or even call read). The main function is start:

let start stack _ =
+  S.listen_tcpv4 stack ~port:80 (serve rendered) ;
+  S.listen stack
+
+

Where S is a V1_LWT.STACKV4, a complete TCP/IP stack for IPv4. The functions we are using are listen_tcpv4, which needs a stack, a port, and a callback (and should rather be called register_tcp_callback), and listen, which polls for incoming frames.

Our callback is serve rendered, where serve is defined as:

let serve data tcp =
+  TCP.writev tcp [ header; data ] >>= fun _ ->
+  TCP.close tcp
+
+

Upon an incoming TCP connection, the list consisting of header ; data is written to the connection, which is subsequently closed.


The function header is very similar to our previous one, splicing a proper HTTP header together:

let http_header ~status xs =
+  let headers = List.map (fun (k, v) -> k ^ ": " ^ v) xs in
+  let lines   = status :: headers @ [ "\r\n" ] in
+  Cstruct.of_string (String.concat "\r\n" lines)
+
+let header = http_header
+    ~status:"HTTP/1.1 200 OK"
+    [ ("Content-Type", "text/html; charset=UTF-8") ;
+      ("Connection", "close") ]
+
+

And the rendered function consists of some hardcoded HTML, and references to two other modules, Style.data and Content.data:

let rendered =
+  Cstruct.of_string
+    (String.concat "" [
+        "<html><head>" ;
+        "<title>1st MirageOS hackathon: 11-16th March 2016, Marrakech, Morocco</title>" ;
+        "<style>" ; Style.data ; "</style>" ;
+        "</head>" ;
+        "<body><div id=\"content\">" ;
+        Content.data ;
+        "</div></body></html>" ])
+
+

This puts together the pieces we need for a simple HTML site. This unikernel does not have any external dependencies, assuming that the mirage toolchain, the types, and the network implementation are already provided (the latter two are implicitly added by the mirage tool depending on the configuration; the first you'll have to install manually: opam install mirage).

But wait, where do Style and Content come from? There are no ml modules in the repository. Instead, there is a content.md and style.css in the data subdirectory.


Configuration


We use the built-in configuration-time magic of functoria to translate these into OCaml modules, in such a way that our unikernel does not need to embed code to render markdown to HTML, nor carry along a markdown data file.

Inside of config.ml, let's look again at the bottom:

let () =
+  register "marrakech2016" [
+    foreign
+      ~deps:[abstract config_shell]
+      "Unikernel.Main"
+      ( stackv4 @-> job )
+      $ net
+  ]
+
+

The function register is provided by the mirage tool; it will execute the list of jobs using the given name. To construct a job, we use the foreign combinator, which might have dependencies (here, a list with the single element config_shell, explained later, using the abstract combinator), the name of the main functor (Unikernel.Main), a typ (here constructed using the @-> combinator, from a stackv4 to a job), and this is applied (using the $ combinator) to net (an actual implementation of stackv4).
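The typing discipline behind @-> and $ can be modelled with a small GADT; this is a toy reconstruction of the idea, not functoria's actual implementation:

```ocaml
(* Toy model of the typed DSL: 'a typ describes the shape of a device,
   'a impl implements that shape, and $ applies a functor-like impl to
   an argument, checked by the type system. A reconstruction of the
   idea, not functoria's implementation. *)
type _ typ =
  | Base : string -> 'a typ
  | Arrow : 'a typ * 'b typ -> ('a -> 'b) typ

let ( @-> ) a b = Arrow (a, b)

type _ impl =
  | Impl : string * 'a typ -> 'a impl
  | App : ('a -> 'b) impl * 'a impl -> 'b impl

let ( $ ) f x = App (f, x)

(* type markers mirroring stackv4 and job *)
type stackv4
type job

let stackv4 : stackv4 typ = Base "stackv4"
let job : job typ = Base "job"

let net : stackv4 impl = Impl ("socket_stackv4", stackv4)
let main : (stackv4 -> job) impl = Impl ("Unikernel.Main", stackv4 @-> job)

(* a well-typed application: main $ net is a job impl, as register
   expects; an ill-typed one like net $ main is rejected statically *)
let unikernel : job impl = main $ net
```

The point of the encoding is that a configuration which applies a device to an argument of the wrong shape simply does not type-check, so errors surface at configure time rather than in generated code.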

The net implementation is as follows:
let address addr nm gw =
+  let f = Ipaddr.V4.of_string_exn in
+  { address = f addr ; netmask = f nm ; gateways = [f gw] }
+
+let server = address "198.167.222.204" "255.255.255.0" "198.167.222.1"
+
+let net =
+  if_impl Key.is_xen
+    (direct_stackv4_with_static_ipv4 default_console tap0 server)
+    (socket_stackv4 default_console [Ipaddr.V4.any])
+
+

Depending on whether we're running on unix or xen, we use either a socket stack (for testing) or the concrete IP configuration for deployment (using if_impl and is_xen from our DSL).

So far nothing too surprising, only some combinators of the functoria DSL which let us describe the possible configuration options.


Let us look into config_shell, which embeds the markdown and CSS into OCaml modules at configuration time:

type sh = ShellConfig
+
+let config_shell = impl @@ object
+    inherit base_configurable
+
+    method configure i =
+      let open Functoria_app.Cmd in
+      let (>>=) = Rresult.(>>=) in
+      let dir = Info.root i in
+      run "echo 'let data = {___|' > style.ml" >>= fun () ->
+      run "cat data/style.css >> style.ml" >>= fun () ->
+      run "echo '|___}' >> style.ml" >>= fun () ->
+      run "echo 'let data = {___|' > content.ml" >>= fun () ->
+      run "omd data/content.md >> content.ml" >>= fun () ->
+      run "echo '|___}' >> content.ml"
+
+    method clean i = Functoria_app.Cmd.run "rm -f style.ml content.ml"
+
+    method module_name = "Functoria_runtime"
+    method name = "shell_config"
+    method ty = Type ShellConfig
+end
+
+

Functoria uses classes internally, and we extend the base_configurable class, which extends configurable with some sensible defaults.


The important bits are what actually happens during configure and clean: execution of some shell commands (echo, omd, and rm) using the functoria application builder interface. Some information is as well exposed via the Functoria_info module.


Wrapup


We walked through the configuration magic of MirageOS, a domain-specific language designed for MirageOS demands. We can run arbitrary commands at compile time, and do not need to escape into external files, such as Makefiles or shell scripts, but can embed them in our config.ml.

I'm interested in feedback, either via twitter or via eMail.

Other updates in the MirageOS ecosystem

Jackline, a secure terminal-based XMPP client

Jackline, a secure terminal-based XMPP client

Written by hannes
Classified under: UI, security
Published: 2017-01-30 (last updated: 2021-09-08)

screenshot


Back in 2014, when we implemented TLS in OCaml, at some point I was bored with TLS. I usually need at least two projects (but not more than 5) at the same time, to procrastinate the one I should do with the other one - it is always more fun to do what you're not supposed to do. I started to implement another security protocol (Off-the-record, which resulted in ocaml-otr) on my own, applying what I had learned while co-developing TLS with David. I was eager to actually deploy our TLS stack: using it with a web server (see this post) is fun, but that only exercises one half of the state machine (the server side) and usually short-lived connections (which discover lots of issues with connection establishment) - not the client side, and no long-living connections (which may discover other kinds of issues, such as leaking memory).

To use the stack, I needed to find an application I use on a daily basis (so that I'm eager to get it up and running again if it fails to work). A mail client or web client is just a bit too big for a spare-time project (maybe not ;). Another communication protocol I use daily is jabber, or XMPP. Back then I used mcabber inside a terminal, which is a curses-based client written in C.

I started to develop jackline (the first commit is from 13th November 2014), a terminal-based XMPP client in OCaml. This is a report on a work-in-progress (unreleased, but publicly available!) software project. I'm not happy with the code base, but nevertheless consider it a successful project: dozens of friends are using it (no exact numbers), I got contributions from other people (more than 25 commits from more than 8 individuals), and I use it on a daily basis for lots of personal communication.

What is XMPP?


The eXtensible Messaging and Presence Protocol (previously known as Jabber) describes (these days as RFC 6120) a communication protocol based on XML fragments, which enables near real-time exchange of structured (and extensible) data between two network entities.

The landscape of instant messaging used to contain ICQ, AOL instant messenger, and MSN messenger. In 1999, people defined a completely open protocol standard, then named Jabber; since 2011 there are official RFCs. It is a federated (similar to eMail) near real-time extensible messaging system (including presence information) used for instant messaging. Extensions include end-to-end encryption, multi-user chat, audio transport, ... Unicode support is built in; everything is UTF8 encoded.

There are various open jabber servers where people can register accounts, as well as closed ones. Google Talk used to federate (until 2014) with XMPP, and Facebook chat used to be based on XMPP. Those big companies wanted something "more usable" (where they're more in control, with reliable message delivery via caching in the server and mandatory delivery receipts, and multiple devices all getting the same messages), and thus moved away from the open standard.

XMPP Security


Authentication is done via a TLS channel (where your client should authenticate the server), and via SASL, through which the server authenticates your client. I investigated in 2008 (in German) which clients and servers use which authentication methods (I hope the state of certificate verification has improved in the last decade).

End-to-end encryption is achievable using OpenPGP (rarely used in my group of friends) via XMPP, or Off-the-record, which was pioneered over XMPP, and is still in wide use - it gave rise to forward secrecy: if your long-term (stored on disk) asymmetric keys get seized or stolen, they are not sufficient to decrypt recorded sessions (you can't derive the session key from the asymmetric keys) -- but the encrypted channel is still authenticated (once you verified the public key via a different channel or a shared secret, using the Socialist millionaires problem).

OTR does not support offline messages (the session keys may already be destroyed by the time the communication partner reconnects and receives the stored messages), and thus recently omemo was developed. Other messaging protocols (Signal, Threema) are not really open and support no federation, but have good support for group encryption and offline messaging. (There is a nice overview of secure messaging and threats.)

There is (AFAIK) no encrypted group messaging via XMPP; also, the XMPP server holds lots of sensitive data: your address book (buddy list), together with offline messages, nicknames you gave to your buddies, subscription information, and information every time you connect (research on privacy-preserving presence protocols has been done, but is not widely used AFAIK, e.g. DP5).

XMPP client landscape


See wikipedia for an extensive comparison (which does not mention jackline :P).

A more opinionated analysis is that you were free to choose between C - where all code has to do manual memory management and bounds checking - with ncurses (or GTK) and OpenSSL (or GnuTLS) using libpurple (or some other barely maintained library which tries to unify all instant messaging protocols); or Python - where you barely know upfront what it will do at runtime - with GTK and some OpenSSL; or even JavaScript - where external scripts can dynamically modify the prototype of everything at runtime (and thus modify code arbitrarily, violating invariants) - calling out to C libraries (NSS, maybe libpurple, who knows?).

Due to the complex APIs of transport layer security, certificate verification is still not always done correctly (that's just one example, you'll find more) - and even if it is, it may not allow custom trust anchors or certificate fingerprint based verification - which are crucial for federated operation without a centralised trust authority.

Large old code bases usually gather dust and suffer bitrot - and if you add patch after patch from random people on the Internet, you have to deal with the most common bug: insufficient checking of input (or output data, if you encrypt only the plain body, but not the marked-up one). In some programming languages this easily leads to execution of remote code; other programming languages steal this work from programmers by deploying automated memory management (finally machines take our work away! :)) - also named garbage collection, often used together with automated bounds checking. This doesn't mean that you're safe - there are still logical flaws, integer overflows (and funny things which happen at resource starvation), etc.

Goals and non-goals


My upfront motivation was to write and use an XMPP client tailored to my needs. I personally don't use many graphical applications (coding in emacs, mail via thunderbird, firefox, mplayer, mupdf), but stick mostly to terminal applications. I additionally don't use any terminal multiplexer (saw too many active screen sessions on remote servers where people left root shells open).

The goal was from the beginning to write a "minimalistic graphical user interface for a secure (fail hard) and trustworthy XMPP client". By fail hard I mean exactly that: if it can't authenticate the server, don't send the password. If there is no end-to-end encrypted session, don't send the message.

As a user of (unreleased) software, there is a single property which I like to preserve: continue to support all data written to persistent storage. Even during large refactorings, ensure that data on the user's disk will still be correctly parsed. There is nothing worse than having to manually configure an application after an update. The solution is straightforward: put a version in every file you write, and keep readers for all versions ever written around. My favourite marshalling format (human readable, structured) is still S-expressions - luckily there is sexplib in OCaml for handling these. Additionally, once the initial configuration file has been created (e.g. interactively with the application), the application does no further writes to the config file. Users can make arbitrary modifications to the file and restart the application (and they can make changes while the application is running).
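The version-in-every-file idea can be sketched with a minimal S-expression type (sexplib's Sexp.t has the same shape) - the fields and versions here are illustrative, not jackline's actual on-disk format:

```ocaml
(* Versioned persistent data, sketched with a minimal S-expression type
   (sexplib's Sexp.t has the same shape). Fields and versions are
   illustrative, not jackline's actual on-disk format. *)
type sexp = Atom of string | List of sexp list

type config = { jid : string ; log_to_disk : bool }

(* keep a reader for every version ever written: version 1 predates the
   log_to_disk flag, so reading it fills in the old default *)
let config_of_sexp = function
  | List [ Atom "version" ; Atom "1" ; Atom jid ] ->
      Some { jid ; log_to_disk = false }
  | List [ Atom "version" ; Atom "2" ; Atom jid ; Atom log ] ->
      Some { jid ; log_to_disk = (log = "true") }
  | _ -> None
```

Writers always emit the newest version; readers for old versions are never deleted, so a years-old config file still loads after any refactoring.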

I also appreciate another property of software: don't ever transmit any data or open a network connection unless initiated by the user (this means no autoconnect on startup, and no user-is-typing indications). Don't be obviously fingerprintable. A more mainstream demand is surely that software should not phone home - that's why I don't know how many people are using jackline; reports based on friends' opinions suggest hundreds of users, and I personally know at least several dozen.

As written earlier, I often take a look at the trusted computing base of a computer system. Jackline's trusted computing base consists of the client software itself, its OCaml dependencies (including OTR, TLS, tty library, ...), then the OCaml runtime system, which uses some parts of libc, and a whole UNIX kernel underneath -- one goal is to have jackline running as a unikernel (then you connect via SSH or telnet and TLS).

There are only a few features I need in an XMPP client: single account, strict validation, delivery receipts, notification callback, being able to deal with friends logged in multiple times with wrongly set priorities - and end-to-end encryption. I don't need inline HTML, avatar images, my currently running music, leaking timezone information, etc. I explicitly don't want to import any private key material from other clients and libraries, because I want to ensure that the key was generated by a good random number generator (read David's blog article on randomness and entropy).

The security story is crucial: always do strict certificate validation, fail hard, and make it noticeable to the user when they're communicating insecurely. Only few people are into reading out loud their OTR public key fingerprint, and SMP is not trivial -- thus jackline records the known public keys together with a set of resources used, a session count, and blurred timestamps (accuracy: day) of when the public key was initially used and when it was used the last time.
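Blurring a timestamp to day accuracy is a one-liner - a sketch of mine, not jackline's actual code:

```ocaml
(* Blur a POSIX timestamp (in seconds) to day accuracy, as done for
   recording when a public key was first and last seen. A sketch, not
   jackline's actual code. *)
let seconds_per_day = 86_400
let blur_to_day ts = ts - ts mod seconds_per_day
```

Storing only the blurred value keeps the first-seen / last-seen record useful for spotting key changes, without persisting precise activity times.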

I'm pragmatic - if there is some server (or client) deployed out there which violates (my interpretation of) the specification, I'm happy to implement workarounds. Initially I worked roughly one day a week on jackline.

Not releasing the software for some years was something I learned from the slime project (watch Luke's presentation from 2013) - if someone complains about an issue, fix it within 10 minutes and ask them to update. This only works if each user compiles the git version anyway.

User interface


other screenshot


The stated goal is minimalistic. No heavy use of colours. Visibility on both black and white backgrounds (btw, as a Unix process there is no way to find out your background colour (or is there?)). The focus is also security - and that's where I used colours from the beginning: red is unencrypted (non end-to-end; there's always the transport layer encryption) communication, green is encrypted communication. The verification status of the public key uses the same colours: red for not verified, green for verified. Instead of colouring each message individually, I use the encryption status of the active contact (highlighted in the contact list, where messages you type now will be sent to) to colour the entire frame. This results in a remarkable visual indication, and (at least I) think twice before pressing return in a red terminal. Messages were initially white/black, but got a bit fancier over time: incoming messages are bold, and multi-user messages mentioning your nick are underlined.

The graphical design is mainly inspired by mcabber, as mentioned earlier. There are four components: the contact list in the upper left, the chat window in the upper right, the log window on the bottom (surrounded by two status bars), and a readline input. The sizes are configurable (via commands and key shortcuts). A different view is having the chat window fullscreen (or only the received messages) - useful for copying and pasting fragments. Navigation is done in the contact list. There is a single active contact (colours are inverted in the contact list, and the contact is mentioned in the status bar), whose chat messages are displayed.

There is not much support for customisation - some people demanded a 7-bit ASCII version (I output some unicode characters for layout). Recently I added support for customising the colours. I tried to ensure it looks fine on both black and white backgrounds.

Code

+

Initially I targeted GTK with OCaml, but that excursion only lasted two weeks, after which I switched to a lambda-term terminal interface.

+

UI

+

The lambda-term interface survived for a good year (until 7th Feb 2016), when I started to use notty - developed by David - which uses a decent unicode library.

+

Notty back then was under heavy development; I spent several hours rebasing jackline to updates in notty. What I got out of it is proper unicode support: the symbol 茶 gets two characters of width (see screenshot at top of page), and the layouting keeps track of how many characters are already written on the terminal.

+

I recommend looking into notty if you want to do terminal graphics in OCaml!

+

Application logic and state

+

Stepping back, an XMPP client reacts to two input sources: user input (including terminal resize) and network input (or failure). The output is a screen image (80x25 characters). Each input event can trigger output events on the display and the network.
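This event-driven view can be sketched as a variant type plus a pure processing step (all names here are illustrative, not jackline's actual API): each event produces a new state and a list of output actions.

```ocaml
(* Sketch (illustrative names): the two input sources and the pure
   processing step - each event yields a new state plus actions. *)
type event =
  | User_input of string
  | Terminal_resize of int * int
  | Network_input of string
  | Network_failure

type action = Draw of string | Send of string

type state = { log : string list ; size : int * int }

let step state = function
  | User_input line ->
    { state with log = line :: state.log }, [ Send line ; Draw line ]
  | Terminal_resize (w, h) -> { state with size = (w, h) }, [ Draw "resized" ]
  | Network_input msg -> { state with log = msg :: state.log }, [ Draw msg ]
  | Network_failure -> state, [ Draw "connection lost" ]

let () =
  let s = { log = [] ; size = (80, 25) } in
  let s', actions = step s (User_input "hello") in
  assert (s'.log = [ "hello" ]);
  assert (actions = [ Send "hello" ; Draw "hello" ])
```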

+

I used to use multiple threads and locking of shared data for these kinds of applications: something can go wrong when network and user input happen at the same time, or when the output is interrupted by more input (which happens e.g. during copy and paste).

+

Initially I used lots of shared data and had hope, but this was clearly not a good solution. Nowadays I use mailboxes and separate tasks, each waiting to receive a message: one task which periodically writes persistent data (session counts, verified fingerprints) to disk, another which writes on change to disk, an error handler (init_system) which resets the state upon a connection failure, another task which waits for user input (read_terminal), one waiting for network input (Connect, including reconnecting timers), one to call out to the notification hooks (Notify), etc. The main task is simple: wait for input, process input (producing a new state), render the state, and recursively call itself (loop).
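A minimal sketch of this mailbox style, using Lwt_mvar (jackline's actual tasks differ): a producer task puts events into a mailbox, and the main loop takes them one at a time, producing a new state and recursing.

```ocaml
(* Sketch of the mailbox style with Lwt_mvar (not jackline's code):
   producers put events, the main loop takes them and recurses. *)
let collected =
  let mvar = Lwt_mvar.create_empty () in
  (* main task: wait for input, process it, recurse with the new state *)
  let rec loop acc =
    Lwt.bind (Lwt_mvar.take mvar) (function
        | `Quit -> Lwt.return (List.rev acc)
        | `Msg m -> loop (m :: acc))
  in
  (* producer task: Lwt threads run eagerly until they block *)
  let _producer =
    Lwt.bind (Lwt_mvar.put mvar (`Msg "hi")) (fun () ->
        Lwt_mvar.put mvar `Quit)
  in
  Lwt_main.run (loop [])

let () = assert (collected = [ "hi" ])
```

This needs the lwt (lwt.unix) package; the real thing adds tasks for disk writes, reconnect timers, and notification hooks in the same shape.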

+

Only recently did I solve the copy and paste issue, by delaying all redraws by 40ms and cancelling a pending redraw if another one is scheduled.

+

The whole state contains some user interface parameters (/buddywith, /logheight, ..), as well as the contact map, which contains users, which have sessions, each containing chat messages.

+

The code base is just below 6000 lines of code (way too big ;), and nowadays supports multi-user chat, sane multi-resource interaction (press enter to show all available resources of a contact, and message each individually in case you need to), configurable colours, tab completion for nicknames and commands, per-user input history, and emacs keybindings. It even works with the XMPP gateway provided by slack (some startup doing centralised group chat with picture embedding and animated cats).

+

Road ahead

+

Common feature requests are: omemo support, IRC support, and support for multiple accounts (tbh, these are all things I'd like to have as well).

+

But there's some mess to clean up:

+
    +
  1. +

    The XMPP library makes heavy use of functors (to abstract over the concrete IO, etc.), and embeds IO deep inside it. These days I prefer (see e.g. our TLS paper, or my ARP post) to have a pure interface for the protocol implementation, providing explicit input (state, event, data) and output (state, action, potentially data to send on the network, potentially data to be processed by the application). The sasl implementation is partial and deeply embedded. The XML parser is deeply embedded as well (and has some issues). The library needs to be torn apart (something I have been procrastinating on for more than a year). Once it is pure, the application can have full control over when to call IO (and especially use the same protocol implementation for registering a new account - currently not supported).

    +
  2. +

    On the frontend side (the cli subfolder), there is too much knowledge of XMPP. It should be more general and reusable (some bits and pieces are notty utilities, such as wrapping a string to fit into a text box of a specific width, see split_unicode).

    +
  3. +

    The command processing engine itself is 1300 lines (including ad-hoc string parsing) (Cli_commands), best replaced by a more decent command abstraction.

    +
  4. +

    A big record of functions (user_data) is passed (during /connect in handle_connect) from the UI to the XMPP task to inject messages and errors.

    +
  5. +

    The global variable xmpp_session should be part of the earlier mentioned cli_state; also, contacts should be a map, not a Hashtbl (took me some time to learn).

    +
  6. +

    Having jackline self-hosted as a MirageOS unikernel. I've implemented a telnet server, and there is a notty branch to be used with the telnet server. But there is (right now) no good story for persistent mutable storage.

    +
  7. +

    Jackline predates some very elegant libraries, such as logs and astring; even result - part of Pervasives since 4.03 - is not used. Clearly, other libraries (such as TLS) do not yet use result.

    +
  8. +

    After looking in more depth at the logs library, and at user interfaces, I envision the graphical parts to be (mostly!?) a viewer of logs plus a command shell (using a control interface, maybe 9p): multiple layers (of a protocol), slightly related (by tags - such as the OTR session), with the layers visible to users (see also tlstools, a slightly different interface to similarly structured data). In jackline I'd like to e.g. see all messages of a single OTR session (see issue), or hide the presence messages in a multi-user chat; investigate the high-level message, its XML encoded stanza, TLS encrypted frames, the TCP flow, all the way down to the ethernet frames sent over the wire - also viewable as a sequence diagram and other suitable (terminal) presentations (TCP window size maybe in a size-over-time diagram).

    +
  9. +

    Once the API between the sources (contacts, hosts) and the UI (what to display, where and how to trigger notifications, where and how to handle global changes (such as reconnect)) is clear and implemented, commands need to be reinvented (some, such as navigation commands and emacs keybindings, are generic to the user interface; others are specific to XMPP and/or OTR): a new transport (IRC) or end-to-end crypto protocol (omemo) should be easy to integrate (with similarly minimal UI features and colours).

    +
+
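The pure protocol interface advocated in the first point (explicit input and output, no embedded IO) can be sketched with a toy protocol (illustrative names, not the actual XMPP library): the implementation never performs IO itself, it only consumes input and returns a new state plus actions.

```ocaml
(* Sketch of a pure protocol interface (toy protocol, not XMPP):
   the caller decides when and how to perform the returned actions. *)
type action =
  | Send of string     (* data to write to the network *)
  | Deliver of string  (* data for the application layer *)

type state = { established : bool }

let handle state input =
  match state.established, input with
  | false, "hello" -> { established = true }, [ Send "hello-ack" ]
  | false, _ -> state, []            (* ignore until handshake is done *)
  | true, msg -> state, [ Deliver msg ]

let () =
  let s, out = handle { established = false } "hello" in
  assert (s.established && out = [ Send "hello-ack" ]);
  let _, out = handle s "hi there" in
  assert (out = [ Deliver "hi there" ])
```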

Conclusion

+

Jackline started as a procrastination project, and still is one. I only develop on jackline if I enjoy it. I'm not scared to try new approaches in jackline, and to revert them or rewrite some chunks of code again. It is a project where I publish early and push often. I've met several people (whom I don't think I know personally) in the multi-user chatroom jackline@conference.jabber.ccc.de, fixed bugs, and discussed features.

+

When introducing customisable colours, the proximity to a log viewer became clear to me again - configurable colours are for severities such as Success, Warning, Info, Error, Presence - maybe I really should get started on implementing a log viewer.

+

I would like to have more community contributions to jackline, but the lack of documentation (there aren't even a lot of interface files), mixed with a non-mainstream programming language and a convoluted code base, makes me want some code cleanups first, or maybe to start from scratch.

+

I'm interested in feedback, either via twitter or on the jackline repository on GitHub.

+
\ No newline at end of file diff --git a/Posts/Maintainers b/Posts/Maintainers new file mode 100644 index 0000000..e2f7194 --- /dev/null +++ b/Posts/Maintainers @@ -0,0 +1,77 @@ + +Who maintains package X?

Who maintains package X?

Written by hannes
Classified under: package signingsecurity
Published: 2017-02-16 (last updated: 2017-03-09)

A very important data point for conex, the new opam signing utility, is who is authorised for a given package. We could have written this down manually, or forced each author to create a pull request for their packages, but this would be a long process and not easy: the main opam repository has around 1500 unique packages and 350 contributors. Fortunately, it is a git repository with 5 years of history and over 6900 pull requests. Each opam file may also contain a maintainers entry, a list of strings (usually a mail address).

+

The data sources we correlate are the maintainers entry in the opam file, and who actually committed in the opam repository. This is inspired by some GitHub discussion.

+

GitHub id and email address

+

For simplicity, since conex uses any (unique) identifier for authors, and the opam repository is hosted on GitHub, we use a GitHub id as author identifier. Maintainer information is an email address, thus we need a mapping between them.

+

We wrote a shell script to find all PR merges, their GitHub id (in a brittle way: using the name of the git remote), and the email address of the last commit. It also saves a diff of the PR for later. This results in 6922 PRs (opam repository version 38d908dcbc58d07467fbc00698083fa4cbd94f9d).

+

The metadata output is processed by github_mail: we ignore PRs from GitHub organisations (PR.ignore_github), PRs where commits (PR.ignore_pr) are picked from a different author (manually), bad mail addresses, and Jeremy's mail address (it is added to too many GitHub ids otherwise). The goal is to have a single GitHub id for each email address. 329 authors with 416 mail addresses are mapped.
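The mapping goal can be sketched in a few lines (hypothetical code, not github_mail itself): build a map from email address to a single GitHub id, and flag addresses claimed by more than one id.

```ocaml
(* Sketch (hypothetical, not the github_mail code): each email address
   must map to exactly one GitHub id; conflicts are reported. *)
module SMap = Map.Make (String)

let add_mapping map (email, github_id) =
  match SMap.find_opt email map with
  | None -> Ok (SMap.add email github_id map)
  | Some id when id = github_id -> Ok map
  | Some id -> Error (email ^ " claimed by both " ^ id ^ " and " ^ github_id)

let () =
  let m = Result.get_ok (add_mapping SMap.empty ("a@example.com", "alice")) in
  (* re-adding the same pair is fine *)
  assert (add_mapping m ("a@example.com", "alice") = Ok m);
  (* a second GitHub id claiming the same address is a conflict *)
  match add_mapping m ("a@example.com", "bob") with
  | Error _ -> ()
  | Ok _ -> assert false
```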

+

Maintainer in opam

+

As mentioned, lots of packages contain a maintainers entry. In maintainers we extract the mail addresses of the most recently released opam file. Some hardcoded matches are teams which do not properly maintain the maintainers field (such as mirage and xapi-project ;). We're open to suggestions to extend this massaging as needed. Additionally, the contact at ocamlpro mail address was used for all packages before the maintainers entry was introduced (based on a discussion with Louis Gesbert). 132 packages have an empty maintainers entry.

+

Fitness

+

Combining these two data sources, we hoped to find a strict small set of whom to authorise for which package. Turns out some people use different mail addresses for git commits and opam maintainer entries, which are easily fixed.

+

While processing the full diffs of each PR (using the diff parser of conex mentioned above), and ignoring the 44% done by janitors (a manually created set, compiled by looking at log data - please report if wrong), we categorise the modifications: authorised modification (the GitHub id is authorised for the package), modification by an author to a team-owned package (propose to add this author to the team), modification of a package where no GitHub id is authorised, and unauthorised modification. We also ignore packages which are no longer in the opam repository.
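The categorisation can be sketched as a small total function (simplified and with hypothetical names - the real code works on parsed diffs):

```ocaml
(* Sketch (hypothetical names): classify a modification given the set
   of authorised ids for the package and the known team ids. *)
type category =
  | Authorised
  | Team_owned     (* propose to add the author to the team *)
  | No_maintainer
  | Unauthorised

let classify ~authorised ~teams author =
  if List.mem author authorised then Authorised
  else if authorised = [] then No_maintainer
  else if List.exists (fun t -> List.mem t authorised) teams then Team_owned
  else Unauthorised

let () =
  assert (classify ~authorised:[ "alice" ] ~teams:[] "alice" = Authorised);
  assert (classify ~authorised:[] ~teams:[] "bob" = No_maintainer);
  assert (classify ~authorised:[ "mirage" ] ~teams:[ "mirage" ] "bob" = Team_owned);
  assert (classify ~authorised:[ "alice" ] ~teams:[] "bob" = Unauthorised)
```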

+

2766 modifications were authorised, 418 were team-owned, 452 were to packages with no maintainer, and 570 were unauthorised. This results in 125 unowned packages.

+

Out of the 452 modifications to packages with no maintainer, 75 are a global one-to-one author to package relation, and are directly authorised.

+

Inference of team members is an overapproximation (everybody who committed changes to their packages); additionally, the janitors are missing. We will have to fill these in manually.

+
alt-ergo -> OCamlPro-Iguernlala UnixJunkie backtracking bobot nobrowser
+janestreet -> backtracking hannesm j0sh rgrinberg smondet
+mirage -> MagnusS dbuenzli djs55 hannesm hnrgrgr jonludlam mato mor1 pgj pqwy pw374 rdicosmo rgrinberg ruhatch sg2342 talex5 yomimono
+ocsigen -> balat benozol dbuenzli hhugo hnrgrgr jpdeplaix mfp pveber scjung slegrand45 smondet vasilisp
+xapi-project -> dbuenzli djs55 euanh mcclurmc rdicosmo simonjbeaumont yomimono
+
+

Alternative approach: GitHub urls

+

An alternative approach (attempted earlier), working only for GitHub hosted projects, is to authorise the user part of the GitHub repository URL. Results after filtering GitHub organisations are not yet satisfactory (but only 56 packages with no maintainer, see the output repo). This approach completely ignores the manually written maintainer field.

+

Conclusion

+

Manually maintained metadata is easily out of date, and not very useful on its own. But combining automatically created metadata with manually maintained data, plus some manual tweaking, leads to reasonable data.

+

The resulting authorised inference is available in this branch.

+
\ No newline at end of file diff --git a/Posts/Monitoring b/Posts/Monitoring new file mode 100644 index 0000000..a72c555 --- /dev/null +++ b/Posts/Monitoring @@ -0,0 +1,115 @@ + +All your metrics belong to influx

All your metrics belong to influx

Written by hannes
Published: 2022-03-08 (last updated: 2022-03-08)

Introduction to monitoring

+

At robur we use a range of MirageOS unikernels. Recently, we worked on improving their operations story. One part is shipping binaries using our reproducible builds infrastructure. Another part is observability: once deployed, we want to observe what is going on.

+

I first got in touch with monitoring - collecting and graphing metrics - with MRTG and munin, and the simple network management protocol SNMP. From the whole-system perspective, I find it crucial that the monitoring part of a system does not add pressure. This favours a push-based design, where reporting is done at the disposition of the system.

+

The rise of monitoring where graphs are generated dynamically (such as Grafana) and can be programmed (with a query language) by the operator is very neat; it allows putting metrics in relation after they have been recorded - thus if there's a thesis why something went berserk, you can graph the collected data from the past and prove or disprove the thesis.

+

Monitoring a MirageOS unikernel

+

From the operational perspective, taking security into account, either the data should be authenticated and integrity-protected, or transmitted on a private network. We chose the latter: there's a private network interface only for monitoring. Access to that network is only granted to the unikernels and the metrics collector.

+

For MirageOS unikernels, we use the metrics library - whose design shares with logs the idea that work is performed only if a reporter is registered. We use the Influx line protocol via TCP to report via Telegraf to InfluxDB. But due to the design of metrics, other reporters can be developed and used - prometheus, SNMP, your-other-favourite are all possible.
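The "no reporter, no work" idea can be sketched generically (this is not the actual metrics API): the measurement function is not even evaluated unless a reporter is registered.

```ocaml
(* Sketch (not the metrics library's API): measuring happens only when
   a reporter has been registered; otherwise the closure is untouched. *)
let reporter : (string -> int -> unit) option ref = ref None

let set_reporter r = reporter := Some r

let add ~tag measure =
  match !reporter with
  | None -> ()                     (* no reporter: no work performed *)
  | Some report -> report tag (measure ())

let () =
  let collected = ref [] in
  add ~tag:"requests" (fun () -> assert false);  (* never evaluated *)
  set_reporter (fun tag v -> collected := (tag, v) :: !collected);
  add ~tag:"requests" (fun () -> 42);
  assert (!collected = [ ("requests", 42) ])
```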

+

Apart from monitoring metrics, we use the same network interface for logging via syslog. Since the logs library separates the log message generation (in the OCaml libraries) from the reporting, we developed logs-syslog, which registers a log reporter sending each log message to a syslog sink.

+

We developed a small library for metrics reporting of a MirageOS unikernel in the monitoring-experiments package - which also allows dynamically adjusting the log level and disabling or enabling metrics sources.

+

Required components

+

Install the packages providing telegraf, influxdb, and grafana from your operating system.

+

Setup telegraf to contain a socket listener:

+
[[inputs.socket_listener]]
+  service_address = "tcp://192.168.42.14:8094"
+  keep_alive_period = "5m"
+  data_format = "influx"
+
+

Use a unikernel that reports to Influx (below the heading "Unikernels (with metrics reported to Influx)" on builds.robur.coop) and provide --monitor=192.168.42.14 as boot parameter. Conventionally, these unikernels expect a second network interface (on the "management" bridge) where telegraf (and a syslog sink) are running. You'll need to pass --net=management and --arg='--management-ipv4=192.168.42.x/24' to albatross-client-local.

+

Albatross provides a albatross-influx daemon that reports information from the host system about the unikernels to influx. Start it with --influx=192.168.42.14.

+

Adding monitoring to your unikernel

+

If you want to extend your own unikernel with metrics, follow along these lines.

+

An example is the dns-primary-git unikernel, where on the branch future we have a single commit ahead of main that adds monitoring. The difference is in the unikernel configuration and the main entry point. See the binary builds in contrast to the non-monitoring builds.

+

In config, four new command line arguments are added: --monitor=IP, --monitor-adjust=PORT, --syslog=IP and --name=STRING. In addition, the package monitoring-experiments is required. And a second network interface management_stack using the prefix management is required and passed to the unikernel. Since the syslog reporter requires a console (to report when logging fails), a console is passed to the unikernel as well. Each reported metric includes a tag vm=<name> that can be used to distinguish several unikernels reporting to the same InfluxDB.

+

Command line arguments:

+
   let doc = Key.Arg.info ~doc:"The fingerprint of the TLS certificate." [ "tls-cert-fingerprint" ] in
+   Key.(create "tls_cert_fingerprint" Arg.(opt (some string) None doc))
+ 
++let monitor =
++  let doc = Key.Arg.info ~doc:"monitor host IP" ["monitor"] in
++  Key.(create "monitor" Arg.(opt (some ip_address) None doc))
++
++let monitor_adjust =
++  let doc = Key.Arg.info ~doc:"adjust monitoring (log level, ..)" ["monitor-adjust"] in
++  Key.(create "monitor_adjust" Arg.(opt (some int) None doc))
++
++let syslog =
++  let doc = Key.Arg.info ~doc:"syslog host IP" ["syslog"] in
++  Key.(create "syslog" Arg.(opt (some ip_address) None doc))
++
++let name =
++  let doc = Key.Arg.info ~doc:"Name of the unikernel" ["name"] in
++  Key.(create "name" Arg.(opt string "ns.nqsb.io" doc))
++
+ let mimic_impl random stackv4v6 mclock pclock time =
+   let tcpv4v6 = tcpv4v6_of_stackv4v6 $ stackv4v6 in
+   let mhappy_eyeballs = mimic_happy_eyeballs $ random $ time $ mclock $ pclock $ stackv4v6 in
+
+

Requiring monitoring-experiments, registering command line arguments:

+
     package ~min:"3.7.0" ~max:"3.8.0" "git-mirage";
+     package ~min:"3.7.0" "git-paf";
+     package ~min:"0.0.8" ~sublibs:["mirage"] "paf";
++    package "monitoring-experiments";
++    package ~sublibs:["mirage"] ~min:"0.3.0" "logs-syslog";
+   ] in
+   foreign
+-    ~keys:[Key.abstract remote_k ; Key.abstract axfr]
++    ~keys:[
++      Key.abstract remote_k ; Key.abstract axfr ;
++      Key.abstract name ; Key.abstract monitor ; Key.abstract monitor_adjust ; Key.abstract syslog
++    ]
+     ~packages
+
+

Added console and a second network stack to foreign:

+
     "Unikernel.Main"
+-    (random @-> pclock @-> mclock @-> time @-> stackv4v6 @-> mimic @-> job)
++    (console @-> random @-> pclock @-> mclock @-> time @-> stackv4v6 @-> mimic @-> stackv4v6 @-> job)
++
+
+

Passing a console implementation (default_console) and a second network stack (with management prefix) to register:

+
+let management_stack = generic_stackv4v6 ~group:"management" (netif ~group:"management" "management")
+ 
+ let () =
+   register "primary-git"
+-    [dns_handler $ default_random $ default_posix_clock $ default_monotonic_clock $
+-     default_time $ net $ mimic_impl]
++    [dns_handler $ default_console $ default_random $ default_posix_clock $ default_monotonic_clock $
++     default_time $ net $ mimic_impl $ management_stack]
+
+

Now, in the unikernel module the functor changes (console and second network stack added):

+
@@ -4,17 +4,48 @@
+ 
+ open Lwt.Infix
+ 
+-module Main (R : Mirage_random.S) (P : Mirage_clock.PCLOCK) (M : Mirage_clock.MCLOCK) (T : Mirage_time.S) (S : Mirage_stack.V4V6) (_ : sig e
+nd) = struct
++module Main (C : Mirage_console.S) (R : Mirage_random.S) (P : Mirage_clock.PCLOCK) (M : Mirage_clock.MCLOCK) (T : Mirage_time.S) (S : Mirage
+_stack.V4V6) (_ : sig end) (Management : Mirage_stack.V4V6) = struct
+ 
+   module Store = Irmin_mirage_git.Mem.KV(Irmin.Contents.String)
+   module Sync = Irmin.Sync(Store)
+
+

And in the start function, the command line arguments are processed and used to set up syslog and metrics monitoring to the specified addresses. Also, a TCP listener is waiting for monitoring and logging adjustments if --monitor-adjust was provided:

+
   module D = Dns_server_mirage.Make(P)(M)(T)(S)
++  module Monitoring = Monitoring_experiments.Make(T)(Management)
++  module Syslog = Logs_syslog_mirage.Udp(C)(P)(Management)
+ 
+-  let start _rng _pclock _mclock _time s ctx =
++  let start c _rng _pclock _mclock _time s ctx management =
++    let hostname = Key_gen.name () in
++    (match Key_gen.syslog () with
++     | None -> Logs.warn (fun m -> m "no syslog specified, dumping on stdout")
++     | Some ip -> Logs.set_reporter (Syslog.create c management ip ~hostname ()));
++    (match Key_gen.monitor () with
++     | None -> Logs.warn (fun m -> m "no monitor specified, not outputting statistics")
++     | Some ip -> Monitoring.create ~hostname ?listen_port:(Key_gen.monitor_adjust ()) ip management);
+     connect_store ctx >>= fun (store, upstream) ->
+     load_git None store upstream >>= function
+     | Error (`Msg msg) ->
+
+

Once you have compiled the unikernel (or downloaded a binary with monitoring), start it by passing --net:service=tap0 and --net:management=tap10 (or whichever your tap interfaces are), and as unikernel arguments --ipv4=<my-ip-address> and --management-ipv4=192.168.42.2/24 for IPv4 configuration, plus --monitor=192.168.42.14, --syslog=192.168.42.10, --name=my.unikernel, and --monitor-adjust=12345.

+

With this, your unikernel will report metrics using the influx protocol to 192.168.42.14 on port 8094 (every 10 seconds), and syslog messages via UDP to 192.168.42.10 (port 514). You should see your InfluxDB getting filled and your syslog server receiving messages.

+

When you configure Grafana to use InfluxDB, you'll be able to see the data in the data sources.

+

Please reach out to us (at team AT robur DOT coop) if you have feedback and suggestions.

+
\ No newline at end of file diff --git a/Posts/NGI b/Posts/NGI new file mode 100644 index 0000000..23553fd --- /dev/null +++ b/Posts/NGI @@ -0,0 +1,67 @@ + +The road ahead for MirageOS in 2021

The road ahead for MirageOS in 2021

Written by hannes
Classified under: mirageos
Published: 2021-01-25 (last updated: 2021-11-19)

Introduction

+

2020 was an intense year. I hope you're healthy and keep being healthy. I am privileged (as lots of software engineers and academics are) to be able to work from home during the pandemic. Let's not forget people in less privileged situations, and let’s try to give them as much practical, psychological and financial support as we can these days. And as much joy as possible to everyone around :)

+

I cancelled the autumn MirageOS retreat due to the pandemic. Instead I collected donations for our hosts in Marrakech - they were very happy to receive our financial support, since they had a difficult year, as their income is based on tourism. I hope that in autumn 2021 we'll have an on-site retreat again.

+

For 2021, we (at robur) got a grant from the EU (via NGI pointer) for "Deploying MirageOS" (more details below), and another grant from the OCaml software foundation for securing the opam supply chain (using conex). Some long-awaited releases for MirageOS libraries, namely an ssh implementation and a rewrite of our git implementation, have already been published.

+

With my MirageOS view, 2020 was a pretty successful year, where we managed to add more features, fixed lots of bugs, and paved the road ahead. I want to thank OCamlLabs for funding work on MirageOS maintenance.

+

Recap 2020

+

Here is a very subjective random collection of accomplishments in 2020, where I was involved with some degree.

+

NetHSM

+

NetHSM is a hardware security module in software. It is a product that uses MirageOS for security, and is based on the muen separation kernel. We at robur were heavily involved in this product. It already has been security audited by an external team. You can pre-order it from Nitrokey.

+

TLS 1.3

+

Dating back to 2016, at TRON (TLS 1.3 Ready or NOt), we developed a first draft of a 1.3 implementation of OCaml-TLS. Finally in May 2020 we got our act together, including ECC (ECDH P256 from fiat-crypto, X25519 from hacl) and testing with tlsfuzzer, and released tls 0.12.0 with TLS 1.3 support. Later we added ECC ciphersuites to TLS version 1.2, implemented ChaCha20/Poly1305, and fixed an interoperability issue with Go's implementation.

+

Mirage-crypto provides the underlying cryptographic primitives, initially released in March 2020 as a fork of nocrypto - huge thanks to pqwy for his great work. Mirage-crypto detects CPU features at runtime (thanks to Julow) (bugfix for bswap), uses constant time modular exponentiation (powm_sec) and hardens against Lenstra's CRT attack, supports compilation on Windows (thanks to avsm), async entropy harvesting (thanks to seliopou), 32 bit support, chacha20/poly1305 (thanks to abeaumont), cross-compilation (thanks to EduardoRFS) and various bug fixes, even memory leaks (thanks to talex5 for reporting several of these issues), and RSA interoperability (thanks to psafont for investigation and mattjbray for reporting). This library feels very mature now - being used by multiple stakeholders, and lots of issues have been fixed in 2020.

+

Qubes Firewall

+

The MirageOS based Qubes firewall is the most widely used MirageOS unikernel. And it got major updates: in May Steffi announced her and Mindy's work on improving it for Qubes 4.0 - including dynamic firewall rules via QubesDB. Thanks to prototypefund for sponsoring.

+

In October 2020, we released Mirage 3.9 with PVH virtualization mode (thanks to mato). There's still a memory leak to be investigated and fixed.

+

IPv6

+

In December, with Mirage 3.10 we got the IPv6 code up and running. Now MirageOS unikernels have a dual stack available, besides IPv4-only and IPv6-only network stacks. Thanks to nojb for the initial code and MagnusS.

+

Turns out this blog, but also robur services, are now available via IPv6 :)

+

Albatross

+

Also in December, I pushed an initial release of albatross, a unikernel orchestration system with remote access. Deploy your unikernel via a TLS handshake -- the unikernel image is embedded in the TLS client certificates.

+

Thanks to reynir for statistics support on Linux and improvements of the systemd service scripts. Also thanks to cfcs for the initial Linux port.

+

CA certs

+

For several years I postponed the problem of how to actually use the operating system trust anchors for OCaml-TLS connections. Thanks to emillon for initial code, there are now ca-certs and ca-certs-nss opam packages (see release announcement) which fills this gap.

+

Unikernels

+

I developed several useful unikernels in 2020, and also pushed a unikernel gallery to the Mirage website:

+

Traceroute in MirageOS

+

I already wrote about traceroute which traces the routing to a given remote host.

+

Unipi - static website hosting

+

Unipi is a static site webserver which retrieves the content from a remote git repository. It supports Let's Encrypt certificate provisioning and dynamic updates via a webhook executed for every push.

+

TLSTunnel - TLS demultiplexing

+

The physical machine this blog and other robur infrastructure runs on has been relocated from Sweden to Germany mid-December. Thanks to UPS! Fewer IPv4 addresses are available in the new data center, which motivated me to develop tlstunnel.

+

The new behaviour is as follows (see the monitoring branch):

+
    +
  • listener on TCP port 80 which replies with a permanent redirect to https +
  • listener on TCP port 443 which forwards to a backend host if the requested server name is configured +
  • its configuration is stored on a block device, and can be dynamically changed (with a custom protocol authenticated with a HMAC) +
  • it is setup to hold a wildcard TLS certificate and in DNS a wildcard entry is pointing to it +
  • setting up a new service is very straightforward: only the new name needs to be registered with tlstunnel together with the TCP backend, and everything will just work +
+

2021

+

The year started with a release of awa, a SSH implementation in OCaml (thanks to haesbaert for initial code). This was followed by a git 3.0 release (thanks to dinosaure).

+

Deploying MirageOS - NGI Pointer

+

For 2021 we at robur received funding from the EU (via NGI pointer) for "Deploying MirageOS", which boils down into three parts:

+
    +
  • reproducible binary releases of MirageOS unikernels, +
  • monitoring (and other devops features: profiling) and integration into existing infrastructure, +
  • and further documentation and advertisement. +
+

Of course this will all be available open source. Please get in touch via eMail (team aT robur dot coop) if you're eager to integrate MirageOS unikernels into your infrastructure.

+

We discovered at an initial meeting with an infrastructure provider that a DNS resolver is of interest - even more now that dnsmasq suffered from dnspooq. We are already working on an implementation of DNSSec.

+

MirageOS unikernels are binary reproducible, and infrastructure tools are available. We are working hard on a web interface (and REST API - think of it as "Docker Hub for MirageOS unikernels"), and more tooling to verify reproducibility.

+

Conex - securing the supply chain

+

Another funding from the OCSF is to continue development and deploy conex - to bring trust into opam-repository. This is a great combination with the reproducible build efforts, and will bring much more trust into retrieving OCaml packages and using MirageOS unikernels.

+

MirageOS 4.0

+

Mirage so far still uses ocamlbuild and ocamlfind for compiling the virtual machine binary. But the switch to dune is close; a lot of effort has been done. This will make the developer experience of MirageOS much smoother, with a per-unikernel monorepo workflow where you can push your changes to the individual libraries.

+

Footer

+

If you want to support our work on MirageOS unikernels, please donate to robur. I'm interested in feedback, either via twitter, hannesm@mastodon.social or via eMail.

+
\ No newline at end of file diff --git a/Posts/OCaml b/Posts/OCaml new file mode 100644 index 0000000..e7335f1 --- /dev/null +++ b/Posts/OCaml @@ -0,0 +1,121 @@ + +Why OCaml

Why OCaml

Written by hannes
Classified under: overviewbackground
Published: 2016-04-17 (last updated: 2021-11-19)

Programming

+

For me, programming is fun. I enjoy doing it, every single second. All the way from designing over experimenting to debugging why it does not do what I want. In the end, the computer is dumb and executes only what you (or code from someone else which you rely on) tell it to do.

+

To abstract from assembly code, which is not portable, programming languages were developed. Differently flavoured languages vary in expressive power and static guarantees. Many claim to be general purpose or systems languages; depending on the choices of the language designer and the tooling around the language, it is a language in which you can conveniently develop programs.

+

A language designer decides on the builtin abstraction mechanisms, each of which is both a burden and a blessing: they might be interfering (which to use? for or while, trait or object), orthogonal (one way to do it), or even synergistic (higher order functions and anonymous functions). Another choice is whether the language includes a type system, and whether the developer can cheat on it (by allowing arbitrary type casts - a weak type system). A strong static type system allows a developer to encode invariants, without the need to defer to runtime assertions. Type systems differ in their expressive power (dependent types are the hot research area at the moment). Tooling depends purely on the community size; natural selection will favour the useful tools (community size gives inertia to other factors: demand for libraries, package manager, activity on stack overflow, etc.).

+

Why OCaml?

+

As already mentioned in other articles here, it is a combination of a sufficiently large community, runtime stability and performance, modularity, carefully thought-out abstraction mechanisms, maturity (OCaml recently turned 20), and functional features.

+

The latter is squishy; I'll try to explain it a bit: you define your concrete data types as products (int * int, a tuple of integers), records ({ foo : int ; bar : int }, to name the fields), and sums (type state = Initial | WaitingForKEX | Established, also called variants, or tagged unions in C). These are called algebraic data types. Whenever you have a state machine, you can encode the state as a variant and use a pattern match to handle the different cases. The compiler checks whether your pattern match is complete (contains a line for each member of the variant). Another important aspect of functional programming is that you can pass functions to other functions (higher-order functions). Recursion is also fundamental to functional programming: a function calls itself; combined with a variant type (such as type 'a list = Nil | Cons of 'a * 'a list), it is trivial to show termination.
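As a small self-contained sketch of these ideas (the constructor and function names are made up for illustration): a state machine variant with an exhaustive pattern match, and structural recursion over a list-like variant:

```ocaml
(* The state of a hypothetical protocol as a variant: the compiler
   warns if the match below misses a constructor. *)
type state = Initial | WaitingForKEX | Established

let describe = function
  | Initial -> "fresh connection"
  | WaitingForKEX -> "key exchange in progress"
  | Established -> "ready for application data"

(* Structural recursion over a self-defined list variant: each call
   works on a strictly smaller value, so termination is obvious. *)
type 'a llist = Nil | Cons of 'a * 'a llist

let rec length = function
  | Nil -> 0
  | Cons (_, rest) -> 1 + length rest

let () =
  print_endline (describe Established);
  Printf.printf "%d\n" (length (Cons (1, Cons (2, Nil))))
```

Deleting one line of describe turns into a compile-time warning about an inexhaustive match, which is exactly the safety net for state machines mentioned above.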

+

Side effects make a program interesting, because they communicate with other systems or humans. Side effects should be isolated and explicitly stated (in the type!). Algorithm and protocol implementations should not deal with side effects internally, but leave them to an effectful layer on top. The pure functions inside (which receive arguments and return values, with no other way of communication) preserve referential transparency. Modularity helps to separate the concerns.

+

The holy grail is declarative programming: write what a program should achieve, not how to achieve it (as is often done in imperative languages).
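A tiny illustration of the difference (sum_sq_evens is a made-up example): the pipeline states what is computed, not how to loop:

```ocaml
(* Declarative style: filter, map, and fold describe the result,
   with no explicit loop or mutable accumulator. *)
let sum_sq_evens xs =
  xs
  |> List.filter (fun x -> x mod 2 = 0)
  |> List.map (fun x -> x * x)
  |> List.fold_left ( + ) 0

let () = Printf.printf "%d\n" (sum_sq_evens [ 1; 2; 3; 4 ])
```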

+

OCaml has an object and class system, which I do not use. OCaml also contains exceptions (and, annoyingly, the standard library (e.g. List.find) is full of them), which I avoid as well. Libraries should not expose any exception (apart from out of memory, a really exceptional situation). If your code might end up in an error state (common for parsers which process input from the network), return a variant type as value (type ('a, 'b) result = Ok of 'a | Error of 'b). That way, the caller has to handle both the success and the failure case explicitly.
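A sketch of this style (parse_port is a made-up example; since OCaml 4.03 an equivalent result type ships with the standard library):

```ocaml
(* The result variant from the text: the caller cannot forget the
   error case, the compiler insists on handling both constructors. *)
type ('a, 'b) result = Ok of 'a | Error of 'b

let parse_port s =
  match int_of_string_opt s with
  | Some p when p >= 0 && p < 65536 -> Ok p
  | Some p -> Error (Printf.sprintf "port %d out of range" p)
  | None -> Error (Printf.sprintf "%S is not a number" s)

let () =
  match parse_port "8080" with
  | Ok p -> Printf.printf "listening on %d\n" p
  | Error msg -> prerr_endline msg
```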

+

Where to start?

+

The OCaml website contains a variety of tutorials and examples, including introductory material on how to get started with a new library. Editor integration (at least for emacs, vim, and atom) is available via merlin.

+

A very good starting book is OCaml from the very beginning, to learn the functional ideas in OCaml (also its successor More OCaml). Another good book is Real World OCaml, though it is focussed on the "core" library (which I do not recommend due to its huge size).

+

There are programming guidelines, best re-read on a regular schedule. Daniel wrote guidelines on how to deal with errors and results.

+

Opam is the OCaml package manager. The opam repository contains over 1000 libraries. The quality varies; I personally like the small libraries done by Daniel Bünzli, as well as our nqsb libraries (see the mirleft org) and notty. A concise library (not much code), including tests, documentation, etc., is hkdf. For testing I currently prefer alcotest. For cooperative tasks, lwt is decent (though it is a bit convoluted by integrating too many features).

+

I try to stay away from big libraries such as ocamlnet, core, extlib, and batteries. When I develop a library I do not want to force anyone into using such large code bases. Since opam is widely used, distributing libraries has become easier, thus the trend is towards small libraries (such as astring, ptime, PBKDF, scrypt).

+

What is needed? This depends on your concrete goal. There are lots of issues in lots of libraries; the MirageOS project also has a list of Pioneer projects which would be useful to have. I personally would like to have a native simple authentication and security layer (SASL) implementation in OCaml soon (amongst other things, such as using an ELF section for data, and strtod).

+

A dashboard for MirageOS is under development, which will hopefully ease tracking of what is being actively developed within MirageOS. Because I'm impatient, I set up an atom feed which watches lots of MirageOS-related repositories.

+

I hope I gave some insight into OCaml, and why I currently enjoy it. A longer read on the applicability of OCaml is our Usenix 2015 paper Not-quite-so-broken TLS: lessons in re-engineering a security protocol specification and implementation. I'm interested in feedback, either via twitter or via eMail.

+


Mirroring the opam repository and all tarballs

Written by hannes
Classified under: mirageosdeploymentopam
Published: 2022-09-29 (last updated: 2022-10-11)

We at robur developed opam-mirror in the last month and run a public opam mirror at https://opam.robur.coop (updated hourly).

+

What is opam and why should I care?

+

Opam is the OCaml package manager (also used by other projects such as coq). It is a source-based system: the so-called repository contains the metadata (url of the source tarball, build dependencies, author, homepage, development repository) of all packages. The main repository is hosted on GitHub as ocaml/opam-repository, where authors of OCaml software can contribute their latest releases (as pull requests).
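As an illustration (all field values here are invented), the metadata for a single package version lives in an opam file along these lines:

```
opam-version: "2.0"
maintainer: "someone@example.com"
authors: ["Some One"]
homepage: "https://example.com/fancy"
dev-repo: "git+https://github.com/example/fancy.git"
depends: [
  "ocaml" {>= "4.08.0"}
  "cmdliner" {>= "1.1.0"}
]
url {
  src: "https://example.com/fancy/fancy-1.0.0.tbz"
  checksum: ["sha512=0123..."]
}
```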

+

When a pull request is opened, automated systems attempt to build not only the newly released package on various platforms and OCaml versions, but also all reverse dependencies, and also with the dependencies at their lowest allowed version numbers. That's crucial, since semantic versioning has not been adopted across the OCaml ecosystem (which is tricky; for example, due to local opens, any newly introduced binding can lead to a major version bump), nor do many people add upper bounds on dependencies when releasing a package (nobody is keen to state "my package will not work with cmdliner in version 1.2.0").

+

So, the opam-repository holds the metadata of lots of OCaml packages (around 4000 at the time this article was written) with lots of versions (25000 in total) that have been released. It is used by the opam client to figure out which packages to install or upgrade (using a solver that takes the version bounds into consideration).

+

Of course, opam can use other repositories (overlays) or forks thereof, so nothing stops you from using any other opam repository. The url to the source code of each package may point to a tarball, a git repository, or another version control system.

+

The vast majority of opam packages released to the opam-repository include a link to the source tarball and a cryptographic hash of the tarball. This is crucial for security (under the assumption that the opam-repository has been downloaded from a trustworthy source - check back later this year for updates on conex). At the moment, there are some weak spots with respect to security: md5 is still allowed, and the hash and the tarball are downloaded from the same server: anyone in control of that server can inject arbitrary malicious data. As outlined above, we're working on infrastructure which fixes the latter issue.

+

How does the opam client work?

+

Opam, after initialisation, downloads index.tar.gz from https://opam.ocaml.org/index.tar.gz, and uses this as the local opam universe. An opam install cmdliner will resolve the dependencies, and download all required tarballs. The download is first tried from the cache, and if that fails, the URL in the package file is used. The download from the cache uses the base url, appends the archive-mirror path, followed by the hash algorithm, the first two characters of the hash of the tarball, and the hex-encoded hash of the archive, i.e. for cmdliner 1.1.1, which specifies its sha512: https://opam.ocaml.org/cache/sha512/54/5478ad833da254b5587b3746e3a8493e66e867a081ac0f653a901cc8a7d944f66e4387592215ce25d939be76f281c4785702f54d4a74b1700bc8838a62255c9e.
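The cache path construction can be sketched as follows (cache_url is a made-up helper, and the digest is truncated here for readability):

```ocaml
(* Build the cache URL: base, "cache", hash algorithm, first two hex
   characters of the digest, then the full hex digest. *)
let cache_url ~base ~algo ~hex =
  String.concat "/" [ base; "cache"; algo; String.sub hex 0 2; hex ]

let () =
  print_endline
    (cache_url ~base:"https://opam.ocaml.org" ~algo:"sha512" ~hex:"5478ad83")
```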

+

How does the opam repository work?

+

According to DNS, opam.ocaml.org is a machine at Amazon. Apart from serving the website, it likely runs opam admin index periodically to create the index tarball and the cache. There's an observable delay between a package merge in the opam-repository and when it shows up at opam.ocaml.org. Recently, there was a reported downtime.

+

Apart from it being a single point of failure, if you're compiling a lot of opam projects (e.g. in a continuous integration / continuous build system), it makes sense from a network usage (and thus sustainability) perspective to move the cache closer to where you need the source archives. We're also organising the MirageOS hack retreats in a northern African country with poor connectivity - so if you gather two dozen camels, you had better bring your opam repository cache with you to reduce the bandwidth usage (NB: at the moment this requires the cooperation of all participants to configure their default opam repository accordingly).

+

Re-developing "opam admin create" as MirageOS unikernel

+

Given the need for a local opam cache at our reproducible build infrastructure and at the retreats, we decided to develop opam-mirror as a MirageOS unikernel. Apart from being a useful showcase of persistent storage (that won't fit into memory), and having fun while developing it, our aim was to reduce the time we spend on system administration (opam admin index is only one part of the story: it needs a Unix system and a webserver next to it - plus remote access for doing software updates - which has quite some attack surface).

+

Another reason for re-developing the functionality was that the relevant code (what opam admin index actually does) is part of the opam source code, which totals 50_000 lines of code -- looking up whether one or all checksums are verified before a tarball is added to the cache was rather tricky.

+

In earlier years, we avoided persistent storage and block devices in MirageOS (by embedding data into the source code with crunch, or using a remote git repository), but recent developments, e.g. chamelon, sparked some interest in actually using file systems and figuring out whether MirageOS is ready in that area. A month ago we started the opam-mirror project.

+

Opam-mirror takes a remote repository URL, and downloads all referenced archives. It serves as a cache and opam-repository - and does periodic updates from the remote repository. The idea is to validate all available checksums, store each tarball only once, and store overlays (as maps) for the other hash algorithms.

+

Code development and improvements

+

Initially, our plan was to use ocaml-git for pulling the repository, chamelon for persistent storage, and httpaf as web server. With ocaml-tar's recent support of gzip, we should be all set, and done within a few days.

+

There is already a gap in the above plan: which http client to use - in the best case something similar to our http-lwt-client - in MirageOS: it should support HTTP 1.1 and HTTP 2, TLS (with certificate validation), and use happy-eyeballs to seamlessly support both IPv6 and legacy IPv4. Of course it should follow redirects; without that we won't get far on the current Internet.

+

Along the way (over the last month), we fixed file descriptor leaks (memory leaks) in paf -- which is used as a runtime for httpaf and h2.

+

Then we ran into some trouble with chamelon (out of memory, some degraded performance, it reporting out of disk space), and re-thought our demands for opam-mirror. Since the cache is only ever growing (new packages are released), there's no need to ever remove anything: it is append-only. Once we figured that out, we investigated what needs to be done in ocaml-tar (where tar is in fact a tape archive, initially designed as a file format to be appended to) to support appending to an archive.

+

We also re-thought our bandwidth usage, and instead of cloning the git remote at startup, we developed git-kv which can dump and restore the git state.

+

Also, initially we computed the hashes of all tarballs at startup, but with the size increasing (all archives together are around 7.5GB) this led to a major startup-time issue (around 5 minutes on a laptop), so we wanted to save and restore the maps as well.

+

Neither the git state nor the maps are suitable for tar's append-only semantics, and we didn't want to investigate yet another file system: something such as fat may just work fine, but the code looks slightly bitrotten, and the reported issues and inactivity don't make this package very trustworthy from our point of view. Instead, we developed mirage-block-partition to partition a block device into two. We then store the maps and the git state at the end - the end of a tar archive is two blocks of zeroes, so anything past that isn't considered by any tooling. Extending the tar archive is also possible; only the maps and the git state need to be moved to the end (or recomputed). As the file system, we developed oneffs, which stores a single value on the block device.

+

We observed high memory usage, since each requested archive was first read from the block device into memory, and then sent out. Thanks to Pierre Alain's recent enhancements of the mirage-kv API, there is a get_partial, which we use to read the archive chunk-wise and send it via HTTP. Now the memory usage is around 20MB (the git repository and the generated tarball are kept in memory).
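The chunk-wise transfer can be sketched as follows (get_partial and stream_out here are simplified stand-ins; the real mirage-kv API is effectful and returns results):

```ocaml
(* Read a stored value chunk by chunk instead of loading it into
   memory at once, handing each chunk to a send callback. *)
let get_partial data ~offset ~length =
  let len = min length (String.length data - offset) in
  String.sub data offset len

let stream_out ~chunk_size data send =
  let rec go offset =
    if offset < String.length data then begin
      send (get_partial data ~offset ~length:chunk_size);
      go (offset + chunk_size)
    end
  in
  go 0

let () =
  let buf = Buffer.create 16 in
  stream_out ~chunk_size:4 "hello world" (Buffer.add_string buf);
  print_endline (Buffer.contents buf)
```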

+

What is next? Downloading and writing to the tar archive could be done chunk-wise as well; also, dumping and restoring the git state is quite CPU intensive, and we would like to improve that. Adding the TLS frontend (currently done on our site by our TLS termination proxy tlstunnel), similar to how unipi does it, including let's encrypt provisioning, should be straightforward (drop us a note if you'd be interested in that feature).

+

Conclusion

+

To conclude, we managed within a month to develop this opam-mirror cache from scratch. It has a reasonable footprint (CPU and memory-wise), is easy to maintain and easy to update - if you want to use it, we also provide reproducible binaries for solo5-hvt. You can use our opam mirror with opam repository set-url default https://opam.robur.coop (revert to the other with opam repository set-url default https://opam.ocaml.org) or use it as a backup with opam repository add robur --rank 2 https://opam.robur.coop.

+

Please reach out to us (at team AT robur DOT coop) if you have feedback and suggestions. We are a non-profit company, and rely on donations for doing our work - everyone can contribute.

+

Operating systems

Written by hannes
Published: 2016-04-09 (last updated: 2021-11-19)

Sorry to be late with this entry, but I had to fix some issues.

+

What is an operating system?

+

Wikipedia says: "An operating system (OS) is system software that manages computer hardware and software resources and provides common services for computer programs." Great. In other terms, it is an abstraction layer: applications don't need to deal with the low-level bits (device drivers) of the computer.

+

But if we look at the landscape of deployed operating systems, there is a lot more going on than abstracting devices: usually this includes process management (scheduler), memory management (virtual memory), a C library, user management (including access control), persistent storage (file system), network stack, etc., all being part of the kernel and executed in kernel space. A counterexample is Minix, which consists of a tiny microkernel and executes the above-mentioned services as user-space processes.

+

We are (or at least I am) interested in robust systems. Development is done by humans, and thus will always be error-prone. Even a proof of functional correctness can be flawed if the proof system is inconsistent or the specification is wrong. We need to have damage control in place by striving for the principle of least authority. The goods to guard are the user data (passwords, personal information, private mails, ...), which live in memory.

+

A CPU contains protection rings, where the kernel runs in ring 0 and thus has full access to the hardware, including memory. A flaw in the kernel is devastating for the security of the entire system; it is part of the trusted computing base. Every byte of kernel code should be carefully developed and audited. If we can contain code in areas with less authority, we should do so. Obviously, the mechanism to contain code needs to be carefully audited as well, since it will likely need to run in privileged mode.

+

In a virtualised world, we run a hypervisor in ring -1, on top of which we run an operating system kernel. The hypervisor gives access to memory and hardware to virtual machines, schedules those virtual machines on processors, and should isolate the virtual machines from each other (by using the MMU).

+

there's no cloud, just other people's computers

+

This ominous "cloud" uses hypervisors on a huge number of physical machines, and executes off-the-shelf operating systems as virtual machines on top. Accounting is done by resource usage (time, bandwidth, storage).

+

From scratch

+

Ok, now we have hypervisors which already deal with memory and scheduling. Why should we have the very same functionality again in the (general purpose) operating system running as a virtual machine?

+

Additionally, earlier in my life (back in 2005, at the Dutch hacker camp "What the hack") I proposed (together with Andreas Bogk) to phase out UNIX before 2038-01-19 (this is when time_t overflows, unless promoted to 64 bit), and replace it with Dylan. A random comment about our talk on the Internet is "the proposal that rewriting an entire OS in a language with obscure syntax was somewhat original. However, I now somewhat feel a strange urge to spend some time on Dylan, which is really weird..."

+

Being without funding back then, we didn't get far (the biggest success was a TCP/IP stack in Dylan), and as mentioned earlier I went into formal methods and mechanised proofs of full functional correctness properties.

+

MirageOS

+

At the end of 2013, David pointed me to MirageOS, an operating system developed from scratch in the functional and statically typed language OCaml. I had not used much OCaml before, but some other functional programming languages. Since then, I have spent nearly every day developing OCaml libraries (with varying success at being happy with my code). In contrast to Dylan, there are more than two people developing MirageOS.

+

The idea is straightforward: use a hypervisor and its hardware abstractions (virtualised input/output and network device), and execute the OCaml runtime directly on it. No C library included (since May 2015, see this thread). The virtual machine, based on the OCaml runtime and composed of OCaml libraries, uses a single address space and runs in ring 0.

+

As mentioned above, all code which runs in ring 0 needs to be carefully developed and checked, since a flaw in it can jeopardise the security properties of the entire system: the TCP/IP library should not have access to the private key used for the TLS handshake. If we trust the OCaml runtime, especially its memory management, there is no way for the TCP/IP library to access the memory of the TLS subsystem: the TLS API does not expose the private key via an API call, and in a memory-safe language, a library cannot read arbitrary memory. There is no real need to isolate each library in a separate address space. In my opinion, using capabilities for memory access would be a great improvement, similar to barrelfish. OCaml has a C foreign function interface which can be used to read arbitrary memory -- you have to take care that all C bits of the system are not malicious (fortunately, it is difficult to embed C code into MirageOS, thus only few bits written in C are in MirageOS, such as (loop- and allocation-free) crypto primitives). To further read up on the topic, there is a nice article about security.

+

This website is 12MB in size (and I didn't even bother to strip it yet), which includes the static CSS and JavaScript (bootstrap, jquery, fonts), HTTP, TLS (also X.509, ASN.1, crypto), git (and irmin), and TCP/IP libraries. The memory management in MirageOS is straightforward: the hypervisor provides the OCaml runtime with a chunk of memory, and the runtime immediately takes all of it.

+

This is much simpler to configure and deploy than a UNIX operating system: there is no virtual memory, no process management, no file system (the markdown content is held in memory with irmin!), and no user management in the image.

+

At compile (configuration) time, the TLS keys are baked into the image, in addition to the url of the remote git repository, and the IPv4 address and ports the image should use. The full command line for configuring this website is: mirage configure --no-opam --xen -i Posts -n "full stack engineer" -r git://git.robur.io/hannes/hannes.robur.coop.git --dhcp false --network 0 --ip 198.167.222.205 --netmask 255.255.255.0 --gateways 198.167.222.1 --tls 443 --port 80. It relies on the fact that the TLS certificate chain and private key are in the tls/ subdirectory, which is transformed into code and included in the image (using crunch). An improvement would be to use an ELF section, but there is no code for that yet. After configuring and installing the required dependencies, a make builds the statically linked image.

+

Deployment is done via xl create canopy.xl. The file canopy.xl is automatically generated by mirage configure (but might need modifications). It contains the full path to the image, the name of the bridge interface, and how much memory the image can use:

+
name = 'canopy'
+kernel = 'mir-canopy.xen'
+builder = 'linux'
+memory = 256
+on_crash = 'preserve'
+vif = [ 'bridge=br0' ]
+
+

To rephrase: instead of running on a multi-purpose operating system including processes, file system, etc., this website uses a set of libraries, which are compiled and statically linked into the virtual machine image.

+

MirageOS uses the module system of OCaml to define interfaces, thus an application developer does not need to care whether they are using the TCP/IP stack written in OCaml, or the sockets API of a UNIX operating system. This also allows you to compile and debug your library on UNIX using off-the-shelf tools before deploying it as a virtual machine (NB: this is a lie, since there is code which is only executed when running on Xen, and this code can be buggy) ;).

+

Most of the MirageOS ecosystem is developed under MIT/ISC/BSD licenses, which allows everybody to use it for whichever project they want.

+

Did I mention that by using less code the attack surface shrinks? In addition to that, using a memory-safe programming language, where the developer does not need to care about memory management and bounds checks, immediately removes several classes of security problems (namely spatial and temporal memory issues), once the runtime is trusted. The OCaml runtime was reviewed by the French Agence nationale de la sécurité des systèmes d’information in 2013, leading to some changes, such as the separation of immutable strings (String) from mutable byte vectors (Bytes).

+

The attack surface is still big enough: logical issues, resource management, and there is no access control. This website does not need access control; publishing of content is protected by relying on GitHub's access control.

+

I hope I gave some insight into what the purpose of an operating system is, and how MirageOS fits into the picture. I'm interested in feedback, either via twitter or via eMail.

+

Other updates in the MirageOS ecosystem

+
  • this website is based on Canopy, the content is stored as markdown in a git repository
  • it was running in a FreeBSD jail, but when I compiled too much, the underlying zfs file system wasn't happy (and is now hanging in kernel space in a read)
  • no remote power switch (lent to a friend 3 weeks ago), and nobody was willing to go to the data centre and reboot
  • I wanted to move it anyway to a host where I can deploy Xen guest VMs
  • it turns out the Xen compilation and deployment mode needed some love
  • I was travelling
  • good news: it now works on Xen, and there is an atom feed
  • life of an "eat your own dogfood" full stack engineer ;)

The Bitcoin Piñata - no candy for you

Written by hannes
Classified under: mirageossecuritybitcoin
Published: 2018-04-18 (last updated: 2021-11-19)

History

+

On February 10th 2015, David Kaloper-Meršinjak and Hannes Mehnert launched (read also Amir's description) our bug bounty program in the form of our Bitcoin Piñata MirageOS unikernel. Thanks again to IPredator for both hosting our services and lending us the 10 Bitcoins! We analysed it a bit more in depth after running it for five months. Mindy recently wrote about whacking the Bitcoin Piñata.

+

On March 18th 2018, after more than three years, IPredator, the lender of the Bitcoins, repurposed the 10 Bitcoins for other projects. Initially, we thought that the Piñata would maybe run for a month or two, but IPredator, David, and I decided to keep it running. The update of the Piñata's bounty is a good opportunity to reflect on the project.

+

The 10 Bitcoins in the Piñata fluctuated in price over time, at peak worth 165000€.

+

From the start of the Piñata project, we published the source code, the virtual machine image, and the versions of the used libraries in a git repository. Everybody could develop their exploits locally before launching them against our Piñata. The Piñata provides TLS endpoints, which require private keys and certificates. These are generated by the Piñata at startup, and the secret for the Bitcoin wallet is provided as a command line argument.

+

Initially the Piñata was deployed on a Linux/Xen machine, later it was migrated to a FreeBSD host using BHyve and VirtIO with solo5, and in December 2017 it was migrated to native BHyve (using ukvm-bin and solo5). We also changed the Piñata code to accommodate updates, such as the MirageOS 3.0 release, and the discontinuation of floating point numbers for timestamps (asn1-combinators 0.2.0, x509 0.6.0, tls 0.9.0).

+

Motivation

+

We built the Piñata for many purposes: to attract security professionals to evaluate our from-scratch developed TLS stack, to gather empirical data for our Usenix Security 15 paper, and as an improvement to current bug bounty programs.

+

Most bug bounty programs require communication via forms and long wait times for human experts to evaluate the potential bug. This evaluation is subjective and opaque, and often requires the signing of non-disclosure agreements (NDA), even before the evaluation starts.

+

Our Piñata automates these parts, getting rid of wait times and NDAs. To get the private wallet key that holds the bounty, you need to successfully establish an authenticated TLS session, or find a flaw elsewhere in the stack which allows reading arbitrary memory. Everyone can track the transactions on the blockchain and see if the wallet still contains the bounty.

+

Of course, the Piñata can't prove that our stack is secure, and it can't prove that the key giving access to the wallet is actually inside. But trust us, it is!

+

Observations

+

I still remember vividly the first nights in February 2015, being so nervous that I woke up every two hours and checked the blockchain. Did the Piñata still have the Bitcoins? I was familiar with the code of the Piñata and was afraid there might be a bug which allows bypassing authentication or leaking the private key. So far, this doesn't seem to be the case.

+

In April 2016 we stumbled upon an information disclosure in the virtual network device driver for Xen in MirageOS. Given enough bandwidth, this could have been used to access the private wallet key. We upgraded the Piñata and released the MirageOS Security Advisory 00.

+

We analysed the access logs to the Piñata and bucketed them into website traffic and bounty connections. We are still wondering what happened in July 2015 and July 2017, where the graph shows spikes. Could it be a presentation mentioning the Piñata, or a new automated tool which tests for TLS vulnerabilities, or an increase in the market price of Bitcoin?

+

Piñata access Piñata access cumulative

+

The cumulative graph shows more than 500,000 accesses to the Piñata website, and more than 150,000 attempts at connecting to the Piñata bounty.

+

You can short-circuit the client and server Piñata endpoint and observe the private wallet key being transferred on your computer, TLS encrypted with the secret exchanged by client and server, using socat -x TCP:ownme.ipredator.se:10000 TCP:ownme.ipredator.se:10002.

+

If you attempted to exploit the Piñata, please let us know what you tried! Via twitter, hannesm@mastodon.social, or via eMail.

+

Since the start of 2018 we are developing robust software and systems at robur. If you like our work and want to support us with donations or development contracts, please get in touch with team@robur.io. Robur is a project of the German non-profit Center for the cultivation of technology. Donations to robur are tax-deductible in Europe.

+

Reproducible MirageOS unikernel builds

Written by hannes
Published: 2019-12-16 (last updated: 2021-11-19)

Reproducible builds summit

+

I'm just back from the Reproducible Builds summit 2019. In 2018, several people developing OCaml, opam, and MirageOS attended the Reproducible Builds summit in Paris. The notes from last year on opam reproducibility and MirageOS reproducibility are online. After last year's workshop, Raja started developing the opam reproducibility builder orb, which I extended at and after this year's summit. This year, there were hacking days before and after the facilitated summit, which allowed for further interaction with participants, writing some code, and conducting experiments. I again had an exciting time at the summit and hacking days; thanks to our hosts, organisers, and all participants.

+

Goal

+

Stepping back a bit, let's first look at the goal of reproducible builds: when compiling source code multiple times, the produced binaries should be identical. It would be sufficient if the binaries were behaviourally equal, but this is pretty hard to check. It is much easier to check bit-wise identity of binaries, which relaxes the burden on the checker -- checking for reproducibility is reduced to computing the hash of the binaries. Let's stick to the bit-wise-identical-binary definition, which also means software developers have to avoid non-determinism during compilation in their toolchains, dependent libraries, and developed code.
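In that reduced form, the check is just a digest comparison; a minimal sketch (using the stdlib's MD5-based Digest module purely for illustration, real setups use stronger hashes and the binary contents are invented here):

```ocaml
(* Two builds are considered reproducible iff their bytes hash
   identically; a single differing byte (say, an embedded build
   timestamp) breaks it. *)
let reproducible build_a build_b =
  Digest.equal (Digest.string build_a) (Digest.string build_b)

let () =
  Printf.printf "%b %b\n"
    (reproducible "\x7fELF...binary" "\x7fELF...binary")
    (reproducible "\x7fELF...binary" "\x7fELF...binary 2019-12-16")
```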


A checklist of potential things leading to non-determinism has been written up by the reproducible builds project. Examples include recording the build timestamp into the binary, and the ordering of code and embedded data. The reproducible builds project also developed disorderfs for testing reproducibility and diffoscope for comparing binaries with file-dependent readers, falling back to objdump and hexdump. A giant test infrastructure with lots of variations between the builds, mostly using Debian, has been set up over the years.


Reproducibility is a precondition for trustworthy binaries. See why does it matter. If there are no instructions how to get from the published sources to the exact binary, why should anyone trust and use the binary which claims to be the result of the sources? It may as well contain different code, including a backdoor, bitcoin mining code, or code outputting wrong results for specific inputs. Reproducibility does not imply the software is free of security issues or backdoors, but instead of an audit of the binary - which is tedious and rarely done - the source code can be audited. The toolchain (compiler, linker, ..) used for compilation still needs to be taken into account, i.e. trusted or audited to not be malicious. I will only ever publish binaries if they are reproducible.


My main interest at the summit was to enhance existing tooling and conduct some experiments about the reproducibility of MirageOS unikernels -- a unikernel is a statically linked ELF binary to be run as Unix process or virtual machine. MirageOS heavily uses OCaml and opam, the OCaml package manager, and is an opam package itself. Thus, checking reproducibility of a MirageOS unikernel is the same problem as checking reproducibility of an opam package.


Reproducible builds with opam


Testing for reproducibility is achieved by taking the sources and compiling them twice independently; afterwards the equality of the resulting binaries can be checked. In trivial projects, the source is just a single file, or originates from a single tarball. In OCaml, opam uses a community repository where OCaml developers publish their package releases to, but it can also use custom repositories, and in addition pin packages to git remotes (a url including branch or commit), or to a directory on the local filesystem. Manually tracking and updating all dependent packages of a MirageOS unikernel is not feasible: our hello-world compiled for hvt (KVM/bhyve) already has 79 opam dependencies, including the OCaml compiler, which is distributed as an opam package. The unikernel serving this website depends on 175 opam packages.


Conceptually there should be two tools: the initial builder, which takes the latest opam packages which do not conflict, and exports the exact package versions used during the build, as well as hashes of the produced binaries. The other tool is a rebuilder, which imports the export, conducts a build, and outputs the hashes of the produced binaries.


Opam has the concept of a switch, which is an environment where a package set is installed. Switches are independent of each other, and can already be exported and imported. Unfortunately the export is incomplete: if a package includes additional patches as part of the repository -- sometimes needed for fixing releases where the actual author or maintainer of a package responds slowly -- these patches do not end up in the export. Also, if a package is pinned to a git branch, the branch appears in the export, but it may change over time by pushing more commits or even force-pushing to that branch. In PR #4040 (under discussion and review), also developed during the summit, I propose to embed the additional files as base64 encoded values in the opam file. To solve the latter issue, I modified the export mechanism to embed the git commit hash (PR #4055), and to avoid sources from a local directory or without a checksum.


So the opam export contains the information required to gather the exact same sources and build instructions of the opam packages. If the opam repository were self-contained (i.e. did not depend on any other tools), this would be sufficient. But opam does not run in thin air: it requires some system utilities such as /bin/sh, sed, GNU make, commonly git, a C compiler, a linker, an assembler. Since opam is available on various operating systems, the plugin depext handles host system dependencies, e.g. if your opam package requires gmp to be installed, this requires slightly different package names depending on the host system or distribution; take a look at conf-gmp. This also means opam has rather good information about both the opam dependencies and the host system dependencies for each package. Please note that the host system packages used during compilation are not yet recorded (i.e. which gmp package was installed and used during the build, only that a gmp package has to be installed). The base utilities mentioned above (C compiler, linker, shell) are also not recorded yet.
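To illustrate what such a host-dependency mapping looks like, here is a hand-written sketch of how an opam-level gmp dependency resolves to distribution-specific package names; the names reflect my reading of conf-gmp's depext lists and may lag behind the actual repository:

```shell
# Sketch: map the opam-level gmp dependency to a host package name,
# roughly what depext does internally. "os" is hard-coded here;
# depext detects the host system itself.
os=debian
case "$os" in
  debian|ubuntu) pkg=libgmp-dev ;;
  alpine)        pkg=gmp-dev ;;
  freebsd)       pkg=gmp ;;
  *)             pkg=unknown ;;
esac
echo "gmp on $os -> host package $pkg"
```

Note this only names the package to install; the exact version that ended up being used during the build is what remains unrecorded.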


Operating system information available in opam (such as architecture, distribution, version), which in some cases maps to exact base utilities, is recorded in the build-environment, a separate artifact. The environment variable SOURCE_DATE_EPOCH, used for communicating the same timestamp when software is required to record a timestamp into the resulting binary, is also captured in the build environment.


Additional environment variables may be captured or used by opam packages to produce different output. To avoid this, both the initial builder and the rebuilder are run with minimal environment variables: only PATH (normalised to a whitelist of /bin, /usr/bin, /usr/local/bin and /opt/bin) and HOME are defined. Information missing at the moment includes CPU features: some libraries (gmp?, nocrypto) emit different code depending on CPU features.
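A minimal sketch of this normalisation, using env -i to drop every variable except a fixed PATH and HOME (orb's actual implementation may differ in detail):

```shell
# Run a command with only PATH and HOME defined; everything else
# (LANG, LC_*, EDITOR, ...) is stripped, so it cannot leak into the build.
env -i \
  PATH=/bin:/usr/bin:/usr/local/bin:/opt/bin \
  HOME="$HOME" \
  sh -c 'echo "PATH=$PATH"; echo "LANG=${LANG:-unset}"'
```

Inside the spawned shell, LANG reports as unset regardless of the caller's locale settings.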


Tooling


TL;DR: A build builds an opam package, and outputs .opam-switch, .build-hashes.N, and .build-environment.N. A rebuild uses these artifacts as input, builds the package and outputs another .build-hashes.M and .build-environment.M.


The command-line utility orb can be installed and used:

$ opam pin add orb git+https://github.com/hannesm/orb.git#active
$ orb build --twice --keep-build-dir --diffoscope <your-favourite-opam-package>

It provides two subcommands, build and rebuild. The build command takes a list of opam repositories (--repos) where to take opam packages from (defaults to default), a compiler (either a variant --compiler=4.09.0+flambda, a version --compiler=4.06.0, or a pin to a local development version --compiler-pin=~/ocaml), and optionally an existing switch --use-switch. It creates a switch, builds the packages, and emits the opam export, hashes of all files installed by these packages, and the build environment. The flag --keep-build retains the build products; opam's --keep-build-dir in addition keeps temporary build products and generated source code. If --twice is provided, a rebuild (described next) is executed after the initial build.


The rebuild command takes a directory with the opam export and build environment to build the opam package. It first compares the build-environment with the host system, sets the SOURCE_DATE_EPOCH and switch location accordingly and executes the import. Once the build is finished, it compares the hashes of the resulting files with the previous run. On divergence, if build directories were kept in the previous build, and if diffoscope is available and --diffoscope was provided, diffoscope is run on the diverging files. If --keep-build-dir was provided as well, diff -ur can be used to compare the temporary build and sources, including build logs.


The builds are run in parallel, as opam usually does; in my experiments this parallelism did not lead to different binaries.


Results and discussion


All MirageOS unikernels I have deployed are reproducible \o/. Also, several binaries such as orb itself, opam, solo5-hvt, and all albatross utilities are reproducible.


The unikernels range from hello world and web servers (e.g. this blog, getting its data on startup via a git clone into memory) to authoritative DNS servers and a CalDAV server. They vary in size between 79 and 200 opam packages, resulting in ELF binaries between 2MB and 16MB (including debug symbols). The unikernel opam repository contains some reproducible unikernels used for testing. Some work-in-progress enhancements are needed to achieve this:


At the moment, the opam package of a MirageOS unikernel is automatically generated by mirage configure, but only used for tracking opam dependencies. I worked on mirage PR #1022 to extend the generated opam package with build and install instructions.


If a locale is set, ocamlgraph needs to be patched to not emit a (locale-dependent) timestamp (more on this below).


The OCaml program crunch embeds a subdirectory as OCaml code into a binary, which we use in MirageOS quite regularly for static assets, etc. This plays into reproducibility in several ways: on the one hand, it needs a timestamp for its last_modified functionality (and since June 2018 adheres to the SOURCE_DATE_EPOCH spec, thanks to Xavier Clerc). On the other hand, before version 3.2.0 (released Dec 14th) it used hashtables for storing the file contents, where iteration order is not deterministic (the insertion is not sorted); fixed in PR #51 by using a Map instead.
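As a concrete illustration of the SOURCE_DATE_EPOCH convention (the epoch value below is an arbitrary fixed instant, not anything crunch-specific): a build wrapper exports one value, and every SOURCE_DATE_EPOCH-aware tool embeds that instant instead of "now".

```shell
# Pin the timestamp that SOURCE_DATE_EPOCH-aware tools will embed.
export SOURCE_DATE_EPOCH=1576454400   # 2019-12-16T00:00:00Z

# Show the pinned instant; GNU date uses -d "@...", BSD date uses -r.
date -u -d "@$SOURCE_DATE_EPOCH" 2>/dev/null || date -u -r "$SOURCE_DATE_EPOCH"
```

Two builds run with the same value then agree on every embedded "build time", no matter when they actually ran.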


Functoria, a tool used to configure MirageOS devices and their dependencies, can emit a list of the opam packages which were required to build the unikernel. This used opam list --required-by --installed --rec <pkgs>, which relies on the cudf graph (thanks to Raja for the explanation) and was dropping some packages during the rebuild. PR #189 avoids this by not using the --rec argument, but manually computing the fixpoint.


Certainly, the choice of environment variables, and whether to vary them (as Debian does) or to not define them (or normalise them) while building, is arguable. Since MirageOS supports neither time zones nor internationalisation, there is no need to solve this issue prematurely. On a related note, even with different locale settings, MirageOS unikernels are reproducible apart from an issue in ocamlgraph #90, which embeds the output of date; this differs depending on LANG and locale (LC_*) settings.


Prior art in reproducible MirageOS unikernels is the mirage-qubes-firewall, which has been reproducible since early 2017. Their approach is different: they build in a docker container with the opam repository pinned to an exact git commit.


Further work


I only tested a certain subset of opam packages and MirageOS unikernels, mainly on a single machine (my laptop) running FreeBSD, and I'd be happy if others tested reproducibility of their OCaml programs with the tools provided. There could as well be CI machines rebuilding opam packages and reporting results to a central repository. I'm pretty sure there are more reproducibility issues in the opam ecosystem. I developed a reproducible testing opam repository with opam packages that do not depend on OCaml, mainly for further tooling development. Some tests were also conducted on a Debian system with the same result. The variations, apart from build time, were using a different user, and different locale settings.


As mentioned above, more environment information, such as CPU features and external system packages, should be captured in the build environment.


When comparing OCaml libraries, some output files (cmt / cmti / cma / cmxa) are not deterministic, but contain minimal divergences where I was not able to spot the root cause. It would be great to fix this, likely in the OCaml compiler distribution. Since the final result, the binary I'm interested in, is not affected by non-identical intermediate build products, I hope someone (you?) is interested in improving on this side. OCaml bytecode output also seems to be non-deterministic. There is a discussion on the coq issue tracker which may be related.


In contrast to initial plans, I did not use the BUILD_PATH_PREFIX_MAP environment variable, which is implemented in OCaml by PR #1515 (and followups). The main reasons are that something in the OCaml toolchain (I suspect the bytecode interpreter) needed absolute paths to find libraries, thus I'd need a symlink from the left-hand side to the current build directory, which was tedious. Also, my installed assembler does not respect the build path prefix map, and BUILD_PATH_PREFIX_MAP is not widely supported. See e.g. the Debian zarith package with different build paths and its effects on the binary.


I'm fine with recording the build path (switch location) in the build environment for now - it turns out to end up only once in MirageOS unikernels, likely via the last linking step, which will hopefully soon be solved by llvm 9.0.


It was fun to compare the unikernel built on Linux with gcc against a build on FreeBSD with clang and lld - spoiler: they emit debug sections with different dwarf versions, and the difference is pretty big. Other fun differences were between OCaml compiler versions: the difference between minor versions (4.08.0 vs 4.08.1) is pretty small (~100kB as human-readable diff), while the difference between major versions (4.08.1 vs 4.09.0) is rather big (~900kB as human-readable diff).


An item on my list for the future is to distribute the opam export, build hashes, and build environment artifacts in an authenticated way. I want to integrate this, in-toto style, into conex, my not-yet-deployed implementation of tuf for opam, which needs further development and a test installation, hopefully in 2020.


If you want to support our work on MirageOS unikernels, please donate to robur. I'm interested in feedback, either via twitter, hannesm@mastodon.social or via eMail.


Minimising the virtual machine monitor

Written by hannes
Classified under: futuremirageossecurity
Published: 2016-07-02 (last updated: 2021-11-19)
  • Update (2016-10-19): all has been merged upstream now!
  • Update (2016-10-30): static_website_tls works (TLS, HTTP, network via tap device)!
  • Update (2017-02-23): no more extra remotes, Mirage3 is released!

What?


As described earlier, MirageOS is a library operating system developed in OCaml. The code size is already pretty small, deployments are so far either as a UNIX binary or as a Xen virtual machine.


Xen is a hypervisor, providing concrete device drivers for the actual hardware of a physical machine, memory management, scheduling, etc. The initial release of Xen was done in 2003; since then the code size and complexity of Xen have been growing. It also has various different mechanisms for virtualisation, hardware assisted or purely software based, some where the guest operating system needs to cooperate, others where it does not.


Since 2005, Intel CPUs (as well as AMD CPUs) provide hardware assistance for virtualisation (the VT-x extension); since 2008 extended page tables (EPT) have been around, which allow a guest to safely access the MMU. Those features gave rise to much smaller hypervisors, such as KVM (mainly Linux), bhyve (FreeBSD), xhyve (MacOSX), and vmm (OpenBSD), which do not need to emulate the MMU and other things in software. The boot sequence in those hypervisors uses kexec or multiboot, instead of doing all the 16 bit, 32 bit, 64 bit mode changes manually.


MirageOS initially targeted only Xen; in 2015 there was a port to use rumpkernel (a modularised NetBSD), and in 2016 solo5 emerged, on which you can run MirageOS. Solo5 comes in two shapes: either as ukvm on top of KVM, or as a multiboot image using virtio interfaces (block and network, plus a serial console). Solo5 is only ~1000 lines of code (plus dlmalloc), and ISC-licensed.


A recent paper describes the advantages of a tiny virtual machine monitor in detail, namely no more venom-like security issues since there is no legacy hardware being emulated. Also, each virtual machine monitor can be customised to the unikernel running on top of it: if the unikernel does not need a block device, the monitor shouldn't contain code for it. The idea is to have one customised monitor for each unikernel.


While lots of people seem to like KVM and Linux, I still prefer FreeBSD, its jails, and nowadays bhyve. Thanks to various cleanups to the solo5 code base, I finally found some time to look into porting solo5 to FreeBSD/bhyve. It runs and can output to the console.


How?


These instructions are still slightly bumpy. If you have FreeBSD with bhyve (I use FreeBSD-CURRENT), and OCaml and opam (>=1.2.2) installed, it is pretty straightforward to get solo5 running. First, I'd suggest using a fresh opam switch in case you work on other OCaml projects: opam switch -A 4.04.0 solo5 (followed by eval `opam config env` to set up some environment variables).


You need some software from the ports: devel/pkgconf, devel/gmake, devel/binutils, and sysutils/grub2-bhyve.


An opam install mirage mirage-logs solo5-kernel-virtio mirage-bootvar-solo5 mirage-solo5 should provide you with a basic set of libraries.


Now you can get the mirage-skeleton repository, and inside of device-usage/console, run mirage configure --no-opam --virtio followed by make. There should be a resulting mir-console.virtio.


Once that is in place, start your VM:

sudo grub-bhyve -M 128M console
> multiboot (host)/home/hannes/mirage-skeleton/console/mir-console.virtio
> boot

sudo bhyve -A -H -P -s 0:0,hostbridge -s 1:0,lpc -l com1,stdio -m 128M console

The following output will appear on your controlling terminal:

            |      ___|
  __|  _ \  |  _ \ __ \
\__ \ (   | | (   |  ) |
____/\___/ _|\___/____/
multiboot: Using memory: 0x100000 - 0x8000000
TSC frequency estimate is 2593803000 Hz
Solo5: new bindings
STUB: getenv() called
hello
world
hello
world
hello
world
hello
world
solo5_app_main() returned with 0
Kernel done.
Goodbye!

The network and TLS stacks work as well (tested October 30th).


Open issues

  • I'm not happy to require ld from the ports (but the one in base does not produce sensible binaries; -z max-page-size=0x1000 related)
  • Via twitter, bhyve devs suggested to port ukvm to ubhyve. This is a great idea: no longer depending on virtio, and getting more speed. Any takers?
  • Debugging via gdb should be doable somehow; bhyve has some support for gdb, but it is unclear to me what I need to do to enter the debugger (busy looping in the VM and a gdb remote to the port opened by bhyve does not work).

Conclusion


I managed to get solo5 to work with bhyve. I even use clang instead of gcc and don't need to link libgcc.a. :) It is great to see further development in hypervisors and virtual machine monitors. Special thanks to Martin Lucina for getting things sorted.


I'm interested in feedback, either via twitter or via eMail.


Other updates in the MirageOS ecosystem


There were some busy times; several pull requests are still waiting to be merged (e.g. some cosmetics in mirage as preconditions for treemaps and dependency diagrams). I proposed to use sleep_ns : int64 -> unit io instead of sleep : float -> unit io (nobody wants floating point numbers); there is also an RFC for random, and Matt Gray proposed to get rid of CLOCK (and have a PCLOCK and an MCLOCK instead). Soon there will be a major MirageOS release which breaks all the previous unikernels! :)


Summer 2019

Written by hannes
Published: 2019-07-08 (last updated: 2021-11-19)

Working at robur


As announced previously, I started to work at robur in early 2018. We're a collective of five people, distributed around Europe and the US, with the goal of deploying MirageOS unikernels. We do this by developing bespoke MirageOS unikernels which provide useful services, and by deploying them for ourselves. We also develop new libraries, and enhance existing ones and other components of MirageOS. Example unikernels include our website which uses Canopy, a CalDAV server that stores entries in a git remote, and DNS servers (the latter two are further described below).


Robur is part of the non-profit company Center for the Cultivation of Technology, who manage the legal and administrative sides for us. We are ourselves responsible for acquiring funding to pay ourselves reasonable salaries. We received funding for CalDAV from prototypefund and further funding from Tarides, for TLS 1.3 from OCaml Labs; we security-audited an OCaml codebase, and received donations, also in the form of Bitcoins. We're looking for further funded collaborations and also contracting; mail us at team@robur.io. Please donate (tax-deductible in the EU), so we can accomplish our goal of putting robust and sustainable MirageOS unikernels into production, replacing insecure legacy systems that emit tons of CO2.


Deploying MirageOS unikernels


While several examples have been running for years (the MirageOS website, the Bitcoin Piñata, the TLS demo server, etc.), and some shell scripts for cloud providers are floating around, deployment is not (yet) streamlined.


Service deployment is complex: you have to consider its configuration, exfiltration of logs and metrics, provisioning with valid key material (TLS certificate, hmac shared secret) and authenticators (CA certificate, ssh key fingerprint). Instead of requiring millions of lines of code for orchestration (such as Kubernetes), creating the images (docker), or provisioning (ansible), why not minimise the required configuration and dependencies?


Earlier in this blog I introduced Albatross, which serves, in an enhanced version, as our deployment platform on a physical machine (running 15 unikernels at the moment); I won't discuss it in more detail in this article.


CalDAV


Steffi and I developed a CalDAV server in 2018. Since November 2018 we have had a test installation for robur, initially running as a Unix process on a virtual machine and persisting data to files on disk. Mid-June 2019 we migrated it to a MirageOS unikernel; thanks to great efforts in git and irmin, unikernels can push to a remote git repository. We extended the ssh library with an ssh client and use this in git. This also means our CalDAV server is completely immutable (it does not carry state across reboots, apart from the data in the remote repository) and does not have persistent state in the form of a block device. Its configuration is mainly done at compile time by the selection of libraries (syslog, monitoring, ...), and by boot arguments passed to the unikernel at startup.


We monitored the resource usage when migrating our CalDAV server from a Unix process to a MirageOS unikernel. The unikernel size is just below 10MB. The workload is some clients communicating with the server on a regular basis. We use Grafana with an influx time series database to monitor virtual machines. Data is collected on the host system (rusage sysctl, kinfo_mem sysctl, ifdata sysctl, vm_get_stats bhyve statistics), and our unikernels these days emit further metrics (mostly counters: gc statistics, malloc statistics, tcp sessions, http requests and status codes).



Please note that memory usage (upper right) and vm exits (lower right) use logarithmic scale. The CPU usage reduced by more than a factor of 4. The memory usage dropped by a factor of 25, and the network traffic increased - previously we stored log messages on the virtual machine itself, now we send them to a dedicated log host.


A MirageOS unikernel, apart from a smaller attack surface, indeed uses fewer resources and actually emits less CO2 than the same service on a Unix virtual machine. So we're doing something good for the environment! :)


Our calendar server contains at the moment 63 events, the git repository had around 500 commits in the past month: nearly all of them from the CalDAV server itself when a client modified data via CalDAV, and two manual commits: the initial data imported from the file system, and one commit for fixing a bug of the encoder in our icalendar library.


Our CalDAV implementation is very basic: scheduling and adding attendees (which requires sending out eMail) are not supported. But it works well for us; we have individual calendars and a shared one which everyone can write to. On the client side we use macOS and iOS iCalendar, Android DAVdroid, and Thunderbird. If you would like to try our CalDAV server, have a look at our installation instructions. Please report an issue if you find bugs or struggle with the installation.


DNS


There has been more work on our DNS implementation, now here. We included a DNS client library, and some example unikernels are available. They as well require our opam repository overlay. Please report issues if you run into trouble while experimenting with that.


Most prominent is primary-git, a unikernel which acts as a primary authoritative DNS server (UDP and TCP). On startup, it fetches a remote git repository that contains zone files and shared hmac secrets. The zones are served, and secondary servers are notified with the respective serial numbers of the zones, authenticated using TSIG with the shared secrets. The primary server provides dynamic in-protocol updates of DNS resource records (nsupdate), and after successful authentication pushes the change to the remote git. To change a zone, you can just edit the zone file and push to the git remote - with the proper pre- and post-commit-hooks, an authenticated notify is sent to the primary server, which then pulls the git remote.
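As a sketch, such an in-protocol update could be driven by the standard nsupdate utility; server address, zone, record, key name, and secret below are all placeholders:

```shell
# Write an RFC 2136 dynamic update script: add an A record to the zone.
cat > update.txt <<'EOF'
server 192.0.2.53
zone example.com.
update add new.example.com. 3600 A 192.0.2.80
send
EOF

# Authenticate with TSIG (hmac-sha256; key name and base64 secret are
# placeholders). The primary verifies the signature, applies the update,
# and pushes the change to its git remote.
# nsupdate -y hmac-sha256:mykey:c2VjcmV0Cg== update.txt
```

The invocation itself is commented out here since it requires a reachable primary server.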


Another noteworthy unikernel is letsencrypt, which acts as a secondary server; whenever a TLSA record with custom type (0xFF) containing a DER-encoded certificate signing request is observed, it requests a signature from letsencrypt by solving the DNS challenge. The certificate is pushed to the DNS server as a TLSA record as well. The DNS implementation provides ocertify and dns-mirage-certify, which use the above mechanism to retrieve valid let's encrypt certificates. The caller (unikernel or Unix command-line utility) either takes a private key directly or generates one from a (provided) seed, and generates a certificate signing request. It then looks in DNS for a certificate which is still valid and matches the public key and the hostname. If such a certificate is not present, the certificate signing request is pushed to DNS (via the nsupdate protocol), authenticated using TSIG with a given secret. This way our public facing unikernels (website, this blog, TLS demo server, ..) block on startup until they have obtained a certificate via DNS - we avoid embedding the certificate into the unikernel image.


Monitoring


We like to gather statistics about the resource usage of our unikernels to find potential bottlenecks and observe memory leaks ;) The base for the setup is the metrics library, which is similar in design to the logs library: libraries use the core to gather metrics. A different aspect is the reporter, which is globally registered and responsible for exfiltrating the data via its favourite protocol. If no reporter is registered, the work overhead is negligible.



This is a dashboard which combines statistics gathered from the host system and various metrics from the MirageOS unikernel. The monitoring branch of our opam repository overlay is used together with monitoring-experiments. The logs error counter (middle right) was caused by the icalendar parser trying to parse its own badly emitted ics (the bug is now fixed; the dashboard is from last month).


OCaml libraries


The domain-name library was developed to handle RFC 1035 domain names and host names. It initially was part of the DNS code, but is now freestanding to be used in other core libraries (such as ipaddr) with a small dependency footprint.


The GADT map is a normal OCaml Map structure, but takes key-dependent value types by using a GADT. This library was also part of DNS, but is more broadly useful: we already use it in our icalendar library (the data format for calendar entries in CalDAV), our OpenVPN configuration parser uses it as well, and also x509 - which got reworked quite a bit recently (release pending), and there's preliminary PKCS12 support (which deserves its own article). TLS 1.3 is available on a branch, but is not yet merged. More work is underway, hopefully with sufficient time to write more articles about it.


Conclusion


More projects are happening as we speak; it takes time to upstream all the changes, such as monitoring, new core libraries, getting our DNS implementation released, pushing Conex into production, more features such as DNSSec, ...


I'm interested in feedback, either via twitter, hannesm@mastodon.social, or via eMail.


Exfiltrating log data using syslog

Written by hannes
Classified under: mirageosprotocollogging
Published: 2016-11-05 (last updated: 2021-11-19)

It has been a while since my last entry... I've been busy working on too many projects in parallel, and was also travelling on several continents. I hope to get back to a biweekly cycle.


What is syslog?


According to Wikipedia, syslog is a standard for message logging. Syslog permits separation of the software which generates, stores, reports, and analyses the message. A syslog message contains at least a timestamp, a facility, and a severity. It was initially specified in RFC 3164, though usage predates this RFC.
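To make facility and severity concrete: on the wire, both are packed into the PRI value that leads every syslog message, computed as facility * 8 + severity (the same rule in RFC 3164 and RFC 5424). A tiny sketch:

```shell
facility=20   # local4
severity=5    # notice
pri=$((facility * 8 + severity))
printf '<%d>myhost myapp: hello\n' "$pri"   # a minimal RFC 3164-style line
```

With facility local4 and severity notice, this yields the PRI 165 seen in many syslog examples.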


For a unikernel, which likely won't have any persistent storage, syslog is a way to emit log messages (HTTP access log, debug messages, ...) via the network, and defer the persistency problem to some other service.


Lots of programming languages have logger libraries, which reflect the different severities of syslog roughly as log levels (debug, informational, warning, error, fatal). OCaml has had one since the beginning of 2016: the Logs library, which separates log message generation from reporting - the closure producing the log string is only evaluated if there is a reporter which needs to send it out. Additionally, the reporter can extend the message with the log source name, a timestamp, etc.


The Logs library is slowly being adopted by the MirageOS community (you can see an incomplete list here); there are reporters available which integrate into Apple System Log, Windows event log, and also the MirageOS console. There is a command-line argument interface to set the log levels of your individual sources, which is pretty neat. For debugging and running on Unix, console output is usually sufficient, but for production usage, having a console in some screen or tmux, or dumped to a file, is usually annoying.


Gladly there was already the syslog-message library, which encodes and decodes syslog messages between the wire format and a typed representation. I plugged those together and implemented a reporter. The simplest one emits each log message via UDP to a log collector. All reporters contain a socket and handle socket errors themselves (trying to recover) - your application (or unikernel) shouldn't fail just because the log collector is currently offline.


The setup for Unix is straightforward:

Logs.set_reporter (udp_reporter (Unix.inet_addr_of_string "127.0.0.1") ())

It will report all log messages (you have to set the log level yourself; it defaults to warning) to your local syslog. You might already have a collector listening on your host: look in netstat -an for UDP port 514 (and in your /etc/syslog.conf to see where log messages are routed to).

+

You can even do this from the OCaml toplevel (after opam install logs-syslog):

+
$ utop
+# #require "logs-syslog.unix";;
+# Logs.set_reporter (Logs_syslog_unix.udp_reporter (Unix.inet_addr_of_string "127.0.0.1") ());;
+# Logs.app (fun m -> m "hello, syslog world");;
+
+

I configured my syslog to route all informational messages to /var/log/info.log; you can also try Logs.err (fun m -> m "err");; and look into your /var/log/messages.
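For reference, such a routing rule in /etc/syslog.conf looks roughly like this (a sketch; the selector syntax varies slightly between syslogd implementations, and syslogd needs a reload after editing):

```
*.info      /var/log/info.log
```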

+

This is a good first step, but we want more: on the one hand integration into MirageOS, on the other a more reliable log stream (and what about authentication and encryption?). I'll cover both topics in the rest of this article.

+

MirageOS integration

+

Since Mirage3, syslog is integrated (see +documentation). +Some additions to your config.ml are needed, see ns +example or +marrakech +example.

+
let logger =
+  syslog_udp (* or _tcp or _tls *)
+    (syslog_config ~truncate:1484 "my_first_unikernel"
+       (Ipaddr.V4.of_string_exn "10.0.0.1")) (* your log host *)
+    stack
+
+let () =
+  register "my_first_unikernel" [
+    foreign ~deps:[abstract logger]
+    ...
+
+

Reliable syslog

+

The old BSD syslog RFC is obsoleted by RFC 5424, which describes a new wire format, a transport over TCP, and, in a subsequent RFC, a transport over TLS. Unfortunately the syslog-message library does not yet support the new format (which adds user-defined structured data (key/value fields) and Unicode encoding), but I'm sure one day it will.
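For illustration, adapted from the examples in the RFCs, here is the same kind of event in the old BSD (RFC 3164) format and in the RFC 5424 format (note the version digit after the priority, the ISO timestamp, and the explicit fields):

```
<34>Oct 11 22:14:15 mymachine su: 'su root' failed for lonvick on /dev/pts/8
<34>1 2003-10-11T22:14:15.003Z mymachine.example.com su - ID47 - 'su root' failed for lonvick on /dev/pts/8
```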

+

Another, competing, syslog RFC 3195 uses XML encoding, but I have not bothered to look deeper into that one.

+

I implemented the transport both via TCP and via TLS. There are various solutions in use for framing (as described in RFC 6587): either prepend the message length, decimal encoded (also specified in RFC 5425; this obviously violates streaming characteristics, since the log source needs the full message in memory before sending it out), or put a special delimiter between messages (a 0 byte, a line feed, CR LF, or a custom byte sequence).
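As a tiny illustration of the two framing styles (a sketch; the function names are mine and the message content is arbitrary):

```ocaml
(* Two framing styles for syslog over a stream transport (RFC 6587):
   octet counting prepends the decimal message length, non-transparent
   framing appends a trailer such as a line feed. *)
let frame_octet_counting msg =
  Printf.sprintf "%d %s" (String.length msg) msg

let frame_lf msg = msg ^ "\n"

let () =
  let msg = "<34>hello" in
  assert (frame_octet_counting msg = "9 <34>hello");
  assert (frame_lf msg = "<34>hello\n");
  print_endline "framing ok"
```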

+

The TLS reporter uses our TLS library, written entirely in OCaml, and requires mutual authentication: the log reporter has a private key and certificate, and the log collector needs to present a certificate chain rooted in a provided CA certificate.

+

Logs supports synchronous and asynchronous logging (the latter is the default; please read the note on synchronous logging). logs-syslog does not alter this behaviour. There is no buffer or queue with a single writer task to emit log messages, but a mutex and error recovery which tries to reconnect once for each log message (only if there is not already a working connection, of course). It is still not clear to me what the desired behaviour should be, but introducing buffers would mean losing synchronous logging (or writing rather intricate code).

+

To recap, logs-syslog implements the old BSD syslog protocol via UDP, TCP, and TLS. There are reporters available using only the OCaml Unix module (dependency-free!), using Lwt (also Lwt with TLS), and using the MirageOS interfaces (also with TLS). The code size is below 500 lines in total.

+

MirageOS syslog in production

+

As collector I use syslog-ng, which is capable of receiving both the new and the old syslog messages on all three transports. The configuration snippet for a BSD syslog TLS collector is as follows:

+
source s_tls {
+  tcp(port(6514)
+      tls(peer-verify(require-trusted)
+          cert-file("/etc/log/server.pem")
+          key-file("/etc/log/server.key")
+          ca-dir("/etc/log/certs"))); };
+
+destination d_tls { file("/var/log/ng-tls.log"); };
+
+log { source(s_tls); destination(d_tls); };
+
+

The "/etc/log/certs" directory contains the CA certificates, together with +links to their hashes (with a 0 appended: ln -s cacert.pem `openssl x509 -noout -hash -in cacert.pem`.0). I used +certify to generate the CA +infrastructure (CA cert, a server certificate for syslog-ng, and a client +certificate for my MirageOS unikernel).

+

It has been running like a charm for a week now (and has already collected 700KB of HTTP access logs), and feels much better than my previous ad-hoc solutions for exfiltrating log data.

+

The downside of syslog is obviously that it only works when the network is up -- thus it does not work while booting, or when a persistent network failure has occurred.

+

Code is on GitHub, documentation is +online, released in opam.

+

I'm interested in feedback, either via +twitter or via eMail.

+
\ No newline at end of file diff --git a/Posts/Traceroute b/Posts/Traceroute new file mode 100644 index 0000000..60f356a --- /dev/null +++ b/Posts/Traceroute @@ -0,0 +1,346 @@ + +Traceroute

Traceroute

Written by hannes
Classified under: mirageosprotocol
Published: 2020-06-24 (last updated: 2021-11-19)

Traceroute

+

Is a diagnostic utility which displays the route and measures transit delays of +packets across an Internet protocol (IP) network.

+
$ doas solo5-hvt --net:service=tap0 -- traceroute.hvt --ipv4=10.0.42.2/24 --ipv4-gateway=10.0.42.1 --host=198.167.222.207
+            |      ___|
+  __|  _ \  |  _ \ __ \
+\__ \ (   | | (   |  ) |
+____/\___/ _|\___/____/
+Solo5: Bindings version v0.6.5
+Solo5: Memory map: 512 MB addressable:
+Solo5:   reserved @ (0x0 - 0xfffff)
+Solo5:       text @ (0x100000 - 0x212fff)
+Solo5:     rodata @ (0x213000 - 0x24bfff)
+Solo5:       data @ (0x24c000 - 0x317fff)
+Solo5:       heap >= 0x318000 < stack < 0x20000000
+2020-06-22 15:41:25 -00:00: INF [netif] Plugging into service with mac 76:9b:36:e0:e5:74 mtu 1500
+2020-06-22 15:41:25 -00:00: INF [ethernet] Connected Ethernet interface 76:9b:36:e0:e5:74
+2020-06-22 15:41:25 -00:00: INF [ARP] Sending gratuitous ARP for 10.0.42.2 (76:9b:36:e0:e5:74)
+2020-06-22 15:41:25 -00:00: INF [udp] UDP interface connected on 10.0.42.2
+2020-06-22 15:41:25 -00:00: INF [application]  1  10.0.42.1  351us
+2020-06-22 15:41:25 -00:00: INF [application]  2  192.168.42.1  1.417ms
+2020-06-22 15:41:25 -00:00: INF [application]  3  192.168.178.1  1.921ms
+2020-06-22 15:41:25 -00:00: INF [application]  4  88.72.96.1  16.716ms
+2020-06-22 15:41:26 -00:00: INF [application]  5  *
+2020-06-22 15:41:27 -00:00: INF [application]  6  92.79.215.112  16.794ms
+2020-06-22 15:41:27 -00:00: INF [application]  7  145.254.2.215  21.305ms
+2020-06-22 15:41:27 -00:00: INF [application]  8  145.254.2.217  22.05ms
+2020-06-22 15:41:27 -00:00: INF [application]  9  195.89.99.1  21.088ms
+2020-06-22 15:41:27 -00:00: INF [application] 10  62.115.9.133  20.105ms
+2020-06-22 15:41:27 -00:00: INF [application] 11  213.155.135.82  30.861ms
+2020-06-22 15:41:27 -00:00: INF [application] 12  80.91.246.200  30.716ms
+2020-06-22 15:41:27 -00:00: INF [application] 13  80.91.253.163  28.315ms
+2020-06-22 15:41:27 -00:00: INF [application] 14  62.115.145.27  30.436ms
+2020-06-22 15:41:27 -00:00: INF [application] 15  80.67.4.239  42.826ms
+2020-06-22 15:41:27 -00:00: INF [application] 16  80.67.10.147  47.213ms
+2020-06-22 15:41:27 -00:00: INF [application] 17  198.167.222.207  48.598ms
+Solo5: solo5_exit(0) called
+
+

This means that with a traceroute utility you can investigate which route is taken to a destination host, and what the round trip times on the path are. The sample output above is taken from a virtual machine on my laptop to the remote host 198.167.222.207. You can see there are 17 hops between us, the first being my laptop with a tiny round trip time of 351us; the second and third use private IP addresses and belong to my home network. The round trip time of the fourth hop is much higher: it is the first hop on the other side of my DSL modem. You can see various hops on the public Internet: the packets pass from my Internet provider's backbone across some exchange points to the destination's Internet provider somewhere in Sweden.

+

The implementation of traceroute relies mainly on the time-to-live (ttl) field (in IPv6 lingo, "hop limit") of IP packets, which is meant to avoid routing cycles that would forward IP packets in circles forever. Every router, when forwarding an IP packet, first checks that the ttl field is greater than zero, and then forwards the IP packet with the ttl decreased by one. If the ttl runs out, instead of forwarding, an ICMP time exceeded packet is sent back to the source.
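The forwarding decision can be modelled as a toy function (not MirageOS code, just the rule above; following common router behaviour, a packet arriving with a ttl of 1 is not forwarded but answered with a time exceeded):

```ocaml
(* Toy model of a router's ttl handling: decrement and forward, or reply
   with an ICMP time exceeded when the ttl runs out. *)
type action = Forward of int (* decremented ttl *) | Time_exceeded

let forward_decision ttl =
  if ttl <= 1 then Time_exceeded else Forward (pred ttl)

let () =
  assert (forward_decision 64 = Forward 63);
  assert (forward_decision 1 = Time_exceeded);
  print_endline "ttl model ok"
```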

+

Traceroute works by exploiting this mechanism: a series of IP packets with increasing ttls is sent to the destination. Since the length of the path is unknown upfront, it is a reactive system: first send an IP packet with a ttl of one; if an ICMP time exceeded packet is returned, send an IP packet with a ttl of two, etc. -- until an ICMP packet of type destination unreachable is received. Since some hosts do not reply with a time exceeded message, it is crucial to use a timeout for each packet to avoid getting stuck: when the timeout is reached, an IP packet with an increased ttl is sent, and an unknown hop is printed for the timed-out ttl (see the fifth hop in the example above).

+

The packets sent out are conventionally UDP packets without payload. From a development perspective, one question is how to correlate a received ICMP packet with the sent UDP packet. Conveniently, ICMP packets contain the IP header and the first eight bytes of the next protocol -- for UDP, the header containing source port, destination port, length, and checksum (each field two bytes in size). This means we can record the outgoing ports together with the send timestamp, and later correlate the received ICMP packet with the sent packet. Great.

+

But as functional programmers, let's figure out whether we can abolish the (globally shared) state. Since the ICMP packet contains the original IP header and the first eight bytes of the UDP header, this is where we will embed our data: the send timestamp and the value of the ttl field. For the latter, we can arbitrarily restrict it to 31 (5 bits). For the timestamp, it is mainly a question of precision and maximum expected round trip time. Source and destination port together are 32 bits; using 5 for the ttl leaves 27 bits, an unsigned value of up to 134217727. With a precision of 100ns, this covers up to 134217727 * 100ns, roughly 13.4 seconds -- sufficient for round trip time measurements.

+

Finally to the code. First we need conversions back and forth between the ports on one side, and ttl and timestamp on the other:

+
(* takes a time-to-live (int) and a timestamp (int64, nanoseconds), encodes them
+   into a 16 bit source port and a 16 bit destination port:
+   - the timestamp precision is 100ns (thus, it is divided by 100)
+   - bits 26-11 of the (divided) timestamp become the source port
+   - bits 10-0 of it, together with the 5 ttl bits, become the destination port
+*)
+let ports_of_ttl_ts ttl ts =
+  let ts = Int64.div ts 100L in
+  let src_port = 0xffff land (Int64.(to_int (shift_right ts 11)))
+  and dst_port = 0xffe0 land (Int64.(to_int (shift_left ts 5))) lor (0x001f land ttl)
+  in
+  src_port, dst_port
+
+(* inverse operation of ports_of_ttl_ts for the range (src_port and dst_port
+   are 16 bit values) *)
+let ttl_ts_of_ports src_port dst_port =
+  let ttl = 0x001f land dst_port in
+  let ts =
+    let low = Int64.of_int (dst_port lsr 5)
+    and high = Int64.(shift_left (of_int src_port) 11)
+    in
+    Int64.add low high
+  in
+  let ts = Int64.mul ts 100L in
+  ttl, ts
+
+

They should be inverses over the range of valid input: ports are 16 bit numbers, the ttl is expected to be at most 31, and ts an int64 expressed in nanoseconds (recovered up to its 100ns precision).
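A quick check of this round-trip property (self-contained; the two functions are repeated from above, the sample values are mine):

```ocaml
(* Round-trip check: encode a ttl and a timestamp (a multiple of 100ns)
   into the two ports, decode them, and expect the original values back. *)
let ports_of_ttl_ts ttl ts =
  let ts = Int64.div ts 100L in
  let src_port = 0xffff land Int64.(to_int (shift_right ts 11))
  and dst_port = 0xffe0 land Int64.(to_int (shift_left ts 5)) lor (0x001f land ttl)
  in
  src_port, dst_port

let ttl_ts_of_ports src_port dst_port =
  let ttl = 0x001f land dst_port in
  let ts =
    Int64.add (Int64.of_int (dst_port lsr 5)) Int64.(shift_left (of_int src_port) 11)
  in
  ttl, Int64.mul ts 100L

let () =
  let ttl = 7 and sent = 1_234_567_800L (* ns, below the ~13.4s limit *) in
  let src, dst = ports_of_ttl_ts ttl sent in
  let ttl', sent' = ttl_ts_of_ports src dst in
  assert (ttl' = ttl && sent' = sent);
  print_endline "round-trip ok"
```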

+

Related is the function to print out one hop and round trip measurement:

+
(* write a log line of a hop: the number, IP address, and round trip time *)
+let log_one now ttl sent ip =
+  let now = Int64.(mul (logand (div now 100L) 0x7FFFFFFL) 100L) in
+  let duration = Mtime.Span.of_uint64_ns (Int64.sub now sent) in
+  Logs.info (fun m -> m "%2d  %a  %a" ttl Ipaddr.V4.pp ip Mtime.Span.pp duration)
+
+

Most of the logic is in the handler for received ICMP packets:

+
module Icmp = struct
+  type t = {
+    send : int -> unit Lwt.t ;
+    log : int -> int64 -> Ipaddr.V4.t -> unit ;
+    task_done : unit Lwt.u ;
+  }
+
+  let connect send log task_done =
+    let t = { send ; log ; task_done } in
+    Lwt.return t
+
+  (* This is called for each received ICMP packet. *)
+  let input t ~src ~dst buf =
+    let open Icmpv4_packet in
+    (* Decode the received buffer (the IP header has been cut off already). *)
+    match Unmarshal.of_cstruct buf with
+    | Error s ->
+      Lwt.fail_with (Fmt.strf "ICMP: error parsing message from %a: %s" Ipaddr.V4.pp src s)
+    | Ok (message, payload) ->
+      let open Icmpv4_wire in
+      (* There are two interesting cases: Time exceeded (-> send next packet),
+         and Destination (port) unreachable (-> we reached the final host and can exit) *)
+      match message.ty with
+      | Time_exceeded ->
+        (* Decode the payload, which should be an IPv4 header and a protocol header *)
+        begin match Ipv4_packet.Unmarshal.header_of_cstruct payload with
+          | Ok (pkt, off) when
+              (* Ensure this packet matches our sent packet: the protocol is UDP
+                 and the destination address is the host we're tracing *)
+              pkt.Ipv4_packet.proto = Ipv4_packet.Marshal.protocol_to_int `UDP &&
+              Ipaddr.V4.compare pkt.Ipv4_packet.dst (Key_gen.host ()) = 0 ->
+            let src_port = Cstruct.BE.get_uint16 payload off
+            and dst_port = Cstruct.BE.get_uint16 payload (off + 2)
+            in
+            (* Retrieve ttl and sent timestamp, encoded in the source port and
+               destination port of the UDP packet we sent, and received back as
+               ICMP payload. *)
+            let ttl, sent = ttl_ts_of_ports src_port dst_port in
+            (* Log this hop. *)
+            t.log ttl sent src;
+            (* Send out the next UDP packet with an increased ttl. *)
+            let ttl' = succ ttl in
+            Logs.debug (fun m -> m "ICMP time exceeded from %a to %a, now sending with ttl %d"
+                           Ipaddr.V4.pp src Ipaddr.V4.pp dst ttl');
+            t.send ttl'
+          | Ok (pkt, _) ->
+            (* Some stray ICMP packet. *)
+            Logs.debug (fun m -> m "unsolicited time exceeded from %a to %a (proto %X dst %a)"
+                           Ipaddr.V4.pp src Ipaddr.V4.pp dst pkt.Ipv4_packet.proto Ipaddr.V4.pp pkt.Ipv4_packet.dst);
+            Lwt.return_unit
+          | Error e ->
+            (* Decoding error. *)
+            Logs.warn (fun m -> m "couldn't parse ICMP time exceeded payload (IPv4) (%a -> %a) %s"
+                          Ipaddr.V4.pp src Ipaddr.V4.pp dst e);
+            Lwt.return_unit
+        end
+      | Destination_unreachable when Ipaddr.V4.compare src (Key_gen.host ()) = 0 ->
+        (* We reached the final host, and the destination port was not listened to *)
+        begin match Ipv4_packet.Unmarshal.header_of_cstruct payload with
+          | Ok (_, off) ->
+            let src_port = Cstruct.BE.get_uint16 payload off
+            and dst_port = Cstruct.BE.get_uint16 payload (off + 2)
+            in
+            (* Retrieve ttl and sent timestamp. *)
+            let ttl, sent = ttl_ts_of_ports src_port dst_port in
+            (* Log the final hop. *)
+            t.log ttl sent src;
+            (* Wakeup the waiter task to exit the unikernel. *)
+            Lwt.wakeup t.task_done ();
+            Lwt.return_unit
+          | Error e ->
+            (* Decoding error. *)
+            Logs.warn (fun m -> m "couldn't parse ICMP unreachable payload (IPv4) (%a -> %a) %s"
+                          Ipaddr.V4.pp src Ipaddr.V4.pp dst e);
+            Lwt.return_unit
+        end
+      | ty ->
+        Logs.debug (fun m -> m "ICMP unknown ty %s from %a to %a: %a"
+                       (ty_to_string ty) Ipaddr.V4.pp src Ipaddr.V4.pp dst
+                       Cstruct.hexdump_pp payload);
+        Lwt.return_unit
+end
+
+

Now, the remaining piece is the main unikernel, the module Main:

+
module Main (R : Mirage_random.S) (M : Mirage_clock.MCLOCK) (Time : Mirage_time.S) (N : Mirage_net.S) = struct
+  module ETH = Ethernet.Make(N)
+  module ARP = Arp.Make(ETH)(Time)
+  module IPV4 = Static_ipv4.Make(R)(M)(ETH)(ARP)
+  module UDP = Udp.Make(IPV4)(R)
+
+  (* Global mutable state: the timeout task for a sent packet. *)
+  let to_cancel = ref None
+
+  (* Send a single packet with the given time to live. *)
+  let rec send_udp udp ttl =
+    (* This is called by the ICMP handler which successfully received a
+       time exceeded, thus we cancel the timeout task. *)
+    (match !to_cancel with
+     | None -> ()
+     | Some t -> Lwt.cancel t ; to_cancel := None);
+    (* Our hop limit is 31 - 5 bit - should be sufficient for most networks. *)
+    if ttl > 31 then
+      Lwt.return_unit
+    else
+      (* Create a timeout task which:
+         - sleeps for --timeout interval
+         - logs an unknown hop
+         - sends another packet with increased ttl
+      *)
+      let cancel =
+        Lwt.catch (fun () ->
+            Time.sleep_ns (Duration.of_ms (Key_gen.timeout ())) >>= fun () ->
+            Logs.info (fun m -> m "%2d  *" ttl);
+            send_udp udp (succ ttl))
+          (function Lwt.Canceled -> Lwt.return_unit | exc -> Lwt.fail exc)
+      in
+      (* Assign this timeout task. *)
+      to_cancel := Some cancel;
+      (* Figure out which source and destination port to use, based on ttl
+         and current timestamp. *)
+      let src_port, dst_port = ports_of_ttl_ts ttl (M.elapsed_ns ()) in
+      (* Send packet via UDP. *)
+      UDP.write ~ttl ~src_port ~dst:(Key_gen.host ()) ~dst_port udp Cstruct.empty >>= function
+      | Ok () -> Lwt.return_unit
+      | Error e -> Lwt.fail_with (Fmt.strf "while sending udp frame %a" UDP.pp_error e)
+
+  (* The main unikernel entry point. *)
+  let start () () () net =
+    let cidr = Key_gen.ipv4 ()
+    and gateway = Key_gen.ipv4_gateway ()
+    in
+    let log_one = fun ttl sent -> log_one (M.elapsed_ns ()) ttl sent
+    (* Create a task to wait on and a waiter to wakeup. *)
+    and t, w = Lwt.task ()
+    in
+    (* Setup network stack: ethernet, ARP, IPv4, UDP, and ICMP. *)
+    ETH.connect net >>= fun eth ->
+    ARP.connect eth >>= fun arp ->
+    IPV4.connect ~cidr ~gateway eth arp >>= fun ip ->
+    UDP.connect ip >>= fun udp ->
+    let send = send_udp udp in
+    Icmp.connect send log_one w >>= fun icmp ->
+
+    (* The callback cascade for an incoming network packet. *)
+    let ethif_listener =
+      ETH.input
+        ~arpv4:(ARP.input arp)
+        ~ipv4:(
+          IPV4.input
+            ~tcp:(fun ~src:_ ~dst:_ _ -> Lwt.return_unit)
+            ~udp:(fun ~src:_ ~dst:_ _ -> Lwt.return_unit)
+            ~default:(fun ~proto ~src ~dst buf ->
+                match proto with
+                | 1 -> Icmp.input icmp ~src ~dst buf
+                | _ -> Lwt.return_unit)
+            ip)
+        ~ipv6:(fun _ -> Lwt.return_unit)
+        eth
+    in
+    (* Start the callback in a separate asynchronous task. *)
+    Lwt.async (fun () ->
+        N.listen net ~header_size:Ethernet_wire.sizeof_ethernet ethif_listener >|= function
+        | Ok () -> ()
+        | Error e -> Logs.err (fun m -> m "netif error %a" N.pp_error e));
+    (* Send the initial UDP packet with a ttl of 1. This entails the domino
+       effect to receive ICMP packets, send out another UDP packet with ttl
+       increased by one, etc. - until a destination unreachable is received,
+       or the hop limit is reached. *)
+    send 1 >>= fun () ->
+    t
+end
+
+

The configuration (config.ml) for this unikernel is as follows:

+
open Mirage
+
+let host =
+  let doc = Key.Arg.info ~doc:"The host to trace." ["host"] in
+  Key.(create "host" Arg.(opt ipv4_address (Ipaddr.V4.of_string_exn "141.1.1.1") doc))
+
+let timeout =
+  let doc = Key.Arg.info ~doc:"Timeout (in millisecond)" ["timeout"] in
+  Key.(create "timeout" Arg.(opt int 1000 doc))
+
+let ipv4 =
+  let doc = Key.Arg.info ~doc:"IPv4 address" ["ipv4"] in
+  Key.(create "ipv4" Arg.(required ipv4 doc))
+
+let ipv4_gateway =
+  let doc = Key.Arg.info ~doc:"IPv4 gateway" ["ipv4-gateway"] in
+  Key.(create "ipv4-gateway" Arg.(required ipv4_address doc))
+
+let main =
+  let packages = [
+    package ~sublibs:["ipv4"; "udp"; "icmpv4"] "tcpip";
+    package "ethernet";
+    package "arp-mirage";
+    package "mirage-protocols";
+    package "mtime";
+  ] in
+  foreign
+    ~keys:[Key.abstract ipv4 ; Key.abstract ipv4_gateway ; Key.abstract host ; Key.abstract timeout]
+    ~packages
+    "Unikernel.Main"
+    (random @-> mclock @-> time @-> network @-> job)
+
+let () =
+  register "traceroute"
+    [ main $ default_random $ default_monotonic_clock $ default_time $ default_network ]
+
+

And voila, that's all the code. If you copy it together (or download the two +files from the GitHub repository), +and have OCaml, opam, and mirage (>= 3.8.0) installed, +you should be able to:

+
$ mirage configure -t hvt
+$ make depend
+$ make
+$ solo5-hvt --net:service=tap0 -- traceroute.hvt ...
+... get the output shown at top ...
+
+

Possible enhancements: use a different protocol (TCP? or any other protocol ID, which may be used to encode more information), encode data into the IPv4 ID field or the full 8 bytes of the upper protocol header, encrypt/authenticate the transmitted data (and verify it has not been tampered with in the ICMP reply), improve error handling and recovery, send multiple packets for improved round trip time measurements, ...

+

If you develop enhancements you'd like to share, please send a pull request to the git repository.

+

The motivation for this traceroute unikernel arose while talking with Aaron and Paul, who contributed several patches to the IP stack to pass the ttl through.

+

If you want to support our work on MirageOS unikernels, please donate to robur. I'm interested in feedback, either via twitter, hannesm@mastodon.social or via eMail.

+
\ No newline at end of file diff --git a/Posts/VMM b/Posts/VMM new file mode 100644 index 0000000..7d0203a --- /dev/null +++ b/Posts/VMM @@ -0,0 +1,279 @@ + +Albatross - provisioning, deploying, managing, and monitoring virtual machines

Albatross - provisioning, deploying, managing, and monitoring virtual machines

Written by hannes
Published: 2017-07-10 (last updated: 2021-11-19)

How to deploy unikernels?

+

MirageOS has a pretty good story on how to compose your OCaml libraries into a virtual machine image. The mirage command line utility contains all the knowledge about which backend requires which library. This enables writing a unikernel against abstract interfaces (such as a network device), while the mirage utility can compile it for any backend. (It is still unclear whether this is a sustainable idea, since the mirage tool needs to be adjusted for every new backend, but also for additional implementations of an interface.)

+

Once a virtual machine image has been created, it needs to be deployed. I run my own physical hardware, with all the associated upsides and downsides. Specifically, I run several physical FreeBSD machines on the Internet, and use the bhyve hypervisor with MirageOS as described earlier. Recently, Martin Lucina developed a vmm backend for Solo5. This means there is no need to use virtio anymore, or grub2-bhyve, or the bhyve binary (which links libvmmapi, which already had a security advisory). Instead of the bhyve binary, a small ~70kB ukvm-bin binary (dynamically linked against libc) can be used, which is the Solo5 virtual machine monitor on the host side.

+

Until now, I manually created and deployed virtual machines using shell scripts, +ssh logins, and a network file system shared with the FreeBSD virtual machine +which builds my MirageOS unikernels.

+

But there are several drawbacks to this approach, the biggest being that sharing resources is hard: to enable a friend to run their unikernel on my server, they need a user account, and even privileged permissions to create virtual network interfaces and execute virtual machines.

+

To get rid of these ad-hoc shell scripts and the copying around of virtual machine images, I developed a UNIX daemon which accomplishes the required work. This daemon waits for (mutually!) authenticated network connections and provides the desired commands: create a new virtual machine, acquire a block device of a given size, destroy a virtual machine, stream the console output of a virtual machine.

+

System design

+

The system is minimalistic by design. The single interface to the outside world is a TLS stream over TCP. Internally, there is a family of processes, one of which has superuser privileges, communicating via unix domain sockets. The processes do not need any persistent storage (apart from the revocation lists). A brief enumeration of the processes is provided below:

+
  • vmmd (superuser privileges), which terminates TLS sessions, proxies messages, and creates and destroys virtual machines (including setup and teardown of network interfaces and virtual block devices)
  • vmm_stats periodically gathers resource usage and network interface statistics
  • vmm_console reads the console output of every provided fifo, stores it in a ring buffer, and replays it to a client on demand
  • vmm_log consumes the event log (login, starting, and stopping of virtual machines)
+

The system uses X.509 certificates as tokens. These are authenticated key value +stores. There are four shapes of certificates: a virtual machine certificate +which embeds the entire virtual machine image, together with configuration +information (resource usage, how many and which network interfaces, block device +access); a command certificate (for interactive use, allowing (a subset of) +commands such as attaching to console output); a revocation certificate which +contains a list of revoked certificates; and a delegation certificate to +distribute resources to someone else (an intermediate CA certificate).

+

The resources which can be controlled are CPUs, memory consumption, block +storage, and access to bridge interfaces (virtual switches) - encoded in the +virtual machine and delegation certificates. Additionally, delegation +certificates can limit the number of virtual machines.

+

Leveraging the X.509 system ensures that the client always has to present a certificate chain from the root certificate. Each intermediate certificate is a delegation certificate, which may further restrict resources. The serial numbers of the chain are used as the unique identifier of each virtual machine and other certificates. The chain also restricts what the leaf certificate can access: only the subtree of the chain can be viewed. E.g. if there are delegations to both Alice and Bob from the root certificate, they cannot see each other's virtual machines.

+

Connecting to vmmd requires a TLS client, a CA certificate, a leaf certificate (and the delegation chain) and its private key. In the background, it is a multi-step process: first, the client establishes a TLS connection in which it authenticates the server using the CA certificate; then the server demands a TLS renegotiation in which it requires the client to authenticate with its leaf certificate and private key. Using renegotiation over the encrypted channel prevents passive observers from seeing the client certificate in the clear.

+

Depending on the leaf certificate, the server logic is slightly different. A +command certificate opens an interactive session where - depending on +permissions encoded in the certificate - different commands can be issued: the +console output can be streamed, the event log can be viewed, virtual machines +can be destroyed, statistics can be collected, and block devices can be managed.

+

When a virtual machine certificate is presented, the desired resource usage is +checked against the resource policies in the delegation certificate chain and +the currently running virtual machines. If sufficient resources are free, the +embedded virtual machine is started. In addition to other resource information, +a delegation certificate may embed IP usage, listing the network configuration +(gateway and netmask), and which addresses you're supposed to use. Boot +arguments can be encoded in the certificate as well, they are just passed to the +virtual machine (for easy deployment of off-the-shelf systems).

+

If a revocation certificate is presented, the embedded revocation list is verified and stored on the host system. Revocation is enforced by destroying any revoked virtual machines and terminating any revoked interactive sessions. If a delegation certificate is revoked, the connected block devices are additionally destroyed.

+

The maximum size of a virtual machine image embedded into an X.509 certificate transferred over TLS is 2 ^ 24 - 1 bytes, roughly 16 MB. If this turns out to be insufficient, compression may help. Or staged deployment.

+

An example

+

Instructions on how to set up vmmd and the certificate authority are in the README file of the albatross git repository. Here is some (stripped) terminal output:

+
> openssl x509 -text -noout -in admin.pem
+Certificate:
+    Data:
+        Serial Number: b7:aa:77:f6:ca:08:ee:6a
+    Signature Algorithm: sha256WithRSAEncryption
+        Issuer: CN=dev
+        Subject: CN=admin
+        X509v3 extensions:
+            1.3.6.1.4.1.49836.42.42: ....
+            1.3.6.1.4.1.49836.42.0: ...
+
+> openssl asn1parse -in admin.pem
+  403:d=4  hl=2 l=  18 cons: SEQUENCE
+  405:d=5  hl=2 l=  10 prim: OBJECT            :1.3.6.1.4.1.49836.42.42
+  417:d=5  hl=2 l=   4 prim: OCTET STRING      [HEX DUMP]:03020780
+  423:d=4  hl=2 l=  17 cons: SEQUENCE
+  425:d=5  hl=2 l=  10 prim: OBJECT            :1.3.6.1.4.1.49836.42.0
+  437:d=5  hl=2 l=   3 prim: OCTET STRING      [HEX DUMP]:020100
+
+> openssl asn1parse -in hello.pem
+  410:d=4  hl=2 l=  18 cons: SEQUENCE
+  412:d=5  hl=2 l=  10 prim: OBJECT            :1.3.6.1.4.1.49836.42.42
+  424:d=5  hl=2 l=   4 prim: OCTET STRING      [HEX DUMP]:03020520
+  430:d=4  hl=2 l=  18 cons: SEQUENCE
+  432:d=5  hl=2 l=  10 prim: OBJECT            :1.3.6.1.4.1.49836.42.5
+  444:d=5  hl=2 l=   4 prim: OCTET STRING      [HEX DUMP]:02020200
+  450:d=4  hl=2 l=  17 cons: SEQUENCE
+  452:d=5  hl=2 l=  10 prim: OBJECT            :1.3.6.1.4.1.49836.42.6
+  464:d=5  hl=2 l=   3 prim: OCTET STRING      [HEX DUMP]:020101
+  469:d=4  hl=5 l=3054024 cons: SEQUENCE
+  474:d=5  hl=2 l=  10 prim: OBJECT            :1.3.6.1.4.1.49836.42.9
+  486:d=5  hl=5 l=3054007 prim: OCTET STRING      [HEX DUMP]:A0832E99B204832E99AD7F454C46
+
+

The MirageOS private enterprise number is 1.3.6.1.4.1.49836; I use the arc 42 underneath it. Arc 0 carries the version (an integer), where 0 is the current version.

+

Arc 42 is a bit string representing the permissions, 5 the amount of memory, 6 the CPU id, and 9 finally the virtual machine image (as an ELF binary). If you're eager to see more, look into the Vmm_asn module.
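The arcs visible in the asn1parse output above can be written out as plain int lists (a sketch; the helper names are mine, and albatross' Vmm_asn uses proper OID values from the asn1 library rather than lists):

```ocaml
(* The OID arcs from the asn1parse output above, as int lists. *)
let mirage_pen = [ 1; 3; 6; 1; 4; 1; 49836 ]
let albatross oid = mirage_pen @ (42 :: oid)

let version = albatross [ 0 ]       (* 1.3.6.1.4.1.49836.42.0 *)
let permissions = albatross [ 42 ]  (* 1.3.6.1.4.1.49836.42.42 *)
let memory = albatross [ 5 ]
let cpuid = albatross [ 6 ]
let vm_image = albatross [ 9 ]

let () =
  assert (version = [ 1; 3; 6; 1; 4; 1; 49836; 42; 0 ]);
  assert (List.length permissions = 9);
  ignore (memory, cpuid, vm_image);
  print_endline "oids ok"
```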

+

Using a command certificate establishes an interactive session where you can review the event log, see all currently running virtual machines, or attach to the console (which is then streamed: if new console output appears while the interactive session is active, you'll be notified). The db file is used to translate between the internal names (mentioned above: hashed serial numbers) and the common names of the certificates, both on command input (attach hello) and on output.

+
> vmm_client cacert.pem admin.bundle admin.key localhost:1025 --db dev.db
+$ info
+info sn.nqsb.io: 'cpuset' '-l' '7' '/tmp/vmm/ukvm-bin.net' '--net=tap27' '--' '/tmp/81363f.0237f3.img' 91540 taps tap27
+info nqsbio: 'cpuset' '-l' '5' '/tmp/vmm/ukvm-bin.net' '--net=tap26' '--' '/tmp/81363f.43a0ff.img' 91448 taps tap26
+info marrakesh: 'cpuset' '-l' '4' '/tmp/vmm/ukvm-bin.net' '--net=tap25' '--' '/tmp/81363f.cb53e2.img' 91368 taps tap25
+info tls.nqsb.io: 'cpuset' '-l' '9' '/tmp/vmm/ukvm-bin.net' '--net=tap28' '--' '/tmp/81363f.ec692e.img' 91618 taps tap28
+$ log
+log: 2017-07-10 09:43:39 +00:00: marrakesh LOGIN 128.232.110.109:43142
+log: 2017-07-10 09:43:39 +00:00: marrakesh STARTED 91368 (tap tap25, block no)
+log: 2017-07-10 09:43:51 +00:00: nqsbio LOGIN 128.232.110.109:44663
+log: 2017-07-10 09:43:51 +00:00: nqsbio STARTED 91448 (tap tap26, block no)
+log: 2017-07-10 09:44:07 +00:00: sn.nqsb.io LOGIN 128.232.110.109:38182
+log: 2017-07-10 09:44:07 +00:00: sn.nqsb.io STARTED 91540 (tap tap27, block no)
+log: 2017-07-10 09:44:21 +00:00: tls.nqsb.io LOGIN 128.232.110.109:11178
+log: 2017-07-10 09:44:21 +00:00: tls.nqsb.io STARTED 91618 (tap tap28, block no)
+log: 2017-07-10 09:44:25 +00:00: hannes LOGIN 128.232.110.109:24207
+success
+$ attach hello
+console hello: 2017-07-09 18:44:52 +00:00             |      ___|
+console hello: 2017-07-09 18:44:52 +00:00   __|  _ \  |  _ \ __ \
+console hello: 2017-07-09 18:44:52 +00:00 \__ \ (   | | (   |  ) |
+console hello: 2017-07-09 18:44:52 +00:00 ____/\___/ _|\___/____/
+console hello: 2017-07-09 18:44:52 +00:00 Solo5: Memory map: 512 MB addressable:
+console hello: 2017-07-09 18:44:52 +00:00 Solo5:     unused @ (0x0 - 0xfffff)
+console hello: 2017-07-09 18:44:52 +00:00 Solo5:       text @ (0x100000 - 0x1e4fff)
+console hello: 2017-07-09 18:44:52 +00:00 Solo5:     rodata @ (0x1e5000 - 0x217fff)
+console hello: 2017-07-09 18:44:52 +00:00 Solo5:       data @ (0x218000 - 0x2cffff)
+console hello: 2017-07-09 18:44:52 +00:00 Solo5:       heap >= 0x2d0000 < stack < 0x20000000
+console hello: 2017-07-09 18:44:52 +00:00 STUB: getenv() called
+console hello: 2017-07-09 18:44:52 +00:00 2017-07-09 18:44:52 -00:00: INF [application] hello
+console hello: 2017-07-09 18:44:53 +00:00 2017-07-09 18:44:53 -00:00: INF [application] hello
+console hello: 2017-07-09 18:44:54 +00:00 2017-07-09 18:44:54 -00:00: INF [application] hello
+console hello: 2017-07-09 18:44:55 +00:00 2017-07-09 18:44:55 -00:00: INF [application] hello
+
+

If you use a virtual machine certificate, the virtual machine is started or rejected depending on the allowed resources:

+
> vmm_client cacert.pem hello.bundle hello.key localhost:1025
+success VM started
+
+

Sharing is caring

+

Deploying unikernels on my physical machine is now easier for myself. That's fine. Another aspect comes for free by reusing X.509: further delegation (and the limiting thereof). Within a delegation certificate, the basic constraints extension must be present, which marks the certificate as a CA certificate. It may also contain a path length - how many further delegations may follow - or state whether the resources may be shared further.

+

If I delegate 2 virtual machines and 2GB of memory to Alice, and allow an arbitrary path length, she can issue tokens to her friends Carol and Dan, each up to 2 virtual machines and 2GB of memory (but also less -- within the X.509 system even more, but vmmd will reject any resource increase in the chain) - who can further delegate to Eve, .... Carol and Dan won't know of each other, and vmmd will only start up to 2 virtual machines using 2GB of memory in total (the sum of the virtual machines deployed by Alice, Carol, and Dan). Alice may revoke any issued delegation (using a revocation certificate as described above) to free up some resources for herself. I don't need to be involved when Alice or Dan share their delegated resources further.
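The accounting described above can be sketched in a few lines of OCaml. This is an illustrative model only, not the actual vmmd code; the type and function names (resources, within, sum) are made up:

```ocaml
(* Illustrative sketch of delegation accounting: a child delegation must stay
   within its parent's limits, and the *sum* of deployments below a delegation
   is accounted against it. *)
type resources = { vms : int; memory_mb : int }

(* a child delegation may not exceed its parent's resources *)
let within ~parent ~child =
  child.vms <= parent.vms && child.memory_mb <= parent.memory_mb

(* total resources used by a list of deployments *)
let sum rs =
  List.fold_left
    (fun acc r ->
      { vms = acc.vms + r.vms; memory_mb = acc.memory_mb + r.memory_mb })
    { vms = 0; memory_mb = 0 } rs

let alice = { vms = 2; memory_mb = 2048 }
let carol = { vms = 2; memory_mb = 2048 }  (* up to the full amount again *)
let dan = { vms = 1; memory_mb = 1024 }

let () =
  (* each sub-delegation on its own is acceptable ... *)
  assert (within ~parent:alice ~child:carol);
  assert (within ~parent:alice ~child:dan);
  (* ... but their combined running unikernels must fit Alice's limit *)
  assert (not (within ~parent:alice ~child:(sum [ carol; dan ])))
```

The point is that the limit is enforced on the sum of running deployments, not per sub-delegation.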

+

Security

+

There are several security properties preserved by vmmd; for instance, the virtual machine image is never transmitted in the clear. Only properly authenticated clients can create, destroy, or gather statistics of their virtual machines.

+

Two disjoint paths in the delegation tree are not able to discover anything about each other (apart from caches, which depend on how CPUs are delegated and their concrete physical layout). Only smaller amounts of resources can be delegated further down. Each running virtual machine image is strongly isolated from all other virtual machines.

+

As mentioned in the last section, delegations of delegations may end up in the hands of malicious people. Vmmd limits which delegations may allocate resources on the host system, namely bridges and file systems. Only top delegations - directly signed by the certificate authority - create bridge interfaces (which are explicitly named in the certificate) and file systems (one zfs file system for each top delegation, to allow easy snapshots and backups).

+

The threat model is that clients have layer 2 access to the host's network interface card, and all guests share a single bridge (if this turns out to be a problem, there are ways to restrict this to a point-to-point interface with routed IP addresses). A malicious virtual machine can try to hijack ethernet and IP addresses.

+

Possible DoS scenarios also include spawning VMs very fast (which immediately crash) or generating a lot of console output. Both are indirectly handled by the control channel: to create a virtual machine image, you need to set up a TLS connection (with two handshakes) and transfer the virtual machine image (there is intentionally no "respawn on quit" option). The console output is read by a single process with user privileges (in the future there may be one console reading process for each top delegation), and it may be rate limited as well. The console stream is only ever sent to a single session: as soon as someone attaches to the console in one session, all other sessions have this console detached (and are notified about that).
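The single-session console rule can be modelled in a few lines. This is an illustrative sketch, not vmmd's implementation; the session type and attach function are invented for the example:

```ocaml
(* Model of the "single console session" rule: attaching a new session
   detaches the previous one, which is notified about the detach. *)
type session = { name : string; mutable notified : bool }

(* at most one session holds the console at a time *)
let attached : session option ref = ref None

let attach s =
  (match !attached with
  | Some old -> old.notified <- true  (* tell the old session it lost the console *)
  | None -> ());
  attached := Some s

let () =
  let a = { name = "a"; notified = false }
  and b = { name = "b"; notified = false } in
  attach a;
  attach b;
  assert a.notified;       (* the first session was detached and notified *)
  assert (not b.notified)  (* the new session now holds the console *)
```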

+

The control channel itself can be rate limited using the host system firewall.

+

The only information persistently stored on a block device are the certificate revocation lists - virtual machine images, FIFOs, and unix domain sockets are all stored in a memory-backed file system. A virtual machine with a lot of disk operations may only delay or starve revocation list updates - if this turns out to be a problem, the solution may be to use separate physical block devices for the revocation lists and virtual block devices for clients.

+

Conclusion

+

I showed a minimalistic system to provision, deploy, and manage virtual machine images. It also allows resources (CPU, disk, ..) to be delegated further. I'm pretty satisfied with the security properties of the system.

+

The system embeds all data (configuration, resource policies, virtual machine images) into X.509 certificates, and does not rely on an external file transfer protocol. An advantage thereof is that all deployed images have been signed with a private key.

+

All communication between the processes, and between the client and the server, uses a wire protocol with structured input and output - this enables more advanced algorithms (e.g. automated scaling) and fancier user interfaces than the currently provided terminal based one.

+

The delegation mechanism makes it possible to actually share computing resources in a decentralised way - without knowing the final recipient. Revocation is built in, and can at any point remove a subtree's or an individual virtual machine's access to the system. Instead of requesting revocation lists during the handshake, they are pushed explicitly by the (sub)CA revoking a certificate.

+

While this system was designed for a physical server, it should be straightforward to develop a Google Compute Engine / EC2 backend which extracts the virtual machine image, commands, etc. from the certificate and deploys it to your favourite cloud provider. A virtual machine image itself is only processor-specific, and should be portable between different hypervisors - be it FreeBSD and VMM, Linux and KVM, or MacOSX and Hypervisor.Framework.

+

The code is available on GitHub. If you want to deploy your unikernel on my hardware, please send me a certificate signing request. I'm interested in feedback, either via twitter or by opening issues in the repository. This article itself is stored in a different repository (in case you have typo or grammatical corrections).

+

I'm very thankful to people who gave feedback on earlier versions of this article, and who discussed the system design with me. These are Addie, Chris, Christiano, Joe, mato, Mindy, Mort, and sg.

+
\ No newline at end of file diff --git a/Posts/X50907 b/Posts/X50907 new file mode 100644 index 0000000..c2cc310 --- /dev/null +++ b/Posts/X50907 @@ -0,0 +1,54 @@ + +X509 0.7

X509 0.7

Written by hannes
Classified under: mirageossecuritytls
Published: 2019-08-15 (last updated: 2021-11-19)

Cryptographic material

+

Once a private and public key pair is generated (it doesn't matter whether it is plain RSA, DSA, or ECC on any curve), this is fine from a scientific point of view, and it can already be used for authentication and encryption. From a practical point of view, the public parts need to be exchanged and verified (usually via a fingerprint or hash thereof). This leads to the struggle of how to encode this cryptographic material, and how to embed an identity (or multiple), capabilities, and other information into it. X.509 is a standard that solves this encoding and embedding, and provides more functionality, such as establishing chains of trust and revocation of invalidated or compromised material. X.509 uses certificates, which contain the public key and additional information (in an extensible key-value store), and are signed by an issuer: either by the private key corresponding to the public key - a so-called self-signed certificate - or by a different private key, an authority one step up the chain. A rather long, but very good introduction to certificates by Mike Malone is available here.

+

OCaml ecosystem evolving

+

More than 5 years ago David Kaloper and I released the initial ocaml-x509 package as part of our TLS stack, which contained code for decoding and encoding certificates, and path validation of a certificate chain (as described in RFC 5280). The validation logic and the decoder/encoder (based on the ASN.1 grammar specified in the RFC, implemented using David's asn1-combinators library) changed a lot over time.

+

The OCaml ecosystem evolved over the years, which led to some changes:

+
    +
  • Camlp4 deprecation - we used camlp4 for stream parsers of PEM-encoded certificates, and sexplib.syntax to derive s-expression decoders and encoders; +
  • +
  • Avoiding brittle ppx converters - which we used for s-expression decoders and encoders of certificates after camlp4 was deprecated; +
  • +
  • Build and release system iterations - initially oasis and a packed library, then topkg and ocamlbuild, now dune; +
  • +
  • Introduction of the result type in the standard library - we used to use [ `Ok of certificate option | `Fail of failure ]; +
  • +
  • No more leaking exceptions in the public API; +
  • +
  • Usage of pretty-printers, especially with the fmt library (val pp : Format.formatter -> 'a -> unit), instead of val to_string : t -> string functions; +
  • +
  • Release of ptime, a platform-independent POSIX time support; +
  • +
  • Release of rresult, which includes combinators for computation results; +
  • +
  • Release of gmap, a Map whose value types depend on the key, used for X.509 extensions, GeneralName, DistinguishedName, etc.; +
  • +
  • Release of domain-name, a library for domain name operations (as specified in RFC 1035) - used for name validation; +
  • +
  • Usage of the alcotest unit testing framework (instead of oUnit). +
  • +
+

More use cases for X.509

+

Initially, we designed and used ocaml-x509 for providing TLS server endpoints and validation in TLS clients - mostly on the public web, where each operating system ships a set of ~100 trust anchors to validate any web server certificate against. But once you have an X.509 implementation, every authentication problem can be solved by applying it.

+

Authentication with path building

+

It turns out that the trust anchor sets are not equal across operating systems and versions, thus some web servers serve sets, instead of chains, of certificates - as described in RFC 4158 - and the client implementation needs to build valid paths and accept a connection if any path can be validated. The path building was initially (in 0.5.2) slightly wrong, but fixed quickly in 0.5.3.

+

Fingerprint authentication

+

The chain of trust validation is useful for the open web, where you as a software developer don't know to which remote endpoint your software will ever connect - as long as the remote has a certificate signed (via intermediates) by any of the trust anchors. In the early days, before let's encrypt was launched and embedded as a trust anchor (or cross-signed by already deployed trust anchors), operators needed to pay for a certificate - a business model where some CAs did not bother to check the authenticity of a certificate signing request, which led to random people owning valid certificates for microsoft.com or google.com.

+

Instead of using the set of trust anchors, the fingerprint of the server certificate, or preferably the fingerprint of the public key of the certificate, can be used for authentication, as has optionally been done for some years in jackline, an XMPP client. Support for this certificate / public key pinning was added in x509 0.2.1 / 0.5.0.
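The idea of pinning can be sketched in a few lines: hash the peer's encoded public key and compare against a stored value. This is an illustrative model, not jackline's or ocaml-x509's code; real implementations hash the DER-encoded key with SHA-256, while the stdlib Digest module (MD5) is used here only to keep the sketch dependency-free:

```ocaml
(* Sketch of public key pinning: authentication succeeds iff the hash of
   the presented key matches the pinned fingerprint. Digest (MD5) stands
   in for SHA-256 purely for illustration. *)
let fingerprint data = Digest.to_hex (Digest.string data)

let authenticate ~pinned peer_public_key =
  String.equal (String.lowercase_ascii pinned) (fingerprint peer_public_key)

let () =
  (* placeholder bytes; in reality the DER-encoded SubjectPublicKeyInfo *)
  let key = "imagine a DER-encoded public key here" in
  let pinned = fingerprint key in
  assert (authenticate ~pinned key);
  assert (not (authenticate ~pinned "some other key"))
```

The advantage over trust anchors: no third party is involved, at the cost of distributing (and rotating) the fingerprint out of band.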

+

Certificate signing requests

+

Until x509 0.4.0 there was no support for generating certificate signing requests (CSR), as defined in PKCS 10. These are self-signed blobs containing a public key, an identity, and possibly extensions. Such a CSR is sent to the certificate authority, and after validation of ownership of the identity and payment of a fee, the certificate is issued. Let's encrypt specified the ACME protocol which automates the proof of ownership: they provide a HTTP API for requesting a challenge, providing the response (the proof of ownership) via HTTP or DNS, and then allow the submission of a CSR and downloading the signed certificate. The ocaml-x509 library provides operations for creating such a CSR, and also for signing a CSR to generate a certificate.

+

Mindy developed the command-line utility certify which uses these operations from the ocaml-x509 library and acts as a swiss-army knife purely in OCaml for these required operations.

+

Maker developed a let's encrypt library which implements the above-mentioned ACME protocol for provisioning certificates from CSRs, also using our ocaml-x509 library.

+

To complete the required certificate authority functionality, support for certificate revocation lists, both validation and signing, was implemented in x509 0.6.0.

+

Deploying unikernels

+

As described in another post, I developed albatross, an orchestration system for MirageOS unikernels. This uses ASN.1 for internal socket communication and allows remote management via a TLS connection which is mutually authenticated with an X.509 client certificate. To encrypt the X.509 client certificate, first a TLS handshake is done where the server authenticates itself to the client, and over that connection another TLS handshake is established where the client certificate is requested. Note that this mechanism can be dropped with TLS 1.3, since there the certificates are transmitted over an already encrypted channel.

+

The client certificate already contains the command to execute remotely - as a custom extension, be it "show me the console output", "destroy the unikernel with name = YYY", or "deploy the included unikernel image". The advantage is that the commands are already authenticated, and there is no need to develop an ad-hoc protocol on top of the TLS session. The resource limits, assigned by the authority, are also part of the certificate chain - i.e. the number of unikernels, access to network bridges, available accumulated memory, and accumulated size for block devices are constrained by the certificate chain presented to the server and the currently running unikernels. The names of the chain are used for access control - if Alice and Bob have intermediate certificates from the same CA, neither may Alice manage Bob's unikernels, nor Bob Alice's. I have been using albatross in production for 2.5 years on two physical machines with ~20 unikernels total (multiple users, multiple administrative domains); it works stably and is much nicer to deal with than scp and custom hacked shell scripts.

+

Why 0.7?

+

There are still some missing pieces in our ocaml-x509 implementation, namely modern ECC certificates (depending on elliptic curve primitives not yet available in OCaml), RSA-PSS signing (should be straightforward), PKCS 12 (there is a pull request, but this should wait until asn1-combinators supports the ANY defined BY construct to clean up the code), ... Once these features are supported, the library should likely be named PKCS since it supports more than X.509, and released as 1.0.

+

The 0.7 release series moved a lot of modules and function names around, thus it is a major breaking release. By using a map instead of lists for extensions, GeneralName, ..., the API was further revised - invariants that each extension key (an ASN.1 object identifier) may occur at most once are now enforced. By not leaking exceptions through the public interface, the API is easier to use safely - see let's encrypt, openvpn, certify, tls, capnp, albatross.
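The at-most-once invariant that the map-based API enforces can be illustrated with the stdlib Map. This only models the idea - the real library uses gmap with typed keys (ASN.1 object identifiers), not strings, and the add_unique helper is invented for the example:

```ocaml
(* Sketch: extensions keyed by their OID (modelled as a string). With a map,
   a second binding for the same OID cannot silently coexist - here we
   reject it outright. *)
module Ext = Map.Make (String)

(* reject duplicate extension OIDs instead of replacing the old binding *)
let add_unique oid value exts =
  if Ext.mem oid exts then Error (`Duplicate oid)
  else Ok (Ext.add oid value exts)

let () =
  match add_unique "2.5.29.17" "subjectAltName payload" Ext.empty with
  | Error _ -> assert false
  | Ok exts ->
    (* adding the same OID again is a hard error *)
    assert (add_unique "2.5.29.17" "again" exts = Error (`Duplicate "2.5.29.17"));
    assert (Ext.cardinal exts = 1)
```

With a list of extensions, nothing stops two entries with the same OID; the map makes the invariant structural.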

+

I intended in 0.7.0 to have much more precise types, especially for the SubjectAlternativeName (SAN) extension that uses a GeneralName, but it turns out GeneralName is used for NameConstraints (NC) as well, in a different way -- an IP in SAN is an IPv4 or IPv6 address, in NC it is the IP/netmask; DNS in SAN is a domain name, in NC it is a name starting with a leading dot (i.e. ".example.com"), which is not a valid domain name. In 0.7.1, based on a bug report, I had to revert these variants and use less precise types.

+

Conclusion

+

The work on X.509 was sponsored by OCaml Labs. You can support our work at robur by a donation, which we will use to work on our OCaml and MirageOS projects. You can also reach out to us to realize commercial products.

+

I'm interested in feedback, either via twitter hannesm@mastodon.social or via eMail.

+
\ No newline at end of file diff --git a/Posts/index.html b/Posts/index.html new file mode 100644 index 0000000..0f6df0c --- /dev/null +++ b/Posts/index.html @@ -0,0 +1,27 @@ + +full stack engineer

Mirroring the opam repository and all tarballs

Written by hannes

Re-developing an opam cache from scratch, as a MirageOS unikernel

+

All your metrics belong to influx

Written by hannes

How to monitor your MirageOS unikernel with albatross and monitoring-experiments

+

Deploying binary MirageOS unikernels

Written by hannes

Finally, we provide reproducible binary MirageOS unikernels together with packages to reproduce them and setup your own builder

+

Cryptography updates in OCaml and MirageOS

Written by hannes

Elliptic curves (ECDSA/ECDH) are supported in a maintainable and secure way.

+

The road ahead for MirageOS in 2021

Written by hannes

Home office, MirageOS unikernels, 2020 recap, 2021 tbd

+

Traceroute

Written by hannes

A MirageOS unikernel which traces the path between itself and a remote host.

+

Deploying authoritative OCaml-DNS servers as MirageOS unikernels

Written by hannes

A tutorial how to deploy authoritative name servers, let's encrypt, and updating entries from unix services.

+

Reproducible MirageOS unikernel builds

Written by hannes

MirageOS unikernels are reproducible :)

+

X509 0.7

Written by hannes

Five years since ocaml-x509 initial release, it has been reworked and used more widely

+

Summer 2019

Written by hannes

Bringing MirageOS into production, take IV monitoring, CalDAV, DNS

+

The Bitcoin Piñata - no candy for you

Written by hannes

More than three years ago we launched our Bitcoin Piñata as a transparent security bait. It is still up and running!

+

My 2018 contains robur and starts with re-engineering DNS

Written by hannes

New year brings new possibilities and a new environment. I've been working on the most widely deployed key-value store, the domain name system. Primary and secondary name services are available, including dynamic updates, notify, and tsig authentication.

+

Albatross - provisioning, deploying, managing, and monitoring virtual machines

Written by hannes

all we need is X.509

+

Conex, establish trust in community repositories

Written by hannes

Conex is a library to verify and attest package release integrity and authenticity through the use of cryptographic signatures.

+

Who maintains package X?

Written by hannes

We describe why manual gathering of metadata is out of date, and version control systems are awesome.

+

Jackline, a secure terminal-based XMPP client

Written by hannes

implement it once to know you can do it. implement it a second time and you get readable code. implementing it a third time from scratch may lead to useful libraries.

+

Exfiltrating log data using syslog

Written by hannes

sometimes preservation of data is useful

+

Re-engineering ARP

Written by hannes

If you want it as you like, you've to do it yourself

+

Minimising the virtual machine monitor

Written by hannes

MirageOS solo5 multiboot native on bhyve

+

Counting Bytes

Written by hannes

looking into dependencies and their sizes

+

Configuration DSL step-by-step

Written by hannes

how to actually configure the system

+

Catch the bug, walking through the stack

Written by hannes

10BTC could've been yours

+

Fitting the things together

Written by hannes

building a simple website

+

Why OCaml

Written by hannes

a gentle introduction into OCaml

+

Operating systems

Written by hannes

Operating systems and MirageOS

+

\ No newline at end of file diff --git a/Posts/nqsbWebsite b/Posts/nqsbWebsite new file mode 100644 index 0000000..1c60e9e --- /dev/null +++ b/Posts/nqsbWebsite @@ -0,0 +1,194 @@ + +Fitting the things together

Fitting the things together

Written by hannes
Classified under: mirageoshttptlsprotocol
Published: 2016-04-24 (last updated: 2021-11-19)

Task

+

Our task is to build a small unikernel which provides a project website. On our way we will wade through various layers using code examples. The website itself contains a few paragraphs of text, some link lists, and our published papers in pdf form.

+

Spoiler alert: the final result can be seen here, the full code here.

+

A first idea

+

We could go all the way to use conduit for wrapping connections, and mirage-http (using cohttp, a very lightweight HTTP server). We'd just need to write routing code which in the end reads from a virtual file system, and some HTML and CSS for the actual site.

+

Turns out, the conduit library is already 1.7 MB in size and depends on 34 libraries, and cohttp is another 3.7 MB with 40 dependent libraries. Both libraries are actively developed; combined, there were 25 releases within the last year.

+

Plan

+

Let me state our demands more clearly:

+
    +
  • easy to maintain +
  • +
  • updates roughly 3 times a year +
  • +
  • reasonable performance +
  • +
+

To achieve easy maintenance we keep build and run time dependencies small, and use a single virtual machine image to ease deployment. We try to develop only little new code. Our general approach to performance is to do as little work as we can on each request, and to precompute as much as we can at compile time or once at startup.

+

HTML code

+

From the tyxml description: "Tyxml provides a set of combinators to build Html5 and Svg documents. These combinators use the OCaml type-system to ensure the validity of the generated Html5 and Svg." A tutorial is available.

+

You can plug elements (or attributes) inside each other only if the HTML specification allows this (no <body> inside of a <body>). Below is an example that renders to a div with pcdata inside.

+

If you use utop (as interactive read-eval-print-loop), you first need to load tyxml by #require "tyxml".

+
open Html5.M
+
+let mycontent =
+  div ~a:[ a_class ["content"] ]
+    [ pcdata "This is a fabulous content." ]
+
+

In the end, our web server will deliver the page as a string, tyxml provides the function Html5.P.print : output:(string -> unit) -> doc -> unit. We use a temporary Buffer to print the document into.

+
# let buf = Buffer.create 100
+# Html5.P.print ~output:(Buffer.add_string buf) mycontent
+
+Error: This expression has type ([> Html5_types.div ] as 'a) elt but an expression was expected of type doc = [ `Html ] elt Type 'a = [> `Div ] is not compatible with type [ `Html ]
+The second variant type does not allow tag(s) `Div
+
+

This is pretty nice, we can only print complete HTML5 documents this way (there are printers for standalone elements as well), and will not be able to serve an incomplete page fragment!

+

To get it up and running, we wrap it inside of a html which has a header and a body:

+
# Html5.P.print ~output:(Buffer.add_string buf) (html (head (title (pcdata "title")) []) (body [ mycontent ]))
+# Buffer.contents buf
+
+"<!DOCTYPE html>\n<html xmlns=\"http://www.w3.org/1999/xhtml\"><head><title>title</title></head><body><div class=\"content\">This is a fabulous content.</div></body></html>"
+
+

The HTML content is done (in a pure way, no effects!); let's work on the binary pdfs. Our full page source (CSS embedding is done via a string, no fancy types there (yet!?)) is on GitHub. Below we will use the render function for our content:

+
let render =
+  let buf = Buffer.create 500 in
+  Html5.P.print ~output:(Buffer.add_string buf) @@
+  html
+    (header "not quite so broken")
+    (body [ mycontent ]) ;
+  Cstruct.of_string @@ Buffer.contents buf
+
+

Binary data

+

There are various ways to embed binary data into MirageOS:

+
    +
  • connect an external (FAT) disk image; upside: works for large data, independent, can be shared with other systems; downside: an extra file to distribute onto the production machine, lots of code (block storage and file system access) which can contain directory traversals and other issues +
  • +
  • embed as a special ELF section; downside: not yet implemented +
  • +
  • embed strings in the code; upside: no deployment hassle, works everywhere for small files; downside: need to encode binary to string chunks during build and decode in the MirageOS unikernel, it breaks with large files +
  • +
  • likely others such as use a tar image, read it during build or runtime; wait during bootup on a network socket (or connect somewhere) to receive the static data; use a remote git repository +
  • +
+

We'll use the embedding. There is support for this built into the mirage tool via crunch. If you crunch "foo" in config.ml, during mirage configure it will create a read-only key value store (KV_RO) named static1.ml containing everything in the local "foo" directory. Each file is encoded as a list of chunks, each up to 4096 bytes in size, in ASCII (octal-escaping control characters, e.g. \000).
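The chunking step can be sketched as follows. This is a simplified model of what crunch does (without the octal escaping); the chunks function is invented for the example:

```ocaml
(* Sketch of crunch's encoding: split a file's contents into 4096-byte
   chunks, which the generated module embeds as string literals. *)
let chunk_size = 4096

let chunks data =
  let rec go off acc =
    if off >= String.length data then List.rev acc
    else
      let len = min chunk_size (String.length data - off) in
      go (off + len) (String.sub data off len :: acc)
  in
  go 0 []

let () =
  let data = String.make 10000 'x' in
  let cs = chunks data in
  assert (List.length cs = 3);          (* 4096 + 4096 + 1808 bytes *)
  assert (String.concat "" cs = data)   (* concatenation restores the file *)
```

Reading a file back is the inverse: look up the chunk list by name and concatenate, which is why the read below splices fragments together.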

+

The API is:

+
val read : t -> string -> int -> int -> [ `Ok of page_aligned_buffer list | `UnknownKey of string ]
+val size : t -> string -> [ `Ok of int64 | `UnknownKey of string ]
+
+

The lookup needs to retrieve first the chunk list for the given filename, and then splice the fragments together and return the requested offset and length as page aligned structures. Since this is not for free, we will read our pdfs only once during startup, and then keep a reference to the full pdfs which we deliver upon request:

+
let read_full_kv kv name =
+  KV.size kv name >>= function
+  | `Error (KV.Unknown_key e) -> Lwt.fail (Invalid_argument e)
+  | `Ok size ->
+    KV.read kv name 0 (Int64.to_int size) >>= function
+    | `Error (KV.Unknown_key e) -> Lwt.fail (Invalid_argument e)
+    | `Ok bufs -> Lwt.return (Cstruct.concat bufs)
+
+let start kv =
+  let d_nqsb = Page.render in
+  read_full_kv kv "nqsbtls-usenix-security15.pdf" >>= fun d_usenix ->
+  read_full_kv kv "tron.pdf" >>= fun d_tron ->
+  ...
+
+

The funny >>= syntax indicates that something is doing input/output, which might block, be interrupted, or fail. It composes effects in an imperative style (like the semicolon in other languages; another term is monadic bind). The Page.render function described above is pure, thus no >>=.

+

We now have all the resources we wanted available inside our MirageOS unikernel. There is some cost during configuration (converting binary into code), and startup (concatenating lists, lookups, rendering HTML into string representation).

+

Building a HTTP response

+

HTTP consists of headers and data; we already have the data. A HTTP header consists of an initial status line (HTTP/1.1 200 OK) and a list of key-value pairs, each of the form key + ": " + value + "\r\n" (+ is string concatenation). The header is separated from the data with "\r\n\r\n":

+
let http_header ~status xs =
+  let headers = List.map (fun (k, v) -> k ^ ": " ^ v) xs in
+  let lines = status :: headers @ [ "\r\n" ] in
+  String.concat "\r\n" lines
+
+let header content_type =
+  http_header ~status:"HTTP/1.1 200 OK" [ ("content-type", content_type) ]
+
+

We also know statically (at compile time) which headers to send: content-type should be text/html for our main page, and application/pdf for the pdf files. The status code 200 is used in HTTP to signal that the request is successful. We can combine the headers and the data during startup, because our single communication channel is HTTP, and thus we don't need access to the data or headers separately (support for HTTP caching etc. is out of scope).
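Combining header and data once at startup can be shown concretely. The http_header and header functions below repeat the snippet above to keep the example self-contained; the response helper is invented for illustration:

```ocaml
(* The header functions from above. *)
let http_header ~status xs =
  let headers = List.map (fun (k, v) -> k ^ ": " ^ v) xs in
  let lines = status :: headers @ [ "\r\n" ] in
  String.concat "\r\n" lines

let header content_type =
  http_header ~status:"HTTP/1.1 200 OK" [ ("content-type", content_type) ]

(* full response bytes, precomputed once at startup and reused per request *)
let response html = header "text/html" ^ html

let () =
  let r = response "<!DOCTYPE html>..." in
  assert (r = "HTTP/1.1 200 OK\r\ncontent-type: text/html\r\n\r\n<!DOCTYPE html>...")
```

Since header and body are fixed, each request costs only a single write of the precomputed string.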

+

We are now finished with the response side of HTTP, and can emit three different resources. Now we need to handle incoming HTTP requests and dispatch them to the resources. Let's first take a brief detour to HTTPS (and thus TLS).

+

Security via HTTPS

+

Transport layer security is a protocol on top of TCP providing an end-to-end encrypted and authenticated channel. In our setting, our web server has a certificate and a private key to authenticate itself to the clients.

+

A certificate is a token containing a public key, a name, a validity period, and a signature from the authority which issued the certificate. The authority is crucial here: this infrastructure only works if the client trusts the public key of the authority (and thus can verify their signature on our certificate). I used let's encrypt (actually the letsencrypt.sh client; it would be great to have one natively in OCaml) to get a signed certificate, which is widely accepted by web browsers.

+

The MirageOS interface for TLS is that it takes a FLOW (byte stream, e.g. TCP) and provides a FLOW. Libraries can be written to be agnostic whether they use a TCP stream or a TLS session to carry data.

+

We need to set up our unikernel so that, on new connections to the HTTPS port, it first does a TLS handshake and afterwards talks the HTTP protocol. For the TLS handshake we need to put the certificate and private key into the unikernel, using yet another key value store (crunch "tls").

+

On startup we read the certificate and private key, and use that to create a TLS server config:

+
let start stack kv keys =
+  read_cert keys "nqsb" >>= fun c_nqsb ->
+  let config = Tls.Config.server ~certificates:(`Single c_nqsb) () in
+  S.listen_tcpv4 stack ~port:443 (tls_accept config)
+
+

The listen_tcpv4 is provided by the stackv4 module, and gets a stack, a port number (HTTPS uses 443), and a function which receives a flow instance.

+
let tls_accept cfg tcp =
+  TLS.server_of_flow cfg tcp >>= function
+    | `Error _ | `Eof -> TCP.close tcp
+    | `Ok tls  -> ...
+
+

The server_of_flow is provided by the MirageOS TLS layer. It can fail if the client and server do not speak a common protocol version or ciphersuite, or if one of the sides behaves in a non-protocol-compliant way.

+

To wrap up, we managed to listen on the HTTPS port and establish a TLS session for each incoming connection. We have our resources available, and now need to dispatch each request onto a resource.


HTTP request handling


HTTP is a text-based protocol; the first line of a request contains the method and the resource the client wants to access, e.g. GET / HTTP/1.1. We could read a single line from the client and cut out the part between GET and HTTP/1.1 to select the resource.

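Such a cut could be sketched in a few lines of plain OCaml. This is the brittle approach we are about to reject, not code from the unikernel; the function name is made up:

```ocaml
(* Naive request-line parsing: "GET / HTTP/1.1" -> Some "/".
   Brittle by design: it trusts the network peer to send well-formed data,
   and handles neither other methods nor other HTTP versions. *)
let requested_resource line =
  match String.split_on_char ' ' (String.trim line) with
  | [ "GET" ; resource ; "HTTP/1.1" ] -> Some resource
  | _ -> None
```

Anything slightly off (an extra space, a different version string) returns None, which hints at why a robust server needs a real parser.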

This would require either some (brittle?) string processing or a full-blown HTTP library on our side. "I'm sorry, Dave, I can't do that." There is no way we'll do string processing on data received from the network for this.


Looking a bit deeper into TLS, there is a specification for server name indication (SNI) from 2003. Its main purpose is to run multiple TLS services on a single IP(v4) address: the client indicates in the TLS handshake which server name it wants to talk to. According to Wikipedia, this extension is widely enough deployed.


So the TLS handshake already exposes some server name information, and we only have a very small set of resources. Thanks to Let's Encrypt, generating certificates is easy and free of charge.


And if we're down to a single resource per name, we can use the same technique David used in the BTC Piñata: just send the resource back without waiting for a request.


Putting it all together


What we need is a hostname for each resource, with a certificate and private key for each, or a single certificate carrying all hostnames as subject alternative names.


Our TLS library supports selecting a certificate chain based on the requested name (look here). The following snippet sets up the nqsb.io certificate chain as the default (used if no SNI is provided, or none matches), alongside a usenix15 and a tron certificate chain.

let start stack keys kv =
  read_cert keys "nqsb" >>= fun c_nqsb ->
  read_cert keys "usenix15" >>= fun c_usenix ->
  read_cert keys "tron" >>= fun c_tron ->
  let config = Tls.Config.server ~certificates:(`Multiple_default (c_nqsb, [ c_usenix ; c_tron ])) () in
  S.listen_tcpv4 stack ~port:443 (tls_accept config)

Back to the dispatching code. We can extract the hostname from an opaque tls value (the epoch data is fully described here):

let extract_name tls =
  match TLS.epoch tls with
  | `Error -> None
  | `Ok e -> e.Tls.Core.own_name

Since this TLS extension is optional, the return type is a string option.


Now, to put the dispatch together, we need a function that gets all resources and the tls state value, and returns the data to send out:

let dispatch nqsb usenix tron tls =
  match extract_name tls with
  | Some "usenix15.nqsb.io" -> usenix
  | Some "tron.nqsb.io" -> tron
  | Some "nqsb.io" -> nqsb
  | _ -> nqsb

This is again pure code. We now plug it into the handler: our tls_accept calls the provided function with the TLS flow:

let tls_accept f cfg tcp =
  TLS.server_of_flow cfg tcp >>= function
  | `Error _ | `Eof -> TCP.close tcp
  | `Ok tls  -> TLS.writev tls (f tls) >>= fun _ -> TLS.close tls

And here is our full startup code:

let start stack keys kv =
  let d_nqsb = [ header "text/html;charset=utf-8" ; Page.render ] in
  read_pdf kv "nqsbtls-usenix-security15.pdf" >>= fun d_usenix ->
  read_pdf kv "tron.pdf" >>= fun d_tron ->
  let f = dispatch d_nqsb d_usenix d_tron in

  read_cert keys "nqsb" >>= fun c_nqsb ->
  read_cert keys "usenix15" >>= fun c_usenix ->
  read_cert keys "tron" >>= fun c_tron ->
  let config = Tls.Config.server ~certificates:(`Multiple_default (c_nqsb, [ c_usenix ; c_tron ])) () in

  S.listen_tcpv4 stack ~port:443 (tls_accept f config) ;
  S.listen stack

That's it. The actual nqsb.io unikernel contains slightly more code to log onto a console, and to redirect requests on port 80 (HTTP) to port 443 (by replying with a 301 Moved Permanently HTTP status code).

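That port 80 handler essentially writes one fixed response. A sketch in plain OCaml (the function name and the exact set of headers are made up for illustration, not taken from the nqsb.io source):

```ocaml
(* Build an HTTP/1.1 301 response redirecting a hostname to HTTPS.
   HTTP headers are CRLF-separated; the empty trailing elements
   produce the blank line terminating the header section. *)
let moved_permanently host =
  String.concat "\r\n"
    [ "HTTP/1.1 301 Moved Permanently" ;
      "Location: https://" ^ host ^ "/" ;
      "Content-Length: 0" ;
      "Connection: close" ;
      "" ; "" ]
```

The unikernel can write this string to the plain TCP flow and close it, without parsing the request at all.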

Conclusion


A comparison using Firefox's built-in network diagnostics shows that the time spent waiting before data is received is minimal (3ms, we even spotted 0ms).


We do not render HTML for each request, we do not splice data together, we don't even read the client request. And I'm sure we could improve performance even further by profiling.


We saw a journey from typed XML over key-value stores, HTTP, TLS, and HTTPS. The actual application code of our unikernel serving nqsb.io is less than 100 lines of OCaml. We used MirageOS for our minimal HTTPS website, serving a single resource per hostname. We depend (directly) on the tyxml library, the mirage tool and network stack, and the tls library. That's it.


There is a long list of potential features, such as full HTTP protocol compliance (caching, favicon, ...), logging, and natively obtaining Let's Encrypt certificates -- but for the web out there it is sufficient to get picked up by search engines, and the maintenance burden is marginal.


To get started with MirageOS unikernels, look into our mirage-skeleton project, and into the /dev/winter presentation by Matt Gray.


I'm interested in feedback, either via twitter or via email.


Other updates in the MirageOS ecosystem

\ No newline at end of file diff --git a/atom b/atom new file mode 100644 index 0000000..92732f6 --- /dev/null +++ b/atom @@ -0,0 +1,1059 @@ +urn:uuid:981361ca-e71d-4997-a52c-baeee78e4156full stack engineer2022-10-11T12:14:07-00:00<p>Re-developing an opam cache from scratch, as a MirageOS unikernel</p> +2022-09-29T13:04:14-00:00<p>We at <a href="https://robur.coop">robur</a> developed <a href="https://git.robur.io/robur/opam-mirror">opam-mirror</a> in the last month and run a public opam mirror at https://opam.robur.coop (updated hourly).</p> +<h1>What is opam and why should I care?</h1> +<p><a href="https://opam.ocaml.org">Opam</a> is the OCaml package manager (also used by other projects such as <a href="https://coq.inria.fr">coq</a>). It is a source based system: the so-called repository contains the metadata (url to source tarballs, build dependencies, author, homepage, development repository) of all packages. The main repository is hosted on GitHub as <a href="https://github.com/ocaml/opam-repository">ocaml/opam-repository</a>, where authors of OCaml software can contribute (as pull request) their latest releases.</p> +<p>When opening a pull request, automated systems attempt to build not only the newly released package on various platforms and OCaml versions, but also all reverse dependencies, and also with dependencies with the lowest allowed version numbers. 
That's crucial since neither semantic versioning has been adapted across the OCaml ecosystem (which is tricky, for example due to local opens any newly introduced binding will lead to a major version bump), neither do many people add upper bounds of dependencies when releasing a package (nobody is keen to state &quot;my package will not work with <a href="https://erratique.ch/software/cmdliner">cmdliner</a> in version 1.2.0&quot;).</p> +<p>So, the opam-repository holds the metadata of lots of OCaml packages (around 4000 at the moment this article was written) with lots of versions (in total 25000) that have been released. It is used by the opam client to figure out which packages to install or upgrade (using a solver that takes the version bounds into consideration).</p> +<p>Of course, opam can use other repositories (overlays) or forks thereof. So nothing stops you from using any other opam repository. The url to the source code of each package may be a tarball, or a git repository or other version control systems.</p> +<p>The vast majority of opam packages released to the opam-repository include a link to the source tarball and a cryptographic hash of the tarball. This is crucial for security (under the assumption the opam-repository has been downloaded from a trustworthy source - check back later this year for updates on <a href="/Posts/Conex">conex</a>). At the moment, there are some weak spots in respect to security: md5 is still allowed, and the hash and the tarball are downloaded from the same server: anyone who is in control of that server can inject arbitrary malicious data. As outlined above, we're working on infrastructure which fixes the latter issue.</p> +<h1>How does the opam client work?</h1> +<p>Opam, after initialisation, downloads the <code>index.tar.gz</code> from <code>https://opam.ocaml.org/index.tar.gz</code>, and uses this as the local opam universe. 
An <code>opam install cmdliner</code> will resolve the dependencies, and download all required tarballs. The download is first tried from the cache, and if that failed, the URL in the package file is used. The download from the cache uses the base url, appends the archive-mirror, followed by the hash algorithm, the first two characters of the has of the tarball, and the hex encoded hash of the archive, i.e. for cmdliner 1.1.1 which specifies its sha512: <code>https://opam.ocaml.org/cache/sha512/54/5478ad833da254b5587b3746e3a8493e66e867a081ac0f653a901cc8a7d944f66e4387592215ce25d939be76f281c4785702f54d4a74b1700bc8838a62255c9e</code>.</p> +<h1>How does the opam repository work?</h1> +<p>According to DNS, opam.ocaml.org is a machine at amazon. It likely, apart from the website, uses <code>opam admin index</code> periodically to create the index tarball and the cache. There's an observable delay between a package merge in the opam-repository and when it shows up at opam.ocaml.org. Recently, there was <a href="https://discuss.ocaml.org/t/opam-ocaml-org-is-currently-down-is-that-where-indices-are-kept-still/">a reported downtime</a>.</p> +<p>Apart from being a single point of failure, if you're compiling a lot of opam projects (e.g. a continuous integration / continuous build system), it makes sense from a network usage (and thus sustainability perspective) to move the cache closer to where you need the source archives. 
We're also organising the MirageOS <a href="http://retreat.mirage.io">hack retreats</a> in a northern African country with poor connectivity - so if you gather two dozen camels you better bring your opam repository cache with you to reduce the bandwidth usage (NB: this requires at the moment cooperation of all participants to configure their default opam repository accordingly).</p> +<h1>Re-developing &quot;opam admin create&quot; as MirageOS unikernel</h1> +<p>The need for a local opam cache at our <a href="https://builds.robur.coop">reproducible build infrastructure</a> and the retreats, we decided to develop <a href="https://git.robur.io/robur/opam-mirror">opam-mirror</a> as a <a href="https://mirage.io">MirageOS unikernel</a>. Apart from a useful showcase using persistent storage (that won't fit into memory), and having fun while developing it, our aim was to reduce our time spent on system administration (the <code>opam admin index</code> is only one part of the story, it needs a Unix system and a webserver next to it - plus remote access for doing software updates - which has quite some attack surface.</p> +<p>Another reason for re-developing the functionality was that the opam code (what opam admin index actually does) is part of the opam source code, which totals to 50_000 lines of code -- looking up whether one or all checksums are verified before adding the tarball to the cache, was rather tricky.</p> +<p>In earlier years, we avoided persistent storage and block devices in MirageOS (by embedding it into the source code with <a href="https://github.com/mirage/ocaml-crunch">crunch</a>, or using a remote git repository), but recent development, e.g. of <a href="https://somerandomidiot.com/blog/2022/03/04/chamelon/">chamelon</a> sparked some interest in actually using file systems and figuring out whether MirageOS is ready in that area. 
A month ago we started the opam-mirror project.</p> +<p>Opam-mirror takes a remote repository URL, and downloads all referenced archives. It serves as a cache and opam-repository - and does periodic updates from the remote repository. The idea is to validate all available checksums and store the tarballs only once, and store overlays (as maps) from the other hash algorithms.</p> +<h1>Code development and improvements</h1> +<p>Initially, our plan was to use <a href="https://github.com/mirage/ocaml-git">ocaml-git</a> for pulling the repository, <a href="https://github.com/yomimono/chamelon">chamelon</a> for persistent storage, and <a href="https://github.com/inhabitedtype/httpaf">httpaf</a> as web server. With <a href="https://github.com/mirage/ocaml-tar">ocaml-tar</a> recent support of <a href="https://github.com/mirage/ocaml-tar/pull/88">gzip</a> we should be all set, and done within a few days.</p> +<p>There is already a gap in the above plan: which http client to use - in the best case something similar to our <a href="https://github.com/roburio/http-lwt-client">http-lwt-client</a> - in MirageOS: it should support HTTP 1.1 and HTTP 2, TLS (with certificate validation), and using <a href="https://github.com/roburio/happy-eyeballs">happy-eyeballs</a> to seemlessly support both IPv6 and legacy IPv4. Of course it should follow redirect, without that we won't get far in the current Internet.</p> +<p>On the path (over the last month), we fixed file descriptor leaks (memory leaks) in <a href="https://github.com/dinosaure/paf-le-chien">paf</a> -- which is used as a runtime for httpaf and h2.</p> +<p>Then we ran into some trouble with chamelon (<a href="https://github.com/yomimono/chamelon/issues/11">out of memory</a>, some degraded peformance, it reporting out of disk space), and re-thought our demands for opam-mirror. Since the cache is only ever growing (new packages are released), there's no need to ever remove anything: it is append-only. 
Once we figured that out, we investigated what needs to be done in ocaml-tar (where tar is in fact a tape archive, and was initially designed as file format to be appended to) to support appending to an archive.</p> +<p>We also re-thought our bandwidth usage, and instead of cloning the git remote at startup, we developed <a href="https://git.robur.io/robur/git-kv">git-kv</a> which can dump and restore the git state.</p> +<p>Also, initially we computed all hashes of all tarballs, but with the size increasing (all archives are around 7.5GB) this lead to a major issue of startup time (around 5 minutes on a laptop), so we wanted to save and restore the maps as well.</p> +<p>Since neither git state nor the maps are suitable for tar's append-only semantics, and we didn't want to investigate yet another file system - such as <a href="https://github.com/mirage/ocaml-fat">fat</a> may just work fine, but the code looks slightly bitrot, and the reported issues and non-activity doesn't make this package very trustworthy from our point of view. Instead, we developed <a href="https://github.com/reynir/mirage-block-partition">mirage-block-partition</a> to partition a block device into two. Then we just store the maps and the git state at the end - the end of a tar archive is 2 blocks of zeroes, so stuff at the far end aren't considered by any tooling. Extending the tar archive is also possible, only the maps and git state needs to be moved to the end (or recomputed). As file system, we developed <a href="https://git.robur.io/reynir/oneffs">oneffs</a> which stores a single value on the block device.</p> +<p>We observed a high memory usage, since each requested archive was first read from the block device into memory, and then sent out. Thanks to Pierre Alains <a href="https://github.com/mirage/mirage-kv/pull/28">recent enhancements</a> of the mirage-kv API, there is a <code>get_partial</code>, that we use to chunk-wise read the archive and send it via HTTP. 
Now, the memory usage is around 20MB (the git repository and the generated tarball are kept in memory).</p> +<p>What is next? Downloading and writing to the tar archive could be done chunk-wise as well; also dumping and restoring the git state is quite CPU intensive, we would like to improve that. Adding the TLS frontend (currently done on our site by our TLS termination proxy <a href="https://github.com/roburio/tlstunnel">tlstunnel</a>) similar to how <a href="https://github.com/roburio/unipi">unipi</a> does it, including let's encrypt provisioning -- should be straightforward (drop us a note if you'd be interesting in that feature).</p> +<h1>Conclusion</h1> +<p>To conclude, we managed within a month to develop this opam-mirror cache from scratch. It has a reasonable footprint (CPU and memory-wise), is easy to maintain and easy to update - if you want to use it, we also provide <a href="https://builds.robur.coop/job/opam-mirror">reproducible binaries</a> for solo5-hvt. You can use our opam mirror with <code>opam repository set-url default https://opam.robur.coop</code> (revert to the other with <code>opam repository set-url default https://opam.ocaml.org</code>) or use it as a backup with <code>opam repository add robur --rank 2 https://opam.robur.coop</code>.</p> +<p>Please reach out to us (at team AT robur DOT coop) if you have feedback and suggestions. We are a non-profit company, and rely on <a href="https://robur.coop/Donate">donations</a> for doing our work - everyone can contribute.</p> +urn:uuid:0dbd251f-32c7-57bd-8e8f-7392c0833a09Mirroring the opam repository and all tarballs2022-10-11T12:14:07-00:00hannes<p>How to monitor your MirageOS unikernel with albatross and monitoring-experiments</p> +2022-03-08T11:26:31-00:00<h1>Introduction to monitoring</h1> +<p>At <a href="https://robur.coop">robur</a> we use a range of MirageOS unikernels. Recently, we worked on improving the operations story thereof. 
One part is shipping binaries using our <a href="https://builds.robur.coop">reproducible builds infrastructure</a>. Another part is, once deployed we want to observe what is going on.</p> +<p>I first got into touch with monitoring - collecting and graphing metrics - with <a href="https://oss.oetiker.ch/mrtg/">MRTG</a> and <a href="https://munin-monitoring.org/">munin</a> - and the simple network management protocol <a href="https://en.wikipedia.org/wiki/Simple_Network_Management_Protocol">SNMP</a>. From the whole system perspective, I find it crucial that the monitoring part of a system does not add pressure. This favours a push-based design, where reporting is done at the disposition of the system.</p> +<p>The rise of monitoring where graphs are done dynamically (such as <a href="https://grafana.com/">Grafana</a>) and can be programmed (with a query language) by the operator are very neat, it allows to put metrics in relation after they have been recorded - thus if there's a thesis why something went berserk, you can graph the collected data from the past and prove or disprove the thesis.</p> +<h1>Monitoring a MirageOS unikernel</h1> +<p>From the operational perspective, taking security into account - either the data should be authenticated and integrity-protected, or being transmitted on a private network. We chose the latter, there's a private network interface only for monitoring. Access to that network is only granted to the unikernels and metrics collector.</p> +<p>For MirageOS unikernels, we use the <a href="https://github.com/mirage/metrics">metrics</a> library - which design shares the idea of <a href="https://erratique.ch/software/logs">logs</a> that only if there's a reporter registered, work is performed. We use the Influx line protocol via TCP to report via <a href="https://www.influxdata.com/time-series-platform/telegraf/">Telegraf</a> to <a href="https://www.influxdata.com/">InfluxDB</a>. 
But due to the design of <a href="https://github.com/mirage/metrics">metrics</a>, other reporters can be developed and used -- prometheus, SNMP, your-other-favourite are all possible.</p> +<p>Apart from monitoring metrics, we use the same network interface for logging via syslog. Since the logs library separates the log message generation (in the OCaml libraries) from the reporting, we developed <a href="https://github.com/hannesm/logs-syslog">logs-syslog</a>, which registers a log reporter sending each log message to a syslog sink.</p> +<p>We developed a small library for metrics reporting of a MirageOS unikernel into the <a href="https://github.com/roburio/monitoring-experiments">monitoring-experiments</a> package - which also allows to dynamically adjust log level and disable or enable metrics sources.</p> +<h2>Required components</h2> +<p>Install from your operating system the packages providing telegraf, influxdb, and grafana.</p> +<p>Setup telegraf to contain a socket listener:</p> +<pre><code>[[inputs.socket_listener]] + service_address = &quot;tcp://192.168.42.14:8094&quot; + keep_alive_period = &quot;5m&quot; + data_format = &quot;influx&quot; +</code></pre> +<p>Use a unikernel that reports to Influx (below the heading &quot;Unikernels (with metrics reported to Influx)&quot; on <a href="https://builds.robur.coop">builds.robur.coop</a>) and provide <code>--monitor=192.168.42.14</code> as boot parameter. Conventionally, these unikernels expect a second network interface (on the &quot;management&quot; bridge) where telegraf (and a syslog sink) are running. You'll need to pass <code>--net=management</code> and <code>--arg='--management-ipv4=192.168.42.x/24'</code> to albatross-client-local.</p> +<p>Albatross provides a <code>albatross-influx</code> daemon that reports information from the host system about the unikernels to influx. 
Start it with <code>--influx=192.168.42.14</code>.</p> +<h2>Adding monitoring to your unikernel</h2> +<p>If you want to extend your own unikernel with metrics, follow along these lines.</p> +<p>An example is the <a href="https://github.com/roburio/dns-primary-git">dns-primary-git</a> unikernel, where on the branch <code>future</code> we have a single commit ahead of main that adds monitoring. The difference is in the unikernel configuration and the main entry point. See the <a href="https://builds.robur.coop/job/dns-primary-git-monitoring/build/latest/">binary builts</a> in contrast to the <a href="https://builds.robur.coop/job/dns-primary-git/build/latest/">non-monitoring builts</a>.</p> +<p>In config, three new command line arguments are added: <code>--monitor=IP</code>, <code>--monitor-adjust=PORT</code> <code>--syslog=IP</code> and <code>--name=STRING</code>. In addition, the package <code>monitoring-experiments</code> is required. And a second network interface <code>management_stack</code> using the prefix <code>management</code> is required and passed to the unikernel. Since the syslog reporter requires a console (to report when logging fails), also a console is passed to the unikernel. 
Each reported metrics includes a tag <code>vm=&lt;name&gt;</code> that can be used to distinguish several unikernels reporting to the same InfluxDB.</p> +<p>Command line arguments:</p> +<pre><code class="language-patch"> let doc = Key.Arg.info ~doc:&quot;The fingerprint of the TLS certificate.&quot; [ &quot;tls-cert-fingerprint&quot; ] in + Key.(create &quot;tls_cert_fingerprint&quot; Arg.(opt (some string) None doc)) + ++let monitor = ++ let doc = Key.Arg.info ~doc:&quot;monitor host IP&quot; [&quot;monitor&quot;] in ++ Key.(create &quot;monitor&quot; Arg.(opt (some ip_address) None doc)) ++ ++let monitor_adjust = ++ let doc = Key.Arg.info ~doc:&quot;adjust monitoring (log level, ..)&quot; [&quot;monitor-adjust&quot;] in ++ Key.(create &quot;monitor_adjust&quot; Arg.(opt (some int) None doc)) ++ ++let syslog = ++ let doc = Key.Arg.info ~doc:&quot;syslog host IP&quot; [&quot;syslog&quot;] in ++ Key.(create &quot;syslog&quot; Arg.(opt (some ip_address) None doc)) ++ ++let name = ++ let doc = Key.Arg.info ~doc:&quot;Name of the unikernel&quot; [&quot;name&quot;] in ++ Key.(create &quot;name&quot; Arg.(opt string &quot;ns.nqsb.io&quot; doc)) ++ + let mimic_impl random stackv4v6 mclock pclock time = + let tcpv4v6 = tcpv4v6_of_stackv4v6 $ stackv4v6 in + let mhappy_eyeballs = mimic_happy_eyeballs $ random $ time $ mclock $ pclock $ stackv4v6 in +</code></pre> +<p>Requiring <code>monitoring-experiments</code>, registering command line arguments:</p> +<pre><code class="language-patch"> package ~min:&quot;3.7.0&quot; ~max:&quot;3.8.0&quot; &quot;git-mirage&quot;; + package ~min:&quot;3.7.0&quot; &quot;git-paf&quot;; + package ~min:&quot;0.0.8&quot; ~sublibs:[&quot;mirage&quot;] &quot;paf&quot;; ++ package &quot;monitoring-experiments&quot;; ++ package ~sublibs:[&quot;mirage&quot;] ~min:&quot;0.3.0&quot; &quot;logs-syslog&quot;; + ] in + foreign +- ~keys:[Key.abstract remote_k ; Key.abstract axfr] ++ ~keys:[ ++ Key.abstract remote_k ; Key.abstract axfr ; ++ Key.abstract name 
; Key.abstract monitor ; Key.abstract monitor_adjust ; Key.abstract syslog ++ ] + ~packages +</code></pre> +<p>Added console and a second network stack to <code>foreign</code>:</p> +<pre><code class="language-patch"> &quot;Unikernel.Main&quot; +- (random @-&gt; pclock @-&gt; mclock @-&gt; time @-&gt; stackv4v6 @-&gt; mimic @-&gt; job) ++ (console @-&gt; random @-&gt; pclock @-&gt; mclock @-&gt; time @-&gt; stackv4v6 @-&gt; mimic @-&gt; stackv4v6 @-&gt; job) ++ +</code></pre> +<p>Passing a console implementation (<code>default_console</code>) and a second network stack (with <code>management</code> prefix) to <code>register</code>:</p> +<pre><code class="language-patch">+let management_stack = generic_stackv4v6 ~group:&quot;management&quot; (netif ~group:&quot;management&quot; &quot;management&quot;) + + let () = + register &quot;primary-git&quot; +- [dns_handler $ default_random $ default_posix_clock $ default_monotonic_clock $ +- default_time $ net $ mimic_impl] ++ [dns_handler $ default_console $ default_random $ default_posix_clock $ default_monotonic_clock $ ++ default_time $ net $ mimic_impl $ management_stack] +</code></pre> +<p>Now, in the unikernel module the functor changes (console and second network stack added):</p> +<pre><code class="language-patch">@@ -4,17 +4,48 @@ + + open Lwt.Infix + +-module Main (R : Mirage_random.S) (P : Mirage_clock.PCLOCK) (M : Mirage_clock.MCLOCK) (T : Mirage_time.S) (S : Mirage_stack.V4V6) (_ : sig e +nd) = struct ++module Main (C : Mirage_console.S) (R : Mirage_random.S) (P : Mirage_clock.PCLOCK) (M : Mirage_clock.MCLOCK) (T : Mirage_time.S) (S : Mirage +_stack.V4V6) (_ : sig end) (Management : Mirage_stack.V4V6) = struct + + module Store = Irmin_mirage_git.Mem.KV(Irmin.Contents.String) + module Sync = Irmin.Sync(Store) +</code></pre> +<p>And in the <code>start</code> function, the command line arguments are processed and used to setup syslog and metrics monitoring to the specified addresses. 
Also, a TCP listener is waiting for monitoring and logging adjustments if <code>--monitor-adjust</code> was provided:</p> +<pre><code class="language-patch"> module D = Dns_server_mirage.Make(P)(M)(T)(S) ++ module Monitoring = Monitoring_experiments.Make(T)(Management) ++ module Syslog = Logs_syslog_mirage.Udp(C)(P)(Management) + +- let start _rng _pclock _mclock _time s ctx = ++ let start c _rng _pclock _mclock _time s ctx management = ++ let hostname = Key_gen.name () in ++ (match Key_gen.syslog () with ++ | None -&gt; Logs.warn (fun m -&gt; m &quot;no syslog specified, dumping on stdout&quot;) ++ | Some ip -&gt; Logs.set_reporter (Syslog.create c management ip ~hostname ())); ++ (match Key_gen.monitor () with ++ | None -&gt; Logs.warn (fun m -&gt; m &quot;no monitor specified, not outputting statistics&quot;) ++ | Some ip -&gt; Monitoring.create ~hostname ?listen_port:(Key_gen.monitor_adjust ()) ip management); + connect_store ctx &gt;&gt;= fun (store, upstream) -&gt; + load_git None store upstream &gt;&gt;= function + | Error (`Msg msg) -&gt; +</code></pre> +<p>Once you compiled the unikernel (or downloaded a binary with monitoring), and start that unikernel by passing <code>--net:service=tap0</code> and <code>--net:management=tap10</code> (or whichever your <code>tap</code> interfaces are), and as unikernel arguments <code>--ipv4=&lt;my-ip-address&gt;</code> and <code>--management-ipv4=192.168.42.2/24</code> for IPv4 configuration, <code>--monitor=192.168.42.14</code>, <code>--syslog=192.168.42.10</code>, <code>--name=my.unikernel</code>, <code>--monitor-adjust=12345</code>.</p> +<p>With this, your unikernel will report metrics using the influx protocol to 192.168.42.14 on port 8094 (every 10 seconds), and syslog messages via UDP to 192.168.0.10 (port 514). 
You should see your InfluxDB getting filled and syslog server receiving messages.</p> +<p>When you configure <a href="https://grafana.com/docs/grafana/latest/getting-started/getting-started-influxdb/">Grafana to use InfluxDB</a>, you'll be able to see the data in the data sources.</p> +<p>Please reach out to us (at team AT robur DOT coop) if you have feedback and suggestions.</p> +urn:uuid:b8f1fa5b-d8dd-5a54-a9e4-064b9dcd053eAll your metrics belong to influx2022-03-08T11:26:31-00:00hannes<p>Finally, we provide reproducible binary MirageOS unikernels together with packages to reproduce them and setup your own builder</p> +2021-06-30T13:13:37-00:00<h2>Introduction</h2> +<p>MirageOS development focus has been a lot on tooling and the developer experience, but to accomplish <a href="https://robur.coop">our</a> goal to &quot;get MirageOS into production&quot;, we need to lower the barrier. This means for us to release binary unikernels. As described <a href="/Posts/NGI">earlier</a>, we received a grant for &quot;Deploying MirageOS&quot; from <a href="https://pointer.ngi.eu">NGI Pointer</a> to work on the required infrastructure. This is joint work with <a href="https://reynir.dk/">Reynir</a>.</p> +<p>We provide at <a href="https://builds.robur.coop">builds.robur.coop</a> binary unikernel images (and supplementary software). Doing binary releases of MirageOS unikernels is challenging in two aspects: firstly to be useful for everyone, a binary unikernel should not contain any configuration (such as private keys, certificates, etc.). Secondly, the binaries should be <a href="https://reproducible-builds.org">reproducible</a>. This is crucial for security; everyone can reproduce the exact same binary and verify that our build service did only use the sources. 
No malware or backdoors included.</p> +<p>This post describes how you can deploy MirageOS unikernels without compiling it from source, then dives into the two issues outlined above - configuration and reproducibility - and finally describes how to setup your own reproducible build infrastructure for MirageOS, and how to bootstrap it.</p> +<h2>Deploying MirageOS unikernels from binary</h2> +<p>To execute a MirageOS unikernel, apart from a hypervisor (Xen/KVM/Muen), a tender (responsible for allocating host system resources and passing these to the unikernel) is needed. Using virtio, this is conventionally done with qemu on Linux, but its code size (and attack surface) is huge. For MirageOS, we develop <a href="https://github.com/solo5/solo5">Solo5</a>, a minimal tender. It supports <em>hvt</em> - hardware virtualization (Linux KVM, FreeBSD BHyve, OpenBSD VMM), <em>spt</em> - sandboxed process (a tight seccomp ruleset (only a handful of system calls allowed, no hardware virtualization needed), Linux only). Apart from that, <a href="https://muen.sk"><em>muen</em></a> (a hypervisor developed in Ada), <em>virtio</em> (for some cloud deployments), and <em>xen</em> (PVHv2 or Qubes 4.0) - <a href="https://github.com/Solo5/solo5/blob/master/docs/building.md">read more</a>. We deploy our unikernels as hvt with FreeBSD BHyve as hypervisor.</p> +<p>On <a href="https://builds.robur.coop">builds.robur.coop</a>, next to the unikernel images, <a href="https://builds.robur.coop/job/solo5-hvt/"><em>solo5-hvt</em> packages</a> are provided - download the binary and install it. A <a href="https://github.com/NixOS/nixpkgs/tree/master/pkgs/os-specific/solo5">NixOS package</a> is already available - please note that <a href="https://github.com/Solo5/solo5/pull/494">soon</a> packaging will be much easier (and we will work on packages merged into distributions).</p> +<p>When the tender is installed, download a unikernel image (e.g. 
the <a href="https://builds.robur.coop/job/traceroute/build/latest/">traceroute</a> described in <a href="/Posts/Traceroute">an earlier post</a>), and execute it:</p> +<pre><code>$ solo5-hvt --net:service=tap0 -- traceroute.hvt --ipv4=10.0.42.2/24 --ipv4-gateway=10.0.42.1 +</code></pre> +<p>If you plan to orchestrate MirageOS unikernels, you may be interested in <a href="https://github.com/roburio/albatross">albatross</a> - we provide <a href="https://builds.robur.coop/job/albatross/">binary packages for albatross as well</a>. An upcoming post will go into further details of how to set up albatross.</p> +<h2>MirageOS configuration</h2> +<p>A MirageOS unikernel has a specific purpose - composed of OCaml libraries - selected at compile time, which allows only the required pieces to be embedded. This reduces the attack surface drastically. At the same time, to be widely useful to multiple organisations, no configuration data should be embedded into the unikernel.</p> +<p>Early MirageOS unikernels such as <a href="https://github.com/mirage/mirage-www">mirage-www</a> embed content (blog posts, ..) and TLS certificates and private keys in the binary (using <a href="https://github.com/mirage/ocaml-crunch">crunch</a>). The <a href="https://github.com/mirage/qubes-mirage-firewall">Qubes firewall</a> (read the <a href="http://roscidus.com/blog/blog/2016/01/01/a-unikernel-firewall-for-qubesos/">blog post by Thomas</a> for more information) used to include the firewall rules in the binary until <a href="https://github.com/mirage/qubes-mirage-firewall/releases/tag/v0.6">v0.6</a>; since <a href="https://github.com/mirage/qubes-mirage-firewall/tree/v0.7">v0.7</a> the rules are read dynamically from QubesDB. 
This is a big usability improvement.</p> +<p>There are several ways to provide configuration information in MirageOS. One is boot parameters, which can be pre-filled at development time and further refined at configuration time; those passed at boot time take precedence. Boot parameters have a length limitation.</p> +<p>Another option is to <a href="https://github.com/roburio/tlstunnel/">use a block device</a> - where the TLS reverse proxy stores the configuration, modifiable via a TCP control socket (authentication using a shared HMAC secret).</p> +<p>Several other unikernels, such as <a href="https://github.com/Engil/Canopy">this website</a> and <a href="https://github.com/roburio/caldav">our CalDAV server</a>, store the content in a remote git repository. The git URI and credentials (private key seed, host key fingerprint) are passed via boot parameters.</p> +<p>Finally, another option that we take advantage of is to introduce a post-link step that rewrites the binary to embed configuration. The tool <a href="https://github.com/dinosaure/caravan">caravan</a>, developed by Romain, does this rewrite; it is used by our <a href="https://github.com/roburio/openvpn/tree/robur/mirage-router">openvpn router</a> (<a href="https://builds.robur.coop/job/openvpn-router/build/latest/">binary</a>).</p> +<p>In the future, some configuration information - such as monitoring system, syslog sink, IP addresses - may be provided via DHCP on one of the private network interfaces - this would mean that the DHCP server has some global configuration options, and the unikernels no longer require that many boot parameters. 
Another option we want to investigate is where the tender shares a file as read-only memory-mapped region from the host system to the guest system - but this is tricky considering all targets above (especially virtio and muen).</p> +<h2>Behind the scenes: reproducible builds</h2> +<p>To provide a high level of assurance and trust, if you distribute binaries in 2021, you should have a recipe for how they can be reproduced in a bit-by-bit identical way. This way, different organisations can run builders and rebuilders, and a user can decide to only use a binary if it has been reproduced by multiple organisations in different jurisdictions using different physical machines - to avoid malware being embedded in the binary.</p> +<p>For a reproduction to be successful, you need to collect the checksums of all sources that contributed to the build, together with other things (host system packages, environment variables, etc.). Of course, you can record the entire OS and sources as a tarball (or file system snapshot) and distribute that - but this may be suboptimal in terms of bandwidth requirements.</p> +<p>With opam, we already have precise tracking of which opam packages are used, and since opam 2.1 the <code>opam switch export</code> includes <a href="https://github.com/ocaml/opam/pull/4040">extra-files (patches)</a> and <a href="https://github.com/ocaml/opam/pull/4055">records the VCS version</a>. Based on this functionality, <a href="https://github.com/roburio/orb">orb</a>, an alternative command line application using the opam-client library, can be used to collect (a) the switch export, (b) host system packages, and (c) the environment variables. Only required environment variables are kept; all others are unset while conducting a build. The only required environment variables are <code>PATH</code> (sanitized with an allow list, <code>/bin</code>, <code>/sbin</code>, with <code>/usr</code>, <code>/usr/local</code>, and <code>/opt</code> prefixes), and <code>HOME</code>. 
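</p> +<p>The environment sanitisation just described can be sketched in a few lines of OCaml. This is an illustrative sketch, not orb's actual code - the helper names are made up, and <code>String.starts_with</code> requires OCaml 4.13 or later:</p> +<pre><code>let allowed_prefixes = [ "/bin"; "/sbin"; "/usr"; "/usr/local"; "/opt" ]

(* keep only PATH entries below an allowed prefix *)
let sanitize_path path =
  String.split_on_char ':' path
  |> List.filter (fun dir ->
       List.exists (fun p -> String.starts_with ~prefix:p dir) allowed_prefixes)
  |> String.concat ":"

(* the environment used for a build: only PATH (sanitized) and HOME survive *)
let build_env () =
  List.filter_map (fun name ->
      match Sys.getenv_opt name with
      | None -> None
      | Some v when String.equal name "PATH" ->
        Some (name ^ "=" ^ sanitize_path v)
      | Some v -> Some (name ^ "=" ^ v))
    [ "PATH"; "HOME" ]
</code></pre> +<p>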
To enable Debian's <code>apt</code> to install packages, <code>DEBIAN_FRONTEND</code> is set to <code>noninteractive</code>. The <code>SWITCH_PATH</code> is recorded to allow orb to use the same path during a rebuild. The <code>SOURCE_DATE_EPOCH</code> is set to enable tools that record a timestamp to use a static one. The <code>OS*</code> variables are only used for recording the host OS and version.</p> +<p>The goal of reproducible builds can certainly be achieved in several ways, including storing all sources and used executables in a huge tarball (or docker container), which is preserved for rebuilders. The questions of a minimal trusted computing base, and of how such a container could be rebuilt from sources in a reproducible way, are open.</p> +<p>The opam-repository is a community repository, to which packages are released on a daily basis by a lot of OCaml developers. Package dependencies usually only use lower bounds of other packages, and the continuous integration system of the opam repository takes care that upon API changes all reverse dependencies include the right upper bounds. Using the head commit of opam-repository usually leads to a working package universe.</p> +<p>For our MirageOS unikernels, we don't want to stay behind with ancient versions of libraries. That's why our automated building is done on a daily basis with the head commit of opam-repository. Since our unikernels are not part of the main opam repository (they include the configuration information about which target to use, e.g. <em>hvt</em>), and we occasionally use development versions of opam packages, we use <a href="https://git.robur.io/robur/unikernel-repo">the unikernel-repo</a> as overlay.</p> +<p>If no dependency got a new release, the resulting binary has the same checksum. If any dependency got a new release, this is picked up, and eventually the checksum changes.</p> +<p>Each unikernel (and non-unikernel) job (e.g. 
<a href="https://builds.robur.coop/job/dns-primary-git/build/latest/">dns-primary</a>) outputs some artifacts:</p> +<ul> +<li>the <a href="https://builds.robur.coop/job/dns-primary-git/build/latest/f/bin/primary_git.hvt">binary image</a> (in <code>bin/</code>, unikernel image, OS package) +</li> +<li>the <a href="https://builds.robur.coop/job/dns-primary-git/build/latest/f/build-environment"><code>build-environment</code></a> containing the environment variables used for this build +</li> +<li>the <a href="https://builds.robur.coop/job/dns-primary-git/build/latest/f/system-packages"><code>system-packages</code></a> containing all packages installed on the host system +</li> +<li>the <a href="https://builds.robur.coop/job/dns-primary-git/build/latest/f/opam-switch"><code>opam-switch</code></a> that contains all opam packages, including git commit or tarball with checksum, and potentially extra patches, used for this build +</li> +<li>a job script and console output +</li> +</ul> +<p>To reproduce such a build, you need to get the same operating system (OS, OS_FAMILY, OS_DISTRIBUTION, OS_VERSION in build-environment), the same set of system packages, and then you can run <code>orb rebuild</code>, which sets the environment variables and installs the opam packages from the opam-switch.</p> +<p>You can <a href="https://builds.robur.coop/job/dns-primary-git/">browse</a> the different builds, and if there are checksum changes, you can browse to a diff between the opam switches to reason about whether the checksum change was intentional (e.g. 
<a href="https://builds.robur.coop/compare/ba9ab091-9400-4e8d-ad37-cf1339114df8/23341f6b-cd26-48ab-9383-e71342455e81/opam-switch">here</a> the checksum of the unikernel changed when the x509 library was updated).</p> +<p>The opam reproducible build infrastructure is driven by:</p> +<ul> +<li><a href="https://github.com/roburio/orb">orb</a> conducting reproducible builds (<a href="https://builds.robur.coop/job/orb/">packages</a>) +</li> +<li><a href="https://github.com/roburio/builder">builder</a> scheduling builds in contained environments (<a href="https://builds.robur.coop/job/builder/">packages</a>) +</li> +<li><a href="https://git.robur.io/robur/builder-web">builder-web</a> storing builds in a database and providing an HTTP interface (<a href="https://builds.robur.coop/job/builder-web/">packages</a>) +</li> +</ul> +<p>These tools are themselves reproducible, and built on a daily basis. The infrastructure executing the build jobs installs the most recent packages of orb and builder before conducting a build. This means that our build infrastructure is reproducible as well, and uses the latest code when it is released.</p> +<h2>Conclusion</h2> +<p>Thanks to NGI funding we now have reproducible MirageOS binary builds available at <a href="https://builds.robur.coop">builds.robur.coop</a>. The underlying infrastructure is reproducible, available for multiple platforms (Ubuntu using docker, FreeBSD using jails), and can be easily bootstrapped from source (once you have OCaml and opam working, getting builder and orb should be easy). All components are open source software, mostly with permissive licenses.</p> +<p>We also have an index over the SHA-256 checksums of binaries - in case you find a running unikernel image and forgot which exact packages were used, you can do a reverse lookup.</p> +<p>We are aware that the web interface can be improved (PRs welcome). 
We will also work on the rebuilder setup and run some rebuilds.</p> +<p>Please reach out to us (at team AT robur DOT coop) if you have feedback and suggestions.</p> +urn:uuid:331831d8-6093-5dd7-9164-445afff953cbDeploying binary MirageOS unikernels2021-11-15T11:17:23-00:00hannes<p>Elliptic curves (ECDSA/ECDH) are supported in a maintainable and secure way.</p> +2021-04-23T13:33:06-00:00<h2>Introduction</h2> +<p>TL;DR: mirage-crypto-ec, with x509 0.12.0 and tls 0.13.0, provides fast and secure elliptic curve support in OCaml and MirageOS - using the verified <a href="https://github.com/mit-plv/fiat-crypto/">fiat-crypto</a> stack (Coq to OCaml to executable which generates C code that is interfaced by OCaml). In x509, a long-standing issue (countryName encoding) has been fixed, and the archive format PKCS 12 is now supported, in addition to EC keys. In tls, ECDH key exchanges and ECDSA and EdDSA certificates are supported.</p> +<h2>Elliptic curve cryptography</h2> +<p><a href="https://mirage.io/blog/tls-1-3-mirageos">Since May 2020</a>, our <a href="https://usenix15.nqsb.io">OCaml-TLS</a> stack supports TLS 1.3 (since tls version 0.12.0 on opam).</p> +<p>TLS 1.3 requires elliptic curve cryptography - which was not available in <a href="https://github.com/mirage/mirage-crypto">mirage-crypto</a> (the maintained fork of <a href="https://github.com/mirleft/ocaml-nocrypto">nocrypto</a>).</p> +<p>There are two major uses of elliptic curves: <a href="https://en.wikipedia.org/wiki/Elliptic-curve_Diffie%E2%80%93Hellman">key exchange (ECDH)</a> for establishing a shared secret over an insecure channel, and <a href="https://en.wikipedia.org/wiki/Elliptic_Curve_Digital_Signature_Algorithm">digital signature (ECDSA)</a> for authentication, integrity, and non-repudiation. 
(Please note that the construction of digital signatures on Edwards curves (Curve25519, Ed448) is called EdDSA instead of ECDSA.)</p> +<p>Elliptic curve cryptography is <a href="https://eprint.iacr.org/2020/615">vulnerable</a> <a href="https://raccoon-attack.com/">to</a> <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-5407">various</a> <a href="https://github.com/mimoo/timing_attack_ecdsa_tls">timing</a> <a href="https://minerva.crocs.fi.muni.cz/">attacks</a> - have a read of the <a href="https://blog.trailofbits.com/2020/06/11/ecdsa-handle-with-care/">overview article on ECDSA</a>. When implementing elliptic curve cryptography, it is best to avoid these known attacks. Fortunately, there are some projects which address these issues by construction.</p> +<p>In addition, to use the code in MirageOS, it should be boring C code: no heap allocations, only using a very small number of C library functions -- the code needs to be compiled in an environment with <a href="https://github.com/mirage/ocaml-freestanding/tree/v0.6.4/nolibc">nolibc</a>.</p> +<p>Two projects set out to solve the issue from the ground up: <a href="https://github.com/mit-plv/fiat-crypto/">fiat-crypto</a> and <a href="https://github.com/project-everest/hacl-star/">hacl-star</a>. Their approach is to use a proof system (<a href="https://coq.inria.fr">Coq</a> or <a href="https://www.fstar-lang.org/">F*</a>) to verify that the code executes in constant time, not depending on data input. Both projects produce C code as the output of their proof systems.</p> +<p>For our initial TLS 1.3 stack, <a href="https://github.com/pascutto/">Clément</a>, <a href="https://github.com/NathanReb/">Nathan</a> and <a href="https://github.com/emillon/">Etienne</a> developed <a href="https://github.com/mirage/fiat">fiat-p256</a> and <a href="https://github.com/mirage/hacl">hacl_x25519</a>. 
Both were one-shot interfaces for a narrow use case (ECDH for NIST P-256 and X25519), worked well for their purpose, and allowed us to gather some development experience.</p> +<h3>Changed requirements</h3> +<p>Revisiting our cryptography stack with the elliptic curve perspective had several reasons: on the one hand, the customer project <a href="https://www.nitrokey.com/products/nethsm">NetHSM</a> asked about the feasibility of ECDSA/EdDSA for various elliptic curves; on the other hand, <a href="https://github.com/mirage/ocaml-dns/pull/251">DNSSEC</a> uses elliptic curve cryptography (ECDSA), and <a href="https://www.wireguard.com/">wireguard</a> relies on elliptic curve cryptography as well. The number of X.509 certificates using elliptic curves is increasing, and we don't want to leave our TLS stack in a state where it can barely talk to a growing number of services on the Internet.</p> +<p>Looking at <a href="https://github.com/project-everest/hacl-star/"><em>hacl-star</em></a>, their <a href="https://hacl-star.github.io/Supported.html">support</a> is limited to P-256 and Curve25519; any new curve requires writing F*. Another issue with hacl-star is C code quality: their C code neither <a href="https://github.com/mirage/hacl/issues/46">compiles with older C compilers (found on Oracle Linux 7 / CentOS 7)</a>, nor compiles when all warnings are enabled (&gt; 150 are generated). We consider the C compiler a useful resource to figure out undefined behaviour (and other problems), and when shipping C code we ensure that it compiles with <code>-Wall -Wextra -Wpedantic --std=c99 -Werror</code>. The hacl project <a href="https://github.com/mirage/hacl/tree/master/src/kremlin">ships</a> a bunch of header files and helper functions to work on all platforms, which is a clunky <code>ifdef</code> desert. 
The hacl approach is to generate a whole algorithm solution: from arithmetic primitives, group operations, up to cryptographic protocol - everything included.</p> +<p>In contrast, <a href="https://github.com/mit-plv/fiat-crypto/"><em>fiat-crypto</em></a> is a Coq development, which as part of compilation (proof verification) generates executables (via OCaml code extraction from Coq). These executables are used to generate modular arithmetic (as C code) given a curve description. The <a href="https://github.com/mirage/mirage-crypto/tree/main/ec/native">generated C code</a> is highly portable, independent of platform (word size is taken as input) - it only requires a <code>&lt;stdint.h&gt;</code>, and compiles with all warnings enabled (once <a href="https://github.com/mit-plv/fiat-crypto/pull/906">a minor PR</a> got merged). Supporting a new curve is simple: generate the arithmetic code using fiat-crypto with the new curve description. The downside is that the group operations and the protocol need to be implemented elsewhere (and are not part of the proven code) - fortunately this is pretty straightforward to do, especially in high-level languages.</p> +<h3>Working with fiat-crypto</h3> +<p>As mentioned, our initial <a href="https://github.com/mirage/fiat">fiat-p256</a> binding provided ECDH for the NIST P-256 curve. Also, BoringSSL uses fiat-crypto for ECDH, and developed the code for group operations and cryptographic protocol on top of it.</p> +<p>The work needed was (a) ECDSA support and (b) supporting more curves (let's focus on NIST curves). For ECDSA, the algorithm requires modular arithmetic in the field of the group order (in addition to the prime). We generate these primitives with fiat-crypto (named <code>npYYY_AA</code>) - that required <a href="https://github.com/mit-plv/fiat-crypto/commit/e31a36d5f1b20134e67ccc5339d88f0ff3cb0f86">a small fix in decoding hex</a>. 
Fiat-crypto also provides inversion <a href="https://github.com/mit-plv/fiat-crypto/pull/670">since late October 2020</a>, <a href="https://eprint.iacr.org/2021/549">paper</a> - which allowed us to reduce our code base taken from BoringSSL. The ECDSA protocol was easy to implement in OCaml using the generated arithmetic.</p> +<p>Addressing the issue of more curves was also easy to achieve: the C code (group operations) consists of macros that are instantiated for each curve, and the OCaml code consists of functors that are applied to each curve description.</p> +<p>Thanks to the test vectors (as structured data) from <a href="https://github.com/google/wycheproof/">wycheproof</a> (and again thanks to Etienne, Nathan, and Clément for their OCaml code decoding them), I feel confident that our elliptic curve code works as desired.</p> +<p>What was left was X25519 and Ed25519 - dropping the hacl dependency entirely felt appealing (less C code to maintain from fewer projects). This turned out to require more C code, which we took from BoringSSL. 
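</p> +<p>To illustrate the macro/functor pattern mentioned above, here is a schematic OCaml sketch - the module type and the formulas are invented placeholders, not mirage-crypto-ec's real interface:</p> +<pre><code>(* each curve provides its fiat-crypto generated field arithmetic
   (via C externals in the real code) *)
module type Fe = sig
  type t
  val add : t -> t -> t
  val mul : t -> t -> t
end

(* group operations are written once against the field signature,
   and instantiated per curve *)
module Make_point (F : Fe) = struct
  type point = { x : F.t; y : F.t; z : F.t }

  let double p =
    (* placeholder formulas - the real code uses complete projective
       doubling formulas *)
    { x = F.mul p.x p.x; y = F.add p.y p.y; z = F.mul p.z p.z }
end
</code></pre> +<p>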
It may be desirable to reduce the imported C code, or to wait until a project on top of fiat-crypto which provides proven cryptographic protocols is in a usable state.</p> +<p>To avoid performance degradation, I distilled some <a href="https://github.com/mirage/mirage-crypto/pull/107#issuecomment-799701703">X25519 benchmarks</a>; it turns out that fiat-crypto and hacl performance is very similar.</p> +<h3>Achievements</h3> +<p>The new opam package <a href="https://mirage.github.io/mirage-crypto/doc/mirage-crypto-ec/Mirage_crypto_ec/index.html">mirage-crypto-ec</a> is released, which includes the C code generated by fiat-crypto (including <a href="https://github.com/mit-plv/fiat-crypto/pull/670">inversion</a>), <a href="https://github.com/mirage/mirage-crypto/blob/main/ec/native/point_operations.h">point operations</a> from BoringSSL, and some <a href="https://github.com/mirage/mirage-crypto/blob/main/ec/mirage_crypto_ec.ml">OCaml code</a> for invoking these functions, doing bounds checks, and checking whether points are on the curve. The OCaml code consists of functors that take the curve description (consisting of parameters, C function names, byte length of value) and provide Diffie-Hellman (Dh) and digital signature algorithm (Dsa) modules. 
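</p> +<p>A rough usage sketch of these modules - function names and types are from the mirage-crypto-ec interface of that time and may differ in detail from the released API:</p> +<pre><code>let () =
  Mirage_crypto_rng_unix.initialize ();
  (* ECDH: both sides generate a key pair and derive the same shared secret *)
  let sec_a, share_a = Mirage_crypto_ec.P256.Dh.gen_key () in
  let sec_b, share_b = Mirage_crypto_ec.P256.Dh.gen_key () in
  (match Mirage_crypto_ec.P256.Dh.key_exchange sec_a share_b,
         Mirage_crypto_ec.P256.Dh.key_exchange sec_b share_a with
   | Ok s1, Ok s2 -> assert (Cstruct.equal s1 s2)
   | _ -> assert false);
  (* ECDSA: sign a digest, verify the signature *)
  let priv, pub = Mirage_crypto_ec.P256.Dsa.generate () in
  let digest = Mirage_crypto.Hash.SHA256.digest (Cstruct.of_string "hello") in
  let signature = Mirage_crypto_ec.P256.Dsa.sign ~key:priv digest in
  assert (Mirage_crypto_ec.P256.Dsa.verify ~key:pub signature digest)
</code></pre> +<p>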
The nonce for ECDSA is computed deterministically, as suggested by <a href="https://tools.ietf.org/html/rfc6979">RFC 6979</a>, to avoid private key leakage.</p> +<p>The code has been developed in a series of pull requests: <a href="https://github.com/mirage/mirage-crypto/pull/101">NIST curves</a>, <a href="https://github.com/mirage/mirage-crypto/pull/106">removing blinding</a> (since we use operations that are verified to be constant-time), <a href="https://github.com/mirage/mirage-crypto/pull/108">added missing length checks</a> (reported by <a href="https://github.com/greg42">Greg</a>), <a href="https://github.com/mirage/mirage-crypto/pull/107">curve25519</a>, <a href="https://github.com/mirage/mirage-crypto/pull/117">a fix for signatures that do not span the entire byte size (discovered while adapting X.509)</a>, and <a href="https://github.com/mirage/mirage-crypto/pull/118">a fix for X25519 when the input has offset &lt;&gt; 0</a>. It works on x86 and arm, both 32 and 64 bit (checked by CI). The development was partially sponsored by Nitrokey.</p> +<p>What is left to do, apart from further security reviews, is <a href="https://github.com/mirage/mirage-crypto/issues/109">performance improvements</a>, <a href="https://github.com/mirage/mirage-crypto/issues/112">Ed448/X448 support</a>, and <a href="https://github.com/mirage/mirage-crypto/issues/105">investigating deterministic k for P521</a>. Pull requests are welcome.</p> +<p>When you use the code and encounter any issues, please <a href="https://github.com/mirage/mirage-crypto/issues">report them</a>.</p> +<h2>Layer up - X.509 now with ECDSA / EdDSA and PKCS 12 support, and a long-standing issue fixed</h2> +<p>With the sign and verify primitives, the next step is to interoperate with other tools that generate and use these public and private keys. This consists of serialisation to and deserialisation from common data formats (ASN.1 DER and PEM encoding), and support for handling X.509 certificates with elliptic curve keys. 
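</p> +<p>As a sketch, reading a (potentially EC) private key from PEM and computing its public key could look as follows - assuming the Cstruct- and result-based X509 API of the 0.12 era, so the exact signatures may differ:</p> +<pre><code>(* decode a PEM-encoded private key (RSA or EC) and derive its public key *)
let public_of_pem pem =
  match X509.Private_key.decode_pem (Cstruct.of_string pem) with
  | Ok priv -> Ok (X509.Private_key.public priv)
  | Error (`Msg msg) -> Error msg
</code></pre> +<p>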
Since X.509 0.12.0, it supports EC private and public keys, including certificate validation and issuance.</p> +<p>Releasing X.509 also meant going through the issue tracker and attempting to solve existing issues. This time, the issue <a href="https://github.com/mirleft/ocaml-x509/issues/69">&quot;country name is encoded as UTF8String, while RFC demands PrintableString&quot;</a>, filed more than 5 years ago by <a href="https://github.com/reynir">Reynir</a>, re-reported by <a href="https://github.com/paurkedal">Petter</a> in early 2017, and again by <a href="https://github.com/NightBlues">Vadim</a> in late 2020, <a href="https://github.com/mirleft/ocaml-x509/pull/140">was fixed by Vadim</a>.</p> +<p>Another long-standing pull request was support for <a href="https://tools.ietf.org/html/rfc7292">PKCS 12</a>, the archive format for certificate and private key bundles. This has <a href="https://github.com/mirleft/ocaml-x509/pull/114">been developed and merged</a>. PKCS 12 is a widely used and old format (e.g. when importing / exporting cryptographic material in your browser, used by OpenVPN, ...). Its specification uses RC2 and 3DES (see <a href="https://unmitigatedrisk.com/?p=654">this nice article</a>), which are the default algorithms used by <code>openssl pkcs12</code>.</p> +<h2>One more layer up - TLS</h2> +<p>In TLS we are finally able to use ECDSA (and EdDSA) certificates and private keys; this resulted in a slightly more complex configuration - the constraints between supported groups, signature algorithms, ciphersuite, and certificates are intricate:</p> +<p>The ciphersuite (in TLS before 1.3) specifies which key exchange mechanism to use, but also which signature algorithm to use (RSA/ECDSA). The supported groups client hello extension specifies which elliptic curves are supported by the client. The signature algorithm hello extension (TLS 1.2 and above) specifies the signature algorithm. 
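</p> +<p>From the API side, a server configuration carrying an ECDSA (or RSA) certificate chain could be constructed roughly like this - a sketch assuming the tls 0.13 interface, where the private key is an <code>X509.Private_key.t</code>:</p> +<pre><code>(* cert_chain : X509.Certificate.t list
   priv_key   : X509.Private_key.t (RSA, ECDSA, or EdDSA) *)
let server_config cert_chain priv_key =
  Tls.Config.server ~certificates:(`Single (cert_chain, priv_key)) ()
</code></pre> +<p>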
In the end, at load time the TLS configuration is validated and groups, ciphersuites, and signature algorithms are condensed depending on configured server certificates. At session initiation time, once the client reports what it supports, these parameters are further cut down to eventually find some suitable cryptographic parameters for this session.</p> +<p>From the user perspective, earlier the certificate bundle and private key were a pair of <code>X509.Certificate.t list</code> and <code>Mirage_crypto_pk.Rsa.priv</code>; now the second part is an <code>X509.Private_key.t</code> - all provided constructors have been updated (notably <code>X509_lwt.private_of_pems</code> and <code>Tls_mirage.X509.certificate</code>).</p> +<h2>Finally, conduit and mirage</h2> +<p>Thanks to <a href="https://github.com/dinosaure">Romain</a>, conduit* 4.0.0 was released, which supports the modified API of X.509 and TLS. Romain also developed patches and released mirage 3.10.3, which supports the above-mentioned work.</p> +<h2>Conclusion</h2> +<p>Elliptic curve cryptography is now available in OCaml using verified cryptographic primitives from the fiat-crypto project - <code>opam install mirage-crypto-ec</code>. X.509 since 0.12.0 and TLS since 0.13.0 and MirageOS since 3.10.3 support this new development, which gives rise to smaller EC keys. Our old bindings, fiat-p256 and hacl_x25519, have been archived and will no longer be maintained.</p> +<p>Thanks to everyone involved on this journey: reporting issues, sponsoring parts of the work, helping with integration, developing initial prototypes, and keeping me motivated to continue until the release was done.</p> +<p>In the future, it may be possible to remove zarith and gmp from the dependency chain, and provide EC-only TLS servers and clients for MirageOS. 
The benefit will be much less C code (libgmp-freestanding.a is 1.5MB in size) in our trusted code base.</p> +<p>Another potential project that is very close now is a certificate authority developed in MirageOS - now that EC keys, PKCS 12, revocation lists, ... are implemented.</p> +<h2>Footer</h2> +<p>If you want to support our work on MirageOS unikernels, please <a href="https://robur.coop/Donate">donate to robur</a>. I'm interested in feedback, either via <a href="https://twitter.com/h4nnes">twitter</a>, <a href="https://mastodon.social/@hannesm">hannesm@mastodon.social</a> or via email.</p> +urn:uuid:16427713-5da1-50cd-b17c-ca5b5cca431dCryptography updates in OCaml and MirageOS2021-11-19T18:04:52-00:00hannes<p>Home office, MirageOS unikernels, 2020 recap, 2021 tbd</p> +2021-01-25T12:45:54-00:00<h2>Introduction</h2> +<p>2020 was an intense year. I hope you're healthy and stay healthy. I am privileged (as lots of software engineers and academics are) to be able to work from home during the pandemic. Let's not forget people in less privileged situations, and let's try to give them as much practical, psychological and financial support as we can these days. And as much joy as possible to everyone around :)</p> +<p>I cancelled the autumn MirageOS retreat due to the pandemic. Instead I collected donations for our hosts in Marrakech - they were very happy to receive our financial support, since they had a difficult year - their income is based on tourism. I hope that in autumn 2021 we'll have an on-site retreat again.</p> +<p>For 2021, we (at <a href="https://robur.coop">robur</a>) got a grant from the EU (via <a href="https://pointer.ngi.eu">NGI pointer</a>) for &quot;Deploying MirageOS&quot; (more details below), and another grant from <a href="https://ocaml-sf.org">OCaml software foundation</a> for securing the opam supply chain (using <a href="https://github.com/hannesm/conex">conex</a>). 
Some long-awaited releases for MirageOS libraries, namely an <a href="https://discuss.ocaml.org/t/ann-first-release-of-awa-ssh">ssh implementation</a> and a rewrite of our <a href="https://discuss.ocaml.org/t/ann-release-of-ocaml-git-v3-0-duff-encore-decompress-etc/">git implementation</a>, have already been published.</p> +<p>From my MirageOS perspective, 2020 was a pretty successful year, in which we managed to add more features, fix lots of bugs, and pave the road ahead. I want to thank <a href="https://ocamllabs.io/">OCamlLabs</a> for funding work on MirageOS maintenance.</p> +<h2>Recap 2020</h2> +<p>Here is a very subjective random collection of accomplishments in 2020, in which I was involved to some degree.</p> +<h3>NetHSM</h3> +<p><a href="https://www.nitrokey.com/products/nethsm">NetHSM</a> is a hardware security module in software. It is a product that uses MirageOS for security, and is based on the <a href="https://muen.sk">muen</a> separation kernel. We at <a href="https://robur.coop">robur</a> were heavily involved in this product. It has already been security audited by an external team. You can pre-order it from Nitrokey.</p> +<h3>TLS 1.3</h3> +<p>Dating back to 2016, at the <a href="https://www.ndss-symposium.org/ndss2016/tron-workshop-programme/">TRON</a> (TLS 1.3 Ready or NOt), we developed a first draft of a 1.3 implementation of <a href="https://github.com/mirleft/ocaml-tls">OCaml-TLS</a>. Finally in May 2020 we got our act together, including ECC (ECDH P256 from <a href="https://github.com/mit-plv/fiat-crypto/">fiat-crypto</a>, X25519 from <a href="https://project-everest.github.io/">hacl</a>) and testing with <a href="https://github.com/tlsfuzzer/tlsfuzzer">tlsfuzzer</a>, and released tls 0.12.0 with TLS 1.3 support. 
Later we added <a href="https://github.com/mirleft/ocaml-tls/pull/414">ECC ciphersuites to TLS version 1.2</a>, implemented <a href="https://github.com/mirleft/ocaml-tls/pull/414">ChaCha20/Poly1305</a>, and fixed an <a href="https://github.com/mirleft/ocaml-tls/pull/424">interoperability issue with Go's implementation</a>.</p> +<p><a href="https://github.com/mirage/mirage-crypto">Mirage-crypto</a> provides the underlying cryptographic primitives, initially released in March 2020 as a fork of <a href="https://github.com/mirleft/ocaml-nocrypto">nocrypto</a> -- huge thanks to <a href="https://github.com/pqwy">pqwy</a> for his great work. Mirage-crypto detects <a href="https://github.com/mirage/mirage-crypto/pull/53">CPU features at runtime</a> (thanks to <a href="https://github.com/Julow">Julow</a>) (<a href="https://github.com/mirage/mirage-crypto/pull/96">bugfix for bswap</a>), uses constant-time modular exponentiation (powm_sec), hardens against Lenstra's CRT attack, supports <a href="https://github.com/mirage/mirage-crypto/pull/39">compilation on Windows</a> (thanks to <a href="https://github.com/avsm">avsm</a>), <a href="https://github.com/mirage/mirage-crypto/pull/90">async entropy harvesting</a> (thanks to <a href="https://github.com/seliopou">seliopou</a>), <a href="https://github.com/mirage/mirage-crypto/pull/65">32 bit support</a>, <a href="https://github.com/mirage/mirage-crypto/pull/72">chacha20/poly1305</a> (thanks to <a href="https://github.com/abeaumont">abeaumont</a>), <a href="https://github.com/mirage/mirage-crypto/pull/84">cross-compilation</a> (thanks to <a href="https://github.com/EduardoRFS">EduardoRFS</a>) and <a href="https://github.com/mirage/mirage-crypto/pull/78">various</a> <a href="https://github.com/mirage/mirage-crypto/pull/81">bug</a> <a href="https://github.com/mirage/mirage-crypto/pull/83">fixes</a>, even a <a href="https://github.com/mirage/mirage-crypto/pull/95">memory leak</a> (thanks to <a 
href="https://github.com/talex5">talex5</a> for reporting several of these issues), and <a href="https://github.com/mirage/mirage-crypto/pull/99">RSA</a> <a href="https://github.com/mirage/mirage-crypto/pull/100">interoperability</a> (thanks to <a href="https://github.com/psafont">psafont</a> for investigation and <a href="https://github.com/mattjbray">mattjbray</a> for reporting). This library feels very mature now - being used by multiple stakeholders, and lots of issues have been fixed in 2020.</p> +<h3>Qubes Firewall</h3> +<p>The <a href="https://github.com/mirage/qubes-mirage-firewall/">MirageOS based Qubes firewall</a> is the most widely used MirageOS unikernel. And it got major updates: in May <a href="https://github.com/linse">Steffi</a> <a href="https://groups.google.com/g/qubes-users/c/Xzplmkjwa5Y">announced</a> her and <a href="https://github.com/yomimono">Mindy's</a> work on improving it for Qubes 4.0 - including <a href="https://www.qubes-os.org/doc/vm-interface/#firewall-rules-in-4x">dynamic firewall rules via QubesDB</a>. Thanks to <a href="https://prototypefund.de/project/portable-firewall-fuer-qubesos/">prototypefund</a> for sponsoring.</p> +<p>In October 2020, we released <a href="https://mirage.io/blog/announcing-mirage-39-release">Mirage 3.9</a> with PVH virtualization mode (thanks to <a href="https://github.com/mato">mato</a>). There's still a <a href="https://github.com/mirage/qubes-mirage-firewall/issues/120">memory leak</a> to be investigated and fixed.</p> +<h3>IPv6</h3> +<p>In December, with <a href="https://mirage.io/blog/announcing-mirage-310-release">Mirage 3.10</a> we got the IPv6 code up and running. Now MirageOS unikernels have a dual stack available, besides IPv4-only and IPv6-only network stacks. 
Thanks to <a href="https://github.com/nojb">nojb</a> for the initial code and <a href="https://github.com/MagnusS">MagnusS</a>.</p> +<p>It turns out this blog, as well as other robur services, is now available via IPv6 :)</p> +<h3>Albatross</h3> +<p>Also in December, I pushed an initial release of <a href="https://github.com/roburio/albatross">albatross</a>, a unikernel orchestration system with remote access. <em>Deploy your unikernel via a TLS handshake -- the unikernel image is embedded in the TLS client certificates.</em></p> +<p>Thanks to <a href="https://github.com/reynir">reynir</a> for statistics support on Linux and improvements of the systemd service scripts. Also thanks to <a href="https://github.com/cfcs">cfcs</a> for the initial Linux port.</p> +<h3>CA certs</h3> +<p>For several years I postponed the problem of how to actually use the operating system trust anchors for OCaml-TLS connections. Thanks to <a href="https://github.com/emillon">emillon</a> for initial code, there are now <a href="https://github.com/mirage/ca-certs">ca-certs</a> and <a href="https://github.com/mirage/ca-certs-nss">ca-certs-nss</a> opam packages (see <a href="https://discuss.ocaml.org/t/ann-ca-certs-and-ca-certs-nss">release announcement</a>) which fill this gap.</p> +<h2>Unikernels</h2> +<p>I developed several useful unikernels in 2020, and also pushed <a href="https://mirage.io/wiki/gallery">a unikernel gallery</a> to the Mirage website:</p> +<h3>Traceroute in MirageOS</h3> +<p>I already wrote about <a href="/Posts/Traceroute">traceroute</a>, which traces the route to a given remote host.</p> +<h3>Unipi - static website hosting</h3> +<p><a href="https://github.com/roburio/unipi">Unipi</a> is a static site webserver which retrieves the content from a remote git repository. 
It supports Let's Encrypt certificate provisioning, and dynamic updates via a webhook executed on every push.</p> +<h3>TLSTunnel - TLS demultiplexing</h3> +<p>The physical machine this blog and other robur infrastructure runs on has been relocated from Sweden to Germany mid-December. Thanks to UPS! Fewer IPv4 addresses are available in the new data center, which motivated me to develop <a href="https://github.com/roburio/tlstunnel">tlstunnel</a>.</p> +<p>The new behaviour is as follows (see the <code>monitoring</code> branch):</p> +<ul> +<li>listener on TCP port 80 which replies with a permanent redirect to <code>https</code> +</li> +<li>listener on TCP port 443 which forwards to a backend host if the requested server name is configured +</li> +<li>its configuration is stored on a block device, and can be dynamically changed (with a custom protocol authenticated with an HMAC) +</li> +<li>it is set up to hold a wildcard TLS certificate, and a wildcard DNS entry points to it +</li> +<li>setting up a new service is very straightforward: only the new name needs to be registered with tlstunnel together with the TCP backend, and everything will just work +</li> +</ul> +<h2>2021</h2> +<p>The year started with a release of <a href="https://discuss.ocaml.org/t/ann-first-release-of-awa-ssh">awa</a>, an SSH implementation in OCaml (thanks to <a href="https://github.com/haesbaert">haesbaert</a> for initial code). 
This was followed by a <a href="https://discuss.ocaml.org/t/ann-release-of-ocaml-git-v3-0-duff-encore-decompress-etc/">git 3.0 release</a> (thanks to <a href="https://github.com/dinosaure">dinosaure</a>).</p> +<h3>Deploying MirageOS - NGI Pointer</h3> +<p>For 2021 we at robur received funding from the EU (via <a href="https://pointer.ngi.eu/">NGI pointer</a>) for &quot;Deploying MirageOS&quot;, which boils down to three parts:</p> +<ul> +<li>reproducible binary releases of MirageOS unikernels, +</li> +<li>monitoring (and other devops features: profiling) and integration into existing infrastructure, +</li> +<li>and further documentation and advertisement. +</li> +</ul> +<p>Of course this will all be available open source. Please get in touch via eMail (team aT robur dot coop) if you're eager to integrate MirageOS unikernels into your infrastructure.</p> +<p>We discovered at an initial meeting with an infrastructure provider that a DNS resolver is of interest - even more now that dnsmasq suffered from <a href="https://www.jsof-tech.com/wp-content/uploads/2021/01/DNSpooq_Technical-Whitepaper.pdf">dnspooq</a>. We are already working on an <a href="https://github.com/mirage/ocaml-dns/pull/251">implementation of DNSSec</a>.</p> +<p>MirageOS unikernels are binary reproducible, and <a href="https://github.com/rjbou/orb/pull/1">infrastructure tools are available</a>. We are working hard on a web interface (and REST API - think of it as &quot;Docker Hub for MirageOS unikernels&quot;), and more tooling to verify reproducibility.</p> +<h3>Conex - securing the supply chain</h3> +<p>Further funding, from the <a href="http://ocaml-sf.org/">OCSF</a>, supports continued development and deployment of <a href="https://github.com/hannesm/conex">conex</a> - to bring trust into opam-repository. 
This is a great combination with the reproducible build efforts, and will bring much more trust into retrieving OCaml packages and using MirageOS unikernels.</p> +<h3>MirageOS 4.0</h3> +<p>Mirage so far still uses ocamlbuild and ocamlfind for compiling the virtual machine binary. But the switch to dune is <a href="https://github.com/mirage/mirage/issues/1195">close</a>; a lot of effort has gone into it. This will make the developer experience of MirageOS much smoother, with a per-unikernel monorepo workflow where you can push your changes to the individual libraries.</p> +<h2>Footer</h2> +<p>If you want to support our work on MirageOS unikernels, please <a href="https://robur.coop/Donate">donate to robur</a>. I'm interested in feedback, either via <a href="https://twitter.com/h4nnes">twitter</a>, <a href="https://mastodon.social/@hannesm">hannesm@mastodon.social</a> or via eMail.</p> +urn:uuid:bc7675a5-47d0-5ce1-970c-01ed07fdf404The road ahead for MirageOS in 20212021-11-19T18:04:52-00:00hannes<p>A MirageOS unikernel which traces the path between itself and a remote host.</p> +2020-06-24T10:38:10-00:00<h2>Traceroute</h2> +<p>Is a diagnostic utility which displays the route and measures transit delays of +packets across an Internet protocol (IP) network.</p> +<pre><code class="language-bash">$ doas solo5-hvt --net:service=tap0 -- traceroute.hvt --ipv4=10.0.42.2/24 --ipv4-gateway=10.0.42.1 --host=198.167.222.207 + | ___| + __| _ \ | _ \ __ \ +\__ \ ( | | ( | ) | +____/\___/ _|\___/____/ +Solo5: Bindings version v0.6.5 +Solo5: Memory map: 512 MB addressable: +Solo5: reserved @ (0x0 - 0xfffff) +Solo5: text @ (0x100000 - 0x212fff) +Solo5: rodata @ (0x213000 - 0x24bfff) +Solo5: data @ (0x24c000 - 0x317fff) +Solo5: heap &gt;= 0x318000 &lt; stack &lt; 0x20000000 +2020-06-22 15:41:25 -00:00: INF [netif] Plugging into service with mac 76:9b:36:e0:e5:74 mtu 1500 +2020-06-22 15:41:25 -00:00: INF [ethernet] Connected Ethernet interface 76:9b:36:e0:e5:74 +2020-06-22 15:41:25 
-00:00: INF [ARP] Sending gratuitous ARP for 10.0.42.2 (76:9b:36:e0:e5:74) +2020-06-22 15:41:25 -00:00: INF [udp] UDP interface connected on 10.0.42.2 +2020-06-22 15:41:25 -00:00: INF [application] 1 10.0.42.1 351us +2020-06-22 15:41:25 -00:00: INF [application] 2 192.168.42.1 1.417ms +2020-06-22 15:41:25 -00:00: INF [application] 3 192.168.178.1 1.921ms +2020-06-22 15:41:25 -00:00: INF [application] 4 88.72.96.1 16.716ms +2020-06-22 15:41:26 -00:00: INF [application] 5 * +2020-06-22 15:41:27 -00:00: INF [application] 6 92.79.215.112 16.794ms +2020-06-22 15:41:27 -00:00: INF [application] 7 145.254.2.215 21.305ms +2020-06-22 15:41:27 -00:00: INF [application] 8 145.254.2.217 22.05ms +2020-06-22 15:41:27 -00:00: INF [application] 9 195.89.99.1 21.088ms +2020-06-22 15:41:27 -00:00: INF [application] 10 62.115.9.133 20.105ms +2020-06-22 15:41:27 -00:00: INF [application] 11 213.155.135.82 30.861ms +2020-06-22 15:41:27 -00:00: INF [application] 12 80.91.246.200 30.716ms +2020-06-22 15:41:27 -00:00: INF [application] 13 80.91.253.163 28.315ms +2020-06-22 15:41:27 -00:00: INF [application] 14 62.115.145.27 30.436ms +2020-06-22 15:41:27 -00:00: INF [application] 15 80.67.4.239 42.826ms +2020-06-22 15:41:27 -00:00: INF [application] 16 80.67.10.147 47.213ms +2020-06-22 15:41:27 -00:00: INF [application] 17 198.167.222.207 48.598ms +Solo5: solo5_exit(0) called +</code></pre> +<p>This means with a traceroute utility you can investigate which route is taken +to a destination host, and what the round trip time(s) on the path are. The +sample output above is taken from a virtual machine on my laptop to the remote +host 198.167.222.207. You can see there are 17 hops between us, with the first +being my laptop with a tiny round trip time of 351us, the second and third are +using private IP addresses, and are my home network. The round trip time of the +fourth hop is much higher, this is the first hop on the other side of my DSL +modem. 
You can see various hops on the public Internet: the packets pass from +my Internet provider's backbone across some exchange points to the destination +Internet provider somewhere in Sweden.</p> +<p>The implementation of traceroute relies mainly on the time-to-live (ttl) field +(in IPv6 parlance it is &quot;hop limit&quot;) of IP packets, which is meant to avoid route +cycles that would infinitely forward IP packets in circles. Every router, when +forwarding an IP packet, first checks that the ttl field is greater than zero, +and then forwards the IP packet with the ttl decreased by one. If the ttl +field is zero, instead of forwarding, an ICMP time exceeded packet is sent back +to the source.</p> +<p>Traceroute works by exploiting this mechanism: a series of IP packets with +increasing ttls is sent to the destination. Since the length of the path is +unknown upfront, it is a reactive system: first send an IP packet with a ttl of +one, if an ICMP time exceeded packet is returned, send an IP packet with a ttl of +two, etc. -- until an ICMP packet of type destination unreachable is received. +Since some hosts do not reply with a time exceeded message, to avoid getting stuck +it is crucial to use a timeout for each packet: when the timeout is reached, +an IP packet with an increased ttl is sent and an unknown hop for that ttl is +printed (see the fifth hop in the example above).</p> +<p>The packets sent out are conventionally UDP packets without payload. From a +development perspective, one question is how to correlate the ICMP packet +with the sent UDP packet. Conveniently, ICMP packets contain the IP header and +the first eight bytes of the next protocol - the UDP header containing source +port, destination port, checksum, and payload length (each field two bytes +in size). This means we can record the outgoing ports together with the send +timestamp, and correlate a later received ICMP packet with the sent UDP packet. 
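</p>
+<p>Before going further, it is worth seeing what that conventional stateful bookkeeping would look like. The sketch below is hypothetical (it is not the unikernel's code): a mutable table keyed by the probe's port pair, mapping to the send timestamp:</p>
+<pre><code class="language-OCaml">(* Hypothetical stateful correlation: remember each probe's (src_port,
   dst_port) pair together with its send timestamp (in ns), and look the
   probe up again when its ICMP reply arrives. *)
let outstanding : (int * int, int64) Hashtbl.t = Hashtbl.create 7

let record ~src_port ~dst_port sent_ns =
  Hashtbl.replace outstanding (src_port, dst_port) sent_ns

(* returns [Some sent_ns] exactly once per recorded probe *)
let correlate ~src_port ~dst_port =
  match Hashtbl.find_opt outstanding (src_port, dst_port) with
  | Some sent_ns -&gt; Hashtbl.remove outstanding (src_port, dst_port); Some sent_ns
  | None -&gt; None
</code></pre>
+<p>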
+Great.</p> +<p>But as a functional programmer, let's figure out whether we can abolish the +(globally shared) state. Since the ICMP packet contains the original IP +header and the first eight bytes of the UDP header, this is where we will +embed data. As described above, the data is the send timestamp and the value +of the ttl field. For the latter, we can arbitrarily restrict it to 31 (5 bits). +For the timestamp, it is mainly a question of precision and maximum expected +round trip time. The source and destination port together are 32 bits; using 5 for +the ttl leaves 27 bits (an unsigned value of up to 134217727). At a precision of +100ns, 27 bits can represent up to roughly 13.4 seconds, which is sufficient +for round trip time measurements.</p> +<p>Finally, to the code. First we need conversions back and forth between ports +and (ttl, timestamp):</p> +<pre><code class="language-OCaml">(* takes a time-to-live (int) and timestamp (int64, nanoseconds), encodes them + into 16 bit source port and 16 bit destination port: + - the timestamp precision is 100ns (thus, it is divided by 100) + - use bits 26-11 of the timestamp as the source port + - use bits 10-0 as the destination port, together with 5 bits of the ttl +*) +let ports_of_ttl_ts ttl ts = + let ts = Int64.div ts 100L in + let src_port = 0xffff land (Int64.(to_int (shift_right ts 11))) + and dst_port = 0xffe0 land (Int64.(to_int (shift_left ts 5))) lor (0x001f land ttl) + in + src_port, dst_port + +(* inverse operation of ports_of_ttl_ts for the range (src_port and dst_port + are 16 bit values) *) +let ttl_ts_of_ports src_port dst_port = + let ttl = 0x001f land dst_port in + let ts = + let low = Int64.of_int (dst_port lsr 5) + and high = Int64.(shift_left (of_int src_port) 11) + in + Int64.add low high + in + let ts = Int64.mul ts 100L in + ttl, ts +</code></pre> +<p>They should be inverse over the range of valid input: ports are 16 bit numbers, +ttl 
expected to be at most 31, and ts an int64 expressed in nanoseconds.</p> +<p>Related is the function to print out one hop and round trip measurement:</p> +<pre><code class="language-OCaml">(* write a log line of a hop: the number, IP address, and round trip time *) +let log_one now ttl sent ip = + let now = Int64.(mul (logand (div now 100L) 0x7FFFFFFL) 100L) in + let duration = Mtime.Span.of_uint64_ns (Int64.sub now sent) in + Logs.info (fun m -&gt; m &quot;%2d %a %a&quot; ttl Ipaddr.V4.pp ip Mtime.Span.pp duration) +</code></pre> +<p>Most of the logic lies in handling a received ICMP packet:</p> +<pre><code class="language-OCaml">module Icmp = struct + type t = { + send : int -&gt; unit Lwt.t ; + log : int -&gt; int64 -&gt; Ipaddr.V4.t -&gt; unit ; + task_done : unit Lwt.u ; + } + + let connect send log task_done = + let t = { send ; log ; task_done } in + Lwt.return t + + (* This is called for each received ICMP packet. *) + let input t ~src ~dst buf = + let open Icmpv4_packet in + (* Decode the received buffer (the IP header has been cut off already). 
*) + match Unmarshal.of_cstruct buf with + | Error s -&gt; + Lwt.fail_with (Fmt.strf &quot;ICMP: error parsing message from %a: %s&quot; Ipaddr.V4.pp src s) + | Ok (message, payload) -&gt; + let open Icmpv4_wire in + (* There are two interesting cases: Time exceeded (-&gt; send next packet), + and Destination (port) unreachable (-&gt; we reached the final host and can exit) *) + match message.ty with + | Time_exceeded -&gt; + (* Decode the payload, which should be an IPv4 header and a protocol header *) + begin match Ipv4_packet.Unmarshal.header_of_cstruct payload with + | Ok (pkt, off) when + (* Ensure this packet matches our sent packet: the protocol is UDP + and the destination address is the host we're tracing *) + pkt.Ipv4_packet.proto = Ipv4_packet.Marshal.protocol_to_int `UDP &amp;&amp; + Ipaddr.V4.compare pkt.Ipv4_packet.dst (Key_gen.host ()) = 0 -&gt; + let src_port = Cstruct.BE.get_uint16 payload off + and dst_port = Cstruct.BE.get_uint16 payload (off + 2) + in + (* Retrieve ttl and sent timestamp, encoded in the source port and + destination port of the UDP packet we sent, and received back as + ICMP payload. *) + let ttl, sent = ttl_ts_of_ports src_port dst_port in + (* Log this hop. *) + t.log ttl sent src; + (* Sent out the next UDP packet with an increased ttl. *) + let ttl' = succ ttl in + Logs.debug (fun m -&gt; m &quot;ICMP time exceeded from %a to %a, now sending with ttl %d&quot; + Ipaddr.V4.pp src Ipaddr.V4.pp dst ttl'); + t.send ttl' + | Ok (pkt, _) -&gt; + (* Some stray ICMP packet. *) + Logs.debug (fun m -&gt; m &quot;unsolicited time exceeded from %a to %a (proto %X dst %a)&quot; + Ipaddr.V4.pp src Ipaddr.V4.pp dst pkt.Ipv4_packet.proto Ipaddr.V4.pp pkt.Ipv4_packet.dst); + Lwt.return_unit + | Error e -&gt; + (* Decoding error. 
*) + Logs.warn (fun m -&gt; m &quot;couldn't parse ICMP time exceeded payload (IPv4) (%a -&gt; %a) %s&quot; + Ipaddr.V4.pp src Ipaddr.V4.pp dst e); + Lwt.return_unit + end + | Destination_unreachable when Ipaddr.V4.compare src (Key_gen.host ()) = 0 -&gt; + (* We reached the final host, and the destination port was not listened to *) + begin match Ipv4_packet.Unmarshal.header_of_cstruct payload with + | Ok (_, off) -&gt; + let src_port = Cstruct.BE.get_uint16 payload off + and dst_port = Cstruct.BE.get_uint16 payload (off + 2) + in + (* Retrieve ttl and sent timestamp. *) + let ttl, sent = ttl_ts_of_ports src_port dst_port in + (* Log the final hop. *) + t.log ttl sent src; + (* Wakeup the waiter task to exit the unikernel. *) + Lwt.wakeup t.task_done (); + Lwt.return_unit + | Error e -&gt; + (* Decoding error. *) + Logs.warn (fun m -&gt; m &quot;couldn't parse ICMP unreachable payload (IPv4) (%a -&gt; %a) %s&quot; + Ipaddr.V4.pp src Ipaddr.V4.pp dst e); + Lwt.return_unit + end + | ty -&gt; + Logs.debug (fun m -&gt; m &quot;ICMP unknown ty %s from %a to %a: %a&quot; + (ty_to_string ty) Ipaddr.V4.pp src Ipaddr.V4.pp dst + Cstruct.hexdump_pp payload); + Lwt.return_unit +end +</code></pre> +<p>Now, the remaining main unikernel is the module <code>Main</code>:</p> +<pre><code class="language-OCaml">module Main (R : Mirage_random.S) (M : Mirage_clock.MCLOCK) (Time : Mirage_time.S) (N : Mirage_net.S) = struct + module ETH = Ethernet.Make(N) + module ARP = Arp.Make(ETH)(Time) + module IPV4 = Static_ipv4.Make(R)(M)(ETH)(ARP) + module UDP = Udp.Make(IPV4)(R) + + (* Global mutable state: the timeout task for a sent packet. *) + let to_cancel = ref None + + (* Send a single packet with the given time to live. *) + let rec send_udp udp ttl = + (* This is called by the ICMP handler which successfully received a + time exceeded, thus we cancel the timeout task. 
*) + (match !to_cancel with + | None -&gt; () + | Some t -&gt; Lwt.cancel t ; to_cancel := None); + (* Our hop limit is 31 - 5 bit - should be sufficient for most networks. *) + if ttl &gt; 31 then + Lwt.return_unit + else + (* Create a timeout task which: + - sleeps for --timeout interval + - logs an unknown hop + - sends another packet with increased ttl + *) + let cancel = + Lwt.catch (fun () -&gt; + Time.sleep_ns (Duration.of_ms (Key_gen.timeout ())) &gt;&gt;= fun () -&gt; + Logs.info (fun m -&gt; m &quot;%2d *&quot; ttl); + send_udp udp (succ ttl)) + (function Lwt.Canceled -&gt; Lwt.return_unit | exc -&gt; Lwt.fail exc) + in + (* Assign this timeout task. *) + to_cancel := Some cancel; + (* Figure out which source and destination port to use, based on ttl + and current timestamp. *) + let src_port, dst_port = ports_of_ttl_ts ttl (M.elapsed_ns ()) in + (* Send packet via UDP. *) + UDP.write ~ttl ~src_port ~dst:(Key_gen.host ()) ~dst_port udp Cstruct.empty &gt;&gt;= function + | Ok () -&gt; Lwt.return_unit + | Error e -&gt; Lwt.fail_with (Fmt.strf &quot;while sending udp frame %a&quot; UDP.pp_error e) + + (* The main unikernel entry point. *) + let start () () () net = + let cidr = Key_gen.ipv4 () + and gateway = Key_gen.ipv4_gateway () + in + let log_one = fun port ip -&gt; log_one (M.elapsed_ns ()) port ip + (* Create a task to wait on and a waiter to wakeup. *) + and t, w = Lwt.task () + in + (* Setup network stack: ethernet, ARP, IPv4, UDP, and ICMP. *) + ETH.connect net &gt;&gt;= fun eth -&gt; + ARP.connect eth &gt;&gt;= fun arp -&gt; + IPV4.connect ~cidr ~gateway eth arp &gt;&gt;= fun ip -&gt; + UDP.connect ip &gt;&gt;= fun udp -&gt; + let send = send_udp udp in + Icmp.connect send log_one w &gt;&gt;= fun icmp -&gt; + + (* The callback cascade for an incoming network packet. 
*) + let ethif_listener = + ETH.input + ~arpv4:(ARP.input arp) + ~ipv4:( + IPV4.input + ~tcp:(fun ~src:_ ~dst:_ _ -&gt; Lwt.return_unit) + ~udp:(fun ~src:_ ~dst:_ _ -&gt; Lwt.return_unit) + ~default:(fun ~proto ~src ~dst buf -&gt; + match proto with + | 1 -&gt; Icmp.input icmp ~src ~dst buf + | _ -&gt; Lwt.return_unit) + ip) + ~ipv6:(fun _ -&gt; Lwt.return_unit) + eth + in + (* Start the callback in a separate asynchronous task. *) + Lwt.async (fun () -&gt; + N.listen net ~header_size:Ethernet_wire.sizeof_ethernet ethif_listener &gt;|= function + | Ok () -&gt; () + | Error e -&gt; Logs.err (fun m -&gt; m &quot;netif error %a&quot; N.pp_error e)); + (* Send the initial UDP packet with a ttl of 1. This entails the domino + effect to receive ICMP packets, send out another UDP packet with ttl + increased by one, etc. - until a destination unreachable is received, + or the hop limit is reached. *) + send 1 &gt;&gt;= fun () -&gt; + t +end +</code></pre> +<p>The configuration (<code>config.ml</code>) for this unikernel is as follows:</p> +<pre><code class="language-OCaml">open Mirage + +let host = + let doc = Key.Arg.info ~doc:&quot;The host to trace.&quot; [&quot;host&quot;] in + Key.(create &quot;host&quot; Arg.(opt ipv4_address (Ipaddr.V4.of_string_exn &quot;141.1.1.1&quot;) doc)) + +let timeout = + let doc = Key.Arg.info ~doc:&quot;Timeout (in millisecond)&quot; [&quot;timeout&quot;] in + Key.(create &quot;timeout&quot; Arg.(opt int 1000 doc)) + +let ipv4 = + let doc = Key.Arg.info ~doc:&quot;IPv4 address&quot; [&quot;ipv4&quot;] in + Key.(create &quot;ipv4&quot; Arg.(required ipv4 doc)) + +let ipv4_gateway = + let doc = Key.Arg.info ~doc:&quot;IPv4 gateway&quot; [&quot;ipv4-gateway&quot;] in + Key.(create &quot;ipv4-gateway&quot; Arg.(required ipv4_address doc)) + +let main = + let packages = [ + package ~sublibs:[&quot;ipv4&quot;; &quot;udp&quot;; &quot;icmpv4&quot;] &quot;tcpip&quot;; + package &quot;ethernet&quot;; + package &quot;arp-mirage&quot;; + package 
&quot;mirage-protocols&quot;; + package &quot;mtime&quot;; + ] in + foreign + ~keys:[Key.abstract ipv4 ; Key.abstract ipv4_gateway ; Key.abstract host ; Key.abstract timeout] + ~packages + &quot;Unikernel.Main&quot; + (random @-&gt; mclock @-&gt; time @-&gt; network @-&gt; job) + +let () = + register &quot;traceroute&quot; + [ main $ default_random $ default_monotonic_clock $ default_time $ default_network ] +</code></pre> +<p>And voilà, that's all the code. If you copy it together (or download the two +files from <a href="https://github.com/roburio/traceroute">the GitHub repository</a>), +and have OCaml, opam, and <a href="https://mirage.io/wiki/install">mirage (&gt;= 3.8.0)</a> installed, +you should be able to:</p> +<pre><code class="language-bash">$ mirage configure -t hvt +$ make depend +$ make +$ solo5-hvt --net:service=tap0 -- traceroute.hvt ... +... get the output shown at top ... +</code></pre> +<p>Possible enhancements include: using a different protocol (TCP? or any other protocol ID (which may be used to encode more information), encoding data into the IPv4 ID field, or using the full 8 bytes of the upper protocol), encrypting/authenticating the data transmitted (and verifying it has not been tampered with in the ICMP reply), improving error handling and recovery, sending multiple packets for improved round trip time measurements, ...</p> +<p>If you develop enhancements you'd like to share, please send a pull request to the git repository.</p> +<p>The motivation for this traceroute unikernel arose while talking with <a href="https://twitter.com/networkservice">Aaron</a> and <a href="https://github.com/phaer">Paul</a>, who contributed several patches to the IP stack which pass the ttl through.</p> +<p>If you want to support our work on MirageOS unikernels, please <a href="https://robur.coop/Donate">donate to robur</a>. 
I'm interested in feedback, either via <a href="https://twitter.com/h4nnes">twitter</a>, <a href="https://mastodon.social/@hannesm">hannesm@mastodon.social</a> or via eMail.</p> +urn:uuid:ed3036f6-83d2-5e80-b3da-4ccbedb5ae9eTraceroute2021-11-19T18:04:52-00:00hannes<p>A tutorial on how to deploy authoritative name servers, use let's encrypt, and update entries from unix services.</p> +2019-12-23T21:30:53-00:00<h2>Goal</h2> +<p>Have your domain served by OCaml-DNS authoritative name servers. Data is stored in a git remote, and let's encrypt certificates can be requested via DNS. This software has been deployed for more than two years for several domains such as <code>nqsb.io</code> and <code>robur.coop</code>. This post presents the authoritative server side and the certificate library of the OCaml-DNS implementation formerly known as <a href="/Posts/DNS">µDNS</a>.</p> +<h2>Prerequisites</h2> +<p>You need to own a domain, and be able to delegate the name service to your own servers. +You also need two spare public IPv4 addresses (in different /24 networks) for your name servers. +A git server or remote repository reachable via git over ssh. +Servers which support <a href="https://github.com/solo5/solo5">solo5</a> guests, and have the corresponding tender installed. +A computer with <a href="https://opam.ocaml.org">opam</a> (&gt;= 2.0.0) installed.</p> +<h2>Data preparation</h2> +<p>Figure out a way to get the DNS entries of your domain in a <a href="https://tools.ietf.org/html/rfc1034">&quot;master file format&quot;</a>, i.e. what bind uses.</p> +<p>This is a master file for the <code>mirage</code> domain, defining <code>$ORIGIN</code> to avoid typing the domain name after each hostname (use <code>@</code> if you need the domain name only; if you need to refer to a hostname in a different domain end it with a dot (<code>.</code>), i.e. <code>ns2.foo.com.</code>). The default time to live <code>$TTL</code> is an hour (3600 seconds). 
+The zone contains a <a href="https://tools.ietf.org/html/rfc1035#section-3.3.13">start of authority (<code>SOA</code>) record</a> containing the nameserver, hostmaster, serial, refresh, retry, expiry, and minimum. +Also, a single <a href="https://tools.ietf.org/html/rfc1035#section-3.3.11">name server (<code>NS</code>) record</a> <code>ns1</code> is specified with an accompanying <a href="https://tools.ietf.org/html/rfc1035#section-3.4.1">address (<code>A</code>) record</a> pointing to its IPv4 address.</p> +<pre><code class="language-shell">git-repo&gt; cat mirage +$ORIGIN mirage. +$TTL 3600 +@ SOA ns1 hostmaster 1 86400 7200 1048576 3600 +@ NS ns1 +ns1 A 127.0.0.1 +www A 1.1.1.1 +git-repo&gt; git add mirage &amp;&amp; git commit -m initial &amp;&amp; git push +</code></pre> +<h2>Installation</h2> +<p>On your development machine, you need to install various OCaml packages. You don't need privileged access if common tools (C compiler, make, libgmp) are already installed. You have <code>opam</code> installed.</p> +<p>Let's create a fresh <code>switch</code> for the DNS journey:</p> +<pre><code class="language-shell">$ opam init +$ opam update +$ opam switch create udns 4.09.0 +# waiting a bit, a fresh OCaml compiler is getting bootstrapped +$ eval `opam env` #sets some environment variables +</code></pre> +<p>The last command sets environment variables in your current shell session, please use the same shell for the following commands (or run <code>eval $(opam env)</code> in another shell and proceed in there - the output of <code>opam switch</code> should point to <code>udns</code>).</p> +<h3>Validation of our zonefile</h3> +<p>First let's check that OCaml-DNS can parse our zonefile:</p> +<pre><code class="language-shell">$ opam install dns-cli #installs ~/.opam/udns/bin/ozone and other binaries +$ ozone &lt;git-repo&gt;/mirage # see ozone --help +successfully checked zone +</code></pre> +<p>Great. 
Error reporting is not great, but line numbers are indicated (<code>ozone: zone parse problem at line 3: syntax error</code>), and the <a href="https://github.com/mirage/ocaml-dns/tree/v4.2.0/zone">lexer and parser are lex/yacc style</a> (PRs welcome).</p> +<p>FWIW, <code>ozone</code> accepts <code>--old &lt;filename&gt;</code> to check whether an update from the old zone to the new is fine. This can be used as a <a href="https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks">pre-commit hook</a> in your git repository to avoid bad parse states in your name servers.</p> +<h3>Getting the primary up</h3> +<p>The next step is to compile the primary server and run it to serve the domain data. Since the git-via-ssh client is not yet released, we need to add a custom opam repository to this switch.</p> +<pre><code class="language-shell"># git via ssh is not yet released, but this opam repository contains the branch information +$ opam repo add git-ssh git+https://github.com/roburio/git-ssh-dns-mirage3-repo.git +# get the `mirage` application via opam +$ opam install lwt mirage + +# get the source code of the unikernels +$ git clone -b future https://github.com/roburio/unikernels.git +$ cd unikernels/primary-git + +# let's build the server first as a unix application +$ mirage configure --prng fortuna #--no-depext if you have all system dependencies +$ make depend +$ make + +# run it +$ ./primary_git +# starts a unix process which clones https://github.com/roburio/udns.git +# attempts to parse the data as zone files, and fails on parse error +$ ./primary_git --remote=https://my-public-git-repository +# this should fail with EACCES since the DNS server tries to listen on port 53 + +# which requires a privileged user, i.e. 
su, sudo or doas +$ sudo ./primary_git --remote=https://my-public-git-repository +# leave it running, run the following programs in a different shell + +# test it +$ host ns1.mirage 127.0.0.1 +ns1.mirage has address 127.0.0.1 +$ dig any mirage @127.0.0.1 +# a DNS packet printout with all records available for mirage +</code></pre> +<p>That's exciting: the DNS server serves answers from a remote git repository.</p> +<h3>Securing the git access with ssh</h3> +<p>Let's authenticate the access by using ssh, so we feel ready to push data there as well. The primary-git unikernel already includes an experimental <a href="https://github.com/haesbaert/awa-ssh">ssh client</a>, all we need to do is set up credentials - in the following an RSA keypair and the server fingerprint.</p> +<pre><code class="language-shell"># collect the RSA host key fingerprint +$ ssh-keyscan &lt;git-server&gt; &gt; /tmp/git-server-public-keys +$ ssh-keygen -l -E sha256 -f /tmp/git-server-public-keys | grep RSA +2048 SHA256:a5kkkuo7MwTBkW+HDt4km0gGPUAX0y1bFcPMXKxBaD0 &lt;git-server&gt; (RSA) +# we're interested in the SHA256:yyy only + +# generate an ssh keypair +$ awa_gen_key # installed by the make depend step above in ~/.opam/udns/bin +seed is pIKflD07VT2W9XpDvqntcmEW3OKlwZL62ak1EZ0m +ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC5b2cSSkZ5/MAu7pM6iJLOaX9tJsfA8DB1RI34Zygw6FA0y8iisbqGCv6Z94ZxreGATwSVvrpqGo5p0rsKs+6gQnMCU1+sOC4PRlxy6XKgj0YXvAZcQuxwmVQlBHshuq0CraMK9FASupGrSO8/dW30Kqy1wmd/IrqW9J1Cnw+qf0C/VEhIbo7btlpzlYpJLuZboTvEk1h67lx1ZRw9bSPuLjj665yO8d0caVIkPp6vDX20EsgITdg+cFjWzVtOciy4ETLFiKkDnuzHzoQ4EL8bUtjN02UpvX2qankONywXhzYYqu65+edSpogx2TuWFDJFPHgcyO/ZIMoluXGNgQlP awa@awa.local +# please run your own awa_gen_key, don't use the numbers above +</code></pre> +<p>The public key is in standard OpenSSH format and needs to be added to the list of accepted keys on your server - the exact steps depend on your git server. If you're running your own with <a 
href="https://github.com/tv42/gitosis">gitosis</a>, add it as a new public key file and grant that key access to the data repository. If you use gitlab or github, you may want to create a new user account and use the generated key for it.</p> +<p>The private key itself is not displayed, only the seed required to re-generate it when using the same random number generator, in our case <a href="http://mirleft.github.io/ocaml-nocrypto/doc/Nocrypto.Rng.html">fortuna implemented by nocrypto</a> - used by both <code>awa_gen_key</code> and <code>primary_git</code>. The seed is provided as a command-line argument while starting <code>primary_git</code>:</p> +<pre><code class="language-shell"># execute with git over ssh, authenticator from ssh-keyscan, seed from awa_gen_key +$ ./primary_git --authenticator=SHA256:a5kkkuo7MwTBkW+HDt4km0gGPUAX0y1bFcPMXKxBaD0 --seed=pIKflD07VT2W9XpDvqntcmEW3OKlwZL62ak1EZ0m --remote=ssh://git@&lt;git-server&gt;/repo-name.git +# started up, you can try the host and dig commands from above if you like +</code></pre> +<p>To wrap up, we now have a primary authoritative name server for our zone running as a Unix process, which clones a remote git repository via ssh on startup and then serves it.</p> +<h3>Authenticated data updates</h3> +<p>Our remote git repository is the source of truth: if you need to add a DNS entry to the zone, you git pull, edit the zone file, remember to increase the serial in the SOA line, run <code>ozone</code>, git commit and push to the repository.</p> +<p>So, the <code>primary_git</code> needs to be informed of git pushes. This requires a communication channel from the git server (or somewhere else, e.g. your laptop) to the DNS server. 
I prefer in-protocol solutions over adding yet another protocol stack - no way my DNS server will talk HTTP REST.</p> +<p>The DNS protocol has an extension for <a href="https://tools.ietf.org/html/rfc1996">notifications of zone changes</a> (as a DNS packet), usually used between the primary and secondary servers. The <code>primary_git</code> accepts these notify requests (i.e. bends the standard slightly), and upon receipt pulls the remote git repository and serves the fresh zone files. Since a git pull may be rather expensive in terms of CPU cycles and network bandwidth, only authenticated notifications are accepted.</p> +<p>In another extension, the DNS protocol specifies <a href="https://tools.ietf.org/html/rfc2845">authentication (DNS TSIG)</a> with transaction signatures on DNS packets, including a timestamp and fudge to avoid replay attacks. As key material, hmac secrets distributed to both communication endpoints are used.</p> +<p>To recap, the primary server is configured with command line parameters (for the remote repository url and ssh credentials), and serves data from a zonefile. If the secrets were provided via the command line, a restart would be necessary for adding and removing keys. If put into the zonefile, they would be publicly served on request. So instead, we'll use another file, still in zone file format, in the top-level domain <code>_keys</code>, i.e. the <code>mirage._keys</code> file contains keys for the <code>mirage</code> zone. All files ending in <code>._keys</code> are parsed with the normal parser, but put into an authentication store instead of the domain data store (which is served publicly).</p> +<p>For encoding hmac secrets into DNS zone file format, the <a href="https://tools.ietf.org/html/rfc4034#section-2"><code>DNSKEY</code></a> format is used (designed for DNSSEC). 
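The transaction-signature idea described above - an hmac over the message together with a timestamp, where a fudge value bounds the acceptable clock skew and thus limits replays - can be illustrated with a conceptual Python sketch. This is not the RFC 2845 wire format; the `sign`/`verify` helpers and the 6-byte timestamp encoding are merely illustrative:

```python
import base64, hashlib, hmac

def sign(secret_b64, message, now):
    """Sign message bytes together with a timestamp, TSIG-style.
    Conceptual only - not the actual RFC 2845 wire format."""
    key = base64.b64decode(secret_b64)
    mac = hmac.new(key, message + now.to_bytes(6, "big"), hashlib.sha256).digest()
    return mac, now

def verify(secret_b64, message, mac, signed_at, now, fudge=300):
    """Accept only if the MAC matches and the timestamp is within +/- fudge
    seconds, which bounds how long a captured packet can be replayed."""
    if abs(now - signed_at) > fudge:
        return False
    key = base64.b64decode(secret_b64)
    expected = hmac.new(key, message + signed_at.to_bytes(6, "big"),
                        hashlib.sha256).digest()
    return hmac.compare_digest(mac, expected)

secret = "kJJqipaQHQWqZL31Raar6uPnepGFIdtpjkXot9rv2xg="  # an example base64 hmac secret
mac, t = sign(secret, b"NOTIFY mirage", 1_600_000_000)
print(verify(secret, b"NOTIFY mirage", mac, t, now=1_600_000_100))  # True: fresh
print(verify(secret, b"NOTIFY mirage", mac, t, now=1_600_000_500))  # False: outside the fudge window
```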
The <a href="https://www.isc.org/bind/">bind</a> software comes with <code>dnssec-keygen</code> and <code>tsig-keygen</code> to generate DNSKEY output: the flags field is 0, the protocol is 3, and the algorithm identifier for SHA256 is 163 (SHA384 164, SHA512 165). This is reused by the OCaml DNS library. The key material itself is base64 encoded.</p> +<p>Access control and naming of keys follows the DNS domain name hierarchy - a key has the form name._operation.domain, and is granted access to the domain and all its subdomains. Two operations are supported: update and transfer. In the future there may be a dedicated notify operation; for now we'll use update. The name part is ignored for the update operation.</p> +<p>Since we now embed secret information in the git repository, it is a good idea to restrict access to it, i.e. make it private and not publicly cloneable or viewable. Let's generate a first hmac secret and send a notify:</p> +<pre><code class="language-shell">$ dd if=/dev/random bs=1 count=32 | b64encode - +begin-base64 644 - +kJJqipaQHQWqZL31Raar6uPnepGFIdtpjkXot9rv2xg= +==== +[..] +git-repo&gt; echo &quot;personal._update.mirage. DNSKEY 0 3 163 kJJqipaQHQWqZL31Raar6uPnepGFIdtpjkXot9rv2xg=&quot; &gt; mirage._keys +git-repo&gt; git add mirage._keys &amp;&amp; git commit -m &quot;add hmac secret&quot; &amp;&amp; git push + +# now we need to restart the primary git to get the git repository with the key +$ ./primary_git --seed=... 
# arguments from above, remote git, host key fingerprint, private key seed + +# now test that a notify results in a git pull +$ onotify 127.0.0.1 mirage --key=personal._update.mirage:SHA256:kJJqipaQHQWqZL31Raar6uPnepGFIdtpjkXot9rv2xg= +# onotify was installed by dns-cli in ~/.opam/udns/bin/onotify, see --help for options +# further changes to the hmac secrets don't require a restart anymore, a notify packet is sufficient :D +</code></pre> +<p>Ok, this onotify command line could be set up as a git post-commit hook, or run manually after each manual git push.</p> +<h3>Secondary</h3> +<p>It's time to figure out how to integrate the secondary name server. An already existing bind, or anything else that accepts notifications and issues zone transfers with hmac-sha256 secrets, should work out of the box. If you encounter interoperability issues, please get in touch with me.</p> +<p>The <code>secondary</code> subdirectory of the cloned <code>unikernels</code> repository is another unikernel that acts as a secondary server. Its only command-line argument is a list of hmac secrets used for authenticating that the received data originates from the primary server. Data is initially transferred by a <a href="https://tools.ietf.org/html/rfc5936">full zone transfer (AXFR)</a>; later updates (upon refresh timer or notify request sent by the primary) use <a href="https://tools.ietf.org/html/rfc1995">incremental (IXFR)</a> transfers. Zone transfer requests and data are authenticated with transaction signatures again.</p> +<p>A convenience of OCaml DNS is that transfer key names matter, and are of the form &lt;primary-ip&gt;.&lt;secondary-ip&gt;._transfer.domain, i.e. <code>1.1.1.1.2.2.2.2._transfer.mirage</code> if the primary server is 1.1.1.1, and the secondary 2.2.2.2. 
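Two details of the secondary's operation can be made concrete in a short sketch: how a transfer key name encodes both endpoints and the zone, and how SOA serials are compared to decide whether a transfer is needed. The helpers below are hypothetical (not the OCaml DNS API); the serial comparison uses standard DNS serial number arithmetic (RFC 1982), so a wrapped-around serial still counts as newer:

```python
def parse_transfer_key(name):
    """Split '10.0.42.2.10.0.42.3._transfer.mirage' into (primary, secondary, zone).
    Assumes IPv4, i.e. the eight labels before _transfer form the two addresses."""
    labels = name.split(".")
    if labels.index("_transfer") != 8:
        raise ValueError("expected two dotted IPv4 addresses before _transfer")
    return ".".join(labels[0:4]), ".".join(labels[4:8]), ".".join(labels[9:])

def grants_access(key_name, operation, zone):
    """A key name._operation.domain is valid for domain and all its subdomains."""
    labels = key_name.split(".")
    if "_" + operation not in labels:
        return False
    key_zone = labels[labels.index("_" + operation) + 1:]
    return zone.split(".")[-len(key_zone):] == key_zone

def serial_gt(a, b):
    """RFC 1982 serial number comparison for 32-bit SOA serials (wraparound-safe)."""
    return a != b and ((a - b) & 0xFFFFFFFF) < 0x80000000

def needs_transfer(local_serial, remote_serial):
    """Transfer when there is no local copy yet or the primary's serial is newer."""
    return local_serial is None or serial_gt(remote_serial, local_serial)

print(parse_transfer_key("10.0.42.2.10.0.42.3._transfer.mirage"))
print(grants_access("personal._update.mirage", "update", "foo.mirage"))  # True
print(needs_transfer(2, 3))                                              # True
print(needs_transfer(0xFFFFFFFF, 0))                                     # True (serial wrapped)
```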
Encoding the IP address in the name allows both parties to start the communication: the secondary starts by requesting a SOA for all domains for which keys are provided on command line, and if an authoritative SOA answer is received, the AXFR is triggered. The primary server emits notification requests on startup and then on every zone change (i.e. via git pull) to all secondary IP addresses of transfer keys present for the specific zone in addition to the notifications to the NS records in the zone.</p> +<pre><code class="language-shell">$ cd ../secondary +$ mirage configure --prng fortuna +# make depend should not be needed since all packages are already installed by the primary-git +$ make +$ ./secondary +</code></pre> +<h3>IP addresses and routing</h3> +<p>Both primary and secondary serve the data on the DNS port (53) on UDP and TCP. To run both on the same machine and bind them to different IP addresses, we'll use a layer 2 network (ethernet frames) with a host system software switch (bridge interface <code>service</code>), the unikernels as virtual machines (or seccomp-sandboxed) via the <a href="https://github.com/solo5/solo5">solo5</a> backend. Using xen is possible as well. 
As IP address range we'll use 10.0.42.0/24, and the host system uses 10.0.42.1.</p> +<p>The primary-git unikernel needs connectivity to the remote git repository, thus on a laptop in a private network we need network address translation (NAT) from the bridge (where the unikernels speak) to the Internet (where the git repository resides).</p> +<pre><code class="language-shell"># on FreeBSD: +# configure NAT with pf, you need to have forwarding enabled +$ sysctl net.inet.ip.forwarding=1 +$ echo 'nat pass on wlan0 inet from 10.0.42.0/24 to any -&gt; (wlan0)' &gt;&gt; /etc/pf.conf +$ service pf restart + +# make tap interfaces UP on open() +$ sysctl net.link.tap.up_on_open=1 + +# bridge creation, naming, and IP setup +$ ifconfig bridge create +bridge0 +$ ifconfig bridge0 name service +$ ifconfig service 10.0.42.1/24 + +# two tap interfaces for our unikernels +$ ifconfig tap create +tap0 +$ ifconfig tap create +tap1 +# add them to the bridge +$ ifconfig service addm tap0 addm tap1 +</code></pre> +<h3>Primary and secondary setup</h3> +<p>Let's update our zone slightly to reflect the IP changes.</p> +<pre><code class="language-shell">git-repo&gt; cat mirage +$ORIGIN mirage. +$TTL 3600 +@ SOA ns1 hostmaster 2 86400 7200 1048576 3600 +@ NS ns1 +@ NS ns2 +ns1 A 10.0.42.2 +ns2 A 10.0.42.3 + +# we also need an additional transfer key +git-repo&gt; cat mirage._keys +personal._update.mirage. DNSKEY 0 3 163 kJJqipaQHQWqZL31Raar6uPnepGFIdtpjkXot9rv2xg= +10.0.42.2.10.0.42.3._transfer.mirage. DNSKEY 0 3 163 cDK6sKyvlt8UBerZlmxuD84ih2KookJGDagJlLVNo20= +git-repo&gt; git commit -m &quot;updates&quot; . &amp;&amp; git push +</code></pre> +<p>Ok, the git repository is ready; now we need to compile the unikernels for the virtualisation target (see <a href="https://mirage.io/wiki/hello-world#Building-for-Another-Backend">other targets</a> for further information).</p> +<pre><code class="language-shell"># back to primary +$ cd ../primary-git +$ mirage configure -t hvt --prng fortuna # or e.g. 
-t spt (and solo5-spt below) +# installs backend-specific opam packages, recompiles some +$ make depend +$ make +[...] +$ solo5-hvt --net:service=tap0 -- primary_git.hvt --ipv4=10.0.42.2/24 --ipv4-gateway=10.0.42.1 --seed=.. --authenticator=.. --remote=ssh+git://... +# should now run as a virtual machine (kvm, bhyve), and clone the git repository +$ dig any mirage @10.0.42.2 +# should reply with the SOA and NS records, and also the name server address records in the additional section + +# secondary +$ cd ../secondary +$ mirage configure -t hvt --prng fortuna +$ make +$ solo5-hvt --net:service=tap1 -- secondary.hvt --ipv4=10.0.42.3/24 --keys=10.0.42.2.10.0.42.3._transfer.mirage:SHA256:cDK6sKyvlt8UBerZlmxuD84ih2KookJGDagJlLVNo20= +# an ipv4-gateway is not needed in this setup, but in real deployment later +# it should start up and transfer the mirage zone from the primary + +$ dig any mirage @10.0.42.3 +# should now output the same information as from 10.0.42.2 + +# testing an update and propagation +# edit mirage zone, add a new record and increment the serial number +git-repo&gt; echo &quot;foo A 127.0.0.1&quot; &gt;&gt; mirage +git-repo&gt; vi mirage &lt;- increment serial +git-repo&gt; git commit -m 'add foo' . &amp;&amp; git push +$ onotify 10.0.42.2 mirage --key=personal._update.mirage:SHA256:kJJqipaQHQWqZL31Raar6uPnepGFIdtpjkXot9rv2xg= + +# now check that it worked +$ dig foo.mirage @10.0.42.2 # primary +$ dig foo.mirage @10.0.42.3 # secondary got notified and transferred the zone +</code></pre> +<p>You can also check the behaviour when restarting either of the VMs, whenever the primary is available the zone is synchronised. If the primary is down, the secondary still serves the zone. 
When the secondary is started while the primary is down, it won't serve any data until the primary is online (the secondary polls periodically, the primary sends notifies on startup).</p> +<h3>Dynamic data updates via DNS, pushed to git</h3> +<p>DNS is a rich protocol, and it also has builtin <a href="https://tools.ietf.org/html/rfc2136">updates</a> that are supported by OCaml DNS, again authenticated with hmac-sha256 and shared secrets. Bind provides the command-line utility <code>nsupdate</code> to send these update packets; a simple <code>oupdate</code> unix utility is available as well (i.e. for integration of dynamic DNS clients). You know the drill: add a shared secret to the primary, git push, notify the primary, and voilà - we can update dynamically in-protocol. An update received by the primary in this way will trigger a git push to the remote git repository, and notifications to the secondary servers as described above.</p> +<pre><code class="language-shell"># being lazy, I reuse the key above +$ oupdate 10.0.42.2 personal._update.mirage:SHA256:kJJqipaQHQWqZL31Raar6uPnepGFIdtpjkXot9rv2xg= my-other.mirage 1.2.3.4 + +# let's observe the remote git +git-repo&gt; git pull +# there should be a new commit generated by the primary +git-repo&gt; git log + +# test it, should return 1.2.3.4 +$ dig my-other.mirage @10.0.42.2 +$ dig my-other.mirage @10.0.42.3 +</code></pre> +<p>So we can deploy further <code>oupdate</code> (or <code>nsupdate</code>) clients, distribute hmac secrets, and have the DNS zone updated. The source of truth is still the git repository, to which the primary-git pushes. Merge conflicts and timing of pushes are not yet dealt with. They are unlikely to happen since the primary is notified on pushes and should have up-to-date data in storage. 
Sorry, I'm unsure about the error semantics - try it yourself.</p> +<h3>Let's encrypt!</h3> +<p><a href="https://letsencrypt.org/">Let's encrypt</a> is a certificate authority (CA) whose certificate is shipped as a trust anchor in web browsers. They specified a protocol for an <a href="https://tools.ietf.org/html/draft-ietf-acme-acme-05">automated certificate management environment (ACME)</a>, used to get X509 certificates for your services. In the protocol, a certificate signing request (public key and hostname) is sent to let's encrypt servers, which send a challenge to prove ownership of the hostnames. One widely-used way to solve this challenge is running a web server; another is to serve it as a text record from the authoritative DNS server.</p> +<p>Since I avoid persistent storage when possible, and also don't want to integrate an HTTP client stack in the primary server, I developed a third unikernel that acts as a (hidden) secondary server, performs the tedious HTTP communication with let's encrypt servers, and stores all data in the public DNS zone.</p> +<p>For encoding of certificates, the DANE working group specified <a href="https://tools.ietf.org/html/rfc6698.html#section-7.1">TLSA</a> records in DNS. They are quadruples of usage, selector, matching type, and ASN.1 DER-encoded material. We set usage to 3 (domain-issued certificate), matching type to 0 (no hash), and selector to 0 (full certificate) or 255 (private usage) for certificate signing requests. 
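These parameter choices can be written down as a tiny sketch. The helper functions are hypothetical, but the (usage, selector, matching type, data) quadruples follow the values just described:

```python
# TLSA parameter values as used in this scheme: usage 3 = domain-issued
# certificate, matching type 0 = no hash, selector 0 = full certificate,
# selector 255 = private use (here: certificate signing requests)
USAGE_DOMAIN_ISSUED = 3
SELECTOR_FULL_CERT = 0
SELECTOR_PRIVATE_CSR = 255
MATCHING_NO_HASH = 0

def tlsa_for_csr(der_bytes):
    return (USAGE_DOMAIN_ISSUED, SELECTOR_PRIVATE_CSR, MATCHING_NO_HASH, der_bytes)

def tlsa_for_cert(der_bytes):
    return (USAGE_DOMAIN_ISSUED, SELECTOR_FULL_CERT, MATCHING_NO_HASH, der_bytes)

def is_pending_csr(records):
    """A signing request without a matching certificate is the situation
    the let's encrypt unikernel reacts to."""
    selectors = {sel for (_, sel, _, _) in records}
    return SELECTOR_PRIVATE_CSR in selectors and SELECTOR_FULL_CERT not in selectors

records = [tlsa_for_csr(b"csr-der")]
print(is_pending_csr(records))             # True: ACME interaction needed
records.append(tlsa_for_cert(b"cert-der"))
print(is_pending_csr(records))             # False: certificate already present
```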
The interaction is as follows:</p> +<ol> +<li>Primary, secondary, and let's encrypt unikernels are running +</li> +<li>A service (<code>ocertify</code>, <code>unikernels/certificate</code>, or the <code>dns-certify.mirage</code> library) demands a TLS certificate, and has an hmac secret for the primary DNS +</li> +<li>The service generates a certificate signing request with the desired hostname(s), and performs an nsupdate with TLSA 255 &lt;DER encoded signing-request&gt; +</li> +<li>The primary accepts the update, pushes the new zone to git, and sends notifies to the secondary and let's encrypt unikernels, which (incrementally) transfer the zone +</li> +<li>The let's encrypt unikernel notices, while transferring the zone, a signing request without a certificate, and starts the HTTP interaction with let's encrypt +</li> +<li>The let's encrypt unikernel solves the challenge, and sends the response as an update of a TXT record to the primary nameserver +</li> +<li>The primary pushes the TXT record to git, and notifies secondaries (which transfer the zone) +</li> +<li>The let's encrypt servers request the TXT record from either or both authoritative name servers +</li> +<li>The let's encrypt unikernel polls for the issued certificate and sends an update to the primary with TLSA 0 &lt;DER encoded certificate&gt; +</li> +<li>The primary pushes the certificate to git, and notifies secondaries (which transfer the zone) +</li> +<li>The service polls TLSA records for the hostname, and uses the certificate upon retrieval +</li> +</ol> +<p>Note that neither the signing request nor the certificate contain private key material, thus it is fine to serve them publicly. Please also note that if the service finds a certificate for the hostname in DNS which is valid (start and end date) and uses the same public key, this certificate is used and steps 3-10 are not executed.</p> +<p>The let's encrypt unikernel does not serve anything; it is a reactive system which acts upon notification from the primary. 
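The note above about reusing a still-valid certificate can be sketched as a small client-side check. The dict-based certificate representation is purely illustrative; the real code inspects X509 validity and the embedded public key:

```python
from datetime import datetime, timezone

def reuse_certificate(cert, pubkey, now):
    """Reuse an existing certificate found in DNS when it is within its
    validity period and matches our public key; only otherwise are a new
    signing request and steps 3-10 needed. `cert` is a hypothetical dict
    with not_before/not_after/public_key fields."""
    return (cert is not None
            and cert["not_before"] <= now <= cert["not_after"]
            and cert["public_key"] == pubkey)

now = datetime(2020, 1, 15, tzinfo=timezone.utc)
cert = {"not_before": datetime(2020, 1, 1, tzinfo=timezone.utc),
        "not_after": datetime(2020, 3, 31, tzinfo=timezone.utc),
        "public_key": "pk1"}
print(reuse_certificate(cert, "pk1", now))  # True: no ACME interaction needed
print(reuse_certificate(cert, "pk2", now))  # False: request a new certificate
```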
Thus, it can be executed in a private address space (behind a NAT). Since the OCaml DNS server stack needs to push notifications to it, it preserves all incoming signed SOA requests as candidates for notifications on update. The let's encrypt unikernel ensures it always has a connection to the primary to receive notifications.</p> +<pre><code class="language-shell"># getting let's encrypt up and running +$ cd ../lets-encrypt +$ mirage configure -t hvt --prng fortuna +$ make depend +$ make + +# run it +$ solo5-hvt --net:service=tap2 -- letsencrypt.hvt --keys=... + +# test it +$ ocertify 10.0.42.2 foo.mirage +</code></pre> +<p>For actual testing with let's encrypt servers you need to have the primary and secondary deployed on your remote hosts, and your domain needs to be delegated to these servers. Good luck. And ensure you have a backup of your git repository.</p> +<p>As fine print, while this tutorial was about the <code>mirage</code> zone, you can stick any number of zones into the git repository. If you use a <code>_keys</code> file (without any domain prefix), you can configure hmac secrets for all zones, i.e. something to use in your let's encrypt unikernel and secondary unikernel. Dynamic addition of zones is supported: just create a new zonefile and notify the primary; the secondary will be notified and pick it up. The primary responds to a signed SOA for the root zone (i.e. 
requested by the secondary) with the SOA response (not authoritative), and additionally notifications for all domains of the primary.</p> +<h3>Conclusion and thanks</h3> +<p>This tutorial presented how to use the OCaml DNS based unikernels to run authoritative name servers for your domain, using a git repository as the source of truth, with dynamic authenticated updates and let's encrypt certificate issuing.</p> +<p>There are further steps to take, such as monitoring -- have a look at the <code>monitoring</code> branch of the opam repository above, and the <code>future-robur</code> branch of the unikernels repository above, which use a second network interface for reporting syslog and metrics to telegraf / influx / grafana. Some DNS features are still missing, most prominently DNSSEC.</p> +<p>I'd like to thank all the people involved in this software stack; it would not exist without key components, including <a href="https://github.com/mirage/ocaml-git">git</a>, <a href="https://irmin.io/">irmin 2.0</a>, <a href="https://github.com/mirleft/ocaml-nocrypto">nocrypto</a>, <a href="https://github.com/haesbaert/awa-ssh">awa-ssh</a>, <a href="https://github.com/mirage/ocaml-cohttp">cohttp</a>, <a href="https://github.com/solo5/solo5">solo5</a>, <a href="https://github.com/mirage/mirage">mirage</a>, <a href="https://github.com/mmaker/ocaml-letsencrypt">ocaml-letsencrypt</a>, and more.</p> +<p>If you want to support our work on MirageOS unikernels, please <a href="https://robur.coop/Donate">donate to robur</a>. 
I'm interested in feedback, either via <a href="https://twitter.com/h4nnes">twitter</a>, <a href="https://mastodon.social/@hannesm">hannesm@mastodon.social</a> or via email.</p> +<p><em>Deploying authoritative OCaml-DNS servers as MirageOS unikernels - hannes, last updated 2021-11-19.</em></p> +<p><em>MirageOS unikernels are reproducible :) - published 2019-12-16.</em></p> +<h2>Reproducible builds summit</h2> +<p>I'm just back from the <a href="https://reproducible-builds.org/events/Marrakesh2019/">Reproducible builds summit 2019</a>. In 2018, several people developing <a href="https://ocaml.org">OCaml</a>, <a href="https://opam.ocaml.org">opam</a>, and <a href="https://mirage.io">MirageOS</a> attended <a href="https://reproducible-builds.org/events/paris2018/">the Reproducible builds summit in Paris</a>. The notes from last year on <a href="https://reproducible-builds.org/events/paris2018/report/#Toc11410_331763073">opam reproducibility</a> and <a href="https://reproducible-builds.org/events/paris2018/report/#Toc11681_331763073">MirageOS reproducibility</a> are online. After last year's workshop, Raja started developing the opam reproducibility builder <a href="https://github.com/rjbou/orb">orb</a>, which I extended at and after this year's summit. This year, there were hacking days before and after the facilitated summit, which allowed further interaction with participants, writing some code, and conducting experiments. I again had an exciting time at the summit and hacking days, thanks to our hosts, organisers, and all participants.</p> +<h2>Goal</h2> +<p>Stepping back a bit, let's first look at the <a href="https://reproducible-builds.org/">goal of reproducible builds</a>: when compiling source code multiple times, the produced binaries should be identical. It should be sufficient if the binaries are behaviourally equal, but this is pretty hard to check. 
It is much easier to check <strong>bit-wise identity of binaries</strong>, which relaxes the burden on the checker -- checking for reproducibility is reduced to computing the hash of the binaries. Let's stick to the bit-wise identical binary definition, which also means software developers have to avoid non-determinism during compilation in their toolchains, dependent libraries, and developed code.</p> +<p>A <a href="https://reproducible-builds.org/docs/test-bench/">checklist</a> of potential things leading to non-determinism has been written up by the reproducible builds project. Examples include recording the build timestamp into the binary, and the ordering of code and embedded data. The reproducible builds project also developed <a href="https://packages.debian.org/sid/disorderfs">disorderfs</a> for testing reproducibility and <a href="https://diffoscope.org/">diffoscope</a> for comparing binaries with file-dependent readers, falling back to <code>objdump</code> and <code>hexdump</code>. A giant <a href="https://tests.reproducible-builds.org/">test infrastructure</a> with <a href="https://tests.reproducible-builds.org/debian/index_variations.html">lots of variations</a> between the builds, mostly using Debian, has been set up over the years.</p> +<p>Reproducibility is a precondition for trustworthy binaries. See <a href="https://reproducible-builds.org/#why-does-it-matter">why does it matter</a>. If there are no instructions for how to get from the published sources to the exact binary, why should anyone trust and use the binary which claims to be the result of the sources? It may as well contain different code, including a backdoor, bitcoin mining code, or code outputting the wrong results for specific inputs, etc. Reproducibility does not imply the software is free of security issues or backdoors, but instead of an audit of the binary - which is tedious and rarely done - the source code can be audited - but the toolchain (compiler, linker, ..) 
used for compilation needs to be taken into account, i.e. trusted or audited to not be malicious. <strong>I will only ever publish binaries if they are reproducible</strong>.</p> +<p>My main interest at the summit was to enhance existing tooling and conduct some experiments about the reproducibility of <a href="https://mirage.io">MirageOS unikernels</a> -- a unikernel is a statically linked ELF binary to be run as a Unix process or <a href="https://github.com/solo5/solo5">virtual machine</a>. MirageOS heavily uses <a href="https://ocaml.org">OCaml</a> and <a href="https://opam.ocaml.org">opam</a>, the OCaml package manager, and is an opam package itself. Thus, <em>checking reproducibility of a MirageOS unikernel is the same problem as checking reproducibility of an opam package</em>.</p> +<h2>Reproducible builds with opam</h2> +<p>Testing for reproducibility is achieved by taking the sources and compiling them twice independently. Afterwards the equality of the resulting binaries can be checked. In trivial projects, the sources are just a single file, or originate from a single tarball. In OCaml, opam uses <a href="https://github.com/ocaml/opam-repository">a community repository</a> where OCaml developers publish their package releases, but can also use custom repositories, and in addition pin packages to git remotes (a url including a branch or commit), or a directory on the local filesystem. Manually tracking and updating all dependent packages of a MirageOS unikernel is not feasible: our hello-world compiled for hvt (kvm/bhyve) already has 79 opam dependencies, including the OCaml compiler, which is distributed as an opam package. The unikernel serving this website depends on 175 opam packages.</p> +<p>Conceptually there should be two tools: the <em>initial builder</em>, which takes the latest opam packages which do not conflict, and exports the exact package versions used during the build, as well as hashes of binaries. 
The other tool is a <em>rebuilder</em>, which imports the export, conducts a build, and outputs the hashes of the produced binaries.</p> +<p>Opam has the concept of a <code>switch</code>, which is an environment where a package set is installed. Switches are independent of each other, and can already be exported and imported. Unfortunately the export is incomplete: if a package includes additional patches as part of the repository -- sometimes needed for fixing releases where the actual author or maintainer of a package responds slowly -- neither these packages nor the patches end up in the export. Also, if a package is pinned to a git branch, the branch appears in the export, but this may change over time by pushing more commits or even force-pushing to that branch. In <a href="https://github.com/ocaml/opam/pull/4040">PR #4040</a> (under discussion and review), also developed during the summit, I propose to embed the additional files as base64 encoded values in the opam file. To solve the latter issue, I modified the export mechanism to <a href="https://github.com/ocaml/opam/pull/4055">embed the git commit hash (PR #4055)</a>, and to avoid sources from a local directory or sources which do not have a checksum.</p> +<p>So the opam export contains the information required to gather the exact same sources and build instructions of the opam packages. If the opam repository were self-contained (i.e. did not depend on any other tools), this would be sufficient. But opam does not run in thin air: it requires some system utilities such as <code>/bin/sh</code>, <code>sed</code>, a GNU make, commonly <code>git</code>, a C compiler, a linker, an assembler. Since opam is available on various operating systems, the plugin <code>depext</code> handles host system dependencies, e.g. 
if your opam package requires <code>gmp</code> to be installed, this requires slightly different names depending on host system or distribution, take a look at <a href="https://github.com/ocaml/opam-repository/blob/master/packages/conf-gmp/conf-gmp.1/opam">conf-gmp</a>. This also means, opam has rather good information about both the opam dependencies and the host system dependencies for each package. Please note that the host system packages used during compilation are not yet recorded (i.e. which <code>gmp</code> package was installed and used during the build, only that a <code>gmp</code> package has to be installed). The base utilities mentioned above (C compiler, linker, shell) are also not recorded yet.</p> +<p>Operating system information available in opam (such as architecture, distribution, version), which in some cases maps to exact base utilities, is recorded in the build-environment, a separate artifact. The environment variable <a href="https://reproducible-builds.org/specs/source-date-epoch/"><code>SOURCE_DATE_EPOCH</code></a>, used for communicating the same timestamp when software is required to record a timestamp into the resulting binary, is also captured in the build environment.</p> +<p>Additional environment variables may be captured or used by opam packages to produce different output. To avoid this, both the initial builder and the rebuilder are run with minimal environment variables: only <code>PATH</code> (normalised to a whitelist of <code>/bin</code>, <code>/usr/bin</code>, <code>/usr/local/bin</code> and <code>/opt/bin</code>) and <code>HOME</code> are defined. Missing information at the moment includes CPU features: some libraries (gmp?, nocrypto) emit different code depending on the CPU feature.</p> +<h2>Tooling</h2> +<p><em>TL;DR: A <strong>build</strong> builds an opam package, and outputs <code>.opam-switch</code>, <code>.build-hashes.N</code>, and <code>.build-environment.N</code>. 
A <strong>rebuild</strong> uses these artifacts as input, builds the package and outputs another <code>.build-hashes.M</code> and <code>.build-environment.M</code>.</em></p> +<p>The command-line utility <code>orb</code> can be installed and used:</p> +<pre><code class="language-sh">$ opam pin add orb git+https://github.com/hannesm/orb.git#active +$ orb build --twice --keep-build-dir --diffoscope &lt;your-favourite-opam-package&gt; +</code></pre> +<p>It provides two subcommands, <code>build</code> and <code>rebuild</code>. The <code>build</code> command takes a list of local opam <code>--repos</code> from which to take opam packages (defaults to <code>default</code>), a compiler (either a variant <code>--compiler=4.09.0+flambda</code>, a version <code>--compiler=4.06.0</code>, or a pin to a local development version <code>--compiler-pin=~/ocaml</code>), and optionally an existing switch <code>--use-switch</code>. It creates a switch, builds the packages, and emits the opam export, hashes of all files installed by these packages, and the build environment. The flag <code>--keep-build</code> retains the build products; opam's <code>--keep-build-dir</code> in addition retains temporary build products and generated source code. If <code>--twice</code> is provided, a rebuild (described next) is executed after the initial build.</p> +<p>The <code>rebuild</code> command takes a directory with the opam export and build environment to build the opam package. It first compares the build environment with the host system, sets <code>SOURCE_DATE_EPOCH</code> and the switch location accordingly, and executes the import. Once the build is finished, it compares the hashes of the resulting files with the previous run. On divergence, if build directories were kept in the previous build, and if diffoscope is available and <code>--diffoscope</code> was provided, diffoscope is run on the diverging files. 
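Conceptually, the comparison step of a rebuild is just hashing the files installed by both builds and listing divergences. A sketch under assumptions: the manifest format here (a plain name-to-digest mapping) is invented for illustration; orb's actual `.build-hashes` artifacts are tool-specific:

```python
import hashlib, pathlib

def hash_tree(root):
    """Map each file below root to its hex SHA256 - a stand-in for the
    hashes-of-installed-files artifact."""
    base = pathlib.Path(root)
    return {str(p.relative_to(base)): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in sorted(base.rglob("*")) if p.is_file()}

def diverging(first, second):
    """Files missing from one build or hashing differently - the candidates
    to inspect with diffoscope or diff -ur."""
    return sorted(k for k in first.keys() | second.keys()
                  if first.get(k) != second.get(k))

build1 = {"bin/primary_git.hvt": "aa11", "lib/dns.cmxa": "bb22"}
build2 = {"bin/primary_git.hvt": "aa11", "lib/dns.cmxa": "cc33"}
print(diverging(build1, build2))  # ['lib/dns.cmxa'] -> not reproducible
```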
If <code>--keep-build-dir</code> was provided as well, <code>diff -ur</code> can be used to compare the temporary build and sources, including build logs.</p> +<p>The builds are run in parallel, as opam does by default; this parallelism did not lead to different binaries in my experiments.</p> +<h2>Results and discussion</h2> +<p><strong>All MirageOS unikernels I have deployed are reproducible \o/</strong>. Also, several binaries such as <code>orb</code> itself, <code>opam</code>, <code>solo5-hvt</code>, and all <code>albatross</code> utilities are reproducible.</p> +<p>The unikernels range from hello world to web servers (e.g. this blog, getting its data on startup via a git clone to memory), authoritative DNS servers, and a CalDAV server. They vary in size between 79 and 200 opam packages, resulting in ELF binaries between 2MB and 16MB (including debug symbols). The <a href="https://github.com/roburio/reproducible-unikernel-repo">unikernel opam repository</a> contains some reproducible unikernels used for testing. Some work-in-progress enhancements are needed to achieve this:</p> +<p>At the moment, the opam package of a MirageOS unikernel is automatically generated by <code>mirage configure</code>, but only used for tracking opam dependencies. I worked on <a href="https://github.com/mirage/mirage/pull/1022">mirage PR #1022</a> to extend the generated opam package with build and install instructions.</p> +<p>Also, if a locale is set, ocamlgraph needs to be patched, since it emits a (locale-dependent) timestamp (see below).</p> +<p>The OCaml program <a href="https://github.com/mirage/ocaml-crunch"><code>crunch</code></a> embeds a subdirectory as OCaml code into a binary, which we use in MirageOS quite regularly for static assets, etc. 
This plays in several ways into reproducibility: on the one hand, it needs a timestamp for its <code>last_modified</code> functionality (and adheres since <a href="https://github.com/mirage/ocaml-crunch/pull/45">June 2018</a> to the <code>SOURCE_DATE_EPOCH</code> spec, thanks to Xavier Clerc). On the other hand, before version 3.2.0 (released Dec 14th) it used hashtables for storing the file contents, where iteration is not deterministic (the insertion order is not preserved); this was <a href="https://github.com/mirage/ocaml-crunch/pull/51">fixed in PR #51</a> by using a Map instead.</p> +<p>Functoria, a tool used to configure MirageOS devices and their dependencies, can emit a list of opam packages which were required to build the unikernel. This uses <code>opam list --required-by --installed --rec &lt;pkgs&gt;</code>, which uses the cudf graph (<a href="https://github.com/mirage/functoria/pull/189#issuecomment-566696426">thanks to Raja for the explanation</a>) and during the rebuild drops some packages. <a href="https://github.com/mirage/functoria/pull/189">PR #189</a> avoids this by not using the <code>--rec</code> argument and manually computing the fixpoint instead.</p> +<p>Certainly, the choice of environment variables, and whether to vary them (as <a href="https://tests.reproducible-builds.org/debian/index_variations.html">debian does</a>) or to not define them (or normalise them) while building, is arguable. Since MirageOS supports neither time zones nor internationalisation, there is no need to solve this issue prematurely. 
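The normalisation described earlier - only a whitelisted PATH and HOME survive, plus SOURCE_DATE_EPOCH - can be sketched as building a minimal environment mapping. This is illustrative only; orb's actual handling differs in detail:

```python
def normalised_env(source_date_epoch, home):
    """Minimal build environment in the spirit of orb: a whitelisted PATH,
    HOME, and SOURCE_DATE_EPOCH for tools that would otherwise embed the
    current time into their output."""
    return {
        "PATH": ":".join(["/bin", "/usr/bin", "/usr/local/bin", "/opt/bin"]),
        "HOME": home,
        "SOURCE_DATE_EPOCH": str(source_date_epoch),
    }

env = normalised_env(1576500000, "/home/builder")
print(sorted(env))  # ['HOME', 'PATH', 'SOURCE_DATE_EPOCH']
```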
On a related note, even with different locale settings, MirageOS unikernels are reproducible apart from an <a href="https://github.com/backtracking/ocamlgraph/pull/90">issue in ocamlgraph #90</a> embedding the output of <a href="https://pubs.opengroup.org/onlinepubs/9699919799/utilities/date.html"><code>date</code></a>, which differs depending on <code>LANG</code> and locale (<code>LC_*</code>) settings.</p>
+<p>Prior art in reproducible MirageOS unikernels is the <a href="https://github.com/mirage/qubes-mirage-firewall/">mirage-qubes-firewall</a>, which has been reproducible since <a href="https://github.com/mirage/qubes-mirage-firewall/commit/07ff3d61477383860216c69869a1ffee59145e45">early 2017</a>. Their approach differs: they build in a docker container with the opam repository pinned to an exact git commit.</p>
+<h2>Further work</h2>
+<p>I only tested a certain subset of opam packages and MirageOS unikernels, mainly on a single machine (my laptop) running FreeBSD, and would be happy if others tested the reproducibility of their OCaml programs with the tools provided. There could also be CI machines rebuilding opam packages and reporting results to a central repository. I'm pretty sure there are more reproducibility issues in the opam ecosystem. I developed a <a href="https://github.com/roburio/reproducible-testing-repo">reproducible testing opam repository</a> with opam packages that do not depend on OCaml, mainly for further tooling development. Some tests were also conducted on a Debian system with the same result. The variations, apart from build time, were using a different user, and different locale settings.</p>
+<p>As mentioned above, more of the environment, such as CPU features and external system packages, should be captured in the build environment.</p>
+<p>When comparing OCaml libraries, some output files (cmt / cmti / cma / cmxa) are not deterministic, but contain minimal divergences whose root cause I was not able to spot.
It would be great to fix this, likely in the OCaml compiler distribution. Since the final result, the binary I'm interested in, is not affected by non-identical intermediate build products, I hope someone (you?) is interested in improving on this side. OCaml bytecode output also seems to be non-deterministic. There is <a href="https://github.com/coq/coq/issues/11229">a discussion on the coq issue tracker</a> which may be related.</p>
+<p>In contrast to initial plans, I did not use the <a href="https://reproducible-builds.org/specs/build-path-prefix-map/"><code>BUILD_PATH_PREFIX_MAP</code></a> environment variable, which is implemented in OCaml by <a href="https://github.com/ocaml/ocaml/pull/1515">PR #1515</a> (and followups). The main reasons are that something in the OCaml toolchain (I suspect the bytecode interpreter) needed absolute paths to find libraries, thus I'd have needed a symlink from the left-hand side to the current build directory, which was tedious. Also, my installed assembler does not respect the build path prefix map, and BUILD_PATH_PREFIX_MAP is not widely supported. See e.g. the Debian <a href="https://tests.reproducible-builds.org/debian/rb-pkg/unstable/amd64/diffoscope-results/ocaml-zarith.html">zarith</a> package with different build paths and its effects on the binary.</p>
+<p>I'm fine with recording the build path (switch location) in the build environment for now - it turns out to end up only once in MirageOS unikernels, likely from the last linking step, which will <a href="http://blog.llvm.org/2019/11/deterministic-builds-with-clang-and-lld.html">hopefully soon be solved by llvm 9.0</a>.</p>
+<p>What was fun was to compare the unikernel built on Linux with gcc against one built on FreeBSD with clang and lld - spoiler: they emit debug sections with different dwarf versions, and the diff is pretty big.
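</p>
<p>For completeness, the rewriting that <code>BUILD_PATH_PREFIX_MAP</code> specifies is essentially prefix substitution. A simplified stdlib-only sketch, assuming <code>target=source</code> pair order as in the spec; the percent-encoding and error handling of the real spec are omitted:</p>

```ocaml
(* Simplified sketch of BUILD_PATH_PREFIX_MAP-style rewriting: the value is
   a colon-separated list of [target=source] pairs; a path starting with a
   [source] prefix is rewritten to start with [target] instead.  The
   percent-encoding rules of the actual spec are omitted here. *)
let apply_prefix_map map path =
  let pairs =
    List.filter_map
      (fun entry ->
         match String.index_opt entry '=' with
         | Some i ->
           Some (String.sub entry 0 i,
                 String.sub entry (i + 1) (String.length entry - i - 1))
         | None -> None)
      (String.split_on_char ':' map)
  in
  let applies (_target, source) =
    String.length path >= String.length source
    && String.sub path 0 (String.length source) = source
  in
  (* the rightmost applicable pair takes precedence *)
  match List.find_opt applies (List.rev pairs) with
  | Some (target, source) ->
    target ^ String.sub path (String.length source)
               (String.length path - String.length source)
  | None -> path

let () =
  assert (apply_prefix_map "/build=/home/user/.opam/build"
            "/home/user/.opam/build/main.ml"
          = "/build/main.ml")
```

<p>A compiler honouring such a map would emit the rewritten, build-directory-independent path into debug info.</p>
<p>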
Other fun differences were between OCaml compiler versions: the difference between minor versions (4.08.0 vs 4.08.1) is pretty small (~100kB as human-readable output), while the difference between major versions (4.08.1 vs 4.09.0) is rather big (~900kB as human-readable diff).</p>
+<p>An item on my list for the future is to distribute the opam export, build hashes and build environment artifacts in an authenticated way. I want to integrate this, <a href="https://in-toto.io/">in-toto</a> style, into <a href="https://github.com/hannesm/conex">conex</a>, my not-yet-deployed implementation of <a href="https://theupdateframework.github.io/">tuf</a> for opam that needs further development and a test installation, hopefully in 2020.</p>
+<p>If you want to support our work on MirageOS unikernels, please <a href="https://robur.coop/Donate">donate to robur</a>. I'm interested in feedback, either via <a href="https://twitter.com/h4nnes">twitter</a>, <a href="https://mastodon.social/@hannesm">hannesm@mastodon.social</a> or via eMail.</p>
+urn:uuid:09922d6b-56c8-595d-8086-5aef9656cbc4 | Reproducible MirageOS unikernel builds | 2021-11-19T18:04:52-00:00 | hannes<p>Five years since the initial ocaml-x509 release, it has been reworked and used more widely</p>
+2019-08-15T11:21:30-00:00<h2>Cryptographic material</h2>
+<p>Once a private and public key pair is generated (it doesn't matter whether it is plain RSA, DSA, or ECC on any curve), this is fine from a scientific point of view, and can already be used for authenticating and encrypting. From a practical point of view, the public parts need to be exchanged and verified (usually as a fingerprint or hash thereof). This leads to the question of how to encode this cryptographic material, and how to embed an identity (or multiple identities), capabilities, and other information into it.
<a href="https://en.wikipedia.org/wiki/X.509">X.509</a> is a standard that solves this encoding and embedding, and provides more functionality, such as establishing chains of trust and revocation of invalidated or compromised material. X.509 uses certificates, which contain the public key and additional information (in an extensible key-value store), and are signed by an issuer: either by the private key corresponding to the certificate's own public key - a so-called self-signed certificate - or by a different private key, an authority one step up the chain. A rather long, but very good introduction to certificates by Mike Malone is <a href="https://smallstep.com/blog/everything-pki.html">available here</a>.</p>
+<h2>OCaml ecosystem evolving</h2>
+<p>More than 5 years ago David Kaloper and I <a href="https://mirage.io/blog/introducing-x509">released the initial ocaml-x509</a> package as part of our <a href="https://nqsb.io">TLS stack</a>, which contained code for decoding and encoding certificates, and path validation of a certificate chain (as described in <a href="https://tools.ietf.org/html/rfc5280">RFC 5280</a>).
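</p>
<p>Path validation of a chain can be pictured, very roughly, as walking issuer links up to a trust anchor. A toy model that assumes away all cryptography - an illustration only, not ocaml-x509's implementation:</p>

```ocaml
(* Toy model of chain validation: a certificate is just subject/issuer
   names (no signatures, no expiry), and a chain is valid when each
   certificate is issued by the next one and the last issuer is a trust
   anchor. *)
type cert = { subject : string; issuer : string }

let rec valid_chain ~trust_anchors = function
  | [] -> false
  | [ c ] -> List.mem c.issuer trust_anchors
  | c :: (c' :: _ as rest) ->
    c.issuer = c'.subject && valid_chain ~trust_anchors rest

let () =
  let server = { subject = "example.com"; issuer = "Intermediate" } in
  let inter = { subject = "Intermediate"; issuer = "Root" } in
  assert (valid_chain ~trust_anchors:[ "Root" ] [ server; inter ])
```

<p>The real thing additionally checks signatures, validity periods, name constraints, and extensions, but the chain-walking shape is the same.</p>
<p>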
The validation logic and the decoder/encoder - based on the ASN.1 grammar specified in the RFC and implemented using David's <a href="https://github.com/mirleft/ocaml-asn1-combinators">asn1-combinators</a> library - changed a lot over time.</p>
+<p>The OCaml ecosystem evolved over the years, which led to some changes:</p>
+<ul>
+<li>Camlp4 deprecation - we used camlp4 for stream parsers of PEM-encoded certificates, and sexplib.syntax to derive s-expression decoders and encoders;
+</li>
+<li>Avoiding brittle ppx converters - which we used for s-expression decoders and encoders of certificates after camlp4 was deprecated;
+</li>
+<li>Build and release system iterations - initially oasis and a packed library, then topkg and ocamlbuild, now dune;
+</li>
+<li>Introduction of the <code>result</code> type in the standard library - we used to use <code>[ `Ok of certificate option | `Fail of failure ]</code>;
+</li>
+<li>No more leaking exceptions in the public API;
+</li>
+<li>Usage of pretty-printers, especially with the <a href="https://erratique.ch/software/fmt">fmt</a> library (<code>val pp : Format.formatter -&gt; 'a -&gt; unit</code>), instead of <code>val to_string : t -&gt; string</code> functions;
+</li>
+<li>Release of <a href="https://erratique.ch/software/ptime">ptime</a>, a library for platform-independent POSIX time support;
+</li>
+<li>Release of <a href="https://erratique.ch/software/rresult">rresult</a>, which includes combinators for computation <code>result</code>s;
+</li>
+<li>Release of <a href="https://github.com/hannesm/gmap">gmap</a>, a <code>Map</code> whose value types depend on the key, used for X.509 extensions, GeneralName, DistinguishedName, etc.;
+</li>
+<li>Release of <a href="https://github.com/hannesm/domain-name">domain-name</a>, a library for domain name operations (as specified in <a href="https://tools.ietf.org/html/rfc1035">RFC 1035</a>) - used for name validation;
+</li>
+<li>Usage of the <a href="https://github.com/mirage/alcotest">alcotest</a> unit testing
framework (instead of oUnit).
+</li>
+</ul>
+<h2>More use cases for X.509</h2>
+<p>Initially, we designed and used ocaml-x509 for providing TLS server endpoints and validation in TLS clients - mostly on the public web, where each operating system ships a set of ~100 trust anchors to validate any web server certificate against. But once you have an X.509 implementation, every authentication problem can be solved by applying it.</p>
+<h3>Authentication with path building</h3>
+<p>It turns out that the trust anchor sets are not equal across operating systems and versions, thus some web servers serve sets, instead of chains, of certificates - as described in <a href="https://tools.ietf.org/html/rfc4158">RFC 4158</a>, where the client implementation needs to build valid paths and accept a connection if any path can be validated. The path building was slightly wrong initially in 0.5.2, but quickly fixed in <a href="https://github.com/mirleft/ocaml-x509/commit/1a1476308d24bdcc49d45c4cd9ef539ca57461d2">0.5.3</a>.</p>
+<h3>Fingerprint authentication</h3>
+<p>The chain of trust validation is useful for the open web, where you as a software developer don't know which remote endpoints your software will ever connect to - as long as the remote has a certificate signed (via intermediates) by any of the trust anchors.
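</p>
<p>Fingerprint pinning, discussed next, reduces authentication to comparing a hash of the peer's key material against a known value. A stdlib sketch - <code>Digest</code> is OCaml's MD5 module, used here purely for illustration, and the "key" is a mock string; real pinning hashes the DER-encoded public key, preferably with SHA-256:</p>

```ocaml
(* Pinning sketch: the expected fingerprint is configured ahead of time;
   at connection time the peer's (mock) public key is hashed and compared.
   Digest is the stdlib MD5 module - illustration only, not a
   recommendation of MD5. *)
let fingerprint data = Digest.to_hex (Digest.string data)

let authenticate ~pinned peer_key = String.equal pinned (fingerprint peer_key)

let () =
  let key = "mock-public-key" in
  let pinned = fingerprint key in
  assert (authenticate ~pinned key);
  assert (not (authenticate ~pinned "other-key"))
```

<p>No trust anchors are involved: the peer is trusted iff its key hashes to the pinned value.</p>
<p>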
In the early days, before <a href="https://letsencrypt.org/">let's encrypt</a> was launched and embedded as trust anchors (or cross-signed by already deployed trust anchors), operators needed to pay for a certificate - a business model where some CAs did not bother to check the authenticity of a certificate signing request, and thus random people ended up owning valid certificates for microsoft.com or google.com.</p>
+<p>Instead of using the set of trust anchors, the fingerprint of the server certificate, or preferably the fingerprint of the public key of the certificate, can be used for authentication, as has optionally been done for some years in <a href="https://github.com/hannesm/jackline/commit/a1e6f3159be1e45e6b690845e1b29366c41239a2">jackline</a>, an XMPP client. Support for this certificate / public key pinning was added in x509 0.2.1 / 0.5.0.</p>
+<h3>Certificate signing requests</h3>
+<p>Until x509 0.4.0 there was no support for generating certificate signing requests (CSR), as defined in PKCS 10, which are self-signed blobs containing a public key, an identity, and possibly extensions. Such a CSR is sent to the certificate authority, and after validation of ownership of the identity and payment of a fee, the certificate is issued. Let's encrypt specified the ACME protocol which automates the proof of ownership: they provide an HTTP API for requesting a challenge, providing the response (the proof of ownership) via HTTP or DNS, and then allow the submission of a CSR and downloading the signed certificate.
The ocaml-x509 library provides operations for creating such a CSR, and also for signing a CSR to generate a certificate.</p>
+<p>Mindy developed the command-line utility <a href="https://github.com/yomimono/ocaml-certify/">certify</a> which uses these operations from the ocaml-x509 library and acts as a swiss-army knife, purely in OCaml, for these required operations.</p>
+<p>Maker developed a <a href="https://github.com/mmaker/ocaml-letsencrypt">let's encrypt library</a> which implements the above-mentioned ACME protocol for provisioning certificates from CSRs, also using our ocaml-x509 library.</p>
+<p>To complete the required certificate authority functionality, certificate revocation lists, both validation and signing, were implemented in x509 0.6.0.</p>
+<h3>Deploying unikernels</h3>
+<p>As <a href="/Posts/VMM">described in another post</a>, I developed <a href="https://github.com/hannesm/albatross">albatross</a>, an orchestration system for MirageOS unikernels. This uses ASN.1 for internal socket communication and allows remote management via a TLS connection which is mutually authenticated with an X.509 client certificate. To encrypt the X.509 client certificate, first a TLS handshake where the server authenticates itself to the client is established, and over that connection another TLS handshake is established where the client certificate is requested. Note that this mechanism can be dropped with TLS 1.3, since there the certificates are transmitted over an already encrypted channel.</p>
+<p>The client certificate already contains the command to execute remotely - as a custom extension, be it &quot;show me the console output&quot;, or &quot;destroy the unikernel with name = YYY&quot;, or &quot;deploy the included unikernel image&quot;. The advantage is that the commands are already authenticated, and there is no need to develop an ad-hoc protocol on top of the TLS session.
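</p>
<p>Resource limits delegated along a certificate chain compose by taking the minimum: an intermediate can only restrict, never extend, what its issuer granted. A hypothetical sketch with a single memory limit per certificate (albatross tracks several such resources):</p>

```ocaml
(* Hypothetical model: each certificate in the chain carries a memory
   limit (in MB); the effective limit is the minimum along the chain, so
   a delegated certificate can only restrict what its issuer granted. *)
let effective_limit = function
  | [] -> 0 (* an empty chain grants nothing *)
  | limits -> List.fold_left min max_int limits

let () =
  (* CA grants 1024, the intermediate restricts to 512, the leaf asks for 768 *)
  assert (effective_limit [ 1024; 512; 768 ] = 512)
```

<p>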
The resource limits, assigned by the authority, are also part of the certificate chain - i.e. the number of unikernels, access to network bridges, available accumulated memory, and accumulated size for block devices are constrained by the certificate chain presented to the server, and by the currently running unikernels. The names in the chain are used for access control - if Alice and Bob have intermediate certificates from the same CA, neither may Alice manage Bob's unikernels, nor may Bob manage Alice's unikernels. I have been using albatross in production for 2.5 years on two physical machines with ~20 unikernels total (multiple users, multiple administrative domains), and it works reliably and is much nicer to deal with than <code>scp</code> and custom hacked shell scripts.</p>
+<h2>Why 0.7?</h2>
+<p>There are still some missing pieces in our ocaml-x509 implementation, namely modern ECC certificates (depending on elliptic curve primitives not yet available in OCaml), RSA-PSS signing (should be straightforward), PKCS 12 (there is a <a href="https://github.com/mirleft/ocaml-x509/pull/114">pull request</a>, but this should wait until asn1-combinators supports the <code>ANY defined BY</code> construct to clean up the code), ...
+Once these features are supported, the library should likely be named PKCS since it supports more than X.509, and released as 1.0.</p>
+<p>The 0.7 release series moved a lot of modules and function names around, thus it is a major breaking release. By using a map instead of lists for extensions, GeneralName, ..., the API was further revised - invariants such as that each extension key (an ASN.1 object identifier) may occur at most once are now enforced.
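</p>
<p>The at-most-once invariant falls out naturally when extensions live in a map keyed by OID. A stdlib sketch of the idea, with plain strings standing in for OIDs and values (gmap additionally makes the value type depend on the key):</p>

```ocaml
module Ext = Map.Make (String)

(* Adding an extension fails if the key (an ASN.1 OID in the real
   library, a plain string here) is already present: at most one value
   per key, by construction. *)
let add_ext key value exts =
  if Ext.mem key exts then Error (`Duplicate key)
  else Ok (Ext.add key value exts)

let () =
  match add_ext "2.5.29.15" "keyUsage" Ext.empty with
  | Error _ -> assert false
  | Ok exts ->
    assert (add_ext "2.5.29.15" "again" exts = Error (`Duplicate "2.5.29.15"))
```

<p>With a list of pairs, nothing stops a second entry for the same OID; with a map, the duplicate has to be handled explicitly.</p>
<p>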
By not leaking exceptions through the public interface, the API is easier to use safely - see <a href="https://github.com/mmaker/ocaml-letsencrypt/commit/dc53518f46310f384c9526b1d96a8e8f815a09c7">let's encrypt</a>, <a href="https://git.robur.io/?p=openvpn.git;a=commitdiff;h=929c53116c1438ba1214f53df7506d32da566ccc">openvpn</a>, <a href="https://github.com/yomimono/ocaml-certify/pull/17">certify</a>, <a href="https://github.com/mirleft/ocaml-tls/pull/394">tls</a>, <a href="https://github.com/mirage/capnp-rpc/pull/158">capnp</a>, <a href="https://github.com/hannesm/albatross/commit/50ed6a8d1ead169b3e322aaccb469e870ad72acc">albatross</a>.</p>
+<p>In 0.7.0 I intended to have much more precise types, especially for the SubjectAlternativeName (SAN) extension that uses a GeneralName, but it turns out the GeneralName is also used for NameConstraints (NC) in a different way -- an IP in SAN is an IPv4 or IPv6 address, in NC it is the IP/netmask; DNS is a domain name in SAN, in NC it is a name starting with a leading dot (i.e. &quot;.example.com&quot;), which is not a valid domain name. In 0.7.1, based on a bug report, I had to revert these variants and use less precise types.</p>
+<h2>Conclusion</h2>
+<p>The work on X.509 was sponsored by <a href="http://ocamllabs.io/">OCaml Labs</a>. You can support our work at robur by a <a href="https://robur.io/Donate">donation</a>, which we will use to work on our OCaml and MirageOS projects.
You can also reach out to us to realize commercial products.</p>
+<p>I'm interested in feedback, either via <strike><a href="https://twitter.com/h4nnes">twitter</a></strike> <a href="https://mastodon.social/@hannesm">hannesm@mastodon.social</a> or via eMail.</p>
+urn:uuid:f2cf2a6a-8eef-5c2c-be03-d81a1bf0f066 | X509 0.7 | 2021-11-19T18:04:52-00:00 | hannes<p>Bringing MirageOS into production, take IV: monitoring, CalDAV, DNS</p>
+2019-07-08T19:29:05-00:00<h2>Working at <a href="https://robur.io">robur</a></h2>
+<p>As announced <a href="/Posts/DNS">previously</a>, I started to work at robur in early 2018. We're a collective of five people, distributed around Europe and the US, with the goal of deploying MirageOS unikernels. We do this by developing bespoke MirageOS unikernels which provide useful services, and by deploying them for ourselves. We also develop new libraries and enhance existing ones and other components of MirageOS. Example unikernels include <a href="https://robur.io">our website</a> which uses <a href="https://github.com/Engil/Canopy">Canopy</a>, a <a href="https://robur.io/Our%20Work/Projects#CalDAV-Server">CalDAV server that stores entries in a git remote</a>, and <a href="https://github.com/roburio/unikernels">DNS servers</a> (the latter two are further described below).</p>
+<p>Robur is part of the non-profit company <a href="https://techcultivation.org">Center for the Cultivation of Technology</a>, who manage the legal and administrative sides for us. We are ourselves responsible for acquiring funding to pay ourselves reasonable salaries. We received funding for CalDAV from <a href="https://prototypefund.de">prototypefund</a> and further funding from <a href="https://tarides.com">Tarides</a>, for TLS 1.3 from <a href="http://ocamllabs.io/">OCaml Labs</a>; we security-audited an OCaml codebase, and received <a href="https://robur.io/Donate">donations</a>, also in the form of Bitcoins.
We're looking for further funded collaborations and also contracting; mail us at <code>team@robur.io</code>. Please <a href="https://robur.io/Donate">donate</a> (tax-deductible in the EU), so we can accomplish our goal of putting robust and sustainable MirageOS unikernels into production, replacing insecure legacy systems that emit tons of CO<span style="vertical-align: baseline; position: relative;bottom: -0.4em;">2</span>.</p>
+<h2>Deploying MirageOS unikernels</h2>
+<p>While several examples have been running for years (the <a href="https://mirage.io">MirageOS website</a>, <a href="http://ownme.ipredator.se">Bitcoin Piñata</a>, <a href="https://tls.nqsb.io">TLS demo server</a>, etc.), and some shell-scripts for cloud providers are floating around, deployment is not (yet) streamlined.</p>
+<p>Service deployment is complex: you have to consider its configuration, exfiltration of logs and metrics, provisioning with valid key material (TLS certificate, hmac shared secret) and authenticators (CA certificate, ssh key fingerprint). Instead of requiring millions of lines of code for orchestration (such as Kubernetes), creating the images (docker), or provisioning (ansible), why not minimise the required configuration and dependencies?</p>
+<p><a href="/Posts/VMM">Earlier in this blog I introduced Albatross</a>, which serves in an enhanced version as our deployment platform on a physical machine (running 15 unikernels at the moment); I won't discuss it in more detail in this article.</p>
+<h2>CalDAV</h2>
+<p>In 2018, <a href="https://linse.me/">Steffi</a> and I developed a CalDAV server. Since November 2018 we have had a test installation for robur, initially running as a Unix process on a virtual machine and persisting data to files on the disk. In mid-June 2019 we migrated it to a MirageOS unikernel: thanks to great efforts in <a href="https://github.com/mirage/ocaml-git">git</a> and <a href="https://github.com/mirage/irmin">irmin</a>, unikernels can push to a remote git repository.
We <a href="https://github.com/haesbaert/awa-ssh/pull/8">extended the ssh library</a> with an ssh client and <a href="https://github.com/mirage/ocaml-git/pull/362">use this in git</a>. This also means our CalDAV server is completely immutable (it does not carry state across reboots, apart from the data in the remote repository) and does not have persistent state in the form of a block device. Its configuration is mainly done at compile time by the selection of libraries (syslog, monitoring, ...), and via boot arguments passed to the unikernel at startup.</p>
+<p>We monitored the resource usage when migrating our CalDAV server from a Unix process to a MirageOS unikernel. The unikernel size is just below 10MB. The workload is some clients communicating with the server on a regular basis. We use <a href="https://grafana.com/">Grafana</a> with an <a href="https://www.influxdata.com/">influx</a> time series database to monitor virtual machines. Data is collected on the host system (<code>rusage</code> sysctl, <code>kinfo_mem</code> sysctl, <code>ifdata</code> sysctl, <code>vm_get_stats</code> BHyve statistics), and our unikernels these days emit further metrics (mostly counters: gc statistics, malloc statistics, tcp sessions, http requests and status codes).</p>
+<p><a href="/static/img/crobur-june-2019.png"><img src="/static/img/crobur-june-2019.png" width="700" /></a></p>
+<p>Please note that memory usage (upper right) and vm exits (lower right) use a logarithmic scale. The CPU usage was reduced by more than a factor of 4. The memory usage dropped by a factor of 25, and the network traffic increased - previously we stored log messages on the virtual machine itself; now we send them to a dedicated log host.</p>
+<p>A MirageOS unikernel, apart from having a smaller attack surface, indeed uses fewer resources and actually emits less CO<span style="vertical-align: baseline; position: relative;bottom: -0.4em;">2</span> than the same service on a Unix virtual machine.
So we're doing something good for the environment! :)</p>
+<p>Our calendar server at the moment contains 63 events; the git repository had around 500 commits in the past month: nearly all of them from the CalDAV server itself when a client modified data via CalDAV, plus two manual commits: the initial data imported from the file system, and one commit fixing a bug of the encoder in our <a href="https://github.com/roburio/icalendar/pull/2">icalendar library</a>.</p>
+<p>Our CalDAV implementation is very basic: scheduling and adding attendees (which requires sending out eMail) are not supported. But it works well for us; we have individual calendars and a shared one which everyone can write to. On the client side we use macOS and iOS iCalendar, Android DAVdroid, and Thunderbird. If you'd like to try our CalDAV server, have a look <a href="https://github.com/roburio/caldav/tree/future/README.md">at our installation instructions</a>. Please <a href="https://github.com/roburio/caldav/issues">report issues</a> if you find bugs or struggle with the installation.</p>
+<h2>DNS</h2>
+<p>There has been more work on our DNS implementation, now <a href="https://github.com/mirage/ocaml-dns">here</a>. We included a DNS client library, and some <a href="https://github.com/roburio/unikernels/tree/future">example unikernels</a> are available. They also require our <a href="https://github.com/roburio/git-ssh-dns-mirage3-repo">opam repository overlay</a>. Please report issues if you run into trouble while experimenting with that.</p>
+<p>Most prominent is <code>primary-git</code>, a unikernel which acts as a primary authoritative DNS server (UDP and TCP). On startup, it fetches a remote git repository that contains zone files and shared hmac secrets. The zones are served, and secondary servers are notified with the respective serial numbers of the zones, authenticated using TSIG with the shared secrets.
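</p>
<p>Whether a notify carries a newer serial is decided with serial number arithmetic (RFC 1982), which handles the 32-bit wraparound. A sketch:</p>

```ocaml
(* RFC 1982 serial number arithmetic, as used for DNS zone serials: s2 is
   newer than s1 iff (s2 - s1) mod 2^32 lies strictly between 0 and 2^31.
   With wrapping 32-bit subtraction this is just a signed comparison. *)
let serial_newer ~old_serial ~new_serial =
  Int32.compare (Int32.sub new_serial old_serial) 0l > 0

let () =
  assert (serial_newer ~old_serial:1l ~new_serial:2l);
  assert (not (serial_newer ~old_serial:2l ~new_serial:2l));
  (* wraparound: the serial after 0xffffffff is 0, then 1, ... *)
  assert (serial_newer ~old_serial:Int32.minus_one ~new_serial:1l)
```

<p>A secondary only transfers the zone when the notified serial is newer in this sense.</p>
<p>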
The primary server provides dynamic in-protocol updates of DNS resource records (<code>nsupdate</code>), and after successful authentication pushes the change to the remote git. To change the zone, you can just edit the zonefile and push to the git remote - with the proper pre- and post-commit-hooks, an authenticated notify is sent to the primary server, which then pulls from the git remote.</p>
+<p>Another noteworthy unikernel is <code>letsencrypt</code>, which acts as a secondary server, and whenever a TLSA record with custom type (0xFF) and a DER-encoded certificate signing request is observed, it requests a signature from letsencrypt by solving the DNS challenge. The certificate is pushed to the DNS server as a TLSA record as well. The DNS implementation provides <code>ocertify</code> and <code>dns-mirage-certify</code> which use the above mechanism to retrieve valid let's encrypt certificates. The caller (unikernel or Unix command-line utility) either takes a private key directly or generates one from a (provided) seed, and generates a certificate signing request. It then looks in DNS for a certificate which is still valid and matches the public key and the hostname. If such a certificate is not present, the certificate signing request is pushed to DNS (via the nsupdate protocol), authenticated using TSIG with a given secret. This way our public-facing unikernels (website, this blog, TLS demo server, ..) block on startup until they have obtained a certificate via DNS - we avoid embedding the certificate in the unikernel image.</p>
+<h2>Monitoring</h2>
+<p>We like to gather statistics about the resource usage of our unikernels to find potential bottlenecks and observe memory leaks ;) The base for the setup is the <a href="https://github.com/mirage/metrics">metrics</a> library, which is similar in design to the <a href="https://erratique.ch/software/logs">logs</a> library: libraries use the core to gather metrics.
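</p>
<p>The design shared by logs and metrics separates instrumented libraries from a globally registered reporter. A minimal sketch of that pattern - this is the general shape, not the actual metrics API:</p>

```ocaml
(* Reporter pattern: libraries call [report]; an application may register
   a reporter, and without one each call is a cheap no-op. *)
let reporter : (string -> int -> unit) option ref = ref None

let set_reporter r = reporter := Some r

let report name value =
  match !reporter with None -> () | Some r -> r name value

let collected = ref []

let () =
  report "http_requests" 1; (* no reporter registered yet: dropped *)
  set_reporter (fun n v -> collected := (n, v) :: !collected);
  report "http_requests" 2;
  assert (!collected = [ ("http_requests", 2) ])
```

<p>This keeps the instrumentation cost negligible for users who never register a reporter, while applications choose how and where the data is exfiltrated.</p>
<p>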
A different aspect is the reporter, which is globally registered and responsible for exfiltrating the data via its favourite protocol. If no reporter is registered, the work overhead is negligible.</p>
+<p><a href="/static/img/crobur-june-2019-unikernel.png"><img src="/static/img/crobur-june-2019-unikernel.png" width="700" /></a></p>
+<p>This is a dashboard which combines statistics gathered from the host system and various metrics from the MirageOS unikernel. The <code>monitoring</code> branch of our opam repository overlay is used together with <a href="https://github.com/hannesm/monitoring-experiments">monitoring-experiments</a>. The logs error counter (middle right) was caused by the icalendar parser, which tried to parse its own badly emitted ics (the bug is now fixed; the dashboard is from last month).</p>
+<h2>OCaml libraries</h2>
+<p>The <a href="https://github.com/hannesm/domain-name">domain-name</a> library was developed to handle RFC 1035 domain names and host names. It initially was part of the DNS code, but is now freestanding to be used in other core libraries (such as ipaddr) with a small dependency footprint.</p>
+<p>The <a href="https://github.com/hannesm/gmap">GADT map</a> is a normal OCaml Map structure, but takes key-dependent value types by using a GADT. This library also was part of DNS, but is more broadly useful: we already use it in our icalendar (the data format for calendar entries in CalDAV) library, our <a href="https://git.robur.io/?p=openvpn.git;a=summary">OpenVPN</a> configuration parser uses it as well, and also <a href="https://github.com/mirleft/ocaml-x509/pull/115">x509</a> - which got reworked quite a bit recently (release pending), and there's preliminary PKCS12 support (which deserves its own article). <a href="https://github.com/hannesm/ocaml-tls">TLS 1.3</a> is available on a branch, but is not yet merged.
More work is underway, hopefully with sufficient time to write more articles about it.</p>
+<h2>Conclusion</h2>
+<p>More projects are happening as we speak; it takes time to upstream all the changes, such as monitoring, new core libraries, getting our DNS implementation released, pushing Conex into production, more features such as DNSSec, ...</p>
+<p>I'm interested in feedback, either via <strike><a href="https://twitter.com/h4nnes">twitter</a></strike> <a href="https://mastodon.social/@hannesm">hannesm@mastodon.social</a> or via eMail.</p>
+urn:uuid:fd3a6aa5-a7ba-549a-9d0f-5f05fa6c434e | Summer 2019 | 2021-11-19T18:04:52-00:00 | hannes
\ No newline at end of file
diff --git a/index.html b/index.html
new file mode 100644
index 0000000..1ec6330
--- /dev/null
+++ b/index.html
@@ -0,0 +1,28 @@
+
+full stack engineer

Mirroring the opam repository and all tarballs

Written by hannes

Re-developing an opam cache from scratch, as a MirageOS unikernel

+

All your metrics belong to influx

Written by hannes

How to monitor your MirageOS unikernel with albatross and monitoring-experiments

+

Deploying binary MirageOS unikernels

Written by hannes

Finally, we provide reproducible binary MirageOS unikernels together with packages to reproduce them and setup your own builder

+

Cryptography updates in OCaml and MirageOS

Written by hannes

Elliptic curves (ECDSA/ECDH) are supported in a maintainable and secure way.

+

The road ahead for MirageOS in 2021

Written by hannes

Home office, MirageOS unikernels, 2020 recap, 2021 tbd

+

Traceroute

Written by hannes

A MirageOS unikernel which traces the path between itself and a remote host.

+

Deploying authoritative OCaml-DNS servers as MirageOS unikernels

Written by hannes

A tutorial how to deploy authoritative name servers, let's encrypt, and updating entries from unix services.

+

Reproducible MirageOS unikernel builds

Written by hannes

MirageOS unikernels are reproducible :)

+

X509 0.7

Written by hannes

Five years since the initial ocaml-x509 release, it has been reworked and used more widely

+

Summer 2019

Written by hannes

Bringing MirageOS into production, take IV: monitoring, CalDAV, DNS

+

The Bitcoin Piñata - no candy for you

Written by hannes

More than three years ago we launched our Bitcoin Piñata as a transparent security bait. It is still up and running!

+

My 2018 contains robur and starts with re-engineering DNS

Written by hannes

New year brings new possibilities and a new environment. I've been working on the most widely deployed key-value store, the domain name system. Primary and secondary name services are available, including dynamic updates, notify, and tsig authentication.

+

Albatross - provisioning, deploying, managing, and monitoring virtual machines

Written by hannes

all we need is X.509

+

Conex, establish trust in community repositories

Written by hannes

Conex is a library to verify and attest package release integrity and authenticity through the use of cryptographic signatures.

+

Who maintains package X?

Written by hannes

We describe why manual gathering of metadata is out of date, and version control systems are awesome.

+

Jackline, a secure terminal-based XMPP client

Written by hannes

implement it once to know you can do it. implement it a second time and you get readable code. implementing it a third time from scratch may lead to useful libraries.

+

Exfiltrating log data using syslog

Written by hannes

sometimes preservation of data is useful

+

Re-engineering ARP

Written by hannes

If you want it as you like, you've to do it yourself

+

Minimising the virtual machine monitor

Written by hannes

MirageOS solo5 multiboot native on bhyve

+

Counting Bytes

Written by hannes

looking into dependencies and their sizes

+

Configuration DSL step-by-step

Written by hannes

how to actually configure the system

+

Catch the bug, walking through the stack

Written by hannes

10BTC could've been yours

+

Fitting the things together

Written by hannes

building a simple website

+

Why OCaml

Written by hannes

a gentle introduction to OCaml

+

Operating systems

Written by hannes

Operating systems and MirageOS

+

About

Written by hannes

introduction (myself, this site)

+

+ +mirage-xen->mirage-types + + + + +mirage-xen->mirage-profile + + + + +mirage-clock-xen + +mirage-clock-xen +1.0.0 + + +mirage-xen->mirage-clock-xen + + + + +mirage-xen->lwt + + + + +mirage-xen->io-page + + + + +mirage-xen->cstruct + + + + +mirage-types.lwt + +mirage-types.lwt +2.8.0 + + +mirage-profile->ocplib-endian.bigstring + + + + +mirage-profile->lwt + + + + +mirage-profile->cstruct.ppx + + + + +mirage-profile->cstruct + + + + +mirage-net-xen + +mirage-net-xen +1.4.2 + + +mirage-net-xen->xen-gnt + + + + +mirage-net-xen->xen-evtchn + + + + +mirage-net-xen->mirage-xen + + + + +mirage-net-xen->mirage-profile + + + + +mirage-net-xen->lwt + + + + +mirage-net-xen->ipaddr + + + + +mirage-net-xen->cstruct + + + + +mirage-entropy-xen->mirage-xen + + + + +mirage-entropy-xen->lwt + + + + +mirage-entropy-xen->cstruct + + + + +mirage-console.xen + +mirage-console.xen +2.1.3 + + +mirage-console.xen->xen-gnt + + + + +mirage-console.xen->xen-evtchn + + + + +mirage-console.xen->mirage-xen + + + + +mirage-console.xen->mirage-types + + + + +mirage-console.proto + +mirage-console.proto +2.1.3 + + +mirage-console.xen->mirage-console.proto + + + + +mirage-console.xen->lwt + + + + +mirage-console.xen->io-page + + + + +mirage-console.proto->xenstore + + + + +mirage-console + +mirage-console +2.1.3 + + +mirage-console.proto->mirage-console + + + + +mirage-console->mirage-types.lwt + + + + +mirage-console->mirage-types + + + + +mirage-console->lwt + + + + +mirage-bootvar + +mirage-bootvar +0.3.1 + + +mirage-bootvar->re.str + + + + +mirage-bootvar->re + + + + +mirage-bootvar->mirage-xen + + + + +mirage-bootvar->lwt + + + + +lwt->bytes + + + + +logs->result + + + + +ipaddr->sexplib + + + + +ipaddr->bytes + + + + +io-page.unix + +io-page.unix +1.6.0 + + +io-page.unix->bigarray + + + + +io-page->cstruct + + + + +io-page->bytes + + + + +fmt + +fmt +0.7.1 + + +functoria.runtime->fmt + + + + +cmdliner + +cmdliner +0.9.8 + + +functoria.runtime->cmdliner + + + + +cstruct.ppx->cstruct + + + + 
+cstruct->sexplib + + + + +cstruct->ocplib-endian.bigstring + + + + +cstruct->ocplib-endian + + + + +cstruct->bytes + + + + +cstruct->bigarray + + + + +bigarray->unix + + + + +astring->bytes + + + + +asn1-combinators->zarith + + + + +asn1-combinators->cstruct + + + + + diff --git a/static/img/pinata_access_20180403.png b/static/img/pinata_access_20180403.png new file mode 100644 index 0000000..bbae86b Binary files /dev/null and b/static/img/pinata_access_20180403.png differ diff --git a/static/img/pinata_access_cumulative_20180403.png b/static/img/pinata_access_cumulative_20180403.png new file mode 100644 index 0000000..74acd63 Binary files /dev/null and b/static/img/pinata_access_cumulative_20180403.png differ diff --git a/static/img/tcp-frame-client.png b/static/img/tcp-frame-client.png new file mode 100644 index 0000000..8abe1a2 Binary files /dev/null and b/static/img/tcp-frame-client.png differ diff --git a/static/img/tcp-frame-server.png b/static/img/tcp-frame-server.png new file mode 100644 index 0000000..c2595ed Binary files /dev/null and b/static/img/tcp-frame-server.png differ diff --git a/static/js/highlight.pack.js b/static/js/highlight.pack.js new file mode 100644 index 0000000..5b2fdab --- /dev/null +++ b/static/js/highlight.pack.js @@ -0,0 +1,2 @@ +/*! 
highlight.js v9.8.0 | BSD3 License | git.io/hljslicense */ +[minified highlight.js bundle elided; it registers the bash and ocaml syntax-highlighting languages]
\ No newline at end of file diff --git a/tags/UI b/tags/UI new file mode 100644 index 0000000..bc415db --- /dev/null +++ b/tags/UI @@ -0,0 +1,3 @@ + +full stack engineer
\ No newline at end of file diff --git a/tags/background b/tags/background new file mode 100644 index 0000000..6dd0491 --- /dev/null +++ b/tags/background @@ -0,0 +1,6 @@ + +full stack engineer
\ No newline at end of file diff --git a/tags/bitcoin b/tags/bitcoin new file mode 100644 index 0000000..2a0a0d2 --- /dev/null +++ b/tags/bitcoin @@ -0,0 +1,3 @@ + +full stack engineer
\ No newline at end of file diff --git a/tags/deployment b/tags/deployment new file mode 100644 index 0000000..3d44f25 --- /dev/null +++ b/tags/deployment @@ -0,0 +1,8 @@ + +full stack engineer
\ No newline at end of file diff --git a/tags/future b/tags/future new file mode 100644 index 0000000..768b449 --- /dev/null +++ b/tags/future @@ -0,0 +1,3 @@ + +full stack engineer
\ No newline at end of file diff --git a/tags/http b/tags/http new file mode 100644 index 0000000..0472ed9 --- /dev/null +++ b/tags/http @@ -0,0 +1,3 @@ + +full stack engineer
\ No newline at end of file diff --git a/tags/logging b/tags/logging new file mode 100644 index 0000000..4ae91a1 --- /dev/null +++ b/tags/logging @@ -0,0 +1,3 @@ + +full stack engineer
\ No newline at end of file diff --git a/tags/mirageos b/tags/mirageos new file mode 100644 index 0000000..2a501b9 --- /dev/null +++ b/tags/mirageos @@ -0,0 +1,23 @@ + +full stack engineer

Written by hannes

Mirroring the opam repository and all tarballs: Re-developing an opam cache from scratch, as a MirageOS unikernel

All your metrics belong to influx: How to monitor your MirageOS unikernel with albatross and monitoring-experiments

Deploying binary MirageOS unikernels: Finally, we provide reproducible binary MirageOS unikernels together with packages to reproduce them and set up your own builder

Cryptography updates in OCaml and MirageOS: Elliptic curves (ECDSA/ECDH) are supported in a maintainable and secure way.

The road ahead for MirageOS in 2021: Home office, MirageOS unikernels, 2020 recap, 2021 tbd

Traceroute: A MirageOS unikernel which traces the path between itself and a remote host.

Deploying authoritative OCaml-DNS servers as MirageOS unikernels: A tutorial on how to deploy authoritative name servers, use Let's Encrypt, and update entries from Unix services.

Reproducible MirageOS unikernel builds: MirageOS unikernels are reproducible :)

X509 0.7: Five years after the initial ocaml-x509 release, it has been reworked and is used more widely

Summer 2019: Bringing MirageOS into production, take IV: monitoring, CalDAV, DNS

The Bitcoin Piñata - no candy for you: More than three years ago we launched our Bitcoin Piñata as a transparent security bait. It is still up and running!

My 2018 contains robur and starts with re-engineering DNS: The new year brings new possibilities and a new environment. I've been working on the most widely deployed key-value store, the domain name system. Primary and secondary name services are available, including dynamic updates, notify, and TSIG authentication.

Albatross - provisioning, deploying, managing, and monitoring virtual machines: all we need is X.509

Exfiltrating log data using syslog: sometimes preservation of data is useful

Re-engineering ARP: If you want it as you like, you have to do it yourself

Minimising the virtual machine monitor: MirageOS solo5 multiboot native on bhyve

Counting Bytes: looking into dependencies and their sizes

Configuration DSL step-by-step: how to actually configure the system

Catch the bug, walking through the stack: 10BTC could've been yours

Fitting the things together: building a simple website

Operating systems: Operating systems and MirageOS
\ No newline at end of file diff --git a/tags/monitoring b/tags/monitoring new file mode 100644 index 0000000..dae7490 --- /dev/null +++ b/tags/monitoring @@ -0,0 +1,4 @@ + +full stack engineer
\ No newline at end of file diff --git a/tags/myself b/tags/myself new file mode 100644 index 0000000..e838f15 --- /dev/null +++ b/tags/myself @@ -0,0 +1,3 @@ + +full stack engineer
\ No newline at end of file diff --git a/tags/opam b/tags/opam new file mode 100644 index 0000000..947c6e8 --- /dev/null +++ b/tags/opam @@ -0,0 +1,3 @@ + +full stack engineer
\ No newline at end of file diff --git a/tags/operating system b/tags/operating system new file mode 100644 index 0000000..1b7a6b0 --- /dev/null +++ b/tags/operating system @@ -0,0 +1,3 @@ + +full stack engineer
\ No newline at end of file diff --git a/tags/overview b/tags/overview new file mode 100644 index 0000000..3f49c41 --- /dev/null +++ b/tags/overview @@ -0,0 +1,6 @@ + +full stack engineer
\ No newline at end of file diff --git a/tags/package signing b/tags/package signing new file mode 100644 index 0000000..2dc2230 --- /dev/null +++ b/tags/package signing @@ -0,0 +1,6 @@ + +full stack engineer
\ No newline at end of file diff --git a/tags/protocol b/tags/protocol new file mode 100644 index 0000000..23984a8 --- /dev/null +++ b/tags/protocol @@ -0,0 +1,8 @@ + +full stack engineer
\ No newline at end of file diff --git a/tags/provisioning b/tags/provisioning new file mode 100644 index 0000000..5c492c0 --- /dev/null +++ b/tags/provisioning @@ -0,0 +1,3 @@ + +full stack engineer
\ No newline at end of file diff --git a/tags/security b/tags/security new file mode 100644 index 0000000..7858dc1 --- /dev/null +++ b/tags/security @@ -0,0 +1,12 @@ + +full stack engineer
\ No newline at end of file diff --git a/tags/tls b/tags/tls new file mode 100644 index 0000000..3d19540 --- /dev/null +++ b/tags/tls @@ -0,0 +1,6 @@ + +full stack engineer
\ No newline at end of file