Compare commits


No commits in common. "main" and "yocaml2" have entirely different histories.

26 changed files with 117 additions and 1508 deletions

View file

@@ -10,7 +10,7 @@ $ git clone git@git.robur.coop:robur/blog.robur.coop
 $ cd blog.robur.coop/
 $ opam pin add -yn .
 $ opam install --deps-only blogger
-$ dune exec bin/watch.exe --
+$ dune exec src/watch.exe --
 ```
 A little server run on `http://localhost:8000`.
@@ -27,7 +27,7 @@ You can specify an `author` (with its `name`, `email` and `link`) or not. By
 default, we use `team@robur.coop`. If everything looks good, you can generate
 via the `blogger.exe` tool the generated website via:
 ```shell-session
-$ dune exec bin/push.exe -- push \
+$ dune exec src/push.exe -- push \
   -r git@git.robur.coop:robur/blog.robur.coop.git#gh-pages \
   --host https://blog.robur.coop
   [--name "The Robur team"] \
@@ -38,7 +38,7 @@ An SSH communication will starts. If you already registered your private key
 with `ssh-agent` and your `.ssh/config` is configured to take this one if you
 communicate with with `git@git.robur.coop`, everything will be smooth! Et voilà!
 At the end, an HTTP request will be send to `https://blog.robur.coop` (via
-Forgejo) to update the unikernel with the last version of the blog.
+Gitea) to update the unikernel with the last version of the blog.
 You can also use the `update.sh` script to update the blog with the builder user
 on the server machine.

View file

@@ -236,7 +236,7 @@ This work was funded by [the EU NGI Assure Fund through NLnet](https://nlnet.nl/
 In my opinion, this shows that funding one open source project can have a positive impact on other open source projects, too.
 [robur]: https://robur.coop/
-[miragevpn-server]: miragevpn-server.html
+[miragevpn-server]: https://blog.robur.coop/articles/miragevpn-server.html
 [contact]: https://reyn.ir/contact.html
 [^openvpn-tls]: This is not always the case. It is possible to use static shared secret keys, but it is mostly considered deprecated.

View file

@@ -1,221 +0,0 @@
---
date: 2024-10-29
title: Postes, télégraphes et téléphones, next steps
description: An update of our email stack
tags:
- SMTP
- emails
- mailing-lists
author:
name: Romain Calascibetta
email: romain.calascibetta@gmail.com
link: https://blog.osau.re/
breaks: false
---
As you know from [our article on Robur's
finances](https://blog.robur.coop/articles/finances.html), we've just received
[funding for our email project](https://nlnet.nl/project/PTT). This project
started when I was doing my internship in Cambridge and it's great to see that
it's been able to evolve over time and remain functional. This article will
introduce you to the latest changes to [our PTT
project](https://github.com/mirage/ptt) and how far we've got towards providing
an OCaml mailing list service.
## A Git repository or a simple block device as a database?
One issue that came up quickly in our latest experiments with our SMTP stack was
the database of users with an email address. Since we had decided to break
down the various stages of an email submission to offer simple unikernels, we
ended up having to deploy 4 unikernels to have a service that worked.
- a unikernel for authentication
- a unikernel DKIM-signing the incoming email
- one unikernel as primary DNS server
- one unikernel sending the signed email to its real destination
And we're only talking here about the submission of an email; the reception is a
separate pipe.
The problem with such an architecture is that some unikernels need to have the
same data: the users. In this case, the first unikernel needs to know the user's
password in order to verify authentication. The final unikernel needs to know
the real destinations of the users.
Let's take the example of two users: foo@robur.coop and bar@robur.coop. The
first points to hannes@foo.org and the second to reynir@example.com.
If Hannes wants to send a message to bar@robur.coop under the identity of
foo@robur.coop, he will need to authenticate himself to our first unikernel.
This first unikernel must therefore:
1) check that the user `foo` exists
2) check that the hashed password used by Hannes is the same as the one in the database
Next, the email will be signed by our second unikernel. It will then forward the
email to the last unikernel, which will do the actual translation of the
recipients and DNS resolution. In other words:
1) it will see that one (the only) recipient is bar@robur.coop
2) check that bar@robur.coop exists and obtain its real address
3) it will obtain reynir@example.com and perform DNS resolution on
`example.com` to find out the email server for this domain
4) finally send the email signed by foo@robur.coop to reynir@example.com!
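To make the translation step concrete, here is a small sketch (ours, not Elit's or PTT's actual code; the record type and the in-memory table are made up for illustration) of expanding a local recipient into its real mailboxes:
```OCaml
(* Hypothetical user database mirroring the example above: foo@robur.coop
   forwards to hannes@foo.org, bar@robur.coop forwards to reynir@example.com. *)
type user = { name : string; mailboxes : string list }

let users =
  [ { name = "foo"; mailboxes = [ "hannes@foo.org" ] }
  ; { name = "bar"; mailboxes = [ "reynir@example.com" ] } ]

(* expand "bar@robur.coop" = Some [ "reynir@example.com" ] *)
let expand recipient =
  match String.split_on_char '@' recipient with
  | [ local; "robur.coop" ] ->
    List.find_opt (fun u -> String.equal u.name local) users
    |> Option.map (fun u -> u.mailboxes)
  | _ -> None
```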
So the first and last unikernels need to have the same information about our
users. One for the passwords, the second for the real email addresses.
But as you know, we're talking about unikernels that exist independently of each
other. What's more, they can't share files and the possibility of them sharing
block-devices remains an open question (and a complex one where parallel access
may be involved). In short, the only way to synchronise these unikernels in
relation to common data is with a Git repository.
[Git][git-kv] has the advantage of being widely used for our unikernels
([primary-git][primary-git], [pasteur][pasteur], [unipi][unipi] and
[contruno][contruno]). Another advantage is that you can track changes, modify
files and notify the unikernel to update itself (using nsupdate, a simple ping
or an http request to the unikernel).
The problem is that this requires certain skills. Even if it's simple to set
up a Git server and then deploy our unikernels, we can restructure our
architecture and simplify the deployment of an SMTP stack!
## Elit and OneFFS
We have therefore decided to merge the email exchange service and email
submission into a single unikernel, so that only one unikernel needs access to
the user information.
So we decided to use [OneFFS][oneffs] as the file system for our database,
which will be a plain JSON file. This is perhaps one of the advantages of
MirageOS, which is that you can decide exactly what you need to implement
specific objectives.
In this case, those with experience of Postfix, LDAP or MariaDB could confirm
that configuring an email service should be simpler than implementing a
multitude of pipes between different applications and authentication methods.
The JSON file is therefore very simple and so is the creation of an OneFFS
image:
```sh
$ cat >database.json<<EOF
> [ { "name": "din"
> , "password": "xxxxxx"
> , "mailboxes": [ "romain.calascibetta@gmail.com" ] } ]
> EOF
$ opam install oneffs
$ oneffs create -i database.json -o database.img
```
All you have to do is register this image as a block with [albatross][albatross] and launch
our Elit unikernel with this block-device.
```sh
$ albatross-client create-block --data=database.img database 1024
$ albatross-client create --net=service:br0 --block=database:database \
elit elit.hvt \
--arg=...
```
At this stage, and if we add our unikernel signing incoming emails, we have more
or less the same thing as what I've described in [my previous articles][smtp_1] on
[deploying][smtp_2] an [email service][smtp_3].
## Multiplex receiving & sending emails
The PTT project is a toolkit for implementing SMTP servers. It gives developers
the choice of implementing their logic as they see fit:
* sign an email
* resolve destinations according to a database
* check SPF information
* annotate the email as spam or not
* etc.
Previously, PTT was split into 2 parts:
1) management of incoming clients/emails
2) the logic to be applied to incoming emails and their delivery
The second point was becoming increasingly complex, however, and errors in
sending emails are legion (DMARC non-alignment, the email is too big for the
destination, the destination doesn't exist, etc.). All the more so since, up to
now, PTT could only report these errors via the logs...
Hannes immediately mentioned the possibility of separating the logic of the
unikernel from the delivery. This will allow us to deal with temporary failures
(greylisting) as well. So a fundamental change was made:
- improve the [sendmail][sendmail] and `sendmail-lwt` packages (as well as proposing
`sendmail-miou`!) when sending or submitting an email
- improve PTT so that there are now 3 distinct jobs: receiving, what to do with
incoming emails and sending emails
![SMTP](../images/smtp.jpg)
This finally allows us to describe a clearer error management policy that is
independent of what we want to do with incoming emails. At this stage, we can
look for the `Return-Path` in emails that we haven't managed to send and notify
the senders!
All this is still in the experimental stage and practical cases are needed to
observe how we should handle errors and how others do.
## Insights & Next goals
We're already starting to have a bit of fun with email and we can start sending
and receiving emails right away.
We're also already seeing hacking attempts on our unikernel:
- people trying to authenticate themselves without `STARTTLS` (or with it,
depending on how clever the bot is)
- people trying to send emails as non-existent users in our database
- we're also seeing content that has nothing to do with SMTP
Above all, this shows that, very early on, bots try to usurp the identity linked
to your server (in our case, osau.re) in order to send spam, authenticate
themselves or simply send stuff and observe what happens. In all the cases
mentioned, Elit (and PTT) reacts well: it simply cuts off the connection.
We were also able to observe how services such as gmail work. In addition, for
the purposes of a mailing list, email forwarding distorts DMARC verification
(specifically, SPF verification). The case is very simple:
foo@gmail.com tries to reply to robur@osau.re. robur@osau.re is a mailing list
to several addresses (one of them is bar@gmail.com). The unikernel will receive
the email and send it to bar@gmail.com. The problem is the alignment between
the `From` field (which corresponds to foo@gmail.com) and our osau.re server.
From gmail.com's point of view, there is a misalignment between these two
pieces of information and it therefore refuses to receive the email.
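To make the alignment check concrete, here is a minimal sketch (ours, not the PTT implementation) of a relaxed SPF alignment test; it naively assumes two-label organizational domains and ignores the public suffix list:
```OCaml
(* The organizational domain is approximated here as the last two labels;
   real code would consult the public suffix list. *)
let organizational_domain domain =
  match List.rev (String.split_on_char '.' domain) with
  | tld :: sld :: _ -> sld ^ "." ^ tld
  | _ -> domain

(* Relaxed alignment: the RFC5322.From domain and the SPF-validated
   (envelope) domain must share the same organizational domain. *)
let spf_aligned ~from_domain ~envelope_domain =
  String.equal
    (organizational_domain from_domain)
    (organizational_domain envelope_domain)

let () =
  (* foo@gmail.com relayed by the mailing list at osau.re: misaligned. *)
  assert (not (spf_aligned ~from_domain:"gmail.com" ~envelope_domain:"osau.re"));
  assert (spf_aligned ~from_domain:"mail.gmail.com" ~envelope_domain:"gmail.com")
```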
This is where our next objectives come in:
- finish our DMARC implementation
- implement ARC so that our server notifies us that, on our side, the DMARC
check went well and that gmail.com should trust us on this.
There is another way of solving the problem, perhaps a little more problematic:
modifying the incoming email, in particular the `From` field. Although this
could be done quite simply with [mrmime][mrmime], it's better to concentrate on
DMARC and ARC so that we can send our emails as they are and never alter them
(especially as this will invalidate previous DKIM signatures!).
## Conclusion
It's always satisfying to see your projects working more or less correctly.
This article will surely be the start of a series on the intricacies of email
and the difficulty of deploying such a service at home.
We hope that this NLnet-funded work will enable us to replace our current email
system with unikernels. We're already past the stage where we can, more or less
(without DMARC checking), send emails to each other, which is a big step!
So follow our work on our blog and if you like what we're producing (which
involves a whole bunch of protocols and formats - much more than just SMTP), you
can make [a donation here](https://robur.coop/Donate)!
[mrmime]: https://github.com/mirage/mrmime
[smtp_1]: https://blog.osau.re/articles/smtp_1.html
[smtp_2]: https://blog.osau.re/articles/smtp_2.html
[smtp_3]: https://blog.osau.re/articles/smtp_3.html
[oneffs]: https://github.com/robur-coop/oneffs
[albatross]: https://github.com/robur-coop/albatross
[git-kv]: https://github.com/robur-coop/git-kv
[primary-git]: https://github.com/robur-coop/dns-primary-git/
[contruno]: https://github.com/dinosaure/contruno
[pasteur]: https://github.com/dinosaure/pasteur
[unipi]: https://github.com/robur-coop/unipi
[sendmail]: https://github.com/mirage/colombe

View file

@@ -1,538 +0,0 @@
---
date: 2024-10-22
title: Runtime arguments in MirageOS
description:
The history of runtime arguments to a MirageOS unikernel
tags:
- OCaml
- MirageOS
author:
name: Hannes Mehnert
email: hannes@mehnert.org
link: https://hannes.robur.coop
---
TL;DR: Passing runtime arguments around is tricky, and prone to change every other month.
## Motivation
Sometimes, as a unikernel developer and also as an operator, it's nice to have
some runtime arguments passed to a unikernel. Now, if you're into OCaml,
command-line parsing - together with error messages, man page generation, ... -
can be done by the amazing [cmdliner](https://erratique.ch/software/cmdliner)
package from Daniel Bünzli.
MirageOS uses cmdliner for command line argument passing. This also enabled
us from the early days to have nice man pages for unikernels (see
`my-unikernel-binary --help`). There are two kinds
of arguments: those at configuration time (`mirage configure`), such as the
target to compile for, and those at runtime - when the unikernel is executed.
In Mirage 4.8.1 and 4.8.0 (released October 2024) there have been some changes
to command-line arguments, which were motivated by 4.5.0 (released April 2024)
and user feedback.
First of all, our current way to pass a custom runtime argument to a unikernel
(`unikernel.ml`):
```OCaml
open Lwt.Infix
open Cmdliner

let hello =
  let doc = Arg.info ~doc:"How to say hello." [ "hello" ] in
  let term = Arg.(value & opt string "Hello World!" doc) in
  Mirage_runtime.register_arg term

module Hello (Time : Mirage_time.S) = struct
  let start _time =
    let rec loop = function
      | 0 -> Lwt.return_unit
      | n ->
        Logs.info (fun f -> f "%s" (hello ()));
        Time.sleep_ns (Duration.of_sec 1) >>= fun () -> loop (n - 1)
    in
    loop 4
end
```
We define the [Cmdliner.Term.t](https://erratique.ch/software/cmdliner/doc/Cmdliner/Term/index.html#type-t)
in line 6 (`let term = ..`) - which provides documentation ("How to say hello."), the option to
use (`["hello"]` - which is then translated to `--hello=`), that it is optional,
of type `string` (cmdliner allows you to convert the incoming strings to more
complex (or more narrow) data types, with decent error handling).
The defined argument is directly passed to [`Mirage_runtime.register_arg`](https://ocaml.org/p/mirage-runtime/4.8.1/doc/Mirage_runtime/index.html#val-register_arg),
(in line 7) so our binding `hello` is of type `unit -> string`.
In line 14, the value of the runtime argument is used (`hello ()`) for printing
a log message.
The nice property is that it is all local in `unikernel.ml`, there are no other
parts involved. It is just a bunch of API calls. The downside is that `hello ()`
should only be evaluated after the function `start` was called - since the
`Mirage_runtime` needs to parse and fill in the command line arguments. If you
call `hello ()` earlier, you'll get an exception "Called too early. Please delay
this call to after the start function of the unikernel.". Also, since
Mirage_runtime needs to collect and evaluate the command line arguments, the
`Mirage_runtime.register_arg` may only be called at top-level, otherwise you'll
get another exception "The function register_arg was called to late. Please call
register_arg before the start function is executed (e.g. in a top-level binding).".
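For illustration (this snippet is ours, reusing the `hello` binding registered above and omitting the surrounding functor), the difference between a too-early evaluation and a correct one looks like this:
```OCaml
(* Too early: evaluated when the module is initialised, before the command
   line has been parsed - this raises the "Called too early" exception. *)
let greeting = hello ()

(* Fine: evaluated inside the start function, after argument parsing. *)
let start _time =
  Logs.info (fun f -> f "%s" (hello ()));
  Lwt.return_unit
```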
Another advantage is, having it all in unikernel.ml means adding and removing
arguments doesn't need another execution of `mirage configure`. Also, any
type can be used that the unikernel depends on - the config.ml is compiled only
with a small set of dependencies (mirage itself) - and we don't want to impose a
large dependency cone for mirage just because someone may like to use
X509.Key_type.t as argument type.
Earlier, before mirage 4.5.0, we had runtime and configure arguments mixed
together. And code was generated when `mirage configure` was executed to
deal with these arguments. The downsides included: we needed serialization for
all command-line arguments (at configure time you could fill the argument, which
was then serialized, and deserialized at runtime and used unless the argument
was provided explicitly); they had to appear in `config.ml` (which also means
changing any would need an execution of `mirage configure`); and, since code was
generated, potential errors were in code that the developer didn't write (though we had
some `__POS__` arguments to provide error locations in the developer code).
Related recent changes are:
- in mirage 4.8.1, the runtime arguments to configure the OCaml runtime system
(such as GC settings, randomization of hashtables, recording of backtraces)
are now provided using the [cmdliner-stdlib](https://ocaml.org/p/cmdliner-stdlib)
package.
- in mirage 4.8.0, for git, dns-client, and happy-eyeballs devices the optional
arguments are generated by default - so they are always available and don't
need to be manually done by the unikernel developer.
Let's dive a bit deeper into the history.
## History
MirageOS has, since the early stages (I'll go back to 2.7.0 (February 2016),
where functoria was introduced), used an embedded fork of `cmdliner` to handle
command line arguments.
[![Animated changes to the hello world unikernel](https://asciinema.org/a/ruHoadi2oZGOzgzMKk5ZYoFgf.svg)](https://asciinema.org/a/ruHoadi2oZGOzgzMKk5ZYoFgf)
### February 2016 (Mirage 2.7.0)
When looking into the MirageOS 2.x series, here's the code for our hello world
unikernel:
`config.ml`
```OCaml
open Mirage
let hello =
let doc = Key.Arg.info ~doc:"How to say hello." ["hello"] in
Key.(create "hello" Arg.(opt string "Hello World!" doc))
let main =
foreign
~keys:[Key.abstract hello]
"Unikernel.Hello" (console @-> job)
let () = register "hello-key" [main $ default_console]
```
and `unikernel.ml`
```OCaml
open Lwt.Infix
module Hello (C: V1_LWT.CONSOLE) = struct
let start c =
let rec loop = function
| 0 -> Lwt.return_unit
| n ->
C.log c (Key_gen.hello ());
OS.Time.sleep 1.0 >>= fun () ->
loop (n-1)
in
loop 4
end
```
As you can see, the cmdliner term was provided in `config.ml`, and in
`unikernel.ml` the expression `Key_gen.hello ()` was used - `Key_gen` was
a module generated by the `mirage configure` invocation.
You can as well see that the term was wrapped in `Key.create "hello"` - where
this string was used as the identifier for the code generation.
As mentioned above, a change needed to be done in `config.ml` and a
`mirage configure` to take effect.
### July 2016 (Mirage 2.9.1)
The `OS.Time` was functorized with a `Time` functor:
`config.ml`
```OCaml
open Mirage
let hello =
let doc = Key.Arg.info ~doc:"How to say hello." ["hello"] in
Key.(create "hello" Arg.(opt string "Hello World!" doc))
let main =
foreign
~keys:[Key.abstract hello]
"Unikernel.Hello" (console @-> time @-> job)
let () = register "hello-key" [main $ default_console $ default_time]
```
and `unikernel.ml`
```OCaml
open Lwt.Infix
module Hello (C: V1_LWT.CONSOLE) (Time : V1_LWT.TIME) = struct
let start c _time =
let rec loop = function
| 0 -> Lwt.return_unit
| n ->
C.log c (Key_gen.hello ());
Time.sleep 1.0 >>= fun () ->
loop (n-1)
in
loop 4
end
```
### February 2017 (Mirage pre3)
The `Time` signature changed, now the `sleep_ns` function sleeps in nanoseconds.
This avoids floating point numbers at the core of MirageOS. The helper package
`duration` is used to avoid manual conversions.
Also, the console signature changed - and `log` is now inside the Lwt monad.
`config.ml`
```OCaml
open Mirage
let hello =
let doc = Key.Arg.info ~doc:"How to say hello." ["hello"] in
Key.(create "hello" Arg.(opt string "Hello World!" doc))
let main =
foreign
~keys:[Key.abstract hello]
~packages:[package "duration"]
"Unikernel.Hello" (console @-> time @-> job)
let () = register "hello-key" [main $ default_console $ default_time]
```
and `unikernel.ml`
```OCaml
open Lwt.Infix
module Hello (C: V1_LWT.CONSOLE) (Time : V1_LWT.TIME) = struct
let start c _time =
let rec loop = function
| 0 -> Lwt.return_unit
| n ->
C.log c (Key_gen.hello ()) >>= fun () ->
Time.sleep_ns (Duration.of_sec 1) >>= fun () ->
loop (n-1)
in
loop 4
end
```
### February 2017 (Mirage 3)
Another big change is that now console is not used anymore, but
[logs](https://erratique.ch/software/logs).
`config.ml`
```OCaml
open Mirage
let hello =
let doc = Key.Arg.info ~doc:"How to say hello." ["hello"] in
Key.(create "hello" Arg.(opt string "Hello World!" doc))
let main =
foreign
~keys:[Key.abstract hello]
~packages:[package "duration"]
"Unikernel.Hello" (time @-> job)
let () = register "hello-key" [main $ default_time]
```
and `unikernel.ml`
```OCaml
open Lwt.Infix
module Hello (Time : Mirage_time_lwt.S) = struct
let start _time =
let rec loop = function
| 0 -> Lwt.return_unit
| n ->
Logs.info (fun f -> f "%s" (Key_gen.hello ()));
Time.sleep_ns (Duration.of_sec 1) >>= fun () ->
loop (n-1)
in
loop 4
end
```
### January 2020 (Mirage 3.7.0)
The `_lwt` is dropped from the interfaces (we used to have Mirage_time and
Mirage_time_lwt, where the latter was instantiating the former with concrete
types: `type 'a io = 'a Lwt.t` and `type buffer = Cstruct.t`). In a cleanup
session we dropped the `_lwt` interfaces and opam packages. The reasoning was
that when we get around to moving to another IO system, we'll move everything
at once anyway. No need to have `lwt` and something else (`async`, or nowadays
`miou` or `eio`) in a single unikernel.
`config.ml`
```OCaml
open Mirage
let hello =
let doc = Key.Arg.info ~doc:"How to say hello." ["hello"] in
Key.(create "hello" Arg.(opt string "Hello World!" doc))
let main =
foreign
~keys:[Key.abstract hello]
~packages:[package "duration"]
"Unikernel.Hello" (time @-> job)
let () = register "hello-key" [main $ default_time]
```
and `unikernel.ml`
```OCaml
open Lwt.Infix
module Hello (Time : Mirage_time.S) = struct
let start _time =
let rec loop = function
| 0 -> Lwt.return_unit
| n ->
Logs.info (fun f -> f "%s" (Key_gen.hello ()));
Time.sleep_ns (Duration.of_sec 1) >>= fun () ->
loop (n-1)
in
loop 4
end
```
### October 2021 (Mirage 3.10)
Some renamings to fix warnings. Only `config.ml` changed.
`config.ml`
```OCaml
open Mirage
let hello =
let doc = Key.Arg.info ~doc:"How to say hello." ["hello"] in
Key.(create "hello" Arg.(opt string "Hello World!" doc))
let main =
main
~keys:[key hello]
~packages:[package "duration"]
"Unikernel.Hello" (time @-> job)
let () = register "hello-key" [main $ default_time]
```
and `unikernel.ml`
```OCaml
open Lwt.Infix
module Hello (Time : Mirage_time.S) = struct
let start _time =
let rec loop = function
| 0 -> Lwt.return_unit
| n ->
Logs.info (fun f -> f "%s" (Key_gen.hello ()));
Time.sleep_ns (Duration.of_sec 1) >>= fun () ->
loop (n-1)
in
loop 4
end
```
### June 2023 (Mirage 4.4)
The argument was moved to runtime.
`config.ml`
```OCaml
open Mirage
let hello =
let doc = Key.Arg.info ~doc:"How to say hello." ["hello"] in
Key.(create "hello" Arg.(opt ~stage:`Run string "Hello World!" doc))
let main =
main
~keys:[key hello]
~packages:[package "duration"]
"Unikernel.Hello" (time @-> job)
let () = register "hello-key" [main $ default_time]
```
and `unikernel.ml`
```OCaml
open Lwt.Infix
module Hello (Time : Mirage_time.S) = struct
let start _time =
let rec loop = function
| 0 -> Lwt.return_unit
| n ->
Logs.info (fun f -> f "%s" (Key_gen.hello ()));
Time.sleep_ns (Duration.of_sec 1) >>= fun () ->
loop (n-1)
in
loop 4
end
```
### March 2024 (Mirage 4.5)
The runtime argument is in `config.ml`, referring to the argument as a string
("Unikernel.hello"), and it is passed to the `start` function as an argument.
`config.ml`
```OCaml
open Mirage
let runtime_args = [ runtime_arg ~pos:__POS__ "Unikernel.hello" ]
let main =
main
~runtime_args
~packages:[package "duration"]
"Unikernel.Hello" (time @-> job)
let () = register "hello-key" [main $ default_time]
```
and `unikernel.ml`
```OCaml
open Lwt.Infix
open Cmdliner
let hello =
let doc = Arg.info ~doc:"How to say hello." [ "hello" ] in
Arg.(value & opt string "Hello World!" doc)
module Hello (Time : Mirage_time.S) = struct
let start _time hello =
let rec loop = function
| 0 -> Lwt.return_unit
| n ->
Logs.info (fun f -> f "%s" hello);
Time.sleep_ns (Duration.of_sec 1) >>= fun () ->
loop (n-1)
in
loop 4
end
```
### October 2024 (Mirage 4.8)
Again, moved out of `config.ml`.
`config.ml`
```OCaml
open Mirage
let main =
main
~packages:[package "duration"]
"Unikernel.Hello" (time @-> job)
let () = register "hello-key" [main $ default_time]
```
and `unikernel.ml`
```OCaml
open Lwt.Infix
open Cmdliner
let hello =
let doc = Arg.info ~doc:"How to say hello." [ "hello" ] in
Mirage_runtime.register_arg Arg.(value & opt string "Hello World!" doc)
module Hello (Time : Mirage_time.S) = struct
let start _time =
let rec loop = function
| 0 -> Lwt.return_unit
| n ->
Logs.info (fun f -> f "%s" (hello ()));
Time.sleep_ns (Duration.of_sec 1) >>= fun () ->
loop (n-1)
in
loop 4
end
```
### 2024 (Not yet released)
This is the future with time defunctorized. Read more in the [discussion](https://github.com/mirage/mirage/issues/1513).
To delay the start function, a `dep` of `noop` is introduced.
`config.ml`
```OCaml
open Mirage
let main =
main
~packages:[package "duration"]
~dep:[dep noop]
"Unikernel" job
let () = register "hello-key" [main]
```
and `unikernel.ml`
```OCaml
open Lwt.Infix
open Cmdliner
let hello =
let doc = Arg.info ~doc:"How to say hello." [ "hello" ] in
Mirage_runtime.register_arg Arg.(value & opt string "Hello World!" doc)
let start () =
let rec loop = function
| 0 -> Lwt.return_unit
| n ->
Logs.info (fun f -> f "%s" (hello ()));
Mirage_timer.sleep_ns (Duration.of_sec 1) >>= fun () ->
loop (n-1)
in
loop 4
```
## Conclusion
The history of hello world shows that over time we slowly improve the developer
experience and remove the boilerplate needed to get MirageOS unikernels up
and running. This is work spanning over a decade, including lots of other (here
invisible) improvements to the mirage utility.
Our current goal is to minimize the code generated by mirage, since code
generation has lots of issues (e.g. error locations, naming, binary size). It
is a long journey. At the same time, we are working on improving the performance
of MirageOS unikernels, developing unikernels that are useful in the real
world ([VPN endpoint](https://github.com/robur-coop/miragevpn), [DNSmasq replacement](https://github.com/robur-coop/dnsvizor), ...), and also [simplifying the
deployment of MirageOS unikernels](https://github.com/robur-coop/mollymawk).
If you're interested in MirageOS and using it in your domain, don't hesitate
to reach out to us (via eMail: team@robur.coop) - we're keen to deploy MirageOS
and find more domains where it is useful. If you can spare a dime, we're a
registered non-profit in Germany - and can provide tax-deductible receipts for
donations ([more information](https://robur.coop/Donate)).

View file

@@ -1,107 +0,0 @@
---
date: 2024-10-25
title: "Meet DNSvizor: run your own DHCP and DNS MirageOS unikernel"
description:
The NGI-funded DNSvizor provides core network services on your network; DNS resolution and DHCP.
tags:
- OCaml
- MirageOS
- DNSvizor
author:
name: Hannes Mehnert
email: hannes@mehnert.org
link: https://hannes.robur.coop
---
TL;DR: We got [NGI0 Entrust (via NLnet)](https://nlnet.nl/entrust/) funding for developing
[DNSvizor](https://nlnet.nl/project/DNSvizor/) - a DNS resolver and
DHCP server. Please help us by [sharing with us your dnsmasq
configuration](https://github.com/robur-coop/dnsvizor/issues/new), so we can
prioritize the configuration options to support.
## Introduction
The [dynamic host configuration protocol (DHCP)](https://en.wikipedia.org/wiki/Dynamic_Host_Configuration_Protocol)
is fundamental in today's Internet and local networks. It usually runs on your
router (or as a dedicated independent service) and automatically configures
computers that join your network (for example wireless laptops, smartphones)
with an IP address, routing information, a DNS resolver, etc. No manual
configuration is needed once your friend's smartphone has the password of your
wireless network \o/
The [domain name system (DNS)](https://en.wikipedia.org/wiki/Domain_Name_System)
is responsible for translating domain names (such as "robur.coop", "nlnet.nl")
to IP addresses (such as 193.30.40.138 or 2a0f:7cc7:7cc7:7c40::138) - used by
computers to talk to each other. Humans can remember domain names instead of
memorizing IP addresses. Computers then use DNS to translate these domain names
to IP addresses to communicate with. DNS is a hierarchical, distributed,
fault-tolerant service.
These two protocols are fundamental to today's Internet: without them it would
be much harder for humans to use it.
## DNSvizor
We at [robur](https://robur.coop) got funding (from
[NGI0 Entrust via NLnet](https://nlnet.nl/project/DNSvizor/)) to continue our work on
[DNSvizor](https://github.com/robur-coop/dnsvizor) - a
[MirageOS unikernel](https://mirageos.org) that provides DNS resolution and
DHCP service for a network. This is fully implemented in
[OCaml](https://ocaml.org).
Already at our [MirageOS retreats](https://retreat.mirageos.org) we deployed
such a unikernel, to test our [DHCP implementation](https://github.com/mirage/charrua)
and our [DNS resolver](https://github.com/mirage/ocaml-dns) - and found and
fixed issues on-site. At the retreats we have a very limited Internet uplink,
thus caching DNS queries and answers is great for reducing the load on the
uplink.
Thanks to the funding we received, we'll be able to work on improving the
performance, but also to finish our DNSSEC implementation, provide DNS-over-TLS
and DNS-over-HTTPS services, and also a web interface. DNSvizor will use the
existing [dnsmasq](https://thekelleys.org.uk/dnsmasq/doc.html) configuration
syntax, and provide lots of features from dnsmasq, and also provide features
such as block lists from [pi-hole](https://pi-hole.net/).
We are at a point where the [basic unikernel (our MVP)](https://github.com/robur-coop/dnsvizor)
- providing DNS and DHCP services - is ready, and we provide
[reproducible binary builds](https://builds.robur.coop/job/dnsvizor). Phew. This
means that the first step is done. The `--dhcp-range` from dnsmasq is already
being parsed.
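For reference (this example is ours, not taken from the DNSvizor documentation), a typical dnsmasq `dhcp-range` directive looks like this, with placeholder addresses:
```
dhcp-range=192.168.1.50,192.168.1.150,12h
```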
We are now curious about concrete usages of dnsmasq and the configurations you use.
If you're interested in dnsvizor, please [open an issue at our repository](https://github.com/robur-coop/dnsvizor/issues/new)
with your dnsmasq configuration. This will help us to guide which parts of the configuration to prioritize.
## Usages of DNSvizor
We have several use cases for DNSvizor:
- at your home router to provide DNS resolution and DHCP service, filtering ads,
- in the datacenter auto-configuring your machine park,
- when running your unikernel swarm to auto-configure them.
The first one is where pi-hole fits in as well, and where dnsmasq is used quite
a lot. The second one is also a domain where dnsmasq is used. The third one comes
from our experience that lots of people struggle with deploying MirageOS
unikernels since they have to manually do IP configuration etc. We ourselves
also pass additional information to the unikernels, such as syslog host,
monitoring sink, X.509 certificates or host names, do some DNS provisioning, ...
With DNSvizor we will leverage the common configuration options of all
unikernels (reducing the need for boot arguments), and also go a bit further
and make deployment seamless (including adding hostnames to DNS, forwarding
from our reverse TLS proxy, etc.).
## Conclusion
[DNSvizor](https://github.com/robur-coop/dnsvizor) provides DNS resolution and
DHCP service for your network, and [already exists](https://builds.robur.coop/job/dnsvizor) :).
Please [report issues](https://github.com/robur-coop/dnsvizor/issues/) you
encounter and questions you may have. Also, if you use dnsmasq, please
[show us your configuration](https://github.com/robur-coop/dnsvizor/issues/new).
If you're interested in MirageOS and using it in your domain, don't hesitate
to reach out to us (via eMail: team@robur.coop) - we're keen to deploy MirageOS
and find more domains where it is useful. If you can
[spare a dime](https://robur.coop/Donate), we're a registered non-profit in
Germany - and can provide tax-deductible receipts in Europe.

View file

@@ -1,302 +0,0 @@
---
date: 2024-10-21
title: How has robur financially been doing since 2018?
description: How we organise as a collective, and why we're doing that.
tags:
- finances
- cooperative
author:
name: Hannes Mehnert
email: hannes@mehnert.org
link: https://hannes.robur.coop
---
Since the beginning, robur has been working on MirageOS unikernels and getting
them deployed. Due to our experience in hierarchical companies, we wanted to
create something different - a workplace without bosses and management. Instead,
we are a collective where everybody has a say on what we do, and who gets how
much money at the end of the month. This means nobody has to write reports or
meet goals - there are no KPIs involved. We strive to be a bunch of people
working together nicely on projects that we own and want to bring forward. If
we discover a lack of funding, we reach out to (potential) customers to fill our
cash register. Or reach out to people to donate money.
Since our mission is fulfilling and already complex - organising ourselves in a
hierarchy-free environment, including the payment, and work on software in a
niche market - we decided from the early days that bookkeeping and invoicing
should not be part of our collective. Especially since we want to be free in
what kind of funding we accept - donations, commercial contracts, public
funding. In the books, robur is part of the non-profit company
[Änderwerk](https://aenderwerk.de) in Germany - and friends of ours run that
company. They get a cut on each income we generate.
To be inclusive and enable everyone to participate in decisions, we are 100%
transparent in our books - every collective member has access to the financial
spreadsheets, contracts, etc. We use a needs-based payment model, so we talk
about the needs everyone has on a regular basis and adjust the salary, everyone
agreeing to all the numbers.
## 2018
We started operations in 2018. In late 2017, we got donations (in the form of
bitcoins) from friends who were convinced of our mission. This was 54,194.91 €.
So, in 2018 we started with that money, and tried to find a mission, and
generate income to sustain our salaries.
Also, already in 2017, we applied for funding from
[Prototypefund](https://prototypefund.de) on a [CalDAV server](https://prototypefund.de/project/robur-io/),
and we received the grant in early 2018. This was another 48,500 €, paid to
individuals (due to reasons, Prototype fund can't cash out to the non-profit -
this put us into some struggle, since we needed some double bookkeeping and
individuals had to dig into health care etc.).
In the second half of 2018, we also did a security audit for
[Least Authority](https://leastauthority.com/blog/audits/five-security-audits-for-the-tezos-foundation/)
(invoicing 19,600 €).
And later in 2018 we started on what is now called NetHSM with an initial
design workshop (5,000 €).
And lastly, we started to work on a grant to implement [TLS 1.3](https://datatracker.ietf.org/doc/html/rfc8446),
funded by Jane Street (via OCaml Labs Consulting). In 2018, we received 12,741.71 €.
We applied at NLnet for improving the QubesOS firewall developed in MirageOS
(without success), tried to get the IT security prize in Germany (without
success), and to DIAL OSC (without success).
| Project | Amount |
|-----------------|----------:|
| Donation | 54,194.91 |
| Prototypefund | 48,500.00 |
| Least Authority | 19,600.00 |
| TLS 1.3 | 12,741.71 |
| Nitrokey | 5,000.00 |
| __Total__ | __140,036.62__ |
## 2019
We were keen to finish the CalDAV implementation (and start a CardDAV
implementation), and received some financial support from Tarides for it
(15,000 €).
The TLS 1.3 work continued, we got in total 68,887.53 €.
We also applied to (and got funding from) Prototypefund, once with an [OpenVPN-compatible
MirageOS unikernel](https://prototypefund.de/en/project/robust-openvpn-client-with-low-use-of-resources/),
and once with [improving the QubesOS firewall developed as MirageOS unikernel](https://prototypefund.de/project/portable-firewall-fuer-qubesos/).
This means again twice 48,500 €.
We also started the implementation work of NetHSM - which still included a lot
of design work - in total the contract was over 82,500 €. In 2019, we invoiced
Nitrokey a total of 40,500 €.
We also received a total of 516.48 € as donations from sources unknown to us.
We also applied to NLnet with [DNSvizor](https://nlnet.nl/project/Robur/), and
got a grant, but due to bureaucratic reasons they couldn't transfer the money to
our non-profit (which was involved with NLnet in some EU grants), and we didn't
get any money in the end.
| Project | Amount |
|----------|----------:|
| CardDAV | 15,000.00 |
| TLS 1.3 | 68,887.53 |
| OpenVPN | 48,500.00 |
| QubesOS | 48,500.00 |
| Donation | 516.48 |
| Nitrokey | 40,500.00 |
| __Total__ | __221,904.01__ |
## 2020
In 2020, we agreed with OCaml Labs Consulting to work on maintenance of OCaml
packages in the MirageOS ecosystem. This was a contract where at the end of the
month, we reported on which PRs and issues we spent how much time. For us, this
was great to have the freedom to work on which OCaml packages we were keen to
get up to speed. In 2020, we received 45,000 € for this maintenance.
We finished the TLS 1.3 work (18,659.01 €).
We continued to work on the NetHSM project, and invoiced 55,500 €.
We received a total of 255 € in donations from sources unknown to us.
We applied at reset.tech again with DNSvizor, unfortunately without success.
We also applied at [NGI pointer](https://pointer.ngi.eu) to work on reproducible
builds for MirageOS, and a web frontend. Here we got the grant of 200,000 €,
which we worked on in 2021 and 2022.
| Project | Amount |
|-----------|----------:|
| OCLC | 45,000.00 |
| TLS 1.3 | 18,659.01 |
| Nitrokey | 55,500.00 |
| Donations | 255.00 |
| __Total__ | __119,414.01__ |
## 2021
As outlined, we worked on reproducible builds of unikernels - rethinking the way
a unikernel is configured: no more compiled-in secrets, but instead using
boot parameters. We set up the infrastructure for doing daily reproducible
builds, serving system packages via a package repository, and a
[web frontend](https://builds.robur.coop) hosting the reproducible builds.
We received in total 120,000 € from NGI Pointer in 2021.
Our work on NetHSM continued, including the introduction of elliptic curves
in mirage-crypto (using [fiat](https://github.com/mit-plv/fiat-crypto/)). The
invoices to Nitrokey summed up to 26,000 € in 2021.
We developed in a short timeframe two packages, [u2f](https://github.com/robur-coop/u2f)
and later [webauthn](https://git.robur.coop/robur/webauthn) for Skolem Labs based
on [gift economy](https://en.wikipedia.org/wiki/Gift_economy). This resulted in
donations of 18,976 €.
We agreed with [OCSF](https://ocaml-sf.org/) to work on
[conex](https://github.com/hannesm/conex), which we have not delivered yet
(lots of other things had to be cleared first: we did a security review of opam
(leading to [a security advisory](https://opam.ocaml.org/blog/opam-2-1-5-local-cache/)),
we got rid of [`extra-files`](https://discuss.ocaml.org/t/ann-opam-repository-policy-change-checksums-no-md5-and-no-extra-files)
in the opam-repository, and we [removed the weak hash md5](https://discuss.ocaml.org/t/ann-opam-repository-policy-change-checksums-no-md5-and-no-extra-files)
from the opam-repository).
| Customer | Amount |
|-------------|----------:|
| NGI Pointer | 120,000.00 |
| Nitrokey | 26,000.00 |
| Skolem | 18,976.00 |
| __Total__ | __164,976.00__ |
## 2022
We finished our NGI pointer project, and received another 80,000 €.
We also did some minor maintenance for Nitrokey, and invoiced 4,500 €.
For Tarides, we started another contract maintaining MirageOS packages (and continuing
[our TCP/IP stack](https://github.com/robur-coop/utcp)), and invoiced in
total 22,500 €.
A grant application for [bob](https://github.com/dinosaure/bob/) was rejected,
but a grant application for [MirageVPN](https://github.com/robur-coop/miragevpn)
got accepted. Both at NLnet within the EU NGI project.
| Project | Amount |
|-------------|---------:|
| NGI Pointer | 80,000.00 |
| Nitrokey | 4,500.00 |
| Tarides | 22,500.00 |
| __Total__ | __107,000.00__ |
## 2023
We finished the NetHSM project, and had a final invoice over 2,500 €.
We started a collaboration with [semgrep](https://semgrep.dev), porting some of
their Python code to OCaml. We received in total 37,500 €.
We continued the MirageOS opam package maintenance and invoiced in total
89,250 € to Tarides.
A grant application on [MirageVPN](https://nlnet.nl/project/MirageVPN/) got
accepted (NGI Assure), and we received in total 12,000 € for our work on it.
This is a continuation of our 2019 work funded by Prototypefund.
We also wrote various funding applications, including one for
[DNSvizor](https://github.com/robur-coop/dnsvizor) that was
[accepted](https://nlnet.nl/project/DNSvizor/) (NGI0 Entrust).
| Customer | Amount |
|-----------|---------:|
| Nitrokey | 2,500.00 |
| semgrep | 37,500.00 |
| Tarides | 89,250.00 |
| MirageVPN | 12,000.00 |
| __Total__ | __141,250.00__ |
## 2024
We're still in the middle of it, but so far we continued the Tarides maintenance
contract (54,937.50 €).
We also finished the MirageVPN work, and received another 45,000 €.
We had a contract with Semgrep again on porting Python code to OCaml and received 18,559.40 €.
We again worked on several successful funding applications, one on
[PTT](https://nlnet.nl/project/PTT/) (NGI Zero Core), a continuation of the
[NGI DAPSI](https://www.ngi.eu/funded_solution/ngi-dapsiproject-24/) project -
now realizing mailing lists with our SMTP stack.
We also got [MTE](https://nlnet.nl/project/MTE/) (NGI Taler) accepted.
The table below covers income until the end of September 2024.
| Project | Amount |
|-----------|----------:|
| Semgrep | 18,559.40 |
| Tarides | 62,812.50 |
| MirageVPN | 45,000.00 |
| __Total__ | __126,371.90__ |
## Total
In a single table, here's our income since robur started.
| Year | Amount |
|-------|-----------:|
| 2018 | 140,036.62 |
| 2019 | 221,904.01 |
| 2020 | 119,414.01 |
| 2021 | 164,976.00 |
| 2022 | 107,000.00 |
| 2023 | 141,250.00 |
| 2024 | 126,371.90 |
| __Total__ | __1,020,952.54__ |
![Plot of above income table](../images/finances.png)
As you can spot, it varies quite a bit. In some years we have less money
available than in other years.
## Expenses
As mentioned, the non-profit company [Änderwerk](https://aenderwerk.de) running
the bookkeeping and legal stuff (invoices, tax statements, contracts, etc.) gets
a cut of each income we produce. They are doing amazing work and are very
quick to respond to our queries.
We spend most of our income on salary. Some money we spend on travel. We also
pay monthly for our server (plus some extra for hardware, and in June 2024 a
huge amount for trying to recover data from failed SSDs).
## Conclusion
We have provided an overview of our income; over the entire time, three to five
people were working at robur. As written at the beginning, we use needs-based
payment. Our experience with this is great! It builds a lot of trust in each
other.
Our funding is diverse, coming from multiple sources - donations, commercial work,
public funding. This was our initial goal, and we're very happy that it has worked
fine over the last five years.
Taking the numbers into account, we are not paying ourselves "industry standard"
rates - but we really love what we do - and sometimes we just take some time off.
We do work on various projects that we really, really enjoy - but for which (at the
moment) no funding is available.
We are always happy to discuss how our collective operates. If you're
interested, please drop us a message.
Of course, if we receive donations, we use them wisely - mainly for working on
the currently unfunded projects (bob, albatross, miou, mollymawk - to name a few). If you
can spare a dime or two, don't hesitate to [donate](https://robur.coop/Donate).
Donations are tax-deductible in Germany (and should be in Europe) since we're a
registered non-profit.
If you're interested in MirageOS and using it in your domain, don't hesitate
to reach out to us (via eMail: team@robur.coop) so we can start to chat - we're keen to deploy MirageOS
and find more domains where it is useful.

View file

@@ -1,110 +0,0 @@
---
title: GPTar (update)
date: 2024-10-28
description: libarchive vs hybrid GUID partition table and GNU tar volume header
tags:
- OCaml
- gpt
- tar
- mbr
- persistent storage
author:
name: Reynir Björnsson
email: reynir@reynir.dk
link: https://reyn.ir/
---
In a [previous post][gptar-post] I describe how I craft a hybrid GUID partition table (GPT) and tar archive by exploiting that there are disjoint areas of a 512 byte *block* that are important to tar headers and *protective* master boot records used in GPT respectively.
I recommend reading it first if you haven't already for context.
After writing the above post I read an excellent and fun *and totally normal* article by Emily on how [she created **executable** tar archives][tar-executable].
Therein I learned a clever hack:
GNU tar has a tar extension for *volume headers*.
These are essentially labels for your tape archives when you're forced to split an archive across multiple tapes.
They can (seemingly) hold any text as label including shell scripts.
What's more, GNU tar and bsdtar **do not** extract these as files!
This is excellent, because I don't actually want to extract or list the GPT header when using GNU tar or bsdtar.
This prompted me to [use a different link indicator](https://github.com/reynir/gptar/pull/1).
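For the curious, here is a rough sketch (ours, not the code from the gptar repository; offsets follow the ustar layout, and the label and size arguments are illustrative) of emitting such a 'V' volume-header block:
```OCaml
(* Build a 512-byte tar header block whose typeflag (offset 156) is 'V',
   i.e. a GNU volume header, with [size] bytes of payload following it. *)
let volume_header ~label ~size =
  let block = Bytes.make 512 '\000' in
  let put off s = Bytes.blit_string s 0 block off (String.length s) in
  put 0 label;                            (* name field: the volume label *)
  put 100 (Printf.sprintf "%07o" 0o644);  (* mode *)
  put 108 "0000000"; put 116 "0000000";   (* uid / gid *)
  put 124 (Printf.sprintf "%011o" size);  (* size of the payload *)
  put 136 (Printf.sprintf "%011o" 0);     (* mtime *)
  Bytes.set block 156 'V';                (* typeflag: GNU volume header *)
  put 257 "ustar "; put 263 " ";          (* old GNU magic "ustar  \000" *)
  (* checksum: sum of all bytes, with the checksum field read as spaces *)
  put 148 "        ";
  let sum = ref 0 in
  Bytes.iter (fun c -> sum := !sum + Char.code c) block;
  put 148 (Printf.sprintf "%06o\000 " !sum);
  block
```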
This worked pretty great.
Listing the archive using GNU tar I still get `GPTAR`, but with verbose listing it's displayed as a `--Volume Header--`:
```shell
$ tar -tvf disk.img
Vr-------- 0/0 16896 1970-01-01 01:00 GPTAR--Volume Header--
-rw-r--r-- 0/0 14 1970-01-01 01:00 test.txt
```
And more importantly the `GPTAR` entry is ignored when extracting:
```shell
$ mkdir tmp
$ cd tmp/
$ tar -xf ../disk.img
$ ls
test.txt
```
## BSD tar / libarchive
Unfortunately, this broke bsdtar!
```shell
$ bsdtar -tf disk.img
bsdtar: Damaged tar archive
bsdtar: Error exit delayed from previous errors.
```
This is annoying because we run FreeBSD on the host for [opam.robur.coop](https://opam.robur.coop), our instance of [opam-mirror][opam-mirror].
This Autumn we updated [opam-mirror][opam-mirror] to use the hybrid GPT+tar GPTar *tartition table*[^tartition] instead of hard coded or boot parameter specified disk offsets for the different partitions - which was extremely brittle!
So we were no longer able to inspect the contents of the tar partition from the host!
Unacceptable!
So I started to dig into libarchive where bsdtar comes from.
To my surprise, after building bsdtar from the git clone of the source code it ran perfectly fine!
```shell
$ ./bsdtar -tf ../gptar/disk.img
test.txt
```
I eventually figured out that [this change][libarchive-pr] fixed it for me.
I got in touch with Emily to let her know that bsdtar recently fixed this (ab)use of GNU volume headers.
Her reply was basically "as of when I wrote the article, I was pretty sure bsdtar ignored it."
And indeed it did.
Examining the diff further revealed that it ignored the GNU volume header - just not "correctly" when the GNU volume header was abused to carry file content as I did:
```diff
 /*
  * Interpret 'V' GNU tar volume header.
  */
 static int
 header_volume(struct archive_read *a, struct tar *tar,
     struct archive_entry *entry, const void *h, size_t *unconsumed)
 {
- (void)h;
+ const struct archive_entry_header_ustar *header;
+ int64_t size, to_consume;
+
+ (void)a; /* UNUSED */
+ (void)tar; /* UNUSED */
+ (void)entry; /* UNUSED */
- /* Just skip this and read the next header. */
- return (tar_read_header(a, tar, entry, unconsumed));
+ header = (const struct archive_entry_header_ustar *)h;
+ size = tar_atol(header->size, sizeof(header->size));
+ to_consume = ((size + 511) & ~511);
+ *unconsumed += to_consume;
+ return (ARCHIVE_OK);
 }
```
So thanks to the above change we can expect a release of libarchive supporting further flavors of abuse of GNU volume headers!
🥳
[gptar-post]: gptar.html
[tar-executable]: https://uni.horse/executable-tarballs.html
[opam-mirror]: https://git.robur.coop/robur/opam-mirror/
[libarchive-pr]: https://github.com/libarchive/libarchive/pull/2127
[^tartition]: Emily came up with the much better term "tartition table" than what I had come up with - "GPTar".

View file

@@ -16,7 +16,7 @@ author:
 ## Updating MirageVPN
-As announced [earlier this month](miragevpn.html), we've been working hard over the last months on MirageVPN (initially developed in 2019, targeting OpenVPN™ 2.4.7, now 2.6.6). We managed to receive funding from [NGI Assure](https://www.assure.ngi.eu/) call (via [NLnet](https://nlnet.nl)). We've made over 250 commits with more than 10k lines added, and 18k lines removed. We closed nearly all old issues, and opened 100 fresh ones, of which we already closed more than half of them. :D
+As announced [earlier this month](https://blog.robur.coop/articles/miragevpn.html), we've been working hard over the last months on MirageVPN (initially developed in 2019, targeting OpenVPN™ 2.4.7, now 2.6.6). We managed to receive funding from [NGI Assure](https://www.assure.ngi.eu/) call (via [NLnet](https://nlnet.nl)). We've made over 250 commits with more than 10k lines added, and 18k lines removed. We closed nearly all old issues, and opened 100 fresh ones, of which we already closed more than half of them. :D
 ### Actual bugs fixed (that were leading to non-working MirageVPN applications)
@@ -29,7 +29,7 @@ To avoid any future breakage while revising the code (cleaning it up, extending
 ### New features: AEAD ciphers, supporting more configuration primitives
-We added various configuration primitives, amongst them configuratble tls ciphersuites, minimal and maximal tls version to use, [tls-crypt-v2](miragevpn.html), verify-x509-name, cipher, remote-random, ...
+We added various configuration primitives, amongst them configuratble tls ciphersuites, minimal and maximal tls version to use, [tls-crypt-v2](https://blog.robur.coop/articles/miragevpn.html), verify-x509-name, cipher, remote-random, ...
 From a cryptographic point of view, we are now supporting more [authentication hashes](https://github.com/robur-coop/miragevpn/pull/108) via the configuration directive `auth`, namely the SHA2 family - previously, only SHA1 was supported, [AEAD ciphers](https://github.com/robur-coop/miragevpn/pull/125) (AES-128-GCM, AES-256-GCM, CHACHA20-POLY1305) - previously only AES-256-CBC was supported.

View file

@@ -21,7 +21,7 @@ coauthors:
     link: https://reyn.ir/
 ---
-As we were busy continuing to work on [MirageVPN](https://github.com/robur-coop/miragevpn), we got in touch with [eduVPN](https://eduvpn.org), who are interested about deploying MirageVPN. We got example configuration from their side, and [fixed](https://github.com/robur-coop/miragevpn/pull/201) [some](https://github.com/robur-coop/miragevpn/pull/168) [issues](https://github.com/robur-coop/miragevpn/pull/202), and also implemented [tls-crypt](https://github.com/robur-coop/miragevpn/pull/169) - which was straightforward since we earlier spend time to implement [tls-crypt-v2](miragevpn.html).
+As we were busy continuing to work on [MirageVPN](https://github.com/robur-coop/miragevpn), we got in touch with [eduVPN](https://eduvpn.org), who are interested about deploying MirageVPN. We got example configuration from their side, and [fixed](https://github.com/robur-coop/miragevpn/pull/201) [some](https://github.com/robur-coop/miragevpn/pull/168) [issues](https://github.com/robur-coop/miragevpn/pull/202), and also implemented [tls-crypt](https://github.com/robur-coop/miragevpn/pull/169) - which was straightforward since we earlier spend time to implement [tls-crypt-v2](https://blog.robur.coop/articles/miragevpn.html).
 In January, they gave MirageVPN another try, and [measured the performance](https://github.com/robur-coop/miragevpn/issues/206) -- which was very poor -- MirageVPN (run as a Unix binary) provided a bandwith of 9.3Mb/s, while OpenVPN provided a bandwidth of 360Mb/s (using a VPN tunnel over TCP).
@@ -45,7 +45,7 @@ The learnings of our performance engineering are in three areas:
 ## Conclusion
-To conclude: we already achieved a factor of 25 in performance by adapting the code in various ways. We have ideas to improve the performance even more in the future - we also work on using OCaml string and bytes, instead of off-the-OCaml-heap-allocated bigarrays (see [our previous article](speeding-ec-string.html), which provided some speedups).
+To conclude: we already achieved a factor of 25 in performance by adapting the code in various ways. We have ideas to improve the performance even more in the future - we also work on using OCaml string and bytes, instead of off-the-OCaml-heap-allocated bigarrays (see [our previous article](https://blog.robur.coop/articles/speeding-ec-string.html), which provided some speedups).
 Don't hesitate to reach out to us on [GitHub](https://github.com/robur-coop/miragevpn/issues), or [by mail](https://robur.coop/Contact) if you're stuck.

View file

@@ -22,7 +22,7 @@ coauthors:
 It is a great pleasure to finally announce that we have finished a server implementation for MirageVPN (OpenVPN™-compatible). This allows to setup a very robust VPN network on both the client and the server side.
-As announced last year, [MirageVPN](miragevpn.html) is a reimplemtation of OpenVPN™ in OCaml, with [MirageOS](https://mirage.io) unikernels.
+As announced last year, [MirageVPN](https://blog.robur.coop/articles/miragevpn.html) is a reimplemtation of OpenVPN™ in OCaml, with [MirageOS](https://mirage.io) unikernels.
 ## Why a MirageVPN server?

View file

@@ -1,54 +0,0 @@
---
date: 2024-06-26
title: Testing MirageVPN against OpenVPN™
description: Some notes about how we test MirageVPN against OpenVPN™
tags:
- OCaml
- MirageOS
- cryptography
- security
- testing
- vpn
author:
name: Reynir Björnsson
email: reynir@reynir.dk
link: https://reyn.ir/
---
As our last milestone (for now) for the [EU NGI Assure](https://www.assure.ngi.eu/)-funded MirageVPN project, we have been working on testing MirageVPN, our OpenVPN™-compatible VPN implementation, against the upstream OpenVPN™.
During development we have conducted many manual tests.
However, this scales poorly, and it is easy to forget to test certain cases.
Therefore, we designed and implemented interoperability testing, driving the C implementation on one side and our OCaml implementation on the other. The input for such a test is a configuration file that both implementations can use.
Thus we test both the establishment of the tunnel and the tunnel itself.
While conducting the tests, our instrumented binaries expose code coverage information. We use that to guide which other configurations are worth testing. Our goal is to achieve a high code coverage rate while using a small number of different configurations. These interoperability tests run fast enough that they are executed on each commit by CI.
A nice property of this test setup is that it runs with an unmodified OpenVPN binary.
This means we can use an off-the-shelf OpenVPN binary from the package repository, and it does not entail maintaining an OpenVPN fork.
Testing against a future version of OpenVPN becomes trivial.
We do not just test a single part of our implementation but achieve an end-to-end test.
The same configuration files are used for both our implementation and the C implementation, and each configuration is used twice: once with our implementation acting as the client, and once acting as the server.
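For illustration, a minimal client configuration that both implementations could consume might look like the following. This is a hypothetical example, not one of the actual test configurations, and the file names are placeholders.

```
client
dev tun
proto udp
remote 127.0.0.1 1194
ca ca.pem
cert client.pem
key client.key
cipher AES-256-GCM
verb 3
```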
We added a flag, `--test`, to our client and our [recently finished server](miragevpn-server.html) applications, which makes them exit once a tunnel is established and the server has replied to an ICMP echo request from the client.
Our client and server can be run without a tun device, which would otherwise require elevated privileges.
Unfortunately, OpenVPN requires privileges to at least configure a tun device.
Our MirageVPN implementation does IP packet parsing in userspace.
We test our protocol implementation, not the entire unikernel - but the unikernel code is a tiny layer on top of the purely functional protocol implementation.
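To give a rough idea of what parsing IP packets in userspace involves, here is a hypothetical sketch (not the parser MirageVPN actually uses): extracting two IPv4 header fields from a raw buffer requires nothing beyond plain byte accesses, so no tun device or kernel support is needed.

```ocaml
(* Hypothetical sketch: read the version and protocol fields of an IPv4
   header from a raw buffer, entirely in userspace. *)
let ipv4_version_and_protocol (pkt : string) =
  if String.length pkt < 20 then None             (* shorter than a minimal IPv4 header *)
  else
    let version = String.get_uint8 pkt 0 lsr 4 in (* high nibble of the first byte *)
    let protocol = String.get_uint8 pkt 9 in      (* 1 = ICMP, 6 = TCP, 17 = UDP *)
    Some (version, protocol)
```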
We explored unit testing the packet decoding and decryption with our implementation and the C implementation.
Specifically, we encountered a packet whose message authentication code (MAC) was deemed invalid by the C implementation.
This helped us discover that the MAC computation was correct but the packet encoding was truncated - both implementations agreed that the MAC was bad.
The test was very tedious to write and would not easily scale to cover a large portion of the code.
If you are interested, take a look at our [modifications to OpenVPN](https://github.com/reynir/openvpn/tree/badmac-test) and [modifications to MirageVPN](https://github.com/robur-coop/miragevpn/tree/badmac-test).
The end-to-end testing is in addition to our unit tests, our fuzz testing, and our [benchmarking](miragevpn-performance.html) binary.
With 4 configurations we achieve above 75% code coverage in MirageVPN.
While investigating the code coverage results, we found various pieces of code that were never executed, and we were able to remove them.
Code that does not exist is bug-free :D
With these tests in place, future maintenance is less daunting, as they will help guard us against breaking the code.
At the moment we do not exercise the error paths very well in the code.
This is much less straightforward to test in this manner, and is important future work.
We plan to develop a client and a server that inject faults at various stages of the protocol to test these error paths.
OpenVPN built with debugging enabled also comes with a `--gremlin` mode that injects faults, which would be interesting to investigate.

View file

@ -93,6 +93,6 @@ As a spoiler, for P-256 sign there's another improvement of around 4.5 with [Vir
Remove all cstruct, everywhere, apart from in mirage-block-xen and mirage-net-xen ;). It was a fine decision in the early MirageOS days, but from a performance point of view, and for making our packages more broadly usable without many dependencies, it is time to remove cstruct. Earlier this year we already [removed cstruct from ocaml-tar](https://github.com/mirage/ocaml-tar/pull/137) for similar reasons. Remove all cstruct, everywhere, apart from in mirage-block-xen and mirage-net-xen ;). It was a fine decision in the early MirageOS days, but from a performance point of view, and for making our packages more broadly usable without many dependencies, it is time to remove cstruct. Earlier this year we already [removed cstruct from ocaml-tar](https://github.com/mirage/ocaml-tar/pull/137) for similar reasons.
Our MirageOS work is only partially funded; we cross-fund our work through commercial contracts and public (EU) funding. We are part of a non-profit company, and you can make a (tax-deductible - at least in the EU) [donation](https://aenderwerk.de/donate/) (select "DONATION robur" in the dropdown menu). Our MirageOS work is only partially funded; we cross-fund our work through commercial contracts and public (EU) funding. We are part of a non-profit company, and you can make a (tax-deductible - at least in the EU) [donation](https://aenderwerk.de/donate/) (select "DONATION robur" in the dropdown menu).
We're keen to get MirageOS deployed in production - if you would like to do that, don't hesitate to reach out to us via eMail team at robur.coop We're keen to get MirageOS deployed in production - if you would like to do that, don't hesitate to reach out to us via eMail team at robur.coop

View file

@ -1,7 +1,5 @@
open Yocaml open Yocaml
module SM = Map.Make(String)
let is_empty_list = function [] -> true | _ -> false let is_empty_list = function [] -> true | _ -> false
module Date = struct module Date = struct
@ -311,6 +309,7 @@ module Article = struct
let title p = p#title let title p = p#title
let description p = p#description let description p = p#description
let tags p = p#tags
let date p = p#date let date p = p#date
let neutral = let neutral =
@ -415,70 +414,6 @@ module Articles = struct
] ]
end end
module Tag = struct
type t = {
name : string;
articles : (Path.t * Article.t) list;
}
let make ~name ~articles =
{ name; articles }
let normalize_article (ident, article) =
let open Data in
record (("url", string @@ Path.to_string ident) :: Article.normalize article)
let normalize { name; articles } =
let open Data in
[
("name", string name);
("articles", (list_of normalize_article) articles);
]
end
module Tags = struct
class type t = object ('self)
inherit Articles.t
method tags : Tag.t list
end
class tags ?title ?description articles =
object
inherit Articles.articles ?title ?description articles as super
method! title = Some "Tags"
method tags =
let tags =
let update article sm tag =
SM.update tag
(function
| None -> Some [article]
| Some urls -> Some (article :: urls))
sm
in
List.fold_left
(fun sm (url, article) ->
List.fold_left (update (url, article)) sm article#tags)
SM.empty
super#articles
|> SM.bindings
in
List.map (fun (tag, articles) ->
Tag.make ~name:tag ~articles)
tags
end
let of_articles articles =
new tags ?title:articles#title ?description:articles#description articles#articles
let normalize_tag tag =
let open Data in
record (Tag.normalize tag)
let normalize tags =
let open Data in
("all_tags", (list_of normalize_tag tags#tags)) :: Articles.normalize tags
end
module Make_with_target (S : sig module Make_with_target (S : sig
val source : Path.t val source : Path.t
val target : Path.t val target : Path.t
@ -492,7 +427,6 @@ struct
let images = Path.(source_root / "images") let images = Path.(source_root / "images")
let articles = Path.(source_root / "articles") let articles = Path.(source_root / "articles")
let index = Path.(source_root / "pages" / "index.md") let index = Path.(source_root / "pages" / "index.md")
let tags = Path.(source_root / "pages" / "tags.md")
let templates = Path.(source_root / "templates") let templates = Path.(source_root / "templates")
let template file = Path.(templates / file) let template file = Path.(templates / file)
let binary = Path.rel [ Sys.argv.(0) ] let binary = Path.rel [ Sys.argv.(0) ]
@ -503,7 +437,9 @@ struct
let target_root = S.target let target_root = S.target
let pages = target_root let pages = target_root
let articles = Path.(target_root / "articles") let articles = Path.(target_root / "articles")
let rss1 = Path.(target_root / "rss1.xml")
let rss2 = Path.(target_root / "feed.xml") let rss2 = Path.(target_root / "feed.xml")
let atom = Path.(target_root / "atom.xml")
let as_html into file = let as_html into file =
file |> Path.move ~into |> Path.change_extension "html" file |> Path.move ~into |> Path.change_extension "html"
@ -528,7 +464,7 @@ struct
Pipeline.track_file Source.binary Pipeline.track_file Source.binary
>>> Yocaml_yaml.Pipeline.read_file_with_metadata (module Article) file >>> Yocaml_yaml.Pipeline.read_file_with_metadata (module Article) file
>>* (fun (obj, str) -> Eff.return (obj#with_host host, str)) >>* (fun (obj, str) -> Eff.return (obj#with_host host, str))
>>> Yocaml_cmarkit.content_to_html ~strict:false () >>> Yocaml_cmarkit.content_to_html ()
>>> Yocaml_jingoo.Pipeline.as_template >>> Yocaml_jingoo.Pipeline.as_template
(module Article) (module Article)
(Source.template "article.html") (Source.template "article.html")
@ -559,7 +495,7 @@ struct
begin begin
Pipeline.track_files [ Source.binary; Source.articles ] Pipeline.track_files [ Source.binary; Source.articles ]
>>> Yocaml_yaml.Pipeline.read_file_with_metadata (module Page) file >>> Yocaml_yaml.Pipeline.read_file_with_metadata (module Page) file
>>> Yocaml_cmarkit.content_to_html ~strict:false () >>> Yocaml_cmarkit.content_to_html ()
>>> first compute_index >>> first compute_index
>>* (fun (obj, str) -> Eff.return (obj#with_host host, str)) >>* (fun (obj, str) -> Eff.return (obj#with_host host, str))
>>> Yocaml_jingoo.Pipeline.as_template ~strict:true >>> Yocaml_jingoo.Pipeline.as_template ~strict:true
@ -571,37 +507,8 @@ struct
>>> drop_first () >>> drop_first ()
end end
let process_tags ~host =
let file = Source.tags in
let file_target = Target.(as_html pages file) in
let open Task in
let compute_index =
Articles.compute_index
(module Yocaml_yaml)
~where:(Path.has_extension "md")
~compute_link:(Target.as_html @@ Path.abs [ "articles" ])
Source.articles
in
Action.write_static_file file_target
begin
Pipeline.track_files [ Source.binary; Source.articles ]
>>> Yocaml_yaml.Pipeline.read_file_with_metadata (module Page) file
>>> Yocaml_cmarkit.content_to_html ~strict:false ()
>>> first compute_index
>>* (fun (obj, str) -> Eff.return (Tags.of_articles (obj#with_host host), str))
>>> Yocaml_jingoo.Pipeline.as_template ~strict:true
(module Tags)
(Source.template "tags.html")
>>> Yocaml_jingoo.Pipeline.as_template ~strict:true
(module Tags)
(Source.template "layout.html")
>>> drop_first ()
end
let feed_title = "The Robur's blog" let feed_title = "The Robur's blog"
let site_url = "https://blog.robur.coop" let site_url = "https://blog.robur.coop/"
let feed_description = "The Robur cooperative blog" let feed_description = "The Robur cooperative blog"
let fetch_articles = let fetch_articles =
@ -613,6 +520,25 @@ struct
~compute_link:(Target.as_html @@ Path.abs [ "articles" ]) ~compute_link:(Target.as_html @@ Path.abs [ "articles" ])
Source.articles Source.articles
let rss1 =
let from_articles ~title ~site_url ~description ~feed_url () =
let open Yocaml_syndication in
Rss1.from ~title ~url:feed_url ~link:site_url ~description
@@ fun (path, article) ->
let title = Article.title article in
let link = site_url ^ Yocaml.Path.to_string path in
let description = Article.description article in
Rss1.item ~title ~link ~description
in
let open Task in
Action.write_static_file Target.rss1
begin
fetch_articles
>>> from_articles ~title:feed_title ~site_url
~description:feed_description
~feed_url:"https://blog.robur.coop/rss1.xml" ()
end
let rss2 = let rss2 =
let open Task in let open Task in
let from_articles ~title ~site_url ~description ~feed_url () = let from_articles ~title ~site_url ~description ~feed_url () =
@ -662,12 +588,43 @@ struct
~feed_url:"https://blog.robur.coop/feed.xml" () ~feed_url:"https://blog.robur.coop/feed.xml" ()
end end
let atom =
let open Task in
let open Yocaml_syndication in
let authors = Yocaml.Nel.singleton @@ Person.make "The Robur Team" in
let from_articles ?(updated = Atom.updated_from_entries ()) ?(links = [])
?id ~site_url ~authors ~title ~feed_url () =
let id = Option.value ~default:feed_url id in
let feed_url = Atom.self feed_url in
let base_url = Atom.link site_url in
let links = base_url :: feed_url :: links in
Atom.from ~links ~updated ~title ~authors ~id
begin
fun (path, article) ->
let title = Article.title article in
let content_url = site_url ^ Yocaml.Path.to_string path in
let updated =
Datetime.make (Date.to_archetype_date_time (Article.date article))
in
let categories = List.map Category.make (Article.tags article) in
let summary = Atom.text (Article.description article) in
let links = [ Atom.alternate content_url ~title ] in
Atom.entry ~links ~categories ~summary ~updated ~id:content_url
~title:(Atom.text title) ()
end
in
Action.write_static_file Target.atom
begin
fetch_articles
>>> from_articles ~site_url ~authors ~title:(Atom.text feed_title)
~feed_url:"https://blog.robur.coop/atom.xml" ()
end
let process_all ~host = let process_all ~host =
let open Eff in let open Eff in
Action.restore_cache ~on:`Source Source.cache Action.restore_cache ~on:`Source Source.cache
>>= process_css_files >>= process_js_files >>= process_images_files >>= process_css_files >>= process_js_files >>= process_images_files
>>= process_tags ~host >>= process_articles ~host >>= process_index ~host >>= rss1 >>= rss2 >>= atom
>>= process_articles ~host >>= process_index ~host >>= rss2
>>= Action.store_cache ~on:`Source Source.cache >>= Action.store_cache ~on:`Source Source.cache
end end

View file

@ -14,7 +14,6 @@
fmt.tty fmt.tty
logs.fmt logs.fmt
git-unix git-unix
bos
yocaml yocaml
yocaml_git yocaml_git
yocaml_syndication yocaml_syndication

View file

@ -15,34 +15,12 @@ let reporter ppf =
in in
{ Logs.report } { Logs.report }
let run_git_rev_parse () =
let open Bos in
let value = OS.Cmd.run_out
Cmd.(v "git" % "describe" % "--always" % "--dirty"
% "--exclude=*" % "--abbrev=0")
in
match OS.Cmd.out_string value with
| Ok (value, (_, `Exited 0)) -> Some value
| Ok (value, (run_info, _)) ->
Logs.warn (fun m -> m "Failed to get commit id: %a: %s"
Cmd.pp (OS.Cmd.run_info_cmd run_info)
value);
None
| Error (`Msg e) ->
Logs.warn (fun m -> m "Failed to get commit id: %s" e);
None
let message () =
match run_git_rev_parse () with
| Some hash -> Fmt.str "Pushed by YOCaml 2 from %s" hash
| None -> Fmt.str "Pushed by YOCaml 2"
let () = Fmt_tty.setup_std_outputs ~style_renderer:`Ansi_tty ~utf_8:true () let () = Fmt_tty.setup_std_outputs ~style_renderer:`Ansi_tty ~utf_8:true ()
let () = Logs.set_reporter (reporter Fmt.stdout) let () = Logs.set_reporter (reporter Fmt.stdout)
(* let () = Logs.set_level ~all:true (Some Logs.Debug) *) (* let () = Logs.set_level ~all:true (Some Logs.Debug) *)
let author = ref "The Robur Team" let author = ref "The Robur Team"
let email = ref "team@robur.coop" let email = ref "team@robur.coop"
let message = ref (message ()) let message = ref "Pushed by YOCaml 2"
let remote = ref "git@git.robur.coop:robur/blog.robur.coop.git#gh-pages" let remote = ref "git@git.robur.coop:robur/blog.robur.coop.git#gh-pages"
let host = ref "https://blog.robur.coop" let host = ref "https://blog.robur.coop"
@ -51,14 +29,14 @@ module Source = Yocaml_git.From_identity (Yocaml_unix.Runtime)
let usage = let usage =
Fmt.str Fmt.str
"%s [--message <message>] [--author <author>] [--email <email>] -r \ "%s [--message <message>] [--author <author>] [--email <email>] -r \
<repository>#<branch>" <repository>"
Sys.argv.(0) Sys.argv.(0)
let specification = let specification =
[ [
("--message", Arg.Set_string message, "The commit message") ("--message", Arg.Set_string message, "The commit message")
; ("--email", Arg.Set_string email, "The email used to craft the commit") ; ("--email", Arg.Set_string email, "The email used to craft the commit")
; ("-r", Arg.Set_string remote, "The Git repository including #branch, e.g. " ^ !remote) ; ("-r", Arg.Set_string remote, "The Git repository")
; ("--author", Arg.Set_string author, "The Git commit author") ; ("--author", Arg.Set_string author, "The Git commit author")
; ("--host", Arg.Set_string host, "The host where the blog is available") ; ("--host", Arg.Set_string host, "The host where the blog is available")
] ]

View file

@ -19,13 +19,12 @@ bug-reports: "https://github.com/dinosaure/blogger/issues"
depends: [ depends: [
"ocaml" { >= "5.1.0" } "ocaml" { >= "5.1.0" }
"dune" { >= "3.16.0" } "dune" { >= "2.8" }
"preface" { >= "0.1.0" } "preface" { >= "0.1.0" }
"logs" {>= "0.7.0" } "logs" {>= "0.7.0" }
"cmdliner" { >= "1.0.0"} "cmdliner" { >= "1.0.0"}
"http-lwt-client" "http-lwt-client"
"bos" "yocaml"
"yocaml" {>= "2.0.1"}
"yocaml_unix" "yocaml_unix"
"yocaml_yaml" "yocaml_yaml"
"yocaml_cmarkit" "yocaml_cmarkit"
@ -33,3 +32,14 @@ depends: [
"yocaml_jingoo" "yocaml_jingoo"
"yocaml_syndication" "yocaml_syndication"
] ]
pin-depends: [
["yocaml.dev" "git+https://gitlab.com/funkywork/yocaml.git#c2809182a59571a863d6ad14a77f720f6fa577dc" ]
["yocaml_runtime.dev" "git+https://gitlab.com/funkywork/yocaml.git#c2809182a59571a863d6ad14a77f720f6fa577dc" ]
["yocaml_unix.dev" "git+https://gitlab.com/funkywork/yocaml.git#c2809182a59571a863d6ad14a77f720f6fa577dc" ]
["yocaml_yaml.dev" "git+https://gitlab.com/funkywork/yocaml.git#c2809182a59571a863d6ad14a77f720f6fa577dc" ]
["yocaml_cmarkit.dev" "git+https://gitlab.com/funkywork/yocaml.git#c2809182a59571a863d6ad14a77f720f6fa577dc" ]
["yocaml_git.dev" "git+https://gitlab.com/funkywork/yocaml.git#c2809182a59571a863d6ad14a77f720f6fa577dc" ]
["yocaml_jingoo.dev" "git+https://gitlab.com/funkywork/yocaml.git#c2809182a59571a863d6ad14a77f720f6fa577dc" ]
["yocaml_syndication.dev" "git+https://gitlab.com/funkywork/yocaml.git#c2809182a59571a863d6ad14a77f720f6fa577dc" ]
]

View file

@ -197,10 +197,6 @@ article code {
color: #fff; color: #fff;
} }
.tag-box:target > h3 > span {
background-color: #c2410c;
}
.tag-box > h3 > span::before { .tag-box > h3 > span::before {
content: "#"; content: "#";
} }

View file

@ -1,2 +1,2 @@
(lang dune 3.16) (lang dune 2.8)
(name blogger) (name blogger)

Binary file not shown. (Before: 6.3 KiB)

Binary file not shown. (Before: 128 KiB)

View file

@ -1,10 +1,10 @@
<a href="/index.html">Back to index</a> <a href="{{ host }}/index.html">Back to index</a>
<article> <article>
<h1>{{ title }}</h1> <h1>{{ title }}</h1>
<ul class="tags-list"> <ul class="tags-list">
{%- for tag in tags -%} {%- for tag in tags -%}
<li><a href="/tags.html#tag-{{ tag }}">{{ tag }}</a></li> <li>{{ tag }}</li>
{%- endfor -%} {%- endfor -%}
</ul> </ul>
{%- autoescape false -%} {%- autoescape false -%}

View file

@ -1,4 +1,4 @@
<a class="small-button rss" href="/feed.xml">RSS</a> <a class="small-button rss" href="{{ host }}/feed.xml">RSS</a>
{%- autoescape false -%} {%- autoescape false -%}
{{ yocaml_body }} {{ yocaml_body }}
@ -21,12 +21,12 @@
</div> </div>
<div class="content"> <div class="content">
<span class="date">{{ article.date.human }}</span> <span class="date">{{ article.date.human }}</span>
<a href="{{ article.url }}">{{ article.title }}</a><br /> <a href="{{ host }}{{ article.url }}">{{ article.title }}</a><br />
<p>{{ article.description }}</p> <p>{{ article.description }}</p>
<div class="bottom"> <div class="bottom">
<ul class="tags-list"> <ul class="tags-list">
{%- for tag in article.tags -%} {%- for tag in article.tags -%}
<li><a href="/tags.html#tag-{{ tag }}">{{ tag }}</a></li> <li>{{ tag }}</li>
{%- endfor -%} {%- endfor -%}
</ul> </ul>
</div> </div>

View file

@ -5,13 +5,13 @@
<meta http-equiv="x-ua-compatible" content="ie=edge"> <meta http-equiv="x-ua-compatible" content="ie=edge">
<meta name="viewport" content="width=device-width, initial-scale=1"> <meta name="viewport" content="width=device-width, initial-scale=1">
<title> <title>
Robur's blog - {{ title }} Robur's blog{{ dash }}{{ title }}
</title> </title>
<meta name="description" content="{{ description }}"> <meta name="description" content="{{ description }}">
<link type="text/css" rel="stylesheet" href="/css/hl.css"> <link type="text/css" rel="stylesheet" href="{{ host }}/css/hl.css">
<link type="text/css" rel="stylesheet" href="/css/style.css"> <link type="text/css" rel="stylesheet" href="{{ host }}/css/style.css">
<script src="/js/hl.js"></script> <script src="{{ host }}/js/hl.js"></script>
<link rel="alternate" type="application/rss+xml" href="/feed.xml" title="blog.robur.coop"> <link rel="alternate" type="application/rss+xml" href="{{ host }}/feed.xml" title="blog.robur.coop">
</head> </head>
<body> <body>
<header> <header>

templates/tag.html Normal file
View file

@ -0,0 +1,21 @@
<a href="/index.html">Back to index</a>
<ul class="tags-list aeration">
{%- for tag in tags -%}
<li><a href="/{{ tag.link }}">{{ tag.name }} ({{ tag.number }})</a></li>
{%- endfor -%}
</ul>
<div class="tag-box" id="tag-{{ tag }}">
{%- set nb_tags = length (articles) -%}
<h3>
<span>{{ tag }}</span>
{{ nb_tags }}
{%- if nb_tags > 1 %} entries{%- else %} entry{%- endif -%}
</h3>
<ul>
{%- for article in articles -%}
<li><a href="/{{ article.url }}">{{ article.metadata.title }}</a></li>
{%- endfor -%}
</ul>
</div>

View file

@ -1,20 +0,0 @@
<a href="/index.html">Back to index</a>
<ul class="tags-list aeration">
{%- for tag in all_tags -%}
<li><a href="#tag-{{ tag.name }}">{{ tag.name }}</a></li>
{%- endfor -%}
</ul>
{%- for tag in all_tags -%}
<div class="tag-box" id="tag-{{ tag.name }}">
<h3>
<span>{{ tag.name }}</span>
</h3>
<ul>
{%- for article in tag.articles -%}
<li><a href="{{ article.url }}">{{ article.title }}</a></li>
{%- endfor -%}
</ul>
</div>
{%- endfor -%}

View file

@ -1,6 +1,6 @@
#!/bin/sh #!/bin/sh
opam exec -- dune exec bin/push.exe -- \ opam exec -- dune exec src/push.exe -- \
-r git@git.robur.coop:robur/blog.robur.coop.git#gh-pages \ -r git@git.robur.coop:robur/blog.robur.coop.git#gh-pages \
--host https://blog.robur.coop \ --host https://blog.robur.coop \
--name "The Robur team" \ --name "The Robur team" \