updated from main (commit 212884184b)

This commit is contained in:
Canopy bot 2022-11-17 12:45:17 +00:00
parent 03c6110bef
commit 2a73e4a6c6
6 changed files with 171 additions and 36 deletions

81
Posts/Albatross Normal file
View file

@@ -0,0 +1,81 @@
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml"><head><title>Deploying reproducible unikernels with albatross</title><meta charset="UTF-8"/><link rel="stylesheet" href="/static/css/style.css"/><link rel="stylesheet" href="/static/css/highlight.css"/><script src="/static/js/highlight.pack.js"></script><script>hljs.initHighlightingOnLoad();</script><link rel="alternate" href="/atom" title="Deploying reproducible unikernels with albatross" type="application/atom+xml"/><meta name="viewport" content="width=device-width, initial-scale=1, viewport-fit=cover"/></head><body><nav class="navbar navbar-default navbar-fixed-top"><div class="container"><div class="navbar-header"><a class="navbar-brand" href="/Posts">full stack engineer</a></div><div class="collapse navbar-collapse collapse"><ul class="nav navbar-nav navbar-right"><li><a href="/About"><span>About</span></a></li><li><a href="/Posts"><span>Posts</span></a></li></ul></div></div></nav><main><div class="flex-container"><div class="post"><h2>Deploying reproducible unikernels with albatross</h2><span class="author">Written by hannes</span><br/><div class="tags">Classified under: <a href="/tags/mirageos" class="tag">mirageos</a><a href="/tags/deployment" class="tag">deployment</a></div><span class="date">Published: 2022-11-17 (last updated: 2022-11-17)</span><article><h2>Deploying MirageOS unikernels</h2>
<p>More than five years ago, I posted <a href="/Posts/VMM">how to deploy MirageOS unikernels</a>. My motivation to work on this topic is that I'm convinced of the reduced complexity, improved security, and more sustainable resource footprint of MirageOS unikernels, and I want to ease their deployment. More than one year ago, I described <a href="/Posts/Deploy">how to deploy reproducible unikernels</a>.</p>
<h2>Albatross</h2>
<p>In recent months we worked hard on the underlying infrastructure: <a href="https://github.com/roburio/albatross">albatross</a>. Albatross is the orchestration system for MirageOS unikernels that use solo5 with the <a href="https://github.com/Solo5/solo5/blob/master/docs/architecture.md">hvt or spt tender</a>. It deals with three tasks:</p>
<ul>
<li>unikernel creation (destruction, restart)
</li>
<li>capturing console output
</li>
<li>collecting metrics in the host system about unikernels
</li>
</ul>
<p>In addition to the above, albatross deals with multiple tenants on the same machine: remote management of your unikernel fleet via TLS, and resource policies.</p>
<h2>History</h2>
<p>The initial commit of albatross was in May 2017. Back then it replaced the shell scripts and manual <code>scp</code> of unikernel images to the server. Over time it evolved and adapted to new environments. Initially a solo5 unikernel would only know of a single network interface; these days there can be multiple, distinguished by name. Initially there was no support for block devices. Only FreeBSD was supported in the early days. Nowadays we build daily packages for Debian, Ubuntu, and FreeBSD, have support for NixOS, and the client side is supported on macOS as well.</p>
<h3>ASN.1</h3>
<p>The communication format between the albatross daemons and clients was changed multiple times. I'm glad that albatross uses ASN.1 as its communication format: it makes extension with optional fields easy, and it also allows a &quot;choice&quot; (the sum type) to be untagged (the binary encoding is the same as without the choice type). Thus adding a choice to an existing grammar, while preserving the old variant in the default (untagged) case, is a decent solution.</p>
<p>So, if you care about backward and forward compatibility, as we do -- since we may be in control of which albatross servers are deployed on our machines, but not which albatross versions the clients are using -- it may be wise to look into ASN.1. Recent efforts (JSON with a schema, ...) may solve similar issues, but ASN.1 is also very compact on the wire.</p>
<h2>What resources does a unikernel need?</h2>
<p>A unikernel is just an operating system for a single service, so there can't be much it needs.</p>
<h3>Name</h3>
<p>So, first of all a unikernel has a name, or a handle. This is useful for reporting statistics, but also to specify which console output you're interested in. The name is a string of printable ASCII characters (plus dash '-' and dot '.'), with a length of up to 64 characters - so yes, you can use a UUID if you like.</p>
<h3>Memory</h3>
<p>Another resource is the amount of memory assigned to the unikernel. This is specified in megabytes (as solo5 does), with the range being 10 (below that, not even a hello world wants to start) to 1024.</p>
<h3>Arguments</h3>
<p>Of course, you can pass boot parameters to the unikernel via albatross. Albatross doesn't impose any restrictions here, but the lower levels may.</p>
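<p>To illustrate (the <code>--arg</code> spelling is an assumption of mine here; <code>albatross-client-local create --help</code> has the authoritative flag names), passing IPv4 configuration as boot parameters could look roughly like this:</p>
<pre><code># flag names and the image name my-service.hvt are illustrative
$ albatross-client-local create my-service my-service.hvt \
    --arg='--ipv4=10.0.42.2/24' --arg='--ipv4-gateway=10.0.42.1'
</code></pre>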
<h3>CPU</h3>
<p>Due to multiple tenants and side-channel attacks, it looked like a good idea right from the beginning to restrict each unikernel to a specific CPU. This way, one tenant may use CPU 5, and another CPU 9 - and they'll not starve each other (best to make sure that these CPUs are in different packages). So, albatross takes a number as the CPU, and executes the solo5 tender within <code>taskset</code>/<code>cpuset</code>.</p>
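<p>Conceptually this is the same as pinning the tender process yourself; a rough sketch of what ends up happening (the exact tender arguments depend on your solo5 version):</p>
<pre><code># Linux: run the solo5 tender on CPU 5 only (arguments are illustrative)
$ taskset -c 5 solo5-hvt --mem=32 --net:service=tap0 my-service.hvt
# FreeBSD: the same idea with cpuset
$ cpuset -l 5 solo5-hvt --mem=32 --net:service=tap0 my-service.hvt
</code></pre>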
<h3>Fail behaviour</h3>
<p>In normal operations, exceptional behaviour may occur. I have to admit that I've seen MirageOS unikernels that suffer from not freeing all the memory they have allocated. To avoid having to get up at 4 AM just to restart the unikernel that went out of memory, there's the possibility to restart the unikernel when it exits. You can even specify on which exit codes it should be restarted (the exit code is the only piece of information we have from the outside about what caused the exit). This feature was implemented in October 2019, and has been very precious since then. :)</p>
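<p>At creation time this is just another option of the client; the flag names below are made up for illustration, the real ones are listed in <code>albatross-client-local create --help</code>:</p>
<pre><code># hypothetical flag names: restart automatically, but only on exit codes 1 and 2
$ albatross-client-local create my-service my-service.hvt --restart --exit-code=1 --exit-code=2
</code></pre>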
<h3>Network</h3>
<p>This becomes a bit more complex: a MirageOS unikernel can have network interfaces, and solo5 specifies a so-called manifest with a list of these (name and type, where the type is so far always basic). Then, on the actual server, bridges (virtual switches) are configured. Now, these may have the same name, or may need to be mapped. And of course, the unikernel expects a tap interface that is connected to such a bridge, not the bridge itself. Thus, albatross creates tap devices, attaches them to the respective bridges, and takes care of cleaning them up on teardown. The albatross client verifies that for each network interface in the manifest, there is a command-line argument specified (<code>--net service:my_bridge</code> or just <code>--net service</code> if the bridge is named 'service'). The tap interface name is not really of interest to the user, and is not exposed.</p>
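<p>On a Linux host a minimal setup could look as follows (bridge, uplink, and unikernel names are examples; the <code>--net</code> argument is the one described above):</p>
<pre><code># create a bridge named 'service' and attach the uplink interface to it
$ ip link add name service type bridge
$ ip link set dev eth0 master service
$ ip link set dev service up
# the unikernel's 'service' network interface is then mapped to this bridge
$ albatross-client-local create my-service my-service.hvt --net service
</code></pre>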
<h3>Block devices</h3>
<p>On the host system, a block device is just a file, which is passed to the unikernel. What's needed is the ability to create one, dump it, and ensure that each file is only used by one unikernel. That's all there is to it.</p>
<h2>Metrics</h2>
<p>Everyone likes graphs over time, showing how much traffic or CPU or memory or whatever has been used by your service. Some of these statistics are only available in the host system, and it is also crucial for development purposes to compare whether the bytes sent by the unikernel sum up to the same value on the host system's tap interface.</p>
<p>The albatross-stats daemon collects metrics from three sources: network interfaces, getrusage (of a child process), and VMM debug counters (to count VM exits etc.). Since the recent 1.5.3 release, albatross-stats connects at startup to albatross-daemon, retrieves the information about which unikernels are up and running, and starts periodically collecting data in memory.</p>
<p>Other clients, be it a dump on your console window, a write into an rrd file (good old MRTG times), or a push to influx, can use the stats data to correlate and better analyse what is happening in the grand scheme of things. This helped a lot when running several unikernels with different opam package sets to figure out which opam packages hold on to memory over time.</p>
<p>As a side note, if you make the unikernel name also available in the unikernel, it can tag its own metrics with the same identifier, and you can correlate high-level events (such as the number of HTTP requests) with low-level ones such as &quot;allocated more memory&quot; or &quot;consumed a lot of CPU&quot;.</p>
<h2>Console</h2>
<p>There's not much to say about the console, just that the albatross-console daemon runs with low privileges and reads from a FIFO that the unikernel writes to. It never writes anything to disk, but keeps the last 1000 lines in memory, available to any client asking for them.</p>
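<p>Reading the console is a single client call, roughly (the subcommand spelling may differ between albatross versions):</p>
<pre><code># show the recent console output of the unikernel named 'my-service'
$ albatross-client-local console my-service
</code></pre>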
<h2>The daemons</h2>
<p>So, the main albatross-daemon runs with superuser privileges to create virtual machines, and opens a unix domain socket to which the clients and the other daemons connect. The other daemons are executed with normal user privileges, and never write anything to disk.</p>
<p>The albatross-daemon keeps state about the running unikernels, and if it is restarted, the unikernels are started again. It may be worth mentioning that this sometimes led to headaches (due to data being dumped to disk, and the old format always needing to be supported), but it was also a huge relief to not have to care about re-creating all the unikernels just because albatross-daemon was killed.</p>
<h2>Remote management</h2>
<p>There's one more daemon program, either albatross-tls-inetd (to be executed by inetd), or albatross-tls-endpoint. They accept clients via a remote TCP connection, and establish a mutually authenticated TLS handshake. When that is done, they forward the command to the respective Unix domain socket, and send back the reply.</p>
<p>The daemon itself has an X.509 certificate to authenticate itself, and the client is requested to show its certificate chain as well. This by now requires TLS 1.3, so the client certificates are sent over the encrypted channel.</p>
<p>A step back: an X.509 certificate contains a public key and a signature from one level up. When the server knows the root (or certificate authority (CA)) certificate, it can follow the chain and verify that the leaf certificate is valid. Additionally, an X.509 certificate is an ASN.1 structure with some fixed fields, but it also contains extensions: a key-value store where the keys are object identifiers and the values are key-dependent data. Also note that this key-value store is cryptographically signed.</p>
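<p>You can look at these extensions in any certificate you have at hand; <code>openssl</code> prints the object identifiers together with their values:</p>
<pre><code># print the fixed fields and the extensions of a certificate
$ openssl x509 -in client.pem -noout -text
</code></pre>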
<p>Albatross uses the object identifier assigned to Camelus Dromedarius (MirageOS - 1.3.6.1.4.1.49836.42) to encode the command to be executed. This means that once the TLS handshake is established, the command to be executed has already been transferred.</p>
<p>In the leaf certificate, there may be the &quot;create unikernel&quot; command with the unikernel image, its boot parameters, and other resources. Or a &quot;read the console of my unikernel&quot; command. In the intermediate certificates (from root to leaf), resource policies are encoded (this path may only have X unikernels running with a total of Y MB memory, and Z MB of block storage, using CPUs A and B, accessing bridges C and D). From the root downwards these policies may only decrease. When a unikernel is to be created (or other commands are executed), the policies are verified to hold. If they do not, an error is reported.</p>
<h2>Fleet management</h2>
<p>Of course it is perfectly fine to deploy your locally compiled unikernel to your albatross server - go for it. But in terms of &quot;what is actually running here?&quot; and &quot;does this unikernel need to be updated because some opam package had a security issue?&quot;, this is not optimal.</p>
<p>Since we provide <a href="https://builds.robur.coop">daily reproducible builds</a> with the current HEAD of the main opam-repository, and these unikernels have no configuration embedded (but take everything as boot parameters), we just deploy them. They come with the information about which opam packages contributed to the binary, which environment variables were set, and which system packages were installed in which versions.</p>
<p>What reproducible builds mean for us: we have a hash of a unikernel image that we can look up in our build infrastructure, and check whether there is a newer image for the same job. And if there is, we provide a diff between the packages that contributed to the currently running unikernel and those in the new image. That is what the albatross-client update command is all about.</p>
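<p>In practice this is a single command against the running deployment (shown with the local client here; depending on your setup it may be the TLS client instead):</p>
<pre><code># compare the running 'my-service' with the latest reproducible build, and replace it if a newer one exists
$ albatross-client-local update my-service
</code></pre>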
<p>Of course, your mileage may vary and you may want automated deployments where each git commit triggers recompilation and redeployment. The downside would be that sometimes only dependencies are updated and you have to cope with that.</p>
<p>At the moment, there is a client connecting directly to the unix domain sockets, <code>albatross-client-local</code>, and one connecting to the TLS endpoint, <code>albatross-client-bistro</code>. The latter applies compression to the unikernel image.</p>
<h2>Installation</h2>
<p>For Debian and Ubuntu systems, we provide package repositories. Browse the dists folder for one matching your distribution, and add it to <code>/etc/apt/sources.list</code>:</p>
<pre><code>$ wget -q -O /etc/apt/trusted.gpg.d/apt.robur.coop.gpg https://apt.robur.coop/gpg.pub
$ echo &quot;deb https://apt.robur.coop ubuntu-20.04 main&quot; &gt;&gt; /etc/apt/sources.list # replace ubuntu-20.04 with e.g. debian-11 on a Debian bullseye machine
$ apt update
$ apt install solo5 albatross
</code></pre>
<p>On FreeBSD:</p>
<pre><code>$ fetch -o /usr/local/etc/pkg/robur.pub https://pkg.robur.coop/repo.pub # download RSA public key
$ echo 'robur: {
url: &quot;https://pkg.robur.coop/${ABI}&quot;,
mirror_type: &quot;srv&quot;,
signature_type: &quot;pubkey&quot;,
pubkey: &quot;/usr/local/etc/pkg/robur.pub&quot;,
enabled: yes
}' &gt; /usr/local/etc/pkg/repos/robur.conf # check https://pkg.robur.coop for the available ABIs
$ pkg update
$ pkg install solo5 albatross
</code></pre>
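<p>On either system the daemons need to be enabled and started after installation (the service names below are an assumption, double-check what the package installed):</p>
<pre><code># FreeBSD (service name assumed to be albatross_daemon)
$ sysrc albatross_daemon_enable=YES
$ service albatross_daemon start
# Debian/Ubuntu with systemd (unit name assumed)
$ systemctl enable --now albatross_daemon
</code></pre>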
<p>For other distributions and systems we do not (yet?) provide binary packages. You can compile and install them using opam (<code>opam install solo5 albatross</code>). Get in touch if you're keen on adding some other distribution to our reproducible build infrastructure.</p>
<h2>Conclusion</h2>
<p>After five years of developing and operating albatross, feel free to get it and try it out. Or read the code, and discuss issues and shortcomings with us - either at the issue tracker or via eMail.</p>
<p>Please reach out to us (at team AT robur DOT coop) if you have feedback and suggestions. We are a non-profit company, and rely on <a href="https://robur.coop/Donate">donations</a> for doing our work - everyone can contribute.</p>
</article></div></div></main></body></html>

View file

@@ -1,5 +1,6 @@
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml"><head><title>full stack engineer</title><meta charset="UTF-8"/><link rel="stylesheet" href="/static/css/style.css"/><link rel="stylesheet" href="/static/css/highlight.css"/><script src="/static/js/highlight.pack.js"></script><script>hljs.initHighlightingOnLoad();</script><link rel="alternate" href="/atom" title="full stack engineer" type="application/atom+xml"/><meta name="viewport" content="width=device-width, initial-scale=1, viewport-fit=cover"/></head><body><nav class="navbar navbar-default navbar-fixed-top"><div class="container"><div class="navbar-header"><a class="navbar-brand" href="/Posts">full stack engineer</a></div><div class="collapse navbar-collapse collapse"><ul class="nav navbar-nav navbar-right"><li><a href="/About"><span>About</span></a></li><li><a href="/Posts"><span>Posts</span></a></li></ul></div></div></nav><main><div class="flex-container"><div class="flex-container"><div class="list-group listing"><a href="/Posts/OpamMirror" class="list-group-item"><h2 class="list-group-item-heading">Mirroring the opam repository and all tarballs</h2><span class="author">Written by hannes</span> <time>2022-09-29</time><br/><p class="list-group-item-text abstract"><p>Re-developing an opam cache from scratch, as a MirageOS unikernel</p>
<html xmlns="http://www.w3.org/1999/xhtml"><head><title>full stack engineer</title><meta charset="UTF-8"/><link rel="stylesheet" href="/static/css/style.css"/><link rel="stylesheet" href="/static/css/highlight.css"/><script src="/static/js/highlight.pack.js"></script><script>hljs.initHighlightingOnLoad();</script><link rel="alternate" href="/atom" title="full stack engineer" type="application/atom+xml"/><meta name="viewport" content="width=device-width, initial-scale=1, viewport-fit=cover"/></head><body><nav class="navbar navbar-default navbar-fixed-top"><div class="container"><div class="navbar-header"><a class="navbar-brand" href="/Posts">full stack engineer</a></div><div class="collapse navbar-collapse collapse"><ul class="nav navbar-nav navbar-right"><li><a href="/About"><span>About</span></a></li><li><a href="/Posts"><span>Posts</span></a></li></ul></div></div></nav><main><div class="flex-container"><div class="flex-container"><div class="list-group listing"><a href="/Posts/Albatross" class="list-group-item"><h2 class="list-group-item-heading">Deploying reproducible unikernels with albatross</h2><span class="author">Written by hannes</span> <time>2022-11-17</time><br/><p class="list-group-item-text abstract"><p>fleet management for MirageOS unikernels using a mutually authenticated TLS handshake</p>
</p></a><a href="/Posts/OpamMirror" class="list-group-item"><h2 class="list-group-item-heading">Mirroring the opam repository and all tarballs</h2><span class="author">Written by hannes</span> <time>2022-09-29</time><br/><p class="list-group-item-text abstract"><p>Re-developing an opam cache from scratch, as a MirageOS unikernel</p>
</p></a><a href="/Posts/Monitoring" class="list-group-item"><h2 class="list-group-item-heading">All your metrics belong to influx</h2><span class="author">Written by hannes</span> <time>2022-03-08</time><br/><p class="list-group-item-text abstract"><p>How to monitor your MirageOS unikernel with albatross and monitoring-experiments</p>
</p></a><a href="/Posts/Deploy" class="list-group-item"><h2 class="list-group-item-heading">Deploying binary MirageOS unikernels</h2><span class="author">Written by hannes</span> <time>2021-06-30</time><br/><p class="list-group-item-text abstract"><p>Finally, we provide reproducible binary MirageOS unikernels together with packages to reproduce them and setup your own builder</p>
</p></a><a href="/Posts/EC" class="list-group-item"><h2 class="list-group-item-heading">Cryptography updates in OCaml and MirageOS</h2><span class="author">Written by hannes</span> <time>2021-04-23</time><br/><p class="list-group-item-text abstract"><p>Elliptic curves (ECDSA/ECDH) are supported in a maintainable and secure way.</p>

114
atom
View file

@@ -1,4 +1,84 @@
<feed xmlns="http://www.w3.org/2005/Atom"><link href="https://hannes.robur.coop/atom" rel="self"/><id>urn:uuid:981361ca-e71d-4997-a52c-baeee78e4156</id><title type="text">full stack engineer</title><updated>2022-11-06T10:56:43-00:00</updated><entry><summary type="text">&lt;p&gt;Re-developing an opam cache from scratch, as a MirageOS unikernel&lt;/p&gt;
<feed xmlns="http://www.w3.org/2005/Atom"><link href="https://hannes.robur.coop/atom" rel="self"/><id>urn:uuid:981361ca-e71d-4997-a52c-baeee78e4156</id><title type="text">full stack engineer</title><updated>2022-11-17T12:41:11-00:00</updated><entry><summary type="text">&lt;p&gt;fleet management for MirageOS unikernels using a mutually authenticated TLS handshake&lt;/p&gt;
</summary><published>2022-11-17T12:41:11-00:00</published><link href="Posts/Albatross" rel="alternate"/><content type="html">&lt;h2&gt;Deploying MirageOS unikernels&lt;/h2&gt;
</content><category scheme="https://hannes.robur.coop/tags/deployment" term="deployment"/><category scheme="https://hannes.robur.coop/tags/mirageos" term="mirageos"/><id>urn:uuid:1f354218-e8c3-5136-a2ca-c88f3c2878d8</id><title type="text">Deploying reproducible unikernels with albatross</title><updated>2022-11-17T12:41:11-00:00</updated><author><name>hannes</name></author></entry><entry><summary type="text">&lt;p&gt;Re-developing an opam cache from scratch, as a MirageOS unikernel&lt;/p&gt;
</summary><published>2022-09-29T13:04:14-00:00</published><link href="Posts/OpamMirror" rel="alternate"/><content type="html">&lt;p&gt;We at &lt;a href=&quot;https://robur.coop&quot;&gt;robur&lt;/a&gt; developed &lt;a href=&quot;https://git.robur.io/robur/opam-mirror&quot;&gt;opam-mirror&lt;/a&gt; in the last month and run a public opam mirror at https://opam.robur.coop (updated hourly).&lt;/p&gt;
&lt;h1&gt;What is opam and why should I care?&lt;/h1&gt;
&lt;p&gt;&lt;a href=&quot;https://opam.ocaml.org&quot;&gt;Opam&lt;/a&gt; is the OCaml package manager (also used by other projects such as &lt;a href=&quot;https://coq.inria.fr&quot;&gt;coq&lt;/a&gt;). It is a source based system: the so-called repository contains the metadata (url to source tarballs, build dependencies, author, homepage, development repository) of all packages. The main repository is hosted on GitHub as &lt;a href=&quot;https://github.com/ocaml/opam-repository&quot;&gt;ocaml/opam-repository&lt;/a&gt;, where authors of OCaml software can contribute (as pull request) their latest releases.&lt;/p&gt;
@@ -1026,34 +1106,4 @@ Once these features are supported, the library should likely be named PKCS since
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;The work on X.509 was sponsored by &lt;a href=&quot;http://ocamllabs.io/&quot;&gt;OCaml Labs&lt;/a&gt;. You can support our work at robur by a &lt;a href=&quot;https://robur.io/Donate&quot;&gt;donation&lt;/a&gt;, which we will use to work on our OCaml and MirageOS projects. You can also reach out to us to realize commercial products.&lt;/p&gt;
&lt;p&gt;I'm interested in feedback, either via &lt;strike&gt;&lt;a href=&quot;https://twitter.com/h4nnes&quot;&gt;twitter&lt;/a&gt;&lt;/strike&gt; &lt;a href=&quot;https://mastodon.social/@hannesm&quot;&gt;hannesm@mastodon.social&lt;/a&gt; or via eMail.&lt;/p&gt;
</content><category scheme="https://hannes.robur.coop/tags/tls" term="tls"/><category scheme="https://hannes.robur.coop/tags/security" term="security"/><category scheme="https://hannes.robur.coop/tags/mirageos" term="mirageos"/><id>urn:uuid:f2cf2a6a-8eef-5c2c-be03-d81a1bf0f066</id><title type="text">X509 0.7</title><updated>2021-11-19T18:04:52-00:00</updated><author><name>hannes</name></author></entry><entry><summary type="text">&lt;p&gt;Bringing MirageOS into production, take IV monitoring, CalDAV, DNS&lt;/p&gt;
</summary><published>2019-07-08T19:29:05-00:00</published><link href="Posts/Summer2019" rel="alternate"/><content type="html">&lt;h2&gt;Working at &lt;a href=&quot;https://robur.io&quot;&gt;robur&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;As announced &lt;a href=&quot;/Posts/DNS&quot;&gt;previously&lt;/a&gt;, I started to work at robur early 2018. We're a collective of five people, distributed around Europe and the US, with the goal to deploy MirageOS unikernels. We do this by developing bespoke MirageOS unikernels which provide useful services, and deploy them for ourselves. We also develop new libraries and enhance existing ones and other components of MirageOS. Example unikernels include &lt;a href=&quot;https://robur.io&quot;&gt;our website&lt;/a&gt; which uses &lt;a href=&quot;https://github.com/Engil/Canopy&quot;&gt;Canopy&lt;/a&gt;, a &lt;a href=&quot;https://robur.io/Our%20Work/Projects#CalDAV-Server&quot;&gt;CalDAV server that stores entries in a git remote&lt;/a&gt;, and &lt;a href=&quot;https://github.com/roburio/unikernels&quot;&gt;DNS servers&lt;/a&gt; (the latter two are further described below).&lt;/p&gt;
&lt;p&gt;Robur is part of the non-profit company &lt;a href=&quot;https://techcultivation.org&quot;&gt;Center for the Cultivation of Technology&lt;/a&gt;, who are managing the legal and administrative sides for us. We're ourselves responsible to acquire funding to pay ourselves reasonable salaries. We received funding for CalDAV from &lt;a href=&quot;https://prototypefund.de&quot;&gt;prototypefund&lt;/a&gt; and further funding from &lt;a href=&quot;https://tarides.com&quot;&gt;Tarides&lt;/a&gt;, for TLS 1.3 from &lt;a href=&quot;http://ocamllabs.io/&quot;&gt;OCaml Labs&lt;/a&gt;; security-audited an OCaml codebase, and received &lt;a href=&quot;https://robur.io/Donate&quot;&gt;donations&lt;/a&gt;, also in the form of Bitcoins. We're looking for further funded collaborations and also contracting, mail us at &lt;code&gt;team@robur.io&lt;/code&gt;. Please &lt;a href=&quot;https://robur.io/Donate&quot;&gt;donate&lt;/a&gt; (tax-deductible in EU), so we can accomplish our goal of putting robust and sustainable MirageOS unikernels into production, replacing insecure legacy system that emit tons of CO&lt;span style=&quot;vertical-align: baseline; position: relative;bottom: -0.4em;&quot;&gt;2&lt;/span&gt;.&lt;/p&gt;
&lt;h2&gt;Deploying MirageOS unikernels&lt;/h2&gt;
&lt;p&gt;While several examples have been running for years (the &lt;a href=&quot;https://mirage.io&quot;&gt;MirageOS website&lt;/a&gt;, &lt;a href=&quot;http://ownme.ipredator.se&quot;&gt;Bitcoin Piñata&lt;/a&gt;, &lt;a href=&quot;https://tls.nqsb.io&quot;&gt;TLS demo server&lt;/a&gt;, etc.), and some shell scripts for cloud providers are floating around, deployment is not (yet) streamlined.&lt;/p&gt;
&lt;p&gt;Service deployment is complex: you have to consider its configuration, exfiltration of logs and metrics, provisioning with valid key material (TLS certificate, hmac shared secret) and authenticators (CA certificate, ssh key fingerprint). Instead of requiring millions of lines of code for orchestration (such as Kubernetes), creating the images (docker), or provisioning (ansible), why not minimise the required configuration and dependencies?&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/Posts/VMM&quot;&gt;Earlier in this blog I introduced Albatross&lt;/a&gt;, which serves in an enhanced version as our deployment platform on a physical machine (running 15 unikernels at the moment), I won't discuss more detail thereof in this article.&lt;/p&gt;
&lt;h2&gt;CalDAV&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://linse.me/&quot;&gt;Steffi&lt;/a&gt; and I developed in 2018 a CalDAV server. Since November 2018 we have a test installation for robur, initially running as a Unix process on a virtual machine and persisting data to files on the disk. Mid-June 2019 we migrated it to a MirageOS unikernel, thanks to great efforts in &lt;a href=&quot;https://github.com/mirage/ocaml-git&quot;&gt;git&lt;/a&gt; and &lt;a href=&quot;https://github.com/mirage/irmin&quot;&gt;irmin&lt;/a&gt;, unikernels can push to a remote git repository. We &lt;a href=&quot;https://github.com/haesbaert/awa-ssh/pull/8&quot;&gt;extended the ssh library&lt;/a&gt; with a ssh client and &lt;a href=&quot;https://github.com/mirage/ocaml-git/pull/362&quot;&gt;use this in git&lt;/a&gt;. This also means our CalDAV server is completely immutable (does not carry state across reboots, apart from the data in the remote repository) and does not have persistent state in the form of a block device. Its configuration is mainly done at compile time by the selection of libraries (syslog, monitoring, ...), and boot arguments passed to the unikernel at startup.&lt;/p&gt;
&lt;p&gt;We monitored the resource usage when migrating our CalDAV server from Unix process to a MirageOS unikernel. The unikernel size is just below 10MB. The workload is some clients communicating with the server on a regular basis. We use &lt;a href=&quot;https://grafana.com/&quot;&gt;Grafana&lt;/a&gt; with a &lt;a href=&quot;https://www.influxdata.com/&quot;&gt;influx&lt;/a&gt; time series database to monitor virtual machines. Data is collected on the host system (&lt;code&gt;rusage&lt;/code&gt; sysctl, &lt;code&gt;kinfo_mem&lt;/code&gt; sysctl, &lt;code&gt;ifdata&lt;/code&gt; sysctl, &lt;code&gt;vm_get_stats&lt;/code&gt; BHyve statistics), and our unikernels these days emit further metrics (mostly counters: gc statistics, malloc statistics, tcp sessions, http requests and status codes).&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/static/img/crobur-june-2019.png&quot;&gt;&lt;img src=&quot;/static/img/crobur-june-2019.png&quot; width=&quot;700&quot; /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Please note that memory usage (upper right) and vm exits (lower right) use logarithmic scale. The CPU usage reduced by more than a factor of 4. The memory usage dropped by a factor of 25, and the network traffic increased - previously we stored log messages on the virtual machine itself, now we send them to a dedicated log host.&lt;/p&gt;
&lt;p&gt;A MirageOS unikernel, apart from a smaller attack surface, indeed uses fewer resources and actually emits less CO&lt;span style=&quot;vertical-align: baseline; position: relative;bottom: -0.4em;&quot;&gt;2&lt;/span&gt; than the same service on a Unix virtual machine. So we're doing something good for the environment! :)&lt;/p&gt;
&lt;p&gt;Our calendar server contains at the moment 63 events, the git repository had around 500 commits in the past month: nearly all of them from the CalDAV server itself when a client modified data via CalDAV, and two manual commits: the initial data imported from the file system, and one commit for fixing a bug of the encoder in our &lt;a href=&quot;https://github.com/roburio/icalendar/pull/2&quot;&gt;icalendar library&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Our CalDAV implementation is very basic, scheduling, adding attendees (which requires sending out eMail), is not supported. But it works well for us, we have individual calendars and a shared one which everyone can write to. On the client side we use macOS and iOS iCalendar, Android DAVdroid, and Thunderbird. If you like to try our CalDAV server, have a look &lt;a href=&quot;https://github.com/roburio/caldav/tree/future/README.md&quot;&gt;at our installation instructions&lt;/a&gt;. Please &lt;a href=&quot;https://github.com/roburio/caldav/issues&quot;&gt;report issues&lt;/a&gt; if you find issues or struggle with the installation.&lt;/p&gt;
&lt;h2&gt;DNS&lt;/h2&gt;
&lt;p&gt;There has been more work on our DNS implementation, now &lt;a href=&quot;https://github.com/mirage/ocaml-dns&quot;&gt;here&lt;/a&gt;. We included a DNS client library, and some &lt;a href=&quot;https://github.com/roburio/unikernels/tree/future&quot;&gt;example unikernels&lt;/a&gt; are available. They as well require our &lt;a href=&quot;https://github.com/roburio/git-ssh-dns-mirage3-repo&quot;&gt;opam repository overlay&lt;/a&gt;. Please report issues if you run into trouble while experimenting with that.&lt;/p&gt;
&lt;p&gt;Most prominent is &lt;code&gt;primary-git&lt;/code&gt;, a unikernel which acts as a primary authoritative DNS server (UDP and TCP). On startup, it fetches a remote git repository that contains zone files and shared hmac secrets. The zones are served, and secondary servers are notified with the respective serial numbers of the zones, authenticated using TSIG with the shared secrets. The primary server provides dynamic in-protocol updates of DNS resource records (&lt;code&gt;nsupdate&lt;/code&gt;), and after successful authentication pushes the change to the remote git. To change the zone, you can just edit the zonefile and push to the git remote - with the proper pre- and post-commit-hooks an authenticated notify is sent to the primary server, which then pulls the git remote.&lt;/p&gt;
&lt;p&gt;Another noteworthy unikernel is &lt;code&gt;letsencrypt&lt;/code&gt;, which acts as a secondary server, and whenever a TLSA record with custom type (0xFF) and a DER-encoded certificate signing request is observed, it requests a signature from letsencrypt by solving the DNS challenge. The certificate is pushed to the DNS server as TLSA record as well. The DNS implementation provides &lt;code&gt;ocertify&lt;/code&gt; and &lt;code&gt;dns-mirage-certify&lt;/code&gt; which use the above mechanism to retrieve valid let's encrypt certificates. The caller (unikernel or Unix command-line utility) either takes a private key directly or generates one from a (provided) seed and generates a certificate signing request. It then looks in DNS for a certificate which is still valid and matches the public key and the hostname. If such a certificate is not present, the certificate signing request is pushed to DNS (via the nsupdate protocol), authenticated using TSIG with a given secret. This way our public facing unikernels (website, this blog, TLS demo server, ..) block until they got a certificate via DNS on startup - we avoid embedding of the certificate into the unikernel image.&lt;/p&gt;
&lt;h2&gt;Monitoring&lt;/h2&gt;
&lt;p&gt;We like to gather statistics about the resource usage of our unikernels to find potential bottlenecks and observe memory leaks ;) The base for the setup is the &lt;a href=&quot;https://github.com/mirage/metrics&quot;&gt;metrics&lt;/a&gt; library, which is similar in design to the &lt;a href=&quot;https://erratique.ch/software/logs&quot;&gt;logs&lt;/a&gt; library: libraries use the core to gather metrics. A different aspect is the reporter, which is globally registered and responsible for exfiltrating the data via their favourite protocol. If no reporter is registered, the work overhead is negligible.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/static/img/crobur-june-2019-unikernel.png&quot;&gt;&lt;img src=&quot;/static/img/crobur-june-2019-unikernel.png&quot; width=&quot;700&quot; /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This is a dashboard which combines both statistics gathered from the host system and various metrics from the MirageOS unikernel. The &lt;code&gt;monitoring&lt;/code&gt; branch of our opam repository overlay is used together with &lt;a href=&quot;https://github.com/hannesm/monitoring-experiments&quot;&gt;monitoring-experiments&lt;/a&gt;. The logs errors counter (middle right) was the icalendar parser which tried to parse its badly emitted ics (the bug is now fixed, the dashboard is from last month).&lt;/p&gt;
&lt;h2&gt;OCaml libraries&lt;/h2&gt;
&lt;p&gt;The &lt;a href=&quot;https://github.com/hannesm/domain-name&quot;&gt;domain-name&lt;/a&gt; library was developed to handle RFC 1035 domain names and host names. It initially was part of the DNS code, but is now freestanding to be used in other core libraries (such as ipaddr) with a small dependency footprint.&lt;/p&gt;
&lt;p&gt;The &lt;a href=&quot;https://github.com/hannesm/gmap&quot;&gt;GADT map&lt;/a&gt; is a normal OCaml Map structure, but takes key-dependent value types by using a GADT. This library also was part of DNS, but is more broadly useful, we already use it in our icalendar (the data format for calendar entries in CalDAV) library, our &lt;a href=&quot;https://git.robur.io/?p=openvpn.git;a=summary&quot;&gt;OpenVPN&lt;/a&gt; configuration parser uses it as well, and also &lt;a href=&quot;https://github.com/mirleft/ocaml-x509/pull/115&quot;&gt;x509&lt;/a&gt; - which got reworked quite a bit recently (release pending), and there's preliminary PKCS12 support (which deserves its own article). &lt;a href=&quot;https://github.com/hannesm/ocaml-tls&quot;&gt;TLS 1.3&lt;/a&gt; is available on a branch, but is not yet merged. More work is underway, hopefully with sufficient time to write more articles about it.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;More projects are happening as we speak, it takes time to upstream all the changes, such as monitoring, new core libraries, getting our DNS implementation released, pushing Conex into production, more features such as DNSSec, ...&lt;/p&gt;
&lt;p&gt;I'm interested in feedback, either via &lt;strike&gt;&lt;a href=&quot;https://twitter.com/h4nnes&quot;&gt;twitter&lt;/a&gt;&lt;/strike&gt; &lt;a href=&quot;https://mastodon.social/@hannesm&quot;&gt;hannesm@mastodon.social&lt;/a&gt; or via eMail.&lt;/p&gt;
</content><category scheme="https://hannes.robur.coop/tags/deployment" term="deployment"/><category scheme="https://hannes.robur.coop/tags/monitoring" term="monitoring"/><category scheme="https://hannes.robur.coop/tags/tls" term="tls"/><category scheme="https://hannes.robur.coop/tags/package%20signing" term="package signing"/><category scheme="https://hannes.robur.coop/tags/security" term="security"/><category scheme="https://hannes.robur.coop/tags/mirageos" term="mirageos"/><id>urn:uuid:fd3a6aa5-a7ba-549a-9d0f-5f05fa6c434e</id><title type="text">Summer 2019</title><updated>2021-11-19T18:04:52-00:00</updated><author><name>hannes</name></author></entry></feed>
</content><category scheme="https://hannes.robur.coop/tags/tls" term="tls"/><category scheme="https://hannes.robur.coop/tags/security" term="security"/><category scheme="https://hannes.robur.coop/tags/mirageos" term="mirageos"/><id>urn:uuid:f2cf2a6a-8eef-5c2c-be03-d81a1bf0f066</id><title type="text">X509 0.7</title><updated>2021-11-19T18:04:52-00:00</updated><author><name>hannes</name></author></entry></feed>

View file

@@ -1,5 +1,6 @@
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml"><head><title>full stack engineer</title><meta charset="UTF-8"/><link rel="stylesheet" href="/static/css/style.css"/><link rel="stylesheet" href="/static/css/highlight.css"/><script src="/static/js/highlight.pack.js"></script><script>hljs.initHighlightingOnLoad();</script><link rel="alternate" href="/atom" title="full stack engineer" type="application/atom+xml"/><meta name="viewport" content="width=device-width, initial-scale=1, viewport-fit=cover"/></head><body><nav class="navbar navbar-default navbar-fixed-top"><div class="container"><div class="navbar-header"><a class="navbar-brand" href="/Posts">full stack engineer</a></div><div class="collapse navbar-collapse collapse"><ul class="nav navbar-nav navbar-right"><li><a href="/About"><span>About</span></a></li><li><a href="/Posts"><span>Posts</span></a></li></ul></div></div></nav><main><div class="flex-container"><div class="flex-container"><div class="list-group listing"><a href="/Posts/OpamMirror" class="list-group-item"><h2 class="list-group-item-heading">Mirroring the opam repository and all tarballs</h2><span class="author">Written by hannes</span> <time>2022-09-29</time><br/><p class="list-group-item-text abstract"><p>Re-developing an opam cache from scratch, as a MirageOS unikernel</p>
<html xmlns="http://www.w3.org/1999/xhtml"><head><title>full stack engineer</title><meta charset="UTF-8"/><link rel="stylesheet" href="/static/css/style.css"/><link rel="stylesheet" href="/static/css/highlight.css"/><script src="/static/js/highlight.pack.js"></script><script>hljs.initHighlightingOnLoad();</script><link rel="alternate" href="/atom" title="full stack engineer" type="application/atom+xml"/><meta name="viewport" content="width=device-width, initial-scale=1, viewport-fit=cover"/></head><body><nav class="navbar navbar-default navbar-fixed-top"><div class="container"><div class="navbar-header"><a class="navbar-brand" href="/Posts">full stack engineer</a></div><div class="collapse navbar-collapse collapse"><ul class="nav navbar-nav navbar-right"><li><a href="/About"><span>About</span></a></li><li><a href="/Posts"><span>Posts</span></a></li></ul></div></div></nav><main><div class="flex-container"><div class="flex-container"><div class="list-group listing"><a href="/Posts/Albatross" class="list-group-item"><h2 class="list-group-item-heading">Deploying reproducible unikernels with albatross</h2><span class="author">Written by hannes</span> <time>2022-11-17</time><br/><p class="list-group-item-text abstract"><p>fleet management for MirageOS unikernels using a mutually authenticated TLS handshake</p>
</p></a><a href="/Posts/OpamMirror" class="list-group-item"><h2 class="list-group-item-heading">Mirroring the opam repository and all tarballs</h2><span class="author">Written by hannes</span> <time>2022-09-29</time><br/><p class="list-group-item-text abstract"><p>Re-developing an opam cache from scratch, as a MirageOS unikernel</p>
</p></a><a href="/Posts/Monitoring" class="list-group-item"><h2 class="list-group-item-heading">All your metrics belong to influx</h2><span class="author">Written by hannes</span> <time>2022-03-08</time><br/><p class="list-group-item-text abstract"><p>How to monitor your MirageOS unikernel with albatross and monitoring-experiments</p>
</p></a><a href="/Posts/Deploy" class="list-group-item"><h2 class="list-group-item-heading">Deploying binary MirageOS unikernels</h2><span class="author">Written by hannes</span> <time>2021-06-30</time><br/><p class="list-group-item-text abstract"><p>Finally, we provide reproducible binary MirageOS unikernels together with packages to reproduce them and setup your own builder</p>
</p></a><a href="/Posts/EC" class="list-group-item"><h2 class="list-group-item-heading">Cryptography updates in OCaml and MirageOS</h2><span class="author">Written by hannes</span> <time>2021-04-23</time><br/><p class="list-group-item-text abstract"><p>Elliptic curves (ECDSA/ECDH) are supported in a maintainable and secure way.</p>

View file

@@ -1,5 +1,6 @@
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml"><head><title>full stack engineer</title><meta charset="UTF-8"/><link rel="stylesheet" href="/static/css/style.css"/><link rel="stylesheet" href="/static/css/highlight.css"/><script src="/static/js/highlight.pack.js"></script><script>hljs.initHighlightingOnLoad();</script><link rel="alternate" href="/atom" title="full stack engineer" type="application/atom+xml"/><meta name="viewport" content="width=device-width, initial-scale=1, viewport-fit=cover"/></head><body><nav class="navbar navbar-default navbar-fixed-top"><div class="container"><div class="navbar-header"><a class="navbar-brand" href="/Posts">full stack engineer</a></div><div class="collapse navbar-collapse collapse"><ul class="nav navbar-nav navbar-right"><li><a href="/About"><span>About</span></a></li><li><a href="/Posts"><span>Posts</span></a></li></ul></div></div></nav><main><div class="flex-container"><div class="flex-container"><div class="list-group listing"><a href="/Posts/OpamMirror" class="list-group-item"><h2 class="list-group-item-heading">Mirroring the opam repository and all tarballs</h2><span class="author">Written by hannes</span> <time>2022-09-29</time><br/><p class="list-group-item-text abstract"><p>Re-developing an opam cache from scratch, as a MirageOS unikernel</p>
<html xmlns="http://www.w3.org/1999/xhtml"><head><title>full stack engineer</title><meta charset="UTF-8"/><link rel="stylesheet" href="/static/css/style.css"/><link rel="stylesheet" href="/static/css/highlight.css"/><script src="/static/js/highlight.pack.js"></script><script>hljs.initHighlightingOnLoad();</script><link rel="alternate" href="/atom" title="full stack engineer" type="application/atom+xml"/><meta name="viewport" content="width=device-width, initial-scale=1, viewport-fit=cover"/></head><body><nav class="navbar navbar-default navbar-fixed-top"><div class="container"><div class="navbar-header"><a class="navbar-brand" href="/Posts">full stack engineer</a></div><div class="collapse navbar-collapse collapse"><ul class="nav navbar-nav navbar-right"><li><a href="/About"><span>About</span></a></li><li><a href="/Posts"><span>Posts</span></a></li></ul></div></div></nav><main><div class="flex-container"><div class="flex-container"><div class="list-group listing"><a href="/Posts/Albatross" class="list-group-item"><h2 class="list-group-item-heading">Deploying reproducible unikernels with albatross</h2><span class="author">Written by hannes</span> <time>2022-11-17</time><br/><p class="list-group-item-text abstract"><p>fleet management for MirageOS unikernels using a mutually authenticated TLS handshake</p>
</p></a><a href="/Posts/OpamMirror" class="list-group-item"><h2 class="list-group-item-heading">Mirroring the opam repository and all tarballs</h2><span class="author">Written by hannes</span> <time>2022-09-29</time><br/><p class="list-group-item-text abstract"><p>Re-developing an opam cache from scratch, as a MirageOS unikernel</p>
</p></a><a href="/Posts/Monitoring" class="list-group-item"><h2 class="list-group-item-heading">All your metrics belong to influx</h2><span class="author">Written by hannes</span> <time>2022-03-08</time><br/><p class="list-group-item-text abstract"><p>How to monitor your MirageOS unikernel with albatross and monitoring-experiments</p>
</p></a><a href="/Posts/Deploy" class="list-group-item"><h2 class="list-group-item-heading">Deploying binary MirageOS unikernels</h2><span class="author">Written by hannes</span> <time>2021-06-30</time><br/><p class="list-group-item-text abstract"><p>Finally, we provide reproducible binary MirageOS unikernels together with packages to reproduce them and setup your own builder</p>
</p></a><a href="/Posts/DnsServer" class="list-group-item"><h2 class="list-group-item-heading">Deploying authoritative OCaml-DNS servers as MirageOS unikernels</h2><span class="author">Written by hannes</span> <time>2019-12-23</time><br/><p class="list-group-item-text abstract"><p>A tutorial how to deploy authoritative name servers, let's encrypt, and updating entries from unix services.</p>

View file

@@ -1,5 +1,6 @@
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml"><head><title>full stack engineer</title><meta charset="UTF-8"/><link rel="stylesheet" href="/static/css/style.css"/><link rel="stylesheet" href="/static/css/highlight.css"/><script src="/static/js/highlight.pack.js"></script><script>hljs.initHighlightingOnLoad();</script><link rel="alternate" href="/atom" title="full stack engineer" type="application/atom+xml"/><meta name="viewport" content="width=device-width, initial-scale=1, viewport-fit=cover"/></head><body><nav class="navbar navbar-default navbar-fixed-top"><div class="container"><div class="navbar-header"><a class="navbar-brand" href="/Posts">full stack engineer</a></div><div class="collapse navbar-collapse collapse"><ul class="nav navbar-nav navbar-right"><li><a href="/About"><span>About</span></a></li><li><a href="/Posts"><span>Posts</span></a></li></ul></div></div></nav><main><div class="flex-container"><div class="flex-container"><div class="list-group listing"><a href="/Posts/OpamMirror" class="list-group-item"><h2 class="list-group-item-heading">Mirroring the opam repository and all tarballs</h2><span class="author">Written by hannes</span> <time>2022-09-29</time><br/><p class="list-group-item-text abstract"><p>Re-developing an opam cache from scratch, as a MirageOS unikernel</p>
<html xmlns="http://www.w3.org/1999/xhtml"><head><title>full stack engineer</title><meta charset="UTF-8"/><link rel="stylesheet" href="/static/css/style.css"/><link rel="stylesheet" href="/static/css/highlight.css"/><script src="/static/js/highlight.pack.js"></script><script>hljs.initHighlightingOnLoad();</script><link rel="alternate" href="/atom" title="full stack engineer" type="application/atom+xml"/><meta name="viewport" content="width=device-width, initial-scale=1, viewport-fit=cover"/></head><body><nav class="navbar navbar-default navbar-fixed-top"><div class="container"><div class="navbar-header"><a class="navbar-brand" href="/Posts">full stack engineer</a></div><div class="collapse navbar-collapse collapse"><ul class="nav navbar-nav navbar-right"><li><a href="/About"><span>About</span></a></li><li><a href="/Posts"><span>Posts</span></a></li></ul></div></div></nav><main><div class="flex-container"><div class="flex-container"><div class="list-group listing"><a href="/Posts/Albatross" class="list-group-item"><h2 class="list-group-item-heading">Deploying reproducible unikernels with albatross</h2><span class="author">Written by hannes</span> <time>2022-11-17</time><br/><p class="list-group-item-text abstract"><p>fleet management for MirageOS unikernels using a mutually authenticated TLS handshake</p>
</p></a><a href="/Posts/OpamMirror" class="list-group-item"><h2 class="list-group-item-heading">Mirroring the opam repository and all tarballs</h2><span class="author">Written by hannes</span> <time>2022-09-29</time><br/><p class="list-group-item-text abstract"><p>Re-developing an opam cache from scratch, as a MirageOS unikernel</p>
</p></a><a href="/Posts/Monitoring" class="list-group-item"><h2 class="list-group-item-heading">All your metrics belong to influx</h2><span class="author">Written by hannes</span> <time>2022-03-08</time><br/><p class="list-group-item-text abstract"><p>How to monitor your MirageOS unikernel with albatross and monitoring-experiments</p>
</p></a><a href="/Posts/Deploy" class="list-group-item"><h2 class="list-group-item-heading">Deploying binary MirageOS unikernels</h2><span class="author">Written by hannes</span> <time>2021-06-30</time><br/><p class="list-group-item-text abstract"><p>Finally, we provide reproducible binary MirageOS unikernels together with packages to reproduce them and setup your own builder</p>
</p></a><a href="/Posts/EC" class="list-group-item"><h2 class="list-group-item-heading">Cryptography updates in OCaml and MirageOS</h2><span class="author">Written by hannes</span> <time>2021-04-23</time><br/><p class="list-group-item-text abstract"><p>Elliptic curves (ECDSA/ECDH) are supported in a maintainable and secure way.</p>