updated from main (commit 1f46f74964)

Canopy bot 2023-11-29 12:58:21 +00:00
parent 1a8e2df853
commit c5c0de33cb
2 changed files with 8 additions and 8 deletions

@@ -6,7 +6,7 @@
<p>In 2012 I attended ICFP in Copenhagen - while I was a PhD student at ITU Copenhagen - and there <a href="https://www.cl.cam.ac.uk/~pes20/">Peter Sewell</a> gave an invited talk &quot;Tales from the jungle&quot; about rigorous methods for real-world infrastructure (C semantics, hardware (concurrency) behaviour of CPUs, TCP/IP, and likely more). Working on formal specifications myself (<a href="https://en.itu.dk/-/media/EN/Research/PhD-Programme/PhD-defences/2013/130731-Hannes-Mehnert-PhD-dissertation-finalpdf.pdf">my dissertation</a>), and having a strong interest in real systems, I was immediately hooked by his perspective.</p>
<p>To dive a bit deeper: <a href="https://www.cl.cam.ac.uk/~pes20/Netsem/">network semantics</a> - the work done on TCP by Peter Sewell et al. - is a formal specification (or a model) of TCP/IP and the Unix sockets API, developed in HOL4. It is a labelled transition system with non-deterministic choices, and the model itself is executable. It has been validated against the real world by collecting thousands of traces on Linux, Windows, and FreeBSD, which were then checked for validity by the model - this copes with the differing implementations of the English prose of the RFCs. The network semantics research found several issues in existing TCP stacks and reported them upstream to have them fixed (though there is still some special treatment, e.g. for the &quot;BSD listen bug&quot;).</p>
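<p>To make the flavour of such an executable, non-deterministic model a bit more concrete, here is a tiny OCaml sketch (all names and types are invented for illustration; the actual specification is written in HOL4 and is vastly more detailed): the step relation returns every successor state the specification allows, and a recorded trace is considered valid if some sequence of choices explains it.</p>
<pre><code>(* Toy illustration of the idea behind the Netsem specification: a
   labelled transition system whose step relation is non-deterministic,
   so checking a recorded trace means searching for some sequence of
   choices that explains it. All names and types here are invented. *)

type label =
  | Segment_in of string   (* a TCP segment arriving on the wire *)
  | Segment_out of string  (* a segment emitted by the host *)
  | Sockets_call of string (* a Unix sockets API invocation *)

type host = { state : string }  (* stand-in for the full host state *)

(* The step relation: given a host state and a label, return every
   successor state the specification allows (possibly none). *)
let step (h : host) (l : label) : host list =
  match l with
  | Segment_in _ -> [ h; { state = "delayed" } ]  (* e.g. process now or queue *)
  | Segment_out _ -> [ h ]
  | Sockets_call _ -> [ h ]

(* A trace is valid if some path through the non-deterministic choices
   accepts every label in order. *)
let rec valid_trace (h : host) (trace : label list) : bool =
  match trace with
  | [] -> true
  | l :: rest -> List.exists (fun h' -> valid_trace h' rest) (step h l)
</code></pre>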
<p>In 2014 I joined Peter's research group in Cambridge to continue their work on the model: updating to more recent versions of HOL4 and PolyML, revising the test system to use DTrace, updating to a more recent FreeBSD network stack (from FreeBSD 4.6 to FreeBSD 10), and finally getting the <a href="https://dl.acm.org/doi/10.1145/3243650">journal paper</a> (<a href="http://www.cl.cam.ac.uk/~pes20/Netsem/paper3.pdf">author's copy</a>) published. At the same time the <a href="https://mirage.io">MirageOS</a> melting pot was happening at the University of Cambridge, where I contributed OCaml-TLS and other libraries together with David.</p>
<p>My intention was to understand TCP better, and to use the specification as the basis for a TCP stack for MirageOS - the <a href="https://github.com/mirage/mirage-tcpip">existing one</a> (which is still used) has technical debt: a high ratio of issues to lines of code, the Lwt monad is ubiquitous (which makes testing and debugging pretty hard), utilizing multiple cores with OCaml multicore won't be easy, it has various resource leaks, and there is no active maintainer. But honestly, it works fine on a local network and with well-behaved traffic. It doesn't work that well on the wild Internet, with its variety of broken implementations. Apart from the resource leakage, which made me implement features such as restart-on-failure in <a href="https://github.com/robur-coop/albatross">Albatross</a>, there are certain connection states which will never be exited.</p>
<h1 id="the-rise-of-µtcp">The rise of <a href="https://github.com/robur-coop/utcp">µTCP</a></h1>
<p>Back in Cambridge I didn't manage to write a TCP stack based on the model, but in 2019 I restarted that work and got µTCP (the formal model manually translated to OCaml) to compile and to perform TCP session setup and teardown. Since the model uses non-determinism, it couldn't be translated one-to-one into an executable program: there are places where decisions have to be made. Due to other projects, I worked on µTCP only briefly in 2021 and 2022, but in the summer of 2023 I finally motivated myself to push µTCP into a usable state - so far I have spent 25 days on µTCP in 2023. Thanks to <a href="https://tarides.com">Tarides</a> for supporting my work.</p>
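<p>To illustrate what &quot;decisions have to be made&quot; means: wherever the specification permits several successor states, an executable stack has to commit to exactly one of them. A rough sketch of that split - with invented names, not the actual µTCP code - is to pair a specification-style step (returning all allowed outcomes) with an explicit policy that picks one:</p>
<pre><code>(* Sketch: turning a non-deterministic specification step into a
   deterministic implementation step by adding an explicit policy.
   All names here are invented for illustration. *)

(* The specification allows several outcomes... *)
let spec_step (state : int) (segment : string) : int list =
  ignore segment;
  [ state; state + 1 ]  (* e.g. "process now" or "delay processing" *)

(* ...the implementation must pick exactly one. The policy is where
   engineering judgement (RFC advice, behaviour of deployed stacks,
   performance considerations) enters the picture. *)
let impl_step (policy : int list -> int) (state : int) (segment : string) : int =
  policy (spec_step state segment)

(* One possible policy: always take the first allowed outcome. *)
let eager : int list -> int = function
  | [] -> failwith "specification allows no transition"
  | x :: _ -> x
</code></pre>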
<p>Since late August we have been running some unikernels using µTCP, e.g. the <a href="https://retreat.mirage.io">retreat</a> website. This allows us to observe µTCP and to find and solve issues that occur in the real world. It turned out that the model is not always correct (e.g. in the model there is no retransmit timer in the close-wait state, which prevents proper session teardown). We report statistics about how many TCP connections are in which state to an InfluxDB time series database and view graphs rendered by Grafana. If there are connections that are stuck for multiple hours, this indicates a resource leak that should be addressed. Grafana was tremendously helpful for finding out where to look for resource leaks. Still, there's work needed to understand the behaviour: look at what the model does, what µTCP does, what the RFC says, and eventually what existing deployed TCP stacks do.</p>
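<p>The statistics themselves are simple: count how many connections are in each TCP state and ship those counters to the time series database. The sketch below uses invented, simplified types and an influx-like line format rather than the actual µTCP connection table or our monitoring code:</p>
<pre><code>(* Sketch of the kind of statistics we graph: how many connections are
   in each TCP state. Types and the output format are simplified
   placeholders, not the actual µTCP types or our metrics pipeline. *)

type tcp_state =
  | Syn_sent | Syn_received | Established
  | Fin_wait_1 | Fin_wait_2 | Close_wait | Closing | Last_ack | Time_wait

let state_name = function
  | Syn_sent -> "syn_sent" | Syn_received -> "syn_received"
  | Established -> "established" | Fin_wait_1 -> "fin_wait_1"
  | Fin_wait_2 -> "fin_wait_2" | Close_wait -> "close_wait"
  | Closing -> "closing" | Last_ack -> "last_ack" | Time_wait -> "time_wait"

(* Count connections per state; [connections] stands in for whatever the
   stack's connection table yields. *)
let count_states (connections : tcp_state list) : (string * int) list =
  List.fold_left
    (fun acc st ->
       let name = state_name st in
       let cur = match List.assoc_opt name acc with None -> 0 | Some c -> c in
       (name, cur + 1) :: List.remove_assoc name acc)
    [] connections

(* Render one line in an influx-like "measurement field=value" style; in
   the unikernel this would go through a metrics library instead. *)
let to_line counts =
  "tcp_states " ^
  String.concat "," (List.map (fun (n, c) -> n ^ "=" ^ string_of_int c) counts)
</code></pre>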
@@ -15,9 +15,9 @@
<p><a href="/static/img/a.ns.mtcp.png"><img src="/static/img/a.ns.mtcp.png" width="750" /></a></p>
<p>Now, after switching over to µTCP, graphed below, network utilization is much lower and the memory limit is only reached after 36 hours, which is a great result. Still, it is not very satisfying that the unikernel leaks memory. Both graphs contain on their left side a few hours of mirage-tcpip; shortly after 20:00 on Nov 23rd µTCP was deployed.</p>
<p><a href="/static/img/a.ns.mtcp-utcp.png"><img src="/static/img/a.ns.mtcp-utcp.png" width="750" /></a></p>
<p>Investigating the involved parts showed that an unestablished TCP connection had been registered at the MirageOS layer, but the pure core did not expose an event, derived from the received RST, signalling that the connection had been cancelled. This means the MirageOS layer piles up all the connection attempts and never informs the application that the connection couldn't be established. Note that the MirageOS layer is not code derived from the formal model, but boilerplate for (a) effectful side-effects (IO) and (b) meeting the needs of the <a href="https://github.com/mirage/mirage-tcpip/blob/v8.0.0/src/core/tcp.ml">TCP.S</a> module type (so that µTCP can be used as a drop-in replacement for mirage-tcpip). Once this was well understood, developing the <a href="https://github.com/robur-coop/utcp/commit/67fc49468e6b75b96a481ebe44dd11ce4bb76e6c">required code changes</a> was straightforward. The graph shows that the fix was deployed at 15:25. The memory usage is constant afterwards, but the network utilization increased enormously.</p>
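<p>The shape of the fix is for the pure core to hand back an explicit event when a reset aborts a connection attempt, so that the effectful layer can wake up the caller blocked in <code>connect</code>. A rough sketch of that shape - with invented names and types, not the actual µTCP interface, and assuming the Lwt-based MirageOS layer - could look like this:</p>
<pre><code>(* Sketch (invented names, not the actual µTCP interface): the pure core
   reports events alongside the new state and the segments to send, and a
   received RST for a connection in SYN_SENT now yields a
   [Connection_failed] event that wakes up the blocked [connect]. *)

type event =
  | Established of int        (* connection id *)
  | Connection_failed of int  (* e.g. RST received while in SYN_SENT *)

(* Waiters registered by [connect], keyed by connection id. *)
let waiters : (int, (unit, string) result Lwt.u) Hashtbl.t = Hashtbl.create 7

(* [connect] registers a waiter and returns the thread the application
   blocks on; the real signature in mirage-tcpip's TCP.S differs. *)
let connect (id : int) : (unit, string) result Lwt.t =
  let th, u = Lwt.wait () in
  Hashtbl.replace waiters id u;
  th

let handle_event = function
  | Established id ->
    (match Hashtbl.find_opt waiters id with
     | Some u -> Hashtbl.remove waiters id; Lwt.wakeup_later u (Ok ())
     | None -> ())
  | Connection_failed id ->
    (* Before the fix, nothing woke the waiter here, so connection
       attempts piled up in the MirageOS layer forever. *)
    (match Hashtbl.find_opt waiters id with
     | Some u ->
       Hashtbl.remove waiters id;
       Lwt.wakeup_later u (Error "connection reset")
     | None -> ())
</code></pre>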
<p><a href="/static/img/a.ns.utcp-ev.png"><img src="/static/img/a.ns.utcp-ev.png" width="750" /></a></p>
<p>Now, the network utilization is unwanted. This was previously hidden by the application waiting forever for the TCP connection to be established. Our bugfix uncovered another issue, a tight loop:</p>
<ul>
<li>the nameserver attempts to connect to the other nameserver (<code>request</code>);
</li>

atom

@@ -1,4 +1,4 @@
<feed xmlns="http://www.w3.org/2005/Atom"><link href="https://hannes.robur.coop/atom" rel="self"/><id>urn:uuid:981361ca-e71d-4997-a52c-baeee78e4156</id><title type="text">full stack engineer</title><updated>2023-11-29T12:57:48-00:00</updated><entry><summary type="html">&lt;p&gt;Core Internet protocols require operational experiments, even if formally specified&lt;/p&gt;
</summary><published>2023-11-28T21:17:01-00:00</published><link href="/Posts/TCP-ns" rel="alternate"/><content type="html">&lt;p&gt;The &lt;a href=&quot;https://en.wikipedia.org/wiki/Transmission_Control_Protocol&quot;&gt;Transmission Control Protocol (TCP)&lt;/a&gt; is one of the main Internet protocols. Usually spoken on top of the Internet Protocol (legacy version 4 or version 6), it provides a reliable, ordered, and error-checked stream of octets. When an application uses TCP, it gets these properties for free (in contrast to UDP).&lt;/p&gt;
&lt;p&gt;As is common for Internet protocols, TCP is specified in a series of so-called requests for comments (RFCs). The latest revised version, from August 2022, is &lt;a href=&quot;https://datatracker.ietf.org/doc/html/rfc9293&quot;&gt;RFC 9293&lt;/a&gt;; the initial one was &lt;a href=&quot;https://datatracker.ietf.org/doc/html/rfc793&quot;&gt;RFC 793&lt;/a&gt; from September 1981.&lt;/p&gt;
&lt;h1 id=&quot;my-brief-personal-tcp-story&quot;&gt;My brief personal TCP story&lt;/h1&gt;
@@ -35,7 +35,7 @@
&lt;p&gt;We'll take some more time to investigate issues of µTCP in production, plan to write further documentation and blog posts, and hope to be ready for an initial public release soon. In the meantime, you can follow our development repository.&lt;/p&gt;
&lt;p&gt;We at &lt;a href=&quot;https://robur.coop&quot;&gt;robur&lt;/a&gt; have been working as a collective since 2018, on public funding, commercial contracts, and donations. Our mission is to get sustainable, robust and secure MirageOS unikernels developed and deployed. Running your own digital communication infrastructure should be easy, including trustworthy binaries and smooth upgrades. You can help us continue our work by &lt;a href=&quot;https://aenderwerk.de/donate/&quot;&gt;donating&lt;/a&gt; (select robur from the drop-down, or put &amp;quot;donation robur&amp;quot; in the purpose of the bank transfer).&lt;/p&gt;
&lt;p&gt;If you have any questions, the best way to reach us is via email to team AT robur DOT coop.&lt;/p&gt;
</content><category scheme="https://hannes.robur.coop/tags/tcp" term="tcp"/><category scheme="https://hannes.robur.coop/tags/protocol" term="protocol"/><category scheme="https://hannes.robur.coop/tags/mirageos" term="mirageos"/><id>urn:uuid:96688956-0808-5d44-b795-1d64cbb4f947</id><title type="text">Re-developing TCP from the ground up</title><updated>2023-11-29T12:57:48-00:00</updated><author><name>hannes</name></author></entry><entry><summary type="html">&lt;p&gt;fleet management for MirageOS unikernels using a mutually authenticated TLS handshake&lt;/p&gt;
</summary><published>2022-11-17T12:41:11-00:00</published><link href="/Posts/Albatross" rel="alternate"/><content type="html">&lt;p&gt;EDIT (2023-05-16): Updated with albatross release version 2.0.0.&lt;/p&gt;
&lt;h2 id=&quot;deploying-mirageos-unikernels&quot;&gt;Deploying MirageOS unikernels&lt;/h2&gt;
&lt;p&gt;More than five years ago, I posted &lt;a href=&quot;/Posts/VMM&quot;&gt;how to deploy MirageOS unikernels&lt;/a&gt;. My motivation to work on this topic is that I'm convinced of the reduced complexity, improved security, and more sustainable resource footprint of MirageOS unikernels, and I want to ease their deployment. More than one year ago, I described &lt;a href=&quot;/Posts/Deploy&quot;&gt;how to deploy reproducible unikernels&lt;/a&gt;.&lt;/p&gt;