updated from main (commit 1237862ec4)

Commit 7f78f017cf (parent 378f91b585). 2 changed files with 12 additions and 12 deletions.
Posts/TCP-ns (10 changed lines)

@@ -12,11 +12,11 @@
<p>Since late August we have been running some unikernels using µTCP, e.g. the <a href="https://retreat.mirage.io">retreat</a> website. This allows us to observe µTCP and to find and solve issues that occur in the real world. It turned out that the model is not always correct (e.g. in the model there is no retransmit timer in the close-wait state, which prevents proper session teardowns). We report statistics about how many TCP connections are in which state to an InfluxDB time series database and view graphs rendered by Grafana. Connections that are stuck for multiple hours indicate a resource leak that should be addressed. Grafana was tremendously helpful for finding out where to look for resource leaks. Still, there is work left to understand the behaviour: look at what the model does, what µTCP does, what the RFC says, and eventually what existing deployed TCP stacks do.</p>
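To make the state accounting concrete, here is a minimal OCaml sketch of counting connections per TCP state before handing the numbers to a metrics reporter; the <code>tcp_state</code> type and <code>count_states</code> are illustrative assumptions, not the actual µTCP API.

```ocaml
(* A minimal sketch, assuming a hypothetical list of connection states;
   none of these names are the actual µTCP API. *)
type tcp_state =
  | Syn_sent | Syn_received | Established
  | Fin_wait_1 | Fin_wait_2 | Close_wait
  | Closing | Last_ack | Time_wait

let state_name = function
  | Syn_sent -> "syn_sent" | Syn_received -> "syn_received"
  | Established -> "established" | Fin_wait_1 -> "fin_wait_1"
  | Fin_wait_2 -> "fin_wait_2" | Close_wait -> "close_wait"
  | Closing -> "closing" | Last_ack -> "last_ack"
  | Time_wait -> "time_wait"

(* Count how many connections are in each state; the result is what we
   would report to the time series database. *)
let count_states (connections : tcp_state list) : (string * int) list =
  List.fold_left
    (fun acc st ->
       let key = state_name st in
       let n = match List.assoc_opt key acc with Some n -> n | None -> 0 in
       (key, n + 1) :: List.remove_assoc key acc)
    [] connections
```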
<h1 id="the-secondary-nameserver-issue">The secondary nameserver issue</h1>
|
<h1 id="the-secondary-nameserver-issue">The secondary nameserver issue</h1>
|
||||||
<p>One of our secondary nameservers attempts to receive zones (via AXFR using TCP) from another nameserver that is currently not running; the remote host thus replies to each SYN packet with a corresponding RST. Below, I graphed our nameserver's network utilization (sent data/packets on the positive y-axis, received on the negative) over time (x-axis) on the left, and its memory usage (bytes on the y-axis) over time (x-axis) on the right. You can observe that both increase over time, and that roughly every 3 hours the unikernel hits its configured memory limit (64 MB), crashes with out of memory, and is restarted. The graph below uses the mirage-tcpip stack.</p>
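For illustration: when no listener is present, the SYN is answered with an RST, which a plain Unix sockets program observes as <code>ECONNREFUSED</code>. A tiny sketch (the loopback address and port 4433 are placeholders, not our actual setup):

```ocaml
(* Tiny illustration, using the unix library: connecting to a port with
   no listener makes the peer answer the SYN with an RST, surfaced as
   ECONNREFUSED. Loopback and port 4433 are placeholders. *)
let () =
  let open Unix in
  let s = socket PF_INET SOCK_STREAM 0 in
  (try
     connect s (ADDR_INET (inet_addr_loopback, 4433));
     print_endline "connected (an AXFR request would start here)"
   with Unix_error (ECONNREFUSED, _, _) ->
     print_endline "SYN answered with RST: connection refused");
  close s
```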
<p><img src="/static/img/a.ns.mtcp.png" alt="" /></p>
|
<p><a href="/static/img/a.ns.mtcp.png"><img src="/static/img/a.ns.mtcp.png" width="350" /></a></p>
|
||||||
<p>Now, after switching over to µTCP, graphed below, there is much less network utilization, and the memory limit is only reached after 36 hours, which is a great result. Still, it is not very satisfying that the unikernel leaks memory. Both graphs contain a few hours of mirage-tcpip on their left side; shortly after 20:00 on Nov 23rd, µTCP was deployed.</p>
<p><img src="/static/img/a.ns.mtcp-utcp.png" alt="" /></p>
|
<p><a href="/static/img/a.ns.mtcp-utcp.png"><img src="/static/img/a.ns.mtcp-utcp.png" width="350" /></a></p>
|
||||||
<p>Investigating the involved parts showed that a TCP connection that was never established had been registered at the MirageOS layer, but the pure core did not expose, on the received RST, an event that the connection had been cancelled. This means the MirageOS layer piles up all the connection attempts and never informs the application that the connection could not be established. Once this was well understood, developing the <a href="https://github.com/robur-coop/utcp/commit/67fc49468e6b75b96a481ebe44dd11ce4bb76e6c">required code changes</a> was straightforward. The graph shows that the fix was deployed at 15:25: the memory usage is constant afterwards, but the network utilization increased enormously.</p>
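As a sketch of the bookkeeping involved (all names here are hypothetical, not the actual µTCP or MirageOS interface): the effectful layer keeps one waiter per connection attempt, and the pure core must surface a drop event when the RST arrives, so that the waiter is woken with an error instead of lingering forever.

```ocaml
(* A minimal sketch with hypothetical names; the real µTCP/MirageOS
   interface differs. Each connect registers a waiter; without a `Drop
   event from the pure core, entries for RST-ed connections pile up. *)
type flow_id = int

let pending : (flow_id, (unit, string) result Lwt.u) Hashtbl.t =
  Hashtbl.create 16

let connect id =
  let th, u = Lwt.task () in
  Hashtbl.replace pending id u;
  th  (* resolved once the core reports Established or Drop *)

(* Called for each event the pure core hands back to the effectful layer. *)
let handle_event = function
  | `Established id ->
    (match Hashtbl.find_opt pending id with
     | Some u -> Hashtbl.remove pending id; Lwt.wakeup u (Ok ())
     | None -> ())
  | `Drop id ->  (* e.g. an RST for a connection that never established *)
    (match Hashtbl.find_opt pending id with
     | Some u ->
       Hashtbl.remove pending id;
       Lwt.wakeup u (Error "connection reset")
     | None -> ())
```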
<p><img src="/static/img/a.ns.utcp-utcp.png" alt="" /></p>
|
<p><a href="/static/img/a.ns.utcp-utcp.png"><img src="/static/img/a.ns.utcp-utcp.png" width="350" /></a></p>
|
||||||
<p>Now the network utilization is unwanted. Previously it was hidden by the application waiting forever for the TCP connection to be established. Our bugfix uncovered another issue, a tight loop:</p>
<ul>
<li>the nameserver attempts to connect to the other nameserver (<code>request</code>);
@@ -27,9 +27,9 @@
</li>
</ul>
<p>This is unnecessary, since the DNS server code has a timer to attempt to connect to the remote nameserver periodically (taking a break between attempts). After understanding this behaviour, we worked on <a href="https://github.com/mirage/ocaml-dns/pull/347">the fix</a> and re-deployed the nameserver. The graph shows the tight loop at its left edge (so you have a comparison); at 16:05 we deployed the fix, and since then it looks pretty smooth, both in memory usage and in network utilization.</p>
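The pattern is simply retry-with-pause instead of retry-immediately; a minimal Lwt sketch, where <code>connect_to_primary</code> and the 5-second pause are illustrative assumptions rather than the actual ocaml-dns code:

```ocaml
(* A minimal sketch of retrying with a pause instead of a tight loop.
   `connect_to_primary` and the 5-second pause are illustrative; the
   actual ocaml-dns code and its timing differ. *)
open Lwt.Infix

let rec keep_connected connect_to_primary =
  connect_to_primary () >>= function
  | Ok flow -> Lwt.return flow
  | Error _ ->
    (* Take a break before the next attempt, instead of immediately
       sending another SYN that will again be answered with an RST. *)
    Lwt_unix.sleep 5.0 >>= fun () ->
    keep_connected connect_to_primary
```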
<p><img src="/static/img/a.ns.utcp-fixed.png" alt="" /></p>
|
<p><a href="/static/img/a.ns.utcp-fixed.png"><img src="/static/img/a.ns.utcp-fixed.png" width="350" /></a></p>
|
||||||
<p>To give you the entire picture, below is the graph where you can spot the mirage-tcpip stack (lots of network traffic, restarting every 3 hours), µTCP without informing the application (which ran for 3 × ~36 hours), the DNS server's high network utilization (which only lasted for a brief period, thus it is more a point in the graph), and finally the unikernel with both fixes applied.</p>
<p><img src="/static/img/a.ns.all.png" alt="" /></p>
|
<p><a href="/static/img/a.ns.all.png"><img src="/static/img/a.ns.all.png" width="350" /></a></p>
|
||||||
<h1 id="conclusion">Conclusion</h1>
|
<h1 id="conclusion">Conclusion</h1>
|
||||||
<p>What can we learn from that? Choosing convenient tooling is crucial for effective debugging. Also, fixing one issue may uncover other issues. And of course, mirage-tcpip was running with the DNS server that had the tight reconnect loop. But the bottom line: should such an application lead to memory leaks? I don't think so. My view is that all core network libraries should avoid resource leaks with any kind of application on top of them. When a TCP connection returns an error (and is thus destroyed), the TCP stack should hold no more resources for that connection.</p>
<p>We'll take some more time to investigate issues of µTCP in production, plan to write further documentation and blog posts, and hopefully will soon be ready for an initial public release. In the meantime, you can follow our development repository.</p>
atom (14 changed lines)

@@ -1,4 +1,4 @@
-<feed xmlns="http://www.w3.org/2005/Atom"><link href="https://hannes.robur.coop/atom" rel="self"/><id>urn:uuid:981361ca-e71d-4997-a52c-baeee78e4156</id><title type="text">full stack engineer</title><updated>2023-11-28T21:21:16-00:00</updated><entry><summary type="html"><p>Core Internet protocols require operational experiments, even if formally specified</p>
+<feed xmlns="http://www.w3.org/2005/Atom"><link href="https://hannes.robur.coop/atom" rel="self"/><id>urn:uuid:981361ca-e71d-4997-a52c-baeee78e4156</id><title type="text">full stack engineer</title><updated>2023-11-28T21:23:37-00:00</updated><entry><summary type="html"><p>Core Internet protocols require operational experiments, even if formally specified</p>
</summary><published>2023-11-28T21:17:01-00:00</published><link href="/Posts/TCP-ns" rel="alternate"/><content type="html"><p>The <a href="https://en.wikipedia.org/wiki/Transmission_Control_Protocol">Transmission Control Protocol (TCP)</a> is one of the main Internet protocols. Usually spoken on top of the Internet Protocol (legacy version 4 or version 6), it provides a reliable, ordered, and error-checked stream of octets. When an application uses TCP, it gets these properties for free (in contrast to UDP).</p>
<p>As is common for Internet protocols, TCP is specified in a series of so-called requests for comments (RFC); the latest revised version, from August 2022, is <a href="https://datatracker.ietf.org/doc/html/rfc9293">RFC 9293</a>, while the initial one was <a href="https://datatracker.ietf.org/doc/html/rfc793">RFC 793</a> from September 1981.</p>
<h1 id="my-brief-personal-tcp-story">My brief personal TCP story</h1>
|
<h1 id="my-brief-personal-tcp-story">My brief personal TCP story</h1>
|
||||||
@@ -12,11 +12,11 @@
(same context and image-link changes as in the Posts/TCP-ns hunk above)
@@ -27,15 +27,15 @@
(same context and image-link changes as in the Posts/TCP-ns hunks above, followed by:)
<p>We at <a href="https://robur.coop">robur</a> have been working as a collective since 2018, funded by public grants, commercial contracts, and donations. Our mission is to get sustainable, robust, and secure MirageOS unikernels developed and deployed. Running your own digital communication infrastructure should be easy, including trustworthy binaries and smooth upgrades. You can help us continue our work by <a href="https://aenderwerk.de/donate/">donating</a> (select robur from the drop-down, or put &quot;donation robur&quot; in the purpose of the bank transfer).</p>
<p>If you have any questions, the best way to reach us is via email to team AT robur DOT coop.</p>
-</content><category scheme="https://hannes.robur.coop/tags/tcp" term="tcp"/><category scheme="https://hannes.robur.coop/tags/protocol" term="protocol"/><category scheme="https://hannes.robur.coop/tags/mirageos" term="mirageos"/><id>urn:uuid:96688956-0808-5d44-b795-1d64cbb4f947</id><title type="text">Re-developing TCP from the grounds up</title><updated>2023-11-28T21:21:16-00:00</updated><author><name>hannes</name></author></entry><entry><summary type="html"><p>fleet management for MirageOS unikernels using a mutually authenticated TLS handshake</p>
+</content><category scheme="https://hannes.robur.coop/tags/tcp" term="tcp"/><category scheme="https://hannes.robur.coop/tags/protocol" term="protocol"/><category scheme="https://hannes.robur.coop/tags/mirageos" term="mirageos"/><id>urn:uuid:96688956-0808-5d44-b795-1d64cbb4f947</id><title type="text">Re-developing TCP from the grounds up</title><updated>2023-11-28T21:23:37-00:00</updated><author><name>hannes</name></author></entry><entry><summary type="html"><p>fleet management for MirageOS unikernels using a mutually authenticated TLS handshake</p>
</summary><published>2022-11-17T12:41:11-00:00</published><link href="/Posts/Albatross" rel="alternate"/><content type="html"><p>EDIT (2023-05-16): Updated with albatross release version 2.0.0.</p>
<h2 id="deploying-mirageos-unikernels">Deploying MirageOS unikernels</h2>
|
<h2 id="deploying-mirageos-unikernels">Deploying MirageOS unikernels</h2>
|
||||||
<p>More than five years ago, I posted <a href="/Posts/VMM">how to deploy MirageOS unikernels</a>. My motivation to work on this topic is that I'm convinced of the reduced complexity, improved security, and more sustainable resource footprint of MirageOS unikernels, and I want to ease their deployment. More than one year ago, I described <a href="/Posts/Deploy">how to deploy reproducible unikernels</a>.</p>