Upcoming release of Yggdrasil v0.5

Hornbeam

This is a translation of a pre-release blog post by one of the developers of the Yggdrasil network, in which he gives an extended explanation of the changes that the network will see after the 0.5 release.

In my opinion, the changes are quite interesting and inspiring, and well worth looking forward to. I have already begun adapting the Android client for them, and test builds are already being tried out by enthusiasts from the Russian-speaking community.

But enough introductory words; I'll hand it over to Arceliar:

Introduction

With version 0.5.0 coming soon, now seems like a good time to explain what we've been working on over the past couple of years. While we're quite happy with version 0.4.X overall, there are a few issues with its design that can cause the network to behave in ways we don't want. The purpose of this blog post is to briefly review how version 0.4 works, explain the problems with that approach, and describe the changes we made in version 0.5 to try to solve them.

Current state

The 0.4.X routing scheme consists of three main components:

  1. A DHT-based routing scheme, used to send traffic when the route to the destination is unknown.

  2. A greedy tree routing scheme, used to route DHT traffic and to "pathfind" for source routing.

  3. A source routing scheme, which encodes the path found in tree space in packet headers, so traffic can follow a more direct route than the one the DHT offers (and keep flowing while the tree state changes).

The life cycle of a connection goes through these three stages in order. During the initial key exchange, the path to the destination is unknown, so traffic is routed through the DHT. When a node receives traffic that arrived via the DHT, it initiates a path search to find a more efficient route through tree space. When the path search completes, it switches to source routing, encapsulating the DHT-routed packet inside a source-routed one. If a source-routed packet ever hits a dead end, the source routing header is stripped and the packet falls back to DHT routing. Receiving such a fallback DHT-routed packet triggers a new path search in the background.
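
As a rough sketch of this life cycle in Go (all types and helper names here are invented for illustration, not the actual yggdrasil-go internals):

    package main

    // Hypothetical types for illustration; not the real yggdrasil-go internals.
    type NodeKey string
    type Packet struct{ Dest NodeKey }
    type Path []uint64 // a route encoded in tree-space coordinates

    var pathCache = map[NodeKey]Path{}

    // send uses a cached source route when one exists; otherwise it falls
    // back to DHT routing and starts a path search in the background.
    func send(pkt Packet) {
        if path, ok := pathCache[pkt.Dest]; ok {
            sendSourceRouted(path, pkt) // known path: encode it in the header
            return
        }
        sendViaDHT(pkt)              // no known path: let the DHT route it
        go startPathSearch(pkt.Dest) // meanwhile, look for a direct route
    }

    // onDeadEnd handles a source-routed packet whose encoded path no longer
    // works: forget the path, strip the header, and fall back to the DHT.
    func onDeadEnd(pkt Packet) {
        delete(pathCache, pkt.Dest)
        sendViaDHT(pkt) // the receiver will trigger a fresh path search
    }

    func sendSourceRouted(path Path, pkt Packet) { /* elided */ }
    func sendViaDHT(pkt Packet)                  { /* elided */ }
    func startPathSearch(dest NodeKey)           { /* elided */ }

    func main() { send(Packet{Dest: "example"}) }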

Overall, this design works well. Nodes can start communicating (in our case, sending key exchange traffic) before needing to look up any routes, and everything falls back gracefully. No additional traffic is required to detect broken paths, because the DHT fallback takes care of signaling path loss.

Problems

While I have no complaints about the overall design of version 0.4, each of the individual components has issues.

First of all, the DHT design used in version 0.4 does not scale as well as we had hoped. Nodes need to keep track not only of paths to their keyspace neighbors, but also of any such paths that pass through them. This means some nodes end up in a position where they must remember a large percentage of all the paths in the network, which results in high memory costs and potentially high traffic costs. DHT traffic on the 0.4 network is relatively low because the DHT is predominantly hard state, but all of our attempts to design a more secure DHT have resulted in soft-state designs where traffic costs can become significant. Without such protection, the DHT remains vulnerable to some attacks (and behaves poorly in the presence of misconfigured nodes, such as accidental anycast nodes). A more insidious problem is DHT convergence time: in the worst case, convergence requires O(n) steps, and we have good reason to believe this happens in some typical use cases. Additionally, the hard-state design requires active monitoring of every peer connection to quickly detect when a link goes down. This results in far more idle traffic between nodes than we would like.

Second, the tree can produce conflicting views of the network, depending on which neighbor's information a node pays attention to. This results in "flapping": when a non-parent link breaks, nodes tend to switch to a new parent that was itself relying on the same (now dead) link but has not yet had time to announce the failure. The node then switches back to its original parent once the alternative ends up reporting the same failure. This oscillation causes nodes further down the tree (all of the children) to oscillate as well, which can cascade across the network. Version 0.4 has mechanisms to limit the rate of such changes, but that is a workaround for the underlying design problem, not a fix.

Finally, source routing is fine in principle, but the packet format we used for it is not. It is too easy for an attacker to insert redundant hops to create a (terminating) loop, which wastes bandwidth across the targeted set of nodes.

Changes

Many changes were made to Yggdrasil's design to combat the above problems. The new approaches are not necessarily how we want the network to function in the long term, but rather are alternatives that we would like to test to better explore the space of possible solutions. Generally speaking, they are not user-facing, except for some changes to the information available in the yggdrasilctl API.

Pathfinding

The most significant change is the removal of the DHT-based routing scheme that was used to initially set up routes through tree space. We now use a simpler YggIP/key->coords lookup protocol, similar to ARP/NDP lookups on an Ethernet broadcast network (but without broadcasting traffic across the entire network). Nodes keep track of which nodes are reachable over each tree link (i.e. the parent and child nodes), along with a Bloom filter of the keys of all nodes reachable via that link (with keys truncated to their /64 prefix to allow IP/prefix lookups). A lookup packet received on one tree link is forwarded to every other tree link whose filter contains the destination.
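
To give a flavor of how these per-link filters might work, here is a minimal sketch in Go. The parameters (8192 bits, 8 hash functions) come from the numbers quoted later in this post; everything else, including the hashing scheme and all names, is an assumption for illustration rather than the actual yggdrasil-go implementation:

    package main

    import (
        "crypto/sha256"
        "encoding/binary"
    )

    const (
        filterBits = 8192 // filter size, as quoted in this post
        numHashes  = 8    // number of hash functions, as quoted in this post
    )

    type Bloom [filterBits / 8]byte

    // indexes derives 8 bit positions from a hash of the key's /64 prefix
    // (truncating to the prefix is what allows whole-subnet lookups).
    func indexes(keyPrefix [8]byte) [numHashes]uint16 {
        sum := sha256.Sum256(keyPrefix[:])
        var out [numHashes]uint16
        for i := range out {
            out[i] = binary.BigEndian.Uint16(sum[2*i:]) % filterBits
        }
        return out
    }

    // Add records a key prefix as reachable over this link.
    func (b *Bloom) Add(keyPrefix [8]byte) {
        for _, idx := range indexes(keyPrefix) {
            b[idx/8] |= 1 << (idx % 8)
        }
    }

    // MayContain reports whether the prefix might be reachable over this
    // link: "no" is definitive, "yes" may be a false positive.
    func (b *Bloom) MayContain(keyPrefix [8]byte) bool {
        for _, idx := range indexes(keyPrefix) {
            if b[idx/8]&(1<<(idx%8)) == 0 {
                return false
            }
        }
        return true
    }

    // forwardLookup relays a lookup out of every tree link whose filter
    // may contain the target, except the link it arrived on.
    func forwardLookup(target [8]byte, arrived int, links []*Bloom) (out []int) {
        for i, filter := range links {
            if i != arrived && filter.MayContain(target) {
                out = append(out, i)
            }
        }
        return out
    }

    func main() {}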

Despite the shortcomings of this approach, it has a number of advantages. First, accidental anycast configurations (the same key used on multiple nodes) do not break any network-wide data structures; they simply result in lookup traffic arriving at more than one node. Subsequent steps (route lookup, key exchange, etc.) will typically fail, but without causing collateral damage to the rest of the network. Second, it requires very little idle maintenance traffic and only a constant amount of state per node. This means that nodes at the core of the network are not responsible for maintaining state for anything beyond their immediate surroundings, and are not overwhelmed by idle DHT traffic originating from remote nodes. Likewise, nodes at the edge of the network do not need to send regular DHT keepalive traffic, which should help with traffic usage and power consumption on mobile devices. Third, this structure converges asynchronously, in time proportional to the depth of the tree, rather than sequentially in time proportional to the size of the network, so the DHT's very poor worst-case convergence time is avoided.

The main disadvantage of this approach is that Bloom filters can and will produce false positives as they fill up. In practice, we expect the filters at the core of the network to become saturated, since nearly every node is reachable through the core. This in turn means that a node's route toward the "core" of the network (usually through its parent) takes on the role of a "default route" and receives a copy of every lookup the node sends. We expect lookup traffic to travel to the core of the network, effectively act as broadcast traffic within the core, and then be filtered out by the Bloom filters as it approaches the edges (so a node at the edge of the network is unlikely to receive any traffic not intended for it). In short, nodes in the core will use less memory and less idle traffic, but active network use will consume more traffic. It remains to be seen whether this is a worthwhile trade-off.

To put some numbers on this: we use 8192-bit Bloom filters with 8 hash functions. If a node acts as a gateway to a subnet of 200 nodes, the false positive rate is about 1 in a million (that is, we expect the network to need about a million nodes before the gateway sees any false-positive lookup traffic). Even for a gateway to a 500-node subnet in a million-node network, most of the lookup traffic reaching the gateway would still be legitimate.
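
Those figures match the standard Bloom filter false-positive approximation for m bits, k hash functions, and n inserted keys:

    p \approx \left(1 - e^{-kn/m}\right)^k
      = \left(1 - e^{-8 \cdot 200 / 8192}\right)^8 \approx 1 \times 10^{-6}

That is, roughly one in every million lookups for unrelated destinations would falsely match the gateway's filter, consistent with the numbers above.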

Thus, in practice, most nodes should not see any significant number of false positives unless they act as a gateway to a very large subnet (or the network grows many orders of magnitude larger than the current 0.4 network). In our current network, a few nodes may end up in the "core" region, where they receive false-positive traffic from most lookups. We hope this is still preferable to the old design's constant idle DHT traffic and potentially very high memory requirements.

CRDT-tree

Previously, each node's location in the tree was verified by a chain of signatures (links to parent nodes) from the node back to the root. This could lead to inconsistencies when different nodes held mutually incompatible views of the same ancestry (for example, node A says parent P has grandparent G, while node B says the same parent P has grandparent G'), which complicated parent selection in response to changes in network state. To solve this, we split the tree information into separate per-link records, which are exchanged between nodes and merged into a CRDT structure. This forces nodes to have a locally consistent view of the network, which prevents unnecessary "jitter" in some cases where a node's route to the root is disrupted. It also reduces the amount of information that must be sent over the network, since a node does not need to send information back to a peer that it knows has already seen it.
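
A minimal sketch of the idea, assuming a last-writer-wins merge keyed by node (the real structure and wire format are surely more involved; all names here are hypothetical):

    package main

    // Hypothetical per-node tree announcement; not the real wire format.
    type TreeInfo struct {
        Seq    uint64 // monotonic sequence number from the announcing node
        Parent string // the neighbor this node currently uses as its parent
        Root   string // the root this node believes in
    }

    // TreeState merges announcements CRDT-style: the merge is commutative,
    // associative, and idempotent, so peers converge on the same view no
    // matter the order in which updates arrive.
    type TreeState map[string]TreeInfo

    // Merge applies an update and reports whether it was news; if not,
    // there is no need to forward it back to the peer it came from.
    func (s TreeState) Merge(node string, info TreeInfo) (changed bool) {
        if old, ok := s[node]; ok && old.Seq >= info.Seq {
            return false // already seen: nothing to propagate
        }
        s[node] = info
        return true
    }

    func main() {
        a, b := TreeState{}, TreeState{}
        u1 := TreeInfo{Seq: 1, Parent: "P", Root: "R"}
        u2 := TreeInfo{Seq: 2, Parent: "Q", Root: "R"}
        a.Merge("N", u1) // updates applied in order...
        a.Merge("N", u2)
        b.Merge("N", u2) // ...or out of order...
        b.Merge("N", u1)
        _ = a["N"] == b["N"] // ...converge to the same state (Seq 2)
    }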

Greedy Routing

Source routing from version 0.4 has been removed in favor of greedy routing, as was done in version 0.3.X. In a stable network, this does not affect the route packets take, only how that route is chosen. We may return to a source routing approach in the future, but the approach used in version 0.4 had issues that need to be addressed first. Source routing is a good performance optimization if it can be done safely, but it is not an explicit goal of this design. Although I have ideas on how to do it, it is not a short-term priority. Since a source routing scheme would presumably still rely on greedy routing to find the path, I think it is useful for this release to focus on stress testing the greedy routing part of the network and leave source routing for when other parts of the stack are closer to a stable state.
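
For context, greedy routing on a tree means always handing the packet to whichever neighbor is strictly closer to the destination in tree space. A minimal sketch, using the usual tree metric (up to the closest common ancestor, then down); the coordinate representation is illustrative:

    package main

    // Coords locate a node in the tree as the path from the root to it.
    type Coords []uint64

    // treeDist is the tree metric: hops up to the closest common ancestor,
    // then hops down, i.e. len(a) + len(b) - 2*(common prefix length).
    func treeDist(a, b Coords) int {
        common := 0
        for common < len(a) && common < len(b) && a[common] == b[common] {
            common++
        }
        return len(a) + len(b) - 2*common
    }

    // nextHop picks the neighbor strictly closer to dest than we are;
    // if no neighbor qualifies, the packet has hit a dead end.
    func nextHop(self Coords, neighbors []Coords, dest Coords) (best int, ok bool) {
        bestDist := treeDist(self, dest)
        for i, n := range neighbors {
            if d := treeDist(n, dest); d < bestDist {
                bestDist, best, ok = d, i, true
            }
        }
        return best, ok
    }

    func main() {}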

Per-peer keep-alives removed

We no longer spam keep-alive packets over peer links every few seconds. Instead, when we send traffic, we require an acknowledgment within a few seconds (unless the traffic we sent was itself an acknowledgment). This means we don't detect link failures as quickly when idle (we have to wait for user or protocol traffic to exercise the connection), but it should reduce idle traffic consumption (and likely power consumption on mobile devices). Note that this is separate from TCP's own keep-alive mechanism, which remains enabled.
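
The effect is roughly the sketch below: a dead-link timer is armed only when unacknowledged traffic is in flight, so a fully idle link generates no packets at all (names and the exact timeout are illustrative, not the real implementation):

    package main

    import "time"

    const ackTimeout = 3 * time.Second // "within a few seconds"

    // Link is a hypothetical peer-link wrapper, for illustration only.
    type Link struct {
        ackTimer *time.Timer
        onDead   func() // called to tear down the peering
    }

    // sendPacket arms the timer for anything that expects an acknowledgment.
    // Acks themselves don't demand acks, or the exchange would never end.
    func (l *Link) sendPacket(isAck bool) {
        if isAck || l.ackTimer != nil {
            return
        }
        l.ackTimer = time.AfterFunc(ackTimeout, l.onDead)
    }

    // receiveAck disarms the timer: the peer proved it is still alive.
    func (l *Link) receiveAck() {
        if l.ackTimer != nil {
            l.ackTimer.Stop()
            l.ackTimer = nil
        }
    }

    func main() {
        l := &Link{onDead: func() { /* drop the link */ }}
        l.sendPacket(false) // non-ack traffic: expect an ack within ackTimeout
        l.receiveAck()      // ack arrived in time: the link stays up
    }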

New features

Version 0.5 also adds several new features. It is now possible to restrict peering using a ?password=X argument on Peers and Listen entries (and in the multicast configuration). This requires nodes to agree on a password before they can peer. Note that this does not provide network isolation: nodes can still communicate with the rest of the network if they want to, and access is still transitive. It simply makes it easy to limit who can automatically connect within a subnet, or to run a public peer without accepting connections from everyone who can find it. There is also support for quic:// connections. QUIC peering uses only a single traffic stream, so its semantics are much the same as TCP/TLS peering, but it may be useful where UDP packets traverse a NAT or firewall more easily. We generally expect it to perform worse than TCP/TLS, so we don't recommend using it where it isn't needed.
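
For example, a peer list might look like this (the addresses, port, and password are placeholders; the ?password= argument and the quic:// scheme are the new parts):

    Peers: [
      tls://192.0.2.1:12345?password=example-secret
      quic://[2001:db8::1]:12345?password=example-secret
    ]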

Conclusion

Barring unforeseen delays, Yggdrasil v0.5 should be released within the next few weeks. We hope that we have addressed the most important stability and scaling issues of version 0.4, and have significantly reduced memory use and idle traffic consumption for some nodes. Some aspects of the new design are radically different from version 0.4, so it remains to be seen how well these changes work in the real world. Preliminary tests (and plenty of simulations) make us optimistic that version 0.5 will give us a stable foundation to build on as we explore the limitations of this new approach and work toward the inevitable redesign for version 0.6.
