This repository has been archived by the owner on Feb 1, 2023. It is now read-only.

PoC of Bitswap protocol extensions implementation #189

Merged 1 commit into master on Jan 30, 2020

Conversation

@dirkmc (Contributor) commented Aug 29, 2019

This is a proof of concept of the Bitswap protocol extensions outlined in #186.

TODO:

  • When requesting blocks, send a single optimistic want-block and send want-haves to all other peers in the session
  • Remove queue ordering from SessionPotentialManager - just send wants as soon as we get them
  • If there is a timeout for a want-have or want-block, assume DONT_HAVE and (possibly) move the peer to "unresponsive list" in the session
  • Add a debounce function to the message queue (see the sketch after this list) that takes two parameters:
    • debounce time
      wait at least this interval after the last invocation before calling the target function
    • max time
      if we're still debouncing after this amount of time, don't wait any more, call the target function anyway
  • Either remove commented out debugging code or put it behind a flag
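
A minimal sketch of such a debouncer in Go (type and function names are illustrative, not the message queue's actual API): the target function runs once calls go quiet for the debounce interval, but never later than the max time after the first call in a burst.

package debounce

import (
	"sync"
	"time"
)

// Debouncer calls fn after `debounce` of quiet, but at most `max` after
// the first call of a burst.
type Debouncer struct {
	debounce time.Duration
	max      time.Duration
	fn       func()

	mu       sync.Mutex
	timer    *time.Timer
	deadline time.Time
}

func New(debounce, max time.Duration, fn func()) *Debouncer {
	return &Debouncer{debounce: debounce, max: max, fn: fn}
}

// Call schedules (or reschedules) the target function.
func (d *Debouncer) Call() {
	d.mu.Lock()
	defer d.mu.Unlock()

	now := time.Now()
	if d.timer == nil {
		d.deadline = now.Add(d.max) // start of a new burst: fix the hard deadline
	} else {
		d.timer.Stop() // still debouncing: push the timer back
	}

	wait := d.debounce
	if remaining := d.deadline.Sub(now); remaining < wait {
		wait = remaining // but never wait past the max time
	}
	d.timer = time.AfterFunc(wait, func() {
		d.mu.Lock()
		d.timer = nil
		d.mu.Unlock()
		d.fn()
	})
}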

@dirkmc (Contributor, Author) commented Oct 1, 2019

Currently the proof-of-concept has some contention because of the way sessions with wants are matched with peers.

This image from a recent presentation about Bitswap Improvements demonstrates the functionality we would like to implement:
[Screenshot: slide from the Bitswap Improvements presentation]

Currently we have a PeerBroker that knows which peers have availability in their request queues.

  • when a session is ready to send wants, it signals the PeerBroker
  • the PeerBroker gives the session the list of available peers and asks it for a want to send to one of the peers
  • the session compares the list of all available peers to the peers the session is interested in, and chooses the want / peer combination that will give the highest potential gain

The problem with this approach is that the PeerBroker must repeatedly query the session in real time, and the session must perform a complicated algorithm to find the best want / peer match. The algorithm must take into account which wants have already been sent to the peer (by any session) and ignore those wants.

Ideally the session would be able to perform the matching algorithm asynchronously so as not to block the PeerBroker.

@dirkmc (Contributor, Author) commented Oct 2, 2019

This proposal eliminates the PeerBroker. Instead, the sessions themselves directly query each PeerManager for the peers that the session is interested in (the PeerManager knows how much free space the peer has in its request queue).

When a session becomes interested in a peer, the session registers with the PeerManager for the peer. When the peer's availability changes, the PeerManager signals each registered session. The session requests tokens from the PeerManager until either

  • the session has no more wants to send to the peer
  • the peer has no more available slots in its queue

The session keeps an ordered list of wants. Each want has a "potential gain" for each peer, depending on how confident the local node is that the peer has the block. The wants are ordered by maximum potential gain, then by FIFO. The "sent potential" for a want is the sum of the potential gains of the want / peer combinations that have already been sent.

For example, consider a scenario in which CID1 has a potential gain of:

  • Peer A: 0.8
  • Peer B: 0.5
  • Peer C: 0.2

Initially CID1 has

  • Sent potential of 0
  • Maximum potential gain of 0.8 (Peer A: 0.8 is the largest potential gain)

The local node sends WANT CID1 to Peer A.
Now CID1 will have

  • Sent potential of 0.8 (Peer A: 0.8)
  • Maximum potential gain of 0.5 (Peer B: 0.5 is the largest potential gain)

The order changes when:

  • A peer becomes available / unavailable (low frequency)
    Filter unavailable peers from the sort calculation (a column in the diagram below)
  • The threshold value changes (frequent)
    Filter the wants with a sent potential above the threshold (a row in the diagram below)
  • Want potential changes for a want / peer (frequent)
    When the peer sends a HAVE / DONT_HAVE message for the want, the want potential for the peer changes.
    Update the want potential for the want (a cell in the diagram below)
  • A block is received
    Remove the want (a row in the diagram below)
  • A want is sent
    Remove the want potential for the want / peer and add it to the total sent potential for the want
                                             Want Potential
  Sent potential  Want CID  Max Ptcl   Peer A  Peer B  Peer C  Peer D
  --------------  --------  --------   ------  ------  ------  ------
       0.2          CID3    A/B: 0.8     0.8     0.8    0.5     Sent
       0.2          CID1    B:   0.5     0.2     0.5    Sent    -0.8
       0.4          CID2    C:   0.5    -0.8     0.2    0.5     Sent

In practice this sort can be a little fuzzy; it's not necessary for it to make exactly the right choice.
When the session requests to send a want to the peer, the PeerManager can reject the request (if the want has already been sent to the peer).
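
A rough sketch of that ordering in Go (the struct and function names are assumptions, not the PoC's actual code): filter out wants whose sent potential already meets the threshold, ignore unavailable peers when computing the maximum potential gain, and sort by that gain with FIFO order as the tie-breaker.

package wantorder

import "sort"

type Want struct {
	CID           string
	SentPotential float64
	PeerPotential map[string]float64 // potential gain per peer
	Seq           int                // FIFO sequence number
}

// maxPotential is the largest potential gain among currently available peers.
func maxPotential(w Want, available map[string]bool) float64 {
	max := 0.0
	for p, gain := range w.PeerPotential {
		if available[p] && gain > max {
			max = gain
		}
	}
	return max
}

// OrderWants drops wants whose sent potential already meets the threshold
// and sorts the rest by max potential gain, then FIFO.
func OrderWants(wants []Want, available map[string]bool, threshold float64) []Want {
	var pending []Want
	for _, w := range wants {
		if w.SentPotential < threshold {
			pending = append(pending, w)
		}
	}
	sort.SliceStable(pending, func(i, j int) bool {
		mi, mj := maxPotential(pending[i], available), maxPotential(pending[j], available)
		if mi != mj {
			return mi > mj
		}
		return pending[i].Seq < pending[j].Seq
	})
	return pending
}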

The interfaces are:

Session:
  SignalAvailability(peer, isAvailable)
  UpdateThreshold(threshold)
  UpdateBlockPresence(peer, isPresent)
  BlockReceived(cid)

PeerManager:
  RegisterSession(session)
  RequestToken() bool
  // Return value indicates whether want can be sent to peer
  // (it may already have been sent)
  SendWant(cid) bool
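
A minimal Go rendering of those interfaces plus the token-request loop from the bullet points above (placeholder types and names; the PoC's real code uses cid.Cid and peer.ID and its exact signatures may differ):

package wantsched

type Cid string
type PeerID string

// Session is notified by the PeerManager about peer and block-presence changes.
type Session interface {
	SignalAvailability(p PeerID, isAvailable bool)
	UpdateThreshold(threshold float64)
	UpdateBlockPresence(p PeerID, isPresent bool)
	BlockReceived(c Cid)
}

// PeerManager tracks one peer's request queue and hands out send slots.
type PeerManager interface {
	RegisterSession(s Session)
	// RequestToken reports whether the peer has a free slot in its queue.
	RequestToken() bool
	// SendWant reports whether the want could be sent to the peer
	// (it may already have been sent).
	SendWant(c Cid) bool
}

// drainWants sends wants to a peer until the session runs out of wants
// or the peer runs out of queue slots, as described in the proposal.
func drainWants(pm PeerManager, nextWant func() (Cid, bool)) {
	for pm.RequestToken() {
		c, ok := nextWant()
		if !ok {
			return // no more wants to send to this peer
		}
		pm.SendWant(c)
	}
}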

@dirkmc (Contributor, Author) commented Oct 21, 2019

Benchmark Comparison: master vs proof-of-concept

All benchmarks assume a fixed network delay of 10ms. Code: benchmarks_test.go

3Nodes-AllToAll-OneAtATime

Fetch from two seed nodes that both have all 100 blocks.
Request one block at a time, in series.

               duplicates
master: 2.42s  30 / 130 (23%)
poc:    2.49s   5 / 105 (5%)

This test fetches a block, then another block, then another, and so on until all 100 blocks have been fetched. The time taken is about the same, but the proof-of-concept fetches fewer duplicates.

3Nodes-AllToAll-BigBatch

Fetch from two seed nodes that both have all 100 blocks.
Request all 100 blocks at the same time with a single call to Session.GetBlocks()

               duplicates
master: 0.096s  39 / 139 (28%)
poc:    0.058s  35 / 135 (26%)

The proof-of-concept branch has less restrictive rate-limiting, so it fetches faster.

3Nodes-Overlap1-OneAtATime

Fetch from two seed nodes, one at a time, where:

  • node A has blocks 0 - 74
  • node B has blocks 25 - 99

               duplicates
master: 2.83s  0 / 100
poc:    2.65s  0 / 100

  • The session will broadcast the CID of the first block to all peers.
  • Only node A will respond (node B does not have the block for CID 0).
  • The session will ask node A for blocks up to CID 74.
  • The session will ask node A for the block for CID 75, but node A doesn't have it, and the session doesn't know of any other nodes.
  • The session will get a timeout and broadcast a want for CID 75.
  • Node B will respond (it has the block with CID 75).
  • The session will get the remaining 25 blocks from Node B.

The only difference here is that on the proof-of-concept branch, the remote peer responds with DONT_HAVE if it doesn't have a block. The session sees that all peers (just peer A in this case) have responded with DONT_HAVE for CID 75 so it immediately broadcasts.
On master there is no DONT_HAVE message so the session instead waits for a timeout before broadcasting.
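
A rough sketch of that check (names are assumptions, not the PoC's code): track DONT_HAVE responses per want and broadcast as soon as every peer in the session has answered DONT_HAVE.

package wantstate

type Cid string
type PeerID string

type wantInfo struct {
	dontHave map[PeerID]bool // peers that answered DONT_HAVE for this want
}

// onDontHave records a DONT_HAVE response and returns true when the want
// should be broadcast because no session peer has the block.
func onDontHave(w *wantInfo, from PeerID, sessionPeers []PeerID) bool {
	if w.dontHave == nil {
		w.dontHave = make(map[PeerID]bool)
	}
	w.dontHave[from] = true
	for _, p := range sessionPeers {
		if !w.dontHave[p] {
			return false // still waiting on at least one peer
		}
	}
	return true // all peers said DONT_HAVE: broadcast the want
}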

3Nodes-Overlap3

The Overlap3 benchmarks fetch from two seed nodes, where:

  • node A has even blocks
  • node B has odd blocks
  • both nodes have every third block

3Nodes-Overlap3-OneAtATime

Request 100 blocks, one at a time, in series.

               duplicates
master: 2.23s  34 / 134 (25%)
poc:    2.50s  34 / 134 (25%)

This benchmark tests that the session can retrieve blocks at a consistent rate from inconsistently populated seeds.

3Nodes-Overlap3-BatchBy10

Request 100 blocks, 10 at a time, in series.

               duplicates
master: 0.842s  32 / 132 (24%)
poc:    0.277s  31 / 131 (24%)

The proof-of-concept branch sends HAVE and DONT_HAVE messages so that the requesting node can quickly determine where the desired blocks are, so overall it runs faster.

3Nodes-Overlap3-AllConcurrent

Request all 100 blocks in parallel as individual Session.GetBlock() calls.

               duplicates
master: 0.725s  24 / 124 (19%)
poc:    0.055s  13 / 113 (12%)

As above, the proof-of-concept branch can quickly determine block distribution and has less restrictive rate-limiting, so it's faster and there are fewer duplicates.

3Nodes-Overlap3-BigBatch

Request all 100 blocks at once with a single Session.GetBlocks() call.

               duplicates
master: 0.713s  24 / 124 (19%)
poc:    0.056s  13 / 113 (12%)

Same reasons as above.

3Nodes-Overlap3-UnixfsFetch

Similar to how IPFS requests blocks in a DAG: request 1, then 10, then 89 blocks.

               duplicates
master: 0.336s  33 / 133 (25%)
poc:    0.079s  34 / 134 (25%)

Same reasons as above.

10Nodes-AllToAll-OneAtATime

Request 100 blocks, one by one, from 9 seeds that each have all of the blocks.

               duplicates
master: 2.322s  66 / 166 (40%)
poc:    2.304s  13 / 113 (12%)

In this case all seeds have all the blocks so there is no advantage to the proof-of-concept using HAVE / DONT_HAVE messages.

10Nodes-AllToAll-BatchFetchBy10

Request 100 blocks, 10 at a time, from 9 seeds that each have all of the blocks.

               duplicates
master: 0.251s  32 / 132 (24%)
poc:    0.265s  8 / 108 (7%)

Same as above.

10Nodes-AllToAll-BigBatch

Request all 100 blocks with a single Session.GetBlocks() call.

               duplicates
master: 0.093s  32 / 132 (24%)
poc:    0.050s  14 / 114 (12%)

The proof-of-concept has less restrictive rate-limiting so it can fetch faster.

10Nodes-AllToAll-AllConcurrent

Request all 100 blocks in parallel as individual Session.GetBlock() calls.

               duplicates
master: 0.090s  34 / 132 (25%)
poc:    0.050s  21 / 121 (17%)

The proof-of-concept has less restrictive rate-limiting so it can fetch faster.

10Nodes-AllToAll-UnixfsFetch

Similar to how IPFS requests blocks in a DAG: request 1, then 10, then 89 blocks.

               duplicates
master: 0.112s  93 / 193 (48%)
poc:    0.072s  18 / 118 (15%)

The proof-of-concept has less restrictive rate-limiting and better duplicate block management.

10Nodes-AllToAll-UnixfsFetchLarge

Similar to how IPFS requests blocks in a DAG: request 1, then fetch 10 at a time up to 1000.

               duplicates
master: 0.900s  557 / 1557 (36%)
poc:    0.507s  18 / 1018 (2%)

The proof-of-concept has less restrictive rate-limiting and better duplicate block management.

10Nodes-OnePeerPerBlock

The 10Nodes-OnePeerPerBlock benchmarks fetch from 9 seed nodes, where 100 blocks are distributed randomly across the seeds (no duplicates)

10Nodes-OnePeerPerBlock-OneAtATime

Request 100 blocks, one by one, where blocks are randomly distributed across seeds.

               duplicates
master: 6.708s  0 / 100
poc:    4.035s  0 / 100

  1. The session broadcasts a request for the first block
  2. Peer X responds with the block
  3. The session asks Peer X for the second block
  4. Peer X does not respond
  5. The session times out
  6. Repeat step 1

On the proof-of-concept branch, in step 4 the remote peer immediately responds with DONT_HAVE, so the requesting node can immediately broadcast instead of waiting for a timeout.

10Nodes-OnePeerPerBlock-BigBatch

Request all 100 blocks with a single Session.GetBlocks() call, where blocks are randomly distributed across seeds.

               duplicates
master: 1.349s  0 / 100
poc:    0.070s  0 / 100

The proof-of-concept branch has less restrictive rate-limiting and can quickly determine where the blocks are with HAVE / DONT_HAVE messages.

10Nodes-OnePeerPerBlock-UnixfsFetch

Similar to how IPFS requests blocks in a DAG: request 1, then 10, then 89 blocks, where blocks are randomly distributed across seeds.

               duplicates
master: 1.309s  0 / 100
poc:    0.186s  0 / 100

Same reasons as above.

200Nodes-AllToAll-BigBatch

Fetch from 199 seed nodes that all have all the blocks; fetch all 20 blocks with a single Session.GetBlocks() call.

               duplicates
master: 0.043s  792 / 812 (98%)
poc:    0.048s  198 / 218 (90%)

@dirkmc (Contributor, Author) commented Nov 4, 2019

Previously in the proof-of-concept engine we tried to pull data from the request queue up to a maximum size (in this case 1MiB).
@Stebalien suggested changing the strategy to target a minimum amount of data, and accept an overflow, so that in the case where there's a small amount of control data (HAVE / DONT_HAVE messages) followed by a block, the block will still be included.
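
A sketch of that strategy (the constant and names are illustrative, not the engine's actual values): keep taking tasks from the request queue until the message reaches a minimum target size, letting the last task push it over.

package engine

const minMessageSize = 1 << 20 // illustrative 1MiB target

type task struct {
	size int // bytes this task contributes to the outgoing message
}

// popTasks takes tasks until the accumulated size reaches the minimum
// target; the last task may overflow the target.
func popTasks(queue []task) (taken []task, rest []task) {
	total := 0
	for i, t := range queue {
		taken = append(taken, t)
		total += t.size
		if total >= minMessageSize {
			return taken, queue[i+1:]
		}
	}
	return taken, nil
}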

I benchmarked this approach on EC2 and it appears to take about the same amount of time but produce a lot less duplicate data:

1 leech / 4 seeds

previous poc
------------

Total time: 1 second 1 millisecond
Total time: 1 second 5 milliseconds
Total time: 1 second 9 milliseconds
Total time: 985 milliseconds

| BLOCKSRECV | BLOCKSSENT | DUPBLOCKS | DATARECV | DATASENT | DUPDATA |
|   4,803    |   28,199   |    274    |  1.2 GB  |  7.2 GB  |  61 MB  |
|   4,811    |   30,354   |    273    |  1.2 GB  |  7.8 GB  |  61 MB  |
|   4,803    |   32,127   |    269    |  1.2 GB  |  8.2 GB  |  61 MB  |
|   4,803    |   33,851   |    273    |  1.2 GB  |  8.7 GB  |  61 MB  |

current poc
-----------

Total time: 1 second 238 milliseconds
Total time: 1 second 265 milliseconds
Total time: 1 second 125 milliseconds
Total time: 992 milliseconds

| BLOCKSRECV | BLOCKSSENT | DUPBLOCKS | DATARECV | DATASENT | DUPDATA |
|   4,803    |   8,009    |    274    |  1.2 GB  |  2.0 GB  |  61 MB  |
|   4,803    |   9,052    |    274    |  1.2 GB  |  2.3 GB  |  61 MB  |
|   4,811    |   10,550   |    274    |  1.2 GB  |  2.7 GB  |  61 MB  |
|   4,803    |   12,628   |    273    |  1.2 GB  |  3.2 GB  |  61 MB  |

9 leeches / 1 seed

previous poc
------------

Total time: 1 second 854 milliseconds
Total time: 1 second 874 milliseconds
Total time: 1 second 770 milliseconds

| BLOCKSRECV | BLOCKSSENT | DUPBLOCKS | DATARECV | DATASENT | DUPDATA |
|   18,142   |   18,289   |   5,414   |  4.6 GB  |  4.6 GB  | 1.3 GB  |
|   18,862   |   18,924   |   5,933   |  4.8 GB  |  4.8 GB  | 1.4 GB  |
|   17,158   |   17,254   |   5,255   |  4.3 GB  |  4.4 GB  | 1.3 GB  |

current poc
-----------

Total time: 1 second 502 milliseconds
Total time: 1 second 461 milliseconds
Total time: 1 second 641 milliseconds
Total time: 1 second 591 milliseconds

| BLOCKSRECV | BLOCKSSENT | DUPBLOCKS | DATARECV | DATASENT | DUPDATA |
|   14,372   |   14,383   |   2,823   |  3.6 GB  |  3.6 GB  | 613 MB  |
|   14,240   |   14,250   |   2,686   |  3.5 GB  |  3.6 GB  | 570 MB  |
|   14,945   |   14,964   |   3,311   |  3.7 GB  |  3.7 GB  | 741 MB  |
|   14,427   |   14,448   |   2,798   |  3.6 GB  |  3.6 GB  | 606 MB  |

@Stebalien (Member):

Hypothesis: Because we pack fewer blocks into a single message (as long as we include enough data to get over the minimum size), we've reduced the latency to the first block (because we no longer need to read a large message to get at the first block).

Suggestion: Consider making our timeouts variable (or just longer). This suggests that we're not waiting long enough.

@dirkmc (Contributor, Author) commented Nov 5, 2019

Right, I think the reduced latency to the first block makes sense. In any case it's demonstrably better, so let's keep this change :)

With regards to the timeouts, I'm not sure which ones you mean?

@hannahhoward (Contributor):

Just popping by to say the results above are really awesome! Nice work @dirkmc

@Stebalien (Member):

With regards to the timeouts, I'm not sure which ones you mean?

I assume we still have timeouts such that, if we don't receive a block from a peer within a period of time, we ask another peer for that block. Higher block latencies might cause us to trip over this timeout.

Alternatively, it could be that we're now able to send a cancel fast enough to actually cancel these duplicate blocks. If this is the case, we might want to tune down how likely we are to ask for duplicate blocks.

@dirkmc (Contributor, Author) commented Nov 6, 2019

Ah I see.

So the only timeout we have now is for when we don't get a response from all peers in the session: if that times out then we broadcast (to all peers we're connected to).

Currently I've implemented requests for a CID such that

  • want-block is sent optimistically to at most 2 peers
  • want-have is sent to all other peers in the session

In addition, because we have better knowledge of the distribution of blocks we can

  • send want-blocks more accurately (to peers that are likely to have the block)
  • aggressively pare back the number of optimistic want-blocks

So in practice we shouldn't be getting much duplicate data once we've started receiving responses and can get a sense of block distribution.
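
An illustrative sketch of that splitting logic (names and the likelihood score are assumptions, not the PoC's code): for a single CID, pick up to two peers most likely to have the block for want-block and send want-have to the rest of the session's peers.

package wantsplit

import "sort"

type PeerID string

const maxOptimisticWantBlocks = 2

// splitWants returns which peers get want-block and which get want-have
// for a single CID, given a likelihood score per peer (e.g. derived from
// HAVE / DONT_HAVE responses).
func splitWants(peers []PeerID, likelihood map[PeerID]float64) (wantBlock, wantHave []PeerID) {
	sorted := append([]PeerID(nil), peers...)
	sort.SliceStable(sorted, func(i, j int) bool {
		return likelihood[sorted[i]] > likelihood[sorted[j]]
	})
	for i, p := range sorted {
		if i < maxOptimisticWantBlocks {
			wantBlock = append(wantBlock, p)
		} else {
			wantHave = append(wantHave, p)
		}
	}
	return wantBlock, wantHave
}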

@Stebalien (Member):

Hm. Looking at those tables, we're still getting more duplicate data than I'd expect. Much less than before but we should be getting a few duplicate blocks at the beginning then almost nothing.

@dirkmc (Contributor, Author) commented Nov 6, 2019

For the 1 leech / 4 seed case the dup data is close to optimal.

For the 9 leech / 1 seed case there is about 20% dup data.
I believe this is because the nodes all know which blocks each other want, so as soon as one has the block, it immediately sends the block to its neighbours, sometimes before receiving a cancel from the neighbour (which received the block from someone else).

The reason I'm testing with the 9 leech / 1 seed use case is because it puts a lot of stress on the seed node. In practice I would assume it's quite unlikely for 9 nodes to start asking for data at exactly the same time - more likely in this kind of scenario the requests would be slightly staggered, which should reduce duplication. Do you think that's a reasonable assumption?

@Stebalien (Member):

The reason I'm testing with the 9 leech / 1 seed use case is because it puts a lot of stress on the seed node. In practice I would assume it's quite unlikely for 9 nodes to start asking for data at exactly the same time - more likely in this kind of scenario the requests would be slightly staggered, which should reduce duplication. Do you think that's a reasonable assumption?

Somewhat but we should test what happens when they're slightly staggered. I am a bit concerned about the streaming video use-case but we can optimize that later.


it immediately sends the block to its neighbours, sometimes before receiving a cancel from the neighbour (which received the block from someone else).

Musings for future improvement:

I think we're missing a classification here. We currently have:

  • Nodes that have the dag: we ask blocks from these nodes.
  • Nodes that don't have (this part of) the dag: we don't ask for blocks from these nodes (but periodically ask for haves?).

We probably want a third category:

  • Nodes that keep flip-flopping (having, not having, having): we should probably always send these nodes want-haves.

@dirkmc (Contributor, Author) commented Nov 6, 2019

we should test what happens when they're slightly staggered

Agreed 👍 Let's add that as a test plan parameter in testground (it's not currently possible in p2plab)

With respect to sending wants, note that

  • we always send want-haves for each CID to each peer in the session.
  • we optimistically send want-block to some peers (maximum 2) that have a high probability of having the block.
  • when a peer responds with HAVE we immediately send want-block to the node

@Stebalien (Member):

@dirkmc I'm having trouble parsing the benchmarks above. It looks like the sum of the data sent is significantly more than the sum of the data received.

@dirkmc (Contributor, Author) commented Nov 22, 2019

There was a lot of output, so I reduced it to just the relevant lines. I will run the benchmark again with full output to give you an idea.

@dirkmc (Contributor, Author) commented Nov 22, 2019

Previous PoC: 7a00e140b73eec57141ae977fd782084dacaf59f
Current PoC: f46ce2a6976dfca52474e801c46160888c4adf99

1 leech / 4 seeds

previous poc
------------

# Summary
Total time: 931 milliseconds

# Bandwidth
+-------------------+---------------------+---------+----------+--------+----------+
|       QUERY       |        NODE         | TOTALIN | TOTALOUT | RATEIN | RATEOUT  |
+-------------------+---------------------+---------+----------+--------+----------+
| (not 'neighbors') | i-06c5e4667654a6dcc |   0 B   |   0 B    | 0 B/s  |  0 B/s   |
+-------------------+---------------------+         +          +        +          +
|         -         | i-004ea12218a2604b3 |         |          |        |          |
+                   +---------------------+---------+----------+        +----------+
|                   | i-093b9da06c4a3573e | 1.6 MB  |  5.9 GB  |        | 49 MB/s  |
+                   +---------------------+---------+----------+        +----------+
|                   | i-0bbe87846d79ac7e7 |   0 B   |   0 B    |        |  0 B/s   |
+                   +---------------------+---------+----------+        +----------+
|                   | i-0d9eeee0f4828d684 | 126 MB  |  6.3 GB  |        | 64 MB/s  |
+-------------------+---------------------+---------+----------+--------+----------+
|                            TOTAL        | 128 MB  |  12 GB   | 0 B/s  | 113 MB/s |
+-------------------+---------------------+---------+----------+--------+----------+

# Bitswap
+-------------------+---------------------+------------+------------+-----------+----------+----------+---------+
|       QUERY       |        NODE         | BLOCKSRECV | BLOCKSSENT | DUPBLOCKS | DATARECV | DATASENT | DUPDATA |
+-------------------+---------------------+------------+------------+-----------+----------+----------+---------+
| (not 'neighbors') | i-06c5e4667654a6dcc |   1,222    |     0      |    24     |  309 MB  |   0 B    |  17 kB  |
+-------------------+---------------------+------------+------------+-----------+----------+----------+---------+
|         -         | i-004ea12218a2604b3 |     0      |    566     |     0     |   0 B    |  145 MB  |   0 B   |
+                   +---------------------+------------+------------+-----------+----------+----------+---------+
|                   | i-093b9da06c4a3573e |   1,707    |   23,138   |    42     |  440 MB  |  5.9 GB  |  11 MB  |
+                   +---------------------+------------+------------+-----------+----------+----------+---------+
|                   | i-0bbe87846d79ac7e7 |     0      |    312     |     0     |   0 B    |  78 MB   |   0 B   |
+                   +---------------------+------------+------------+-----------+----------+----------+---------+
|                   | i-0d9eeee0f4828d684 |   1,874    |   24,652   |    209    |  479 MB  |  6.3 GB  |  50 MB  |
+-------------------+---------------------+------------+------------+-----------+----------+----------+---------+
|                            TOTAL        |   4,803    |   48,668   |    275    |  1.2 GB  |  12 GB   |  61 MB  |
+-------------------+---------------------+------------+------------+-----------+----------+----------+---------+

current poc
-----------

# Summary
Total time: 1 second 453 milliseconds

# Bandwidth
+-------------------+---------------------+---------+----------+----------+----------+
|       QUERY       |        NODE         | TOTALIN | TOTALOUT |  RATEIN  | RATEOUT  |
+-------------------+---------------------+---------+----------+----------+----------+
| (not 'neighbors') | i-06c5e4667654a6dcc | 224 MB  |  344 kB  | 224 MB/s | 344 kB/s |
+-------------------+---------------------+---------+----------+----------+----------+
|         -         | i-004ea12218a2604b3 | 130 kB  |  171 MB  | 130 kB/s | 171 MB/s |
+                   +---------------------+---------+----------+----------+----------+
|                   | i-093b9da06c4a3573e | 1.6 MB  |  5.1 GB  |  0 B/s   | 12 MB/s  |
+                   +---------------------+---------+----------+----------+----------+
|                   | i-0bbe87846d79ac7e7 | 109 kB  |  26 MB   | 109 kB/s | 26 MB/s  |
+                   +---------------------+---------+----------+----------+----------+
|                   | i-0d9eeee0f4828d684 | 126 MB  |  5.6 GB  |  0 B/s   | 21 MB/s  |
+-------------------+---------------------+---------+----------+----------+----------+
|                            TOTAL        | 351 MB  |  11 GB   | 224 MB/s | 231 MB/s |
+-------------------+---------------------+---------+----------+----------+----------+

# Bitswap
+-------------------+---------------------+------------+------------+-----------+----------+----------+---------+
|       QUERY       |        NODE         | BLOCKSRECV | BLOCKSSENT | DUPBLOCKS | DATARECV | DATASENT | DUPDATA |
+-------------------+---------------------+------------+------------+-----------+----------+----------+---------+
| (not 'neighbors') | i-06c5e4667654a6dcc |   1,217    |     0      |    19     |  309 MB  |   0 B    |  16 kB  |
+-------------------+---------------------+------------+------------+-----------+----------+----------+---------+
|         -         | i-004ea12218a2604b3 |     0      |    972     |     0     |   0 B    |  250 MB  |   0 B   |
+                   +---------------------+------------+------------+-----------+----------+----------+---------+
|                   | i-093b9da06c4a3573e |   1,707    |   19,865   |    42     |  440 MB  |  5.1 GB  |  11 MB  |
+                   +---------------------+------------+------------+-----------+----------+----------+---------+
|                   | i-0bbe87846d79ac7e7 |     0      |    112     |     0     |   0 B    |  26 MB   |   0 B   |
+                   +---------------------+------------+------------+-----------+----------+----------+---------+
|                   | i-0d9eeee0f4828d684 |   1,874    |   22,082   |    209    |  479 MB  |  5.6 GB  |  50 MB  |
+-------------------+---------------------+------------+------------+-----------+----------+----------+---------+
|                            TOTAL        |   4,798    |   43,031   |    270    |  1.2 GB  |  11 GB   |  61 MB  |
+-------------------+---------------------+------------+------------+-----------+----------+----------+---------+

9 leeches / 1 seed

previous poc
------------

# Summary
Total time: 1 second 892 milliseconds

# Bandwidth
+-------------------+---------------------+---------+----------+----------+----------+
|       QUERY       |        NODE         | TOTALIN | TOTALOUT |  RATEIN  | RATEOUT  |
+-------------------+---------------------+---------+----------+----------+----------+
| (not 'neighbors') | i-0037217ce77b6d4e4 | 211 MB  |  341 MB  | 134 MB/s | 215 MB/s |
+                   +---------------------+---------+----------+----------+----------+
|                   | i-00b5ff44b55107f7c | 278 MB  |  222 MB  | 173 MB/s | 138 MB/s |
+                   +---------------------+---------+----------+----------+----------+
|                   | i-02350a7095c66c0cb | 303 MB  |  174 MB  | 191 MB/s | 110 MB/s |
+                   +---------------------+---------+----------+----------+----------+
|                   | i-04ab929f265802770 | 261 MB  |  204 MB  | 165 MB/s | 129 MB/s |
+                   +---------------------+---------+----------+----------+----------+
|                   | i-0556e55ae273c939b | 244 MB  |  142 MB  | 154 MB/s | 90 MB/s  |
+                   +---------------------+---------+----------+----------+----------+
|                   | i-05969e410b1e6e7f7 | 293 MB  |  177 MB  | 184 MB/s | 111 MB/s |
+                   +---------------------+---------+----------+----------+----------+
|                   | i-07578111218cbe532 | 304 MB  |  157 MB  | 191 MB/s | 99 MB/s  |
+                   +---------------------+---------+----------+----------+----------+
|                   | i-0d1c4d8d2cf599bf9 | 326 MB  |  123 MB  | 206 MB/s | 78 MB/s  |
+                   +---------------------+---------+----------+----------+----------+
|                   | i-0fff353fa2e7c3cb3 | 239 MB  |  180 MB  | 151 MB/s | 114 MB/s |
+-------------------+---------------------+---------+----------+----------+----------+
|         -         | i-0c3d6ed4bdbe74f8b | 1.1 MB  |  892 MB  | 659 kB/s | 561 MB/s |
+-------------------+---------------------+---------+----------+----------+----------+
|                            TOTAL        | 2.5 GB  |  2.6 GB  | 1.5 GB/s | 1.6 GB/s |
+-------------------+---------------------+---------+----------+----------+----------+

# Bitswap
+-------------------+---------------------+------------+------------+-----------+----------+----------+---------+
|       QUERY       |        NODE         | BLOCKSRECV | BLOCKSSENT | DUPBLOCKS | DATARECV | DATASENT | DUPDATA |
+-------------------+---------------------+------------+------------+-----------+----------+----------+---------+
| (not 'neighbors') | i-0037217ce77b6d4e4 |   1,525    |   1,818    |    305    |  385 MB  |  461 MB  |  71 MB  |
+                   +---------------------+------------+------------+-----------+----------+----------+---------+
|                   | i-00b5ff44b55107f7c |   1,919    |   1,935    |    602    |  481 MB  |  486 MB  | 142 MB  |
+                   +---------------------+------------+------------+-----------+----------+----------+---------+
|                   | i-02350a7095c66c0cb |   2,484    |   1,143    |    767    |  634 MB  |  279 MB  | 190 MB  |
+                   +---------------------+------------+------------+-----------+----------+----------+---------+
|                   | i-04ab929f265802770 |   2,419    |   1,406    |    866    |  613 MB  |  349 MB  | 213 MB  |
+                   +---------------------+------------+------------+-----------+----------+----------+---------+
|                   | i-0556e55ae273c939b |   1,696    |    997     |    421    |  428 MB  |  253 MB  | 100 MB  |
+                   +---------------------+------------+------------+-----------+----------+----------+---------+
|                   | i-05969e410b1e6e7f7 |   1,963    |   1,235    |    553    |  492 MB  |  310 MB  | 128 MB  |
+                   +---------------------+------------+------------+-----------+----------+----------+---------+
|                   | i-07578111218cbe532 |   2,188    |   1,155    |    632    |  551 MB  |  288 MB  | 149 MB  |
+                   +---------------------+------------+------------+-----------+----------+----------+---------+
|                   | i-0d1c4d8d2cf599bf9 |   2,144    |    965     |    682    |  543 MB  |  243 MB  | 165 MB  |
+                   +---------------------+------------+------------+-----------+----------+----------+---------+
|                   | i-0fff353fa2e7c3cb3 |   1,682    |   1,118    |    451    |  425 MB  |  279 MB  | 108 MB  |
+-------------------+---------------------+------------+------------+-----------+----------+----------+---------+
|         -         | i-0c3d6ed4bdbe74f8b |     0      |   6,387    |     0     |   0 B    |  1.6 GB  |   0 B   |
+-------------------+---------------------+------------+------------+-----------+----------+----------+---------+
|                            TOTAL        |   18,020   |   18,159   |   5,279   |  4.6 GB  |  4.6 GB  | 1.3 GB  |
+-------------------+---------------------+------------+------------+-----------+----------+----------+---------+

current poc
-----------

# Summary
Total time: 1 second 482 milliseconds

# Bandwidth
+-------------------+---------------------+---------+----------+----------+----------+
|       QUERY       |        NODE         | TOTALIN | TOTALOUT |  RATEIN  | RATEOUT  |
+-------------------+---------------------+---------+----------+----------+----------+
| (not 'neighbors') | i-0037217ce77b6d4e4 | 248 MB  |  200 MB  | 157 MB/s | 127 MB/s |
+                   +---------------------+---------+----------+----------+----------+
|                   | i-00b5ff44b55107f7c | 258 MB  |  173 MB  | 163 MB/s | 109 MB/s |
+                   +---------------------+---------+----------+----------+----------+
|                   | i-02350a7095c66c0cb | 277 MB  |  116 MB  | 175 MB/s | 73 MB/s  |
+                   +---------------------+---------+----------+----------+----------+
|                   | i-04ab929f265802770 | 245 MB  |  210 MB  | 155 MB/s | 132 MB/s |
+                   +---------------------+---------+----------+----------+----------+
|                   | i-0556e55ae273c939b | 262 MB  |  162 MB  | 166 MB/s | 102 MB/s |
+                   +---------------------+---------+----------+----------+----------+
|                   | i-05969e410b1e6e7f7 | 283 MB  |  122 MB  | 179 MB/s | 77 MB/s  |
+                   +---------------------+---------+----------+----------+----------+
|                   | i-07578111218cbe532 | 262 MB  |  152 MB  | 166 MB/s | 96 MB/s  |
+                   +---------------------+---------+----------+----------+----------+
|                   | i-0d1c4d8d2cf599bf9 | 244 MB  |  199 MB  | 154 MB/s | 126 MB/s |
+                   +---------------------+---------+----------+----------+----------+
|                   | i-0fff353fa2e7c3cb3 | 263 MB  |  147 MB  | 165 MB/s | 92 MB/s  |
+-------------------+---------------------+---------+----------+----------+----------+
|         -         | i-0c3d6ed4bdbe74f8b | 1.2 MB  |  895 MB  | 738 kB/s | 564 MB/s |
+-------------------+---------------------+---------+----------+----------+----------+
|                            TOTAL        | 2.3 GB  |  2.4 GB  | 1.5 GB/s | 1.5 GB/s |
+-------------------+---------------------+---------+----------+----------+----------+

# Bitswap
+-------------------+---------------------+------------+------------+-----------+----------+----------+---------+
|       QUERY       |        NODE         | BLOCKSRECV | BLOCKSSENT | DUPBLOCKS | DATARECV | DATASENT | DUPDATA |
+-------------------+---------------------+------------+------------+-----------+----------+----------+---------+
| (not 'neighbors') | i-0037217ce77b6d4e4 |   1,502    |   1,175    |    248    |  375 MB  |  287 MB  |  51 MB  |
+                   +---------------------+------------+------------+-----------+----------+----------+---------+
|                   | i-00b5ff44b55107f7c |   1,525    |   1,030    |    272    |  379 MB  |  255 MB  |  56 MB  |
+                   +---------------------+------------+------------+-----------+----------+----------+---------+
|                   | i-02350a7095c66c0cb |   1,667    |    772     |    313    |  416 MB  |  184 MB  |  66 MB  |
+                   +---------------------+------------+------------+-----------+----------+----------+---------+
|                   | i-04ab929f265802770 |   1,532    |   1,272    |    279    |  381 MB  |  311 MB  |  58 MB  |
+                   +---------------------+------------+------------+-----------+----------+----------+---------+
|                   | i-0556e55ae273c939b |   1,687    |    891     |    380    |  416 MB  |  224 MB  |  80 MB  |
+                   +---------------------+------------+------------+-----------+----------+----------+---------+
|                   | i-05969e410b1e6e7f7 |   1,640    |    928     |    345    |  406 MB  |  230 MB  |  72 MB  |
+                   +---------------------+------------+------------+-----------+----------+----------+---------+
|                   | i-07578111218cbe532 |   1,586    |    978     |    318    |  394 MB  |  237 MB  |  68 MB  |
+                   +---------------------+------------+------------+-----------+----------+----------+---------+
|                   | i-0d1c4d8d2cf599bf9 |   1,721    |   1,039    |    350    |  427 MB  |  259 MB  |  74 MB  |
+                   +---------------------+------------+------------+-----------+----------+----------+---------+
|                   | i-0fff353fa2e7c3cb3 |   1,521    |   1,077    |    260    |  383 MB  |  260 MB  |  58 MB  |
+-------------------+---------------------+------------+------------+-----------+----------+----------+---------+
|         -         | i-0c3d6ed4bdbe74f8b |     0      |   5,235    |     0     |   0 B    |  1.3 GB  |   0 B   |
+-------------------+---------------------+------------+------------+-----------+----------+----------+---------+
|                            TOTAL        |   14,381   |   14,397   |   2,765   |  3.6 GB  |  3.6 GB  | 584 MB  |
+-------------------+---------------------+------------+------------+-----------+----------+----------+---------+

@Stebalien (Member):

Something funky is going on here:

  • In the 1 leech / 4 seeds case (previous):
    • Not neighbors (leech?) has no bandwidth usage?
    • The total rate-in is 0?
    • Total-in is 128MiB while total-out is 12GiB?
    • Some of the seeds are receiving blocks?
    • total data received/sent don't match up.
    • Total datarecv is greater than total-in?
  • In the 1 leech / 4 seeds case (current):
    • total-in != total-out but rate-in ~= rate-out?
    • blocks received != blocks sent

...

@dirkmc (Contributor, Author) commented Nov 22, 2019

You're right there is some weirdness going on there. I think maybe it's getting confused between messages received and blocks received (seeds should receive control messages but not blocks). I'll dig in and make sure it's reporting the right values.

I've noticed that when there are only a few peers the stats seem a little wonky, but when there are several peers they seem to average out.

Note that these stats are coming from

  • libp2p.BandwidthReporter()
  • bitswap.Stats()

I will dig in and see if I can understand where the discrepancies are coming from for Bitswap, the libp2p part may take a little more digging as I'm unfamiliar with that code.

@Stebalien (Member) commented Nov 22, 2019

libp2p.BandwidthReporter()

Ah. Yeah, that may be: libp2p/go-flow-metrics#11.

(maybe) edit: actually, no. That issue should have reported insane bandwidth usage, not this kind of thing

@dirkmc (Contributor, Author) commented Nov 25, 2019

I think the benchmarking data was just getting corrupted for that particular p2plab cluster. I created a new one and the results look reasonable:

# Bandwidth
+-------------------+---------------------+---------+----------+----------+----------+
|       QUERY       |        NODE         | TOTALIN | TOTALOUT |  RATEIN  | RATEOUT  |
+-------------------+---------------------+---------+----------+----------+----------+
| (not 'neighbors') | i-0265b925eabb0616d | 252 MB  |  216 kB  | 252 MB/s | 216 kB/s |
+-------------------+---------------------+---------+----------+----------+----------+
|         -         | i-00e77b860509fe6c4 |  84 kB  |  160 MB  | 84 kB/s  | 160 MB/s |
+                   +---------------------+---------+----------+----------+----------+
|                   | i-02b19ffb9776df584 |  62 kB  |  33 MB   | 62 kB/s  | 33 MB/s  |
+                   +---------------------+---------+----------+----------+----------+
|                   | i-08c48c4b30a412666 | 1.8 kB  |  1.8 kB  | 1.8 kB/s | 1.8 kB/s |
+                   +---------------------+---------+----------+----------+----------+
|                   | i-0fc394b3b620d06d2 |  74 kB  |  64 MB   | 74 kB/s  | 64 MB/s  |
+-------------------+---------------------+---------+----------+----------+----------+
|                            TOTAL        | 253 MB  |  257 MB  | 253 MB/s | 257 MB/s |
+-------------------+---------------------+---------+----------+----------+----------+

# Bitswap
+-------------------+---------------------+------------+------------+-----------+----------+----------+---------+
|       QUERY       |        NODE         | BLOCKSRECV | BLOCKSSENT | DUPBLOCKS | DATARECV | DATASENT | DUPDATA |
+-------------------+---------------------+------------+------------+-----------+----------+----------+---------+
| (not 'neighbors') | i-0265b925eabb0616d |   1,388    |     0      |    188    |  354 MB  |   0 B    |  45 MB  |
+-------------------+---------------------+------------+------------+-----------+----------+----------+---------+
|         -         | i-00e77b860509fe6c4 |     0      |    993     |     0     |   0 B    |  257 MB  |   0 B   |
+                   +---------------------+            +------------+           +          +----------+         +
|                   | i-02b19ffb9776df584 |            |    138     |           |          |  33 MB   |         |
+                   +---------------------+            +------------+           +          +----------+         +
|                   | i-08c48c4b30a412666 |            |     0      |           |          |   0 B    |         |
+                   +---------------------+            +------------+           +          +----------+         +
|                   | i-0fc394b3b620d06d2 |            |    257     |           |          |  64 MB   |         |
+-------------------+---------------------+------------+------------+-----------+----------+----------+---------+
|                            TOTAL        |   1,388    |   1,388    |    188    |  354 MB  |  354 MB  |  45 MB  |
+-------------------+---------------------+------------+------------+-----------+----------+----------+---------+

@Stebalien (Member) left a review:

Initial comments.

@Stebalien mentioned this pull request on Dec 3, 2019.
@dirkmc mentioned this pull request on Dec 4, 2019.
@Stebalien (Member) left a review:

engine review

@Stebalien (Member) left a review:

Working through a few more files.

@momack2 commented Jan 7, 2020

Curious if this is intentionally a "Draft" PR? What's the timeline / blockers for merging this (and unblocking ipfs/kubo#6776)?

@dirkmc (Contributor, Author) commented Jan 7, 2020

I consider this a draft PR

There are some things remaining for discussion in the PR itself, eg

  • should we still try to measure latency, or just remove that code altogether (it's used for tagging)
  • the exact manner in which we request wants / haves
  • how to deal with legacy bitswap peers

In order to merge the PR I would like to have corresponding repeatable testground tests that demonstrate

  • performance compared to current master
  • simulations in realistic environments (eg data center, internet)
  • simulations of realistic use cases (download a movie from many seeds, browse wikipedia)

The testground tests should help answer some of the outstanding discussion points above.

@Stebalien (Member):

(I'm currently reviewing this bit by bit, there's just a lot here)

@@ -73,6 +78,19 @@ func New(ctx context.Context, id uint64, tagger PeerTagger, providerFinder PeerP
return spm
}

func (spm *SessionPeerManager) ReceiveFrom(p peer.ID, ks []cid.Cid, haves []cid.Cid) bool {
Member:

Everything on this type goes through an event loop. This is unlikely to be safe.

All this logic should be handled by s.sprm.RecordPeerResponse (or would be if we passed haves to that function). Is that not the case?

Member:

If we need to be able to determine the number of active peers in the session, we probably need to send a request to the session peer manager over the channel.

@@ -41,6 +44,7 @@ type SessionPeerManager struct {
ctx context.Context
tagger PeerTagger
providerFinder PeerProviderFinder
peers *peer.Set
Member:

duplicate of "activePeers"?

@@ -8,11 +8,14 @@ import (
"time"
Member:

How much code in this service is dead? It looks like we're no longer using the peer optimizer, are we?

@dirkmc (Contributor, Author) replied on Jan 21, 2020:

We don't really need the peer optimizer any more as we just send want-haves to all peers and we have a different mechanism for selecting which peer to send a want-block to for a given CID (based on HAVE / DONT_HAVE responses from peers, instead of latency).

I wanted to check in with you before making drastic changes here. I think the parts of the SessionPeerManager that we still need are

  • add peer to the session when we get a block / HAVE from that peer for a CID in the session
  • find more peers periodically, and also when the session gets a timeout for a CID

SessionPeerManager also takes care of tagging peers. I'm not so familiar with peer tagging, so I'm not sure if it still belongs here.

Member:

I believe the session peer manager just keeps track of which peers are in the session, so tagging makes sense.

@Stebalien (Member) left a review:

Mostly documentation (and fixing some channel issues).

Before we merge, could we walk through code and just document everything. Importantly, document what each subsystem is supposed to do.

@Stebalien marked this pull request as ready for review on January 30, 2020, 17:22
@Stebalien (Member) left a review:

[image]

@Stebalien (Member):

Let's merge master back in and squash before merging.

Commit message:

This commit extends the bitswap protocol with two additional wantlist properties:

* WANT_HAVE/HAVE: Instead of asking for a block, a node can specify that they
  want to know if any peers "have" the block.
* WANT_HAVE_NOT/HAVE_NOT: Instead of waiting for a timeout, a node can explicitly
  request to be told immediately if their peers don't currently have the given
  block.

Additionally, nodes now tell their peers how much data they have queued to send
them when sending messages. This allows peers to better distribute requests,
keeping all peers busy but not overloaded.

Changes in this PR are described in: #186
@Stebalien merged commit 86178ba into master on Jan 30, 2020
@Stebalien deleted the feat/proto-ext-poc branch on Jan 30, 2020, 23:44
@Stebalien (Member):

@dirkmc could you turn the remaining TODOs into issues? The ones in the main comment and:

  • Remove dead code (latency tracking, etc.).
  • Architecture diagram. That is, how are all the components connected.
  • The load balancing logic we discussed (see the sketch after this list). That is, use the "pending bytes" to:
    • Keep all peers "busy".
    • Prioritize the least busy peers.

I believe those were the main ones.
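
For the load-balancing item above, a possible sketch (field and function names assumed, not from this PR) of using the pending bytes each peer reports to prefer the least busy peer:

package loadbalance

type PeerID string

// leastBusy returns the peer with the fewest bytes queued to send us,
// i.e. the least busy peer to receive the next want-block.
func leastBusy(pending map[PeerID]int) (best PeerID, found bool) {
	min := -1
	for p, bytes := range pending {
		if min < 0 || bytes < min {
			min, best, found = bytes, p, true
		}
	}
	return best, found
}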

@dirkmc (Contributor, Author) commented Jan 31, 2020

Shipppppeeeddddd 🚀

I'll make those issues tomorrow 👍

Jorropo pushed a commit to Jorropo/go-libipfs that referenced this pull request Jan 26, 2023
PoC of Bitswap protocol extensions implementation

This commit was moved from ipfs/go-bitswap@86178ba