This repository has been archived by the owner on Nov 15, 2023. It is now read-only.

Guide: router module #1279

Merged: 12 commits, Jun 18, 2020
1 change: 1 addition & 0 deletions roadmap/implementors-guide/src/SUMMARY.md
@@ -13,6 +13,7 @@
- [Inclusion Module](runtime/inclusion.md)
- [InclusionInherent Module](runtime/inclusioninherent.md)
- [Validity Module](runtime/validity.md)
- [Router Module](runtime/router.md)
- [Node Architecture](node/README.md)
- [Subsystems and Jobs](node/subsystems-and-jobs.md)
- [Overseer](node/overseer.md)
2 changes: 2 additions & 0 deletions roadmap/implementors-guide/src/runtime/inclusion.md
@@ -54,7 +54,9 @@ All failed checks should lead to an unrecoverable error making the block invalid
1. check that each candidate corresponds to a scheduled core and that the candidates are sorted in ascending order of `ParaId`.
1. Ensure that any code upgrade scheduled by the candidate does not happen within `config.validation_upgrade_frequency` of the currently scheduled upgrade, if any, comparing against the value of `Paras::FutureCodeUpgrades` for the given para ID.
1. check the backing of the candidate using the signatures and the bitfields.
1. check that the upward messages do not exceed the `config.max_upwards_queue_count` and `config.watermark_queue_size` parameters (see the sketch below this hunk).
1. create an entry in the `PendingAvailability` map for each backed candidate with a blank `availability_votes` bitfield.
1. call `Router::queue_upward_messages` for each backed candidate.
1. Return a `Vec<CoreIndex>` of all scheduled cores of the list of passed assignments that a candidate was successfully backed for, sorted ascending by CoreIndex.
* `enact_candidate(relay_parent_number: BlockNumber, AbridgedCandidateReceipt)`:
1. If the receipt contains a code upgrade, call `Paras::schedule_code_upgrade(para_id, code, relay_parent_number + config.validation_upgrade_delay)`.
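For illustration, a minimal sketch of the limit check added in step 4 of the hunk above. The function name, signature, and the `(count, total bytes)` bookkeeping are assumptions made for this sketch, not part of the guide:

```rust
/// Hypothetical per-candidate check: accept the candidate's upward messages
/// only if the queue has not already reached either configured limit before
/// each message is added.
fn check_upward_messages(
    messages: &[Vec<u8>],         // the candidate's upward messages, as opaque byte blobs
    queued: (u32, u32),           // (message count, total bytes) already in the queue
    max_upwards_queue_count: u32, // from `HostConfiguration`
    watermark_queue_size: u32,    // from `HostConfiguration`
) -> bool {
    let (mut count, mut bytes) = queued;
    for msg in messages {
        // No further messages may be added once either limit has been reached.
        if count >= max_upwards_queue_count || bytes >= watermark_queue_size {
            return false;
        }
        count += 1;
        bytes += msg.len() as u32;
    }
    true
}
```

Per the note at the top of this routine, a failed check here would make the block invalid, so `Router::queue_upward_messages` would never be reached for such a candidate.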
28 changes: 28 additions & 0 deletions roadmap/implementors-guide/src/runtime/router.md
@@ -0,0 +1,28 @@
# Router Module

The Router module is responsible for storing and dispatching upward and downward messages, from and to parachains respectively. It is also intended to handle the XCMP logic later on.

## Storage

Storage layout:

```rust

/// Messages ready to be dispatched onto the relay chain.
/// This is subject to `max_upwards_queue_count` and
///`watermark_queue_size` from `HostConfiguration`.
```

> **Review comment (Contributor):** Suggested change:
>
> ```diff
> -///`watermark_queue_size` from `HostConfiguration`.
> +/// `watermark_queue_size` from `HostConfiguration`.
> ```

```rust

RelayDispatchQueues: map ParaId => Vec<UpwardMessage>;
```

> **Review comment (Contributor):** I wonder if we should give the definition for `UpwardMessage` somewhere?

```rust

/// Size of the dispatch queues. Caches sizes of the queues in `RelayDispatchQueues`.
/// The first item in the tuple is the count of messages and the second
/// is the total length (in bytes) of the message payloads.
RelayDispatchQueueSize: map ParaId => (u32, u32);
/// The ordered list of `ParaId`s that have a `RelayDispatchQueue` entry.
NeedsDispatch: Vec<ParaId>;
```
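For concreteness, one way this layout could be written with FRAME's (pre-v2) `decl_storage!` macro. The `ParaId` and `UpwardMessage` aliases below are placeholders rather than the actual definitions (as the review comment above notes, `UpwardMessage` is not defined here), so treat this as an illustrative sketch, not the real pallet:

```rust
// Illustrative only: hasher choice, trait bounds, and type aliases are assumptions.
use frame_support::{decl_module, decl_storage};
use sp_std::vec::Vec;

/// Placeholder aliases purely for this sketch; the real types live elsewhere.
pub type ParaId = u32;
pub type UpwardMessage = Vec<u8>;

pub trait Trait: frame_system::Trait {}

decl_module! {
    pub struct Module<T: Trait> for enum Call where origin: T::Origin {}
}

decl_storage! {
    trait Store for Module<T: Trait> as Router {
        /// Messages ready to be dispatched onto the relay chain.
        RelayDispatchQueues: map hasher(twox_64_concat) ParaId => Vec<UpwardMessage>;
        /// (message count, total payload bytes) per queue.
        RelayDispatchQueueSize: map hasher(twox_64_concat) ParaId => (u32, u32);
        /// Ordered list of `ParaId`s that have a non-empty dispatch queue.
        NeedsDispatch: Vec<ParaId>;
    }
}
```

The guide itself stays at the pseudo-declaration level above, so the exact hashers and bounds are left to the implementation.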

## Routines

* `queue_upward_messages(AttestedCandidate)`:
1. Updates `NeedsDispatch`, enqueues the upward messages into `RelayDispatchQueues`, and updates the respective entry in `RelayDispatchQueueSize`.
* `dispatch_upward_messages(ParaId)`:
1. If `NeedsDispatch` contains the `ParaId` passed as the input parameter, start dispatching messages from its respective entry in `RelayDispatchQueues`. The dispatch is done in FIFO order; it drains the queue and removes the entry from `RelayDispatchQueues`. (A simplified model of both routines is sketched below.)

> **Review comment (Contributor):** cc @pepyakin (as we talked about this in DM)
>
> Given that `NeedsDispatch` already contains all `ParaId`s, I'm not sure what the `ParaId` parameter here is for.
>
> This seems like a function that should be called once per block, either in `on_initialize`, `on_finalize`, or at the end of `InclusionInherent::inclusion`. I'd favor the latter two, as they can immediately dispatch calls by enacted candidate blocks in many cases, leading to lower minimum latency.
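As referenced in the list above, here is a simplified, in-memory model of the two routines. Storage maps are replaced with `BTreeMap`/`Vec`, the limit checks against `HostConfiguration` are assumed to have happened in the Inclusion module before `queue_upward_messages` is called, and all names and signatures are illustrative:

```rust
use std::collections::BTreeMap;

type ParaId = u32;            // placeholder
type UpwardMessage = Vec<u8>; // placeholder: opaque payload bytes

#[derive(Default)]
struct Router {
    relay_dispatch_queues: BTreeMap<ParaId, Vec<UpwardMessage>>,
    relay_dispatch_queue_size: BTreeMap<ParaId, (u32, u32)>,
    needs_dispatch: Vec<ParaId>,
}

impl Router {
    /// Enqueue the upward messages of one backed candidate.
    fn queue_upward_messages(&mut self, para: ParaId, msgs: Vec<UpwardMessage>) {
        if msgs.is_empty() {
            return;
        }
        // Update the cached (count, total bytes) entry for this queue.
        let size = self.relay_dispatch_queue_size.entry(para).or_insert((0, 0));
        size.0 += msgs.len() as u32;
        size.1 += msgs.iter().map(|m| m.len() as u32).sum::<u32>();
        // Append the messages and remember that this para needs dispatching.
        self.relay_dispatch_queues.entry(para).or_default().extend(msgs);
        if !self.needs_dispatch.contains(&para) {
            self.needs_dispatch.push(para);
        }
    }

    /// Drain one para's queue in FIFO order and drop its bookkeeping.
    fn dispatch_upward_messages(&mut self, para: ParaId) -> Vec<UpwardMessage> {
        self.relay_dispatch_queue_size.remove(&para);
        self.needs_dispatch.retain(|p| *p != para);
        self.relay_dispatch_queues.remove(&para).unwrap_or_default()
    }
}
```

Where `dispatch_upward_messages` is called from, and what happens to the returned messages, is left open here; see the review comment above.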
6 changes: 6 additions & 0 deletions roadmap/implementors-guide/src/type-definitions.md
@@ -232,6 +232,12 @@ struct HostConfiguration {
pub thread_availability_period: BlockNumber,
/// The amount of blocks ahead to schedule parathreads.
pub scheduling_lookahead: u32,
/// Total number of individual messages allowed in the parachain -> relay-chain message queue.
pub max_upwards_queue_count: u32,
/// Total size of messages allowed in the parachain -> relay-chain message queue before which
/// no further messages may be added to it. If the total size exceeds this, then the queue may
/// contain only a single message.
pub watermark_queue_size: u32,
}
```
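The second sentence of the `watermark_queue_size` comment is subtle. One possible reading, sketched below as a tiny predicate (the function is illustrative, not the runtime's API): messages are accepted while the queue's total size is still below the watermark, so the message that crosses the watermark, including a single message larger than it, is still accepted; after that the queue stops accepting until it is dispatched.

```rust
fn may_accept_more(queued_bytes: u32, watermark_queue_size: u32) -> bool {
    queued_bytes < watermark_queue_size
}

fn main() {
    let watermark = 1024;
    // An empty queue accepts even a message larger than the watermark,
    // which is how the queue can end up holding a single oversized message.
    assert!(may_accept_more(0, watermark));
    // Once the total size reaches or exceeds the watermark,
    // no further messages may be added until the queue is dispatched.
    assert!(!may_accept_more(1500, watermark));
}
```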
