Unverified commit 73de27e0 authored by Sergey Pepyakin, committed by GitHub

Implementer's Guide: Flesh out more details for upward messages (#1556)



* Take 2 at the upward messages

* Trying to restore stuff from an unsuccessful rebase

* Fix whitespace

* Clean up

* Change rustdoc to comment

* Pivot to a less strict model w.r.t. acceptance

* Rename `max_upward_message_num_per_candidate`

* Update docs for DownwardMessage

* Apply suggestions from code review
Co-authored-by: Robert Habermeier <rphmeier@gmail.com>

* Rephrase "Dispatchable objects ready to ..."

* Finish the sentence

* Add a note about imprecision of the current weight formula

* Elaborate on potential use-cases for the upward message kinds.

* s/later/below
Co-authored-by: Robert Habermeier <rphmeier@gmail.com>
parent 700f86a2
Pipeline #103912 passed with stages in 25 minutes and 54 seconds
@@ -17,9 +17,7 @@ digraph {
}
```
Downward Message Passing (DMP) is a mechanism for delivering messages to parachains from the relay chain. Downward
messages may originate not from the relay chain but from another parachain via a mechanism
called HRMP (described later).
Downward Message Passing (DMP) is a mechanism for delivering messages to parachains from the relay chain.
Each parachain has its own queue that stores all pending inbound downward messages. A parachain
doesn't have to process all messages at once; however, there are rules as to how the downward message queue
@@ -28,16 +26,19 @@ The downward message queue doesn't have a cap on its size and it is up to the re
that prevent spamming in place.
Upward Message Passing (UMP) is a mechanism responsible for delivering messages in the opposite direction:
from a parachain up to the relay chain. Upward messages are dispatched to Runtime entrypoints and
typically used for invoking some actions on the relay chain on behalf of the parachain.
> NOTE: It is conceivable that upward messages will be divided into more fine-grained kinds, with a dispatchable upward message
being only one of several kinds. That would make upward messages inspectable and would therefore allow imposing additional
validity criteria on the candidates that contain these messages.
Semantically, there is a queue of upward messages where messages from each parachain are stored. Each parachain
can have only a limited number of pending messages and the total size of all pending messages is also limited. Each parachain
can dispatch multiple upward messages per candidate.
from a parachain up to the relay chain. Upward messages can serve different purposes and can be of different
kinds.
One kind of message is `Dispatchable`. They could be thought of as analogous to extrinsics sent to the relay chain: they also
invoke exposed runtime entrypoints, they consume weight and they require fees. The difference is that they originate from
a parachain. Each parachain has a queue of dispatchables awaiting execution, and only a limited number of dispatchables can be
queued at a time. The weight that processing of the dispatchables can consume is limited by a preconfigured value. Therefore, it is possible
that some dispatchables will be left for later blocks. To make the dispatching fairer, the queues are processed turn-by-turn
in a round-robin fashion.
Other kinds of upward messages can be introduced in the future as well. Potential candidates are channel management for
horizontal message passing (XCMP and HRMP, both are to be described below), new validation code signalling, or other
requests to the relay chain.
## Horizontal Message Passing
......
@@ -68,7 +68,7 @@ All failed checks should lead to an unrecoverable error making the block invalid
1. Ensure that any code upgrade scheduled by the candidate does not happen within `config.validation_upgrade_frequency` of `Paras::last_code_upgrade(para_id, true)`, if any, comparing against the value of `Paras::FutureCodeUpgrades` for the given para ID.
1. Check the collator's signature on the candidate data.
1. check the backing of the candidate using the signatures and the bitfields, comparing against the validators assigned to the groups, fetched with the `group_validators` lookup.
1. check that the upward messages, when combined with the existing queue size, do not exceed the `config.max_upward_queue_count` and `config.watermark_upward_queue_size` parameters.
1. call `Router::check_upward_messages(para, commitments.upward_messages)` to check that the upward messages are valid.
1. call `Router::check_processed_downward_messages(para, commitments.processed_downward_messages)` to check that the DMQ is properly drained.
1. call `Router::check_hrmp_watermark(para, commitments.hrmp_watermark)` for each candidate to check rules of processing the HRMP watermark.
1. check that in the commitments of each candidate the horizontal messages are sorted by ascending recipient ParaId and that no two horizontal messages have the same recipient.
@@ -79,7 +79,7 @@ All failed checks should lead to an unrecoverable error making the block invalid
* `enact_candidate(relay_parent_number: BlockNumber, CommittedCandidateReceipt)`:
1. If the receipt contains a code upgrade, call `Paras::schedule_code_upgrade(para_id, code, relay_parent_number + config.validation_upgrade_delay)`.
> TODO: Note that this is safe as long as we never enact candidates where the relay parent is across a session boundary. In that case, which we should be careful to avoid with contextual execution, the configuration might have changed and the para may de-sync from the host's understanding of it.
1. call `Router::queue_upward_messages` for each backed candidate, using the [`UpwardMessage`s](../types/messages.md#upward-message) from the [`CandidateCommitments`](../types/candidate.md#candidate-commitments).
1. call `Router::enact_upward_messages` for each backed candidate, using the [`UpwardMessage`s](../types/messages.md#upward-message) from the [`CandidateCommitments`](../types/candidate.md#candidate-commitments).
1. call `Router::queue_outbound_hrmp` with the para id of the candidate and the list of horizontal messages taken from the commitment,
1. call `Router::prune_hrmp` with the para id of the candidate and the candidate's `hrmp_watermark`.
1. call `Router::prune_dmq` with the para id of the candidate and the candidate's `processed_downward_messages`.
......
@@ -22,4 +22,5 @@ Included: Option<()>,
1. Invoke `Scheduler::schedule(freed)`
1. Invoke the `Inclusion::process_candidates` routine with the parameters `(backed_candidates, Scheduler::scheduled(), Scheduler::group_validators)`.
1. Call `Scheduler::occupied` using the return value of the `Inclusion::process_candidates` call above, first sorting the list of assigned core indices.
1. Call the `Router::process_upward_dispatchables` routine to execute all messages in upward dispatch queues.
1. If all of the above succeeds, set `Included` to `Some(())`.
@@ -10,16 +10,19 @@ Storage layout:
/// Paras that are to be cleaned up at the end of the session.
/// The entries are sorted ascending by the para id.
OutgoingParas: Vec<ParaId>;
/// Messages ready to be dispatched onto the relay chain.
/// Dispatchable objects ready to be dispatched onto the relay chain. The messages are processed in FIFO order.
/// This is subject to `max_upward_queue_count` and
/// `max_upward_queue_size` from `HostConfiguration`.
RelayDispatchQueues: map ParaId => Vec<UpwardMessage>;
RelayDispatchQueues: map ParaId => Vec<RawDispatchable>;
/// Size of the dispatch queues. Caches sizes of the queues in `RelayDispatchQueues`.
/// First item in the tuple is the count of messages and second
/// is the total length (in bytes) of the message payloads.
RelayDispatchQueueSize: map ParaId => (u32, u32);
/// The ordered list of `ParaId`s that have a `RelayDispatchQueue` entry.
NeedsDispatch: Vec<ParaId>;
/// This is the para that will get dispatched first during the next upward dispatchable queue
/// execution round.
NextDispatchRoundStartWith: Option<ParaId>;
/// The downward messages addressed for a certain para.
DownwardMessageQueues: map ParaId => Vec<DownwardMessage>;
```
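The cached `RelayDispatchQueueSize` entry must stay consistent with the actual queue contents. A minimal sketch of that invariant, modeling the two storage maps as plain in-memory collections (the `Router` struct and the concrete types here are assumptions for illustration, not the runtime's storage API):

```rust
use std::collections::HashMap;

type ParaId = u32;
type RawDispatchable = Vec<u8>;

// Hypothetical in-memory mirror of the two Router storage maps above.
#[derive(Default)]
struct Router {
    relay_dispatch_queues: HashMap<ParaId, Vec<RawDispatchable>>,
    // (message count, total payload bytes) per para.
    relay_dispatch_queue_size: HashMap<ParaId, (u32, u32)>,
}

impl Router {
    // Returns true iff the cached (count, bytes) entry agrees with the queue.
    fn check_size_cache(&self, para: ParaId) -> bool {
        let (count, bytes) = self
            .relay_dispatch_queue_size
            .get(&para)
            .copied()
            .unwrap_or((0, 0));
        let (actual_count, actual_bytes) = self
            .relay_dispatch_queues
            .get(&para)
            .map(|q| (q.len() as u32, q.iter().map(|m| m.len() as u32).sum::<u32>()))
            .unwrap_or((0, 0));
        count == actual_count && bytes == actual_bytes
    }
}
```

Every routine below that touches `RelayDispatchQueues` is expected to preserve this invariant.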
@@ -158,6 +161,12 @@ The following routines are intended to be invoked by paras' upward messages.
Candidate Acceptance Function:
* `check_upward_messages(P: ParaId, Vec<UpwardMessage>)`:
1. Checks that there are at most `config.max_upward_message_num_per_candidate` messages.
1. Checks each upward message individually depending on its kind:
1. If the message kind is `Dispatchable`:
1. Verify that `RelayDispatchQueueSize` for `P` has enough capacity for the message (NOTE that this should account for all preceding
upward messages of the `Dispatchable` kind in this candidate!)
* `check_processed_downward_messages(P: ParaId, processed_downward_messages)`:
1. Checks that `DownwardMessageQueues` for `P` is at least `processed_downward_messages` long.
1. Checks that `processed_downward_messages` is at least 1 if `DownwardMessageQueues` for `P` is not empty.
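The capacity rule of `check_upward_messages` can be sketched as follows. The config field names mirror the spec; the function shape and the `(count, bytes)` queue-state tuple are assumptions. Note how each message is checked against the queue state including the messages before it in the same candidate:

```rust
// Assumed subset of `HostConfiguration`, field names as in the spec.
struct Config {
    max_upward_message_num_per_candidate: u32,
    max_upward_queue_count: u32,
    max_upward_queue_size: u32,
}

// `queued` is the current `RelayDispatchQueueSize` entry for the para:
// (message count, total payload bytes). `messages` are the candidate's
// raw `Dispatchable` payloads.
fn check_upward_messages(cfg: &Config, queued: (u32, u32), messages: &[Vec<u8>]) -> bool {
    if messages.len() as u32 > cfg.max_upward_message_num_per_candidate {
        return false;
    }
    // Accumulate capacity use so that each message is checked against the
    // queue state *including* all messages processed before it.
    let (mut count, mut bytes) = queued;
    for m in messages {
        count += 1;
        bytes += m.len() as u32;
        if count > cfg.max_upward_queue_count || bytes > cfg.max_upward_queue_size {
            return false;
        }
    }
    true
}
```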
@@ -190,15 +199,39 @@ Candidate Enactment:
1. Set `HrmpWatermarks` for `P` to be equal to `new_hrmp_watermark`
* `prune_dmq(P: ParaId, processed_downward_messages)`:
1. Remove the first `processed_downward_messages` from the `DownwardMessageQueues` of `P`.
* `queue_upward_messages(ParaId, Vec<UpwardMessage>)`:
1. Updates `NeedsDispatch`, and enqueues upward messages into `RelayDispatchQueue` and modifies the respective entry in `RelayDispatchQueueSize`.
* `enact_upward_messages(P: ParaId, Vec<UpwardMessage>)`:
1. Process all upward messages in order depending on their kinds:
1. If the message kind is `Dispatchable`:
1. Append the message to `RelayDispatchQueues` for `P`
1. Increment the size and the count in `RelayDispatchQueueSize` for `P`.
1. Ensure that `P` is present in `NeedsDispatch`.
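A sketch of `enact_upward_messages`, with the storage maps modeled as plain collections (an assumption; the real runtime would use its storage API). The sorted insert keeps `NeedsDispatch` ordered, as the spec requires:

```rust
use std::collections::HashMap;

type ParaId = u32;
type RawDispatchable = Vec<u8>;

// Append each `Dispatchable` to the para's queue, bump the cached
// (count, bytes) size, and make sure the para is marked for dispatch.
fn enact_upward_messages(
    queues: &mut HashMap<ParaId, Vec<RawDispatchable>>,
    sizes: &mut HashMap<ParaId, (u32, u32)>,
    needs_dispatch: &mut Vec<ParaId>,
    para: ParaId,
    messages: Vec<RawDispatchable>,
) {
    for m in messages {
        let entry = sizes.entry(para).or_insert((0, 0));
        entry.0 += 1;
        entry.1 += m.len() as u32;
        queues.entry(para).or_default().push(m);
    }
    // `NeedsDispatch` is kept sorted; a sorted insert preserves that,
    // and an already-present para is left alone.
    if let Err(pos) = needs_dispatch.binary_search(&para) {
        needs_dispatch.insert(pos, para);
    }
}
```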
The following routine is intended to be called at the same time as `Paras::schedule_para_cleanup` is called.
`schedule_para_cleanup(ParaId)`:
1. Add the para into the `OutgoingParas` vector maintaining the sorted order.
The following routine is meant to execute pending entries in the upward dispatchable queues. This function doesn't fail, even if
any of the dispatchables returns an error.
`process_upward_dispatchables()`:
1. Initialize a cumulative weight counter `T` to 0.
1. Initialize a local in-memory dictionary `R` that maps `ParaId` to a vector of `DispatchResult`.
1. Iterate over the items in `NeedsDispatch` cyclically, starting with `NextDispatchRoundStartWith`. If it is `None`, start from the beginning. For each `P` encountered:
1. Dequeue the first dispatchable `D` from `RelayDispatchQueues` for `P`.
1. Decrement the message count in `RelayDispatchQueueSize` for `P` and subtract the size of `D` from it.
1. Decode `D` into a dispatchable. If decoding fails, append `DispatchResult::DecodeFailed` into `R` for `P`. Otherwise:
1. If `weight_of(D) > config.dispatchable_upward_message_critical_weight` then append `DispatchResult::CriticalWeightExceeded` into `R` for `P`. Otherwise:
1. Execute `D` and add the actual amount of weight consumed to `T`. Add the `DispatchResult` into `R` for `P`.
1. If `weight_of(D) + T > config.preferred_dispatchable_upward_messages_step_weight`, set `NextDispatchRoundStartWith` to `P` and finish processing.
> NOTE that in practice we would need to approach the weight calculation more thoroughly, i.e. incorporate all operations
> that could take place in the course of handling these dispatchables.
1. If `RelayDispatchQueues` for `P` became empty, remove `P` from `NeedsDispatch`.
1. If `NeedsDispatch` became empty then finish processing and set `NextDispatchRoundStartWith` to `None`.
1. Then, for each `P` and the vector of `DispatchResult` in `R`:
1. Obtain a message by wrapping the vector into `DownwardMessage::DispatchResult`
1. Append the resulting message to `DownwardMessageQueues` for `P`.
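The round-robin loop above can be modeled with decoding and execution stubbed out: each message is reduced to the weight it consumes and always "executes" successfully. This is a simplification for illustration, not the actual runtime code; it keeps only the cyclic iteration, the soft weight budget, and the `NextDispatchRoundStartWith` bookmark:

```rust
use std::collections::HashMap;

type ParaId = u32;
type Weight = u32;

struct State {
    // Each queue holds the per-message weights of the pending dispatchables.
    queues: HashMap<ParaId, Vec<Weight>>,
    needs_dispatch: Vec<ParaId>,
    next_start: Option<ParaId>,
    // Models `preferred_dispatchable_upward_messages_step_weight` (soft limit).
    preferred_step_weight: Weight,
}

// Returns the (para, weight) pairs executed this round, in execution order.
fn process_upward_dispatchables(s: &mut State) -> Vec<(ParaId, Weight)> {
    let mut executed = Vec::new();
    let mut total: Weight = 0;
    // Start cyclically from `next_start` if set, else from the beginning.
    let start = s
        .next_start
        .and_then(|p| s.needs_dispatch.iter().position(|&q| q == p))
        .unwrap_or(0);
    let mut i = start;
    while !s.needs_dispatch.is_empty() {
        let para = s.needs_dispatch[i];
        let queue = s.queues.get_mut(&para).unwrap();
        let w = queue.remove(0); // dequeue the first dispatchable
        total += w; // "execute" it and account for its weight
        executed.push((para, w));
        if queue.is_empty() {
            // Drained paras leave `NeedsDispatch`.
            s.needs_dispatch.remove(i);
            if s.needs_dispatch.is_empty() {
                s.next_start = None;
                break;
            }
            if i >= s.needs_dispatch.len() {
                i = 0;
            }
        } else {
            i = (i + 1) % s.needs_dispatch.len();
        }
        // Soft budget: once exceeded, bookmark where the next round starts.
        if total > s.preferred_step_weight {
            s.next_start = Some(s.needs_dispatch[i]);
            break;
        }
    }
    executed
}
```

Because the limit is soft, one round may overshoot `preferred_step_weight` by at most one dispatchable, which is why the critical per-message weight bound matters.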
## Session Change
1. Drain `OutgoingParas`. For each `P` in the list:
@@ -207,8 +240,9 @@ The following routine is intended to be called in the same time when `Paras::sch
1. Remove all `DownwardMessageQueues` of `P`.
1. Remove `RelayDispatchQueueSize` of `P`.
1. Remove `RelayDispatchQueues` of `P`.
1. Remove `P` if it exists in `NeedsDispatch`.
1. If `NextDispatchRoundStartWith` is equal to `Some(P)`, then reset it to `None`.
- Note that we don't remove the open/close requests since they are going to die out naturally.
TODO: What happens with the deposits in channels or open requests?
1. For each request `R` in `HrmpOpenChannelRequests`:
1. if `R.confirmed = false`:
1. increment `R.age` by 1.
@@ -234,7 +268,3 @@ To remove a channel `C` identified with a tuple `(sender, recipient)`:
1. Remove `C` from `HrmpChannelContents`.
1. Remove `recipient` from the set `HrmpEgressChannelsIndex` for `sender`.
1. Remove `sender` from the set `HrmpIngressChannelsIndex` for `recipient`.
## Finalization
1. Dispatch queued upward messages from `RelayDispatchQueues` in a FIFO order applying the `config.watermark_upward_queue_size` and `config.max_upward_queue_count` limits.
@@ -22,11 +22,24 @@ enum ParachainDispatchOrigin {
Root,
}
struct UpwardMessage {
/// The origin for the message to be sent from.
pub origin: ParachainDispatchOrigin,
/// The message data.
pub data: Vec<u8>,
/// An opaque byte buffer that encodes an entrypoint and the arguments that should be
/// provided to it upon dispatch.
///
/// NOTE: In order to be executed, the byte buffer has to be decoded, which can fail if
/// the encoding has changed.
type RawDispatchable = Vec<u8>;
enum UpwardMessage {
/// This upward message is meant to schedule execution of a provided dispatchable.
Dispatchable {
/// The origin with which the dispatchable should be executed.
origin: ParachainDispatchOrigin,
/// The dispatchable to be executed in its raw form.
dispatchable: RawDispatchable,
},
// Examples:
// HrmpOpenChannel { .. },
// HrmpCloseChannel { .. },
}
```
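A small usage sketch of the enum above. `Signed` and `Parachain` are assumed variants of `ParachainDispatchOrigin` (only `Root` appears in the excerpt), and `describe` is purely illustrative:

```rust
type RawDispatchable = Vec<u8>;

// `Signed` and `Parachain` are assumptions; only `Root` is shown in the spec
// excerpt above.
enum ParachainDispatchOrigin {
    Signed,
    Parachain,
    Root,
}

enum UpwardMessage {
    Dispatchable {
        origin: ParachainDispatchOrigin,
        dispatchable: RawDispatchable,
    },
}

// Illustrative consumer: pattern-match on the message kind and its origin.
fn describe(msg: &UpwardMessage) -> String {
    match msg {
        UpwardMessage::Dispatchable { origin, dispatchable } => {
            let o = match origin {
                ParachainDispatchOrigin::Signed => "signed",
                ParachainDispatchOrigin::Parachain => "parachain",
                ParachainDispatchOrigin::Root => "root",
            };
            format!("dispatchable ({} bytes) with {} origin", dispatchable.len(), o)
        }
    }
}
```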
@@ -53,11 +66,25 @@ struct InboundHrmpMessage {
## Downward Message
A message that goes down from the relay chain to a parachain. Such a message could be initiated
as a result of an operation that took place on the relay chain.
`DownwardMessage` is a message that goes down from the relay chain to a parachain. Such a message
could be seen as a notification; however, it is conceivable that they might be used by the relay
chain to send a request to the parachain (likely through the `ParachainSpecific` variant).
```rust,ignore
enum DispatchResult {
Executed {
success: bool,
},
/// Decoding `RawDispatchable` into an executable runtime representation has failed.
DecodeFailed,
/// A dispatchable in question exceeded the maximum amount of weight allowed.
CriticalWeightExceeded,
}
enum DownwardMessage {
/// The parachain receives a dispatch result for each sent dispatchable upward message, in the order
/// they were sent.
DispatchResult(Vec<DispatchResult>),
/// Some funds were transferred into the parachain's account. The hash is the identifier that
/// was given with the transfer.
TransferInto(AccountId, Balance, Remark),
......
@@ -39,7 +39,23 @@ struct HostConfiguration {
/// Total size of messages allowed in the parachain -> relay-chain message queue before which
/// no further messages may be added to it. If it exceeds this then the queue may contain only
/// a single message.
pub watermark_upward_queue_size: u32,
pub max_upward_queue_size: u32,
/// The amount of weight we wish to devote to the processing of the dispatchable upward
/// messages stage.
///
/// NOTE that this is a soft limit and could be exceeded.
pub preferred_dispatchable_upward_messages_step_weight: u32,
/// Any dispatchable upward message that requests more than the critical amount of weight is
/// rejected with `DispatchResult::CriticalWeightExceeded`.
///
/// The parameter value is picked so that no dispatchable can make the block weight exceed
/// the total budget, i.e. so that the sum of `preferred_dispatchable_upward_messages_step_weight`
/// and `dispatchable_upward_message_critical_weight` doesn't exceed the weight left over,
/// under a typical worst case (e.g. no upgrades, etc.), after the required phases of
/// block execution (i.e. initialization, finalization and inherents) have consumed their weight.
pub dispatchable_upward_message_critical_weight: u32,
/// The maximum number of messages that a candidate can contain.
pub max_upward_message_num_per_candidate: u32,
/// Number of sessions after which an HRMP open channel request expires.
pub hrmp_open_request_ttl: u32,
/// The deposit that the sender should provide for opening an HRMP channel.
......