Unverified Commit c7708818 authored by Sergey Pepyakin, committed by GitHub

Break down the Router module into Dmp, Ump, Hrmp modules (#1939)



* Guide: Split router module in guide.

Now we have: DMP, UMP and Router module.

* Add a glossary entry for what used to be called Router

* Extract DMP

* Extract UMP

* Extract HRMP

* Switch over to new modules

* Router: goodbye sweet prince

* Link to messaging overview for details.

* Update missed rococo and test runtimes.

* Commit destroyed by rebase changes

* Don't deprecate Router but rather make it a meta-project

Co-authored-by: Bernhard Schuster <bernhard@ahoi.io>

* Fix typos suggestion

Co-authored-by: Bernhard Schuster <bernhard@ahoi.io>

* Fix repetition in the impl guide

* Clarify that processed_downward_messages has the u32 type

* Remove the router subdir.

* Deabbreviate DMP, UMP, HRMP

Co-authored-by: Bernhard Schuster <bernhard@ahoi.io>
parent a5b75278
Pipeline #114131 passed with stages in 35 minutes and 22 seconds
@@ -16,7 +16,9 @@
- [Scheduler Module](runtime/scheduler.md)
- [Inclusion Module](runtime/inclusion.md)
- [InclusionInherent Module](runtime/inclusioninherent.md)
- [Router Module](runtime/router.md)
- [DMP Module](runtime/dmp.md)
- [UMP Module](runtime/ump.md)
- [HRMP Module](runtime/hrmp.md)
- [Session Info Module](runtime/session_info.md)
- [Runtime APIs](runtime-api/README.md)
- [Validators](runtime-api/validators.md)
......
@@ -24,6 +24,7 @@ Here you can find definitions of a bunch of jargon, usually specific to the Polk
- Parathread: A parachain which is scheduled on a pay-as-you-go basis.
- Proof-of-Validity (PoV): A stateless-client proof that a parachain candidate is valid, with respect to some validation function.
- Relay Parent: A block in the relay chain, referred to in a context where work is being done in the context of the state at this block.
- Router: A meta module that groups the three runtime modules responsible for routing messages between paras and the relay chain: Dmp, Ump, and Hrmp, each handling its respective part of message routing.
- Runtime: The relay-chain state machine.
- Runtime Module: See Module.
- Runtime API: A means for the node-side behavior to access structured information based on the state of a fork of the blockchain.
......
# DMP Module
A module responsible for Downward Message Processing (DMP). See [Messaging Overview](../messaging.md) for more details.
## Storage
General storage entries
```rust
/// Paras that are to be cleaned up at the end of the session.
/// The entries are sorted ascending by the para id.
OutgoingParas: Vec<ParaId>;
```
Storage layout required for implementation of DMP.
```rust
/// The downward messages addressed for a certain para.
DownwardMessageQueues: map ParaId => Vec<InboundDownwardMessage>;
/// A mapping that stores the downward message queue MQC head for each para.
///
/// Each link in this chain has a form:
/// `(prev_head, B, H(M))`, where
/// - `prev_head`: is the previous head hash or zero if none.
/// - `B`: is the relay-chain block number in which a message was appended.
/// - `H(M)`: is the hash of the message being appended.
DownwardMessageQueueHeads: map ParaId => Hash;
```
## Initialization
No initialization routine runs for this module.
## Routines
Candidate Acceptance Function:
* `check_processed_downward_messages(P: ParaId, processed_downward_messages: u32)`:
1. Checks that `DownwardMessageQueues` for `P` is at least `processed_downward_messages` long.
1. Checks that `processed_downward_messages` is at least 1 if `DownwardMessageQueues` for `P` is not empty.
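The two acceptance checks above can be sketched as a small standalone function. This is a hypothetical, simplified stand-in (a flat queue length instead of the real `DownwardMessageQueues` storage map, string errors instead of the runtime's error enum), not the actual runtime code:

```rust
/// Sketch of the candidate-acceptance check for processed downward messages.
/// `dmq_len` stands in for the length of `DownwardMessageQueues` for `P`.
fn check_processed_downward_messages(
    dmq_len: usize,
    processed_downward_messages: u32,
) -> Result<(), &'static str> {
    // The candidate cannot claim to have processed more messages than exist.
    if dmq_len < processed_downward_messages as usize {
        return Err("claimed more processed messages than the queue holds");
    }
    // A non-empty DMQ must be drained by at least one message per candidate.
    if dmq_len > 0 && processed_downward_messages == 0 {
        return Err("a non-empty DMQ must be processed by at least one message");
    }
    Ok(())
}
```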
Candidate Enactment:
* `prune_dmq(P: ParaId, processed_downward_messages: u32)`:
1. Remove the first `processed_downward_messages` from the `DownwardMessageQueues` of `P`.
Utility routines.
`queue_downward_message(P: ParaId, M: DownwardMessage)`:
1. Check if the size of `M` exceeds the `config.max_downward_message_size`. If so, return an error.
1. Wrap `M` into `InboundDownwardMessage` using the current block number for `sent_at`.
1. Obtain a new MQC link for the resulting `InboundDownwardMessage` and replace `DownwardMessageQueueHeads` for `P` with the resulting hash.
1. Add the resulting `InboundDownwardMessage` into `DownwardMessageQueues` for `P`.
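Each `queue_downward_message` call extends the message queue chain (MQC) by one link of the form `(prev_head, B, H(M))`, as described in the storage comment above. A minimal sketch of that link computation follows; it uses the standard library's `DefaultHasher` purely as a stand-in for the runtime's `BlakeTwo256`, so the hash values are illustrative only:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Stand-in for the runtime's cryptographic hash of a message, `H(M)`.
fn hash_of(bytes: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    h.finish()
}

/// One MQC link: the new head commits to the previous head, the relay-chain
/// block number the message was sent at, and the hash of the message itself.
fn mqc_link(prev_head: u64, sent_at: u32, message: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    prev_head.hash(&mut h);
    sent_at.hash(&mut h);
    hash_of(message).hash(&mut h);
    h.finish()
}
```

Because each head commits to the whole prefix of the queue, a parachain can verify it received every downward message by recomputing the chain and comparing against `DownwardMessageQueueHeads`.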
## Session Change
1. Drain `OutgoingParas`. For each `P` in the list:
1. Remove all `DownwardMessageQueues` of `P`.
1. Remove `DownwardMessageQueueHeads` for `P`.
# Router Module
The Router module is responsible for all messaging mechanisms supported between paras and the relay chain, specifically: UMP, DMP, HRMP and later XCMP.

# HRMP Module
A module responsible for Horizontally Relay-routed Message Passing (HRMP). See [Messaging Overview](../messaging.md) for more details.
## Storage
@@ -12,61 +12,6 @@ General storage entries
OutgoingParas: Vec<ParaId>;
```
### Upward Message Passing (UMP)
```rust
/// The messages waiting to be handled by the relay-chain originating from a certain parachain.
///
/// Note that some upward messages might have been already processed by the inclusion logic. E.g.
/// channel management messages.
///
/// The messages are processed in FIFO order.
RelayDispatchQueues: map ParaId => Vec<UpwardMessage>;
/// Size of the dispatch queues. Caches sizes of the queues in `RelayDispatchQueues`.
///
/// First item in the tuple is the count of messages and second
/// is the total length (in bytes) of the message payloads.
///
/// Note that this is an auxiliary mapping: it's possible to tell the byte size and the number of
/// messages only looking at `RelayDispatchQueues`. This mapping is separate to avoid the cost of
/// loading the whole message queue if only the total size and count are required.
///
/// Invariant:
/// - The set of keys should exactly match the set of keys of `RelayDispatchQueues`.
RelayDispatchQueueSize: map ParaId => (u32, u32); // (num_messages, total_bytes)
/// The ordered list of `ParaId`s that have a `RelayDispatchQueue` entry.
///
/// Invariant:
/// - The set of items from this vector should be exactly the set of the keys in
/// `RelayDispatchQueues` and `RelayDispatchQueueSize`.
NeedsDispatch: Vec<ParaId>;
/// This is the para that gets dispatched first during the next upward dispatchable queue
/// execution round.
///
/// Invariant:
/// - If `Some(para)`, then `para` must be present in `NeedsDispatch`.
NextDispatchRoundStartWith: Option<ParaId>;
```
### Downward Message Passing (DMP)
Storage layout required for implementation of DMP.
```rust
/// The downward messages addressed for a certain para.
DownwardMessageQueues: map ParaId => Vec<InboundDownwardMessage>;
/// A mapping that stores the downward message queue MQC head for each para.
///
/// Each link in this chain has a form:
/// `(prev_head, B, H(M))`, where
/// - `prev_head`: is the previous head hash or zero if none.
/// - `B`: is the relay-chain block number in which a message was appended.
/// - `H(M)`: is the hash of the message being appended.
DownwardMessageQueueHeads: map ParaId => Hash;
```
### HRMP
HRMP related structs:
```rust
@@ -189,13 +134,6 @@ No initialization routine runs for this module.
Candidate Acceptance Function:
* `check_upward_messages(P: ParaId, Vec<UpwardMessage>)`:
1. Checks that there are at most `config.max_upward_message_num_per_candidate` messages.
1. Checks that no message exceeds `config.max_upward_message_size`.
1. Verify that `RelayDispatchQueueSize` for `P` has enough capacity for the messages.
* `check_processed_downward_messages(P: ParaId, processed_downward_messages)`:
1. Checks that `DownwardMessageQueues` for `P` is at least `processed_downward_messages` long.
1. Checks that `processed_downward_messages` is at least 1 if `DownwardMessageQueues` for `P` is not empty.
* `check_hrmp_watermark(P: ParaId, new_hrmp_watermark)`:
1. `new_hrmp_watermark` should be strictly greater than the value of `HrmpWatermarks` for `P` (if any).
1. `new_hrmp_watermark` must not be greater than the context's block number.
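The two HRMP watermark rules above can be sketched as a small standalone check; the types and error strings here are simplified stand-ins for the runtime's actual storage lookups and error enum:

```rust
/// Sketch of the HRMP watermark acceptance rule. `current_watermark` stands
/// in for the `HrmpWatermarks` entry for `P`, if one exists.
fn check_hrmp_watermark(
    current_watermark: Option<u32>,
    relay_chain_block: u32,
    new_hrmp_watermark: u32,
) -> Result<(), &'static str> {
    // The watermark must strictly advance past the previously recorded one.
    if let Some(w) = current_watermark {
        if new_hrmp_watermark <= w {
            return Err("the watermark must strictly advance");
        }
    }
    // The watermark cannot point past the context's block number.
    if new_hrmp_watermark > relay_chain_block {
        return Err("the watermark cannot point into the future");
    }
    Ok(())
}
```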
@@ -232,42 +170,12 @@ Candidate Enactment:
> parametrization this shouldn't be that big of a deal.
> If that becomes a problem consider introducing an extra dictionary which says at what block the given sender
> sent a message to the recipient.
* `prune_dmq(P: ParaId, processed_downward_messages)`:
1. Remove the first `processed_downward_messages` from the `DownwardMessageQueues` of `P`.
* `enact_upward_messages(P: ParaId, Vec<UpwardMessage>)`:
1. Process each upward message `M` in order:
1. Append the message to `RelayDispatchQueues` for `P`
1. Increment the size and the count in `RelayDispatchQueueSize` for `P`.
1. Ensure that `P` is present in `NeedsDispatch`.
The following routine is intended to be called at the same time as `Paras::schedule_para_cleanup` is called.
`schedule_para_cleanup(ParaId)`:
1. Add the para into the `OutgoingParas` vector maintaining the sorted order.
The following routine is meant to execute pending entries in upward message queues. This function doesn't fail, even if
dispatching any of the individual upward messages returns an error.
`process_pending_upward_messages()`:
1. Initialize a cumulative weight counter `T` to 0
1. Iterate over items in `NeedsDispatch` cyclically, starting with `NextDispatchRoundStartWith`. If the item specified is `None` start from the beginning. For each `P` encountered:
1. Dequeue the first upward message `D` from `RelayDispatchQueues` for `P`
1. Decrement the size of the message from `RelayDispatchQueueSize` for `P`
1. Delegate processing of the message to the runtime. The weight consumed is added to `T`.
1. If `T >= config.preferred_dispatchable_upward_messages_step_weight`, set `NextDispatchRoundStartWith` to `P` and finish processing.
1. If `RelayDispatchQueues` for `P` became empty, remove `P` from `NeedsDispatch`.
1. If `NeedsDispatch` became empty then finish processing and set `NextDispatchRoundStartWith` to `None`.
> NOTE that in practice we would need to approach the weight calculation more thoroughly, i.e. incorporate all operations
> that could take place in the course of handling these upward messages.
Utility routines.
`queue_downward_message(P: ParaId, M: DownwardMessage)`:
1. Check if the size of `M` exceeds the `config.max_downward_message_size`. If so, return an error.
1. Wrap `M` into `InboundDownwardMessage` using the current block number for `sent_at`.
1. Obtain a new MQC link for the resulting `InboundDownwardMessage` and replace `DownwardMessageQueueHeads` for `P` with the resulting hash.
1. Add the resulting `InboundDownwardMessage` into `DownwardMessageQueues` for `P`.
## Entry-points
The following entry-points are meant to be used for HRMP channel management.
@@ -336,15 +244,8 @@ the parachain executed the message.
1. Drain `OutgoingParas`. For each `P` in the list:
1. Remove all inbound channels of `P`, i.e. `(_, P)`,
1. Remove all outbound channels of `P`, i.e. `(P, _)`,
1. Remove all `DownwardMessageQueues` of `P`.
1. Remove `DownwardMessageQueueHeads` for `P`.
1. Remove `RelayDispatchQueueSize` of `P`.
1. Remove `RelayDispatchQueues` of `P`.
1. Remove `HrmpOpenChannelRequestCount` for `P`
1. Remove `HrmpAcceptedChannelRequestCount` for `P`.
1. Remove `P` if it exists in `NeedsDispatch`.
1. If `P` is in `NextDispatchRoundStartWith`, then reset it to `None`
- Note that we don't remove the open/close requests since they are going to die out naturally at the end of the session.
1. For each channel designator `D` in `HrmpOpenChannelRequestsList` we query the request `R` from `HrmpOpenChannelRequests`:
1. if `R.confirmed = false`:
1. increment `R.age` by 1.
......
@@ -67,20 +67,20 @@ All failed checks should lead to an unrecoverable error making the block invalid
1. Ensure that any code upgrade scheduled by the candidate does not happen within `config.validation_upgrade_frequency` of `Paras::last_code_upgrade(para_id, true)`, if any, comparing against the value of `Paras::FutureCodeUpgrades` for the given para ID.
1. Check the collator's signature on the candidate data.
1. check the backing of the candidate using the signatures and the bitfields, comparing against the validators assigned to the groups, fetched with the `group_validators` lookup.
1. call `Router::check_upward_messages(para, commitments.upward_messages)` to check that the upward messages are valid.
1. call `Router::check_processed_downward_messages(para, commitments.processed_downward_messages)` to check that the DMQ is properly drained.
1. call `Router::check_hrmp_watermark(para, commitments.hrmp_watermark)` for each candidate to check rules of processing the HRMP watermark.
1. using `Router::check_outbound_hrmp(sender, commitments.horizontal_messages)` ensure that each candidate sent a valid set of horizontal messages.
1. call `Ump::check_upward_messages(para, commitments.upward_messages)` to check that the upward messages are valid.
1. call `Dmp::check_processed_downward_messages(para, commitments.processed_downward_messages)` to check that the DMQ is properly drained.
1. call `Hrmp::check_hrmp_watermark(para, commitments.hrmp_watermark)` for each candidate to check rules of processing the HRMP watermark.
1. using `Hrmp::check_outbound_hrmp(sender, commitments.horizontal_messages)` ensure that each candidate sent a valid set of horizontal messages.
1. create an entry in the `PendingAvailability` map for each backed candidate with a blank `availability_votes` bitfield.
1. create a corresponding entry in the `PendingAvailabilityCommitments` with the commitments.
1. Return a `Vec<CoreIndex>` of all scheduled cores of the list of passed assignments that a candidate was successfully backed for, sorted ascending by CoreIndex.
* `enact_candidate(relay_parent_number: BlockNumber, CommittedCandidateReceipt)`:
1. If the receipt contains a code upgrade, call `Paras::schedule_code_upgrade(para_id, code, relay_parent_number + config.validation_upgrade_delay)`.
> TODO: Note that this is safe as long as we never enact candidates where the relay parent is across a session boundary. In that case, which we should be careful to avoid with contextual execution, the configuration might have changed and the para may de-sync from the host's understanding of it.
1. call `Router::enact_upward_messages` for each backed candidate, using the [`UpwardMessage`s](../types/messages.md#upward-message) from the [`CandidateCommitments`](../types/candidate.md#candidate-commitments).
1. call `Router::prune_dmq` with the para id of the candidate and the candidate's `processed_downward_messages`.
1. call `Router::prune_hrmp` with the para id of the candidate and the candidate's `hrmp_watermark`.
1. call `Router::queue_outbound_hrmp` with the para id of the candidate and the list of horizontal messages taken from the commitment.
1. call `Ump::enact_upward_messages` for each backed candidate, using the [`UpwardMessage`s](../types/messages.md#upward-message) from the [`CandidateCommitments`](../types/candidate.md#candidate-commitments).
1. call `Dmp::prune_dmq` with the para id of the candidate and the candidate's `processed_downward_messages`.
1. call `Hrmp::prune_hrmp` with the para id of the candidate and the candidate's `hrmp_watermark`.
1. call `Hrmp::queue_outbound_hrmp` with the para id of the candidate and the list of horizontal messages taken from the commitment.
1. Call `Paras::note_new_head` using the `HeadData` from the receipt and `relay_parent_number`.
* `collect_pending`:
......
@@ -22,5 +22,5 @@ Included: Option<()>,
1. Invoke `Scheduler::schedule(freed)`
1. Invoke the `Inclusion::process_candidates` routine with the parameters `(backed_candidates, Scheduler::scheduled(), Scheduler::group_validators)`.
1. Call `Scheduler::occupied` using the return value of the `Inclusion::process_candidates` call above, first sorting the list of assigned core indices.
1. Call the `Router::process_pending_upward_messages` routine to execute all messages in upward dispatch queues.
1. Call the `Ump::process_pending_upward_messages` routine to execute all messages in upward dispatch queues.
1. If all of the above succeeds, set `Included` to `Some(())`.
@@ -23,8 +23,10 @@ The other parachains modules are initialized in this order:
1. Paras
1. Scheduler
1. Inclusion
1. Validity.
1. Router.
1. Validity
1. DMP
1. UMP
1. HRMP
The [Configuration Module](configuration.md) is first, since all other modules need to operate under the same configuration as each other. It would lead to inconsistency if, for example, the scheduler ran first and then the configuration was updated before the Inclusion module.
......
# UMP Module
A module responsible for Upward Message Passing (UMP). See [Messaging Overview](../messaging.md) for more details.
## Storage
General storage entries
```rust
/// Paras that are to be cleaned up at the end of the session.
/// The entries are sorted ascending by the para id.
OutgoingParas: Vec<ParaId>;
```
Storage related to UMP
```rust
/// The messages waiting to be handled by the relay-chain originating from a certain parachain.
///
/// Note that some upward messages might have been already processed by the inclusion logic. E.g.
/// channel management messages.
///
/// The messages are processed in FIFO order.
RelayDispatchQueues: map ParaId => Vec<UpwardMessage>;
/// Size of the dispatch queues. Caches sizes of the queues in `RelayDispatchQueues`.
///
/// First item in the tuple is the count of messages and second
/// is the total length (in bytes) of the message payloads.
///
/// Note that this is an auxiliary mapping: it's possible to tell the byte size and the number of
/// messages only looking at `RelayDispatchQueues`. This mapping is separate to avoid the cost of
/// loading the whole message queue if only the total size and count are required.
///
/// Invariant:
/// - The set of keys should exactly match the set of keys of `RelayDispatchQueues`.
RelayDispatchQueueSize: map ParaId => (u32, u32); // (num_messages, total_bytes)
/// The ordered list of `ParaId`s that have a `RelayDispatchQueue` entry.
///
/// Invariant:
/// - The set of items from this vector should be exactly the set of the keys in
/// `RelayDispatchQueues` and `RelayDispatchQueueSize`.
NeedsDispatch: Vec<ParaId>;
/// This is the para that gets dispatched first during the next upward dispatchable queue
/// execution round.
///
/// Invariant:
/// - If `Some(para)`, then `para` must be present in `NeedsDispatch`.
NextDispatchRoundStartWith: Option<ParaId>;
```
## Initialization
No initialization routine runs for this module.
## Routines
Candidate Acceptance Function:
* `check_upward_messages(P: ParaId, Vec<UpwardMessage>)`:
1. Checks that there are at most `config.max_upward_message_num_per_candidate` messages.
1. Checks that no message exceeds `config.max_upward_message_size`.
1. Verify that `RelayDispatchQueueSize` for `P` has enough capacity for the messages.
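The three acceptance checks above can be sketched as one standalone function. The `Config` struct and the `(num_messages, total_bytes)` tuple below are simplified stand-ins for the real `HostConfiguration` and the `RelayDispatchQueueSize` entry; the capacity limits (`max_upward_queue_count`, `max_upward_queue_size`) are assumed names for the queue-capacity knobs:

```rust
/// Stand-in for the subset of HostConfiguration relevant to these checks.
struct Config {
    max_upward_message_num_per_candidate: u32,
    max_upward_message_size: u32,
    max_upward_queue_count: u32,
    max_upward_queue_size: u32,
}

/// Sketch of `check_upward_messages`: per-candidate count, per-message size,
/// and remaining capacity of the relay dispatch queue for `P`.
fn check_upward_messages(
    config: &Config,
    queue_size: (u32, u32), // (num_messages, total_bytes) for P
    messages: &[Vec<u8>],
) -> Result<(), &'static str> {
    if messages.len() as u32 > config.max_upward_message_num_per_candidate {
        return Err("too many upward messages in one candidate");
    }
    let mut extra_bytes = 0u32;
    for m in messages {
        if m.len() as u32 > config.max_upward_message_size {
            return Err("an upward message exceeds the size limit");
        }
        extra_bytes += m.len() as u32;
    }
    let (count, bytes) = queue_size;
    if count + messages.len() as u32 > config.max_upward_queue_count
        || bytes + extra_bytes > config.max_upward_queue_size
    {
        return Err("the relay dispatch queue has no capacity left");
    }
    Ok(())
}
```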
Candidate Enactment:
* `enact_upward_messages(P: ParaId, Vec<UpwardMessage>)`:
1. Process each upward message `M` in order:
1. Append the message to `RelayDispatchQueues` for `P`
1. Increment the size and the count in `RelayDispatchQueueSize` for `P`.
1. Ensure that `P` is present in `NeedsDispatch`.
The following routine is intended to be called at the same time as `Paras::schedule_para_cleanup` is called.
`schedule_para_cleanup(ParaId)`:
1. Add the para into the `OutgoingParas` vector maintaining the sorted order.
The following routine is meant to execute pending entries in upward message queues. This function doesn't fail, even if
dispatching any of the individual upward messages returns an error.
`process_pending_upward_messages()`:
1. Initialize a cumulative weight counter `T` to 0
1. Iterate over items in `NeedsDispatch` cyclically, starting with `NextDispatchRoundStartWith`. If the item specified is `None` start from the beginning. For each `P` encountered:
1. Dequeue the first upward message `D` from `RelayDispatchQueues` for `P`
1. Decrement the size of the message from `RelayDispatchQueueSize` for `P`
1. Delegate processing of the message to the runtime. The weight consumed is added to `T`.
1. If `T >= config.preferred_dispatchable_upward_messages_step_weight`, set `NextDispatchRoundStartWith` to `P` and finish processing.
1. If `RelayDispatchQueues` for `P` became empty, remove `P` from `NeedsDispatch`.
1. If `NeedsDispatch` became empty then finish processing and set `NextDispatchRoundStartWith` to `None`.
> NOTE that in practice we would need to approach the weight calculation more thoroughly, i.e. incorporate all operations
> that could take place in the course of handling these upward messages.
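The weight-limited, cyclic dispatch loop above can be sketched in-memory. This is a hypothetical model, not the runtime code: `Weight` is a plain `u64`, "processing" a message simply charges a pre-assigned weight, and the storage entries become ordinary collections:

```rust
use std::collections::HashMap;

type ParaId = u32;
type Weight = u64;

/// In-memory stand-ins for the storage entries described above.
struct UmpState {
    relay_dispatch_queues: HashMap<ParaId, Vec<Weight>>,
    needs_dispatch: Vec<ParaId>,
    next_dispatch_round_start_with: Option<ParaId>,
}

/// Returns the total weight spent this round.
fn process_pending_upward_messages(state: &mut UmpState, step_weight: Weight) -> Weight {
    let mut spent: Weight = 0;
    // Resume cyclically from the para recorded at the end of the last round.
    let mut idx = match state.next_dispatch_round_start_with {
        Some(p) => state.needs_dispatch.iter().position(|&x| x == p).unwrap_or(0),
        None => 0,
    };
    while !state.needs_dispatch.is_empty() {
        idx %= state.needs_dispatch.len();
        let p = state.needs_dispatch[idx];
        let queue = state
            .relay_dispatch_queues
            .get_mut(&p)
            .expect("invariant: every para in NeedsDispatch has a queue");
        // Dequeue the first message and "process" it by charging its weight.
        spent += queue.remove(0);
        if queue.is_empty() {
            state.relay_dispatch_queues.remove(&p);
            state.needs_dispatch.remove(idx); // idx now points at the next para
        } else {
            idx += 1;
        }
        if spent >= step_weight && !state.needs_dispatch.is_empty() {
            // Out of budget: remember where to resume next round.
            state.next_dispatch_round_start_with =
                Some(state.needs_dispatch[idx % state.needs_dispatch.len()]);
            return spent;
        }
    }
    state.next_dispatch_round_start_with = None;
    spent
}
```

Rotating the starting para between rounds keeps one chatty para from starving the others of dispatch weight.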
## Session Change
1. Drain `OutgoingParas`. For each `P` in the list:
1. Remove `RelayDispatchQueueSize` of `P`.
1. Remove `RelayDispatchQueues` of `P`.
1. Remove `P` if it exists in `NeedsDispatch`.
1. If `P` is in `NextDispatchRoundStartWith`, then reset it to `None`
- Note that we don't remove the open/close requests since they are going to die out naturally at the end of the session.
@@ -33,7 +33,7 @@ use runtime_parachains::{
self,
ParaGenesisArgs,
},
router,
dmp, ump, hrmp,
ensure_parachain,
Origin,
};
@@ -41,7 +41,7 @@ use runtime_parachains::{
type BalanceOf<T> =
<<T as Trait>::Currency as Currency<<T as frame_system::Trait>::AccountId>>::Balance;
pub trait Trait: paras::Trait + router::Trait {
pub trait Trait: paras::Trait + dmp::Trait + ump::Trait + hrmp::Trait {
/// The aggregated origin type must support the `parachains` origin. We require that we can
/// infallibly convert between this origin and the system origin, but in reality, they're the
/// same type, we just can't express that to the Rust type system without writing a `where`
@@ -125,7 +125,7 @@ decl_module! {
parachain: false,
};
<paras::Module<T>>::schedule_para_initialize(id, genesis);
runtime_parachains::schedule_para_initialize::<T>(id, genesis);
Ok(())
}
@@ -150,8 +150,7 @@ decl_module! {
let debtor = <Debtors<T>>::take(id);
let _ = <T as Trait>::Currency::unreserve(&debtor, T::ParathreadDeposit::get());
<paras::Module<T>>::schedule_para_cleanup(id);
<router::Module::<T>>::schedule_para_cleanup(id);
runtime_parachains::schedule_para_cleanup::<T>(id);
Ok(())
}
@@ -231,7 +230,7 @@ impl<T: Trait> Module<T> {
parachain: true,
};
<paras::Module<T>>::schedule_para_initialize(id, genesis);
runtime_parachains::schedule_para_initialize::<T>(id, genesis);
Ok(())
}
@@ -242,8 +241,7 @@ impl<T: Trait> Module<T> {
ensure!(is_parachain, Error::<T>::InvalidChainId);
<paras::Module<T>>::schedule_para_cleanup(id);
<router::Module::<T>>::schedule_para_cleanup(id);
runtime_parachains::schedule_para_cleanup::<T>(id);
Ok(())
}
@@ -267,7 +265,7 @@ mod tests {
impl_outer_origin, impl_outer_dispatch, assert_ok, parameter_types,
};
use keyring::Sr25519Keyring;
use runtime_parachains::{initializer, configuration, inclusion, router, scheduler};
use runtime_parachains::{initializer, configuration, inclusion, scheduler, dmp, ump, hrmp};
use pallet_session::OneSessionHandler;
impl_outer_origin! {
@@ -425,8 +423,13 @@ mod tests {
type WeightInfo = ();
}
impl router::Trait for Test {
impl dmp::Trait for Test {}
impl ump::Trait for Test {
type UmpSink = ();
}
impl hrmp::Trait for Test {
type Origin = Origin;
}
......
@@ -23,13 +23,12 @@ use frame_support::{
};
use frame_system::ensure_root;
use runtime_parachains::{
router,
paras::{self, ParaGenesisArgs},
dmp, ump, hrmp, paras::{self, ParaGenesisArgs},
};
use primitives::v1::Id as ParaId;
/// The module's configuration trait.
pub trait Trait: paras::Trait + router::Trait { }
pub trait Trait: paras::Trait + dmp::Trait + ump::Trait + hrmp::Trait { }
decl_error! {
pub enum Error for Module<T: Trait> { }
@@ -48,7 +47,7 @@ decl_module! {
genesis: ParaGenesisArgs,
) -> DispatchResult {
ensure_root(origin)?;
paras::Module::<T>::schedule_para_initialize(id, genesis);
runtime_parachains::schedule_para_initialize::<T>(id, genesis);
Ok(())
}
@@ -56,8 +55,7 @@ decl_module! {
#[weight = (1_000, DispatchClass::Operational)]
pub fn sudo_schedule_para_cleanup(origin, id: ParaId) -> DispatchResult {
ensure_root(origin)?;
paras::Module::<T>::schedule_para_cleanup(id);
router::Module::<T>::schedule_para_cleanup(id);
runtime_parachains::schedule_para_cleanup::<T>(id);
Ok(())
}
}
......
@@ -14,9 +14,11 @@
// You should have received a copy of the GNU General Public License
// along with Polkadot. If not, see <http://www.gnu.org/licenses/>.
use super::{Trait, Module, Store};
use crate::configuration::HostConfiguration;
use frame_support::{StorageMap, weights::Weight, traits::Get};
use crate::{
configuration::{self, HostConfiguration},
initializer,
};
use frame_support::{decl_module, decl_storage, StorageMap, weights::Weight, traits::Get};
use sp_std::{fmt, prelude::*};
use sp_runtime::traits::{BlakeTwo256, Hash as HashT, SaturatedConversion};
use primitives::v1::{Id as ParaId, DownwardMessage, InboundDownwardMessage, Hash};
@@ -60,13 +62,72 @@ impl fmt::Debug for ProcessedDownwardMessagesAcceptanceErr {
}
}
pub trait Trait: frame_system::Trait + configuration::Trait {}
decl_storage! {
trait Store for Module<T: Trait> as Dmp {
/// Paras that are to be cleaned up at the end of the session.
/// The entries are sorted ascending by the para id.
OutgoingParas: Vec<ParaId>;
/// The downward messages addressed for a certain para.
DownwardMessageQueues: map hasher(twox_64_concat) ParaId => Vec<InboundDownwardMessage<T::BlockNumber>>;
/// A mapping that stores the downward message queue MQC head for each para.
///
/// Each link in this chain has a form:
/// `(prev_head, B, H(M))`, where
/// - `prev_head`: is the previous head hash or zero if none.
/// - `B`: is the relay-chain block number in which a message was appended.
/// - `H(M)`: is the hash of the message being appended.
DownwardMessageQueueHeads: map hasher(twox_64_concat) ParaId => Hash;
}
}
decl_module! {
/// The DMP module.
pub struct Module<T: Trait> for enum Call where origin: <T as frame_system::Trait>::Origin { }
}
/// Routines and getters related to downward message passing.
impl<T: Trait> Module<T> {
pub(crate) fn clean_dmp_after_outgoing(outgoing_para: ParaId) {
/// Block initialization logic, called by initializer.
pub(crate) fn initializer_initialize(_now: T::BlockNumber) -> Weight {
0
}
/// Block finalization logic, called by initializer.
pub(crate) fn initializer_finalize() {}
/// Called by the initializer to note that a new session has started.
pub(crate) fn initializer_on_new_session(
_notification: &initializer::SessionChangeNotification<T::BlockNumber>,
) {
Self::perform_outgoing_para_cleanup();
}
/// Iterate over all paras that were registered for offboarding and remove all the data
/// associated with them.
fn perform_outgoing_para_cleanup() {
let outgoing = OutgoingParas::take();
for outgoing_para in outgoing {
Self::clean_dmp_after_outgoing(outgoing_para);
}
}
fn clean_dmp_after_outgoing(outgoing_para: ParaId) {
<Self as Store>::DownwardMessageQueues::remove(&outgoing_para);
<Self as Store>::DownwardMessageQueueHeads::remove(&outgoing_para);
}
/// Schedule a para to be cleaned up at the start of the next session.
pub(crate) fn schedule_para_cleanup(id: ParaId) {
OutgoingParas::mutate(|v| {
if let Err(i) = v.binary_search(&id) {
v.insert(i, id);
}
});
}
/// Enqueue a downward message to a specific recipient para.
///
/// When encoded, the message should not exceed the `config.max_downward_message_size`.
@@ -165,19 +226,45 @@ impl<T: Trait> Module<T> {
#[cfg(test)]
mod tests {
use super::*;
use crate::mock::{Configuration, Router, new_test_ext};
use crate::router::{
OutgoingParas,
tests::{default_genesis_config, run_to_block},
};
use primitives::v1::BlockNumber;
use frame_support::StorageValue;
use frame_support::traits::{OnFinalize, OnInitialize};
use codec::Encode;
use crate::mock::{Configuration, new_test_ext, System, Dmp, GenesisConfig as MockGenesisConfig};
pub(crate) fn run_to_block(to: BlockNumber, new_session: Option<Vec<BlockNumber>>) {
while System::block_number() < to {
let b = System::block_number();
Dmp::initializer_finalize();
System::on_finalize(b);
System::on_initialize(b + 1);
System::set_block_number(b + 1);
if new_session.as_ref().map_or(false,