Unverified Commit d5f651c1 authored by Andronik Ordian, committed by GitHub

some fixes to please cargo-spellcheck (#3550)

* some fixes to please cargo-spellcheck

* some (not all) fixes for the impl guide

* fix
parent a52dca2b
......@@ -271,7 +271,7 @@ impl IsSystem for Sibling {
}
}
/// This type can be converted into and possibly from an AccountId (which itself is generic).
/// This type can be converted into and possibly from an [`AccountId`] (which itself is generic).
pub trait AccountIdConversion<AccountId>: Sized {
/// Convert into an account ID. This is infallible.
fn into_account(&self) -> AccountId;
......@@ -300,7 +300,7 @@ impl<'a> parity_scale_codec::Input for TrailingZeroInput<'a> {
}
/// Format is b"para" ++ encode(parachain ID) ++ 00.... where 00... is indefinite trailing
/// zeroes to fill AccountId.
/// zeroes to fill [`AccountId`].
impl<T: Encode + Decode + Default> AccountIdConversion<T> for Id {
fn into_account(&self) -> T {
(b"para", self)
......
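As a standalone illustration of the account layout documented above, here is a minimal sketch assuming the parachain ID encodes as four little-endian bytes (as SCALE does for a `u32`). This is not the `TrailingZeroInput`-based implementation shown in the truncated hunk, just the documented layout `b"para" ++ encode(parachain ID) ++ trailing zeroes`:

```rust
// Hypothetical helper: pad a parachain ID into a 32-byte account ID following
// the layout described in the doc comment above.
fn para_id_to_account_id_32(para_id: u32) -> [u8; 32] {
    let mut account = [0u8; 32];
    // Prefix: the literal bytes b"para".
    account[..4].copy_from_slice(b"para");
    // SCALE encodes a u32 as 4 little-endian bytes.
    account[4..8].copy_from_slice(&para_id.to_le_bytes());
    // The remaining 24 bytes stay zero, matching the "indefinite trailing
    // zeroes" described above.
    account
}

fn main() {
    let account = para_id_to_account_id_32(1000);
    assert_eq!(&account[..4], b"para");
    assert_eq!(&account[4..8], &1000u32.to_le_bytes());
    assert!(account[8..].iter().all(|b| *b == 0));
}
```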
# The Polkadot Parachain Host Implementers' Guide
The implementers' guide is compiled from several source files with [mdBook](https://github.com/rust-lang/mdBook). To view it live, locally, from the repo root:
The implementers' guide is compiled from several source files with [`mdBook`](https://github.com/rust-lang/mdBook).
To view it live, locally, from the repo root:
```sh
cargo install mdbook mdbook-linkcheck mdbook-graphviz
......
......@@ -11,18 +11,18 @@
- [Architecture Overview](architecture.md)
- [Messaging Overview](messaging.md)
- [Runtime Architecture](runtime/README.md)
- [Initializer Module](runtime/initializer.md)
- [Configuration Module](runtime/configuration.md)
- [Shared](runtime/shared.md)
- [Disputes Module](runtime/disputes.md)
- [Paras Module](runtime/paras.md)
- [Scheduler Module](runtime/scheduler.md)
- [Inclusion Module](runtime/inclusion.md)
- [ParaInherent Module](runtime/parainherent.md)
- [DMP Module](runtime/dmp.md)
- [UMP Module](runtime/ump.md)
- [HRMP Module](runtime/hrmp.md)
- [Session Info Module](runtime/session_info.md)
- [`Initializer` Module](runtime/initializer.md)
- [`Configuration` Module](runtime/configuration.md)
- [`Shared`](runtime/shared.md)
- [`Disputes` Module](runtime/disputes.md)
- [`Paras` Module](runtime/paras.md)
- [`Scheduler` Module](runtime/scheduler.md)
- [`Inclusion` Module](runtime/inclusion.md)
- [`ParaInherent` Module](runtime/parainherent.md)
- [`DMP` Module](runtime/dmp.md)
- [`UMP` Module](runtime/ump.md)
- [`HRMP` Module](runtime/hrmp.md)
- [`Session Info` Module](runtime/session_info.md)
- [Runtime APIs](runtime-api/README.md)
- [Validators](runtime-api/validators.md)
- [Validator Groups](runtime-api/validator-groups.md)
......
......@@ -82,7 +82,7 @@ Only peers that already voted shall be queried for the dispute availability data
The peer to be queried for dispute data must be picked at random.
A validator must retain code, persisted validation data and PoV until a block, that contains the dispute resolution, is finalized - plus an additional 24h.
A validator must retain code, persisted validation data and PoV until a block that contains the dispute resolution is finalized, plus an additional 24 hours.
Dispute availability gossip must continue beyond the dispute resolution, until the post-resolution timeout has expired (equivalent to the timeout until which additional late votes are accepted).
......@@ -108,7 +108,7 @@ If the count of votes pro or cons regarding the disputed block, reaches the requ
If a block is found invalid by a dispute resolution, it must be blacklisted to avoid resyncing or further building on that chain if other chains are available (to be detailed in the GRANDPA fork-choice rule).
A dispute accepts Votes after the dispute is resolved, for 1d.
A dispute accepts Votes after the dispute is resolved, for 1 day.
If a vote is received after the dispute is resolved, it shall still be recorded in the state root, albeit yielding less reward.
......
......@@ -131,7 +131,7 @@ Ensure a vector is present in `pending_known` for each hash in the view that doe
Invoke `unify_with_peer(peer, view)` to catch them up to messages we have.
We also need to use the `view.finalized_number` to remove the `PeerId` from any blocks that it won't be wanting information about anymore. Note that we have to be on guard for peers doing crazy stuff like jumping their 'finalized_number` forward 10 trillion blocks to try and get us stuck in a loop for ages.
We also need to use the `view.finalized_number` to remove the `PeerId` from any blocks that it won't be wanting information about anymore. Note that we have to be on guard for peers doing crazy stuff like jumping their `finalized_number` forward 10 trillion blocks to try and get us stuck in a loop for ages.
One of the safeguards we can implement is to reject view updates from peers where the new `finalized_number` is less than the previous.
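A minimal sketch of that safeguard, using a stripped-down `View` with only the `finalized_number` field (a stand-in for the real protocol type): a peer's view update is rejected if it moves `finalized_number` backwards.

```rust
// Hypothetical, simplified View: only the field relevant to the safeguard.
struct View {
    finalized_number: u32,
}

fn accept_view_update(previous: &View, new: &View) -> bool {
    // Finality can only move forward; anything else is a bug or abuse.
    new.finalized_number >= previous.finalized_number
}

fn main() {
    let old = View { finalized_number: 100 };
    assert!(accept_view_update(&old, &View { finalized_number: 101 }));
    assert!(!accept_view_update(&old, &View { finalized_number: 99 }));
}
```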
......@@ -192,7 +192,7 @@ We maintain a few invariants:
The algorithm is the following:
* Load the BlockEntry using `assignment.block_hash`. If it does not exist, report the source if it is `MessageSource::Peer` and return.
* Load the `BlockEntry` using `assignment.block_hash`. If it does not exist, report the source if it is `MessageSource::Peer` and return.
* Compute a fingerprint for the `assignment` using `claimed_candidate_index`.
* If the source is `MessageSource::Peer(sender)`:
* check if `peer` appears under `known_by` and whether the fingerprint is in the knowledge of the peer. If the peer does not know the block, report for providing data out-of-view and proceed. If the peer does know the block and the `sent` knowledge contains the fingerprint, report for providing replicate data and return, otherwise, insert into the `received` knowledge and return.
......@@ -218,7 +218,7 @@ The algorithm is the following:
Imports an approval signature referenced by block hash and candidate index:
* Load the BlockEntry using `approval.block_hash` and the candidate entry using `approval.candidate_entry`. If either does not exist, report the source if it is `MessageSource::Peer` and return.
* Load the `BlockEntry` using `approval.block_hash` and the candidate entry using `approval.candidate_entry`. If either does not exist, report the source if it is `MessageSource::Peer` and return.
* Compute a fingerprint for the approval.
* Compute a fingerprint for the corresponding assignment. If the `BlockEntry`'s knowledge does not contain that fingerprint, then report the source if it is `MessageSource::Peer` and return. All references to a fingerprint after this refer to the approval's, not the assignment's.
* If the source is `MessageSource::Peer(sender)`:
......
......@@ -13,8 +13,8 @@ In particular this subsystem is responsible for:
this is to ensure availability by at least 2/3+ of all validators; this
happens after a candidate is backed.
- Fetch `PoV` from validators, when requested via `FetchPoV` message from
backing (pov_requester module).
-
backing (`pov_requester` module).
The backing subsystem is responsible for making available data available in the
local `Availability Store` upon validation. This subsystem will serve any
network requests by querying that store.
......@@ -22,27 +22,27 @@ network requests by querying that store.
## Protocol
This subsystem does not handle any peer set messages, but the `pov_requester`
does connecto to validators of the same backing group on the validation peer
does connect to validators of the same backing group on the validation peer
set, to ensure fast propagation of statements between those validators and to
ensure connections are already established when requesting `PoV`s. Other than that,
this subsystem drives request/response protocols.
Input:
- OverseerSignal::ActiveLeaves(`[ActiveLeavesUpdate]`)
- AvailabilityDistributionMessage{msg: ChunkFetchingRequest}
- AvailabilityDistributionMessage{msg: PoVFetchingRequest}
- AvailabilityDistributionMessage{msg: FetchPoV}
- `OverseerSignal::ActiveLeaves(ActiveLeavesUpdate)`
- `AvailabilityDistributionMessage{msg: ChunkFetchingRequest}`
- `AvailabilityDistributionMessage{msg: PoVFetchingRequest}`
- `AvailabilityDistributionMessage{msg: FetchPoV}`
Output:
- NetworkBridgeMessage::SendRequests(`[Requests]`, IfDisconnected::TryConnect)
- AvailabilityStore::QueryChunk(candidate_hash, index, response_channel)
- AvailabilityStore::StoreChunk(candidate_hash, chunk)
- AvailabilityStore::QueryAvailableData(candidate_hash, response_channel)
- RuntimeApiRequest::SessionIndexForChild
- RuntimeApiRequest::SessionInfo
- RuntimeApiRequest::AvailabilityCores
- `NetworkBridgeMessage::SendRequests(Requests, IfDisconnected::TryConnect)`
- `AvailabilityStore::QueryChunk(candidate_hash, index, response_channel)`
- `AvailabilityStore::StoreChunk(candidate_hash, chunk)`
- `AvailabilityStore::QueryAvailableData(candidate_hash, response_channel)`
- `RuntimeApiRequest::SessionIndexForChild`
- `RuntimeApiRequest::SessionInfo`
- `RuntimeApiRequest::AvailabilityCores`
## Functionality
......
......@@ -10,14 +10,14 @@ This version of the availability recovery subsystem is based off of direct conne
Input:
- NetworkBridgeUpdateV1(update)
- AvailabilityRecoveryMessage::RecoverAvailableData(candidate, session, backing_group, response)
- `NetworkBridgeUpdateV1(update)`
- `AvailabilityRecoveryMessage::RecoverAvailableData(candidate, session, backing_group, response)`
Output:
- NetworkBridge::SendValidationMessage
- NetworkBridge::ReportPeer
- AvailabilityStore::QueryChunk
- `NetworkBridge::SendValidationMessage`
- `NetworkBridge::ReportPeer`
- `AvailabilityStore::QueryChunk`
## Functionality
......@@ -51,7 +51,7 @@ struct InteractionParams {
validator_authority_keys: Vec<AuthorityId>,
validators: Vec<ValidatorId>,
// The number of pieces needed.
threshold: usize,
threshold: usize,
candidate_hash: Hash,
erasure_root: Hash,
}
......@@ -65,7 +65,7 @@ enum InteractionPhase {
RequestChunks {
// a random shuffling of the validators which indicates the order in which we connect to the validators and
// request the chunk from them.
shuffling: Vec<ValidatorIndex>,
shuffling: Vec<ValidatorIndex>,
received_chunks: Map<ValidatorIndex, ErasureChunk>,
requesting_chunks: FuturesUnordered<Receiver<ErasureChunkRequestResponse>>,
}
......@@ -90,7 +90,7 @@ On `Conclude`, shut down the subsystem.
1. Check the `availability_lru` for the candidate and return the data if it is present.
1. Check if there is already an interaction handle for the request. If so, add the response handle to it.
1. Otherwise, load the session info for the given session under the state of `live_block_hash`, and initiate an interaction with *launch_interaction*. Add an interaction handle to the state and add the response channel to it.
1. Otherwise, load the session info for the given session under the state of `live_block_hash`, and initiate an interaction with *`launch_interaction`*. Add an interaction handle to the state and add the response channel to it.
1. If the session info is not available, return `RecoveryError::Unavailable` on the response channel.
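A minimal sketch of that dispatch, with hypothetical stand-in types for the candidate hash, session index, available data and response channel (the real subsystem uses oneshot channels and the actual node types):

```rust
use std::collections::HashMap;

// Hypothetical stand-ins for the real node types.
type CandidateHash = [u8; 32];
type SessionIndex = u32;
type AvailableData = Vec<u8>;
struct ResponseChannel; // placeholder for a oneshot sender of the recovery result

struct InteractionHandle {
    awaiting: Vec<ResponseChannel>,
}

#[derive(Default)]
struct State {
    availability_lru: HashMap<CandidateHash, AvailableData>,
    interactions: HashMap<CandidateHash, InteractionHandle>,
}

impl State {
    // Sketch of the `RecoverAvailableData` dispatch described in the steps above.
    fn handle_recover(&mut self, candidate: CandidateHash, _session: SessionIndex, response: ResponseChannel) {
        // 1. Serve from the LRU cache if this candidate was already recovered.
        if self.availability_lru.contains_key(&candidate) {
            // In the real subsystem the cached data is sent back on `response`.
            return;
        }
        // 2. Piggy-back on an already running interaction if one exists.
        if let Some(handle) = self.interactions.get_mut(&candidate) {
            handle.awaiting.push(response);
            return;
        }
        // 3. Otherwise launch a fresh interaction (session info lookup, then
        //    backer / chunk requests) and register the response channel on it.
        self.interactions.insert(candidate, InteractionHandle { awaiting: vec![response] });
    }
}

fn main() {
    let mut state = State::default();
    state.handle_recover([0u8; 32], 1, ResponseChannel);
    state.handle_recover([0u8; 32], 1, ResponseChannel); // joins the existing interaction
    assert_eq!(state.interactions.len(), 1);
}
```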
### From-interaction logic
......@@ -98,7 +98,7 @@ On `Conclude`, shut down the subsystem.
#### `FromInteraction::Concluded`
1. Load the entry from the `interactions` map. It should always exist, if not for logic errors. Send the result to each member of `awaiting`.
1. Add the entry to the availability_lru.
1. Add the entry to the `availability_lru`.
### Interaction logic
......@@ -123,12 +123,12 @@ const N_PARALLEL: usize = 50;
* Request `AvailabilityStoreMessage::QueryAvailableData`. If it exists, return that.
* If the phase is `InteractionPhase::RequestFromBackers`
* Loop:
* If the `requesting_pov` is `Some`, poll for updates on it. If it concludes, set `requesting_pov` to `None`.
* If the `requesting_pov` is `Some`, poll for updates on it. If it concludes, set `requesting_pov` to `None`.
* If the `requesting_pov` is `None`, take the next backer off the `shuffled_backers`.
* If the backer is `Some`, issue a `NetworkBridgeMessage::Requests` with a network request for the `AvailableData` and wait for the response.
* If it concludes with a `None` result, return to beginning.
* If it concludes with available data, attempt a re-encoding.
* If it has the correct erasure-root, break and issue a `Ok(available_data)`.
* If it concludes with a `None` result, return to beginning.
* If it concludes with available data, attempt a re-encoding.
* If it has the correct erasure-root, break and issue a `Ok(available_data)`.
* If it has an incorrect erasure-root, return to beginning.
* If the backer is `None`, set the phase to `InteractionPhase::RequestChunks` with a random shuffling of validators and empty `next_shuffling`, `received_chunks`, and `requesting_chunks` and break the loop.
......
......@@ -10,8 +10,8 @@ There is no dedicated input mechanism for bitfield signing. Instead, Bitfield Si
Output:
- BitfieldDistribution::DistributeBitfield: distribute a locally signed bitfield
- AvailabilityStore::QueryChunk(CandidateHash, validator_index, response_channel)
- `BitfieldDistribution::DistributeBitfield`: distribute a locally signed bitfield
- `AvailabilityStore::QueryChunk(CandidateHash, validator_index, response_channel)`
## Functionality
......
......@@ -114,7 +114,7 @@ fn spawn_validation_work(candidate, parachain head, validation function) {
}
```
### Fetch Pov Block
### Fetch PoV Block
Create a `(sender, receiver)` pair.
Dispatch an [`AvailabilityDistributionMessage`][ADM]`::FetchPoV{ validator_index, pov_hash, candidate_hash, tx, }` and listen on the passed receiver for a response. Availability distribution will send the request to the validator specified by `validator_index`, which might not be serving it for whatever reason, therefore we need to retry with other backing validators in that case.
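A minimal sketch of that retry loop, with placeholder types and a caller-supplied `fetch` closure standing in for the actual `FetchPoV` request/response flow:

```rust
// Hypothetical stand-ins for the real node types.
type ValidatorIndex = u32;
struct PoV(Vec<u8>);

// Try each backing validator in turn until one of them serves the PoV.
fn fetch_pov_with_retry(
    backers: &[ValidatorIndex],
    fetch: impl Fn(ValidatorIndex) -> Option<PoV>,
) -> Option<PoV> {
    for &validator in backers {
        // This validator might not be serving the PoV for whatever reason;
        // in that case move on to the next backer.
        if let Some(pov) = fetch(validator) {
            return Some(pov);
        }
    }
    None
}

fn main() {
    let backers = [0u32, 1, 2];
    let pov = fetch_pov_with_retry(&backers, |v| if v == 2 { Some(PoV(vec![1, 2, 3])) } else { None });
    assert!(pov.is_some());
}
```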
......
......@@ -8,20 +8,20 @@ The Statement Distribution Subsystem is responsible for distributing statements
Input:
- NetworkBridgeUpdate(update)
- StatementDistributionMessage
- `NetworkBridgeUpdate(update)`
- `StatementDistributionMessage`
Output:
- NetworkBridge::SendMessage(`[PeerId]`, message)
- NetworkBridge::SendRequests (StatementFetching)
- NetworkBridge::ReportPeer(PeerId, cost_or_benefit)
- `NetworkBridge::SendMessage(PeerId, message)`
- `NetworkBridge::SendRequests(StatementFetching)`
- `NetworkBridge::ReportPeer(PeerId, cost_or_benefit)`
## Functionality
Implemented as a gossip protocol. Handle updates to our view and peers' views. Neighbor packets are used to inform peers which chain heads we are interested in data for.
It is responsible for distributing signed statements that we have generated and forwarding them, and for detecting a variety of Validator misbehaviors for reporting to [Misbehavior Arbitration](../utility/misbehavior-arbitration.md). During the Backing stage of the inclusion pipeline, it's the main point of contact with peer nodes. On receiving a signed statement from a peer in the same backing group, assuming the peer receipt state machine is in an appropriate state, it sends the Candidate Receipt to the [Candidate Backing subsystem](candidate-backing.md) to handle the validator's statement. On receiving `StatementDistributionMessage::Share` we make sure to send messages to our backing group in addition to random other peers, to ensure a fast backing process and getting all statements quickly for distribtution.
It is responsible for distributing signed statements that we have generated and forwarding them, and for detecting a variety of Validator misbehaviors for reporting to [Misbehavior Arbitration](../utility/misbehavior-arbitration.md). During the Backing stage of the inclusion pipeline, it's the main point of contact with peer nodes. On receiving a signed statement from a peer in the same backing group, assuming the peer receipt state machine is in an appropriate state, it sends the Candidate Receipt to the [Candidate Backing subsystem](candidate-backing.md) to handle the validator's statement. On receiving `StatementDistributionMessage::Share` we make sure to send messages to our backing group in addition to random other peers, to ensure a fast backing process and getting all statements quickly for distribution.
Track equivocating validators and stop accepting information from them. Establish a data-dependency order:
......@@ -71,7 +71,7 @@ The simple approach is to say that we only receive up to two `Seconded` statemen
With that in mind, this simple approach has a caveat worth digging deeper into.
First: We may be aware of two equivocated `Seconded` statements issued by a validator. A totally honest peer of ours can also be aware of one or two different `Seconded` statements issued by the same validator. And yet another peer may be aware of one or two _more_ `Seconded` statements. And so on. This interacts badly with pre-emptive sending logic. Upon sending a `Seconded` statement to a peer, we will want to pre-emptively follow up with all statements relative to that candidate. Waiting for acknowledgement introduces latency at every hop, so that is best avoided. What can happen is that upon receipt of the `Seconded` statement, the peer will discard it as it falls beyond the bound of 2 that it is allowed to store. It cannot store anything in memory about discarded candidates as that would introduce a DoS vector. Then, the peer would receive from us all of the statements pertaining to that candidate, which, from its perspective, would be undesired - they are data-dependent on the `Seconded` statement we sent them, but they have erased all record of that from their memory. Upon receiving a potential flood of undesired statements, this 100% honest peer may choose to disconnect from us. In this way, an adversary may be able to partition the network with careful distribution of equivocated `Seconded` statements.
First: We may be aware of two equivocated `Seconded` statements issued by a validator. A totally honest peer of ours can also be aware of one or two different `Seconded` statements issued by the same validator. And yet another peer may be aware of one or two _more_ `Seconded` statements. And so on. This interacts badly with pre-emptive sending logic. Upon sending a `Seconded` statement to a peer, we will want to pre-emptively follow up with all statements relative to that candidate. Waiting for acknowledgment introduces latency at every hop, so that is best avoided. What can happen is that upon receipt of the `Seconded` statement, the peer will discard it as it falls beyond the bound of 2 that it is allowed to store. It cannot store anything in memory about discarded candidates as that would introduce a DoS vector. Then, the peer would receive from us all of the statements pertaining to that candidate, which, from its perspective, would be undesired - they are data-dependent on the `Seconded` statement we sent them, but they have erased all record of that from their memory. Upon receiving a potential flood of undesired statements, this 100% honest peer may choose to disconnect from us. In this way, an adversary may be able to partition the network with careful distribution of equivocated `Seconded` statements.
The fix is to track, per-peer, the hashes of up to 4 candidates per validator (per relay-parent) that the peer is aware of. It is 4 because we may send them 2 and they may send us 2 different ones. We track the data that they are aware of as the union of things we have sent them and things they have sent us. If we receive a 1st or 2nd `Seconded` statement from a peer, we note it in the peer's known candidates even if we do disregard the data locally. And then, upon receipt of any data dependent on that statement, we do not reduce that peer's standing in our eyes, as the data was not undesired.
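A minimal sketch of that per-peer tracking, with hypothetical numeric stand-ins for `PeerId`, `ValidatorIndex` and `CandidateHash` (the relay-parent dimension is omitted for brevity):

```rust
use std::collections::{HashMap, HashSet};

// Hypothetical stand-ins for the real node types.
type PeerId = u64;
type ValidatorIndex = u32;
type CandidateHash = [u8; 32];

const MAX_KNOWN_CANDIDATES: usize = 4; // 2 we may send them plus 2 they may send us

#[derive(Default)]
struct PeerKnowledge {
    // Per (peer, validator): the candidate hashes the peer is known to be
    // aware of, as the union of what we sent them and what they sent us.
    known: HashMap<(PeerId, ValidatorIndex), HashSet<CandidateHash>>,
}

impl PeerKnowledge {
    // Record that `peer` knows about `candidate` seconded by `validator`.
    // Returns false if the cap is reached and the hash was not recorded.
    fn note(&mut self, peer: PeerId, validator: ValidatorIndex, candidate: CandidateHash) -> bool {
        let entry = self.known.entry((peer, validator)).or_default();
        if entry.contains(&candidate) {
            return true;
        }
        if entry.len() >= MAX_KNOWN_CANDIDATES {
            return false;
        }
        entry.insert(candidate);
        true
    }

    fn knows(&self, peer: PeerId, validator: ValidatorIndex, candidate: &CandidateHash) -> bool {
        self.known.get(&(peer, validator)).map_or(false, |s| s.contains(candidate))
    }
}

fn main() {
    let mut knowledge = PeerKnowledge::default();
    for i in 0u8..4 {
        assert!(knowledge.note(7, 0, [i; 32]));
    }
    // A fifth distinct candidate for the same (peer, validator) is not recorded.
    assert!(!knowledge.note(7, 0, [9; 32]));
    assert!(knowledge.knows(7, 0, &[3; 32]));
}
```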
......
......@@ -120,9 +120,9 @@ Subsequently, once a valid parablock candidate has been seconded, the [`Candidat
Several approaches have been discussed, but all have some issues:
- The current approach is very straightforward. However, that protocol is vulnerable to a single collator which, as an attack or simply through chance, gets its block candidate to the node more often than its fair share of the time.
- If collators produce blocks via Aura, BABE or in future Sassafrass, it may be possible to choose an "Official" collator for the round, but it may be tricky to ensure that the PVF logic is enforced at collator leader election.
- If collators produce blocks via Aura, BABE or in future Sassafras, it may be possible to choose an "Official" collator for the round, but it may be tricky to ensure that the PVF logic is enforced at collator leader election.
- We could use relay-chain BABE randomness to generate some delay `D` on the order of 1 second, +- 1 second. The collator would then second the first valid parablock which arrives after `D`, or in case none has arrived by `2*D`, the last valid parablock which has arrived. This makes it very hard for a collator to game the system to always get its block nominated, but it reduces the maximum throughput of the system by introducing delay into an already tight schedule.
- A variation of that scheme would be to have a fixed acceptance window `D` for parablock candidates and keep track of count `C`: the number of parablock candidates received. At the end of the period `D`, we choose a random number I in the range [0, C) and second the block at Index I. Its drawback is the same: it must wait the full `D` period before seconding any of its received candidates, reducing throughput.
- A variation of that scheme would be to have a fixed acceptance window `D` for parablock candidates and keep track of count `C`: the number of parablock candidates received. At the end of the period `D`, we choose a random number `I` in the range `[0, C)` and second the block at index `I` (see the sketch after this list). Its drawback is the same: it must wait the full `D` period before seconding any of its received candidates, reducing throughput.
- In order to protect against DoS attacks, it may be prudent to throw out collations from collators that have behaved poorly (whether recently or historically) and subsequently only verify the PoV for the most suitable of collations.
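Below is a minimal sketch of the "fixed acceptance window plus random index" variant referenced above, assuming the `rand` crate is available and using a placeholder `Candidate` type; the real collator protocol types and timers are elided.

```rust
use rand::Rng;

// Placeholder for a received parablock candidate.
struct Candidate;

// Called once the acceptance window `D` has elapsed, with all `C` candidates
// received during the window.
fn pick_candidate(received: Vec<Candidate>) -> Option<Candidate> {
    let c = received.len();
    if c == 0 {
        return None;
    }
    // Choose a random index I in the range [0, C) and second that candidate.
    let i = rand::thread_rng().gen_range(0..c);
    received.into_iter().nth(i)
}

fn main() {
    assert!(pick_candidate(vec![Candidate, Candidate, Candidate]).is_some());
    assert!(pick_candidate(Vec::new()).is_none());
}
```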
[CB]: ../backing/candidate-backing.md
......
......@@ -4,8 +4,8 @@ This section is for the node-side subsystems that lead to participation in dispu
* Detection. Detect bad parablocks, either during candidate backing or approval checking, and initiate a dispute.
* Participation. Participate in active disputes. When a node becomes aware of a dispute, it should recover the data for the disputed block, check the validity of the parablock, and issue a statement regarding the validity of the parablock.
* Distribution. Validators should notify each other of active disputes and relevant statements over the network.
* Submission. When authoring a block, a validator should inspect the state of the parent block and provide any information about disputes that the chain needs as part of the ParaInherent. This should initialize new disputes on-chain as necessary.
* Fork-choice and Finality. When observing a block issuing a DisputeRollback digest in the header, that branch of the relay chain should be abandoned all the way back to the indicated block. When voting on chains in GRANDPA, no chains that contain blocks that are or have been disputed should be voted on.
* Submission. When authoring a block, a validator should inspect the state of the parent block and provide any information about disputes that the chain needs as part of the `ParaInherent`. This should initialize new disputes on-chain as necessary.
* Fork-choice and Finality. When observing a block issuing a `DisputeRollback` digest in the header, that branch of the relay chain should be abandoned all the way back to the indicated block. When voting on chains in GRANDPA, no chains that contain blocks that are or have been disputed should be voted on.
## Components
......@@ -17,15 +17,15 @@ This component should track all statements received from all validators over som
This is responsible for tracking and initiating disputes. Disputes will be initiated either externally by another subsystem which has identified an issue with a parablock or internally upon observing two statements which conflict with each other on the validity of a parablock.
No more than one statement by each validator on each side of the dispute needs to be stored. That is, validators are allowed to participate on both sides of the dispute, although we won't write code to do so. Such behavior has negative EV in the runtime.
No more than one statement by each validator on each side of the dispute needs to be stored. That is, validators are allowed to participate on both sides of the dispute, although we won't write code to do so. Such behavior has negative extractable value in the runtime.
This will notify the dispute participation subsystem of a new dispute if the local validator has not issued any statements on the disputed candidate already.
Locally authored statements related to disputes will be forwarded to the dispute distribution subsystem.
This subsystem also provides two further behaviors for the interactions between disputes and fork-choice
- Enhancing the finality voting rule. Given description of a chain and candidates included at different heights in that chain, it returns the BlockHash corresponding to the highest BlockNumber that there are no disputes before. I expect that we will slightly change ApprovalVoting::ApprovedAncestor to return this set and then the target block to vote on will be further constrained by this function.
- Chain roll-backs. Whenever importing new blocks, the header should be scanned for a roll-back digest. If there is one, the chain should be rolled back according to the digest. I expect this would be implemented with a ChainApi function and possibly an ApprovalVoting function to clean up the approval voting DB.
- Enhancing the finality voting rule. Given description of a chain and candidates included at different heights in that chain, it returns the `BlockHash` corresponding to the highest `BlockNumber` that there are no disputes before. I expect that we will slightly change `ApprovalVoting::ApprovedAncestor` to return this set and then the target block to vote on will be further constrained by this function.
- Chain roll-backs. Whenever importing new blocks, the header should be scanned for a roll-back digest. If there is one, the chain should be rolled back according to the digest. I expect this would be implemented with a `ChainApi` function and possibly an `ApprovalVoting` function to clean up the approval voting DB.
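For the finality voting rule in the first bullet above, here is a minimal sketch (with a placeholder `BlockHash` type and a caller-supplied dispute predicate; this is not the actual `ApprovedAncestor` change) of "the highest block with no disputes before or at it":

```rust
// Hypothetical stand-in for the real block hash type.
type BlockHash = [u8; 32];

// Given the ancestry of a chain from base to head, and a predicate telling
// whether any candidate included at a block is disputed, return the highest
// block such that no block up to and including it contains a disputed candidate.
fn highest_undisputed(ancestry_base_to_head: &[BlockHash], is_disputed: impl Fn(&BlockHash) -> bool) -> Option<BlockHash> {
    let mut best = None;
    for hash in ancestry_base_to_head {
        if is_disputed(hash) {
            break;
        }
        best = Some(*hash);
    }
    best
}

fn main() {
    let chain = [[1u8; 32], [2u8; 32], [3u8; 32]];
    // Pretend the candidate included at block [3; 32] is disputed.
    let best = highest_undisputed(&chain, |h| h == &[3u8; 32]);
    assert_eq!(best, Some([2u8; 32]));
}
```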
### Dispute Participation
......
......@@ -30,7 +30,7 @@ This design should result in a protocol that is:
#### Disputes
Protocol: "/polkadot/send\_dispute/1"
Protocol: `"/polkadot/send_dispute/1"`
Request:
......@@ -86,7 +86,7 @@ enum DisputeResponse {
#### Vote Recovery
Protocol: "/polkadot/req\_votes/1"
Protocol: `"/polkadot/req_votes/1"`
```rust
struct IHaveVotesRequest {
......@@ -246,7 +246,7 @@ even if they did:
So this general rate limit, dropping requests from the same peers if they come
faster than we can import the statements, should not cause any problems for
honest nodes and is in their favour.
honest nodes and is in their favor.
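A minimal sketch of such a per-peer rate limit, approximating "faster than we can import the statements" with a fixed cooldown and using a hypothetical numeric `PeerId`:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Hypothetical peer identifier; the real code uses the network PeerId.
type PeerId = u64;

struct RateLimit {
    cooldown: Duration,
    last_accepted: HashMap<PeerId, Instant>,
}

impl RateLimit {
    // Accept a request only if the previous one from the same peer is at
    // least `cooldown` old; otherwise drop it.
    fn accept(&mut self, peer: PeerId, now: Instant) -> bool {
        match self.last_accepted.get(&peer) {
            Some(last) if now.duration_since(*last) < self.cooldown => false,
            _ => {
                self.last_accepted.insert(peer, now);
                true
            }
        }
    }
}

fn main() {
    let mut limit = RateLimit { cooldown: Duration::from_secs(1), last_accepted: HashMap::new() };
    let t0 = Instant::now();
    assert!(limit.accept(42, t0));
    // A second request from the same peer inside the cooldown is dropped.
    assert!(!limit.accept(42, t0 + Duration::from_millis(10)));
    assert!(limit.accept(42, t0 + Duration::from_secs(2)));
}
```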
Size of `N`: The larger `N` the better we can handle distributed flood attacks
(see previous paragraph), but we also get potentially more availability recovery
......
......@@ -6,14 +6,14 @@ Fortunately, most of that work is handled by other subsystems; this subsystem is
## Protocol
Input: [DisputeParticipationMessage][DisputeParticipationMessage]
Input: [`DisputeParticipationMessage`][DisputeParticipationMessage]
Output:
- [RuntimeApiMessage][RuntimeApiMessage]
- [CandidateValidationMessage][CandidateValidationMessage]
- [AvailabilityRecoveryMessage][AvailabilityRecoveryMessage]
- [AvailabilityStoreMessage][AvailabilityStoreMessage]
- [ChainApiMessage][ChainApiMessage]
- [`RuntimeApiMessage`][RuntimeApiMessage]
- [`CandidateValidationMessage`][CandidateValidationMessage]
- [`AvailabilityRecoveryMessage`][AvailabilityRecoveryMessage]
- [`AvailabilityStoreMessage`][AvailabilityStoreMessage]
- [`ChainApiMessage`][ChainApiMessage]
## Functionality
......@@ -57,7 +57,7 @@ Conclude.
This requires the parameters `{ candidate_receipt, candidate_hash, session, voted_indices }` as well as a choice of either `Valid` or `Invalid`.
Invoke [`DisputeCoordinatorMessage::IssueLocalStatement`][DisputeCoordinatorMessage] with `is_valid` according to the parametrization.
Invoke [`DisputeCoordinatorMessage::IssueLocalStatement`][DisputeCoordinatorMessage] with `is_valid` according to the parameterization.
[RuntimeApiMessage]: ../../types/overseer-protocol.md#runtime-api-message
[DisputeParticipationMessage]: ../../types/overseer-protocol.md#dispute-participation-message
......
......@@ -80,7 +80,7 @@ When a subsystem wants to communicate with another subsystem, or, more typically
First, the subsystem that spawned a job is responsible for handling the first step of the communication. The overseer is not aware of the hierarchy of tasks within any given subsystem and is only responsible for subsystem-to-subsystem communication. So the sending subsystem must pass on the message via the overseer to the receiving subsystem, in such a way that the receiving subsystem can further address the communication to one of its internal tasks, if necessary.
This communication prevents a certain class of race conditions. When the Overseer determines that it is time for subsystems to begin working on top of a particular relay-parent, it will dispatch a `ActiveLeavesUpdate` message to all subsystems to do so, and those messages will be handled asynchronously by those subsystems. Some subsystems will receive those messsages before others, and it is important that a message sent by subsystem A after receiving `ActiveLeavesUpdate` message will arrive at subsystem B after its `ActiveLeavesUpdate` message. If subsystem A maintaned an independent channel with subsystem B to communicate, it would be possible for subsystem B to handle the side message before the `ActiveLeavesUpdate` message, but it wouldn't have any logical course of action to take with the side message - leading to it being discarded or improperly handled. Well-architectured state machines should have a single source of inputs, so that is what we do here.
This communication prevents a certain class of race conditions. When the Overseer determines that it is time for subsystems to begin working on top of a particular relay-parent, it will dispatch an `ActiveLeavesUpdate` message to all subsystems to do so, and those messages will be handled asynchronously by those subsystems. Some subsystems will receive those messages before others, and it is important that a message sent by subsystem A after receiving `ActiveLeavesUpdate` message will arrive at subsystem B after its `ActiveLeavesUpdate` message. If subsystem A maintained an independent channel with subsystem B to communicate, it would be possible for subsystem B to handle the side message before the `ActiveLeavesUpdate` message, but it wouldn't have any logical course of action to take with the side message - leading to it being discarded or improperly handled. Well-architected state machines should have a single source of inputs, so that is what we do here.
One exception is reasonable to make for responses to requests. A request should be made via the overseer in order to ensure that it arrives after any relevant `ActiveLeavesUpdate` message. A subsystem issuing a request as a result of an `ActiveLeavesUpdate` message can safely receive the response via a side-channel for two reasons:
......
......@@ -10,7 +10,7 @@ This subsystem needs to update its information on the unfinalized chain:
* On every `ChainSelectionMessage::Approve`
* Periodically, to detect stagnation.
Simple implementations of these updates do O(n_unfinalized_blocks) disk operations. If the amount of unfinalized blocks is relatively small, the updates should not take very much time. However, in cases where there are hundreds or thousands of unfinalized blocks the naive implementations of these update algorithms would have to be replaced with more sophisticated versions.
Simple implementations of these updates do `O(n_unfinalized_blocks)` disk operations. If the amount of unfinalized blocks is relatively small, the updates should not take very much time. However, in cases where there are hundreds or thousands of unfinalized blocks the naive implementations of these update algorithms would have to be replaced with more sophisticated versions.
### `OverseerSignal::ActiveLeavesUpdate`
......
......@@ -14,6 +14,6 @@ with the `NetworkBridgeMessage::NewGossipTopology` message.
See https://github.com/paritytech/polkadot/issues/3239 for more details.
The gossip topology is used by parachain distribution subsystems,
such as Bitfield Distrubution, (small) Statement Distributuion and
Approval Distibution to limit the amount of peers we send messages to
such as Bitfield Distribution, (small) Statement Distribution and
Approval Distribution to limit the number of peers we send messages to
and handle view updates.
......@@ -54,7 +54,7 @@ The bulk of the work done by this subsystem is in responding to network events,
Each network event is associated with a particular peer-set.
### Overseer Signal: ActiveLeavesUpdate
### Overseer Signal: `ActiveLeavesUpdate`
The `activated` and `deactivated` lists determine the evolution of our local view over time. A `ProtocolMessage::ViewUpdate` is issued to each connected peer on each peer-set, and a `NetworkBridgeEvent::OurViewChange` is issued to each event handler for each protocol.
......@@ -62,51 +62,51 @@ We only send view updates if the node has indicated that it has finished major b
If we are connected to the same peer on both peer-sets, we will send the peer two view updates as a result.
### Overseer Signal: BlockFinalized
### Overseer Signal: `BlockFinalized`
We update our view's `finalized_number` to the provided one and delay `ProtocolMessage::ViewUpdate` and `NetworkBridgeEvent::OurViewChange` till the next `ActiveLeavesUpdate`.
### Network Event: Peer Connected
### Network Event: `PeerConnected`
Issue a `NetworkBridgeEvent::PeerConnected` for each [Event Handler](#event-handlers) of the peer-set and negotiated protocol version of the peer. Also issue a `NetworkBridgeEvent::PeerViewChange` and send the peer our current view, but only if the node has indicated that it has finished major blockchain synchronization. Otherwise, we only send the peer an empty view.
### Network Event: Peer Disconnected
### Network Event: `PeerDisconnected`
Issue a `NetworkBridgeEvent::PeerDisconnected` for each [Event Handler](#event-handlers) of the peer-set and negotiated protocol version of the peer.
### Network Event: ProtocolMessage
### Network Event: `ProtocolMessage`
Map the message onto the corresponding [Event Handler](#event-handlers) based on the peer-set this message was received on and dispatch via overseer.
### Network Event: ViewUpdate
### Network Event: `ViewUpdate`
- Check that the new view is valid and note it as the most recent view update of the peer on this peer-set.
- Map a `NetworkBridgeEvent::PeerViewChange` onto the corresponding [Event Handler](#event-handlers) based on the peer-set this message was received on and dispatch via overseer.
### ReportPeer
### `ReportPeer`
- Adjust peer reputation according to cost or benefit provided
### DisconnectPeer
### `DisconnectPeer`
- Disconnect the peer from the peer-set requested, if connected.
### SendValidationMessage / SendValidationMessages
### `SendValidationMessage` / `SendValidationMessages`
- Issue a corresponding `ProtocolMessage` to each listed peer on the validation peer-set.
### SendCollationMessage / SendCollationMessages
### `SendCollationMessage` / `SendCollationMessages`
- Issue a corresponding `ProtocolMessage` to each listed peer on the collation peer-set.
### ConnectToValidators
### `ConnectToValidators`
- Determine the DHT keys to use for each validator based on the relay-chain state and Runtime API.
- Recover the Peer IDs of the validators from the DHT. There may be more than one peer ID per validator.
- Send all `(ValidatorId, PeerId)` pairs on the response channel.
- Feed all Peer IDs to the peer-set manager provided by the underlying network.
### NewGossipTopology
### `NewGossipTopology`
- Map all `AuthorityDiscoveryId`s to `PeerId`s and issue a corresponding `NetworkBridgeUpdateV1`
to all validation subsystems.
......
......@@ -10,10 +10,10 @@ Output: None
## Functionality
On receipt of `RuntimeApiMessage::Request(relay_parent, request)`, answer the request using the post-state of the relay_parent provided and provide the response to the side-channel embedded within the request.
On receipt of `RuntimeApiMessage::Request(relay_parent, request)`, answer the request using the post-state of the `relay_parent` provided and provide the response to the side-channel embedded within the request.
> TODO Do some caching. The underlying rocksdb already has a cache of trie nodes so duplicate requests are unlikely to hit disk. Not required for functionality.
## Jobs
> TODO Don't limit requests based on parent hash, but limit caching. No caching should be done for any requests on relay_parents that are not active based on `ActiveLeavesUpdate` messages. Maybe with some leeway for things that have just been stopped.
> TODO Don't limit requests based on parent hash, but limit caching. No caching should be done for any requests on `relay_parent`s that are not active based on `ActiveLeavesUpdate` messages. Maybe with some leeway for things that have just been stopped.