Commit e58858d1 authored by Peter Goodspeed-Niklaus's avatar Peter Goodspeed-Niklaus

the `inclusion` inherent call is Operational, not Mandatory

This resolves a lot of the trickiness about this issue, because
we no longer need to override or supplant any existing proposer
logic; the existing logic should exhibit these behaviors:

- the `inclusion` inherent is prioritized over standard transactions
- but if it is too heavy, e.g. when a runtime upgrade occupies most of the
  block, it will be dropped in favor of the runtime upgrade.

It is my belief that allowing the proposer to simply omit this data
won't have any adverse effects: it is equivalent to replacing the
bitfields and candidates with empty versions of themselves, which the
`ProvideInherent` impl already does.
parent 0f373fdf
......@@ -193,56 +193,12 @@ where
record_proof: RecordProof,
) -> Self::Proposal {
async move {
-// TODO: how can we tell, here, if we expect a heavy block?
-//
-// The runtime exposes `frame_system::Module::block_weight`
-// (https://substrate.dev/rustdocs/v2.0.0/frame_system/struct.Module.html?search=#method.block_weight)
-// and the `MaximumBlockWeight`
-// (https://substrate.dev/rustdocs/v2.0.0/frame_system/trait.Trait.html#associatedtype.MaximumBlockWeight),
-// so fundamentally this is a question of propagating the necessary runtime information
-// through the Runtime API. At that point, it's a simple inequality:
-//
-// ```rust
-// let expect_heavy_block = block_weight > maximum_block_weight - MARGIN;
-// ```
-//
-// Unfortunately, it's not quite that simple, because the whole point of this proposer
-// is to inject the provisioner data before the substrate proposer runs. Before it runs,
-// the `block_weight` function isn't going to give us any useful information, because
-// nothing has yet been proposed to be included in the block.
-//
-// The naive option is very simple: run the proposer, then weigh the block. Either add a
-// dry-run mode to the internal proposer, or run the internal proposer and then revert
-// all state changes that it's made. The downside of this approach is that it runs
-// everything twice, cutting runtime performance literally in half. That would be
-// suboptimal.
-//
-// A somewhat more sophisticated approach takes advantage of the fact that Substrate's
-// proposer is greedy: if it is possible to include all proposed transactions, then it
-// will do so. This means that we can just compute the weight of all the transactions in
-// the pool, and use essentially the same inequality:
-//
-// ```rust
-// let expect_heavy_block = sum_of_tx_weights > maximum_block_weight - MARGIN;
-// ```
-//
-// This is complicated by the fact that transactions are code, not data: in principle,
-// it would be possible for an attacker to craft a transaction which is heavy and looks
-// valid to the transaction pool, but which aborts cheaply when it is executed,
-// preventing its costs from being deducted from the attacker. Spamming the relay chain
-// with sufficient of these transactions would prevent all parachain progress.
-let expect_heavy_block = false;
-let provisioner_data = if !expect_heavy_block {
-    match self.get_provisioner_data().await {
-        Ok(pd) => pd,
-        Err(err) => {
-            tracing::warn!(err = ?err, "could not get provisioner inherent data; injecting default data");
-            Default::default()
-        }
-    }
-} else {
-    Default::default()
-};
+let provisioner_data = match self.get_provisioner_data().await {
+    Ok(pd) => pd,
+    Err(err) => {
+        tracing::warn!(err = ?err, "could not get provisioner inherent data; injecting default data");
+        Default::default()
+    }
+};
inherent_data.put_data(
......
......@@ -39,8 +39,6 @@ use crate::{
};
use inherents::{InherentIdentifier, InherentData, MakeFatalError, ProvideInherent};
-const BLOCK_WEIGHT_MARGIN: u64 = 1000;
pub trait Config: inclusion::Config + scheduler::Config {}
decl_storage! {
......@@ -49,8 +47,6 @@ decl_storage! {
///
/// The `Option<()>` is effectively a bool, but it never hits storage in the `None` variant
/// due to the guarantees of FRAME's storage APIs.
-///
-/// If this is `None` at the end of the block, we panic and render the block invalid.
Included: Option<()>;
}
}
......@@ -72,13 +68,11 @@ decl_module! {
}
fn on_finalize() {
-    if Included::take().is_none() {
-        panic!("Bitfields and heads must be included every block");
-    }
+    Included::take();
}
/// Include backed candidates and bitfields.
-#[weight = (1_000_000_000, DispatchClass::Mandatory)]
+#[weight = (1_000_000_000, DispatchClass::Operational)]
pub fn inclusion(
origin,
signed_bitfields: SignedAvailabilityBitfields,
......@@ -129,22 +123,16 @@ decl_module! {
}
}
-// At the point that this is run, the provisioner has already chosen a set of extrinsics,
-// so we can rely on calculations like block weight.
-fn inclusion_inherents_would_overload_block<T: 'static + frame_system::Config>() -> bool {
-    frame_system::Module::<T>::block_weight().total() > <T as frame_system::Config>::MaximumBlockWeight::get() - BLOCK_WEIGHT_MARGIN
-}
-/// We should only include the inherent under certain circumstances.
-///
-/// 1. The inherent is itself valid. It may not be, for example, in the event of a session change.
-/// 2. It would not overload the block, which might already be heavy.
+/// Most importantly, we check that the inherent is itself valid. It may not be, for example, in the
+/// event of a session change.
fn should_include_inherent<T: Config>(
signed_bitfields: &SignedAvailabilityBitfields,
backed_candidates: &[BackedCandidate<T::Hash>],
) -> bool {
-    !inclusion_inherents_would_overload_block::<T>() &&
// Sanity check: session changes can invalidate an inherent, and we _really_ don't want that to happen.
//
// See github.com/paritytech/polkadot/issues/1327
Module::<T>::inclusion(
frame_system::RawOrigin::None.into(),
......