  1. May 24, 2024
    • Oliver Tale-Yazdi's avatar
      Polkadot-SDK Umbrella Crate (#3935) · 1c7a1a58
      Oliver Tale-Yazdi authored
      
      # Umbrella Crate
      
      The Polkadot-SDK "umbrella" is a crate that re-exports all other
      published crates. This makes it
      possible to have a very small `Cargo.toml` file that only has one
      dependency, the umbrella
      crate. This helps with selecting the right combination of crate
      versions, since otherwise 3rd
      party tools are needed to select a compatible set of versions.
      
      ## Features
      
      The umbrella crate supports no-std builds and can therefore be used in
      the runtime and node.
      There are two main features: `runtime` and `node`. The `runtime` feature
      enables all `no-std`
      crates, while the `node` feature enables all `std` crates. It should be
      used like any other
      crate in the repo, with `default-features = false`.
      
For more fine-grained control, each crate can additionally be enabled selectively. The umbrella
      exposes one feature per dependency. For example, if you only want to use
      the `frame-support`
      crate, you can enable the `frame-support` feature.
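For example, pulling in only `frame-support` through the umbrella could look like this in a `Cargo.toml` (version number illustrative):

```toml
[dependencies]
polkadot-sdk = { version = "1.9.0", default-features = false, features = ["frame-support"] }
```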
      
      The umbrella exposes a few more general features:
      - `tuples-96`: Needs to be enabled for runtimes that have more than 64
      pallets.
      - `serde`: Specifically enable `serde` en/decoding support.
- `experimental`: Enable experimental features - should not yet be used in production.
      - `with-tracing`: Enable tracing support.
      - `try-runtime`, `runtime-benchmarks` and `std`: These follow the
      standard conventions.
      - `runtime`: As described above, enable all `no-std` crates.
      - `node`: As described above, enable all `std` crates.
- There does *not* exist a dedicated docs feature. To generate docs, enable the `runtime` and `node` features. For docs.rs, the manifest contains specific configuration to make it show all re-exports.
      
      There is a specific `zepter` check in place to ensure that the features
      of the umbrella are
      correctly configured. This check is run in CI and locally when running
      `zepter`.
      
      ## Generation
      
The umbrella crate needs to be updated every time a new crate is added to or removed from the workspace. It is checked in CI by calling its generation script. The generation script is located in `./scripts/generate-umbrella.py` and requires the `cargo_workspace` Python dependency.
      
      Example: `python3 scripts/generate-umbrella.py --sdk . --version 1.9.0`
      
      ## Usage
      
      > Note: You can see a live example in the `staging-node-cli` and
      `kitchensink-runtime` crates.
      
      The umbrella crate can be added to your runtime crate like this:
      
      `polkadot-sdk = { path = "../../../../umbrella", features = ["runtime"],
      default-features =
      false}`
      
      or for a node:
      
      `polkadot-sdk = { path = "../../../../umbrella", features = ["node"],
      default-features = false
      }`
      
      In the code, it is then possible to bring all dependencies into scope
      via:
      
      `use polkadot_sdk::*;`
      
      ### Known Issues
      
      The only known issue so far is the fact that the `use` statement brings
      the dependencies only
      into the outer module scope - not the global crate scope. For example,
      the following code would
      need to be adjusted:
      
      ```rust
      use polkadot_sdk::*;
      
      mod foo {
         // This does sadly not compile:
         frame_support::parameter_types! { }
      
         // Instead, we need to do this (or add an equivalent `use` statement):
         polkadot_sdk::frame_support::parameter_types! { }
      }
      ```
      
Apart from this, no issues are known. There could be some bugs with how macros locate their own re-exports. Please report compile issues that arise from using this crate.
      
      ## Dependencies
      
      The umbrella crate re-exports all published crates, with a few
      exceptions:
- Runtime crates like `rococo-runtime` etc. are not exported. Exporting them leads to very weird compile errors, and they should not be needed anyway.
- Example and fuzzing crates are not exported. This is currently detected by checking the crate name for these magic words. In the future, it will utilize custom metadata, as is done in the `rococo-runtime` crate.
      - The umbrella crate itself. Should be obvious :)
      
      ## Follow Ups
- [ ] Re-write the generator in Rust - the Python script is at its limit.
- [ ] Use custom metadata to exclude some crates instead of filtering by name.
- [ ] Find a way to set the version properly. Currently it is locked in the CI script.
      
      ---------
      
Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
      1c7a1a58
    • Oliver Tale-Yazdi's avatar
      Remove litep2p git dependency (#4560) · 49bd6a6e
      Oliver Tale-Yazdi authored
@serban300 could you please do the same for the MMR crate? I am not sure what commit was released, since there are no release tags in the repo.
      
      ---------
      
Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
      49bd6a6e
    • Branislav Kontur's avatar
      Attempt to avoid specifying `BlockHashCount` for different... · ef144b1a
      Branislav Kontur authored
      Attempt to avoid specifying `BlockHashCount` for different `mocking::{MockBlock, MockBlockU32, MockBlockU128}` (#4543)
      
While doing some migration/rebase I ran into a situation where I needed to change `mocking::MockBlock` to `mocking::MockBlockU32`:
      ```
      #[derive_impl(frame_system::config_preludes::TestDefaultConfig)]
      impl frame_system::Config for TestRuntime {
      	type Block = frame_system::mocking::MockBlockU32<TestRuntime>;
      	type AccountData = pallet_balances::AccountData<ThisChainBalance>;
      }
      ```
But the actual `TestDefaultConfig` for `frame_system` uses `ConstU64` for `type BlockHashCount = frame_support::traits::ConstU64<10>;`
[here](https://github.com/paritytech/polkadot-sdk/blob/master/substrate/frame/system/src/lib.rs#L303). Because of this, I was forced to specify an override: `type BlockHashCount = ConstU32<10>`.
      
This PR tries to fix this with a `TestBlockHashCount` implementation for `TestDefaultConfig` which supports `u32`, `u64` and `u128` as the `BlockNumber`.
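A minimal sketch of the idea, using a stand-in `Get` trait rather than the real `frame_support::traits::Get` (the actual `TestBlockHashCount` in the PR may differ): a single generic impl can serve every block-number width.

```rust
// Stand-in for frame_support::traits::Get, to keep the sketch self-contained.
pub trait Get<T> {
    fn get() -> T;
}

pub struct TestBlockHashCount;

// One generic impl covers u32, u64 and u128, since they all implement From<u16>.
impl<I: From<u16>> Get<I> for TestBlockHashCount {
    fn get() -> I {
        10u16.into()
    }
}
```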
      
      ### How to simulate error
      Just by removing `type BlockHashCount = ConstU32<250>;`
      [here](https://github.com/paritytech/polkadot-sdk/blob/master/substrate/frame/multisig/src/tests.rs#L44)
      ```
      :~/parity/olkadot-sdk$ cargo test -p pallet-multisig
         Compiling pallet-multisig v28.0.0 (/home/bparity/parity/aaa/polkadot-sdk/substrate/frame/multisig)
      error[E0277]: the trait bound `ConstU64<10>: frame_support::traits::Get<u32>` is not satisfied
         --> substrate/frame/multisig/src/tests.rs:41:1
          |
      41  | #[derive_impl(frame_system::config_preludes::TestDefaultConfig)]
          | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the trait `frame_support::traits::Get<u32>` is not implemented for `ConstU64<10>`
          |
          = help: the following other types implement trait `frame_support::traits::Get<T>`:
                    <ConstU64<T> as frame_support::traits::Get<u64>>
                    <ConstU64<T> as frame_support::traits::Get<std::option::Option<u64>>>
      note: required by a bound in `frame_system::Config::BlockHashCount`
         --> /home/bparity/parity/aaa/polkadot-sdk/substrate/frame/system/src/lib.rs:535:24
          |
      535 |         type BlockHashCount: Get<BlockNumberFor<Self>>;
          |                              ^^^^^^^^^^^^^^^^^^^^^^^^^ required by this bound in `Config::BlockHashCount`
          = note: this error originates in the attribute macro `derive_impl` which comes from the expansion of the macro `frame_support::macro_magic::forward_tokens_verbatim` (in Nightly builds, run with -Z macro-backtrace for more info)
      
      For more information about this error, try `rustc --explain E0277`.
      error: could not compile `pallet-multisig` (lib test) due to 1 previous error 
      ```
      
      
      
      
      ## For reviewers:
      
      (If there is a better solution, please let me know!)
      
The first commit contains the actual attempt to fix the problem:
https://github.com/paritytech/polkadot-sdk/commit/3c5499e5.

The second commit just removes `BlockHashCount` from all other places where it is not needed by default.
      
      Closes: https://github.com/paritytech/polkadot-sdk/issues/1657
      
      ---------
      
Co-authored-by: Bastian Köcher <git@kchr.de>
      ef144b1a
    • Serban Iorga's avatar
      Use polkadot-ckb-merkle-mountain-range dependency (#4562) · 700d5910
      Serban Iorga authored
      We need to use the `polkadot-ckb-merkle-mountain-range` dependency
      published on `crates.io` in order to unblock the release of the
      `sp-mmr-primitives` crate
      700d5910
  2. May 23, 2024
    • Francisco Aguirre's avatar
      Mention new XCM docs in sdk docs (#4558) · 48d4f654
      Francisco Aguirre authored
      The XCM docs were pretty much moved to the new rust docs format in
      https://github.com/paritytech/polkadot-sdk/pull/2633, with the addition
      of the XCM cookbook, which I plan to add more examples to shortly.
      
      These docs were not mentioned in the polkadot-sdk rust docs, this PR
      just mentions them there, so people can actually find them.
      48d4f654
    • Serban Iorga's avatar
      Define `OpaqueValue` (#4550) · 03bbc17e
      Serban Iorga authored
      
Define `OpaqueValue` and use it instead of `grandpa::OpaqueKeyOwnershipProof` and `beefy::OpaqueKeyOwnershipProof`.
      
      Related to
      https://github.com/paritytech/polkadot-sdk/pull/4522#discussion_r1608278279
      
We'll need to introduce a runtime API method that calls the `report_fork_voting_unsigned()` extrinsic. This method will need to receive the ancestry proof as a parameter. I'm still not sure, but there is a chance that we'll send the ancestry proof as an opaque type.
      
So let's introduce this `OpaqueValue`. We can already use it to replace `grandpa::OpaqueKeyOwnershipProof` and `beefy::OpaqueKeyOwnershipProof`, and maybe we'll need it for the ancestry proof as well.
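A minimal sketch of what such an opaque wrapper can look like (the real `OpaqueValue` is SCALE-encodable and lives in the primitives crates; this only shows the shape of the pattern):

```rust
/// Opaque byte blob: callers pass it around without decoding it; only the
/// receiving side decodes it into a concrete type.
#[derive(Clone, Debug, PartialEq, Eq)]
pub struct OpaqueValue(Vec<u8>);

impl OpaqueValue {
    pub fn new(inner: Vec<u8>) -> Self {
        Self(inner)
    }
    pub fn into_inner(self) -> Vec<u8> {
        self.0
    }
}
```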
      
      ---------
      
Co-authored-by: Bastian Köcher <git@kchr.de>
      03bbc17e
    • PG Herveou's avatar
      Contracts: Rework host fn benchmarks (#4233) · 493ba5e2
      PG Herveou authored
      
      fix https://github.com/paritytech/polkadot-sdk/issues/4163
      
This PR does the following:

Updates to `pallet-contracts-proc-macro`:
- Parse `#[cfg]` so we can add a dummy noop host function for benchmarks.
- Generate `BenchEnv::<host_fn>` so we can call host functions directly in the benchmark.
- Add the weight of the noop host function before calling the host function itself.

Updates to benchmarks:
- Update all host function benchmarks; a host function benchmark now simply calls the host function, instead of invoking the function n times from within a contract.
- Refactor `RuntimeCosts` & `Schedule`; for most host functions, we can now use the generated weight function directly instead of computing the diff with the `cost!` macro.
      
      ```rust
      // Before
      #[benchmark(pov_mode = Measured)]
      fn seal_input(r: Linear<0, API_BENCHMARK_RUNS>) {
          let code = WasmModule::<T>::from(ModuleDefinition {
              memory: Some(ImportedMemory::max::<T>()),
              imported_functions: vec![ImportedFunction {
                  module: "seal0",
                  name: "seal_input",
                  params: vec![ValueType::I32, ValueType::I32],
                  return_type: None,
              }],
              data_segments: vec![DataSegment { offset: 0, value: 0u32.to_le_bytes().to_vec() }],
              call_body: Some(body::repeated(
                  r,
                  &[
                      Instruction::I32Const(4), // ptr where to store output
                      Instruction::I32Const(0), // ptr to length
                      Instruction::Call(0),
                  ],
              )),
              ..Default::default()
          });
      
          call_builder!(func, code);
      
          let res;
          #[block]
          {
              res = func.call();
          }
          assert_eq!(res.did_revert(), false);
      }
      ```
      
      ```rust
      // After
      fn seal_input(n: Linear<0, { code::max_pages::<T>() * 64 * 1024 - 4 }>) {
          let mut setup = CallSetup::<T>::default();
          let (mut ext, _) = setup.ext();
          let mut runtime = crate::wasm::Runtime::new(&mut ext, vec![42u8; n as usize]);
          let mut memory = memory!(n.to_le_bytes(), vec![0u8; n as usize],);
          let result;
          #[block]
          {
              result = BenchEnv::seal0_input(&mut runtime, &mut memory, 4, 0)
          }
          assert_ok!(result);
          assert_eq!(&memory[4..], &vec![42u8; n as usize]);
      }
      ``` 
      
      [Weights
      compare](https://weights.tasty.limo/compare?unit=weight&ignore_errors=true&threshold=10&method=asymptotic&repo=polkadot-sdk&old=master&new=pg%2Frework-host-benchs&path_pattern=substrate%2Fframe%2Fcontracts%2Fsrc%2Fweights.rs%2Cpolkadot%2Fruntime%2F*%2Fsrc%2Fweights%2F**%2F*.rs%2Cpolkadot%2Fbridges%2Fmodules%2F*%2Fsrc%2Fweights.rs%2Ccumulus%2F**%2Fweights%2F*.rs%2Ccumulus%2F**%2Fweights%2Fxcm%2F*.rs%2Ccumulus%2F**%2Fsrc%2Fweights.rs)
      
      ---------
      
      Co-authored-by: command-bot <>
Co-authored-by: Alexander Theißen <alex.theissen@me.com>
      493ba5e2
    • Branislav Kontur's avatar
      Fix bridges grandpa benchmarks (#2577) (#4548) · a823d18f
      Branislav Kontur authored
      
      Cherry-picked fix from upcoming
      https://github.com/paritytech/polkadot-sdk/pull/4494
      
      ---------
      
Co-authored-by: Svyatoslav Nikolsky <svyatonik@gmail.com>
      Co-authored-by: command-bot <>
      a823d18f
    • Kian Paimani's avatar
      Fix README.md Logo URL (#4546) · fd161917
      Kian Paimani authored
      This one also works and it is easier.
      fd161917
  3. May 22, 2024
  4. May 21, 2024
    • Javier Viola's avatar
      chore: bump zombienet version (#4535) · ec46106c
      Javier Viola authored
      This version includes the latest release of pjs/api
      (https://github.com/polkadot-js/api/releases/tag/v11.1.1).
      Thx!
      ec46106c
    • Dmitry Markin's avatar
      Replace `Multiaddr` & related types with substrate-specific types (#4198) · d05786ff
      Dmitry Markin authored
      This PR introduces custom types / substrate wrappers for `Multiaddr`,
      `multiaddr::Protocol`, `Multihash`, `ed25519::*` and supplementary types
      like errors and iterators.
      
      This is needed to unblock `libp2p` upgrade PR
      https://github.com/paritytech/polkadot-sdk/pull/1631 after
      https://github.com/paritytech/polkadot-sdk/pull/2944 was merged.
      `libp2p` and `litep2p` currently depend on different versions of
      `multiaddr` crate, and introduction of this "common ground" types is
      needed to support independent version upgrades of `multiaddr` and
      dependent crates in `libp2p` & `litep2p`.
      
While it would merely be convenient not to tie the versions of `libp2p` & `litep2p` dependencies together, it's currently not even possible to keep them updated to the same versions: `multiaddr` in `libp2p` depends on `libp2p-identity`, which we can't include as a dependency of `litep2p`, which has its own `PeerId` type. In the future, to keep things updated on the `litep2p` side, we will likely need to fork `multiaddr` and make it use the `litep2p` `PeerId` as the payload of the `/p2p/...` protocol.
      
With these changes, common code in substrate uses these custom types, and the `litep2p` & `libp2p` backends use the corresponding libraries' types.
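The "common ground" pattern can be sketched like this (names and internals hypothetical; the real wrappers are much richer): one substrate-owned type, with conversions applied at each backend boundary.

```rust
// Substrate-owned wrapper type; backends convert their own library types
// to and from it at the boundary.
#[derive(Clone, Debug, PartialEq, Eq)]
pub struct Multiaddr(Vec<u8>);

impl Multiaddr {
    pub fn as_bytes(&self) -> &[u8] {
        &self.0
    }
}

// Each backend (libp2p / litep2p) supplies a conversion from its own
// multiaddr representation; modeled here as raw bytes.
impl From<Vec<u8>> for Multiaddr {
    fn from(bytes: Vec<u8>) -> Self {
        Self(bytes)
    }
}
```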
      d05786ff
    • Svyatoslav Nikolsky's avatar
      Bridge: added force_set_pallet_state call to pallet-bridge-grandpa (#4465) · e0e1f2d6
      Svyatoslav Nikolsky authored
      closes https://github.com/paritytech/parity-bridges-common/issues/2963
      
      See issue above for rationale
I've been thinking about adding similar calls to other pallets, but:
- for the parachains pallet I haven't been able to think of a case when we will need that, given how long a referendum takes. I.e. if the storage proof format changes and we want to unstuck the bridge, it'll take a few weeks to sync a single parachain header, then another few weeks for another one, and so on;
- for the messages pallet I made a similar call initially, but it just changes a storage key (`OutboundLanes` and/or `InboundLanes`), so there's no logic here and it may simply be done using `system.set_storage`.
      
      ---------
      
      Co-authored-by: command-bot <>
      e0e1f2d6
    • Svyatoslav Nikolsky's avatar
      Fixed RPC subscriptions leak when subscription stream is finished (#4533) · d54feeb1
      Svyatoslav Nikolsky authored
      closes https://github.com/paritytech/parity-bridges-common/issues/3000
      
      Recently we've changed our bridge configuration for Rococo <> Westend
      and our new relayer has started to submit transactions every ~ `30`
seconds. Eventually, it switches itself into a limbo state, where it can't submit more transactions - all `author_submitAndWatchExtrinsic` calls
      are failing with the following error: `ERROR bridge Failed to send
      transaction to BridgeHubRococo node: Call(ErrorObject { code:
      ServerError(-32006), message: "Too many subscriptions on the
      connection", data: Some(RawValue("Exceeded max limit of 1024")) })`.
      
      Some links for those who want to explore:
- the server side (node) has a strict limit on the number of active subscriptions. It fails to open a new subscription if this limit is hit:
https://github.com/paritytech/jsonrpsee/blob/a4533966b997e83632509ad97eea010fc7c3efc0/server/src/middleware/rpc/layer/rpc_service.rs#L122-L132.
The limit is set to `1024` by default;
      - internally this limit is a semaphore with `limit` permits:
      https://github.com/paritytech/jsonrpsee/blob/a4533966b997e83632509ad97eea010fc7c3efc0/core/src/server/subscription.rs#L461-L485;
      - semaphore permit is acquired in the first link;
      - the permit is "returned" when the `SubscriptionSink` is dropped:
      https://github.com/paritytech/jsonrpsee/blob/a4533966b997e83632509ad97eea010fc7c3efc0/core/src/server/subscription.rs#L310-L325;
      - the `SubscriptionSink` is dropped when [this `polkadot-sdk`
      function](https://github.com/paritytech/polkadot-sdk/blob/278486f9/substrate/client/rpc/src/utils.rs#L58-L94)
      returns. In other words - when the connection is closed, the stream is
      finished or internal subscription buffer limit is hit;
- the subscription has an internal buffer, so sending an item consists of two steps: [reading an item from the underlying
      stream](https://github.com/paritytech/polkadot-sdk/blob/278486f9/substrate/client/rpc/src/utils.rs#L125-L141)
      and [sending it over the
      connection](https://github.com/paritytech/polkadot-sdk/blob/278486f9/substrate/client/rpc/src/utils.rs#L111-L116);
      - when the underlying stream is finished, the `inner_pipe_from_stream`
      wants to ensure that all items are sent to the subscriber. So it: [waits
      until the current send operation
      completes](https://github.com/paritytech/polkadot-sdk/blob/278486f9/substrate/client/rpc/src/utils.rs#L146-L148)
      and then [send all remaining items from the internal
      buffer](https://github.com/paritytech/polkadot-sdk/blob/278486f9/substrate/client/rpc/src/utils.rs#L150-L155).
      Once it is done, the function returns, the `SubscriptionSink` is
      dropped, semaphore permit is dropped and we are ready to accept new
      subscriptions;
- unfortunately, the code just calls `pending_fut.await.is_err()` to ensure that [the current send operation completes](https://github.com/paritytech/polkadot-sdk/blob/278486f9/substrate/client/rpc/src/utils.rs#L146-L148). But if there is no current send operation (which is normal), then `pending_fut` is set to a terminated future and the `await` never completes. Hence, no return from the function, no drop of `SubscriptionSink`, no drop of the semaphore permit, and no new subscriptions allowed (once the number of subscriptions hits the limit).
      
I've illustrated the issue with a small test - you may verify that if e.g. the stream is initially empty, `subscription_is_dropped_when_stream_is_empty` will hang because `pipe_from_stream` never exits.
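The failure mode can be reproduced in isolation: a "terminated" future is one whose `poll` always returns `Pending`, so awaiting it never finishes. A self-contained sketch (not the actual `jsonrpsee`/`polkadot-sdk` code):

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A future that never completes - a stand-in for `pending_fut` when there is
// no send operation in flight.
pub struct Terminated;

impl Future for Terminated {
    type Output = ();
    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<()> {
        Poll::Pending
    }
}

// Minimal no-op waker so the future can be polled without an executor.
pub fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}
```

Awaiting `Terminated` hangs forever, keeping the `SubscriptionSink` (and its semaphore permit) alive.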
      d54feeb1
    • Alin Dima's avatar
      Remove the prospective-parachains subsystem from collators (#4471) · 278486f9
      Alin Dima authored
      Implements https://github.com/paritytech/polkadot-sdk/issues/4429
      
      Collators only need to maintain the implicit view for the paraid they
      are collating on.
      In this case, bypass prospective-parachains entirely. It's still useful
      to use the GetMinimumRelayParents message from prospective-parachains
      for validators, because the data is already present there.
      
This enables us to entirely remove the subsystem from collators, which consumed resources needlessly.
      Aims to resolve https://github.com/paritytech/polkadot-sdk/issues/4167 
      
      TODO:
      - [x] fix unit tests
      278486f9
  5. May 20, 2024
  6. May 19, 2024
  7. May 17, 2024
    • PG Herveou's avatar
      a90d324d
    • Ankan's avatar
      Allow pool to be destroyed with an extra (erroneous) consumer reference on the pool account (#4503) · 2e36f571
      Ankan authored
      addresses https://github.com/paritytech/polkadot-sdk/issues/4440 (will
      close once we have this in prod runtimes).
      related: https://github.com/paritytech/polkadot-sdk/issues/2037.
      
An extra consumer reference is preventing pools from being destroyed. When a pool is ready to be destroyed, we can safely clear the consumer references, if any. Notably, I only check for one extra consumer reference, since that is a known bug. Anything more indicates possibly another issue, and we probably don't want to silently absorb those errors as well.
      
After this change, pools with an extra consumer reference should be able to be destroyed normally.
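The decision rule above can be sketched roughly as follows (hypothetical helper, not the pallet's actual code): absorb at most one stray reference, and refuse anything more.

```rust
/// Hypothetical sketch: given how many consumer references the pool account
/// holds and how many we expect at destruction time, decide whether it is
/// safe to clear the strays and dissolve the pool.
pub fn can_dissolve(consumers: u32, expected: u32) -> Result<(), &'static str> {
    match consumers.saturating_sub(expected) {
        // No stray reference, or exactly one (the known bug): safe to clear.
        0 | 1 => Ok(()),
        // More than one extra reference hints at a different problem;
        // don't silently absorb it.
        _ => Err("unexpected extra consumer references"),
    }
}
```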
      2e36f571
    • Clara van Staden's avatar
      Snowbridge - Ethereum Client - Public storage items (#4501) · 65c52484
      Clara van Staden authored
      Changes the Ethereum client storage scope to public, so it can be set in
      a migration.
      
When merged, we should backport to all other release branches:
      
      - [ ] release-crates-io-v1.7.0 - patch release the fellows BridgeHubs
      runtimes https://github.com/paritytech/polkadot-sdk/pull/4504
      - [ ] release-crates-io-v1.8.0 -
      https://github.com/paritytech/polkadot-sdk/pull/4505
      - [ ] release-crates-io-v1.9.0 -
      https://github.com/paritytech/polkadot-sdk/pull/4506
      - [ ] release-crates-io-v1.10.0 -
      https://github.com/paritytech/polkadot-sdk/pull/4507
      - [ ] release-crates-io-v1.11.0 -
      https://github.com/paritytech/polkadot-sdk/pull/4508
      - [ ] release-crates-io-v1.12.0 (commit soon)
      65c52484
    • Bastian Köcher's avatar
      pallet_balances: Add `try_state` for checking `Holds` and `Freezes` (#4490) · ca0fb0d9
      Bastian Köcher authored
      Co-authored-by: command-bot <>
      ca0fb0d9
    • Svyatoslav Nikolsky's avatar
      Bridge: fixed relayer version metric value (#4492) · 2c48b9dd
      Svyatoslav Nikolsky authored
Before the relayer crates were moved + merged, the `MetricsParams` type was created from the `substrate-relay` crate (binary), and hence it set the `substrate_relay_build_info` metric value properly - to the binary version. Now it is created from the `substrate-relay-helper` crate, which has the fixed (it isn't published) version `0.1.0`, so our relay provides an incorrect metric value. This 'breaks' our monitoring tools - we see that all relayers have that incorrect version, which is not cool.
      
The idea is to have a global static variable (shame on me) that is initialized by the binary during startup, like we already do with the logger initialization. I was considering some alternative options:
- adding a separate argument to every relayer subcommand and propagating it to `MetricsParams::new()` causes a lot of changes and introduces even more noise to the binary code, which is supposed to be as small as possible in the new design. But I could do that if the team thinks it is better;
- adding a `structopt(skip) pub relayer_version: RelayerVersion` argument to all subcommand params won't work, because it will be initialized by default, and `RelayerVersion` needs to reside in some util crate (not the binary), so it'll have the wrong value again.
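The global-static approach can be sketched with `std::sync::OnceLock` (names hypothetical; the actual `RelayerVersion` plumbing may differ): the binary sets the value once at startup, and the helper crate reads it when building `MetricsParams`.

```rust
use std::sync::OnceLock;

// Set once by the binary during startup, read later by the helper crate.
static RELAYER_VERSION: OnceLock<String> = OnceLock::new();

pub fn init_relayer_version(version: &str) {
    // Ignore a second initialization attempt instead of panicking.
    let _ = RELAYER_VERSION.set(version.to_string());
}

pub fn relayer_version() -> &'static str {
    RELAYER_VERSION.get().map(String::as_str).unwrap_or("unknown")
}
```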
      2c48b9dd
    • PG Herveou's avatar
      Contracts: remove kitchensink dynamic parameters (#4489) · f86f2131
      PG Herveou authored
      Using Dynamic Parameters for contracts seems like a bad idea for now.
      
      Given that we have benchmarks for each host function (in addition to our
      extrinsics), parameter storage reads will be counted multiple times. We
      will work on updates to the benchmarking framework to mitigate this
      issue in future iterations.
      
      ---------
      
      Co-authored-by: command-bot <>
      f86f2131
  8. May 16, 2024
    • Jesse Chejieh's avatar
      Adds `MaxRank` Config in `pallet-core-fellowship` (#3393) · d5fe478e
      Jesse Chejieh authored
      
      resolves #3315
      
      ---------
      
Co-authored-by: doordashcon <jessechejieh@doordashcon.local>
Co-authored-by: command-bot <>
Co-authored-by: Bastian Köcher <git@kchr.de>
      d5fe478e
    • Clara van Staden's avatar
      Snowbridge - Ethereum Client - Reject finalized updates without a sync... · 943eb46e
      Clara van Staden authored
      Snowbridge - Ethereum Client - Reject finalized updates without a sync committee in next store period (#4478)
      
      While syncing Ethereum consensus updates to the Snowbridge Ethereum
      light client, the syncing process stalled due to error
      `InvalidSyncCommitteeUpdate` when importing the next sync committee for
      period `1087`.
      
This bug manifested specifically because our light client checkpoint is a few weeks old (submitted to governance weeks ago) and had to catch up to a recent block. Since then, we have done thorough testing of the catch-up sync process.
      
      ### Symptoms
- Import the next sync committee for period `1086` (essentially period `1087`). Light client store period = `1086`.
- Import a header in period `1087`. Light client store period = `1087`. The current and next sync committees are not updated and are now in an outdated state. (current sync committee = `1086` and next sync committee = `1087`, where it should be current sync committee = `1087` and next sync committee = `None`)
- Importing the next sync committee for period `1087` (essentially period `1088`) fails because the expected next sync committee's roots don't match.
      
      ### Bug
The bug here is that the current and next sync committees didn't hand over when an update in the next period was received.
      
      ### Fix
      There are two possible fixes here:
1. Correctly hand over sync committees when a header in the next period is received.
2. Reject updates in the next period until the next sync committee period is known.

We opted for solution 2, which is more conservative and requires fewer changes.
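Solution 2 can be sketched as a guard on update import (types and error message simplified; not the actual Snowbridge code): a finalized update from the next period is rejected while the store has no next sync committee to hand over to.

```rust
/// Simplified light-client store: `next_sync_committee` is `Some` once the
/// committee for the following period is known (a root stands in for the
/// committee itself).
pub struct Store {
    pub period: u64,
    pub next_sync_committee: Option<[u8; 32]>,
}

/// Reject a finalized update from the next period while we have no next
/// sync committee to hand over to.
pub fn check_update(store: &Store, update_period: u64) -> Result<(), &'static str> {
    if update_period == store.period + 1 && store.next_sync_committee.is_none() {
        return Err("update in next period but next sync committee unknown");
    }
    Ok(())
}
```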
      
      ### Polkadot-sdk versions
      This fix should be backported in polkadot-sdk versions 1.7 and up.
      
      Snowfork PR: https://github.com/Snowfork/polkadot-sdk/pull/145
      
      ---------
      
Co-authored-by: Vincent Geddes <117534+vgeddes@users.noreply.github.com>
      943eb46e
    • polka.dom's avatar
      Remove pallet::getter usage from the democracy pallet (#4472) · 04f88f5b
      polka.dom authored
      As per #3326, removes usage of the pallet::getter macro from the
      democracy pallet. The syntax `StorageItem::<T, I>::get()` should be used
      instead.
      
      cc @muraca
      04f88f5b
    • Dmitry Markin's avatar
      Demote per-peer validation slots warning to debug (#4480) · 8d293970
      Dmitry Markin authored
      Demote `Ignored block announcement because all validation slots for this
      peer are occupied.` message to debug level.
      
This is mostly an indicator of somebody spamming the node or (more likely) some node actively keeping up with the network but not recognizing it's in a major sync mode, so sending zillions of block announcements (we have seen this on Versi).
      
      This warning shouldn't be considered an error by the end user, so let's
      make it debug.
      
      Ref. https://github.com/paritytech/polkadot-sdk/issues/1929.
      8d293970
    • Svyatoslav Nikolsky's avatar
      Bridge: drop subscriptions when they are no longer required (#4481) · 453bb18c
      Svyatoslav Nikolsky authored
The bridge relay is **not** using `tokio`, while `jsonrpsee` does. To make them work together, we spawn a separate tokio task for every jsonrpsee subscription, which holds a subscription reference. It looks like we are not stopping those tasks when we no longer need them, and when there are more than `1024` active subscriptions, `jsonrpsee` stops opening new subscriptions. This PR adds a `cancel` signal that is sent to the background task when we no longer need a subscription.
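The cancel-signal pattern can be sketched with a shared flag (a std-thread stand-in for the actual tokio task; names hypothetical):

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;

// Spawned per subscription; exits - dropping its subscription reference and
// freeing a server-side slot - as soon as the cancel signal is raised.
pub fn spawn_subscription_task(cancel: Arc<AtomicBool>) -> thread::JoinHandle<()> {
    thread::spawn(move || {
        while !cancel.load(Ordering::Relaxed) {
            // Stand-in for forwarding one subscription item.
            thread::yield_now();
        }
        // Subscription reference is dropped here when the task returns.
    })
}
```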
      453bb18c
    • Alexandru Vasile's avatar
      network/discovery: Add to DHT only peers that support genesis-based protocol (#3833) · 3399bc09
      Alexandru Vasile authored
      
      This PR adds to the DHT only the peers that support the genesis/fork/kad
      protocol.
      Before this PR, any peer that supported the legacy `/kad/[id]` protocol
      was added to the DHT.
      
      This is the first step in removing the support for the legacy kad
      protocols.
      
      While I have adjusted unit tests to validate the appropriate behavior,
      this still needs proper testing in our stack.
      
      Part of https://github.com/paritytech/polkadot-sdk/issues/504.
      
      cc @paritytech/networking
      
      ---------
      
Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
Co-authored-by: Bastian Köcher <git@kchr.de>
      3399bc09
    • Oliver Tale-Yazdi's avatar
      [Runtime] Bound XCMP queue (#3952) · 4adfa37d
      Oliver Tale-Yazdi authored
      
      Re-applying #2302 after increasing the `MaxPageSize`.  
      
      Remove `without_storage_info` from the XCMP queue pallet. Part of
      https://github.com/paritytech/polkadot-sdk/issues/323
      
      Changes:
- Limit the number of messages and signals an HRMP channel can have at most.
- Limit the number of HRMP channels.
      
A no-op migration is put in place to ensure that all `BoundedVec`s still decode and do not truncate after the upgrade. The storage version is thereby bumped to 5 so that our tooling reminds us to deploy that migration.
      
      ## Integration
      
      If you see this error in your try-runtime-cli:  
      ```pre
      Max message size for channel is too large. This means that the V5 migration can be front-run and an
      attacker could place a large message just right before the migration to make other messages un-decodable.
      Please either increase `MaxPageSize` or decrease the `max_message_size` for this channel. Channel max:
      102400, MaxPageSize: 65535
      ```
      
      Then increase the `MaxPageSize` of the `cumulus_pallet_xcmp_queue` to
      something like this:
      ```rust
      type MaxPageSize = ConstU32<{ 103 * 1024 }>;
      ```
      
      There is currently no easy way for on-chain governance to adjust the
      HRMP max message size of all channels, but it could be done:
      https://github.com/paritytech/polkadot-sdk/issues/3145.
      
      ---------
      
Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com>
      4adfa37d