  1. May 22, 2024
  2. May 21, 2024
    • chore: bump zombienet version (#4535) · ec46106c
      Javier Viola authored
      This version includes the latest release of pjs/api
      (https://github.com/polkadot-js/api/releases/tag/v11.1.1).
      Thx!
    • Replace `Multiaddr` & related types with substrate-specific types (#4198) · d05786ff
      Dmitry Markin authored
      This PR introduces custom types / substrate wrappers for `Multiaddr`,
      `multiaddr::Protocol`, `Multihash`, `ed25519::*` and supplementary types
      like errors and iterators.
      
      This is needed to unblock `libp2p` upgrade PR
      https://github.com/paritytech/polkadot-sdk/pull/1631 after
      https://github.com/paritytech/polkadot-sdk/pull/2944 was merged.
      `libp2p` and `litep2p` currently depend on different versions of
      `multiaddr` crate, and introduction of this "common ground" types is
      needed to support independent version upgrades of `multiaddr` and
      dependent crates in `libp2p` & `litep2p`.
      
      While being just convenient to not tie versions of `libp2p` & `litep2p`
      dependencies together, it's currently not even possible to keep `libp2p`
      & `litep2p` dependencies updated to the same versions as `multiaddr` in
      `libp2p` depends on `libp2p-identity` that we can't include as a
      dependency of `litep2p`, which has it's own `PeerId` type. In the
      future, to keep things updated on `litep2p` side, we will likely need to
      fork `multiaddr` and make it use `litep2p` `PeerId` as a payload of
      `/p2p/...` protocol.
      
      With these changes, common code in substrate uses these custom types,
      and `litep2p` & `libp2p` backends use corresponding libraries types.
    • Bridge: added force_set_pallet_state call to pallet-bridge-grandpa (#4465) · e0e1f2d6
      Svyatoslav Nikolsky authored
      closes https://github.com/paritytech/parity-bridges-common/issues/2963
      
      See the issue above for the rationale.
      I've been thinking about adding similar calls to other pallets, but:
      - for the parachains pallet, I haven't been able to think of a case
      where we would need it, given how long a referendum takes. I.e. if the
      storage proof format changes and we want to unstuck the bridge, it'll
      take a few weeks to sync a single parachain header, then more weeks for
      the next, and so on.
      - for the messages pallet, I initially made a similar call, but it just
      changes a storage key (`OutboundLanes` and/or `InboundLanes`), so
      there's no logic involved and it can simply be done using
      `system.set_storage`.
      
      ---------
      
      Co-authored-by: command-bot <>
    • Fixed RPC subscriptions leak when subscription stream is finished (#4533) · d54feeb1
      Svyatoslav Nikolsky authored
      closes https://github.com/paritytech/parity-bridges-common/issues/3000
      
      Recently we've changed our bridge configuration for Rococo <> Westend
      and our new relayer has started to submit transactions every ~ `30`
      seconds. Eventually, it switches itself into a limbo state, where it
      can't submit more transactions - all `author_submitAndWatchExtrinsic`
      calls
      are failing with the following error: `ERROR bridge Failed to send
      transaction to BridgeHubRococo node: Call(ErrorObject { code:
      ServerError(-32006), message: "Too many subscriptions on the
      connection", data: Some(RawValue("Exceeded max limit of 1024")) })`.
      
      Some links for those who want to explore:
      - the server side (node) has a strict limit on the number of active
      subscriptions. It fails to open a new subscription if this limit is hit:
      https://github.com/paritytech/jsonrpsee/blob/a4533966b997e83632509ad97eea010fc7c3efc0/server/src/middleware/rpc/layer/rpc_service.rs#L122-L132.
      The limit is set to `1024` by default;
      - internally this limit is a semaphore with `limit` permits:
      https://github.com/paritytech/jsonrpsee/blob/a4533966b997e83632509ad97eea010fc7c3efc0/core/src/server/subscription.rs#L461-L485;
      - semaphore permit is acquired in the first link;
      - the permit is "returned" when the `SubscriptionSink` is dropped:
      https://github.com/paritytech/jsonrpsee/blob/a4533966b997e83632509ad97eea010fc7c3efc0/core/src/server/subscription.rs#L310-L325;
      - the `SubscriptionSink` is dropped when [this `polkadot-sdk`
      function](https://github.com/paritytech/polkadot-sdk/blob/278486f9/substrate/client/rpc/src/utils.rs#L58-L94)
      returns. In other words - when the connection is closed, the stream is
      finished, or the internal subscription buffer limit is hit;
      - the subscription has an internal buffer, so sending an item consists
      of two steps: [reading an item from the underlying
      stream](https://github.com/paritytech/polkadot-sdk/blob/278486f9/substrate/client/rpc/src/utils.rs#L125-L141)
      and [sending it over the
      connection](https://github.com/paritytech/polkadot-sdk/blob/278486f9/substrate/client/rpc/src/utils.rs#L111-L116);
      - when the underlying stream is finished, the `inner_pipe_from_stream`
      wants to ensure that all items are sent to the subscriber. So it: [waits
      until the current send operation
      completes](https://github.com/paritytech/polkadot-sdk/blob/278486f9/substrate/client/rpc/src/utils.rs#L146-L148)
      and then [sends all remaining items from the internal
      buffer](https://github.com/paritytech/polkadot-sdk/blob/278486f9/substrate/client/rpc/src/utils.rs#L150-L155).
      Once that is done, the function returns, the `SubscriptionSink` is
      dropped, the semaphore permit is released, and we are ready to accept
      new subscriptions;
      - unfortunately, the code just calls `pending_fut.await.is_err()` to
      ensure that [the current send operation
      completes](https://github.com/paritytech/polkadot-sdk/blob/278486f9/substrate/client/rpc/src/utils.rs#L146-L148).
      But if there is no current send operation (which is normal), then
      `pending_fut` is set to a terminated future and the `await` never
      completes. Hence, no return from the function, no drop of the
      `SubscriptionSink`, no drop of the semaphore permit, and no new
      subscriptions allowed (once the number of subscriptions hits the limit).
      
      I've illustrated the issue with a small test - you may verify that if
      e.g.
      the stream is initially empty, the
      `subscription_is_dropped_when_stream_is_empty` will hang because
      `pipe_from_stream` never exits.
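      The failure mode above can be modeled in a few lines of plain Rust (a
      hedged sketch with an illustrative function name, not the actual
      `inner_pipe_from_stream` code):

      ```rust
      use std::task::Poll;

      // Before flushing the remaining buffered items, the function must wait
      // for the in-flight send, if any, to finish. The bug: "no send in
      // flight" was modeled as a terminated future, and awaiting it never
      // completes, so the function never returned and the `SubscriptionSink`
      // (with its semaphore permit) was never dropped. The fix amounts to
      // treating the absence of a pending send as already complete.
      fn send_step_complete(pending: Option<Poll<()>>) -> bool {
          match pending {
              Some(Poll::Ready(())) => true, // in-flight send finished
              Some(Poll::Pending) => false,  // still sending: keep waiting
              None => true,                  // nothing in flight: done (the fix)
          }
      }
      ```

      With the buggy behavior, the `None` case never completed, which is
      exactly the hang the `subscription_is_dropped_when_stream_is_empty`
      test exposes.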
    • Remove the prospective-parachains subsystem from collators (#4471) · 278486f9
      Alin Dima authored
      Implements https://github.com/paritytech/polkadot-sdk/issues/4429
      
      Collators only need to maintain the implicit view for the paraid they
      are collating on.
      In this case, bypass prospective-parachains entirely. It's still useful
      to use the GetMinimumRelayParents message from prospective-parachains
      for validators, because the data is already present there.
      
      This enables us to entirely remove the subsystem from collators, which
      consumed resources needlessly.
      
      Aims to resolve https://github.com/paritytech/polkadot-sdk/issues/4167 
      
      TODO:
      - [x] fix unit tests
  3. May 20, 2024
  4. May 19, 2024
  5. May 17, 2024
    • PG Herveou authored · a90d324d
    • Allow pool to be destroyed with an extra (erroneous) consumer reference on the pool account (#4503) · 2e36f571
      Ankan authored
      addresses https://github.com/paritytech/polkadot-sdk/issues/4440 (will
      close once we have this in prod runtimes).
      related: https://github.com/paritytech/polkadot-sdk/issues/2037.
      
      An extra consumer reference is preventing pools from being destroyed.
      When a pool is ready to be destroyed, we can safely clear the consumer
      references, if any. Notably, I only check
      for one extra consumer reference since that is a known bug. Anything
      more indicates possibly another issue and we probably don't want to
      silently absorb those errors as well.
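      The rule described above can be sketched as follows (a hedged, std-only
      sketch with illustrative names, not the actual nomination-pools code):

      ```rust
      // Tolerate exactly one extra consumer reference on the pool account
      // when the pool is being destroyed; a larger excess likely indicates a
      // different bug and is surfaced as an error rather than silently
      // absorbed.
      fn adjust_consumers_for_destroy(actual: u32, expected: u32) -> Result<u32, &'static str> {
          match actual.checked_sub(expected) {
              Some(0) => Ok(expected), // nothing to clear
              Some(1) => Ok(expected), // the one known extra ref: clear it
              _ => Err("unexpected consumer count; refusing to silently absorb"),
          }
      }
      ```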
      
      After this change, pools with an extra consumer reference should be able
      to
      destroy normally.
    • Snowbridge - Ethereum Client - Public storage items (#4501) · 65c52484
      Clara van Staden authored
      Changes the Ethereum client storage scope to public, so it can be set in
      a migration.
      
      When merged, we should backport this to all other release branches:
      
      - [ ] release-crates-io-v1.7.0 - patch release the fellows BridgeHubs
      runtimes https://github.com/paritytech/polkadot-sdk/pull/4504
      - [ ] release-crates-io-v1.8.0 -
      https://github.com/paritytech/polkadot-sdk/pull/4505
      - [ ] release-crates-io-v1.9.0 -
      https://github.com/paritytech/polkadot-sdk/pull/4506
      - [ ] release-crates-io-v1.10.0 -
      https://github.com/paritytech/polkadot-sdk/pull/4507
      - [ ] release-crates-io-v1.11.0 -
      https://github.com/paritytech/polkadot-sdk/pull/4508
      - [ ] release-crates-io-v1.12.0 (commit soon)
    • pallet_balances: Add `try_state` for checking `Holds` and `Freezes` (#4490) · ca0fb0d9
      Bastian Köcher authored
      Co-authored-by: command-bot <>
    • Bridge: fixed relayer version metric value (#4492) · 2c48b9dd
      Svyatoslav Nikolsky authored
      Before the relayer crates were moved + merged, the `MetricsParams` type
      was created from the `substrate-relay` crate (binary), so the
      `substrate_relay_build_info` metric value was set properly - to the
      binary version. Now it is created from the `substrate-relay-helper`
      crate, which has the fixed (it isn't published) version `0.1.0`, so our
      relay reports an incorrect metric value. This 'breaks' our monitoring
      tools - we see that all relayers have that incorrect version, which is
      not cool.
      
      The idea is to have a global static variable (shame on me) that is
      initialized by the binary during initialization like we do with the
      logger initialization already. I considered some alternative options:
      - adding a separate argument to every relayer subcommand and propagating
      it to `MetricsParams::new()` causes a lot of changes and introduces
      even more noise to the binary code, which is supposed to be as small as
      possible in the new design. But I could do that if the team thinks it is
      better;
      - adding a `structopt(skip) pub relayer_version: RelayerVersion`
      argument to all subcommand params won't work, because it will be
      initialized by default and `RelayerVersion` needs to reside in some util
      crate (not the binary), so it'll have the wrong value again.
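      The "global static initialized by the binary" approach can be sketched
      with `std::sync::OnceLock` (a hedged sketch; the names are illustrative,
      not the actual `substrate-relay-helper` API):

      ```rust
      use std::sync::OnceLock;

      // Set once by the binary during startup (like logger initialization);
      // read by library code when building the metrics parameters.
      static RELAYER_VERSION: OnceLock<String> = OnceLock::new();

      fn init_relayer_version(version: &str) {
          // A second initialization attempt is ignored: `set` only succeeds
          // for the first caller.
          let _ = RELAYER_VERSION.set(version.to_string());
      }

      fn relayer_version() -> &'static str {
          RELAYER_VERSION
              .get()
              .map(|v| v.as_str())
              .unwrap_or("unknown")
      }
      ```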
    • Contracts: remove kitchensink dynamic parameters (#4489) · f86f2131
      PG Herveou authored
      Using Dynamic Parameters for contracts seems like a bad idea for now.
      
      Given that we have benchmarks for each host function (in addition to our
      extrinsics), parameter storage reads will be counted multiple times. We
      will work on updates to the benchmarking framework to mitigate this
      issue in future iterations.
      
      ---------
      
      Co-authored-by: command-bot <>
  6. May 16, 2024
    • Adds `MaxRank` Config in `pallet-core-fellowship` (#3393) · d5fe478e
      Jesse Chejieh authored

      resolves #3315
      
      ---------
      
      Co-authored-by: doordashcon <[email protected]>
      Co-authored-by: command-bot <>
      Co-authored-by: Bastian Köcher <[email protected]>
    • Snowbridge - Ethereum Client - Reject finalized updates without a sync committee in next store period (#4478) · 943eb46e
      Clara van Staden authored
      
      While syncing Ethereum consensus updates to the Snowbridge Ethereum
      light client, the syncing process stalled due to error
      `InvalidSyncCommitteeUpdate` when importing the next sync committee for
      period `1087`.
      
      This bug manifested specifically because our light client checkpoint is
      a few weeks old (submitted to governance weeks ago) and had to catch up
      to a recent block. Since then, we have done thorough testing of the
      catchup sync process.
      
      ### Symptoms
      - Import next sync committee for period `1086` (essentially period
      `1087`). Light client store period = `1086`.
      - Import header in period `1087`. Light client store period = `1087`.
      The current and next sync committees are not updated, and are now in an
      outdated state. (current sync committee = `1086` and next sync
      committee = `1087`, where it should be current sync committee = `1087`
      and next sync committee = `None`)
      - Import next sync committee for period `1087` (essentially period
      `1088`) fails because the expected next sync committee's roots don't
      match.
      
      ### Bug
      The bug here is that the current and next sync committees didn't
      hand over when an update in the next period was received.
      
      ### Fix
      There are two possible fixes here:
      1. Correctly hand over sync committees when a header in the next period
      is received.
      2. Reject updates in the next period until the next sync committee
      period is known.
      
      We opted for solution 2, which is more conservative and requires fewer
      changes.
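      Option 2 reduces to a simple acceptance check, sketched here with
      illustrative types (not the actual Ethereum client code):

      ```rust
      // Minimal model of the light client store for the acceptance rule.
      struct Store {
          period: u64,                     // period of the latest finalized header
          next_sync_committee_known: bool, // is period + 1's committee stored?
      }

      // Reject an update from the next period until the next sync committee
      // for that period is known; updates for the current period pass.
      fn accept_update(store: &Store, update_period: u64) -> bool {
          if update_period == store.period + 1 {
              store.next_sync_committee_known
          } else {
              update_period == store.period
          }
      }
      ```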
      
      ### Polkadot-sdk versions
      This fix should be backported in polkadot-sdk versions 1.7 and up.
      
      Snowfork PR: https://github.com/Snowfork/polkadot-sdk/pull/145
      
      ---------
      
      Co-authored-by: Vincent Geddes <[email protected]>
    • Remove pallet::getter usage from the democracy pallet (#4472) · 04f88f5b
      polka.dom authored
      As per #3326, removes usage of the pallet::getter macro from the
      democracy pallet. The syntax `StorageItem::<T, I>::get()` should be used
      instead.
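      As a std-only analogue of the change (illustrative names, not the actual
      democracy pallet storage items):

      ```rust
      // Stand-in for a storage item; in FRAME this would be a storage type
      // accessed as `StorageItem::<T, I>::get()`.
      struct PublicPropCount;

      impl PublicPropCount {
          fn get() -> u32 {
              7 // placeholder value
          }
      }

      // Old style: an accessor generated by `#[pallet::getter(..)]`. The PR
      // removes these wrappers; callers use `PublicPropCount::get()` directly.
      fn public_prop_count() -> u32 {
          PublicPropCount::get()
      }
      ```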
      
      cc @muraca
    • Demote per-peer validation slots warning to debug (#4480) · 8d293970
      Dmitry Markin authored
      Demote `Ignored block announcement because all validation slots for this
      peer are occupied.` message to debug level.
      
      This is mostly an indicator of somebody spamming the node or (more
      likely) some node actively keeping up with the network but not
      recognizing it's in a major sync mode, so sending zillions of block
      announcements (have seen this on Versi).
      
      This warning shouldn't be considered an error by the end user, so let's
      make it debug.
      
      Ref. https://github.com/paritytech/polkadot-sdk/issues/1929.
    • Bridge: drop subscriptions when they are no longer required (#4481) · 453bb18c
      Svyatoslav Nikolsky authored
      The bridge relay is **not** using `tokio`, while `jsonrpsee` does. To
      make it work together, we are spawning a separate tokio task for every
      jsonrpsee subscription, which holds a subscription reference. It looks
      like we were not stopping those tasks when we no longer needed them, and
      when there are more than `1024` active subscriptions, `jsonrpsee` stops
      opening new ones. This PR adds a `cancel` signal that is sent
      to the background task when we no longer need a subscription.
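      The shape of the fix can be sketched with std threads and channels
      standing in for tokio tasks (a hedged sketch, not the actual relay
      code):

      ```rust
      use std::sync::mpsc::{self, TryRecvError};
      use std::thread::{self, JoinHandle};
      use std::time::Duration;

      // The background task that holds the subscription reference also
      // watches a cancel signal; when it fires (or the sender is dropped),
      // the task exits, freeing the server-side subscription slot.
      fn spawn_subscription_task() -> (mpsc::Sender<()>, JoinHandle<()>) {
          let (cancel_tx, cancel_rx) = mpsc::channel::<()>();
          let handle = thread::spawn(move || loop {
              // ... poll and forward subscription items here ...
              match cancel_rx.try_recv() {
                  // Cancelled, or the owner went away: drop the subscription.
                  Ok(()) | Err(TryRecvError::Disconnected) => break,
                  Err(TryRecvError::Empty) => thread::sleep(Duration::from_millis(5)),
              }
          });
          (cancel_tx, handle)
      }
      ```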
    • network/discovery: Add to DHT only peers that support genesis-based protocol (#3833) · 3399bc09
      Alexandru Vasile authored

      This PR adds to the DHT only the peers that support the genesis/fork/kad
      protocol.
      Before this PR, any peer that supported the legacy `/kad/[id]` protocol
      was added to the DHT.
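      The filtering rule reduces to checking the advertised protocol names
      (hedged sketch; the protocol strings below are illustrative):

      ```rust
      // Add a peer to the DHT only if it advertises the genesis-based
      // Kademlia protocol name; merely supporting the legacy `/kad/[id]`
      // name no longer qualifies.
      fn should_add_to_dht(advertised: &[&str], genesis_protocol: &str) -> bool {
          advertised.iter().any(|p| *p == genesis_protocol)
      }
      ```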
      
      This is the first step in removing the support for the legacy kad
      protocols.
      
      While I have adjusted unit tests to validate the appropriate behavior,
      this still needs proper testing in our stack.
      
      Part of https://github.com/paritytech/polkadot-sdk/issues/504.
      
      cc @paritytech/networking
      
      ---------
      
      Signed-off-by: Alexandru Vasile <[email protected]>
      Co-authored-by: Bastian Köcher <[email protected]>
    • [Runtime] Bound XCMP queue (#3952) · 4adfa37d
      Oliver Tale-Yazdi authored

      Re-applying #2302 after increasing the `MaxPageSize`.  
      
      Remove `without_storage_info` from the XCMP queue pallet. Part of
      https://github.com/paritytech/polkadot-sdk/issues/323
      
      Changes:
      - Limit the number of messages and signals an HRMP channel can have at
      most.
      - Limit the number of HRMP channels.
      
      A no-op migration is put in place to ensure that all `BoundedVec`s still
      decode and do not truncate after the upgrade. The storage version is
      thereby bumped to 5 so that our tooling reminds us to deploy that
      migration.
      
      ## Integration
      
      If you see this error in your try-runtime-cli:  
      ```pre
      Max message size for channel is too large. This means that the V5 migration can be front-run and an
      attacker could place a large message just right before the migration to make other messages un-decodable.
      Please either increase `MaxPageSize` or decrease the `max_message_size` for this channel. Channel max:
      102400, MaxPageSize: 65535
      ```
      
      Then increase the `MaxPageSize` of the `cumulus_pallet_xcmp_queue` to
      something like this:
      ```rust
      type MaxPageSize = ConstU32<{ 103 * 1024 }>;
      ```
      
      There is currently no easy way for on-chain governance to adjust the
      HRMP max message size of all channels, but it could be done:
      https://github.com/paritytech/polkadot-sdk/issues/3145.
      
      ---------
      
      Signed-off-by: Oliver Tale-Yazdi <[email protected]>
      Co-authored-by: Francisco Aguirre <[email protected]>
    • Remove pallet::getter usage from the bounties and child-bounties pallets (#4392) · 6487ac1e
      polka.dom authored

      As per #3326, removes pallet::getter usage from the bounties and
      child-bounties pallets. The syntax `StorageItem::<T, I>::get()` should
      be used instead.
      
      Changes to one pallet involved changes in the other, so I figured it'd
      be best to combine these two.
      
      cc @muraca
      
      ---------
      
      Co-authored-by: Bastian Köcher <[email protected]>
    • Deprecate `dmp-queue` pallet (#4475) · 76230a15
      Oliver Tale-Yazdi authored
      `cumulus-pallet-dmp-queue` is not needed anymore since
      https://github.com/paritytech/polkadot-sdk/pull/1246.
      
      The only logic that remains in the pallet is a lazy migration in the
      [`on_idle`](https://github.com/paritytech/polkadot-sdk/blob/8d62c13b/cumulus/pallets/dmp-queue/src/lib.rs#L158)
      hook.
      
      ---------
      
      Signed-off-by: Oliver Tale-Yazdi <[email protected]>
    • [ci] Fix publish-subsystem-benchmarks (#4479) · 717eb2c4
      Alexander Samusev authored
      Fix after https://github.com/paritytech/polkadot-sdk/pull/4449
    • XCM Cookbook (#2633) · 289f5bbf
      Francisco Aguirre authored

      # Context
      
      XCM docs are currently an mdBook hosted on GitHub Pages:
      https://paritytech.github.io/xcm-docs/.
      While that's fine, it's not in line with the work being done on the
      polkadot-sdk docs.
      
      # Main addition
      
      This PR aims to fix that by bringing the docs back to this repo.
      This does not have all the information currently present in the mdbook
      xcm-docs but aims to be a good chunk of it and fully replace it over
      time.
      
      I also added the sections `guides` and `cookbook`, which will be very
      useful for users wanting to get into XCM.
      For now I've only added one example to the cookbook, but I have ideas
      for guides and more examples.
      Having these docs in Rust docs is very useful for the cookbook.
      
      # TODO
      
      - [x] Use `FungibleAdapter`
      - [x] Improve and relocate mock message queue
      - [x] Fix license issue. Why does docs/sdk/ not have this problem? (Just
      added the licenses)
      
      # Next steps
      
      - More examples in the cookbook
      - End-to-end XCM guide with zombienet testing
      
      ---------
      
      Co-authored-by: command-bot <>
      Co-authored-by: joe petrowski <[email protected]>
  7. May 15, 2024