  1. Jan 26, 2024
  2. Jan 23, 2024
    • rpc: backpressured RPC server (bump jsonrpsee 0.20) (#1313) · e16ef086
      Niklas Adolfsson authored
      This is a rather big change in jsonrpsee; the major things in this bump
      are:
      - Server backpressure (the subscription implementations are modified to
      deal with that)
      - Allow custom error types / return types (remove `jsonrpsee::core::Error`
      and `jsonrpsee::core::CallError`)
      - Bug fixes (graceful shutdown in particular, which is not used by
      substrate anyway)
      - Fewer dependencies, for the clients in particular
      - Return type requires `Clone` in method call responses
      - Moved to tokio channels
      - Async subscription API (not used in this PR)
      
      Major changes in this PR:
      - Subscriptions are now bounded and if a subscription can't keep up with
      the server it is dropped
      - CLI: add a parameter to configure the JSON-RPC server's bounded message
      buffer (default is 64)
      - Add our own subscription helper to deal with the unbounded streams in
      substrate
      
      The most important thing to review in this PR is the added helper
      functions in `substrate/client/rpc/src/utils.rs`; the rest is pretty much
      a chore.
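      For context, the core of such a helper is to forward an unbounded
      substrate stream into a bounded buffer and drop the subscription when the
      client can't keep up. The following is only a minimal sketch of that idea
      (the function name, buffer handling and types are illustrative, not the
      actual `utils.rs` code):

      ```
      use futures::{Stream, StreamExt};
      use tokio::sync::mpsc;

      /// Illustrative sketch: forward items from a (possibly unbounded) stream
      /// into a bounded channel. If the bounded buffer is full, the subscriber
      /// is too slow and we stop forwarding, i.e. the subscription is dropped.
      /// Must be called from within a tokio runtime.
      fn pipe_from_stream<T, S>(mut stream: S, buf_size: usize) -> mpsc::Receiver<T>
      where
          T: Send + 'static,
          S: Stream<Item = T> + Unpin + Send + 'static,
      {
          let (tx, rx) = mpsc::channel(buf_size);
          tokio::spawn(async move {
              while let Some(item) = stream.next().await {
                  // `try_send` fails when the buffer is full (slow subscriber)
                  // or the receiver is gone; in both cases we give up.
                  if tx.try_send(item).is_err() {
                      break;
                  }
              }
          });
          rx
      }
      ```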
      
      Regarding the bounded buffer limit: it may cause the server to handle the
      JSON-RPC calls more slowly than before.
      
      The size of each message is bounded by `--rpc-response-size`, so the
      buffer amounts to at most 10MB * 64 = 640MB by default, but the
      subscription message size is not covered by this limit and could be
      capped as well.
      
      Hopefully the last release prior to 1.0, sorry in advance for a big PR
      
      Previous attempt: https://github.com/paritytech/substrate/pull/13992
      
      Resolves https://github.com/paritytech/polkadot-sdk/issues/748, resolves
      https://github.com/paritytech/polkadot-sdk/issues/627
  3. Jan 22, 2024
  4. Jan 21, 2024
  5. Jan 18, 2024
  6. Jan 15, 2024
  7. Jan 11, 2024
    • Cumulus test service cleanup (#2887) · c93f5aba
      Sebastian Kunert authored
      closes #2567 
      
      Followup for https://github.com/paritytech/polkadot-sdk/pull/2331
      
      This PR contains multiple internal cleanups:
      
      1. Gets rid of the functionality in `generate_genesis_block` that was
      only used in one benchmark
      2. Fixes the `transaction_pool` and `transaction_throughput` benchmarks,
      which were failing since they now require a tokio runtime (see the sketch
      after this list)
      3. Removes the `parachain_id` CLI option from the test parachain
      4. Removes the `expect` call from `RuntimeResolver`
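      Regarding (2), a benchmark that drives async code needs an explicitly
      constructed runtime to block on. A minimal sketch of the pattern, assuming
      a criterion-style benchmark (the benchmark name and body are illustrative,
      not the actual cumulus benchmarks):

      ```
      use criterion::{criterion_group, criterion_main, Criterion};

      fn transaction_throughput(c: &mut Criterion) {
          // Criterion benchmark functions are synchronous, so an async body
          // needs a tokio runtime that we block on for every iteration.
          let runtime = tokio::runtime::Runtime::new().expect("failed to create tokio runtime");

          c.bench_function("transaction_throughput", |b| {
              b.iter(|| {
                  runtime.block_on(async {
                      // ... submit transactions to the pool here ...
                  })
              })
          });
      }

      criterion_group!(benches, transaction_throughput);
      criterion_main!(benches);
      ```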
  8. Jan 09, 2024
  9. Jan 07, 2024
  10. Jan 05, 2024
    • `cumulus-primitives-parachain-inherent`: Split into two crates (#2803) · 930c1519
      Bastian Köcher authored
      This splits `cumulus-primitives-parachain-inherent` into two crates: the
      previous `cumulus-primitives-parachain-inherent` and a new
      `cumulus-client-parachain-inherent`. The idea behind this is to move the
      `create_at` logic into the client crate. This removes quite a lot of
      unrelated dependencies from the runtime std build and thus makes the
      compilation faster. On my laptop, compilation time goes down by one
      minute for `asset-hub-rococo-runtime`. I also assume that the full build
      of the entire workspace can probably be sped up a little, because more
      stuff can be compiled in parallel.
      
      ---------
      
      Co-authored-by: command-bot <>
  11. Jan 04, 2024
  12. Dec 19, 2023
  13. Dec 18, 2023
  14. Dec 13, 2023
    • Set clippy lints in workspace (requires rust 1.74) (#2390) · be8e6268
      Squirrel authored
      
      
      We currently use a bit of a hack in `.cargo/config` to make sure that
      clippy isn't too annoying, by specifying the list of lints there.
      
      There is now a stable way to define lints for a workspace. The only
      downside is that every crate has to opt into this, so there are a *few*
      files modified in this PR.
      
      Dependencies:
      
      - [x] PR that upgrades CI to use rust 1.74 is merged.
      
      ---------
      
      Co-authored-by: joe petrowski <[email protected]>
      Co-authored-by: Branislav Kontur <[email protected]>
      Co-authored-by: Liam Aharon <[email protected]>
    • Rename `ExportGenesisStateCommand` to `ExportGenesisHeadCommand` and make it respect custom genesis block builders (#2331) · 8683bbee
      Joshy Orndorff authored
      
      Closes #2326.
      
      This PR both fixes a logic bug and replaces an incorrect name.
      
      ## Bug Fix: Respecting custom genesis builder
      
      Prior to this PR, the standard logic for creating a genesis block was
      repeated inside of cumulus. This PR removes that duplicated logic and
      calls into the proper `BuildGenesisBlock` implementation.
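      Conceptually, the command now asks the node's genesis block builder for
      the block instead of reconstructing it from the chain spec on its own. A
      rough sketch of the idea, with a hypothetical stand-in trait rather than
      the real substrate/cumulus types:

      ```
      /// Hypothetical stand-in for the node's configured genesis block builder;
      /// the real `BuildGenesisBlock` trait and its signature differ.
      trait GenesisBlockBuilder {
          type Block;
          fn build_genesis_block(self) -> Result<Self::Block, String>;
      }

      /// The export command defers to the builder, so custom genesis block
      /// builders are respected and an already-initialized genesis block can
      /// simply be read back from the database.
      fn export_genesis_head<B: GenesisBlockBuilder>(builder: B) -> Result<B::Block, String> {
          // The exported head data is derived from this block; header
          // extraction is omitted in this sketch.
          builder.build_genesis_block()
      }
      ```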
      
      One consequence is that if the genesis block has already been
      initialized, it will not be re-created, but rather read from the
      database like it is for other node invocations. So you need to watch out
      for old unpurged data during the development process. Offchain tools may
      need to be updated accordingly. I've already filed
      https://github.com/paritytech/zombienet/issues/1519
      
      
      
      ## Rename: It doesn't export state. It exports head data.
      
      The name export-genesis-state was always wrong, and it's never too late
      to right a wrong. I've changed the name of the struct to
      `ExportGenesisHeadCommand`.
      
      There is still the question of what to do with individual nodes' public
      CLIs. I have updated the parachain template to a reasonable default that
      preserves compatibility with tools that will expect
      `export-genesis-state` to still work. And I've chosen not to modify the
      public CLIs of any other nodes in the repo. I'll leave it up to their
      individual owners/maintainers to decide whether that is appropriate.
      
      ---------
      
      Co-authored-by: Joshy Orndorff <[email protected]>
      Co-authored-by: Bastian Köcher <[email protected]>
      Co-authored-by: Bastian Köcher <[email protected]>
    • Approve multiple candidates with a single signature (#1191) · a84dd0db
      Alexandru Gheorghe authored
      Initial implementation for the plan discussed here: https://github.com/paritytech/polkadot-sdk/issues/701
      Built on top of https://github.com/paritytech/polkadot-sdk/pull/1178
      v0: https://github.com/paritytech/polkadot/pull/7554,
      
      ## Overall idea
      
      When approval-voting checks a candidate and is ready to advertise the
      approval, defer it, per relay-chain block, until we either have
      MAX_APPROVAL_COALESCE_COUNT candidates to sign or a candidate has stayed
      MAX_APPROVALS_COALESCE_TICKS in the queue; in both cases we sign whatever
      candidates we have available.
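      A minimal sketch of that coalescing decision (illustrative only, not the
      actual approval-voting code; the constant values, tick bookkeeping and
      signing are placeholders):

      ```
      /// Illustrative values; the real ones are parachain host configuration.
      const MAX_APPROVAL_COALESCE_COUNT: usize = 4;
      const MAX_APPROVALS_COALESCE_TICKS: u64 = 12;

      /// Hypothetical per-relay-block queue of approvals waiting to be signed.
      struct PendingApprovals {
          /// Candidate hashes that are checked and ready to be approved.
          candidates: Vec<[u8; 32]>,
          /// Tick at which the oldest queued candidate was added.
          oldest_tick: Option<u64>,
      }

      impl PendingApprovals {
          /// Sign now if enough candidates have accumulated or the oldest one
          /// has waited long enough (the "2/3 of the no-show time" safeguard
          /// is omitted here).
          fn should_sign(&self, now_tick: u64) -> bool {
              self.candidates.len() >= MAX_APPROVAL_COALESCE_COUNT
                  || self
                      .oldest_tick
                      .map_or(false, |t| now_tick.saturating_sub(t) >= MAX_APPROVALS_COALESCE_TICKS)
          }
      }
      ```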
      
      This should allow us to reduce the number of approval messages we have
      to create/send/verify. The parameters are configurable, so we should
      find some values that balance:
      
      - Security of the network: delaying the broadcasting of an approval
      shouldn't put finality at risk, and to make sure that never happens we
      won't delay sending a vote if we are past 2/3 of the no-show time.
      - Scalability of the network: MAX_APPROVAL_COALESCE_COUNT = 1 &
      MAX_APPROVALS_COALESCE_TICKS = 0 is what we have now, and we know from
      the measurements we did on versi that it bottlenecks
      approval-distribution/approval-voting when we significantly increase the
      number of validators and parachains.
      - Block storage: in case of disputes we have to import these votes on
      chain, and that increases the necessary storage by
      MAX_APPROVAL_COALESCE_COUNT * CandidateHash per vote. Given that
      disputes are not the normal mode of operation of the network and we will
      limit MAX_APPROVAL_COALESCE_COUNT to single-digit numbers, this should
      be good enough. Alternatively, we could try to create a better way to
      store this on-chain through indirection, if that's needed.
      
      ## Other fixes:
      - Fixed the fact that we were sending random assignments to
      non-validators. That was wrong because they won't do anything with them
      and won't gossip them either, since they do not have a grid topology set,
      so we would waste the random assignments.
      - Added metrics to be able to debug potential no-shows and
      mis-processing of approvals/assignments.
      
      ## TODO:
      - [x] Get feedback, that this is moving in the right direction. @ordian
      @sandreim @eskimor @burdges, let me know what you think.
      - [x] More and more testing.
      - [x]  Test in versi.
      - [x] Make MAX_APPROVAL_COALESCE_COUNT &
      MAX_APPROVAL_COALESCE_WAIT_MILLIS a parachain host configuration.
      - [x] Make sure the backwards compatibility works correctly
      - [x] Make sure this direction is compatible with other streams of work:
      https://github.com/paritytech/polkadot-sdk/issues/635 &
      https://github.com/paritytech/polkadot-sdk/issues/742
      
      
      - [x] Final versi burn-in before merging
      
      ---------
      
      Signed-off-by: Alexandru Gheorghe <[email protected]>
  15. Dec 11, 2023
  16. Dec 05, 2023
  17. Dec 01, 2023
  18. Nov 30, 2023
  19. Nov 28, 2023
    • Rework the event system of `sc-network` (#1370) · e71c484d
      Aaro Altonen authored
      This commit introduces a new concept called `NotificationService` which
      allows Polkadot protocols to communicate with the underlying
      notification protocol implementation directly, without routing events
      through `NetworkWorker`. This implies that each protocol has its own
      service which it uses to communicate with remote peers and that each
      `NotificationService` is unique with respect to the underlying
      notification protocol, meaning `NotificationService` for the transaction
      protocol can only be used to send and receive transaction-related
      notifications.
      
      The `NotificationService` concept introduces two additional benefits:
        * allow protocols to start using custom handshakes
        * allow protocols to accept/reject inbound peers
      
      Previously, the validation of inbound connections was solely the
      responsibility of `ProtocolController`. This caused issues with light
      peers and `SyncingEngine`, as `ProtocolController` would accept more
      peers than `SyncingEngine` could accept, which caused peers to have
      differing views of their own states: `SyncingEngine` would reject excess
      peers, but these rejections were not properly communicated to those
      peers, causing them to assume that they were accepted.
      
      With `NotificationService`, the local handshake is not sent to the remote
      peer if the peer is rejected, which allows it to detect that it was
      rejected.
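      The rough shape of such a per-protocol service is sketched below. The
      names and types are hypothetical simplifications, not the actual
      `sc-network` API:

      ```
      /// Hypothetical, simplified sketch of a per-protocol notification
      /// service; the real `sc-network` trait differs.
      type PeerId = u64;

      enum ValidationResult {
          /// Accept the inbound substream; our handshake is sent to the peer.
          Accept,
          /// Reject it; no handshake is sent, so the peer can detect the
          /// rejection.
          Reject,
      }

      enum NotificationEvent {
          /// A remote peer opened an inbound substream; the protocol itself
          /// decides whether to accept it.
          ValidateInboundSubstream { peer: PeerId, handshake: Vec<u8> },
          NotificationStreamOpened { peer: PeerId },
          NotificationStreamClosed { peer: PeerId },
          NotificationReceived { peer: PeerId, notification: Vec<u8> },
      }

      trait NotificationService {
          /// Set the handshake advertised for this protocol (custom handshakes).
          fn set_handshake(&mut self, handshake: Vec<u8>);
          /// Send a notification to `peer` over this protocol only.
          fn send_notification(&mut self, peer: PeerId, notification: Vec<u8>);
          /// Receive the next event for this protocol, without routing it
          /// through `NetworkWorker`.
          fn next_event(&mut self) -> Option<NotificationEvent>;
          /// Answer a `ValidateInboundSubstream` event.
          fn report_validation_result(&mut self, peer: PeerId, result: ValidationResult);
      }
      ```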
      
      This commit also deprecates the use of `NetworkEventStream` for all
      notification-related events; going forward, only DHT events are provided
      through `NetworkEventStream`. If protocols wish to follow each other's
      events, they must introduce additional abstractions, as is done for the
      GRANDPA and transactions protocols by following the syncing protocol
      through `SyncEventStream`.
      
      Fixes https://github.com/paritytech/polkadot-sdk/issues/512
      Fixes https://github.com/paritytech/polkadot-sdk/issues/514
      Fixes https://github.com/paritytech/polkadot-sdk/issues/515
      Fixes https://github.com/paritytech/polkadot-sdk/issues/554
      Fixes https://github.com/paritytech/polkadot-sdk/issues/556
      
      ---
      These changes are transferred from
      https://github.com/paritytech/substrate/pull/14197, but there are no
      functional changes compared to that PR.
      
      ---------
      
      Co-authored-by: Dmitry Markin <[email protected]>
      Co-authored-by: Alexandru Vasile <[email protected]>
    • Remove pov-recovery race condition/Improve zombienet test (#2526) · ec3a61ed
      Sebastian Kunert authored
      The test was a bit flaky on CI. 
      
      There was a race condition in the pov-recovery system. If the timing is
      bad, it can happen that a block waits for a parent that is already
      queued for import. The check whether a block has children waiting
      happens when we insert into the import queue, so we need to do an
      additional check once we receive the import notification for the parent
      block.
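      In pseudo-Rust, the fix amounts to repeating the waiting-children check
      when the import notification for a block arrives. The names and
      bookkeeping below are illustrative, not the actual pov-recovery code:

      ```
      use std::collections::{HashMap, HashSet};

      type Hash = [u8; 32];

      /// Hypothetical, simplified bookkeeping of the recovery queue.
      struct RecoveryQueue {
          /// Blocks waiting for their parent, keyed by the parent hash.
          waiting_for_parent: HashMap<Hash, Vec<Hash>>,
          /// Blocks already handed to the import queue.
          queued_for_import: HashSet<Hash>,
      }

      impl RecoveryQueue {
          fn enqueue_for_import(&mut self, block: Hash) {
              self.queued_for_import.insert(block);
              // Existing check: a child that arrived earlier may already be
              // waiting for this block.
              self.release_children_of(block);
          }

          /// Additional check: also release waiting children when the import
          /// notification arrives, covering the race where the child started
          /// waiting after its parent was already queued for import.
          fn on_import_notification(&mut self, imported: Hash) {
              self.queued_for_import.remove(&imported);
              self.release_children_of(imported);
          }

          fn release_children_of(&mut self, parent: Hash) {
              if let Some(children) = self.waiting_for_parent.remove(&parent) {
                  for child in children {
                      self.enqueue_for_import(child);
                  }
              }
          }
      }
      ```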
      
      The second issue is that `alice` was missing `--in-peers 0` and
      `--out-peers 0`, so alice was sometimes still fetching blocks via sync
      and the assertion on the logs in zombienet would fail.
      
      There is another potential issue that I saw once locally: we have a
      pov-recovery queue that deliberately fails from time to time, to check
      that the retry mechanism does what it should. We now make sure that the
      same candidate is never failed twice, so the tests become more
      predictable.
    • polkadot: remove grandpa pause support (#2511) · b0b4451f
      André Silva authored
      This was never used and we probably don't need it anyway.
    • polkadot: disable block authoring backoff on production networks (#2510) · 58a1f9c5
      André Silva authored
      Currently the polkadot node will back off from block authoring if
      finality starts lagging. This PR disables this mechanism on production
      networks (polkadot and kusama) and adds a flag to optionally force
      enabling it.
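      A sketch of the resulting selection logic; the function, flag and chain
      identifiers are placeholders, not the real CLI or service code:

      ```
      /// Illustrative only: decide whether the block-authoring backoff on
      /// finality lag should be enabled for a given chain.
      fn authoring_backoff_enabled(chain_id: &str, force_backoff: bool) -> bool {
          let is_production = matches!(chain_id, "polkadot" | "kusama");
          // Backoff stays enabled on test networks; on production networks it
          // is only enabled when explicitly forced via the new flag.
          !is_production || force_backoff
      }
      ```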
  20. Nov 24, 2023
  21. Nov 23, 2023
  22. Nov 22, 2023
  23. Nov 21, 2023
  24. Nov 17, 2023
  25. Nov 14, 2023
  26. Nov 09, 2023
    • Add descriptions to all published crates (#2029) · 48ea86f0
      Oliver Tale-Yazdi authored
      
      
      Missing descriptions (47):  
      
      - [x] `cumulus/client/collator/Cargo.toml`
      - [x] `cumulus/client/relay-chain-inprocess-interface/Cargo.toml`
      - [x] `cumulus/client/cli/Cargo.toml`
      - [x] `cumulus/client/service/Cargo.toml`
      - [x] `cumulus/client/relay-chain-rpc-interface/Cargo.toml`
      - [x] `cumulus/client/relay-chain-interface/Cargo.toml`
      - [x] `cumulus/client/relay-chain-minimal-node/Cargo.toml`
      - [x] `cumulus/parachains/pallets/parachain-info/Cargo.toml`
      - [x] `cumulus/parachains/pallets/ping/Cargo.toml`
      - [x] `cumulus/primitives/utility/Cargo.toml`
      - [x] `cumulus/primitives/aura/Cargo.toml`
      - [x] `cumulus/primitives/core/Cargo.toml`
      - [x] `cumulus/primitives/parachain-inherent/Cargo.toml`
      - [x] `cumulus/test/relay-sproof-builder/Cargo.toml`
      - [x] `cumulus/pallets/xcmp-queue/Cargo.toml`
      - [x] `cumulus/pallets/dmp-queue/Cargo.toml`
      - [x] `cumulus/pallets/xcm/Cargo.toml`
      - [x] `polkadot/erasure-coding/Cargo.toml`
      - [x] `polkadot/statement-table/Cargo.toml`
      - [x] `polkadot/primitives/Cargo.toml`
      - [x] `polkadot/rpc/Cargo.toml`
      - [x] `polkadot/node/service/Cargo.toml`
      - [x] `polkadot/node/core/parachains-inherent/Cargo.toml`
      - [x] `polkadot/node/core/approval-voting/Cargo.toml`
      - [x] `polkadot/node/core/dispute-coordinator/Cargo.toml`
      - [x] `polkadot/node/core/av-store/Cargo.toml`
      - [x] `polkadot/node/core/chain-api/Cargo.toml`
      - [x] `polkadot/node/core/prospective-parachains/Cargo.toml`
      - [x] `polkadot/node/core/backing/Cargo.toml`
      - [x] `polkadot/node/core/provisioner/Cargo.toml`
      - [x] `polkadot/node/core/runtime-api/Cargo.toml`
      - [x] `polkadot/node/core/bitfield-signing/Cargo.toml`
      - [x] `polkadot/node/network/dispute-distribution/Cargo.toml`
      - [x] `polkadot/node/network/bridge/Cargo.toml`
      - [x] `polkadot/node/network/collator-protocol/Cargo.toml`
      - [x] `polkadot/node/network/approval-distribution/Cargo.toml`
      - [x] `polkadot/node/network/availability-distribution/Cargo.toml`
      - [x] `polkadot/node/network/bitfield-distribution/Cargo.toml`
      - [x] `polkadot/node/network/gossip-support/Cargo.toml`
      - [x] `polkadot/node/network/availability-recovery/Cargo.toml`
      - [x] `polkadot/node/collation-generation/Cargo.toml`
      - [x] `polkadot/node/overseer/Cargo.toml`
      - [x] `polkadot/runtime/parachains/Cargo.toml`
      - [x] `polkadot/runtime/common/slot_range_helper/Cargo.toml`
      - [x] `polkadot/runtime/metrics/Cargo.toml`
      - [x] `polkadot/xcm/pallet-xcm-benchmarks/Cargo.toml`
      - [x] `polkadot/utils/generate-bags/Cargo.toml`
      - [x]  `substrate/bin/minimal/runtime/Cargo.toml`
      
      ---------
      
      Signed-off-by: Oliver Tale-Yazdi <[email protected]>
      Signed-off-by: alindima <[email protected]>
      Co-authored-by: ordian <[email protected]>
      Co-authored-by: Tsvetomir Dimitrov <[email protected]>
      Co-authored-by: Marcin S <[email protected]>
      Co-authored-by: alindima <[email protected]>
      Co-authored-by: Sebastian Kunert <[email protected]>
      Co-authored-by: Dmitry Markin <[email protected]>
      Co-authored-by: joe petrowski <[email protected]>
      Co-authored-by: Liam Aharon <[email protected]>
  27. Nov 08, 2023
    • Add prospective-parachain subsystem to minimal-relay-node + QoL improvements (#2223) · 69494ea7
      Sebastian Kunert authored
      This PR contains some fixes and cleanups for parachain nodes:
      
      1. When using async backing, the node no longer complains about being
      unable to reach the prospective-parachain subsystem.
      2. Parachain warp sync now informs users that the finalized para block
      has been retrieved.
      ```
      2023-11-08 13:24:42 [Parachain] 🎉 Received finalized parachain header #5747719 (0xa0aa…674b) from the relay chain.
      ```
      3. When a user supplied an invalid `--relay-chain-rpc-url`, we crashed
      with a very verbose message. Removed the `expect` and improved the
      error message.
      ```
      2023-11-08 13:57:56 [Parachain] No valid RPC url found. Stopping RPC worker.
      2023-11-08 13:57:56 [Parachain] Essential task `relay-chain-rpc-worker` failed. Shutting down service.
      Error: Service(Application(WorkerCommunicationError("RPC worker channel closed. This can hint and connectivity issues with the supplied RPC endpoints. Message: oneshot canceled")))
      ```
  28. Nov 07, 2023
  29. Nov 06, 2023