  1. Feb 22, 2024
  2. Feb 20, 2024
    • rpc server: make possible to disable/enable batch requests (#3364) · fee810a5
      Niklas Adolfsson authored
      The rationale behind this is that it may be useful for some users to
      disable RPC batch requests entirely, or to limit them by length instead
      of by the total size in bytes of the batch.
      
      This PR adds two new CLI options:
      
      ```
      --rpc-disable-batch-requests - disable batch requests on the server
      --rpc-max-batch-request-len <LEN> - limit batches to LEN on the server.
      ```
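      A minimal sketch of how such a policy could look on the server side (the
      `BatchPolicy`/`check_batch` names are hypothetical, not the real
      jsonrpsee configuration types the node maps these flags onto):
      
      ```rust
      /// Hypothetical mirror of the two CLI flags.
      enum BatchPolicy {
          /// `--rpc-disable-batch-requests`
          Disabled,
          /// `--rpc-max-batch-request-len <LEN>`
          MaxLen(usize),
          /// Neither flag given.
          Unlimited,
      }
      
      /// Decide whether an incoming batch of `len` requests is acceptable.
      fn check_batch(policy: &BatchPolicy, len: usize) -> Result<(), &'static str> {
          match policy {
              BatchPolicy::Disabled => Err("batch requests are disabled on this server"),
              BatchPolicy::MaxLen(max) if len > *max => Err("batch exceeds the length limit"),
              _ => Ok(()),
          }
      }
      
      fn main() {
          assert!(check_batch(&BatchPolicy::Unlimited, 100).is_ok());
          assert!(check_batch(&BatchPolicy::Disabled, 1).is_err());
          assert!(check_batch(&BatchPolicy::MaxLen(16), 10).is_ok());
          assert!(check_batch(&BatchPolicy::MaxLen(16), 100).is_err());
      }
      ```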
    • Lift dependencies to the workspace (Part 2/x) (#3366) · e89d0fca
      Oliver Tale-Yazdi authored
      
      
      Lifting some more dependencies to the workspace; just the most-often
      updated ones for now. The change can be reproduced locally:
      
      ```sh
      # First you can check if there would be semver incompatible bumps (looks good in this case):
      $ zepter transpose dependency lift-to-workspace --ignore-errors syn quote thiserror "regex:^serde.*"
      
      # Then apply the changes:
      $ zepter transpose dependency lift-to-workspace --version-resolver=highest syn quote thiserror "regex:^serde.*" --fix
      
      # And format the changes:
      $ taplo format --config .config/taplo.toml
      ```
      
      ---------
      
      Signed-off-by: Oliver Tale-Yazdi <[email protected]>
  3. Feb 19, 2024
  4. Feb 16, 2024
  5. Feb 14, 2024
    • rpc: bump jsonrpsee v0.22 and fix race in `rpc v2 chain_head` (#3230) · c7c4fe01
      Niklas Adolfsson authored
      Close #2992 
      
      Breaking changes:
      - rpc server grafana metric `substrate_rpc_requests_started` is removed
      (not possible to implement anymore)
      - rpc server grafana metric `substrate_rpc_requests_finished` is removed
      (not possible to implement anymore)
      - if rpc server ws ping/pongs are not ACKed within 30 seconds more than
      three times, the connection will be closed (see the sketch below)
      
      Added
      - rpc server grafana metric `substrate_rpc_sessions_time` is added to
      get the duration for each websocket session
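      As a rough illustration of the new ping/pong close policy (the struct
      and the reset-on-pong behaviour are assumptions for the sketch, not the
      jsonrpsee internals):
      
      ```rust
      /// Close the connection once a ping goes unacknowledged within its 30s
      /// window more than three times.
      struct PingPongPolicy {
          missed: u32,
      }
      
      const MAX_MISSED_PONGS: u32 = 3;
      
      impl PingPongPolicy {
          /// Called when a ping was not ACKed in time; returns `true` if the
          /// connection should be closed.
          fn on_missed_pong(&mut self) -> bool {
              self.missed += 1;
              self.missed > MAX_MISSED_PONGS
          }
      
          /// Called when a pong arrives in time (assumed to reset the counter).
          fn on_pong(&mut self) {
              self.missed = 0;
          }
      }
      
      fn main() {
          let mut policy = PingPongPolicy { missed: 0 };
          policy.on_pong();
          assert!(!policy.on_missed_pong()); // 1st miss
          assert!(!policy.on_missed_pong()); // 2nd miss
          assert!(!policy.on_missed_pong()); // 3rd miss
          assert!(policy.on_missed_pong()); // more than three misses: close
      }
      ```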
  6. Feb 13, 2024
    • Parachains-Aura: Only produce once per slot (#3308) · aa68ea58
      Bastian Köcher authored
      Given how block production is driven for parachains right now, enabling
      async backing would make us produce two blocks per slot. Until we have a
      proper collator implementation, the "hack" is to prevent the production
      of multiple blocks per slot.
      
      
      Closes: https://github.com/paritytech/polkadot-sdk/issues/3282
    • Use dynamic aura slot duration in lookahead collator (#3211) · ec6bf5d0
      s0me0ne-unkn0wn authored
      It's a follow-up of #2949. It enables the lookahead collator to
      dynamically adjust the aura slot duration, which may change during a
      runtime upgrade. It also addresses a couple of issues with time
      constants that I missed in the original PR.
      
      Good news: it works. The parachain successfully switches from sync
      backing with 12s slots to async backing with 6s slots.
      
      Bad news: during the transitional period of a single block in which the
      actual runtime upgrade is performed, it still gets the old slot duration
      of 12s (as it reads it from the best block), resulting in a runtime panic
      (logs follow, with a sketch of the slot arithmetic after them). That
      doesn't affect the following block production of the parachain. Ideas on
      how to improve the situation are appreciated.
      
      <details>
      
      ```
      2024-02-05 12:59:36.373  INFO tokio-runtime-worker sc_basic_authorship::basic_authorship: [Parachain] 🙌 Starting consensus session on top of parent 0x6fd2d8f904f12c22531bfabf77b16dc84a6a29e45d9ae358aa6547fbf3f0438b    
      2024-02-05 12:59:36.373 ERROR tokio-runtime-worker runtime: [Parachain] panicked at /home/s0me0ne/wrk/parity/polkadot-sdk/cumulus/pallets/aura-ext/src/consensus_hook.rs:69:9:
      assertion `left == right` failed: slot number mismatch
        left: Slot(142261198)
       right: Slot(284522396)    
      2024-02-05 12:59:36.373  WARN tokio-runtime-worker sp_state_machine::overlayed_changes::changeset: [Parachain] 1 storage transactions are left open by the runtime. Those will be rolled back.    
      2024-02-05 12:59:36.373  WARN tokio-runtime-worker sp_state_machine::overlayed_changes::changeset: [Parachain] 1 storage transactions are left open by the runtime. Those will be rolled back.    
      2024-02-05 12:59:36.373  WARN tokio-runtime-worker basic-authorship: [Parachain]  Inherent extrinsic returned unexpected error: Error at calling runtime api: Execution failed: Execution aborted due to trap: wasm trap: wasm `unreachable` instruction executed
      WASM backtrace:
      error while executing at wasm backtrace:
          0: 0x4e4a3b - <unknown>!rust_begin_unwind
          1: 0x46cf57 - <unknown>!core::panicking::panic_fmt::h3c280dba88683724
          2: 0x46d238 - <unknown>!core::panicking::assert_failed_inner::hebac5970933beb4d
          3: 0x3d00fc - <unknown>!core::panicking::assert_failed::h640a47e2fb5dfb4b
          4: 0xd0db3 - <unknown>!frame_support::storage::transactional::with_transaction::hcbc31515f81b2ee1
          5: 0x34d654 - <unknown>!<cumulus_pallet_parachain_system::pallet::Call<T> as frame_support::traits::dispatch::UnfilteredDispatchable>::dispatch_bypass_filter::{{closure}}::hb7c2c9a11fa88301
          6: 0x3547db - <unknown>!environmental::local_key::LocalKey<T>::with::h783f2605ae27d6d3
          7: 0x7f454 - <unknown>!<asset_hub_rococo_runtime::RuntimeCall as frame_support::traits::dispatch::UnfilteredDispatchable>::dispatch_bypass_filter::h5e11a01ab97c06c7
          8: 0x7f237 - <unknown>!<asset_hub_rococo_runtime::RuntimeCall as sp_runtime::traits::Dispatchable>::dispatch::h7f8ae4a8fede71ca
          9: 0x26a0f3 - <unknown>!frame_executive::Executive<System,Block,Context,UnsignedValidator,AllPalletsWithSystem,COnRuntimeUpgrade>::apply_extrinsic::h75e524ff34738391
         10: 0x282211 - <unknown>!BlockBuilder_apply_extrinsic. Dropping.    
      2024-02-05 12:59:36.374 ERROR tokio-runtime-worker runtime: [Parachain] panicked at /home/s0me0ne/wrk/parity/polkadot-sdk/substrate/frame/aura/src/lib.rs:416:9:
      assertion `left == right` failed: Timestamp slot must match `CurrentSlot`
        left: Slot(142261198)
       right: Slot(284522396)    
      2024-02-05 12:59:36.374  WARN tokio-runtime-worker sp_state_machine::overlayed_changes::changeset: [Parachain] 1 storage transactions are left open by the runtime. Those will be rolled back.    
      2024-02-05 12:59:36.374  WARN tokio-runtime-worker sp_state_machine::overlayed_changes::changeset: [Parachain] 1 storage transactions are left open by the runtime. Those will be rolled back.    
      2024-02-05 12:59:36.374  WARN tokio-runtime-worker basic-authorship: [Parachain] 
      
       Inherent extrinsic returned unexpected error: Error at calling runtime api: Execution failed: Execution aborted due to trap: wasm trap: wasm `unreachable` instruction executed
      WASM backtrace:
      error while executing at wasm backtrace:
          0: 0x4e4a3b - <unknown>!rust_begin_unwind
          1: 0x46cf57 - <unknown>!core::panicking::panic_fmt::h3c280dba88683724
          2: 0x46d238 - <unknown>!core::panicking::assert_failed_inner::hebac5970933beb4d
          3: 0x3d00fc - <unknown>!core::panicking::assert_failed::h640a47e2fb5dfb4b
          4: 0x9ece6 - <unknown>!frame_support::storage::transactional::with_transaction::h26f75cb9f9462088
          5: 0x356d7e - <unknown>!environmental::local_key::LocalKey<T>::with::hbcf2d4e17b48fdb5
          6: 0x7f507 - <unknown>!<asset_hub_rococo_runtime::RuntimeCall as frame_support::traits::dispatch::UnfilteredDispatchable>::dispatch_bypass_filter::h5e11a01ab97c06c7
          7: 0x7f237 - <unknown>!<asset_hub_rococo_runtime::RuntimeCall as sp_runtime::traits::Dispatchable>::dispatch::h7f8ae4a8fede71ca
          8: 0x26a0f3 - <unknown>!frame_executive::Executive<System,Block,Context,UnsignedValidator,AllPalletsWithSystem,COnRuntimeUpgrade>::apply_extrinsic::h75e524ff34738391
          9: 0x282211 - <unknown>!BlockBuilder_apply_extrinsic. Dropping.    
      2024-02-05 12:59:36.374 DEBUG tokio-runtime-worker runtime::xcmp-queue-migration: [Parachain] Lazy migration finished: item gone    
      2024-02-05 12:59:36.374 ERROR tokio-runtime-worker runtime: [Parachain] panicked at /home/s0me0ne/wrk/parity/polkadot-sdk/cumulus/pallets/parachain-system/src/lib.rs:265:18:
      set_validation_data inherent needs to be present in every block!    
      2024-02-05 12:59:36.374 ERROR tokio-runtime-worker aura::cumulus: [Parachain] err=Error { inner: Proposing
      
      Caused by:
          0: Error at calling runtime api: Execution failed: Execution aborted due to trap: wasm trap: wasm `unreachable` instruction executed
             WASM backtrace:
             error while executing at wasm backtrace:
                 0: 0x4e4a3b - <unknown>!rust_begin_unwind
                 1: 0x46cf57 - <unknown>!core::panicking::panic_fmt::h3c280dba88683724
                 2: 0x46da8b - <unknown>!core::option::expect_failed::hdf18d99c3adabca7
                 3: 0x2134cb - <unknown>!<cumulus_pallet_parachain_system::pallet::Pallet<T> as frame_support::traits::hooks::OnFinalize<<<<T as frame_system::pallet::Config>::Block as sp_runtime::traits::HeaderProvider>::HeaderT as sp_runtime::traits::Header>::Number>>::on_finalize::hf98aac39802896ba
                 4: 0x26a9d6 - <unknown>!frame_executive::Executive<System,Block,Context,UnsignedValidator,AllPalletsWithSystem,COnRuntimeUpgrade>::idle_and_finalize_hook::h32775c0df0749d92
                 5: 0x26ad9f - <unknown>!frame_executive::Executive<System,Block,Context,UnsignedValidator,AllPalletsWithSystem,COnRuntimeUpgrade>::finalize_block::h15e5a1a6b9ca8032
                 6: 0x2822bd - <unknown>!BlockBuilder_finalize_block
          1: Execution failed: Execution aborted due to trap: wasm trap: wasm `unreachable` instruction executed
             WASM backtrace:
             error while executing at wasm backtrace:
                 0: 0x4e4a3b - <unknown>!rust_begin_unwind
                 1: 0x46cf57 - <unknown>!core::panicking::panic_fmt::h3c280dba88683724
                 2: 0x46da8b - <unknown>!core::option::expect_failed::hdf18d99c3adabca7
                 3: 0x2134cb - <unknown>!<cumulus_pallet_parachain_system::pallet::Pallet<T> as frame_support::traits::hooks::OnFinalize<<<<T as frame_system::pallet::Config>::Block as sp_runtime::traits::HeaderProvider>::HeaderT as sp_runtime::traits::Header>::Number>>::on_finalize::hf98aac39802896ba
                 4: 0x26a9d6 - <unknown>!frame_executive::Executive<System,Block,Context,UnsignedValidator,AllPalletsWithSystem,COnRuntimeUpgrade>::idle_and_finalize_hook::h32775c0df0749d92
                 5: 0x26ad9f - <unknown>!frame_executive::Executive<System,Block,Context,UnsignedValidator,AllPalletsWithSystem,COnRuntimeUpgrade>::finalize_block::h15e5a1a6b9ca8032
                 6: 0x2822bd - <unknown>!BlockBuilder_finalize_block }
      ```
      
      </details>
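      The mismatch in the panic above is the expected factor of two: a slot
      index is just the timestamp divided by the slot duration, so halving the
      duration from 12s to 6s doubles the slot number for the same wall-clock
      time. A small sketch of that arithmetic (illustrative, not runtime code):
      
      ```rust
      /// A slot index is the current timestamp divided by the slot duration.
      fn slot_from_timestamp(timestamp_ms: u64, slot_duration_ms: u64) -> u64 {
          timestamp_ms / slot_duration_ms
      }
      
      fn main() {
          // Roughly the timestamp from the log above (2024-02-05).
          let timestamp_ms = 142_261_198u64 * 12_000;
      
          let old_slot = slot_from_timestamp(timestamp_ms, 12_000); // sync backing, 12s slots
          let new_slot = slot_from_timestamp(timestamp_ms, 6_000); // async backing, 6s slots
      
          assert_eq!(old_slot, 142_261_198); // `left` in the panic
          assert_eq!(new_slot, 284_522_396); // `right` in the panic
          assert_eq!(new_slot, 2 * old_slot);
      }
      ```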
      
      ---------
      
      Co-authored-by: Bastian Köcher <[email protected]>
  7. Feb 12, 2024
  8. Jan 29, 2024
    • Do not run unneeded subsystems on collator and its alongside node (#3061) · 3e8139e7
      s0me0ne-unkn0wn authored
      Currently, collators and their alongside nodes spin up a full-scale
      overseer running a bunch of subsystems that are not needed if the node
      is not a validator. That was considered to be harmless; however, we've
      got problems with unused subsystems getting stalled for a reason not
      currently known, resulting in the overseer exiting and bringing down the
      whole node.
      
      This PR aims to only run needed subsystems on such nodes, replacing the
      rest with `DummySubsystem`.
      
      It also enables collator-optimized availability recovery subsystem
      implementation.
      
      Partially solves #1730.
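      A minimal illustration of the pattern (a generic trait and a no-op
      stand-in; the real overseer and `DummySubsystem` types are more
      involved):
      
      ```rust
      /// Illustrative subsystem interface.
      trait Subsystem {
          fn handle(&mut self, msg: &str);
      }
      
      /// A real subsystem that does work on a validator.
      struct CandidateValidation;
      impl Subsystem for CandidateValidation {
          fn handle(&mut self, msg: &str) {
              println!("validating: {msg}");
          }
      }
      
      /// No-op stand-in used when the node is not a validator, so an unused
      /// subsystem can never stall and take the overseer down with it.
      struct Dummy;
      impl Subsystem for Dummy {
          fn handle(&mut self, _msg: &str) {}
      }
      
      fn build_subsystem(is_validator: bool) -> Box<dyn Subsystem> {
          if is_validator {
              Box::new(CandidateValidation)
          } else {
              Box::new(Dummy)
          }
      }
      
      fn main() {
          // Collator / alongside node: the dummy swallows messages and does nothing.
          let mut subsystem = build_subsystem(false);
          subsystem.handle("candidate X");
      }
      ```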
  9. Jan 26, 2024
  10. Jan 23, 2024
    • rpc: backpressured RPC server (bump jsonrpsee 0.20) (#1313) · e16ef086
      Niklas Adolfsson authored
      This is a rather big change in jsonrpsee; the major things in this bump
      are:
      - Server backpressure (the subscription impls are modified to deal with
      that)
      - Allow custom error types / return types (remove `jsonrpsee::core::Error`
      and `jsonrpsee::core::CallError`)
      - Bug fixes (graceful shutdown in particular, not used by substrate
      anyway)
      - Less dependencies for the clients in particular
      - Return type requires Clone in method call responses
      - Moved to tokio channels
      - Async subscription API (not used in this PR)
      
      Major changes in this PR:
      - The subscriptions are now bounded and if a subscription can't keep up
      with the server it is dropped
      - CLI: add a parameter to configure the jsonrpc server's bounded message
      buffer (default is 64)
      - Add our own subscription helper to deal with the unbounded streams in
      substrate
      
      The most important thing to review in this PR is the added helper
      functions in `substrate/client/rpc/src/utils.rs`; the rest is pretty
      much a chore.
      
      Regarding the "bounded buffer limit" it may cause the server to handle
      the JSON-RPC calls
      slower than before.
      
      The message size limit is bounded by "--rpc-response-size" thus "by
      default 10MB * 64 = 640MB"
      but the subscription message size is not covered by this limit and could
      be capped as well.
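      A self-contained sketch of the "bounded buffer, drop the slow
      subscriber" behaviour using a std bounded channel (illustrative only;
      the actual helpers live in `substrate/client/rpc/src/utils.rs`):
      
      ```rust
      use std::sync::mpsc::{sync_channel, TrySendError};
      
      fn main() {
          // Bounded buffer of 64 messages, matching the new CLI default.
          // With `--rpc-response-size` at its 10MB default, that caps the
          // buffered method responses at roughly 10MB * 64 = 640MB.
          let (tx, rx) = sync_channel::<String>(64);
      
          // Producer side: if the subscriber cannot keep up and the buffer is
          // full, the subscription is dropped instead of buffering unboundedly.
          for i in 0..100 {
              match tx.try_send(format!("notification {i}")) {
                  Ok(()) => {}
                  Err(TrySendError::Full(_)) => {
                      // In the real server the subscription is closed here.
                      println!("subscriber too slow after {i} messages; dropping subscription");
                      break;
                  }
                  Err(TrySendError::Disconnected(_)) => break,
              }
          }
      
          // Consumer side: drains whatever made it into the bounded buffer.
          while let Ok(msg) = rx.try_recv() {
              let _ = msg;
          }
      }
      ```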
      
      Hopefully the last release prior to 1.0, sorry in advance for a big PR
      
      Previous attempt: https://github.com/paritytech/substrate/pull/13992
      
      Resolves https://github.com/paritytech/polkadot-sdk/issues/748, resolves
      https://github.com/paritytech/polkadot-sdk/issues/627
  11. Jan 22, 2024
  12. Jan 21, 2024
  13. Jan 18, 2024
  14. Jan 15, 2024
  15. Jan 11, 2024
    • Cumulus test service cleanup (#2887) · c93f5aba
      Sebastian Kunert authored
      closes #2567 
      
      Followup for https://github.com/paritytech/polkadot-sdk/pull/2331
      
      This PR contains multiple internal cleanups:
      
      1. This gets rid of the functionality in `generate_genesis_block` which
      was only used in one benchmark
      2. Fixed `transaction_pool` and `transaction_throughput` benchmarks
      failing since they require a tokio runtime now.
      3. Removed `parachain_id` CLI option from the test parachain
      4. Removed `expect` call from `RuntimeResolver`
  16. Jan 09, 2024
  17. Jan 07, 2024
  18. Jan 05, 2024
    • `cumulus-primitives-parachain-inherent`: Split into two crates (#2803) · 930c1519
      Bastian Köcher authored
      This splits `cumulus-primitives-parachain-inherent` into two crates: the
      previous `cumulus-primitives-parachain-inherent` and a new
      `cumulus-client-parachain-inherent`. The idea behind this is to move the
      `create_at` logic into the client crate. This removes quite a lot of
      unrelated dependencies from the runtime std build and thus makes the
      compilation faster. On my laptop, compilation time goes down by one
      minute for `asset-hub-rococo-runtime`. I also assume that the full build
      of the entire workspace can probably be sped up a little bit, because
      more stuff can be compiled in parallel.
      
      ---------
      
      Co-authored-by: command-bot <>
  19. Jan 04, 2024
  20. Dec 19, 2023
  21. Dec 18, 2023
  22. Dec 13, 2023
    • Set clippy lints in workspace (requires rust 1.74) (#2390) · be8e6268
      Squirrel authored
      
      
      We currently use a bit of a hack in `.cargo/config` to make sure that
      clippy isn't too annoying by specifying the list of lints.
      
      There is now a stable way to define lints for a workspace. The only
      downside is that every crate has to opt into this, so there are a *few*
      files modified in this PR.
      
      Dependencies:
      
      - [x] PR that upgrades CI to use rust 1.74 is merged.
      
      ---------
      
      Co-authored-by: joe petrowski <[email protected]>
      Co-authored-by: Branislav Kontur <[email protected]>
      Co-authored-by: Liam Aharon <[email protected]>
    • Rename `ExportGenesisStateCommand` to `ExportGenesisHeadCommand` and make it... · 8683bbee
      Joshy Orndorff authored
      Rename `ExportGenesisStateCommand` to `ExportGenesisHeadCommand` and make it respect custom genesis block builders (#2331)
      
      Closes #2326.
      
      This PR both fixes a logic bug and replaces an incorrect name.
      
      ## Bug Fix: Respecting custom genesis builder
      
      Prior to this PR the standard logic for creating a genesis block was
      repeated inside of cumulus. This PR removes that duplicated logic, and
      calls into the proper `BuildGenesisBlock` implementation.
      
      One consequence is that if the genesis block has already been
      initialized, it will not be re-created, but rather read from the
      database like it is for other node invocations. So you need to watch out
      for old unpurged data during the development process. Offchain tools may
      need to be updated accordingly. I've already filed
      https://github.com/paritytech/zombienet/issues/1519
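      A tiny sketch of the resulting "reuse if already initialized" behaviour
      (the `Db` type and closure are placeholders, not the node's storage API):
      
      ```rust
      use std::collections::HashMap;
      
      type Block = String;
      
      /// Illustrative stand-in for the node database.
      struct Db {
          blocks: HashMap<u64, Block>,
      }
      
      /// Return the genesis block, reusing what is already in the database
      /// instead of unconditionally rebuilding it.
      fn genesis_block(db: &mut Db, build: impl FnOnce() -> Block) -> Block {
          db.blocks.entry(0).or_insert_with(build).clone()
      }
      
      fn main() {
          let mut db = Db { blocks: HashMap::new() };
          let first = genesis_block(&mut db, || "genesis built from chain spec".to_string());
          // A second invocation reads the stored block, so stale data from an
          // earlier, unpurged development run is not silently replaced.
          let second = genesis_block(&mut db, || "rebuilt genesis (never used)".to_string());
          assert_eq!(first, second);
      }
      ```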
      
      
      
      ## Rename: It doesn't export state. It exports head data.
      
      The name export-genesis-state was always wrong, and it's never too late
      to right a wrong. I've changed the name of the struct to
      `ExportGenesisHeadCommand`.
      
      There is still the question of what to do with individual nodes' public
      CLIs. I have updated the parachain template to a reasonable default that
      preserves compatibility with tools that will expect
      `export-genesis-state` to still work. And I've chosen not to modify the
      public CLIs of any other nodes in the repo. I'll leave it up to their
      individual owners/maintainers to decide whether that is appropriate.
      
      ---------
      
      Co-authored-by: Joshy Orndorff <[email protected]>
      Co-authored-by: Bastian Köcher <[email protected]>
      Co-authored-by: Bastian Köcher <[email protected]>
    • Approve multiple candidates with a single signature (#1191) · a84dd0db
      Alexandru Gheorghe authored
      Initial implementation for the plan discussed here: https://github.com/paritytech/polkadot-sdk/issues/701
      Built on top of https://github.com/paritytech/polkadot-sdk/pull/1178
      v0: https://github.com/paritytech/polkadot/pull/7554,
      
      ## Overall idea
      
      When approval-voting checks a candidate and is ready to advertise the
      approval, defer it in a per-relay-chain-block queue until we either have
      MAX_APPROVAL_COALESCE_COUNT candidates to sign or a candidate has stayed
      MAX_APPROVALS_COALESCE_TICKS in the queue; in both cases we sign whatever
      candidates we have available.
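      A self-contained sketch of the coalescing idea (data structures and
      constants are simplified placeholders, not the approval-voting
      implementation, and the 2/3 no-show-time rule below is left out):
      
      ```rust
      const MAX_APPROVAL_COALESCE_COUNT: usize = 4;
      const MAX_APPROVALS_COALESCE_TICKS: u64 = 12;
      
      /// Pending approvals for one relay-chain block.
      struct PendingApprovals {
          candidates: Vec<u32>,     // stand-in for candidate hashes
          oldest_tick: Option<u64>, // tick at which the first pending candidate arrived
      }
      
      impl PendingApprovals {
          fn new() -> Self {
              Self { candidates: Vec::new(), oldest_tick: None }
          }
      
          /// Queue a checked candidate; returns a batch to sign if a threshold is hit.
          fn push(&mut self, candidate: u32, now_tick: u64) -> Option<Vec<u32>> {
              self.oldest_tick.get_or_insert(now_tick);
              self.candidates.push(candidate);
              self.flush_if_due(now_tick)
          }
      
          /// Also called on every tick, so a lone candidate is not delayed forever.
          fn flush_if_due(&mut self, now_tick: u64) -> Option<Vec<u32>> {
              let full = self.candidates.len() >= MAX_APPROVAL_COALESCE_COUNT;
              let stale = self
                  .oldest_tick
                  .map_or(false, |t| now_tick - t >= MAX_APPROVALS_COALESCE_TICKS);
              if full || stale {
                  self.oldest_tick = None;
                  Some(std::mem::take(&mut self.candidates))
              } else {
                  None
              }
          }
      }
      
      fn main() {
          let mut pending = PendingApprovals::new();
          assert!(pending.push(1, 0).is_none());
          assert!(pending.push(2, 1).is_none());
          assert!(pending.push(3, 2).is_none());
          // Fourth candidate hits MAX_APPROVAL_COALESCE_COUNT: sign all four at once.
          assert_eq!(pending.push(4, 3), Some(vec![1, 2, 3, 4]));
          // A single candidate is still signed once it has waited long enough.
          assert!(pending.push(5, 10).is_none());
          assert_eq!(pending.flush_if_due(22), Some(vec![5]));
      }
      ```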
      
      This should allow us to reduce the number of approvals messages we have
      to create/send/verify. The parameters are configurable, so we should
      find some values that balance:
      
      - Security of the network: Delaying the broadcasting of an approval
      shouldn't put finality at risk, and to make sure that never happens we
      won't delay sending a vote if we are past 2/3 of the no-show time.
      - Scalability of the network: MAX_APPROVAL_COALESCE_COUNT = 1 &
      MAX_APPROVALS_COALESCE_TICKS = 0 is what we have now, and we know from
      the measurements we did on versi that it bottlenecks
      approval-distribution/approval-voting when we significantly increase the
      number of validators and parachains.
      - Block storage: In case of disputes we have to import these votes on
      chain, and that increases the necessary storage by
      MAX_APPROVAL_COALESCE_COUNT * CandidateHash per vote. Given that
      disputes are not the normal mode of network operation and we will limit
      MAX_APPROVAL_COALESCE_COUNT to single-digit numbers, this should be good
      enough. Alternatively, we could try to create a better way to store this
      on-chain through indirection, if that's needed.
      
      ## Other fixes:
      - Fixed the fact that we were sending random assignments to
      non-validators; that was wrong because they won't do anything with them
      and won't gossip them either, since they do not have a grid topology
      set, so we would waste the random assignments.
      - Added metrics to be able to debug potential no-shows and
      mis-processing of approvals/assignments.
      
      ## TODO:
      - [x] Get feedback, that this is moving in the right direction. @ordian
      @sandreim @eskimor @burdges, let me know what you think.
      - [x] More and more testing.
      - [x]  Test in versi.
      - [x] Make MAX_APPROVAL_COALESCE_COUNT &
      MAX_APPROVAL_COALESCE_WAIT_MILLIS a parachain host configuration.
      - [x] Make sure the backwards compatibility works correctly
      - [x] Make sure this direction is compatible with other streams of work:
      https://github.com/paritytech/polkadot-sdk/issues/635 &
      https://github.com/paritytech/polkadot-sdk/issues/742
      
      
      - [x] Final versi burn-in before merging
      
      ---------
      
      Signed-off-by: Alexandru Gheorghe <[email protected]>
  23. Dec 11, 2023
  24. Dec 05, 2023
  25. Dec 01, 2023
  26. Nov 30, 2023
  27. Nov 28, 2023
    • Rework the event system of `sc-network` (#1370) · e71c484d
      Aaro Altonen authored
      This commit introduces a new concept called `NotificationService` which
      allows Polkadot protocols to communicate with the underlying
      notification protocol implementation directly, without routing events
      through `NetworkWorker`. This implies that each protocol has its own
      service which it uses to communicate with remote peers and that each
      `NotificationService` is unique with respect to the underlying
      notification protocol, meaning `NotificationService` for the transaction
      protocol can only be used to send and receive transaction-related
      notifications.
      
      The `NotificationService` concept introduces two additional benefits:
        * allow protocols to start using custom handshakes
        * allow protocols to accept/reject inbound peers
      
      Previously the validation of inbound connections was solely the
      responsibility of `ProtocolController`. This caused issues with light
      peers and `SyncingEngine`, as `ProtocolController` would accept more
      peers than `SyncingEngine` could accept, which caused peers to have
      differing views of their own states. `SyncingEngine` would reject excess
      peers but these rejections were not properly communicated to those
      peers, causing them to assume that they were accepted.
      
      With `NotificationService`, the local handshake is not sent to the
      remote peer if the peer is rejected, which allows it to detect that it
      was rejected.
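      A rough sketch of the accept/reject idea (generic names for
      illustration; the real `NotificationService` API differs):
      
      ```rust
      /// Illustrative per-protocol decision for an inbound substream.
      enum ValidationResult {
          Accept,
          Reject,
      }
      
      /// Each protocol owns its own notification service and validates inbound
      /// peers itself, instead of `ProtocolController` accepting on its behalf.
      trait InboundValidator {
          fn validate_inbound(&self, peer: &str, handshake: &[u8]) -> ValidationResult;
      }
      
      /// Sketch of a syncing protocol with a hard cap on peers: once full,
      /// inbound peers are rejected and no handshake is sent back, so the
      /// remote peer can detect the rejection instead of assuming acceptance.
      struct Syncing {
          peers: Vec<String>,
          max_peers: usize,
      }
      
      impl InboundValidator for Syncing {
          fn validate_inbound(&self, _peer: &str, handshake: &[u8]) -> ValidationResult {
              if self.peers.len() >= self.max_peers || handshake.is_empty() {
                  ValidationResult::Reject
              } else {
                  ValidationResult::Accept
              }
          }
      }
      
      fn main() {
          let syncing = Syncing { peers: vec!["peer-a".to_string()], max_peers: 1 };
          // The peer set is full, so this protocol rejects the inbound peer itself.
          assert!(matches!(
              syncing.validate_inbound("peer-b", b"handshake"),
              ValidationResult::Reject
          ));
      }
      ```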
      
      This commit also deprecates the use of `NetworkEventStream` for all
      notification-related events; going forward only DHT events are
      provided through `NetworkEventStream`. If protocols wish to follow each
      other's events, they must introduce additional abstractions, as is done
      for the GRANDPA and transactions protocols by following the syncing
      protocol through `SyncEventStream`.
      
      Fixes https://github.com/paritytech/polkadot-sdk/issues/512
      Fixes https://github.com/paritytech/polkadot-sdk/issues/514
      Fixes https://github.com/paritytech/polkadot-sdk/issues/515
      Fixes https://github.com/paritytech/polkadot-sdk/issues/554
      Fixes https://github.com/paritytech/polkadot-sdk/issues/556
      
      ---
      These changes are transferred from
      https://github.com/paritytech/substrate/pull/14197, but there are no
      functional changes compared to that PR.
      
      ---------
      
      Co-authored-by: Dmitry Markin <[email protected]>
      Co-authored-by: Alexandru Vasile <[email protected]>
    • Remove pov-recovery race condition/Improve zombienet test (#2526) · ec3a61ed
      Sebastian Kunert authored
      The test was a bit flaky on CI. 
      
      There was a race condition in the pov-recovery system. If the timing is
      bad, it can happen that a block waits for a parent that is already
      queued for import. The check whether a block has children waiting
      happens when we insert into the import queue, so we need to do an
      additional check once we receive the import notification for the parent
      block (sketched below).
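      A self-contained sketch of the extra bookkeeping (types simplified; the
      real implementation lives in the cumulus pov-recovery code):
      
      ```rust
      use std::collections::{HashMap, HashSet};
      
      /// Track which recovered blocks are waiting for a parent, and re-check
      /// that map when the parent's import notification arrives.
      struct PovRecovery {
          waiting_for_parent: HashMap<u64, Vec<u64>>, // parent -> waiting children
          imported: HashSet<u64>,
      }
      
      impl PovRecovery {
          fn new() -> Self {
              Self { waiting_for_parent: HashMap::new(), imported: HashSet::new() }
          }
      
          /// A recovered block is ready only if its parent is already imported;
          /// otherwise park it until the parent's import notification arrives.
          fn on_block_recovered(&mut self, block: u64, parent: u64) -> Vec<u64> {
              if self.imported.contains(&parent) {
                  vec![block]
              } else {
                  self.waiting_for_parent.entry(parent).or_default().push(block);
                  Vec::new()
              }
          }
      
          /// The additional check added here: when a block is imported, release
          /// any children that were parked while it was still in the queue.
          fn on_import_notification(&mut self, block: u64) -> Vec<u64> {
              self.imported.insert(block);
              self.waiting_for_parent.remove(&block).unwrap_or_default()
          }
      }
      
      fn main() {
          let mut recovery = PovRecovery::new();
          // Child 2 is recovered while its parent 1 is still in the import queue.
          assert!(recovery.on_block_recovered(2, 1).is_empty());
          // Once the import notification for 1 arrives, 2 is released for import.
          assert_eq!(recovery.on_import_notification(1), vec![2]);
      }
      ```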
      
      The second issue is that `alice` was missing `--in-peers 0` and
      `--out-peers 0`, so alice was sometimes still fetching blocks via sync
      and the assertion on the logs in zombienet would fail.
      
      There is another potential issue that I saw once locally. We have a
      pov-recovery queue that deliberately fails from time to time, to check
      that the retry mechanism does what it should. We now make sure that the
      same candidate is never failed twice, so the tests become more
      predictable.
    • polkadot: remove grandpa pause support (#2511) · b0b4451f
      André Silva authored
      This was never used and we probably don't need it anyway.
    • polkadot: disable block authoring backoff on production networks (#2510) · 58a1f9c5
      André Silva authored
      Currently the polkadot node will back off from block authoring if
      finality starts lagging. This PR disables this mechanism on production
      networks (polkadot and kusama) and adds a flag to optionally force-enable
      it.
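      A minimal sketch of the resulting decision (parameter names and the
      threshold are illustrative, not the node's actual configuration):
      
      ```rust
      /// Back off from authoring when finality lags, unless this is a
      /// production network and the backoff was not explicitly forced on.
      fn should_backoff(
          finality_lag: u32,
          lag_threshold: u32,
          is_production: bool,
          force_enabled: bool,
      ) -> bool {
          if is_production && !force_enabled {
              return false;
          }
          finality_lag > lag_threshold
      }
      
      fn main() {
          // On polkadot/kusama the backoff stays off unless explicitly forced on.
          assert!(!should_backoff(100, 32, true, false));
          assert!(should_backoff(100, 32, true, true));
          // Other networks keep the previous behaviour.
          assert!(should_backoff(100, 32, false, false));
      }
      ```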
  28. Nov 24, 2023
  29. Nov 23, 2023