  1. Aug 12, 2024
    • Fix spelling issues (#5206) · bc22f086
      Elias Rad authored
      Hello
      I found several spelling errors.
      Br, Elias.
    • Bump the known_good_semver group across 1 directory with 3 updates (#5315) · 79e9aa58
      dependabot[bot] authored
      
      Bumps the known_good_semver group with 3 updates in the / directory:
      [serde](https://github.com/serde-rs/serde),
      [serde_derive](https://github.com/serde-rs/serde) and
      [serde_json](https://github.com/serde-rs/json).
      
      Updates `serde` from 1.0.204 to 1.0.206
      Release notes (sourced from [serde's releases](https://github.com/serde-rs/serde/releases)):
      - v1.0.206: Improve support for `flatten` attribute inside of enums ([#2567](https://redirect.github.com/serde-rs/serde/issues/2567), thanks [@Mingun](https://github.com/Mingun))
      - v1.0.205: Use serialize_entry instead of serialize_key + serialize_value when serializing flattened newtype enum variants ([#2785](https://redirect.github.com/serde-rs/serde/issues/2785), thanks [@Mingun](https://github.com/Mingun)); avoid triggering a collection_is_never_read lint in the deserialization of enums containing flattened fields ([#2791](https://redirect.github.com/serde-rs/serde/issues/2791))

      Commits:
      - [`85c73ef`](https://github.com/serde-rs/serde/commit/85c73ef8dea8966d88a03876e6f0dc9359e68cc9) Release 1.0.206
      - [`5ba1796`](https://github.com/serde-rs/serde/commit/5ba1796a7e639839d4e18c3ae23b9bb32b0700b5) Resolve doc_markdown pedantic lint on regression test function
      - [`e52b7b3`](https://github.com/serde-rs/serde/commit/e52b7b380f88e0112c9f84e6258bdd34ad132352) Touch up PR 2567
      - [`84c7419`](https://github.com/serde-rs/serde/commit/84c7419652161bf88f88eb26302b26debfff8a8c) Merge pull request [#2794](https://redirect.github.com/serde-rs/serde/issues/2794) from dtolnay/neverread
      - [`536221b`](https://github.com/serde-rs/serde/commit/536221b1f93a5dcf97352c7d1e3b93a5a56bf747) Temporarily ignore collection_is_never_read on FlattenSkipDeserializing
      - [`fc55ac7`](https://github.com/serde-rs/serde/commit/fc55ac70d34221b38672b1583e496011fbae92aa) Merge pull request [#2567](https://redirect.github.com/serde-rs/serde/issues/2567) from Mingun/fix-2565
      - [`2afe5b4`](https://github.com/serde-rs/serde/commit/2afe5b4ef9d0e89587ec564eadbc7583fd1f0673) Add regression test for issue [#2792](https://redirect.github.com/serde-rs/serde/issues/2792)
      - [`b4ec259`](https://github.com/serde-rs/serde/commit/b4ec2595c9dd8e380227043eba42ff85beb780c2) Correctly process flatten fields in enum variants
      - [`c3ac7b6`](https://github.com/serde-rs/serde/commit/c3ac7b675a38a73170879992976acb0009834ac0) Add regression test for issue [#1904](https://redirect.github.com/serde-rs/serde/issues/1904)
      - [`24614e4`](https://github.com/serde-rs/serde/commit/24614e44bff5466057e46c55394bac3ae20142c4) Add regression test for issue [#2565](https://redirect.github.com/serde-rs/serde/issues/2565)
      - Additional commits viewable in the [compare view](https://github.com/serde-rs/serde/compare/v1.0.204...v1.0.206)
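
      Since the headline change in v1.0.206 is improved support for the `flatten`
      attribute inside enums, here is a minimal sketch of the pattern it concerns;
      the types and field names are illustrative, not taken from the release notes:

      ```rust
      use serde::{Deserialize, Serialize};

      // Illustrative struct that gets flattened into an enum struct variant.
      #[derive(Serialize, Deserialize, Debug)]
      struct Metadata {
          id: u32,
          owner: String,
      }

      #[derive(Serialize, Deserialize, Debug)]
      enum Event {
          Created {
              // The flattened fields appear inline in the variant's JSON object.
              #[serde(flatten)]
              metadata: Metadata,
          },
      }

      fn main() -> serde_json::Result<()> {
          let event: Event = serde_json::from_str(r#"{"Created":{"id":7,"owner":"alice"}}"#)?;
          println!("{}", serde_json::to_string(&event)?);
          Ok(())
      }
      ```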
      
      Updates `serde_derive` from 1.0.204 to 1.0.206
      Release notes and commits are identical to those listed for `serde` above
      (both crates are released together from the serde-rs/serde repository).
      
      Updates `serde_json` from 1.0.121 to 1.0.124
      Release notes (sourced from [serde_json's releases](https://github.com/serde-rs/json/releases)):
      - v1.0.124: Fix a bug in processing string escapes in big-endian architectures ([#1173](https://redirect.github.com/serde-rs/json/issues/1173), thanks [@purplesyringa](https://github.com/purplesyringa))
      - v1.0.123: Optimize string parsing by applying SIMD-within-a-register: 30.3% improvement on [twitter.json](https://github.com/miloyip/nativejson-benchmark/blob/v1.0.0/data/twitter.json) from 613 MB/s to 799 MB/s ([#1161](https://redirect.github.com/serde-rs/json/issues/1161), thanks [@purplesyringa](https://github.com/purplesyringa))
      - v1.0.122: Support using `json!` in no-std crates ([#1166](https://redirect.github.com/serde-rs/json/issues/1166))

      Commits:
      - [`cf771a0`](https://github.com/serde-rs/json/commit/cf771a0471dd797b6fead77e767f2f7943740c98) Release 1.0.124
      - [`8b314a7`](https://github.com/serde-rs/json/commit/8b314a77bf57ad8d6089536fea1b3c3b303cba92) Merge pull request [#1173](https://redirect.github.com/serde-rs/json/issues/1173) from iex-rs/fix-big-endian
      - [`8eba786`](https://github.com/serde-rs/json/commit/8eba7863b126584f4b9a5b1d3cc4cbc0d0f59976) Fix skip_to_escape on BE architectures
      - [`2cab07e`](https://github.com/serde-rs/json/commit/2cab07e68607ab0e11c3a8b0461a472c37886210) Release 1.0.123
      - [`346189a`](https://github.com/serde-rs/json/commit/346189a524694b98b92ccccb07775868d34b144c) Fix needless_borrow clippy lint in new control character test
      - [`859ead8`](https://github.com/serde-rs/json/commit/859ead8e6d60f4eaed97f7ac2b18f879bec5afe5) Merge pull request [#1161](https://redirect.github.com/serde-rs/json/issues/1161) from iex-rs/vectorized-string-parsing
      - [`e43da5e`](https://github.com/serde-rs/json/commit/e43da5ee0e64819972f08254e8ce799796238791) Immediately bail-out on empty strings
      - [`8389d8a`](https://github.com/serde-rs/json/commit/8389d8a11293616ce5a4358651aede271871248d) Don't run the slow algorithm from the beginning
      - [`1f0dcf7`](https://github.com/serde-rs/json/commit/1f0dcf791ab1756d7ad07c20889e50bd9a7887fb) Allow clippy::items_after_statements
      - [`a95d6df`](https://github.com/serde-rs/json/commit/a95d6df9d08611c9a11ac6524903d693921b8eae) Big endian support
      - Additional commits viewable in the [compare view](https://github.com/serde-rs/json/compare/v1.0.121...v1.0.124)
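
      As a small usage sketch touching the areas mentioned above (string-escape
      parsing and the `json!` macro); the input document is made up for illustration:

      ```rust
      use serde_json::{json, Value};

      fn main() -> serde_json::Result<()> {
          // Escaped characters exercise the string-parsing path optimized in
          // v1.0.123 and fixed for big-endian targets in v1.0.124.
          let parsed: Value = serde_json::from_str(r#"{"msg":"line1\nline2 \u2603"}"#)?;
          assert_eq!(parsed["msg"], json!("line1\nline2 ☃"));
          println!("{parsed}");
          Ok(())
      }
      ```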
      
      
      Dependabot will resolve any conflicts with this PR as long as you don't
      alter it yourself. You can also trigger a rebase manually by commenting
      `@dependabot rebase`.
      
      
      ---
      
      Dependabot commands and options:
      
      You can trigger Dependabot actions by commenting on this PR:
      - `@dependabot rebase` will rebase this PR
      - `@dependabot recreate` will recreate this PR, overwriting any edits
      that have been made to it
      - `@dependabot merge` will merge this PR after your CI passes on it
      - `@dependabot squash and merge` will squash and merge this PR after
      your CI passes on it
      - `@dependabot cancel merge` will cancel a previously requested merge
      and block automerging
      - `@dependabot reopen` will reopen this PR if it is closed
      - `@dependabot close` will close this PR and stop Dependabot recreating
      it. You can achieve the same result by closing it manually
      - `@dependabot show <dependency name> ignore conditions` will show all
      of the ignore conditions of the specified dependency
      - `@dependabot ignore <dependency name> major version` will close this
      group update PR and stop Dependabot creating any more for the specific
      dependency's major version (unless you unignore this specific
      dependency's major version or upgrade to it yourself)
      - `@dependabot ignore <dependency name> minor version` will close this
      group update PR and stop Dependabot creating any more for the specific
      dependency's minor version (unless you unignore this specific
      dependency's minor version or upgrade to it yourself)
      - `@dependabot ignore <dependency name>` will close this group update PR
      and stop Dependabot creating any more for the specific dependency
      (unless you unignore this specific dependency or upgrade to it yourself)
      - `@dependabot unignore <dependency name>` will remove all of the ignore
      conditions of the specified dependency
      - `@dependabot unignore <dependency name> <ignore condition>` will
      remove the ignore condition of the specified dependency and ignore
      conditions
      
      
      
      Signed-off-by: dependabot[bot] <support@github.com>
      Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
      Co-authored-by: Bastian Köcher <git@kchr.de>
    • `polkadot-node-core-pvf-common`: Fix test compilation error (#5310) · 8e8dc618
      Alexander Theißen authored
      This crate only uses `tempfile` on Linux but includes it unconditionally
      in its `Cargo.toml`. It also sets `#![deny(unused_crate_dependencies)]`.
      This leads to a hard error on anything that is not Linux.
      
      This PR fixes this error. I am wondering why CI didn't catch that.
      Shouldn't the test at least be compiled (but not run) on macOS?
    • ci: Paused `cmd-action` commenter (#5287) · aca25a00
      Javyer authored
      Paused the action which comments on every command starting with `bot `
      until we can fix all the commands which are not working.
    • Remove unnecessary mut (#5318) · bcc96733
      Nazar Mokrynskyi authored
      
      Trivial leftover from
      https://github.com/paritytech/polkadot-sdk/pull/4844
      
      Co-authored-by: Adrian Catangiu <adrian@parity.io>
      Co-authored-by: Bastian Köcher <git@kchr.de>
    • chain-spec: minor clarification on the genesis config patch (#5324) · b52cfc26
      Michal Kucharczyk authored
      Added minor clarification on the genesis config patch
      ([link](https://substrate.stackexchange.com/questions/11813/in-the-genesis-config-what-does-the-patch-key-do/11825#11825))
      
      ---------
      
      Co-authored-by: command-bot <>
    • fix av-distribution Jaeger spans mem leak (#5321) · fc906d5d
      Alin Dima authored
      Fixes https://github.com/paritytech/polkadot-sdk/issues/5258
    • Fix favicon link to fix CI (#5319) · 1f49358d
      Dónal Murray authored
      The polkadot.network website was recently refreshed and the
      `favicon-32x32.png` was removed. It was linked in some docs and so the
      docs have been updated to point to a working favicon on the new website.
      
      Previously the lychee link checker was failing on all PRs.
    • xcm-executor: allow deposit of multiple assets if at least one of them satisfies ED (#4460) · ebcbca3f
      jserrat authored
      
      Closes #4242
      
      XCM programs that deposit assets to some new (empty) account will now
      succeed if at least one of the deposited assets satisfies ED. Before
      this change, the requirement was that the _first_ asset had to satisfy
      ED, but asset order can be changed during reanchoring, so it is not
      reliable.
      
      With this PR, ordering doesn't matter: any one (or more) of them can
      satisfy ED for the whole deposit to work.
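
      A minimal sketch of the new acceptance rule; the `Asset` type and its
      existential-deposit field are illustrative stand-ins, not the actual
      XCM executor types:

      ```rust
      // Stand-in asset type for illustration only.
      struct Asset {
          amount: u128,
          existential_deposit: u128,
      }

      // The deposit to a fresh account is allowed if ANY deposited asset meets
      // its existential deposit, regardless of ordering.
      fn deposit_allowed_for_new_account(assets: &[Asset]) -> bool {
          assets.iter().any(|a| a.amount >= a.existential_deposit)
      }

      fn main() {
          let assets = [
              Asset { amount: 1, existential_deposit: 10 },    // below ED
              Asset { amount: 500, existential_deposit: 100 }, // satisfies ED
          ];
          // Order no longer matters: one satisfying asset is enough.
          assert!(deposit_allowed_for_new_account(&assets));
      }
      ```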
      
      Kusama address: FkB6QEo8VnV3oifugNj5NeVG3Mvq1zFbrUu4P5YwRoe5mQN
      
      ---------
      
      Co-authored-by: Adrian Catangiu <adrian@parity.io>
      Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com>
      Co-authored-by: command-bot <>
    • prospective-parachains rework: take II (#4937) · 0b52a2c1
      Alin Dima authored
      Resolves https://github.com/paritytech/polkadot-sdk/issues/4800
      
      # Problem
      In https://github.com/paritytech/polkadot-sdk/pull/4035, we removed
      support for parachain forks and cycles and added support for backing
      unconnected candidates (candidates for which we don't yet know the full
      path to the latest included block), which is useful for elastic scaling
      (parachains using multiple cores).
      
      Removing support for backing forks turned out to be a bad idea, as there
      are legitimate cases for a parachain to fork (if it uses another
      consensus mechanism, for example BABE or PoW). This leads to
      validators getting lower backing rewards (depending on whether they back
      the winning fork or not) and higher pressure on only half of the
      backing group (during availability-distribution, for example). Since we
      don't yet have approval voting rewards, backing rewards are a pretty big
      deal (which may change in the future).
      
      # Description
      
      A backing group is now allowed to back forks. Once a candidate becomes
      backed (has the minimum backing votes), we don't accept new forks unless
      they adhere to the new fork selection rule (have a lower candidate
      hash).
      This helps with keeping the implementation simpler, since forks will
      only be taken into account for candidates which are not backed yet (only
      seconded).
      Having this fork selection rule also helps with reducing the work
      backing validators need to do, since they have a shared way of picking
      the winning fork. Once they see a candidate backed, they can all decide
      to back a fork and not accept new ones.
      But they still accept new ones during the seconding phase (until the
      backing quorum is reached).
      
      Therefore, a block author which is not part of the backing group will
      likely not even see the forks (only the winning one).
      
      Just as before, a parachain producing forks will still not be able to
      leverage elastic scaling but will still work with a single core. Also,
      cycles are still not accepted.
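
      As a self-contained sketch of the fork selection rule described above,
      using plain 32-byte hashes instead of the real `CandidateHash` type:

      ```rust
      /// Hypothetical helper: among competing unbacked forks for the same parent,
      /// every validator deterministically prefers the lowest candidate hash.
      fn preferred_fork(candidates: &[[u8; 32]]) -> Option<[u8; 32]> {
          candidates.iter().copied().min()
      }

      fn main() {
          let fork_a = [0x11u8; 32];
          let fork_b = [0x22u8; 32];
          // The rule is order-independent, so all backers pick the same fork.
          assert_eq!(preferred_fork(&[fork_b, fork_a]), Some(fork_a));
      }
      ```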
      
      ## Some implementation details
      
      `CandidateStorage` is no longer a subsystem-wide construct. It was
      previously holding candidates from all relay chain forks and complicated
      the code. Each fragment chain now holds its own candidate chain and its
      potential candidates. This should not increase storage consumption,
      since the heavy candidate data is already wrapped in an Arc and shared.
      It does, however, allow for great simplifications and increases readability.
      
      `FragmentChain`s are now only creating a chain with backed candidates
      and the fork selection rule. As said before, `FragmentChain`s are now
      also responsible for maintaining their own potential candidate storage.
      
      Since we no longer have the subsystem-wide `CandidateStorage`, when
      getting a new leaf update, we use the storage of our latest ancestor,
      which may contain candidates seconded/backed that are still in scope.
      
      When a candidate is backed, the fragment chains which hold it are
      recreated (due to the fork selection rule, it could trigger a "reorg" of
      the fragment chain).
      
      I generally tried to simplify the subsystem and not introduce
      unnecessary optimisations that would otherwise complicate the code and
      not gain us much (fragment chains wouldn't realistically ever hold many
      candidates).
      
      TODO:
      - [x] update metrics
      - [x] update docs and comments
      - [x] fix and add unit tests
      - [x] tested with fork-producing parachain
      - [x] tested with cycle-producing parachain
      - [x] versi test
      - [x] prdoc
  2. Aug 09, 2024
  3. Aug 08, 2024
  4. Aug 07, 2024
  5. Aug 06, 2024
  6. Aug 05, 2024
    • Remove unused feature gated code from the minimal template (#5237) · 035211d7
      Przemek Rzad authored
      - Progresses https://github.com/paritytech/polkadot-sdk/issues/5226
      
      There is no actual `try-runtime` or `runtime-benchmarks` functionality
      in the minimal template at the moment.
    • make polkadot-parachain startup errors pretty (#5214) · 0cc3e170
      Alexandru Gheorghe authored
      
      The errors on polkadot-parachain are not printed with their full display
      context (what is marked with `#[error(`), because main returns a plain
      Result and the error is shown in its Debug format. That's not
      consistent with how the polkadot binary behaves and is not user-friendly,
      since it does not tell users why they got the error.
      
      Fix it by using `color_eyre`, as polkadot already does.
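
      A minimal sketch of the `color_eyre` wiring (the `run_node` function is a
      made-up stand-in for the real entry point; `color_eyre::install` and the
      `eyre` re-export are the crate's standard API):

      ```rust
      use color_eyre::eyre::{eyre, Result};

      // Stand-in for the real polkadot-parachain entry point.
      fn run_node() -> Result<()> {
          Err(eyre!("NetworkKeyNotFound: the network key file is missing"))
      }

      fn main() -> Result<()> {
          // Install the pretty report handler once at startup; errors bubbling
          // out of main are then printed with their full Display context.
          color_eyre::install()?;
          run_node()
      }
      ```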
      
      Fixes: https://github.com/paritytech/polkadot-sdk/issues/5211
      
      ## Output before
      ```
      Error: NetworkKeyNotFound("/acala/data/Collator2/chains/mandala-tc9/network/secret_ed25519")
      ```
      
      ## Output after
      ```
      Error: 
         0: Starting an authorithy without network key in /home/alexggh/.local/share/polkadot-parachain/chains/asset-hub-kusama/network/secret_ed25519.
            
             This is not a safe operation because other authorities in the network may depend on your node having a stable identity.
            
             Otherwise these other authorities may not being able to reach you.
            
             If it is the first time running your node you could use one of the following methods:
            
             1. [Preferred] Separately generate the key with: <NODE_BINARY> key generate-node-key --base-path <YOUR_BASE_PATH>
            
             2. [Preferred] Separately generate the key with: <NODE_BINARY> key generate-node-key --file <YOUR_PATH_TO_NODE_KEY>
            
             3. [Preferred] Separately generate the key with: <NODE_BINARY> key generate-node-key --default-base-path
            
             4. [Unsafe] Pass --unsafe-force-node-key-generation and make sure you remove it for subsequent node restarts
      
      ```
      
      ---------
      
      Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
    • Coretime auto-renew (#4424) · f170af61
      Sergej Sakac authored
      
      This PR adds functionality that allows tasks to enable auto-renewal.
      Each task eligible for renewal can enable auto-renewal.
      
      A new storage value is added to track all the cores with auto-renewal
      enabled and the associated task running on the core. The `BoundedVec` is
      sorted by `CoreIndex` to make disabling auto-renewal more efficient.
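
      A rough sketch of that sorted-by-core layout, using a plain `Vec` and
      made-up type aliases instead of the pallet's `BoundedVec` and runtime types:

      ```rust
      type CoreIndex = u16;
      type TaskId = u32;

      #[derive(Debug, PartialEq)]
      struct AutoRenewalRecord {
          core: CoreIndex,
          task: TaskId,
      }

      /// Insert while keeping the records sorted by core index.
      fn enable(records: &mut Vec<AutoRenewalRecord>, core: CoreIndex, task: TaskId) {
          if let Err(pos) = records.binary_search_by_key(&core, |r| r.core) {
              records.insert(pos, AutoRenewalRecord { core, task });
          }
      }

      /// Binary search makes disabling O(log n) instead of a linear scan.
      fn disable(records: &mut Vec<AutoRenewalRecord>, core: CoreIndex) -> bool {
          match records.binary_search_by_key(&core, |r| r.core) {
              Ok(pos) => {
                  records.remove(pos);
                  true
              }
              Err(_) => false,
          }
      }

      fn main() {
          let mut records = Vec::new();
          enable(&mut records, 7, 1001);
          enable(&mut records, 2, 1002);
          assert_eq!(records[0].core, 2); // kept sorted by core index
          assert!(disable(&mut records, 7));
      }
      ```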
      
      Cores are renewed at the start of a new bulk sale. If auto-renewal
      fails (e.g. due to the sovereign account of the task not holding
      sufficient balance), an event will be emitted, and the renewal will
      continue for the other cores.
      
      The two added extrinsics are:
      - `enable_auto_renew`: Extrinsic for enabling auto renewal.
      - `disable_auto_renew`: Extrinsic for disabling auto renewal.
      
      TODOs:
      - [x] Write benchmarks for the newly added extrinsics.
      
      Closes: #4351
      
      ---------
      
      Co-authored-by: Dónal Murray <donalm@seadanda.dev>
    • network/strategy: Backoff and ban overloaded peers to avoid submitting the... · 6619277b
      Alexandru Vasile authored
      network/strategy: Backoff and ban overloaded peers to avoid submitting the same request multiple times (#5029)
      
      This PR avoids submitting the same block or state request multiple times
      to the same slow peer.
      
      Previously, we submitted the same request to the same slow peer, which
      resulted in reputation bans on the slow peer side.
      Furthermore, the strategy selected the same slow peer multiple times to
      submit queries to, although a better candidate may exist.
      
      Instead, in this PR we:
      - introduce a `DisconnectedPeers` via LRU with 512 peer capacity to only
      track the state of disconnected peers with a request in flight
      - when the `DisconnectedPeers` detects a peer disconnected with a
      request in flight, the peer is backed off
        - on the first disconnection: 60 seconds
        - on second disconnection: 120 seconds
      - on the third disconnection the peer is banned, and the peer remains
      banned until the peerstore decays its reputation
        
      This PR lifts the pressure from overloaded nodes that cannot process
      requests in due time.
      And if a peer is detected to be slow after backoffs, the peer is banned.
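
      A simplified sketch of that backoff-then-ban policy (60 s, 120 s, then ban),
      using a plain `HashMap` keyed by a peer-id string instead of the bounded LRU:

      ```rust
      use std::collections::HashMap;
      use std::time::Duration;

      enum Decision {
          Backoff(Duration),
          Ban,
      }

      #[derive(Default)]
      struct DisconnectedPeers {
          // The real strategy bounds this with a 512-entry LRU; a HashMap keeps
          // the sketch self-contained.
          disconnects: HashMap<String, u32>,
      }

      impl DisconnectedPeers {
          /// Called when a peer disconnects while still owing us a response.
          fn on_disconnect_with_inflight(&mut self, peer: &str) -> Decision {
              let count = self.disconnects.entry(peer.to_string()).or_insert(0);
              *count += 1;
              match *count {
                  1 => Decision::Backoff(Duration::from_secs(60)),
                  2 => Decision::Backoff(Duration::from_secs(120)),
                  _ => Decision::Ban,
              }
          }
      }

      fn main() {
          let mut peers = DisconnectedPeers::default();
          for i in 1..=3 {
              match peers.on_disconnect_with_inflight("peer-a") {
                  Decision::Backoff(d) => println!("disconnect {i}: back off for {d:?}"),
                  Decision::Ban => println!("disconnect {i}: ban until reputation decays"),
              }
          }
      }
      ```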
      
      Theoretically, submitting the same request multiple times can still
      happen when:
      - (a) we back off and ban the peer
      - (b) the network does not discover other peers -- this may also be a
      test net
      - (c) the peer gets reconnected after the reputation decay and is still
      slow to respond
      
      
      
      Aims to improve:
      - https://github.com/paritytech/polkadot-sdk/issues/4924
      - https://github.com/paritytech/polkadot-sdk/issues/531
      
      Next Steps:
      - Investigate the network after this is deployed, possibly bumping the
      keep-alive timeout or seeing if there's something else misbehaving
      
      
      
      
      This PR builds on top of:
      - https://github.com/paritytech/polkadot-sdk/pull/4987
      
      
      ### Testing Done
      - Added a couple of unit tests where a test harness was set in place
      
      - Local testnet
      
      ```bash
      13:13:25.102 DEBUG tokio-runtime-worker sync::persistent_peer_state: Added first time peer 12D3KooWHdiAxVd8uMQR1hGWXccidmfCwLqcMpGwR6QcTP6QRMuD
      
      13:14:39.102 DEBUG tokio-runtime-worker sync::persistent_peer_state: Remove known peer 12D3KooWHdiAxVd8uMQR1hGWXccidmfCwLqcMpGwR6QcTP6QRMuD state: DisconnectedPeerState { num_disconnects: 2, last_disconnect: Instant { tv_sec: 93355, tv_nsec: 942016062 } }, should ban: false
      
      13:16:49.107 DEBUG tokio-runtime-worker sync::persistent_peer_state: Remove known peer 12D3KooWHdiAxVd8uMQR1hGWXccidmfCwLqcMpGwR6QcTP6QRMuD state: DisconnectedPeerState { num_disconnects: 3, last_disconnect: Instant { tv_sec: 93485, tv_nsec: 947551051 } }, should ban: true
      
      13:16:49.108  WARN tokio-runtime-worker peerset: Report 12D3KooWHdiAxVd8uMQR1hGWXccidmfCwLqcMpGwR6QcTP6QRMuD: -2147483648 to -2147483648. Reason: Slow peer after backoffs. Banned, disconnecting.
      ```
      
      cc @paritytech/networking
      
      ---------
      
      Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
    • Fix frame crate usage doc (#5222) · ad1e556e
      Kian Paimani authored
    • beefy: Tolerate pruned state on runtime API call (#5197) · 2abd03ef
      Sebastian Kunert authored
      While working on #5129 I noticed that after warp sync, nodes would
      print:
      ```
      2024-07-29 17:59:23.898 ERROR ⋮beefy: 🥩 Error: ConsensusReset. Restarting voter.    
      ```
      
      After some debugging I found that we enter the following loop:
      1. Wait for the beefy pallet to be available: the pallet is detected as
      available directly after warp sync, since we are at the tip.
      2. Wait for headers from the tip to beefy genesis to be available: during
      this time we don't process finality notifications, since we later want
      to inspect all the headers for authority set changes.
      3. Gap sync finishes, and the route to beefy genesis is available.
      4. The worker starts acting and tries to fetch the beefy genesis block. It
      fails, since we are acting on old finality notifications for which the state
      is already pruned.
      5. The whole beefy subsystem is restarted, loading the state from the db
      again and iterating over a lot of headers.
      
      This already happened before #5129.