  14. Sep 16, 2024
• [stable2409]: Backport #5688 (#5727) · a45d2034
  ordian authored
      As requested here:
      https://github.com/paritytech/polkadot-sdk/pull/5688#issuecomment-2352939516
      
I don't think it needs to be backported to 2407, as the issue was not
present there yet.
  15. Sep 09, 2024
• [backport] Add benchmark for the number of minimum cpu cores (#5127) (#5613) · 823ecee0
  Alexandru Gheorghe authored
This backports https://github.com/paritytech/polkadot-sdk/pull/5127 to
the stable branch.
      
      Unfortunately https://polkadot.subsquare.io/referenda/1051 passed after
      the cut-off deadline and I missed the window of getting this PR merged.
      
The change itself is super low-risk: it just prints a new message
telling validators that, starting with January 2025, the required
minimum number of hardware cores will be 8. I see value in getting this
in front of the validators as soon as possible.
      
Since we have not released anything yet and it does not invalidate any
QA we have already done, it should be painless to include it in the
current release.
      
      (cherry picked from commit a947cb83)
  16. Sep 05, 2024
• [stable2409] Backport #5581 (#5604) · 1c6da61f
  github-actions[bot] authored
Backport #5581 into `stable2409` (cc @franciscoaguirre).
      
      The dry-run shows in `forwarded_xcms` all the messages in the queues
      at the time of calling the API.
      Each time the API is called, the result could be different.
      You could get messages even if you dry-run something that doesn't send
      a message, like a `System::remark`.
      
      This commit fixes this by clearing the message queues before doing the
      dry-run, so the only messages left are the ones the users of the API actually
      care about.
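A minimal sketch of the fix, using hypothetical stand-in types rather than the real runtime API (the actual change lives in the XCM dry-run implementation): the queues are cleared before executing the call, so `forwarded_xcms` only contains messages produced by the dry-run itself.

```rust
#[derive(Debug, Clone)]
struct Xcm(String); // stand-in for a real XCM message

#[derive(Debug)]
struct DryRunEffects {
    forwarded_xcms: Vec<Xcm>,
}

fn dry_run(call: impl FnOnce(&mut Vec<Xcm>), queues: &mut Vec<Xcm>) -> DryRunEffects {
    // Clear pre-existing messages first, so the result only contains
    // messages produced by this dry-run.
    queues.clear();
    call(queues);
    DryRunEffects { forwarded_xcms: queues.drain(..).collect() }
}

fn main() {
    // A message left over in the queue from a previous operation.
    let mut queues = vec![Xcm("stale".into())];
    // Dry-running a call that sends nothing now yields no forwarded XCMs.
    let effects = dry_run(|_q| { /* e.g. a System::remark */ }, &mut queues);
    assert!(effects.forwarded_xcms.is_empty());
}
```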
      
Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com>
  17. Sep 02, 2024
• Bump spec_version to 1_016_000 · 45b72c1b
      EgorPopelyaev authored
• EgorPopelyaev authored · 202f3496
• collator-protocol: Handle unknown validator heads (#5538) · f58e2b80
  Bastian Köcher authored
      There is a race condition when a validator sends its heads to the
      collator, but the collator doesn't yet know these heads. Before it is
      aware of these heads by importing the block(s), any collation registered
      on the collator is not announced to the validators. The collations
aren't advertised, because the collator doesn't yet know that these
heads of the validator are descendants of the collation's relay parent.
      
      The solution is to store these unknown heads of the validators and to
      handle them when the collator updates its own view.
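A sketch of this buffering pattern with simplified, hypothetical state (the real subsystem tracks considerably more per-head data):

```rust
use std::collections::HashSet;

type Hash = [u8; 32];

/// Simplified collator-side state for this sketch.
#[derive(Default)]
struct State {
    /// Heads advertised by validators that we haven't imported yet.
    unknown_heads: HashSet<Hash>,
    /// Heads in our own view.
    known_heads: HashSet<Hash>,
}

impl State {
    fn on_validator_view(&mut self, head: Hash) {
        if !self.known_heads.contains(&head) {
            // Don't drop the head: remember it until our view catches up.
            self.unknown_heads.insert(head);
        }
    }

    fn on_own_view_update(&mut self, imported: &[Hash]) {
        for head in imported {
            self.known_heads.insert(*head);
            if self.unknown_heads.remove(head) {
                // The head is now known, so collations whose relay parent
                // is an ancestor of it can finally be advertised.
                self.advertise_collations(*head);
            }
        }
    }

    fn advertise_collations(&self, _head: Hash) { /* elided */ }
}
```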
• Improve `sc-service` API (#5364) · da654103
  Nazar Mokrynskyi authored
      
      This improves `sc-service` API by not requiring the whole
      `&Configuration`, using specific configuration options instead.
      `RpcConfiguration` was also extracted from `Configuration` to group all
      RPC options together.
      
We don't use Substrate's CLI and would rather not use `Configuration`
either, but some key public functions require it even though they
ignore most of the fields anyway.
      
`RpcConfiguration` is very helpful not just for consolidating the
fields, but also for finally making RPC optional for our use case;
Substrate still runs an RPC server on localhost even if the listening
address is explicitly set to `None`, which is annoying (I suspect there
is a reason for it, so I didn't want to change the default just yet).
      
      While this is a breaking change, most developers will not notice it if
      they use higher-level APIs.
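A sketch of the shape of the change (field names here are illustrative, not the exact `sc-service` API):

```rust
use std::net::SocketAddr;

/// Illustrative only: RPC options grouped together instead of being
/// spread across the whole `Configuration`.
pub struct RpcConfiguration {
    /// `None` genuinely disables the RPC server in this sketch.
    pub addr: Option<SocketAddr>,
    pub max_connections: u32,
    pub rate_limit_per_minute: Option<u32>,
}

/// Functions take only what they need rather than `&Configuration`.
pub fn start_rpc(rpc: &RpcConfiguration) {
    match rpc.addr {
        Some(addr) => println!("starting RPC server on {addr}"),
        None => println!("RPC disabled"), // no implicit localhost fallback
    }
}

fn main() {
    start_rpc(&RpcConfiguration { addr: None, max_connections: 100, rate_limit_per_minute: None });
}
```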
      
      Fixes https://github.com/paritytech/polkadot-sdk/issues/2897
      
      ---------
      
Co-authored-by: Niklas Adolfsson <niklasadolfsson1@gmail.com>
• Swaps for XCM delivery fees (#5131) · 5291412e
  Francisco Aguirre authored
      # Context
      
      Fees can already be paid in other assets locally thanks to the Trader
      implementations we have.
      This doesn't work when sending messages because delivery fees go through
      a different mechanism altogether.
The idea is to fix this by leveraging the `AssetExchanger` config item,
which is able to turn the asset the user wants to pay fees in into the
asset the router expects for delivery fees.
      
      # Main addition
      
      An adapter was needed to use `pallet-asset-conversion` for exchanging
      assets in XCM.
      This was created in
      https://github.com/paritytech/polkadot-sdk/pull/5130.
      
      The XCM executor was modified to use `AssetExchanger` (when available)
      to swap assets to pay for delivery fees.
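A hedged sketch of that executor-side step, with simplified stand-in types (the real code works on XCM `Assets` in the holding register):

```rust
#[derive(Debug, Clone, PartialEq)]
struct Asset {
    id: u32,
    amount: u128,
}

trait AssetExchanger {
    /// Swap `give` into the wanted asset; on failure, return `give` back.
    fn exchange(give: Asset, want_id: u32) -> Result<Asset, Asset>;
}

/// If the fee asset isn't what the router charges in, swap it first.
fn pay_delivery_fees<E: AssetExchanger>(fee_asset: Asset, required_id: u32) -> Result<Asset, ()> {
    if fee_asset.id == required_id {
        return Ok(fee_asset);
    }
    E::exchange(fee_asset, required_id).map_err(|_| ())
}

/// A mock exchanger pretending a 1:1 pool exists between any two assets.
struct MockPool;
impl AssetExchanger for MockPool {
    fn exchange(give: Asset, want_id: u32) -> Result<Asset, Asset> {
        Ok(Asset { id: want_id, amount: give.amount })
    }
}

fn main() {
    // Pay delivery fees in asset 1984 on a router that charges in asset 0.
    let fee = Asset { id: 1984, amount: 100 };
    assert_eq!(pay_delivery_fees::<MockPool>(fee, 0).unwrap().id, 0);
}
```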
      
      ## Limitations
      
      We can only pay for delivery fees in different assets in intermediate
      hops. We can't pay in different assets locally. The first hop will
      always need the native token of the chain (or whatever is specified in
      the `XcmRouter`).
      This is a byproduct of using the `BuyExecution` instruction to know
      which asset should be used for delivery fee payment.
      Since this instruction is not present when executing an XCM locally, we
      are left with this limitation.
      To illustrate this limitation, I'll show two scenarios. All chains
      involved have pools.
      
      ### Scenario 1
      
      Parachain A --> Parachain B
      
      Here, parachain A can use any asset in a pool with its native asset to
      pay for local execution fees.
      However, as of now we can't use those for local delivery fees.
      This means transfers from A to B need some amount of A's native token to
      pay for delivery fees.
      
      ### Scenario 2
      
      Parachain A --> Parachain C --> Parachain B
      
      Here, Parachain C's remote delivery fees can be paid with any asset in a
      pool with its native asset.
      This allows a reserve asset transfer between A and B with C as the
      reserve to only need A's native token at the starting hop.
      After that, it could all be pool assets.
      
      ## Future work
      
      The fact that delivery fees go through a totally different mechanism
      results in a lot of bugs and pain points.
      Unfortunately, this is not so easy to solve in a backwards compatible
      manner.
      Delivery fees will be integrated into the language in future XCM
      versions, following
      https://github.com/polkadot-fellows/xcm-format/pull/53.
      
      Old PR: https://github.com/paritytech/polkadot-sdk/pull/4375.
• [3 / 5] Move crypto checks in the approval-distribution (#4928) · 6b854acc
  Alexandru Gheorghe authored
      
# Prerequisite
This is part of the work to further optimize the approval subsystems;
if you want to understand the full context, start by reading
https://github.com/paritytech/polkadot-sdk/pull/4849#issue-2364261568.
      
      # Description
This PR contains changes so that the crypto checks are performed by the
approval-distribution subsystem instead of the approval-voting one. The
benefit is twofold (a simplified model of the pipelining is sketched
after the list):
      1. Approval-distribution won't have to wait every single time for the
      approval-voting to finish its job, so the work gets to be pipelined
      between approval-distribution and approval-voting.
      
2. By running multiple instances of approval-distribution in parallel,
as described in
https://github.com/paritytech/polkadot-sdk/pull/4849#issue-2364261568,
this significant body of work gets to run in parallel as well.
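A simplified model of the pipelining, with plain threads and a channel standing in for the actual subsystem overseer and message plumbing:

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel::<u32>();

    // Stand-in for approval-voting: consumes already-checked messages.
    let voting = thread::spawn(move || {
        for _checked_msg in rx {
            // book-keeping on a message whose crypto was already verified
        }
    });

    // Stand-in for approval-distribution: with the crypto checks moved
    // here, it keeps processing new messages without waiting for
    // approval-voting to finish each one.
    for msg in 0..100u32 {
        // crypto check would happen here now
        tx.send(msg).expect("receiver alive");
    }
    drop(tx);
    voting.join().unwrap();
}
```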
      
      ## Changes:
1. When approval-voting sends `ApprovalDistributionMessage::NewBlocks`,
it needs to pass the core_index and candidate_hash of the candidates.
      2. ApprovalDistribution needs to use `RuntimeInfo` to be able to fetch
      the SessionInfo from the runtime.
      3. Move `approval-voting` logic that checks VRF assignment into
      `approval-distribution`
      4. Move `approval-voting` logic that checks vote is correctly signed
      into `approval-distribution`
      5. Plumb `approval-distribution` and `approval-voting` tests to support
      the new logic.
      
      ## Benefits
Even without parallelisation the gains are significant: for example, on
my machine, if we run the approval subsystem bench for 500 validators
and 100 cores and trigger all 89 tranches of assignments and approvals,
the system no longer falls behind because of late processing of messages.
      ```
      Before change
      Chain selection approved  after 11500 ms hash=0x0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a
      
      After change
      
      Chain selection approved  after 5500 ms hash=0x0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a
      ```
      
      ## TODO:
      - [x] Run on versi.
      - [x] Update parachain host documentation.
      
      ---------
      
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
  18. Aug 30, 2024
• Add more logs for AcceptanceCheckErr (#5513) · 9cdf3d99
  zjb0807 authored
      # Description
      
The error message should be logged when the check method returns an
error.
      
      Because specific information is lost when `UmpAcceptanceCheckErr`,
      `ProcessedDownwardMessagesAcceptanceErr`, `HrmpWatermarkAcceptanceErr`,
      `OutboundHrmpAcceptanceErr` are converted to `AcceptanceCheckErr`, a log
      is added to each check.
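A sketch of the pattern, with simplified error types named after the ones in the commit message (not the real pallet definitions): log the specific error before the lossy conversion into `AcceptanceCheckErr`.

```rust
#[derive(Debug)]
struct UmpAcceptanceCheckErr(String);

#[derive(Debug)]
enum AcceptanceCheckErr {
    UpwardMessages, // the specific detail is gone after conversion
}

impl From<UmpAcceptanceCheckErr> for AcceptanceCheckErr {
    fn from(_: UmpAcceptanceCheckErr) -> Self {
        AcceptanceCheckErr::UpwardMessages
    }
}

fn check_upward_messages() -> Result<(), UmpAcceptanceCheckErr> {
    Err(UmpAcceptanceCheckErr("queue capacity exceeded".into()))
}

fn check_candidate() -> Result<(), AcceptanceCheckErr> {
    check_upward_messages().map_err(|e| {
        // Log the detailed error before it is converted away.
        eprintln!("UMP acceptance check failed: {e:?}");
        e.into()
    })
}

fn main() {
    let _ = check_candidate();
}
```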
      
• Polkadot Primitives v8 (#5525) · 09035a7d
  Andrei Sandu authored
      
As Runtime release 1.3.0 includes all of the remaining staging
primitives and APIs, we can now release primitives version 8.
No changes other than renaming/moving are done here.
      
      ---------
      
Signed-off-by: Andrei Sandu <andrei-mihail@parity.io>
• Add support for memory-profiling on subsystem-bench (#5522) · c32160e3
  Alexandru Gheorghe authored
      
Add support in subsystem-benchmarks for profiling memory usage using
the jemalloc built-in profiler. This allows us to run each benchmark
with profiling enabled and determine whether the memory usage patterns
conform to our expectations.
      
      ---------
      
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
  19. Aug 29, 2024
• inclusion: bench `enact_candidate` weight (#5270) · ddd58c15
  ordian authored
      On top of #5082.
      
      ## Background
      
Previously, before #3479, we would
[include](https://github.com/paritytech/polkadot-sdk/blame/75074952/polkadot/runtime/parachains/src/builder.rs#L508C12-L508C44)
the cost of enacting the candidate in the cost of processing a single
bitfield.
[Now](https://github.com/paritytech/polkadot-sdk/blame/dd48544a/polkadot/runtime/parachains/src/builder.rs#L529)
it is different, although the benchmarks seem to be out of date.
Including the cost of enacting a candidate in the cost of processing a
single bitfield was incorrect, since we multiply that by the number of
bitfields we have. Instead, we should separately calculate the cost of
processing a single bitfield without enactment, and multiply the cost of
enactment by the actual number of processed candidates (which is limited
by the number of cores, not validators).
      
      ## Bench
      
      Previously, the weight of `enact_candidate` was calculated manually
      (without a benchmark) and then neglected:
https://github.com/paritytech/polkadot-sdk/blob/dd48544a/polkadot/runtime/parachains/src/inclusion/mod.rs#L584
      
      In this PR, we have a benchmark for it and it's based on the number of
      ump and sent hrmp messages as well as whether the candidate has a
      runtime upgrade (new_validation_code).
      The differences from the previous attempt
      https://github.com/paritytech/polkadot/pull/6929 are that
      * we don't include the cost of enactment into the cost of processing a
      backed candidate.
      The reason for it is that enactment happens not in the same block as
      backing (typically the next one), since we process bitfields before
      backing votes.
      * we don't take into account the size of the runtime upgrade, the
      benchmark weight doesn't seem to depend much on it, but rather whether
      there was one or not.
      
      Similarly to the previous attempt, we don't account for dmp messages
      (fixed cost). Also we don't account properly for received hrmp messages
      (hrmp_watermark) because the cost of it depends on the runtime state and
      can't be statically deduced in the benchmark (unless we pass the
      information about channels as benchmark u32 arguments).
      
      The total weight cost of processing a parainherent now includes the cost
      of enactment of each candidate, but we don't do filtering based on that
      (because we enact after processing bitfields and making other changes to
      the storage).
      
      ## Numbers
      
      ```
      Reads = 7 + (0 * u) + (3 * h) + (8 * c)
      Writes = 10 + (1 * u) + (3 * h) + (7 * c)
      ```
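Encoding the stated formulas directly, as a sketch rather than the generated weight file (u = runtime upgrades, h = sent HRMP messages, c = enacted candidates):

```rust
/// Sketch of the formulas above, not the generated weight file.
fn db_reads(u: u64, h: u64, c: u64) -> u64 {
    7 + 0 * u + 3 * h + 8 * c
}

fn db_writes(u: u64, h: u64, c: u64) -> u64 {
    10 + 1 * u + 3 * h + 7 * c
}

fn main() {
    // One candidate with a runtime upgrade and 10 sent HRMP messages:
    assert_eq!(db_reads(1, 10, 1), 45); // 7 + 0 + 30 + 8
    assert_eq!(db_writes(1, 10, 1), 48); // 10 + 1 + 30 + 7
}
```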
In addition, there is a fixed cost of a few ms (!) per candidate.

This might result in a full block slightly overflowing its weight with
200 enacted candidates, which in turn could prevent non-mandatory
transactions from being included in a block.
      
      Given our modest limits on max ump and hrmp messages:
      ```
        maxUpwardMessageNumPerCandidate: 16
        hrmpMaxMessageNumPerCandidate: 10
      ```
and the fact that runtime upgrades can't happen very frequently
(`validation_upgrade_cooldown`), we might only go over the limits in
case of many disputes.
      
      TODOs:
      - [x] Fix the overweight test
      - [x] Generate the weights for Westend and Rococo
      - [x] PRDoc
      
      ---------
      
      Co-authored-by: command-bot <>
Co-authored-by: Alin Dima <alin@parity.io>
• short-term fix for para inherent weight overestimation (#5082) · cc7ebe05
  ordian authored
      closes #849
      
      ## Context
      
      For the background on this and the long-term fix, see
      https://github.com/paritytech/polkadot-sdk/issues/849#issuecomment-2247895862.
      
      ## Changes
      
* The weight files are renamed from `runtime_(parachains|common).*` to
`polkadot_runtime_(parachains|common).*`. The reason for this is the
renaming introduced in #4633: the weight command and the files it now
generates include the `polkadot_` prefix.
* The WeightInfo for `paras_inherent` now includes `enter_empty`, which
calculates the cost of processing an empty parachains inherent. This
cost is subtracted dynamically when calculating the other weights, so
the other weights remain the same (see the sketch below).
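A sketch of that dynamic subtraction (illustrative numbers and a toy `Weight` type, not the actual `WeightInfo` trait):

```rust
/// Toy weight type for this sketch.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Weight(u64);

impl Weight {
    fn saturating_sub(self, other: Weight) -> Weight {
        Weight(self.0.saturating_sub(other.0))
    }
}

/// Benchmarked cost of processing an empty parachains inherent
/// (illustrative number).
fn enter_empty() -> Weight {
    Weight(1_000_000)
}

/// Benchmarked cost of a bitfields-only inherent, which inevitably
/// includes the empty-inherent base cost (illustrative number).
fn enter_bitfields_total() -> Weight {
    Weight(1_500_000)
}

/// Marginal cost of the bitfields alone: total minus the empty base.
fn enter_bitfields() -> Weight {
    enter_bitfields_total().saturating_sub(enter_empty())
}

fn main() {
    assert_eq!(enter_bitfields(), Weight(500_000));
}
```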
      
      ## Benefits
      
      See
      https://github.com/paritytech/polkadot-sdk/issues/849#issuecomment-2247895862,
but the TL;DR is that we are not blocked on weights for scaling the number
      of validators and cores further.
      
      Resolved questions:
- [x] Why are the new benchmarks for Westend doing fewer DB IOPS?
Is it due to the polkadot-sdk update (DB IOPS diff),
or is the bench setup no longer valid?
      
      https://github.com/polkadot-fellows/runtimes/blob/7723274a2c5cbb10213379271094d5180716ca7d/relay/polkadot/src/weights/runtime_parachains_paras_inherent.rs#L131-L196
      Answer: see background section of #5270 
      
      TODOs:
      - [x] Rerun benchmarks for Rococo and Westend
      - [x] PRDoc
      
      ---------
      
      Co-authored-by: command-bot <>
  20. Aug 28, 2024
• rpc server: listen to `ipv6 socket` if available and `--experimental-rpc-endpoint` CLI option (#4792) · 09254eb9
  Niklas Adolfsson authored
      
      Close https://github.com/paritytech/polkadot-sdk/issues/3488,
      https://github.com/paritytech/polkadot-sdk/issues/4331
      
      This changes/adds the following:
      
1. By default, Substrate starts an RPC server that listens on
localhost, on both IPv4 and IPv6, on the same port. IPv6 is allowed to
fail because some platforms may not support it.
2. A new RPC CLI option `--experimental-rpc-endpoint` allows configuring
arbitrary listen addresses, including the port; if this is used, no
other interfaces are enabled.
3. If the local address is not found for any of the sockets, the server
is not started and an error is thrown.
4. Remove `deny_unsafe` from the RPC implementations; instead, this is
an extension that allows different policies for different
interfaces/sockets, so one may enable unsafe methods on the local
interface and only safe methods on the external interface.
      
      So for instance in this PR it's now possible to start up three RPC
      endpoints as follows:
      ```
      $ polkadot --experimental-rpc-endpoint "listen-addr=127.0.0.1:9944,rpc-methods=unsafe" --experimental-rpc-endpoint "listen-addr=0.0.0.0:9945,rpc-methods=safe,rate-limit=100" --experimental-rpc-endpoint "listen-addr=[::1]:9944,optional=true"
      ```
      
      #### Needs to be addressed
      
      ~1. Support binding to a random port if it's fails with the default
      stuff for backward compatible reasons~
      ~2. How to sync that the rpc CLI params and that the rpc-listen-addr
      align, hard to maintain...~
      ~3. Add similar warning prints for exposing unsafe methods on external
      interfaces..~
      ~4. Inline todos + the hacky String conversion from rpc params.~
      
      #### Cons with this PR
      
Manual string parsing is more error-prone than relying on clap....
      
      //cc @jsdw @BulatSaif @PierreBesson @bkchr
      
      
      
      ---------
      
Co-authored-by: Sebastian Kunert <skunert49@gmail.com>