  1. Dec 18, 2024
  2. Dec 17, 2024
    • adding stkd bootnodes (#6912) · 08bfa860
      Frazz authored
      
      # Description
      
      Opening this PR to add our bootnodes for the IBP. These nodes are
      located in Santiago, Chile; we own and manage the underlying hardware. If
      you need any more information, please let me know.
      
      
      ## Integration
      
      ```
      
      docker run --platform=linux/amd64 --rm parity/polkadot-parachain \
        --base-path /tmp/polkadot-data \
        --no-hardware-benchmarks --no-mdns \
        --chain asset-hub-westend \
        --reserved-only \
        --reserved-nodes "/dns/asset-hub-westend-01.bootnode.stkd.io/tcp/30633/wss/p2p/12D3KooWDUPyF2q8b6fVFEuwxBbRV3coAy1kzuCPU3D9TRiLnUfE"
      
      docker run --platform=linux/amd64 --rm parity/polkadot-parachain \
        --base-path /tmp/polkadot-data \
        --no-hardware-benchmarks --no-mdns \
        --chain bridge-hub-westend \
        --reserved-only \
        --reserved-nodes "/dns/bridge-hub-westend-01.bootnode.stkd.io/tcp/30633/wss/p2p/12D3KooWJEfDZxrEKehoPbW2Mfg6rypttMXCMgMiybmapKqcByc1"
      
      docker run --platform=linux/amd64 --rm parity/polkadot-parachain \
        --base-path /tmp/polkadot-data \
        --no-hardware-benchmarks --no-mdns \
        --chain collectives-westend \
        --reserved-only \
        --reserved-nodes "/dns/collectives-westend-01.bootnode.stkd.io/tcp/30633/wss/p2p/12D3KooWFH7UZnWESzuRSgrLvNSfALjtpr9PmG7QGyRNCizWEHcd"
      
      docker run --platform=linux/amd64 --rm parity/polkadot-parachain \
        --base-path /tmp/polkadot-data \
        --no-hardware-benchmarks --no-mdns \
        --chain people-westend \
        --reserved-only \
        --reserved-nodes "/dns/people-westend-01.bootnode.stkd.io/tcp/30633/wss/p2p/12D3KooWJzL4R3kq9Ms88gsV6bS9zGT8DHySdqwau5SHNqTzToNM"
      
      docker run --platform=linux/amd64 --rm parity/polkadot-parachain \
        --base-path /tmp/polkadot-data \
        --no-hardware-benchmarks --no-mdns \
        --chain coretime-westend \
        --reserved-only \
        --reserved-nodes "/dns/coretime-westend-01.bootnode.stkd.io/tcp/30633/wss/p2p/12D3KooWCFNzjaiq45ZpW2qStmQdG5w7ZHrmi3RWUeG8cV2pPc2Y"
      
      
      docker run --platform=linux/amd64 --rm parity/polkadot-parachain \
        --base-path /tmp/polkadot-data \
        --no-hardware-benchmarks --no-mdns \
        --chain asset-hub-kusama \
        --reserved-only \
        --reserved-nodes "/dns/asset-hub-kusama-01.bootnode.stkd.io/tcp/30633/wss/p2p/12D3KooWNCg821LyWDVrAJ2mG6ScDeeBFuDPiJtLYc9jCGNCyMoq"
      
      docker run --platform=linux/amd64 --rm parity/polkadot-parachain \
        --base-path /tmp/polkadot-data \
        --no-hardware-benchmarks --no-mdns \
        --chain bridge-hub-kusama \
        --reserved-only \
        --reserved-nodes "/dns/bridge-hub-kusama-01.bootnode.stkd.io/tcp/30633/wss/p2p/12D3KooWBE1ZhrYqMC3ECFK6qbufS9kgKuF57XpvvZU6LKsPUSnF"
      
      docker run --platform=linux/amd64 --rm parity/polkadot-parachain \
        --base-path /tmp/polkadot-data \
        --no-hardware-benchmarks --no-mdns \
        --chain coretime-kusama \
        --reserved-only \
        --reserved-nodes "/dns/coretime-kusama-01.bootnode.stkd.io/tcp/30633/wss/p2p/12D3KooWMPc6jEjzFLRCK7QgbcNh3gvxCzGvDKhU4F66QWf2kZmq"
      
      docker run --platform=linux/amd64 --rm parity/polkadot-parachain \
        --base-path /tmp/polkadot-data \
        --no-hardware-benchmarks --no-mdns \
        --chain people-kusama \
        --reserved-only \
        --reserved-nodes "/dns/people-kusama-01.bootnode.stkd.io/tcp/30633/wss/p2p/12D3KooWN32MmhPgZN8e1Dmc8DzEUKsfC2hga3Lqekko4VWvrbhq"
      
      
      docker run --platform=linux/amd64 --rm parity/polkadot-parachain \
        --base-path /tmp/polkadot-data \
        --no-hardware-benchmarks --no-mdns \
        --chain bridge-hub-polkadot \
        --reserved-only \
        --reserved-nodes "/dns/bridge-hub-polkadot-01.bootnode.stkd.io/tcp/30633/wss/p2p/12D3KooWSBpo6fYU8CUr4fwA14CKSDUSj5jSgZzQDBNL1B8Dnmaw"
      
      docker run --platform=linux/amd64 --rm parity/polkadot-parachain \
        --base-path /tmp/polkadot-data \
        --no-hardware-benchmarks --no-mdns \
        --chain collectives-polkadot \
        --reserved-only \
        --reserved-nodes "/dns/collectives-polkadot-01.bootnode.stkd.io/tcp/30633/wss/p2p/12D3KooWNscpobBzjPEdjbbjjKRYh9j1whYJvagRJwb9UH68zCPC"
      
      docker run --platform=linux/amd64 --rm parity/polkadot-parachain \
        --base-path /tmp/polkadot-data \
        --no-hardware-benchmarks --no-mdns \
        --chain people-polkadot \
        --reserved-only \
        --reserved-nodes "/dns/people-polkadot-01.bootnode.stkd.io/tcp/30633/wss/p2p/12D3KooWDf2aLDKHQyLkDzdEGs6exNzWWw62s2EK9g1wrujJzRZt"
      
      docker run --platform=linux/amd64 --rm parity/polkadot-parachain \
        --base-path /tmp/polkadot-data \
        --no-hardware-benchmarks --no-mdns \
        --chain coretime-polkadot \
        --reserved-only \
        --reserved-nodes "/dns/coretime-polkadot-01.bootnode.stkd.io/tcp/30633/wss/p2p/12D3KooWFG9WQQTf3MX3YQypZjJtoJM5zCQgJcqYdxxTStsbhZGU"
      
      docker run --platform=linux/amd64 --rm parity/polkadot-parachain \
        --base-path /tmp/polkadot-data \
        --no-hardware-benchmarks --no-mdns \
        --chain asset-hub-polkadot \
        --reserved-only \
        --reserved-nodes "/dns/asset-hub-polkadot-01.bootnode.stkd.io/tcp/30633/wss/p2p/12D3KooWJUhizuk3crSvpyKLGycHBtnP93rwjksVueveU6x6k6RY"
      
      ```
      
      ## Review Notes
      
      None
      
      Co-authored-by: Bastian Köcher <git@kchr.de>
      08bfa860
    • omni-node: Tolerate failing metadata check (#6923) · e6ddd392
      Sebastian Kunert authored
      #6450 introduced metadata checks. Metadata v14 and higher are supported.
      
      However, old chain specs may have a genesis code blob that is on an older
      metadata version. This needs to be tolerated: we should simply skip the
      checks in that case.
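      A minimal sketch of the tolerance logic, assuming a hypothetical helper setup;
      the function names are illustrative stand-ins, not the actual omni-node APIs:
      
      ```
      // Hedged sketch: `run_metadata_checks` and the version extraction are
      // hypothetical, not the real omni-node code.
      fn maybe_run_metadata_checks(metadata_version: Option<u32>) -> Result<(), String> {
          match metadata_version {
              // Metadata v14 and higher is supported, so the checks can run.
              Some(v) if v >= 14 => run_metadata_checks(v),
              // Older (or undetectable) metadata: tolerate it and skip the checks.
              _ => {
                  eprintln!("genesis code metadata older than v14; skipping metadata checks");
                  Ok(())
              }
          }
      }
      
      fn run_metadata_checks(_version: u32) -> Result<(), String> {
          // Placeholder for the checks introduced in #6450.
          Ok(())
      }
      ```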
      
      Fixes #6921
      
      ---------
      
      Co-authored-by: command-bot <>
      e6ddd392
    • Remove unused dependencies from pallet_revive (#6917) · 05589737
      Alexander Theißen authored
      Removing apparently unused dependencies from `pallet_revive` and related
      crates.
      
      ---------
      
      Co-authored-by: command-bot <>
      05589737
    • ci: 5 retries for cargo (#6903) · 31179c40
      Alexander Samusev authored
      cc https://github.com/paritytech/ci_cd/issues/1038
      31179c40
  3. Dec 16, 2024
    • Upgrade nix and reqwest (#6898) · 5b04b459
      Jun Jiang authored
      # Description
      
      Upgrade `nix` and `reqwest` to reduce outdated dependencies and speed up
      compilation.
      5b04b459
    • polkadot-omni-node-lib: remove unused dep (#6889) · adc0178f
      Iulian Barbu authored
      # Description
      
      A redundant dep that made its way in with #6450. :sweat_smile: It can be
      brought up when using `cargo udeps`. Added a GitHub action that runs
      `cargo udeps` on the repo too.
      
      ## Integration
      
      N/A
      
      ## Review Notes
      
      N/A
      
      ---------
      
      Signed-off-by: Iulian Barbu <iulian.barbu@parity.io>
      Co-authored-by: Bastian Köcher <git@kchr.de>
      adc0178f
    • Omni-node: Detect pending code in storage and send go ahead signal in dev-mode. (#6885) · cee63ac0
      Sebastian Kunert authored
      We check if there is a pending validation code in storage. If there is,
      we add the go-ahead signal to the relay chain storage proof.
      
      Not super elegant, but it should get the job done for development.
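      A toy sketch of the idea, with made-up types standing in for the real
      cumulus storage items and mocked relay proof builder:
      
      ```
      // Hedged sketch only: `RelayProofBuilder` and the storage check are
      // illustrative stand-ins, not the actual cumulus/omni-node types.
      struct RelayProofBuilder {
          go_ahead: bool,
      }
      
      impl RelayProofBuilder {
          // In dev mode: if the parachain has a pending validation code in its
          // storage, pretend the relay chain approved the upgrade by including
          // the go-ahead signal in the mocked relay chain storage proof.
          fn maybe_signal_go_ahead(&mut self, pending_validation_code_exists: bool) {
              if pending_validation_code_exists {
                  self.go_ahead = true;
              }
          }
      }
      
      fn main() {
          let mut proof = RelayProofBuilder { go_ahead: false };
          proof.maybe_signal_go_ahead(true);
          assert!(proof.go_ahead);
      }
      ```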
      
      ---------
      
      Co-authored-by: command-bot <>
      cee63ac0
    • Upgrade libp2p from 0.52.4 to 0.54.1 (#6248) · c8812883
      Nazar Mokrynskyi authored
      
      # Description
      
      Fixes https://github.com/paritytech/polkadot-sdk/issues/5996
      
      https://github.com/libp2p/rust-libp2p/releases/tag/libp2p-v0.53.0
      https://github.com/libp2p/rust-libp2p/blob/master/CHANGELOG.md
      
      ## Integration
      
      Nothing special is needed; just note that `yamux_window_size` is no
      longer applicable to libp2p (litep2p seems to still have it, though).
      
      ## Review Notes
      
      There are a few simplifications and improvements in libp2p 0.53
      regarding the swarm interface; I'll list the key applicable ones here.
      
      https://github.com/libp2p/rust-libp2p/pull/4788 removed
      `write_length_prefixed` function, so I inlined its code instead.
      
      https://github.com/libp2p/rust-libp2p/pull/4120 introduced the new
      `libp2p::SwarmBuilder` instead of the now deprecated
      `libp2p::swarm::SwarmBuilder`; the transition is straightforward and
      quite ergonomic (it can be seen in the tests).
      
      https://github.com/libp2p/rust-libp2p/pull/4581 is the most annoying
      change I have seen: it basically makes many enums `#[non_exhaustive]`.
      I mapped some, but those that couldn't be mapped I dealt with by
      printing log messages once they are hit (the best solution I could come
      up with, at least with stable Rust).
      
      https://github.com/libp2p/rust-libp2p/issues/4306 makes a connection
      close as soon as there is no handler using it, so I had to replace
      `KeepAlive::Until` with an explicit future that flips an internal boolean
      after a timeout, achieving the old behavior, though it should ideally be
      removed completely at some point.
      
      `yamux_window_size` is no longer used by libp2p thanks to
      https://github.com/libp2p/rust-libp2p/pull/4970 and generally Yamux
      should have a higher performance now.
      
      I have resolved and cleaned up all deprecations related to libp2p except
      `BandwidthSinks`. Libp2p deprecated it (though it is still present in
      0.54.1, which is why I didn't handle it just yet). Ideally Substrate
      would finally [switch to the official Prometheus
      client](https://github.com/paritytech/substrate/issues/12699), in which
      case we'd get metrics for free. Otherwise a bit of code will need to be
      copy-pasted to maintain current behavior with `BandwidthSinks` gone,
      which I left a TODO about.
      
      The biggest change in 0.54.0 is
      https://github.com/libp2p/rust-libp2p/pull/4568 that changed transport
      APIs and enabled unconditional potential port reuse, which can lead to
      very confusing errors if running two Substrate nodes on the same machine
      without changing listening port explicitly.
      
      Overall nothing scary here, but testing is always appreciated.
      
      # Checklist
      
      * [x] My PR includes a detailed description as outlined in the
      "Description" and its two subsections above.
      * [x] My PR follows the [labeling requirements](https://github.com/paritytech/polkadot-sdk/blob/master/docs/contributor/CONTRIBUTING.md#Process)
      of this project (at minimum one label for `T` required)
      * External contributors: ask maintainers to put the right label on your
      PR.
      
      ---
      
      Polkadot Address: 1vSxzbyz2cJREAuVWjhXUT1ds8vBzoxn2w4asNpusQKwjJd
      
      ---------
      
      Co-authored-by: Dmitry Markin <dmitry@markin.tech>
      c8812883
  4. Dec 15, 2024
    • Fix flaky `build-runtimes-polkavm` CI job (#6893) · 88d255c2
      Alexander Theißen authored
      The timeout was too low, which sometimes made the job not finish in time.
      
      Hence:
      - Bumping the timeout to 60 minutes which is in line with other jobs
      which are building substantial parts of the repo.
      - Roll all the runtime builds into a single cargo invocation so that it
      aborts after the first failure. It also allows for more parallel
      compiling.
      88d255c2
  5. Dec 14, 2024
  6. Dec 13, 2024
    • Add `unstable-api` feature flag to `pallet-revive` (#6866) · ec69b612
      davidk-pt authored
      Follow up refactor to
      https://github.com/paritytech/polkadot-sdk/pull/6844#pullrequestreview-2497225717
      
      I still need to finish adding `#[cfg(feature = "unstable-api")]` to the
      rest of the tests and make sure all tests pass; I want to make sure I'm
      moving in the right direction first.
      
      @athei @xermicus
      
      
      
      ---------
      
      Co-authored-by: DavidK <davidk@parity.io>
      Co-authored-by: Alexander Theißen <alex.theissen@me.com>
      ec69b612
    • Only one ParaId variable in the Parachain Template (#6744) · 482bf082
      Shawn Tabrizi authored
      Many problems can occur when building and testing a parachain that are
      caused by misconfiguring the ParaId.
      
      This can happen when there are 3 different places you need to update!
      
      This PR makes it so a SINGLE location is the source of truth for the
      ParaId.
      482bf082
    • Update merkleized-metadata to 0.2.0 (#6863) · 6d92ded5
      Bastian Köcher authored
      0.1.2 was yanked as it was breaking semver.
      
      ---------
      
      Co-authored-by: command-bot <>
      6d92ded5
    • Fix approval-voting canonicalize off by one (#6864) · 2dd2bb5a
      Alexandru Gheorghe authored
      
      Approval-voting canonicalize is off by one, which means that if we are
      finalizing blocks one by one, approval-voting cleans them up only every
      other block. For example:
      
      - With 1, 2, 3, 4, 5, 6 blocks created, the stored range would be
      StoredBlockRange(1,7)
      - When block 3 is finalized the canonicalize works and StoredBlockRange
      is (4,7)
      - When block 4 is finalized the canonicalize exits early because of the
      `if range.0 > canon_number` break clause, so blocks are not cleaned up.
      - When block 5 is finalized the canonicalize works and StoredBlockRange
      becomes (6,7), and both blocks 4 and 5 are cleaned up.
      
      The consequence of this is that sometimes we keep block entries around
      after they are finalized, so at restart we consider these blocks and send
      them to approval-distribution.
      
      In most cases this is not a problem, but in the case when finality is
      lagging on restart approval-distribution will receive 4 as being the
      oldest block it needs to work on, and since BlockFinalized is never
      resent for block 4 after restart it won't get the opportunity to clean
      that up. Therefore it will end up running approval-distribution aggression
      on block 4, because that is the oldest block it received from
      approval-voting for which it did not see a BlockFinalized signal.
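      A minimal sketch of what canonicalization should do, using a simplified
      stand-in for `StoredBlockRange` (not the real subsystem code):
      
      ```
      // Half-open range: blocks in [start, end) are still stored.
      #[derive(Debug, Clone, Copy, PartialEq)]
      struct StoredBlockRange(u32, u32);
      
      // Finalizing block `n` should always drop every stored block up to and
      // including `n`, i.e. advance the start of the range past `n`.
      fn canonicalize(range: StoredBlockRange, finalized: u32) -> StoredBlockRange {
          StoredBlockRange(range.0.max(finalized + 1), range.1)
      }
      
      fn main() {
          let mut range = StoredBlockRange(1, 7); // blocks 1..=6 created
          range = canonicalize(range, 3);
          assert_eq!(range, StoredBlockRange(4, 7)); // 1..=3 cleaned up
          range = canonicalize(range, 4);
          assert_eq!(range, StoredBlockRange(5, 7)); // 4 cleaned up immediately, no off-by-one
          range = canonicalize(range, 5);
          assert_eq!(range, StoredBlockRange(6, 7));
      }
      ```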
      
      ---------
      
      Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
      2dd2bb5a
    • slot-based-collator: Implement dedicated block import (#6481) · b8da8faa
      Bastian Köcher authored
      
      The `SlotBasedBlockImport`'s job is to collect the storage proofs of all
      blocks getting imported. These storage proofs are forwarded alongside the
      block to the collation task. Right now they are just being thrown away;
      more logic will follow later. Basically, this will be required to include
      multiple blocks in one `PoV`, which will then be done by the collation
      task.
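      A hedged sketch of the shape of this wiring, with simplified stand-ins
      for the block, proof, and channel types (not the real `SlotBasedBlockImport`):
      
      ```
      use std::sync::mpsc;
      
      #[derive(Clone)]
      struct Block;
      #[derive(Clone)]
      struct StorageProof(Vec<Vec<u8>>);
      
      // Wraps an inner import step and forwards each imported block together
      // with the storage proof recorded during import to the collation task.
      struct SlotBasedBlockImport<I> {
          inner: I,
          to_collation_task: mpsc::Sender<(Block, StorageProof)>,
      }
      
      impl<I: FnMut(Block) -> StorageProof> SlotBasedBlockImport<I> {
          fn import_block(&mut self, block: Block) {
              // The inner import executes the block and records a storage proof.
              let proof = (self.inner)(block.clone());
              // Forward block + proof; the collation task can later bundle
              // several blocks into a single PoV.
              let _ = self.to_collation_task.send((block, proof));
          }
      }
      
      fn main() {
          let (tx, rx) = mpsc::channel();
          let mut import = SlotBasedBlockImport {
              inner: |_b: Block| StorageProof(Vec::new()),
              to_collation_task: tx,
          };
          import.import_block(Block);
          assert!(rx.recv().is_ok());
      }
      ```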
      
      ---------
      
      Co-authored-by: Michal Kucharczyk <1728078+michalkucharczyk@users.noreply.github.com>
      Co-authored-by: GitHub Action <action@github.com>
      b8da8faa
    • Expose DHT content providers API from `sc-network` (#6711) · 4b054c60
      Dmitry Markin authored
      
      Expose the Kademlia content providers API for use by `sc-network`
      client code (see the sketch after this list):
      1. Extend the `NetworkDHTProvider` trait with functions to start/stop
      providing content and query the DHT for the list of content providers
      for a given key.
      2. Extend the `DhtEvent` enum with events reporting the found providers
      or query failures.
      3. Implement the above for libp2p & litep2p network backends.
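      A hedged sketch of roughly what such an extension looks like; method and
      event names here are illustrative, not the exact `sc-network` additions:
      
      ```
      type Key = Vec<u8>;
      type PeerId = String; // stand-in for the real peer ID type
      
      trait NetworkDHTProvider {
          /// Start announcing this node as a content provider for `key`.
          fn start_providing(&self, key: Key);
          /// Stop announcing this node as a content provider for `key`.
          fn stop_providing(&self, key: Key);
          /// Ask the DHT for the current providers of `key`; results arrive
          /// asynchronously as `DhtEvent`s.
          fn get_providers(&self, key: Key);
      }
      
      enum DhtEvent {
          /// Providers found for a previously issued `get_providers` query.
          ProvidersFound(Key, Vec<PeerId>),
          /// The providers query failed (e.g. it timed out).
          ProvidersNotFound(Key),
      }
      ```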
      
      ---------
      
      Co-authored-by: GitHub Action <action@github.com>
      Co-authored-by: Alexandru Vasile <60601340+lexnv@users.noreply.github.com>
      4b054c60
    • rpc: re-use server builder per rpc interface (#6652) · e1add3e8
      Niklas Adolfsson authored
      This PR changes things so that the server builder is created once and
      shared/cloned for each connection, to avoid the extra overhead of
      constructing it for each connection (as it was before).
      
      I don't know why I constructed a new builder for each connection; it's
      not needed, but it shouldn't make a big difference to my understanding.
      
      ---------
      
      Co-authored-by: command-bot <>
      e1add3e8
    • [pallet-revive] implement the call data size API (#6857) · 03497895
      Cyrill Leutwiler authored
      
      This PR adds an API method to query the contract call data input size.
      
      Part of #6770
      
      ---------
      
      Signed-off-by: Cyrill Leutwiler <bigcyrill@hotmail.com>
      Co-authored-by: command-bot <>
      Co-authored-by: Alexander Theißen <alex.theissen@me.com>
      03497895
    • Rename PanicInfo to PanicHookInfo (#6865) · 9ce80f68
      Alexander Theißen authored
      Starting with Rust 1.82 `PanicInfo` is deprecated and will throw
      warnings when used. The new type is available since Rust 1.81 and should
      be available on our CI.
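      For reference, a minimal example of the renamed type in a panic hook
      (generic usage, not code from this repo):
      
      ```
      use std::panic;
      
      fn main() {
          // Since Rust 1.81 the hook argument type is `PanicHookInfo`;
          // the old `PanicInfo` alias warns starting with Rust 1.82.
          panic::set_hook(Box::new(|info: &panic::PanicHookInfo| {
              eprintln!("custom panic hook: {info}");
          }));
      }
      ```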
      
      ---------
      
      Co-authored-by: command-bot <>
      9ce80f68
    • Collation fetching fairness (#4880) · 5153e2b5
      Tsvetomir Dimitrov authored
      
      Related to https://github.com/paritytech/polkadot-sdk/issues/1797
      
      # The problem
      When fetching collations on the validator side of the collator protocol
      we need to ensure that each parachain gets a fair share of core time
      depending on its assignments in the claim queue. This means that the
      number of collations fetched per parachain should ideally be equal to
      (but definitely not bigger than) the number of claims for the particular
      parachain in the claim queue.
      
      # Why the current implementation is not good enough
      The current implementation doesn't guarantee such fairness. For each
      relay parent there is a `waiting_queue` (PerRelayParent -> Collations ->
      waiting_queue) which holds any unfetched collations advertised to the
      validator. The collations are fetched on a first-in, first-out principle,
      which means that if two parachains share a core and one of them is more
      aggressive, it might starve the second parachain. How? At each relay
      parent up to `max_candidate_depth` candidates are accepted (enforced in
      `fn is_seconded_limit_reached`), so if one of the parachains is quick
      enough to fill the queue with its advertisements, the validator will
      never fetch anything from the rest of the parachains even though they are
      scheduled. This doesn't mean that the aggressive parachain will occupy
      all the core time (that is guaranteed by the runtime), but it will deny
      the rest of the parachains sharing the same core the chance to have
      collations backed.
      
      # How to fix it
      The solution I am proposing is to limit fetches and advertisements based
      on the state of the claim queue. At each relay parent the claim queue
      for the core assigned to the validator is fetched. For each parachain a
      fetch limit is calculated (equal to the number of entries in the claim
      queue). Advertisements are not fetched for a parachain which has
      exceeded its claims in the claim queue. This solves the problem of
      aggressive parachains advertising too many collations.
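      A toy illustration of the fetch-limit calculation (not the real
      collator-protocol code; para IDs are arbitrary):
      
      ```
      use std::collections::HashMap;
      
      type ParaId = u32;
      
      // The fetch limit per parachain at a relay parent is simply the number
      // of claims that parachain has in the claim queue of our assigned core.
      fn fetch_limits(claim_queue: &[ParaId]) -> HashMap<ParaId, usize> {
          let mut limits = HashMap::new();
          for para in claim_queue {
              *limits.entry(*para).or_insert(0) += 1;
          }
          limits
      }
      
      fn main() {
          // Claim queue [A, A, A] (with A = 1000): up to three collations may
          // be fetched for para A at this relay parent, and none for others.
          let limits = fetch_limits(&[1000, 1000, 1000]);
          assert_eq!(limits.get(&1000), Some(&3));
          assert_eq!(limits.get(&2000), None);
      }
      ```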
      
      The second part is in the collation fetching logic. The validator will
      keep track of which collations it has fetched so far. When a new
      collation needs to be fetched, instead of popping the first entry from
      the `waiting_queue`, the validator examines the claim queue and looks for
      the earliest claim which hasn't got a corresponding fetch. This way the
      validator will always try to prioritise the most urgent entries.
      
      ## How is the 'fair share of coretime' for each parachain determined?
      Thanks to async backing we can accept more than one candidate per relay
      parent (with some constraints). We also have the claim queue, which gives
      us a hint about which parachain will be scheduled next on each core. So
      thanks to the claim queue we can determine the maximum number of claims
      per parachain.
      
      For example, if the claim queue is [A A A] at relay parent X, we know
      that at relay parent X we can accept three candidates for parachain A.
      There are two things to consider though:
      1. If we accept more than one candidate at relay parent X we are
      claiming the slot of a future relay parent. So accepting two candidates
      for relay parent X means that we are claiming the slot at rp X+1 or rp
      X+2.
      2. At the same time the slot at relay parent X could have been claimed
      by one or more previous relay parents. This means that we need to accept
      fewer candidates at X, or even none.
      
      There are a few cases worth considering:
      1. Slot claimed by previous relay parent.
          CQ @ rp X: [A A A]
          Advertisements at X-1 for para A: 2
          Advertisements at X-2 for para A: 2
      Outcome - at rp X we can accept only 1 advertisement since our slots
      were already claimed.
      2. Slot in our claim queue already claimed at future relay parent
          CQ @ rp X: [A A A]
          Advertisements at X+1 for para A: 1
          Advertisements at X+2 for para A: 1
      Outcome: at rp X we can accept only 1 advertisement since the slots in
      our relay parents were already claimed.
      
      The situation becomes more complicated with multiple leaves (forks).
      Imagine we have got a fork at rp X:
      ```
      CQ @ rp X: [A A A]
      (rp X) -> (rp X+1) -> rp(X+2)
               \-> (rp X+1')
      ```
      Now when we examine the claim queue at RP X we need to consider both
      forks. This means that accepting a candidate at X means that we should
      have a slot for it in *BOTH* leaves. If for example there are three
      candidates accepted at rp X+1' we can't accept any candidates at rp X
      because there will be no slot for it in one of the leaves.
      
      ## How the claims are counted
      There are two solutions for counting the claims at relay parent X:
      1. Keep a state for the claim queue (number of claims and which of them
      are claimed) and look it up when accepting a collation. With this
      approach we need to keep the state up to date with each new
      advertisement and each new leaf update.
      2. Calculate the state of the claim queue on the fly at each
      advertisement. This way we rebuild the state of the claim queue at each
      advertisement.
      
      Solution 1 is hard to implement with forks. There are too many variants
      to keep track of (different state for each leaf) and at the same time we
      might never need to use them. So I decided to go with option 2 -
      building claim queue state on the fly.
      
      To achieve this I've extended `View` from backing_implicit_view to keep
      track of the outer leaves. I've also added a method which accepts a
      relay parent and returns all paths from an outer leaf to it. Let's call
      it `paths_to_relay_parent`.
      
      So how does the counting work for relay parent X? First we examine the
      number of seconded and pending advertisements (more on pending in a
      second) from relay parent X to relay parent X-N (inclusive), where N is
      the length of the claim queue. Then we use `paths_to_relay_parent` to
      obtain all paths from outer leaves to relay parent X. We calculate the
      claims at relay parents X+1 to X+N (inclusive) for each leaf and get the
      maximum value. This way we guarantee that the candidate at rp X can be
      included in each leaf. This is the state of the claim queue which we use
      to decide if we can fetch one more advertisement at rp X or not.
      
      ## What is a pending advertisement
      I mentioned that we count seconded and pending advertisements at relay
      parent X. A pending advertisement is:
      1. An advertisement which is being fetched right now.
      2. An advertisement pending validation at backing subsystem.
      3. An advertisement blocked for seconding by backing because we don't
      know one of its parent heads.
      
      Any of these is considered a 'pending fetch' and a slot for it is kept.
      All of them are already tracked in `State`.
      
      ---------
      
      Co-authored-by: Maciej <maciej.zyszkiewicz@parity.io>
      Co-authored-by: command-bot <>
      Co-authored-by: Alin Dima <alin@parity.io>
      5153e2b5
  7. Dec 12, 2024
  8. Dec 11, 2024
    • pallet-revive: Statically verify imports on code deployment (#6759) · f0b5c3e6
      Alexander Theißen authored
      
      Previously, we failed at runtime if an unknown or unstable host function
      was called. This required us to keep track of when a host function was
      added and when a code was deployed. We used the `api_version` to track
      at which API version each code was deployed. This made sure that when a
      new host function was added, old code wouldn't have access to it. This
      is necessary as otherwise the behavior of a contract that made calls to
      a previously non-existent host function would change from "trap" to
      "do something".
      
      In this PR we remove the API version. Instead, we statically verify on
      upload that no non-existent host function is ever used in the code. This
      will allow us to add new host functions later without needing to keep
      track of when they were added.
      
      This simplifies the code and also gives immediate feedback if unknown
      host functions are used.
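      A hedged sketch of the idea behind the static check; the host function
      names and the import extraction are illustrative only, not the actual
      `pallet_revive` code:
      
      ```
      use std::collections::HashSet;
      
      // Illustrative subset of the host functions the runtime exposes.
      fn known_host_functions() -> HashSet<&'static str> {
          ["caller", "value_transferred", "deposit_event"].into_iter().collect()
      }
      
      // Run once at upload time instead of trapping at call time.
      fn verify_imports(imports: &[&str]) -> Result<(), String> {
          let known = known_host_functions();
          for import in imports {
              if !known.contains(*import) {
                  // Immediate feedback at deployment instead of a runtime trap later.
                  return Err(format!("unknown host function imported: {import}"));
              }
          }
          Ok(())
      }
      
      fn main() {
          assert!(verify_imports(&["caller", "deposit_event"]).is_ok());
          assert!(verify_imports(&["caller", "does_not_exist"]).is_err());
      }
      ```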
      
      ---------
      
      Co-authored-by: GitHub Action <action@github.com>
      f0b5c3e6
    • Add aliasers to westend chains (#6814) · 48c6574b
      Francisco Aguirre authored
      
      `InitiateTransfer`, the new instruction introduced in XCMv5, allows
      preserving the origin after a cross-chain transfer via the usage of the
      `AliasOrigin` instruction. The receiving chain needs to be configured to
      allow this instruction to have its intended effect and not just throw an
      error.
      
      In this PR, I add the alias rules specified in the [RFC for origin
      preservation](https://github.com/polkadot-fellows/RFCs/blob/main/text/0122-alias-origin-on-asset-transfers.md)
      to westend chains so we can test these scenarios in the testnet.
      
      The new scenarios include:
      - Sending a cross-chain transfer from one system chain to another and
      doing a Transact on the same message (1 hop)
      - Sending a reserve asset transfer from one chain to another going
      through asset hub and doing Transact on the same message (2 hops)
      
      The updated chains are:
      - Relay: added `AliasChildLocation`
      - Collectives: added `AliasChildLocation` and
      `AliasOriginRootUsingFilter<AssetHubLocation, Everything>`
      - People: added `AliasChildLocation` and
      `AliasOriginRootUsingFilter<AssetHubLocation, Everything>`
      - Coretime: added `AliasChildLocation` and
      `AliasOriginRootUsingFilter<AssetHubLocation, Everything>`
      
      AssetHub already has `AliasChildLocation` and doesn't need the other
      config item.
      BridgeHub is not intended to be used by end users, so I didn't add any
      config item there.
      Only added `AliasChildLocation` to the relay since we intend for it to be
      used less.
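      A toy model of the alias rules described above (not the real xcm-builder
      types): an origin may alias into its own child locations, and the Asset
      Hub origin may alias into anything (the `Everything` filter).
      
      ```
      // Simplified: a location is just a path of junction IDs.
      #[derive(Clone, PartialEq, Debug)]
      struct Location(Vec<u32>);
      
      const ASSET_HUB: &[u32] = &[1000];
      
      fn alias_allowed(origin: &Location, target: &Location) -> bool {
          // AliasChildLocation-like rule: the target is a child of the origin.
          let is_child = target.0.len() > origin.0.len() && target.0.starts_with(&origin.0);
          // AliasOriginRootUsingFilter<AssetHubLocation, Everything>-like rule:
          // the Asset Hub origin may alias into any location passing the filter.
          let origin_is_asset_hub = origin.0 == ASSET_HUB;
          is_child || origin_is_asset_hub
      }
      
      fn main() {
          assert!(alias_allowed(&Location(vec![2000]), &Location(vec![2000, 1])));
          assert!(alias_allowed(&Location(vec![1000]), &Location(vec![3000])));
          assert!(!alias_allowed(&Location(vec![2000]), &Location(vec![3000])));
      }
      ```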
      
      ---------
      
      Co-authored-by: GitHub Action <action@github.com>
      Co-authored-by: command-bot <>
      48c6574b
    • snowbridge: Update alloy-core (#6808) · da2dd9b7
      Alexander Theißen authored
      I am planning to use `alloy_core` to implement precompile support in
      `pallet_revive`. I noticed that it is already used by snowbridge. In
      order to unify the dependencies I did the following:
      
      1. Switch to the `alloy-core` umbrella crate so that we have less
      individual dependencies to update.
      2. Bump to the latest version and fix up the resulting compile errors.
      da2dd9b7
    • Make approval-distribution aggression a bit more robust and less spammy (#6696) · 85dd228d
      Alexandru Gheorghe authored
      
      After finality started lagging on Kusama around `2024-11-25 15:55:40`,
      nodes started being overloaded with messages and some restarted with
      ```
      Subsystem approval-distribution-subsystem appears unresponsive when sending a message of type polkadot_node_subsystem_types::messages::ApprovalDistributionMessage. origin=polkadot_service::relay_chain_selection::SelectRelayChainInner<sc_client_db::Backend<sp_runtime::generic::block::Block<sp_runtime::generic::header::Header<u32, sp_runtime::traits::BlakeTwo256>, sp_runtime::OpaqueExtrinsic>>, polkadot_overseer::Handle>
      ```
      
      I think this happened because our aggression in its current form is way
      too spammy and creates problems in situations where we have already
      constructed blocks with a load of candidates to check, which is what
      happened around `#25933682`, both before and after. However, aggression
      does help in the nightmare scenario where the network is segmented and
      sparsely connected, so I tend to think we shouldn't completely remove it.
      
      The current configuration is:
      ```
      l1_threshold: Some(16),
      l2_threshold: Some(28),
      resend_unfinalized_period: Some(8),
      ```
      The way aggression works right now:
      1. After L1 is triggered, all nodes send all the messages they created to
      all the other nodes, on top of the messages they would already have sent
      according to the topology.
      2. Because of resend_unfinalized_period, the messages from step 1) are
      re-sent for each block every 8 blocks. For example, say we have blocks 1
      to 24 unfinalized: at block 25 all messages for blocks 1 and 9 will be
      resent, and consequently at block 26 all messages for blocks 2 and 10
      will be resent. This becomes worse as more blocks are created if backing
      backpressure did not kick in yet. In total this logic makes each node
      receive 3 * total_number_of_messages_per_block.
      3. L2 aggression is way too spammy: when L2 aggression is enabled, all
      nodes send all messages of a block on GridXY, which means that every
      message is sent and received by a node at least 2*sqrt(num_validators)
      times, so on Kusama that would be 66 * NUM_MESSAGES_AT_FIRST_UNFINALIZED_BLOCK.
      Even with a reasonable number of messages like 10K, which you can have if
      you escalated because of no-shows, you end up sending and receiving ~660k
      messages at once; I think that's what makes approval-distribution appear
      unresponsive on some nodes.
      4. Duplicate messages are received by the nodes, which in turn mark the
      sender as banned, which may create more no-shows.
      
      ## Proposed improvements:
      1. Make L2 trigger way later, 28 blocks instead of 64; this should
      literally be the last resort. Until then we should try to let the
      approval-voting escalation mechanism do its thing and cover the no-shows.
      2. On L1 aggression, don't send messages for blocks too far from the
      first unfinalized one; there is no point in sending the messages for
      block 20 if block 1 is still unfinalized.
      3. On L1 aggression, send messages then back off for 3 *
      resend_unfinalized_period to give everyone time to clear up their queues.
      4. If aggression is enabled, accept duplicate messages from validators
      and don't punish them by reducing their reputation, which may create
      no-shows.
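      A toy sketch of the resend gating described in points 2 and 3
      (illustrative only, with made-up field names and thresholds):
      
      ```
      struct AggressionConfig {
          l1_threshold: u32,
          resend_unfinalized_period: u32,
          max_distance_from_first_unfinalized: u32,
      }
      
      fn should_resend(
          cfg: &AggressionConfig,
          block_number: u32,
          first_unfinalized: u32,
          best_block: u32,
      ) -> bool {
          let finality_lag = best_block.saturating_sub(first_unfinalized);
          // L1 aggression only kicks in once finality lags far enough behind.
          if finality_lag < cfg.l1_threshold {
              return false;
          }
          // Point 2: only resend for blocks close to the first unfinalized one;
          // resending block 20 is pointless while block 1 is still unfinalized.
          if block_number.saturating_sub(first_unfinalized) > cfg.max_distance_from_first_unfinalized {
              return false;
          }
          // Point 3: back off so everyone gets time to drain their queues between resends.
          finality_lag % (3 * cfg.resend_unfinalized_period) == 0
      }
      
      fn main() {
          let cfg = AggressionConfig {
              l1_threshold: 16,
              resend_unfinalized_period: 8,
              max_distance_from_first_unfinalized: 10,
          };
          assert!(!should_resend(&cfg, 1, 1, 10)); // finality not lagging enough yet
          assert!(!should_resend(&cfg, 20, 1, 30)); // block too far from first unfinalized
          assert!(should_resend(&cfg, 2, 1, 25)); // lag 24 == 3 * 8: resend window
      }
      ```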
      
      ---------
      
      Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
      Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com>
      85dd228d
    • Migration of polkadot-runtime-common auctions benchmarking to v2 (#6613) · 9dcdf813
      Ludovic_Domingues authored
      
      # Description
      Migrated polkadot-runtime-common auctions benchmarking to the new
      benchmarking syntax v2.
      This is part of #6202
      
      ---------
      
      Co-authored-by: Giuseppe Re <giuseppe.re@parity.io>
      Co-authored-by: Bastian Köcher <git@kchr.de>
      9dcdf813
    • [pallet-revive] eth-rpc add missing tests (#6728) · 99be9b1e
      PG Herveou authored
      Add tests for #6608 
      
      fix https://github.com/paritytech/contract-issues/issues/12
      
      ---------
      
      Co-authored-by: command-bot <>
      99be9b1e
  9. Dec 10, 2024
    • omni-node: --dev sets manual seal and allows --chain to be set (#6646) · 48c28d4c
      Iulian Barbu authored
      
      # Description
      
      This PR changes a few things:
      * The `--dev` flag will not conflict with `--chain` anymore; if `--chain`
      is not given, it will set `--chain=dev`.
      * `--dev-block-time` is optional and defaults to 3000ms if not set when
      `--dev` is given.
      * To start OmniNode with manual seal it is enough to pass just `--dev`.
      * `--dev-block-time` can still be used to start a node with manual seal,
      but it will not set it up as `--dev` does (it will not set the bunch of
      flags which are enabled by default when `--dev` is set, e.g. `--tmp`,
      `--alice` and `--force-authoring`).
      
      Closes: #6537
      
      ## Integration
      
      Relevant for node/runtime developers that use the OmniNode lib,
      including the `polkadot-omni-node` binary, although the recommended way
      for runtime development is to use `chopsticks`.
      
      ## Review Notes
      
      * Decided to focus only on OmniNode & templates docs in relation to it,
      and leave the `parachain-template-node` as is (meaning `--dev` isn't
      usable and testing a runtime with the `parachain-template-node` still
      needs a relay chain here). I am doing this because I think we want to
      phase out `parachain-template-node` either way, and adding manual seal
      support for it is wasted effort. We might add support though if there is
      demand for `parachain-template-node`.
      * Decided to not infer the block time based on the AURA config yet,
      because there is still the option of setting a specific block time by
      using `--dev-block-time`. Also, we would first want to align & merge the
      runtime metadata checks we added to Omni Node in
      https://github.com/paritytech/polkadot-sdk/pull/6450 before starting to
      infer the AURA slot duration the same way.
      
      - [x] update the docs to mention `--dev` now.
      - [x] mention chopsticks in the context of runtime development
      
      ---------
      
      Signed-off-by: Iulian Barbu <iulian.barbu@parity.io>
      Co-authored-by: Michal Kucharczyk <1728078+michalkucharczyk@users.noreply.github.com>
      48c28d4c
    • XCMv5: Fix for compatibility with V4 (#6503) · fe4846f5
      Ron authored
      ## Description
      
      Our smoke test that transfers `WETH` from Sepolia to Westend-AssetHub
      breaks: it tries to re-register `WETH` on AH but fails as follows:
      
      
      https://bridgehub-westend.subscan.io/xcm_message/westend-4796d6b3600aca32ef63b9953acf6a456cfd2fbe
      
      https://assethub-westend.subscan.io/extrinsic/9731267-0?event=9731267-2
      
      The reason is that in the transact call encoded on BH to register the
      asset
      (https://github.com/paritytech/polkadot-sdk/blob/a77940ba/bridges/snowbridge/primitives/router/src/inbound/mod.rs#L282-L289)
      ```
      0x3500020209079edaa8020300fff9976782d46cc05630d1f6ebab18b2324d6b1400ce796ae65569a670d0c1cc1ac12515a3ce21b5fbf729d63d7b289baad070139d01000000000000000000000000000000
      ```
      
      the `asset_id`, which is the XCM location, can't be decoded on AH in V5.
      
      Issue initial post in
      https://matrix.to/#/!qUtSTcfMJzBdPmpFKa:parity.io/$RNMAxIIOKGtBAqkgwiFuQf4eNaYpmOK-Pfw4d6vv1aU?via=parity.io&via=matrix.org&via=web3.foundation
      
      ---------
      
      Co-authored-by: Adrian Catangiu <adrian@parity.io>
      Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com>
      fe4846f5
    • Fix order of resending messages after restart (#6729) · 65a4e5ee
      Alexandru Gheorghe authored
      
      The way we build the messages we need to send to approval-distribution
      can result in a situation where, if we have multiple assignments covered
      by a coalesced approval, the messages are sent in this order:
      
      ASSIGNMENT1, APPROVAL, ASSIGNMENT2. This happens because we iterate over
      each candidate and add both the assignment and the approval for that
      candidate to the queue of messages, and when the approval reaches the
      approval-distribution subsystem it won't be imported and gossiped,
      because one of the assignments for it is not known yet.
      
      So in a network where a lot of nodes are restarting at the same time we
      could end up in a situation where a set of the nodes correctly received
      the assignments and approvals before the restart and approve their
      blocks and don't trigger their assignments. The other set of nodes
      should receive the assignments and approvals after the restart, but
      because the approvals never get broadcast anymore because of this bug,
      the only way they could approve is if other nodes start broadcasting
      their assignments.
      
      I think this bug contributed to the reason the network did not recover
      on `2024-11-25 15:55:40` after the restarts.
      
      Tested this scenario with a `zombienet` where nodes are finalising
      blocks because of aggression and all nodes are restarted at once, and
      confirmed that the network lags and doesn't recover before the fix and
      does after it.
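      A toy illustration of the reordering idea: queue all assignments before
      the coalesced approval that covers them, so approval-distribution never
      sees an approval whose assignment it doesn't know yet (simplified, not
      the real message types).
      
      ```
      #[derive(Debug, PartialEq)]
      enum Message {
          Assignment(u32),    // candidate index
          Approval(Vec<u32>), // coalesced approval covering several candidates
      }
      
      fn messages_to_resend(candidates: &[u32]) -> Vec<Message> {
          let mut queue = Vec::new();
          // First queue every assignment...
          for c in candidates {
              queue.push(Message::Assignment(*c));
          }
          // ...then the coalesced approval covering them, instead of interleaving
          // ASSIGNMENT1, APPROVAL, ASSIGNMENT2 as before the fix.
          queue.push(Message::Approval(candidates.to_vec()));
          queue
      }
      
      fn main() {
          let queue = messages_to_resend(&[1, 2]);
          assert_eq!(
              queue,
              vec![
                  Message::Assignment(1),
                  Message::Assignment(2),
                  Message::Approval(vec![1, 2]),
              ]
          );
      }
      ```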
      
      ---------
      
      Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
      65a4e5ee
    • polkadot-sdk-docs: Use command_macro! (#6624) · 19bc578e
      Kazunobu Ndong authored
      
      # Description
      
      **Understood assignment:**
      Initial assignment description is in #6194.
      In order to simplify the display of commands and ensure they are tested
      for chain spec builder's `polkadot-sdk` reference docs, find every
      occurrence of `#[docify::export]` where `process::Command` is used, and
      replace the use of `process::Command` with `run_cmd!` from the `cmd_lib`
      crate.
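      For reference, a minimal hedged example of the `cmd_lib` style this
      switches to (the actual commands in the docs differ; assumes the
      `cmd_lib` crate is available as a dependency):
      
      ```
      use cmd_lib::run_cmd;
      
      fn main() -> Result<(), Box<dyn std::error::Error>> {
          // Before: docs built the command via std::process::Command.
          // After: the same command is both displayed and actually executed
          // (and therefore tested) via the `run_cmd!` macro.
          run_cmd!(cargo --version)?;
          Ok(())
      }
      ```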
      
      ---------
      
      Co-authored-by: Iulian Barbu <14218860+iulianbarbu@users.noreply.github.com>
      19bc578e