  1. Dec 21, 2024
• [pallet-broker] add extrinsic to reserve a system core without having to wait... · f9cdf41a
      Dónal Murray authored
      [pallet-broker] add extrinsic to reserve a system core without having to wait two sale boundaries (#4273)
      
      When calling the reserve extrinsic after sales have started, the
      assignment will be reserved, but two sale period boundaries must pass
      before the core is actually assigned.
      
      Since this can take between 28 and 56 days on production networks, a new
      extrinsic is introduced to shorten the timeline.
      
      This essentially performs three actions:
      1. Reserve it (applies after two sale boundaries)
      2. Add it to the Workplan for the next sale period
      3. Add it to the Workplan for the rest of the current sale period from
the next timeslice to be committed.
      
      The caller must ensure that a core is first added, with most relay chain
      implementations having a delay of two session boundaries until it comes
      into effect.
      
Alternatively, the extrinsic can be called on a core whose workload can
be clobbered from now until the reservation kicks in (the sale period
after the next). Any workplan entries for that core at other timeslices
should first be removed by the caller.
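
As a rough, self-contained model of those three steps (type, field and function names below are illustrative only, not the pallet's actual code or storage items):

```
// Illustrative model only; not pallet-broker's actual implementation.
use std::collections::HashMap;

type Timeslice = u32;
type CoreIndex = u16;
type Schedule = Vec<&'static str>; // simplified stand-in for a core's workload

struct Broker {
    reservations: Vec<Schedule>,                          // applied two sale boundaries out
    workplan: HashMap<(Timeslice, CoreIndex), Schedule>,  // timeslice -> assignment
    next_period_start: Timeslice,                         // first timeslice of the next sale period
    last_committed: Timeslice,                            // latest timeslice already committed
}

impl Broker {
    fn force_reserve(&mut self, core: CoreIndex, workload: Schedule) {
        // 1. Reserve it (takes effect only after two sale boundaries).
        self.reservations.push(workload.clone());
        // 2. Add it to the workplan for the start of the next sale period.
        self.workplan.insert((self.next_period_start, core), workload.clone());
        // 3. Cover the rest of the current period, starting from the next
        //    timeslice that has not been committed yet.
        self.workplan.insert((self.last_committed + 1, core), workload);
    }
}
```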
      
      ---------
      
      Co-authored-by: command-bot <>
      f9cdf41a
  2. Dec 20, 2024
  3. Dec 19, 2024
  4. Dec 18, 2024
  5. Dec 17, 2024
• adding stkd bootnodes (#6912) · 08bfa860
      Frazz authored
      
      
      # Description
      
Opening this PR to add our bootnodes for the IBP. These nodes are
located in Santiago, Chile; we own and manage the underlying hardware.
If you need any more information, please let me know.
      
      
      ## Integration
      
      ```
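# Each command below starts a node that connects only to the listed stkd.io
# bootnode (--reserved-only + --reserved-nodes), so it can be used to verify
# that the bootnode is reachable.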
      
      docker run --platform=linux/amd64 --rm parity/polkadot-parachain \
        --base-path /tmp/polkadot-data \
        --no-hardware-benchmarks --no-mdns \
        --chain asset-hub-westend \
        --reserved-only \
        --reserved-nodes "/dns/asset-hub-westend-01.bootnode.stkd.io/tcp/30633/wss/p2p/12D3KooWDUPyF2q8b6fVFEuwxBbRV3coAy1kzuCPU3D9TRiLnUfE"
      
      docker run --platform=linux/amd64 --rm parity/polkadot-parachain \
        --base-path /tmp/polkadot-data \
        --no-hardware-benchmarks --no-mdns \
        --chain bridge-hub-westend \
        --reserved-only \
        --reserved-nodes "/dns/bridge-hub-westend-01.bootnode.stkd.io/tcp/30633/wss/p2p/12D3KooWJEfDZxrEKehoPbW2Mfg6rypttMXCMgMiybmapKqcByc1"
      
      docker run --platform=linux/amd64 --rm parity/polkadot-parachain \
        --base-path /tmp/polkadot-data \
        --no-hardware-benchmarks --no-mdns \
        --chain collectives-westend \
        --reserved-only \
        --reserved-nodes "/dns/collectives-westend-01.bootnode.stkd.io/tcp/30633/wss/p2p/12D3KooWFH7UZnWESzuRSgrLvNSfALjtpr9PmG7QGyRNCizWEHcd"
      
      docker run --platform=linux/amd64 --rm parity/polkadot-parachain \
        --base-path /tmp/polkadot-data \
        --no-hardware-benchmarks --no-mdns \
        --chain people-westend \
        --reserved-only \
        --reserved-nodes "/dns/people-westend-01.bootnode.stkd.io/tcp/30633/wss/p2p/12D3KooWJzL4R3kq9Ms88gsV6bS9zGT8DHySdqwau5SHNqTzToNM"
      
      docker run --platform=linux/amd64 --rm parity/polkadot-parachain \
        --base-path /tmp/polkadot-data \
        --no-hardware-benchmarks --no-mdns \
        --chain coretime-westend \
        --reserved-only \
        --reserved-nodes "/dns/coretime-westend-01.bootnode.stkd.io/tcp/30633/wss/p2p/12D3KooWCFNzjaiq45ZpW2qStmQdG5w7ZHrmi3RWUeG8cV2pPc2Y"
      
      
      docker run --platform=linux/amd64 --rm parity/polkadot-parachain \
        --base-path /tmp/polkadot-data \
        --no-hardware-benchmarks --no-mdns \
        --chain asset-hub-kusama \
        --reserved-only \
        --reserved-nodes "/dns/asset-hub-kusama-01.bootnode.stkd.io/tcp/30633/wss/p2p/12D3KooWNCg821LyWDVrAJ2mG6ScDeeBFuDPiJtLYc9jCGNCyMoq"
      
      docker run --platform=linux/amd64 --rm parity/polkadot-parachain \
        --base-path /tmp/polkadot-data \
        --no-hardware-benchmarks --no-mdns \
        --chain bridge-hub-kusama \
        --reserved-only \
        --reserved-nodes "/dns/bridge-hub-kusama-01.bootnode.stkd.io/tcp/30633/wss/p2p/12D3KooWBE1ZhrYqMC3ECFK6qbufS9kgKuF57XpvvZU6LKsPUSnF"
      
      docker run --platform=linux/amd64 --rm parity/polkadot-parachain \
        --base-path /tmp/polkadot-data \
        --no-hardware-benchmarks --no-mdns \
        --chain coretime-kusama \
        --reserved-only \
        --reserved-nodes "/dns/coretime-kusama-01.bootnode.stkd.io/tcp/30633/wss/p2p/12D3KooWMPc6jEjzFLRCK7QgbcNh3gvxCzGvDKhU4F66QWf2kZmq"
      
      docker run --platform=linux/amd64 --rm parity/polkadot-parachain \
        --base-path /tmp/polkadot-data \
        --no-hardware-benchmarks --no-mdns \
        --chain people-kusama \
        --reserved-only \
        --reserved-nodes "/dns/people-kusama-01.bootnode.stkd.io/tcp/30633/wss/p2p/12D3KooWN32MmhPgZN8e1Dmc8DzEUKsfC2hga3Lqekko4VWvrbhq"
      
      
      docker run --platform=linux/amd64 --rm parity/polkadot-parachain \
        --base-path /tmp/polkadot-data \
        --no-hardware-benchmarks --no-mdns \
        --chain bridge-hub-polkadot \
        --reserved-only \
        --reserved-nodes "/dns/bridge-hub-polkadot-01.bootnode.stkd.io/tcp/30633/wss/p2p/12D3KooWSBpo6fYU8CUr4fwA14CKSDUSj5jSgZzQDBNL1B8Dnmaw"
      
      docker run --platform=linux/amd64 --rm parity/polkadot-parachain \
        --base-path /tmp/polkadot-data \
        --no-hardware-benchmarks --no-mdns \
        --chain collectives-polkadot \
        --reserved-only \
        --reserved-nodes "/dns/collectives-polkadot-01.bootnode.stkd.io/tcp/30633/wss/p2p/12D3KooWNscpobBzjPEdjbbjjKRYh9j1whYJvagRJwb9UH68zCPC"
      
      docker run --platform=linux/amd64 --rm parity/polkadot-parachain \
        --base-path /tmp/polkadot-data \
        --no-hardware-benchmarks --no-mdns \
        --chain people-polkadot \
        --reserved-only \
        --reserved-nodes "/dns/people-polkadot-01.bootnode.stkd.io/tcp/30633/wss/p2p/12D3KooWDf2aLDKHQyLkDzdEGs6exNzWWw62s2EK9g1wrujJzRZt"
      
      docker run --platform=linux/amd64 --rm parity/polkadot-parachain \
        --base-path /tmp/polkadot-data \
        --no-hardware-benchmarks --no-mdns \
        --chain coretime-polkadot \
        --reserved-only \
        --reserved-nodes "/dns/coretime-polkadot-01.bootnode.stkd.io/tcp/30633/wss/p2p/12D3KooWFG9WQQTf3MX3YQypZjJtoJM5zCQgJcqYdxxTStsbhZGU"
      
      docker run --platform=linux/amd64 --rm parity/polkadot-parachain \
        --base-path /tmp/polkadot-data \
        --no-hardware-benchmarks --no-mdns \
        --chain asset-hub-polkadot \
        --reserved-only \
        --reserved-nodes "/dns/asset-hub-polkadot-01.bootnode.stkd.io/tcp/30633/wss/p2p/12D3KooWJUhizuk3crSvpyKLGycHBtnP93rwjksVueveU6x6k6RY"
      
      ```
      
      ## Review Notes
      
      None
      
Co-authored-by: Bastian Köcher <[email protected]>
      08bfa860
• omni-node: Tolerate failing metadata check (#6923) · e6ddd392
      Sebastian Kunert authored
#6450 introduced metadata checks. Metadata v14 and higher is supported.

However, old chain specs of course have a genesis code blob that might
be on an older version. This needs to be tolerated; we should just skip the
checks in that case.
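
A minimal sketch of the tolerant behaviour (hypothetical helper names, not the crate's actual API):

```
// Hypothetical sketch: `metadata_version` is an assumed helper, not the real API.
fn metadata_version(genesis_code: &[u8]) -> Option<u32> {
    // Would decode the runtime metadata and return its version;
    // returns None when only pre-v14 metadata is exposed.
    let _ = genesis_code;
    None
}

fn run_metadata_checks(genesis_code: &[u8]) {
    match metadata_version(genesis_code) {
        // v14+ metadata: run the checks introduced in #6450.
        Some(v) if v >= 14 => println!("metadata v{v}: running checks"),
        // Older or undetectable metadata: tolerate it and just skip the checks.
        _ => eprintln!("warning: skipping metadata check (metadata below v14 or unavailable)"),
    }
}
```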
      
      Fixes #6921
      
      ---------
      
      Co-authored-by: command-bot <>
      e6ddd392
• Remove unused dependencies from pallet_revive (#6917) · 05589737
      Alexander Theißen authored
      Removing apparently unused dependencies from `pallet_revive` and related
      crates.
      
      ---------
      
      Co-authored-by: command-bot <>
      05589737
• ci: 5 retries for cargo (#6903) · 31179c40
      Alexander Samusev authored
      cc https://github.com/paritytech/ci_cd/issues/1038
      31179c40
  6. Dec 16, 2024
• Upgrade nix and reqwest (#6898) · 5b04b459
      Jun Jiang authored
      # Description
      
      Upgrade `nix` and `reqwest` to reduce outdated dependencies and speed up
      compilation.
      5b04b459
• polkadot-omni-node-lib: remove unused dep (#6889) · adc0178f
      Iulian Barbu authored
      # Description
      
Redundant dep that made its way in via #6450. 😅 It can be surfaced by
running `cargo udeps`. Added a GitHub action that runs `cargo udeps` on
the repo too.
      
      ## Integration
      
      N/A
      
      ## Review Notes
      
      N/A
      
      ---------
      
Signed-off-by: Iulian Barbu <[email protected]>
Co-authored-by: Bastian Köcher <[email protected]>
      adc0178f
• Omni-node: Detect pending code in storage and send go ahead signal in dev-mode. (#6885) · cee63ac0
      Sebastian Kunert authored
We check whether there is a pending validation code in storage. If there is,
we add the go-ahead signal to the relay chain storage proof.
      
      Not super elegant, but should get the job done for development.
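
A rough sketch of that idea (hypothetical names and types, not the omni-node's actual code):

```
// Hypothetical sketch; the key and types are stand-ins for the real relay-chain items.
use std::collections::HashMap;

#[allow(dead_code)]
enum UpgradeGoAhead {
    Abort = 0,
    GoAhead = 1,
}

fn maybe_signal_go_ahead(
    pending_code: Option<&[u8]>,
    relay_proof: &mut HashMap<Vec<u8>, Vec<u8>>,
    go_ahead_key: Vec<u8>,
) {
    // If the parachain has a pending validation code, tell it via the mocked
    // relay-chain storage proof that it may enact the upgrade.
    if pending_code.is_some() {
        relay_proof.insert(go_ahead_key, vec![UpgradeGoAhead::GoAhead as u8]);
    }
}
```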
      
      ---------
      
      Co-authored-by: command-bot <>
      cee63ac0
• Upgrade libp2p from 0.52.4 to 0.54.1 (#6248) · c8812883
      Nazar Mokrynskyi authored
      
      
      # Description
      
      Fixes https://github.com/paritytech/polkadot-sdk/issues/5996
      
      https://github.com/libp2p/rust-libp2p/releases/tag/libp2p-v0.53.0
      https://github.com/libp2p/rust-libp2p/blob/master/CHANGELOG.md
      
      ## Integration
      
      Nothing special is needed, just note that `yamux_window_size` is no
      longer applicable to libp2p (litep2p seems to still have it though).
      
      ## Review Notes
      
There are a few simplifications and improvements in libp2p 0.53
regarding the swarm interface; I'll list a few key/applicable ones here.
      
      https://github.com/libp2p/rust-libp2p/pull/4788 removed
      `write_length_prefixed` function, so I inlined its code instead.
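
A sketch of what such an inlined helper can look like (assuming the `futures` and `unsigned-varint` crates; not necessarily the exact code used here):

```
// Writes an unsigned-varint length prefix followed by the payload.
use futures::{AsyncWrite, AsyncWriteExt};
use unsigned_varint::encode;

async fn write_length_prefixed(
    io: &mut (impl AsyncWrite + Unpin),
    data: impl AsRef<[u8]>,
) -> std::io::Result<()> {
    let data = data.as_ref();
    let mut len_buf = encode::usize_buffer();
    io.write_all(encode::usize(data.len(), &mut len_buf)).await?;
    io.write_all(data).await?;
    io.flush().await
}
```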
      
https://github.com/libp2p/rust-libp2p/pull/4120 introduced the new
`libp2p::SwarmBuilder` in place of the now-deprecated
`libp2p::swarm::SwarmBuilder`; the transition is straightforward and
quite ergonomic (as can be seen in the tests).
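
For reference, the shape of the new builder looks roughly like this (a generic sketch assuming libp2p's `tokio`, `tcp`, `noise`, `yamux` and `ping` features; not code from this PR):

```
use libp2p::{noise, ping, tcp, yamux, SwarmBuilder};

fn build_swarm() -> Result<libp2p::Swarm<ping::Behaviour>, Box<dyn std::error::Error>> {
    // New-style builder: identity, executor, transport, then behaviour.
    let swarm = SwarmBuilder::with_new_identity()
        .with_tokio()
        .with_tcp(
            tcp::Config::default(),
            noise::Config::new,
            yamux::Config::default,
        )?
        .with_behaviour(|_key| ping::Behaviour::default())?
        .build();
    Ok(swarm)
}
```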
      
https://github.com/libp2p/rust-libp2p/pull/4581 is the most annoying
change I have seen: it basically makes many enums `#[non_exhaustive]`.
I mapped some, and dealt with those that couldn't be mapped by
printing log messages once they are hit (the best solution I could come
up with, at least with stable Rust).
      
https://github.com/libp2p/rust-libp2p/issues/4306 makes a connection close
as soon as there is no handler using it, so I had to replace
`KeepAlive::Until` with an explicit future that flips an internal boolean
after the timeout, achieving the old behavior, though it should ideally be
removed completely at some point.
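
A minimal sketch of that replacement pattern (assuming the `futures` and `futures-timer` crates; type and field names are illustrative, not the PR's actual code):

```
use std::{task::{Context, Poll}, time::Duration};
use futures::FutureExt;
use futures_timer::Delay;

/// Emulates the old `KeepAlive::Until`: a timer that flips a flag once it fires.
struct IdleTimeout {
    delay: Delay,
    expired: bool,
}

impl IdleTimeout {
    fn new(timeout: Duration) -> Self {
        Self { delay: Delay::new(timeout), expired: false }
    }

    /// Drive this from the handler's `poll()`; `connection_keep_alive()` can
    /// then simply return `!self.expired`.
    fn poll(&mut self, cx: &mut Context<'_>) -> Poll<()> {
        if !self.expired && self.delay.poll_unpin(cx).is_ready() {
            self.expired = true;
        }
        if self.expired { Poll::Ready(()) } else { Poll::Pending }
    }
}
```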
      
      `yamux_window_size` is no longer used by libp2p thanks to
      https://github.com/libp2p/rust-libp2p/pull/4970 and generally Yamux
      should have a higher performance now.
      
      I have resolved and cleaned up all deprecations related to libp2p except
      `BandwidthSinks`. Libp2p deprecated it (though it is still present in
      0.54.1, which is why I didn't handle it just yet). Ideally Substrate
      would finally [switch to the official Prometheus
      client](https://github.com/paritytech/substrate/issues/12699), in which
      case we'd get metrics for free. Otherwise a bit of code will need to be
      copy-pasted to maintain current behavior with `BandwidthSinks` gone,
      which I left a TODO about.
      
The biggest change in 0.54.0 is
https://github.com/libp2p/rust-libp2p/pull/4568, which changed transport
APIs and enabled unconditional potential port reuse; this can lead to
very confusing errors when running two Substrate nodes on the same machine
without changing the listening port explicitly.
      
      Overall nothing scary here, but testing is always appreciated.
      
      # Checklist
      
      * [x] My PR includes a detailed description as outlined in the
      "Description" and its two subsections above.
* [x] My PR follows the [labeling requirements](https://github.com/paritytech/polkadot-sdk/blob/master/docs/contributor/CONTRIBUTING.md#Process) of this project (at minimum one label for `T` required)
      * External contributors: ask maintainers to put the right label on your
      PR.
      
      ---
      
      Polkadot Address: 1vSxzbyz2cJREAuVWjhXUT1ds8vBzoxn2w4asNpusQKwjJd
      
      ---------
      
Co-authored-by: Dmitry Markin <[email protected]>
      c8812883
  7. Dec 15, 2024
• Fix flaky `build-runtimes-polkavm` CI job (#6893) · 88d255c2
      Alexander Theißen authored
The timeout was too low, which sometimes made the job not finish in time.

Hence:
- Bump the timeout to 60 minutes, which is in line with other jobs
that build substantial parts of the repo.
- Roll all the runtime builds into a single cargo invocation so that it
aborts after the first failure. This also allows for more parallel
compilation.
      88d255c2
  8. Dec 14, 2024
  9. Dec 13, 2024
• Add `unstable-api` feature flag to `pallet-revive` (#6866) · ec69b612
      davidk-pt authored
Follow-up refactor to
      https://github.com/paritytech/polkadot-sdk/pull/6844#pullrequestreview-2497225717
      
I still need to finish adding `#[cfg(feature = "unstable-api")]` to the
rest of the tests and make sure all tests pass; I want to make sure I'm
moving in the right direction first.
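
For illustration, the gating pattern looks like this (a generic example, not a specific test from this PR):

```
// Only compiled (and run) when the crate is built with `--features unstable-api`.
#[cfg(feature = "unstable-api")]
#[test]
fn unstable_host_function_works() {
    // exercise an API that is gated behind the `unstable-api` feature
}
```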
      
      @athei @xermicus
      
      
      
      ---------
      
Co-authored-by: DavidK <[email protected]>
Co-authored-by: Alexander Theißen <[email protected]>
      ec69b612
• Only one ParaId variable in the Parachain Template (#6744) · 482bf082
      Shawn Tabrizi authored
Many problems can occur when building and testing a parachain because of
a misconfigured ParaId.

This can happen when there are 3 different places you need to update!

This PR makes a SINGLE location the source of truth for the ParaId.
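
A minimal sketch of the idea (hypothetical names, not the template's actual items): one constant acts as the single source of truth and everything else reads it.

```
/// Hypothetical single source of truth for the template's ParaId.
pub const PARACHAIN_ID: u32 = 1000;

/// The chain spec, runtime configuration and test setup would all call this
/// instead of hard-coding their own copies of the id.
pub fn para_id() -> u32 {
    PARACHAIN_ID
}
```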
      482bf082
• Update merkleized-metadata to 0.2.0 (#6863) · 6d92ded5
      Bastian Köcher authored
      0.1.2 was yanked as it was breaking semver.
      
      ---------
      
      Co-authored-by: command-bot <>
      6d92ded5
• Fix approval-voting canonicalize off by one (#6864) · 2dd2bb5a
      Alexandru Gheorghe authored
      
      
The approval-voting canonicalize is off by one, which means that if we are
finalizing blocks one by one, approval-voting only cleans up every other
block. For example:
      
      - With 1, 2, 3, 4, 5, 6 blocks created, the stored range would be
      StoredBlockRange(1,7)
      - When block 3 is finalized the canonicalize works and StoredBlockRange
      is (4,7)
- When block 4 is finalized the canonicalize exits early because of the
`if range.0 > canon_number` break clause, so blocks are not cleaned up.
- When block 5 is finalized the canonicalize works, StoredBlockRange
becomes (6,7) and both blocks 4 and 5 are cleaned up.
      
The consequence of this is that we sometimes keep block entries around
after they are finalized, so at restart we consider these blocks and send
them to approval-distribution.

In most cases this is not a problem, but when finality is lagging at
restart, approval-distribution will receive block 4 as the oldest block it
needs to work on, and since BlockFinalized is never resent for block 4
after the restart, it won't get the opportunity to clean it up. Therefore
it will end up running approval-distribution aggression on block 4, because
that is the oldest block it received from approval-voting for which it did
not see a BlockFinalized signal.
      
      ---------
      
Signed-off-by: Alexandru Gheorghe <[email protected]>
      2dd2bb5a