  1. Jan 20, 2025
• Fix `frame-benchmarking-cli` not buildable without rocksdb (#7263) · 2c4cecce
      Benjamin Gallois authored
      ## Description
      
      The `frame-benchmarking-cli` crate has not been buildable without the
      `rocksdb` feature since version 1.17.0.
      
      **Error:**  
      ```rust
      self.database()?.unwrap_or(Database::RocksDb),
                                   ^^^^^^^ variant or associated item not found in `Database`
      ```
      
      This issue is also related to the `rocksdb` feature bleeding (#3793),
      where the `rocksdb` feature was always activated even when compiling
      this crate with `--no-default-features`.
      
      **Fix:**  
      - Resolved the error by choosing `paritydb` as the default database when
      compiled without the `rocksdb` feature.
      - Fixed the issue where the `sc-cli` crate's `rocksdb` feature was
      always active, even compiling `frame-benchmarking-cli` with
      `--no-default-features`.
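The shape of the fix can be sketched roughly like this (hypothetical names, not the crate's actual API): the default database backend is chosen at compile time based on the `rocksdb` feature, so the `RocksDb` variant is never referenced in builds where it does not exist.

```rust
// Hypothetical sketch (not the actual `frame-benchmarking-cli` code): pick
// the default database backend depending on whether the `rocksdb` feature
// was enabled at compile time.
#[derive(Debug, PartialEq)]
pub enum Database {
    ParityDb,
    // This variant only exists when the feature is on, which is what made
    // `Database::RocksDb` fail to resolve in `--no-default-features` builds.
    #[cfg(feature = "rocksdb")]
    RocksDb,
}

pub fn default_database() -> Database {
    #[cfg(feature = "rocksdb")]
    return Database::RocksDb;
    #[cfg(not(feature = "rocksdb"))]
    Database::ParityDb
}
```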
      
      ## Review Notes
      
Fixes the crate so it builds without rocksdb; it is not intended to solve #3793.
      
      ---------
      
      Co-authored-by: command-bot <>
• Apply a few minor fixes found while addressing the fellows PR for weights. (#7098) · 115ff4e9
      Branislav Kontur authored
      This PR addresses a few minor issues found while working on the
      polkadot-fellows PR
      [https://github.com/polkadot-fellows/runtimes/pull/522](https://github.com/polkadot-fellows/runtimes/pull/522):
      - Incorrect generic type for `InboundLaneData` in
      `check_message_lane_weights`.
      - Renaming leftovers: `assigner_on_demand` -> `on_demand`.
• Stabilize `ensure_execute_processes_have_correct_num_threads` test (#7253) · d5d9b127
      Sebastian Kunert authored
      Saw this test flake a few times, last time
      [here](https://github.com/paritytech/polkadot-sdk/actions/runs/12834432188/job/35791830215).
      
We first fetch all processes in the test, then query `/proc/<pid>/stat`
for every one of them. Previously, we would error when the file was not
found; now we tolerate a missing file. Ran 200 times locally without
error; before the fix it would fail a few times, probably depending on
process fluctuation (which I expect to be high on CI runners).
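The tolerance described above can be sketched roughly like this (a hypothetical helper, not the test's actual code): a `NotFound` error when reading `/proc/<pid>/stat` is mapped to `None` instead of failing the test.

```rust
use std::io::ErrorKind;

// Hypothetical sketch: a process may exit between enumerating PIDs and
// reading its stat file, so a missing file is treated as "process gone"
// rather than as a test failure.
fn read_proc_stat(pid: u32) -> std::io::Result<Option<String>> {
    match std::fs::read_to_string(format!("/proc/{pid}/stat")) {
        Ok(contents) => Ok(Some(contents)),
        Err(e) if e.kind() == ErrorKind::NotFound => Ok(None),
        Err(e) => Err(e),
    }
}
```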
  2. Jan 17, 2025
• added new proxy ParaRegistration to Westend (#6995) · f90a785c
      Santi Balaguer authored
      
      This adds a new Proxy type to Westend Runtime called ParaRegistration.
      This is related to:
      https://github.com/polkadot-fellows/runtimes/pull/520.
      
This new proxy allows:
1. Reserving a paraID
2. Registering a parachain
3. Leveraging the Utility pallet
4. Removing the proxy
      
      ---------
      
      Co-authored-by: command-bot <>
Co-authored-by: Dónal Murray <donal.murray@parity.io>
    • thiolliere's avatar
      Make frame crate not use the feature experimental (#7177) · 4b2febe1
      thiolliere authored
We already use it for lots of pallets.

Keeping it gated behind the `experimental` feature means we lose the
information of which pallets were using `experimental` before the
migration to frame crate usage.

We can consider the `polkadot-sdk-frame` crate unstable, but let's not
use the `experimental` feature.
      
      ---------
      
      Co-authored-by: command-bot <>
  3. Jan 16, 2025
• [Staking] Currency <> Fungible migration (#5501) · f5673cf2
      Ankan authored
      Migrate staking currency from `traits::LockableCurrency` to
      `traits::fungible::holds`.
      
      Resolves part of https://github.com/paritytech/polkadot-sdk/issues/226.
      
      ## Changes
      ### Nomination Pool
TransferStake is now incompatible with the fungible migration, as old
pools were not meant to have additional ED. Since it is deprecated
anyway, its usage has been removed from all test runtimes.
      
      ### Staking
      - Config: `Currency` becomes of type `Fungible` while `OldCurrency` is
      the `LockableCurrency` used before.
      - Lazy migration of accounts. Any ledger update will create a new hold
      with no extra reads/writes. A permissionless extrinsic
      `migrate_currency()` releases the old `lock` along with some
      housekeeping.
- Staking now requires the ED to be left free. It also no longer adds a
consumer to staking accounts.
- If the hold cannot be applied to the entire stake, the un-holdable
part is force-withdrawn from the ledger.
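The force-withdrawal rule can be illustrated with a small arithmetic sketch (hypothetical helper with made-up names, not pallet code): any stake that cannot be held while leaving the existential deposit free is withdrawn from the ledger.

```rust
// Hypothetical sketch of the rule above: with holds, at least the
// existential deposit (ED) must stay free, so the holdable stake is capped
// at `total - ED` and the remainder is force-withdrawn.
fn split_stake(total_balance: u128, staked: u128, ed: u128) -> (u128, u128) {
    let holdable = total_balance.saturating_sub(ed).min(staked);
    let force_withdrawn = staked - holdable;
    (holdable, force_withdrawn)
}
```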
      
      ### Delegated Staking
The pallet no longer adds a provider for agents.
      
      ## Migration stats
      ### Polkadot
      Total accounts that can be migrated: 59564
      Accounts failing to migrate: 0
      Accounts with stake force withdrawn greater than ED: 59
      Total force withdrawal: 29591.26 DOT
      
      ### Kusama
      Total accounts that can be migrated: 26311
      Accounts failing to migrate: 0
      Accounts with stake force withdrawn greater than ED: 48
      Total force withdrawal: 1036.05 KSM
      
      
      [Full logs here](https://hackmd.io/@ak0n/BklDuFra0).
      
      ## Note about locks (freeze) vs holds
With locks or freezes, staking could use the total balance of an
account. But with holds, the account needs to be left with at least the
Existential Deposit in free balance. This also affects nomination pools,
which until now have been able to stake all funds contributed to them.
An alternate version of this PR is
https://github.com/paritytech/polkadot-sdk/pull/5658, where the staking
pallet does not add any provider, but that means the pools and
delegated-staking pallets have to provide for these accounts, which
makes the end-to-end logic (of provider and consumer refs) a lot less
intuitive and more bug-prone.

This PR introduces a requirement for stakers to maintain ED in their
free balance. This helps with removing the bug-prone incrementing and
decrementing of consumers and providers.
      
      ## TODO
      - [x] Test: Vesting + governance locked funds can be staked.
      - [ ] can `Call::restore_ledger` be removed? @gpestana 
- [x] Ensure unclaimed withdrawals are not affected by no provider for
pool accounts.
      - [x] Investigate kusama accounts with balance between 0 and ED.
      - [x] Permissionless call to release lock.
      - [x] Migration of consumer (dec) and provider (inc) for direct stakers.
- [x] Force unstake if the hold cannot be applied to all stake.
      - [x] Fix try state checks (it thinks nothing is staked for unmigrated
      ledgers).
      - [x] Bench `migrate_currency`.
      - [x] Virtual Staker migration test.
- [x] Ensure total issuance is up to date when minting rewards.
      
      ## Followup
      - https://github.com/paritytech/polkadot-sdk/issues/5742
      
      ---------
      
      Co-authored-by: command-bot <>
• Implement `pallet-asset-rewards` (#3926) · be2404cc
      Liam Aharon authored
      
      Closes #3149 
      
      ## Description
      
      This PR introduces `pallet-asset-rewards`, which allows accounts to be
      rewarded for freezing `fungible` tokens. The motivation for creating
      this pallet is to allow incentivising LPs.
      
      See the pallet docs for more info about the pallet.
      
      ## Runtime changes
      
      The pallet has been added to
      - `asset-hub-rococo`
      - `asset-hub-westend`
      
The `NativeAndAssets` `fungibles` Union did not contain `PoolAssets`,
so it has been renamed `NativeAndNonPoolAssets`.
      
      A new `fungibles` Union `NativeAndAllAssets` was created to encompass
      all assets and the native token.
      
      ## TODO
      - [x] Emulation tests
      - [x] Fill in Freeze logic (blocked
      https://github.com/paritytech/polkadot-sdk/issues/3342) and re-run
      benchmarks
      
      ---------
      
      Co-authored-by: command-bot <>
Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: muharem <ismailov.m.h@gmail.com>
Co-authored-by: Guillaume Thiolliere <gui.thiolliere@gmail.com>
  4. Jan 15, 2025
  5. Jan 14, 2025
• xcm: convert properly assets in xcmpayment apis (#7134) · 85c244f6
      Carlo Sala authored
      
Ports the #6459 changes to relays as well; they were probably forgotten
in that PR.
      Thanks!
      
      ---------
      
Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com>
Co-authored-by: command-bot <>
• approval-voting: Fix sending of assignments after restart (#6973) · d38bb953
      Alexandru Gheorghe authored
There is a problem on restart where nodes will not trigger their needed
assignment if they were offline while the time of the assignment passed.

That happens because after restart we will hit this condition
https://github.com/paritytech/polkadot-sdk/blob/4e805ca0/polkadot/node/core/approval-voting/src/lib.rs#L2495
and `considered` will be `tick_now`, which is already higher than the
tick of our assignment.

The fix is to schedule a wakeup for untriggered assignments at restart
and let the wakeup-processing logic decide whether it needs to trigger
the assignment or not.

One thing we need to be careful about here is making sure we don't
schedule the wakeup immediately after restart, because the node would
still be behind with all the assignments it should have received and
might wrongfully decide it needs to trigger its assignment, so I
added a `RESTART_WAKEUP_DELAY: Tick = 12` which should be more t...
• Retry approval on availability failure if the check is still needed (#6807) · 6878ba1f
      Alexandru Gheorghe authored
      
Recovering the PoV can fail in situations where the node just restarted
and the DHT topology wasn't fully discovered yet, so the current node
can't connect to most of its peers. This is bad because gossiping the
assignment only requires being connected to a few peers; since we can't
approve the candidate, other nodes will see this as a no-show.

This becomes bad in the scenario where a lot of nodes restart at the
same time, so you end up with a lot of no-shows in the network that are
never covered. In that case it makes sense for nodes to actually retry
approving the candidate at a later point in time, and to retry several
times if the block containing the candidate wasn't approved.
      
      ## TODO
      - [x] Add a subsystem test.
      
      ---------
      
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
• forbid v1 descriptors with UMP signals (#7127) · ddffa027
      Alin Dima authored
  6. Jan 13, 2025
  7. Jan 09, 2025
  8. Jan 07, 2025
  9. Jan 06, 2025
• fix chunk fetching network compatibility zombienet test (#6988) · ffa90d0f
      Alin Dima authored
      Fix this zombienet test
      
It was failing because in
https://github.com/paritytech/polkadot-sdk/pull/6452 I enabled the v2
receipts for testnet genesis, so the collators started sending v2
receipts with zeroed collator signatures to old validators that were
still checking those signatures (which led to disputes, since new
validators considered the candidates valid).

The fix is to also use an old image for collators, so that we don't
create v2 receipts.
      
We cannot remove this test yet because collators also perform chunk
recovery, so until all collators are upgraded, we need to maintain this
compatibility with the old protocol version (which is also why
systematic recovery was not yet enabled).
• chore: delete repeat words (#7034) · 6eca7647
      taozui472 authored
      
Co-authored-by: Dónal Murray <donal.murray@parity.io>
  10. Jan 05, 2025
• Implement cumulus StorageWeightReclaim as wrapping transaction extension +... · 63c73bf6
      thiolliere authored
      Implement cumulus StorageWeightReclaim as wrapping transaction extension + frame system ReclaimWeight (#6140)
      
      (rebasing of https://github.com/paritytech/polkadot-sdk/pull/5234)
      
      ## Issues:
      
* Transaction extensions have weights and refund weight, so the
reclaiming of unused weight must happen last in the transaction
extension pipeline. Currently it happens inside `CheckWeight`.
* The cumulus storage weight reclaim transaction extension misses the
proof size of logic happening prior to itself.

## Done:

* A new storage item `ExtrinsicWeightReclaimed` in frame-system. Any
logic which attempts to do some reclaim must use this storage to avoid
double reclaim.
* A new function `reclaim_weight` in the frame-system pallet: it takes
info and post info as arguments, reads the already-reclaimed weight,
calculates the new unused weight from info and post info, and does the
more accurate reclaim if higher.
* `CheckWeight` is unchanged and still reclaims the weight in post
dispatch.
* `ReclaimWeight` is a new transaction extension in frame system. For
s...
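The double-reclaim guard can be sketched as follows (hypothetical types, not the actual frame-system code): the already-reclaimed amount is tracked, and a later, more accurate computation only reclaims the difference.

```rust
// Hypothetical sketch of the `ExtrinsicWeightReclaimed` bookkeeping: only
// the delta beyond what was already reclaimed is handed back, so running
// the reclaim twice never double-counts.
struct WeightReclaim {
    already_reclaimed: u64,
}

impl WeightReclaim {
    // `unused` is the unused weight computed from info and post info.
    // Returns how much extra weight is reclaimed by this call.
    fn reclaim(&mut self, unused: u64) -> u64 {
        if unused > self.already_reclaimed {
            let extra = unused - self.already_reclaimed;
            self.already_reclaimed = unused;
            extra
        } else {
            0
        }
    }
}
```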
  11. Dec 29, 2024
  12. Dec 27, 2024
  13. Dec 22, 2024
  14. Dec 20, 2024
• Reorder dependencies' keys (#6967) · a843d15e
      Xavier Lau authored
      
      It doesn't make sense to only reorder the features array.
      
      For example:
      
This makes it hard for me to compare the dependencies and features,
especially when some crates have really long dependency lists.
```toml
      [dependencies]
      c = "*"
      a = "*"
      b = "*"
      
      [features]
      std = [
        "a",
        "b",
        "c",
      ]
      ```
      
      This makes my life easier.
```toml
      [dependencies]
      a = "*"
      b = "*"
      c = "*"
      
      [features]
      std = [
        "a",
        "b",
        "c",
      ]
      ```
      
      ---------
      
Co-authored-by: Bastian Köcher <git@kchr.de>
Co-authored-by: command-bot <>
  15. Dec 19, 2024
  16. Dec 18, 2024
  17. Dec 14, 2024
  18. Dec 13, 2024
• Fix approval-voting canonicalize off by one (#6864) · 2dd2bb5a
      Alexandru Gheorghe authored
Approval-voting canonicalize is off by one, which means that if we are
finalizing blocks one by one, approval-voting cleans up only every other
block. For example:

- With blocks 1, 2, 3, 4, 5, 6 created, the stored range would be
StoredBlockRange(1,7)
- When block 3 is finalized, canonicalize works and StoredBlockRange
is (4,7)
- When block 4 is finalized, canonicalize exits early because of the
`if range.0 > canon_number` break clause, so blocks are not cleaned up.
- When block 5 is finalized, canonicalize works, StoredBlockRange
becomes (6,7), and both blocks 4 and 5 are cleaned up.

The consequence of this is that we sometimes keep block entries around
after they are finalized, so at restart we consider these blocks and send
them to approval-distribution.

In most cases this is not a problem, but in the case when finality is
lagging on restart, approval-distribution will receive 4 as being the
oldest block it needs to work on, and since BlockFinalize...
• Collation fetching fairness (#4880) · 5153e2b5
      Tsvetomir Dimitrov authored
      
      Related to https://github.com/paritytech/polkadot-sdk/issues/1797
      
      # The problem
      When fetching collations in collator protocol/validator side we need to
      ensure that each parachain has got a fair core time share depending on
      its assignments in the claim queue. This means that the number of
      collations fetched per parachain should ideally be equal to (but
      definitely not bigger than) the number of claims for the particular
      parachain in the claim queue.
      
      # Why the current implementation is not good enough
The current implementation doesn't guarantee such fairness. For each
relay parent there is a `waiting_queue` (PerRelayParent -> Collations ->
waiting_queue) which holds any unfetched collations advertised to the
validator. The collations are fetched on a first-in, first-out
principle, which means that if two parachains share a core and one of
them is more aggressive, it might starve the other. How?
At each relay parent up to `max_candidate_depth` candidates are accepted
(enforced in `fn is_seconded_limit_reached`), so if one of the parachains
is quick enough to fill the queue with its advertisements, the
validator will never fetch anything from the rest of the parachains
even though they are scheduled. This doesn't mean that the aggressive
parachain will occupy all the core time (this is guaranteed by the
runtime), but it will deny the rest of the parachains sharing the same
core the chance to have collations backed.
      
      # How to fix it
      The solution I am proposing is to limit fetches and advertisements based
      on the state of the claim queue. At each relay parent the claim queue
      for the core assigned to the validator is fetched. For each parachain a
      fetch limit is calculated (equal to the number of entries in the claim
      queue). Advertisements are not fetched for a parachain which has
exceeded its claims in the claim queue. This solves the problem of
aggressive parachains advertising too many collations.
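The per-parachain limit can be sketched like this (hypothetical helper functions; the real logic lives in the collator-protocol validator side): the fetch limit for a parachain equals its number of entries in the claim queue, and fetches beyond that limit are refused.

```rust
use std::collections::HashMap;

// Hypothetical sketch: count claim-queue entries per parachain and refuse
// fetches beyond that count.
fn fetch_limits(claim_queue: &[u32]) -> HashMap<u32, usize> {
    let mut limits = HashMap::new();
    for para in claim_queue {
        *limits.entry(*para).or_insert(0usize) += 1;
    }
    limits
}

fn can_fetch(limits: &HashMap<u32, usize>, fetched: &HashMap<u32, usize>, para: u32) -> bool {
    fetched.get(&para).copied().unwrap_or(0) < limits.get(&para).copied().unwrap_or(0)
}
```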
      
The second part is in the collation fetching logic. The validator will
keep track of which collations it has fetched so far. When a new
collation needs to be fetched, instead of popping the first entry from
the `waiting_queue`, the validator examines the claim queue and looks
for the earliest claim which hasn't got a corresponding fetch. This way
the validator will always try to prioritise the most urgent entries.
      
## How is the 'fair share of coretime' for each parachain determined?
      Thanks to async backing we can accept more than one candidate per relay
      parent (with some constraints). We also have got the claim queue which
      gives us a hint which parachain will be scheduled next on each core. So
      thanks to the claim queue we can determine the maximum number of claims
      per parachain.
      
      For example the claim queue is [A A A] at relay parent X so we know that
      at relay parent X we can accept three candidates for parachain A. There
      are two things to consider though:
1. If we accept more than one candidate at relay parent X, we are
claiming the slot of a future relay parent. So accepting two candidates
for relay parent X means that we are claiming the slot at rp X+1 or rp
X+2.
2. At the same time, the slot at relay parent X could have been claimed
by a previous relay parent (or parents). This means that we need to
accept fewer candidates at X, or even none.
      
      There are a few cases worth considering:
      1. Slot claimed by previous relay parent.
          CQ @ rp X: [A A A]
          Advertisements at X-1 for para A: 2
          Advertisements at X-2 for para A: 2
      Outcome - at rp X we can accept only 1 advertisement since our slots
      were already claimed.
      2. Slot in our claim queue already claimed at future relay parent
          CQ @ rp X: [A A A]
          Advertisements at X+1 for para A: 1
          Advertisements at X+2 for para A: 1
      Outcome: at rp X we can accept only 1 advertisement since the slots in
      our relay parents were already claimed.
      
      The situation becomes more complicated with multiple leaves (forks).
      Imagine we have got a fork at rp X:
      ```
      CQ @ rp X: [A A A]
      (rp X) -> (rp X+1) -> rp(X+2)
               \-> (rp X+1')
      ```
Now when we examine the claim queue at rp X we need to consider both
forks. This means that accepting a candidate at X means we should
have a slot for it in *BOTH* leaves. If, for example, there are three
candidates accepted at rp X+1', we can't accept any candidates at rp X
because there will be no slot for it in one of the leaves.
      
      ## How the claims are counted
There are two solutions for counting the claims at relay parent X:
1. Keep a state for the claim queue (the number of claims and which of
them are claimed) and look it up when accepting a collation. With this
approach we need to keep the state up to date with each new
advertisement and each new leaf update.
2. Calculate the state of the claim queue on the fly, rebuilding it at
each advertisement.
      
      Solution 1 is hard to implement with forks. There are too many variants
      to keep track of (different state for each leaf) and at the same time we
      might never need to use them. So I decided to go with option 2 -
      building claim queue state on the fly.
      
To achieve this I've extended `View` from backing_implicit_view to keep
track of the outer leaves. I've also added a method which accepts a
relay parent and returns all paths from an outer leaf to it. Let's call
it `paths_to_relay_parent`.
      
So how does the counting work for relay parent X? First we examine the
number of seconded and pending advertisements (more on pending in a
second) from relay parent X to relay parent X-N (inclusive), where N is
the length of the claim queue. Then we use `paths_to_relay_parent` to
obtain all paths from the outer leaves to relay parent X. We calculate
the claims at relay parents X+1 to X+N (inclusive) for each leaf and
take the maximum value. This way we guarantee that the candidate at rp X
can be included in each leaf. This is the state of the claim queue,
which we use to decide if we can fetch one more advertisement at rp X
or not.
      
      ## What is a pending advertisement
      I mentioned that we count seconded and pending advertisements at relay
      parent X. A pending advertisement is:
      1. An advertisement which is being fetched right now.
      2. An advertisement pending validation at backing subsystem.
3. An advertisement blocked from seconding by backing because we don't
know one of its parent heads.
      
      Any of these is considered a 'pending fetch' and a slot for it is kept.
      All of them are already tracked in `State`.
      
      ---------
      
Co-authored-by: Maciej <maciej.zyszkiewicz@parity.io>
Co-authored-by: command-bot <>
Co-authored-by: Alin Dima <alin@parity.io>
  19. Dec 12, 2024