  1. Jan 17, 2025
• Make frame crate not use the feature experimental (#7177) · 4b2febe1
      thiolliere authored
We already use it for lots of pallets.
      
Keeping it gated behind the `experimental` feature means we lose the information
of which pallets were using `experimental` before migrating to the frame
crate.
      
We can consider the `polkadot-sdk-frame` crate unstable, but let's not use
the `experimental` feature.
      
      ---------
      
      Co-authored-by: command-bot <>
      4b2febe1
  2. Jan 16, 2025
• [Staking] Currency <> Fungible migration (#5501) · f5673cf2
      Ankan authored
      Migrate staking currency from `traits::LockableCurrency` to
      `traits::fungible::holds`.
      
      Resolves part of https://github.com/paritytech/polkadot-sdk/issues/226.
      
      ## Changes
      ### Nomination Pool
`TransferStake` is now incompatible with the fungible migration, as old pools
were not meant to have additional ED. Since it is deprecated anyway, its usage
has been removed from all test runtimes.
      
      ### Staking
      - Config: `Currency` becomes of type `Fungible` while `OldCurrency` is
      the `LockableCurrency` used before.
- Lazy migration of accounts: any ledger update will create a new hold
with no extra reads/writes. A permissionless extrinsic
`migrate_currency()` releases the old `lock` along with some
housekeeping (see the sketch after this list).
- Staking now requires the ED to be left free. It also no longer adds a
consumer ref to staking accounts.
- If the hold cannot be applied to the full stake, the un-holdable part is
force-withdrawn from the ledger.
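
A minimal sketch of the lazy lock-to-hold migration described above, in plain Rust. This is a conceptual model only, not the pallet code: `Account`, `Ledger`, the `ED` value and the `migrate_currency` helper here are all made up for illustration.

```rust
// Toy model of the `Currency` -> `Fungible` migration: release the legacy lock,
// hold as much of the stake as possible while keeping ED free, and force-withdraw
// the un-holdable remainder from the ledger. All types and values are illustrative.

const ED: u128 = 100; // hypothetical existential deposit

#[derive(Debug)]
struct Account {
    free: u128,     // free balance (the old lock lived "inside" free balance)
    old_lock: u128, // amount locked via the legacy LockableCurrency
    held: u128,     // amount held via fungible holds after migration
}

#[derive(Debug)]
struct Ledger {
    total: u128, // total stake recorded for this staker
}

fn migrate_currency(acc: &mut Account, ledger: &mut Ledger) {
    // 1. Release the legacy lock.
    acc.old_lock = 0;

    // 2. Only `free - ED` can be put on hold: the ED must stay free.
    let holdable = acc.free.saturating_sub(ED);
    let new_hold = ledger.total.min(holdable);

    // 3. The un-holdable part is force-withdrawn from the ledger.
    let force_withdrawn = ledger.total - new_hold;
    ledger.total = new_hold;

    // 4. Move the held amount out of free balance.
    acc.held = new_hold;
    acc.free -= new_hold;

    println!("held {new_hold}, force-withdrew {force_withdrawn}");
}

fn main() {
    // An account whose entire balance was staked under the old lock-based scheme.
    let mut acc = Account { free: 1_000, old_lock: 1_000, held: 0 };
    let mut ledger = Ledger { total: 1_000 };

    migrate_currency(&mut acc, &mut ledger);
    println!("after migration: {acc:?}, {ledger:?}");

    // 900 is held, the ED (100) stays free, and 100 is force-withdrawn from the ledger.
    assert_eq!(ledger.total, 900);
    assert_eq!(acc.free, 100);
}
```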
      
      ### Delegated Staking
The pallet no longer adds a provider ref for agent accounts.
      
      ## Migration stats
      ### Polkadot
      Total accounts that can be migrated: 59564
      Accounts failing to migrate: 0
      Accounts with stake force withdrawn greater than ED: 59
      Total force withdrawal: 29591.26 DOT
      
      ### Kusama
      Total accounts that can be migrated: 26311
      Accounts failing to migrate: 0
      Accounts with stake force withdrawn greater than ED: 48
      Total force withdrawal: 1036.05 KSM
      
      
      [Full logs here](https://hackmd.io/@ak0n/BklDuFra0).
      
      ## Note about locks (freeze) vs holds
With locks or freezes, staking could use the total balance of an account.
But with holds, the account needs to be left with at least the Existential
Deposit in free balance. This also affects nomination pools, which until
now have been able to stake all funds contributed to them. An alternate
version of this PR is https://github.com/paritytech/polkadot-sdk/pull/5658,
where the staking pallet does not add any provider, but that means the pools
and delegated-staking pallets have to provide for these accounts, which makes
the end-to-end logic (of provider and consumer refs) a lot less intuitive and
bug-prone.

This PR instead introduces the requirement for stakers to maintain the ED in
their free balance. This removes the bug-prone incrementing and decrementing
of consumers and providers.
      
      ## TODO
      - [x] Test: Vesting + governance locked funds can be staked.
      - [ ] can `Call::restore_ledger` be removed? @gpestana 
- [x] Ensure unclaimed withdrawals are not affected by the lack of a provider
for pool accounts.
      - [x] Investigate kusama accounts with balance between 0 and ED.
      - [x] Permissionless call to release lock.
      - [x] Migration of consumer (dec) and provider (inc) for direct stakers.
      - [x] force unstake if hold cannot be applied to all stake.
- [x] Fix try-state checks (they think nothing is staked for unmigrated
ledgers).
      - [x] Bench `migrate_currency`.
      - [x] Virtual Staker migration test.
- [x] Ensure total issuance is up to date when minting rewards.
      
      ## Followup
      - https://github.com/paritytech/polkadot-sdk/issues/5742
      
      ---------
      
      Co-authored-by: command-bot <>
      f5673cf2
• Implement `pallet-asset-rewards` (#3926) · be2404cc
      Liam Aharon authored
      
      Closes #3149 
      
      ## Description
      
      This PR introduces `pallet-asset-rewards`, which allows accounts to be
      rewarded for freezing `fungible` tokens. The motivation for creating
      this pallet is to allow incentivising LPs.
      
      See the pallet docs for more info about the pallet.
      
      ## Runtime changes
      
      The pallet has been added to
      - `asset-hub-rococo`
      - `asset-hub-westend`
      
The `NativeAndAssets` `fungibles` union did not contain `PoolAssets`, so
it has been renamed to `NativeAndNonPoolAssets`.
      
      A new `fungibles` Union `NativeAndAllAssets` was created to encompass
      all assets and the native token.
      
      ## TODO
      - [x] Emulation tests
- [x] Fill in Freeze logic (blocked by
https://github.com/paritytech/polkadot-sdk/issues/3342) and re-run
benchmarks
      
      ---------
      
      Co-authored-by: command-bot <>
Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: muharem <ismailov.m.h@gmail.com>
Co-authored-by: Guillaume Thiolliere <gui.thiolliere@gmail.com>
      be2404cc
  3. Jan 15, 2025
  4. Jan 14, 2025
• xcm: convert properly assets in xcmpayment apis (#7134) · 85c244f6
      Carlo Sala authored
      
      Port #6459 changes to relays as well, which were probably forgotten in
      that PR.
      Thanks!
      
      ---------
      
Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com>
      Co-authored-by: command-bot <>
      85c244f6
• approval-voting: Fix sending of assignments after restart (#6973) · d38bb953
      Alexandru Gheorghe authored
There is a problem on restart where nodes will not trigger their needed
assignments if they were offline while the time of the assignment passed.
      
That happens because after restart we will hit this condition:
https://github.com/paritytech/polkadot-sdk/blob/4e805ca0/polkadot/node/core/approval-voting/src/lib.rs#L2495
and the tick considered will be `tick_now`, which is already higher than the tick
of our assignment.
      
The fix is to schedule a wakeup for untriggered assignments at restart
and let the wakeup-processing logic decide whether it needs to trigger
the assignment or not.
      
One thing we need to be careful about here is to make sure we don't
schedule the wakeup immediately after restart, because the node would
still be behind on all the assignments it should have received and
might wrongfully decide it needs to trigger its assignment. So I
added a `RESTART_WAKEUP_DELAY: Tick = 12`, which should be more than
enough time for the node to catch up.
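
A rough sketch of that idea (hypothetical types, not the approval-voting code): at restart, every untriggered assignment gets a wakeup scheduled `RESTART_WAKEUP_DELAY` ticks in the future, and the normal wakeup-processing path then decides whether the assignment still needs to be triggered.

```rust
// Illustrative model only; the types and helper names are made up for this sketch.
type Tick = u64;

const RESTART_WAKEUP_DELAY: Tick = 12;

#[derive(Debug)]
struct Assignment {
    candidate: &'static str,
    triggered: bool,
    tranche_tick: Tick, // tick at which this assignment would normally fire
}

/// Instead of comparing `tranche_tick` against `tick_now` directly at restart
/// (which skips assignments whose tick already passed while the node was offline),
/// schedule a wakeup a bit in the future and let wakeup processing decide.
fn schedule_wakeups_after_restart(
    assignments: &[Assignment],
    tick_now: Tick,
) -> Vec<(Tick, &'static str)> {
    assignments
        .iter()
        .filter(|a| !a.triggered)
        .map(|a| (tick_now + RESTART_WAKEUP_DELAY, a.candidate))
        .collect()
}

fn main() {
    let assignments = vec![
        Assignment { candidate: "A", triggered: false, tranche_tick: 40 },
        Assignment { candidate: "B", triggered: true, tranche_tick: 41 },
    ];
    println!("assignments known at restart: {assignments:?}");

    // We restart at tick 50, after A's tick has already passed.
    let wakeups = schedule_wakeups_after_restart(&assignments, 50);

    // A still gets a wakeup at tick 62 and may be triggered then if still needed.
    assert_eq!(wakeups, vec![(62, "A")]);
}
```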
      
      ---------
      
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
Co-authored-by: ordian <write@reusable.software>
Co-authored-by: Andrei Eres <eresav@me.com>
      d38bb953
• Retry approval on availability failure if the check is still needed (#6807) · 6878ba1f
      Alexandru Gheorghe authored
      
Recovering the POV can fail in situations where the node has just restarted and
the DHT topology hasn't been fully discovered yet, so the current node can't
connect to most of its peers. This is bad because gossiping the
assignment only requires being connected to a few peers, so the assignment
goes out, but we can't approve the candidate and other nodes will see this as
a no-show.

This becomes worse in the scenario where a lot of nodes restart at the same
time, so you end up with a lot of no-shows in the network that are never
covered. In that case it makes sense for nodes to retry approving the
candidate at a later point in time, and to retry several times if the block
containing the candidate wasn't approved.
      
      ## TODO
      - [x] Add a subsystem test.
      
      ---------
      
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
      6878ba1f
• forbid v1 descriptors with UMP signals (#7127) · ddffa027
      Alin Dima authored
      ddffa027
  5. Jan 13, 2025
  6. Jan 09, 2025
  7. Jan 07, 2025
  8. Jan 06, 2025
• fix chunk fetching network compatibility zombienet test (#6988) · ffa90d0f
      Alin Dima authored
      Fix this zombienet test
      
      It was failing because in
      https://github.com/paritytech/polkadot-sdk/pull/6452 I enabled the v2
      receipts for testnet genesis,
      so the collators started sending v2 receipts with zeroed collator
      signatures to old validators that were still checking those signatures
(which led to disputes, since new validators considered the candidates
      valid).
      
      The fix is to also use an old image for collators, so that we don't
      create v2 receipts.
      
      We cannot remove this test yet because collators also perform chunk
      recovery, so until all collators are upgraded, we need to maintain this
      compatibility with the old protocol version (which is also why
      systematic recovery was not yet enabled)
      ffa90d0f
• chore: delete repeat words (#7034) · 6eca7647
      taozui472 authored
      
Co-authored-by: Dónal Murray <donal.murray@parity.io>
      6eca7647
  9. Jan 05, 2025
• Implement cumulus StorageWeightReclaim as wrapping transaction extension +... · 63c73bf6
      thiolliere authored
      Implement cumulus StorageWeightReclaim as wrapping transaction extension + frame system ReclaimWeight (#6140)
      
      (rebasing of https://github.com/paritytech/polkadot-sdk/pull/5234)
      
      ## Issues:
      
      * Transaction extensions have weights and refund weight. So the
      reclaiming of unused weight must happen last in the transaction
      extension pipeline. Currently it is inside `CheckWeight`.
* The cumulus storage weight reclaim transaction extension misses the proof
size of logic happening prior to itself.
      
      ## Done:
      
* A new storage item `ExtrinsicWeightReclaimed` in frame-system. Any logic
which attempts to do some reclaim must use this storage to avoid double
reclaim.
* A new function `reclaim_weight` in the frame-system pallet: it takes the
info and post info as arguments, reads the already-reclaimed weight,
calculates the new unused weight from info and post info, and does the more
accurate reclaim if it is higher (a small model follows after the code
example below).
* `CheckWeight` is unchanged and still reclaims the weight in post
dispatch.
* `ReclaimWeight` is a new transaction extension in frame-system. For
solo chains it must be used last in the transaction extension pipeline.
It does the final, most accurate reclaim.
* `StorageWeightReclaim` is moved from cumulus primitives into its own
pallet (in order to define benchmarks) and is changed into a wrapping
transaction extension.
It records the proof size and does the reclaim using this recording
together with the info and post info. So parachains don't need to use
`ReclaimWeight`; but if they also use it, there is no bug.
      
```rust
/// The TransactionExtension to the basic transaction logic.
pub type TxExtension = cumulus_pallet_weight_reclaim::StorageWeightReclaim<
    Runtime,
    (
        frame_system::CheckNonZeroSender<Runtime>,
        frame_system::CheckSpecVersion<Runtime>,
        frame_system::CheckTxVersion<Runtime>,
        frame_system::CheckGenesis<Runtime>,
        frame_system::CheckEra<Runtime>,
        frame_system::CheckNonce<Runtime>,
        frame_system::CheckWeight<Runtime>,
        pallet_transaction_payment::ChargeTransactionPayment<Runtime>,
        BridgeRejectObsoleteHeadersAndMessages,
        (bridge_to_rococo_config::OnBridgeHubWestendRefundBridgeHubRococoMessages,),
        frame_metadata_hash_extension::CheckMetadataHash<Runtime>,
    ),
>;
```
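
A minimal model of the double-reclaim bookkeeping described above, assuming simplified, made-up types (the real logic lives in frame-system): `reclaim_weight` tracks how much has already been given back and only reclaims the additional difference when a later, more accurate measurement arrives.

```rust
// Toy model of `ExtrinsicWeightReclaimed`-style bookkeeping; all types are illustrative.

#[derive(Clone, Copy, Debug, PartialEq)]
struct Weight(u64);

/// Weight announced before dispatch vs. weight actually used after dispatch.
struct DispatchInfo { announced: Weight }
struct PostDispatchInfo { actual: Weight }

/// Tracks how much has already been reclaimed for the current extrinsic, so that
/// `CheckWeight`, `ReclaimWeight` and a wrapping `StorageWeightReclaim` can all
/// call `reclaim_weight` without reclaiming the same weight twice.
struct System {
    extrinsic_weight_reclaimed: Weight,
    block_weight_used: Weight,
}

impl System {
    fn reclaim_weight(&mut self, info: &DispatchInfo, post: &PostDispatchInfo) {
        // Unused weight according to the latest (most accurate) measurement.
        let unused = info.announced.0.saturating_sub(post.actual.0);
        // Only reclaim what has not been reclaimed yet.
        if unused > self.extrinsic_weight_reclaimed.0 {
            let extra = unused - self.extrinsic_weight_reclaimed.0;
            self.block_weight_used.0 -= extra;
            self.extrinsic_weight_reclaimed = Weight(unused);
        }
    }
}

fn main() {
    let mut system = System {
        extrinsic_weight_reclaimed: Weight(0),
        block_weight_used: Weight(1_000),
    };
    let info = DispatchInfo { announced: Weight(400) };

    // A first reclaim based on the post-dispatch estimate...
    system.reclaim_weight(&info, &PostDispatchInfo { actual: Weight(300) });
    assert_eq!(system.block_weight_used, Weight(900));

    // ...and a later, more accurate measurement (e.g. using the recorded proof size)
    // only reclaims the additional difference, never double-counting.
    system.reclaim_weight(&info, &PostDispatchInfo { actual: Weight(250) });
    assert_eq!(system.block_weight_used, Weight(850));
}
```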
      
      ---------
      
Co-authored-by: GitHub Action <action@github.com>
Co-authored-by: georgepisaltu <52418509+georgepisaltu@users.noreply.github.com>
Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: Sebastian Kunert <skunert49@gmail.com>
      Co-authored-by: command-bot <>
      63c73bf6
  10. Dec 29, 2024
  11. Dec 27, 2024
  12. Dec 22, 2024
  13. Dec 20, 2024
• Reorder dependencies' keys (#6967) · a843d15e
      Xavier Lau authored
      
      It doesn't make sense to only reorder the features array.
      
      For example:
      
This makes it hard for me to compare the dependencies and features,
especially since some crates have a really, really long dependencies list.
```toml
      [dependencies]
      c = "*"
      a = "*"
      b = "*"
      
      [features]
      std = [
        "a",
        "b",
        "c",
      ]
      ```
      
      This makes my life easier.
```toml
      [dependencies]
      a = "*"
      b = "*"
      c = "*"
      
      [features]
      std = [
        "a",
        "b",
        "c",
      ]
      ```
      
      ---------
      
Co-authored-by: Bastian Köcher <git@kchr.de>
      Co-authored-by: command-bot <>
      a843d15e
  14. Dec 19, 2024
  15. Dec 18, 2024
  16. Dec 14, 2024
  17. Dec 13, 2024
• Fix approval-voting canonicalize off by one (#6864) · 2dd2bb5a
      Alexandru Gheorghe authored
      
Approval-voting canonicalize is off by one, which means that if we are
finalizing blocks one by one, approval-voting cleans them up only every other
block. For example:

- With blocks 1, 2, 3, 4, 5, 6 created, the stored range would be
StoredBlockRange(1,7)
- When block 3 is finalized, canonicalize works and StoredBlockRange
is (4,7)
- When block 4 is finalized, canonicalize exits early because of the
`if range.0 > canon_number` break clause, so blocks are not cleaned up.
- When block 5 is finalized, canonicalize works and StoredBlockRange
becomes (6,7), and both blocks 4 and 5 are cleaned up.
      
The consequence of this is that sometimes we keep block entries around
after they are finalized, so at restart we consider these blocks and send
them to approval-distribution.

In most cases this is not a problem, but in the case when finality is
lagging on restart, approval-distribution will receive block 4 as the
oldest block it needs to work on, and since BlockFinalized is never
re-sent for block 4 after restart, it won't get the opportunity to clean
it up. Therefore it will end up running approval-distribution aggression
on block 4, because that is the oldest block it received from
approval-voting for which it did not see a BlockFinalized signal.
      
      ---------
      
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
      2dd2bb5a
• Collation fetching fairness (#4880) · 5153e2b5
      Tsvetomir Dimitrov authored
      
      Related to https://github.com/paritytech/polkadot-sdk/issues/1797
      
      # The problem
      When fetching collations in collator protocol/validator side we need to
      ensure that each parachain has got a fair core time share depending on
      its assignments in the claim queue. This means that the number of
      collations fetched per parachain should ideally be equal to (but
      definitely not bigger than) the number of claims for the particular
      parachain in the claim queue.
      
      # Why the current implementation is not good enough
      The current implementation doesn't guarantee such fairness. For each
      relay parent there is a `waiting_queue` (PerRelayParent -> Collations ->
      waiting_queue) which holds any unfetched collations advertised to the
      validator. The collations are fetched on first in first out principle
      which means that if two parachains share a core and one of the
      parachains is more aggressive it might starve the second parachain. How?
      At each relay parent up to `max_candidate_depth` candidates are accepted
      (enforced in `fn is_seconded_limit_reached`) so if one of the parachains
      is quick enough to fill in the queue with its advertisements the
      validator will never fetch anything from the rest of the parachains
      despite they are scheduled. This doesn't mean that the aggressive
      parachain will occupy all the core time (this is guaranteed by the
      runtime) but it will deny the rest of the parachains sharing the same
      core to have collations backed.
      
      # How to fix it
The solution I am proposing is to limit fetches and advertisements based
on the state of the claim queue. At each relay parent the claim queue
for the core assigned to the validator is fetched. For each parachain a
fetch limit is calculated (equal to the number of its entries in the claim
queue). Advertisements are not fetched for a parachain which has
exceeded its claims in the claim queue. This solves the problem of
aggressive parachains advertising too many collations.

The second part is in the collation fetching logic. The validator will keep
track of which collations it has fetched so far. When a new collation
needs to be fetched, instead of popping the first entry from the
`waiting_queue`, the validator examines the claim queue and looks for the
earliest claim which hasn't got a corresponding fetch. This way the
validator will always try to prioritise the most urgent entries. A simplified
sketch of this limit check follows below.
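
A simplified sketch of the limit check (made-up types, not the collator-protocol code): the fetch limit for a parachain is the number of its entries in the claim queue, and an advertisement is only fetched while its seconded plus pending count stays below that limit.

```rust
// Illustrative model of claim-queue based fetch limits; the real logic lives in
// the collator protocol validator side.
use std::collections::HashMap;

type ParaId = u32;

/// Number of claims for `para` in the claim queue of the assigned core.
fn fetch_limit(claim_queue: &[ParaId], para: ParaId) -> usize {
    claim_queue.iter().filter(|p| **p == para).count()
}

/// An advertisement for `para` is only fetched while the number of already
/// seconded candidates plus pending fetches stays below the claim-queue limit.
fn can_fetch(
    claim_queue: &[ParaId],
    seconded_and_pending: &HashMap<ParaId, usize>,
    para: ParaId,
) -> bool {
    let used = seconded_and_pending.get(&para).copied().unwrap_or(0);
    used < fetch_limit(claim_queue, para)
}

fn main() {
    // Two parachains sharing a core: the claim queue gives para 1 two slots and para 2 one.
    let claim_queue = [1, 2, 1];
    let mut used = HashMap::new();
    used.insert(1, 2); // para 1 already has two collations seconded or pending

    // Para 1 has exhausted its claims, para 2 has not, so the less aggressive
    // parachain still gets its collation fetched.
    assert!(!can_fetch(&claim_queue, &used, 1));
    assert!(can_fetch(&claim_queue, &used, 2));
}
```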
      
## How is the 'fair share of coretime' for each parachain determined?
Thanks to async backing we can accept more than one candidate per relay
parent (with some constraints). We also have the claim queue, which
gives us a hint about which parachain will be scheduled next on each core. So
thanks to the claim queue we can determine the maximum number of claims
per parachain.
      
      For example the claim queue is [A A A] at relay parent X so we know that
      at relay parent X we can accept three candidates for parachain A. There
      are two things to consider though:
      1. If we accept more than one candidate at relay parent X we are
      claiming the slot of a future relay parent. So accepting two candidates
      for relay parent X means that we are claiming the slot at rp X+1 or rp
      X+2.
2. At the same time, the slot at relay parent X could have been claimed
by a previous relay parent (or parents). This means that we need to accept
fewer candidates at X, or even no candidates at all.
      
      There are a few cases worth considering:
      1. Slot claimed by previous relay parent.
          CQ @ rp X: [A A A]
          Advertisements at X-1 for para A: 2
          Advertisements at X-2 for para A: 2
      Outcome - at rp X we can accept only 1 advertisement since our slots
      were already claimed.
      2. Slot in our claim queue already claimed at future relay parent
          CQ @ rp X: [A A A]
          Advertisements at X+1 for para A: 1
          Advertisements at X+2 for para A: 1
      Outcome: at rp X we can accept only 1 advertisement since the slots in
      our relay parents were already claimed.
      
      The situation becomes more complicated with multiple leaves (forks).
      Imagine we have got a fork at rp X:
      ```
      CQ @ rp X: [A A A]
      (rp X) -> (rp X+1) -> rp(X+2)
               \-> (rp X+1')
      ```
      Now when we examine the claim queue at RP X we need to consider both
      forks. This means that accepting a candidate at X means that we should
      have a slot for it in *BOTH* leaves. If for example there are three
      candidates accepted at rp X+1' we can't accept any candidates at rp X
      because there will be no slot for it in one of the leaves.
      
      ## How the claims are counted
      There are two solutions for counting the claims at relay parent X:
      1. Keep a state for the claim queue (number of claims and which of them
      are claimed) and look it up when accepting a collation. With this
      approach we need to keep the state up to date with each new
      advertisement and each new leaf update.
2. Calculate the state of the claim queue on the fly at each
advertisement. This way we rebuild the state of the claim queue for every
advertisement.
      
      Solution 1 is hard to implement with forks. There are too many variants
      to keep track of (different state for each leaf) and at the same time we
      might never need to use them. So I decided to go with option 2 -
      building claim queue state on the fly.
      
To achieve this I've extended `View` from backing_implicit_view to keep
track of the outer leaves. I've also added a method which accepts a
relay parent and returns all paths from an outer leaf to it. Let's call
it `paths_to_relay_parent`.

So how does the counting work for relay parent X? First we examine the
number of seconded and pending advertisements (more on pending in a
second) from relay parent X to relay parent X-N (inclusive), where N is
the length of the claim queue. Then we use `paths_to_relay_parent` to
obtain all paths from the outer leaves to relay parent X. We calculate the
claims at relay parents X+1 to X+N (inclusive) for each leaf and take the
maximum value. This way we guarantee that the candidate at rp X can be
included in each leaf. This is the state of the claim queue which we use
to decide if we can fetch one more advertisement at rp X or not.
      
      ## What is a pending advertisement
      I mentioned that we count seconded and pending advertisements at relay
      parent X. A pending advertisement is:
      1. An advertisement which is being fetched right now.
      2. An advertisement pending validation at backing subsystem.
3. An advertisement blocked for seconding by backing because we don't
know one of its parent heads.
      
      Any of these is considered a 'pending fetch' and a slot for it is kept.
      All of them are already tracked in `State`.
      
      ---------
      
Co-authored-by: Maciej <maciej.zyszkiewicz@parity.io>
      Co-authored-by: command-bot <>
Co-authored-by: Alin Dima <alin@parity.io>
      5153e2b5
  18. Dec 12, 2024
  19. Dec 11, 2024
• Add aliasers to westend chains (#6814) · 48c6574b
      Francisco Aguirre authored
      
`InitiateTransfer`, the new instruction introduced in XCMv5, allows
preserving the origin after a cross-chain transfer via the usage of the
`AliasOrigin` instruction. The receiving chain needs to be configured to
allow this instruction to have its intended effect instead of just
throwing an error.
      
      In this PR, I add the alias rules specified in the [RFC for origin
      preservation](https://github.com/polkadot-fellows/RFCs/blob/main/text/0122-alias-origin-on-asset-transfers.md)
      to westend chains so we can test these scenarios in the testnet.
      
      The new scenarios include:
      - Sending a cross-chain transfer from one system chain to another and
      doing a Transact on the same message (1 hop)
      - Sending a reserve asset transfer from one chain to another going
      through asset hub and doing Transact on the same message (2 hops)
      
      The updated chains are:
      - Relay: added `AliasChildLocation`
      - Collectives: added `AliasChildLocation` and
      `AliasOriginRootUsingFilter<AssetHubLocation, Everything>`
      - People: added `AliasChildLocation` and
      `AliasOriginRootUsingFilter<AssetHubLocation, Everything>`
      - Coretime: added `AliasChildLocation` and
      `AliasOriginRootUsingFilter<AssetHubLocation, Everything>`
      
AssetHub already has `AliasChildLocation` and doesn't need the other
config item.
BridgeHub is not intended to be used by end users, so I didn't add any
config item.
I only added `AliasChildLocation` to the relay since we intend for it to be
used less.
      
      ---------
      
Co-authored-by: GitHub Action <action@github.com>
      Co-authored-by: command-bot <>
      48c6574b
• Make approval-distribution aggression a bit more robust and less spammy (#6696) · 85dd228d
      Alexandru Gheorghe authored
      
After finality started lagging on kusama around `2024-11-25 15:55:40`,
nodes started being overloaded with messages and some restarted with
      nodes started being overloaded with messages and some restarted with
      ```
      Subsystem approval-distribution-subsystem appears unresponsive when sending a message of type polkadot_node_subsystem_types::messages::ApprovalDistributionMessage. origin=polkadot_service::relay_chain_selection::SelectRelayChainInner<sc_client_db::Backend<sp_runtime::generic::block::Block<sp_runtime::generic::header::Header<u32, sp_runtime::traits::BlakeTwo256>, sp_runtime::OpaqueExtrinsic>>, polkadot_overseer::Handle>
      ```
      
I think this happened because our aggression in its current form is way
too spammy and creates problems in situations where we have already
constructed blocks with a load of candidates to check, which is what
happened around `#25933682` before and after. However, aggression does
help in the nightmare scenario where the network is segmented and sparsely
connected, so I tend to think we shouldn't completely remove it.
      
      The current configuration is:
      ```
      l1_threshold: Some(16),
      l2_threshold: Some(28),
      resend_unfinalized_period: Some(8),
      ```
The way aggression works right now:
1. After L1 is triggered, all nodes send to all the other nodes all the
messages they created, as well as all the messages they would already have
sent according to the topology.
2. Because of resend_unfinalized_period, the messages from step 1 are
re-sent for each block every 8 blocks. So, for example, if we have blocks
1 to 24 unfinalized, then at block 25 all messages for blocks 1 and 9 will
be resent, and consequently at block 26 all messages for blocks 2 and 10
will be resent. This becomes worse as more blocks are created if backing
backpressure has not kicked in yet. In total this logic means that each
node receives 3 * total_number_of_messages_per_block.
3. L2 aggression is way too spammy: when L2 aggression is enabled, all
nodes send all messages of a block on GridXY, which means that all
messages are received and sent by each node at least 2*sqrt(num_validators)
times. On kusama that would be 66 * NUM_MESSAGES_AT_FIRST_UNFINALIZED_BLOCK,
so even with a reasonable number of messages like 10K, which you can have
if you escalated because of no-shows, you end up sending and receiving
~660k messages at once. I think that's what makes
approval-distribution appear unresponsive on some nodes (see the rough
arithmetic sketch after this list).
4. Duplicate messages are received by the nodes, which in turn mark the
sending node as banned, which may create more no-shows.
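
A back-of-the-envelope sketch of the L2 amplification from point 3, using assumed numbers (a Kusama-sized validator set of 1089 and the ~10K messages mentioned above):

```rust
// Rough arithmetic for the L2 aggression amplification; all numbers are illustrative.
fn l2_messages(n_validators: u64, messages_at_first_unfinalized: u64) -> u64 {
    // On GridXY every message is sent/received by roughly 2 * sqrt(n) nodes.
    let grid_factor = 2 * (n_validators as f64).sqrt() as u64;
    grid_factor * messages_at_first_unfinalized
}

fn main() {
    // ~Kusama-sized validator set and an escalated block with ~10K messages.
    let per_node = l2_messages(1_089, 10_000);
    assert_eq!(per_node, 660_000); // matches the ~660k figure quoted above
    println!("~{per_node} messages sent/received per node at once");
}
```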
      
      ## Proposed improvements:
1. Make L2 trigger way later, at 28 blocks instead of 64. This should
literally be the last resort; until then we should try to let the
approval-voting escalation mechanism do its thing and cover the
no-shows.
2. On L1 aggression, don't send messages for blocks too far from the
first unfinalized one; there is no point in sending the messages for block
20 if block 1 is still unfinalized.
3. On L1 aggression, send messages and then back off for 3 *
resend_unfinalized_period to give everyone time to clear up their
queues.
4. If aggression is enabled, accept duplicate messages from validators
and don't punish them by reducing their reputation, which may
create no-shows.
      
      ---------
      
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com>
      85dd228d
• Migration of polkadot-runtime-common auctions benchmarking to v2 (#6613) · 9dcdf813
      Ludovic_Domingues authored
      
      # Description
      Migrated polkadot-runtime-common auctions benchmarking to the new
      benchmarking syntax v2.
      This is part of #6202
      
      ---------
      
Co-authored-by: Giuseppe Re <giuseppe.re@parity.io>
Co-authored-by: Bastian Köcher <git@kchr.de>
      9dcdf813
  20. Dec 10, 2024
• XCMv5: Fix for compatibility with V4 (#6503) · fe4846f5
      Ron authored
      ## Description
      
Our smoke test that transfers `WETH` from Sepolia to Westend-AssetHub breaks:
it tries to re-register `WETH` on AH but fails as follows:
      
      
      https://bridgehub-westend.subscan.io/xcm_message/westend-4796d6b3600aca32ef63b9953acf6a456cfd2fbe
      
      https://assethub-westend.subscan.io/extrinsic/9731267-0?event=9731267-2
      
The reason is that in the transact call encoded on BH to register the asset
(https://github.com/paritytech/polkadot-sdk/blob/a77940ba/bridges/snowbridge/primitives/router/src/inbound/mod.rs#L282-L289):
      ```
      0x3500020209079edaa8020300fff9976782d46cc05630d1f6ebab18b2324d6b1400ce796ae65569a670d0c1cc1ac12515a3ce21b5fbf729d63d7b289baad070139d01000000000000000000000000000000
      ```
      
the `asset_id`, which is the XCM location, can't be decoded on AH in V5.
      
The issue was initially reported in
      https://matrix.to/#/!qUtSTcfMJzBdPmpFKa:parity.io/$RNMAxIIOKGtBAqkgwiFuQf4eNaYpmOK-Pfw4d6vv1aU?via=parity.io&via=matrix.org&via=web3.foundation
      
      ---------
      
Co-authored-by: Adrian Catangiu <adrian@parity.io>
Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com>
      fe4846f5