  1. Jan 13, 2025
  2. Jan 09, 2025
  3. Jan 07, 2025
  4. Jan 06, 2025
    • fix chunk fetching network compatibility zombienet test (#6988) · ffa90d0f
      Alin Dima authored
      Fix this zombienet test
      
      It was failing because in
      https://github.com/paritytech/polkadot-sdk/pull/6452 I enabled the v2
      receipts for testnet genesis, so the collators started sending v2
      receipts with zeroed collator signatures to old validators that were
      still checking those signatures (which led to disputes, since the new
      validators considered the candidates valid).
      
      The fix is to also use an old image for collators, so that we don't
      create v2 receipts.
      
      We cannot remove this test yet because collators also perform chunk
      recovery, so until all collators are upgraded, we need to maintain this
      compatibility with the old protocol version (which is also why
      systematic recovery has not been enabled yet).
    • chore: delete repeat words (#7034) · 6eca7647
      taozui472 authored
      
      Co-authored-by: Dónal Murray <donal.murray@parity.io>
  5. Jan 05, 2025
    • Implement cumulus StorageWeightReclaim as wrapping transaction extension +... · 63c73bf6
      thiolliere authored
      Implement cumulus StorageWeightReclaim as wrapping transaction extension + frame system ReclaimWeight (#6140)
      
      (rebasing of https://github.com/paritytech/polkadot-sdk/pull/5234)
      
      ## Issues:
      
      * Transaction extensions have weights and refund weight, so the
      reclaiming of unused weight must happen last in the transaction
      extension pipeline. Currently it is inside `CheckWeight`.
      * The cumulus storage weight reclaim transaction extension misses the
      proof size of logic that happens before it runs.
      
      ## Done:
      
      * A new storage item `ExtrinsicWeightReclaimed` in frame-system. Any logic
      that attempts to reclaim weight must use this storage to avoid double
      reclaim.
      * A new function `reclaim_weight` in the frame-system pallet: it takes the
      info and post info as arguments, reads the already reclaimed weight,
      calculates the new unused weight from the info and post info, and performs
      the more accurate reclaim if it is higher.
      * `CheckWeight` is unchanged and still reclaims the weight in post
      dispatch.
      * `ReclaimWeight` is a new transaction extension in frame-system. For
      solo chains it must be used last in the transaction extension pipeline.
      It does the final, most accurate reclaim.
      * `StorageWeightReclaim` is moved from cumulus primitives into its own
      pallet (in order to define benchmarks) and is changed into a wrapping
      transaction extension.
      It records the proof size and does the reclaim using this recording
      together with the info and post info. So parachains don't need to use
      `ReclaimWeight`; but if they do use it, there is no bug.
      
      ```rust
      /// The TransactionExtension to the basic transaction logic.
      pub type TxExtension = cumulus_pallet_weight_reclaim::StorageWeightReclaim<
          Runtime,
          (
              frame_system::CheckNonZeroSender<Runtime>,
              frame_system::CheckSpecVersion<Runtime>,
              frame_system::CheckTxVersion<Runtime>,
              frame_system::CheckGenesis<Runtime>,
              frame_system::CheckEra<Runtime>,
              frame_system::CheckNonce<Runtime>,
              frame_system::CheckWeight<Runtime>,
              pallet_transaction_payment::ChargeTransactionPayment<Runtime>,
              BridgeRejectObsoleteHeadersAndMessages,
              (bridge_to_rococo_config::OnBridgeHubWestendRefundBridgeHubRococoMessages,),
              frame_metadata_hash_extension::CheckMetadataHash<Runtime>,
          ),
      >;
      ```
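      To illustrate the rule described above, here is a minimal sketch (plain
      integers standing in for `Weight`, not the actual frame-system code) of how
      `reclaim_weight` avoids double reclaim by tracking what was already
      reclaimed and only reclaiming the difference when a more accurate, higher
      unused weight becomes known:
      
      ```rust
      /// Minimal sketch: the real implementation lives in frame-system and works
      /// on `DispatchInfo`/`PostDispatchInfo` instead of bare integers.
      fn reclaim_weight(
          announced_weight: u64,        // weight announced in the dispatch info
          actual_weight: u64,           // more accurate weight from the post info
          already_reclaimed: &mut u64,  // mirrors the `ExtrinsicWeightReclaimed` storage
          block_weight: &mut u64,       // consumed block weight to give back to
      ) {
          let unused = announced_weight.saturating_sub(actual_weight);
          // Only reclaim the part that has not been reclaimed yet (no double reclaim).
          if unused > *already_reclaimed {
              let extra = unused - *already_reclaimed;
              *block_weight = block_weight.saturating_sub(extra);
              *already_reclaimed = unused;
          }
      }
      
      fn main() {
          let mut reclaimed = 0;
          let mut block_weight = 1_000;
          // `CheckWeight` reclaims first with a rough post-dispatch weight...
          reclaim_weight(300, 250, &mut reclaimed, &mut block_weight);
          // ...then the final extension reclaims again with a more accurate
          // measurement; only the extra 30 is given back, not 80 again.
          reclaim_weight(300, 220, &mut reclaimed, &mut block_weight);
          assert_eq!(reclaimed, 80);
          assert_eq!(block_weight, 920);
      }
      ```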
      
      ---------
      
      Co-authored-by: GitHub Action <action@github.com>
      Co-authored-by: georgepisaltu <52418509+georgepisaltu@users.noreply.github.com>
      Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
      Co-authored-by: Sebastian Kunert <skunert49@gmail.com>
      Co-authored-by: command-bot <>
  6. Dec 29, 2024
  7. Dec 27, 2024
  8. Dec 22, 2024
  9. Dec 20, 2024
    • Reorder dependencies' keys (#6967) · a843d15e
      Xavier Lau authored
      
      It doesn't make sense to only reorder the features array.
      
      For example, this makes it hard for me to compare the dependencies and
      features, especially when some crates have really, really long
      dependency lists:
      ```toml
      [dependencies]
      c = "*"
      a = "*"
      b = "*"
      
      [features]
      std = [
        "a",
        "b",
        "c",
      ]
      ```
      
      This makes my life easier.
      ```toml
      [dependencies]
      a = "*"
      b = "*"
      c = "*"
      
      [features]
      std = [
        "a",
        "b",
        "c",
      ]
      ```
      
      ---------
      
      Co-authored-by: Bastian Köcher <git@kchr.de>
      Co-authored-by: command-bot <>
  10. Dec 19, 2024
  11. Dec 18, 2024
  12. Dec 14, 2024
  13. Dec 13, 2024
    • Fix approval-voting canonicalize off by one (#6864) · 2dd2bb5a
      Alexandru Gheorghe authored
      
      Approval-voting canonicalize is off by one, which means that if we are
      finalizing blocks one by one, approval-voting cleans up only every other
      block. For example:
      
      - With blocks 1, 2, 3, 4, 5, 6 created, the stored range would be
      StoredBlockRange(1,7)
      - When block 3 is finalized the canonicalize works and StoredBlockRange
      is (4,7)
      - When block 4 is finalized the canonicalize exits early because of the
      `if range.0 > canon_number` break clause, so blocks are not cleaned up.
      - When block 5 is finalized the canonicalize works and StoredBlockRange
      becomes (6,7) and both blocks 4 and 5 are cleaned up.
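      A minimal sketch (assumed types, not the real approval-voting code) of the
      invariant the fix restores: finalizing block N must clean up every stored
      entry up to and including N and advance the stored range right past it:
      
      ```rust
      #[derive(Debug, PartialEq)]
      struct StoredBlockRange(u32, u32); // half-open: blocks range.0 .. range.1 are stored
      
      fn canonicalize(range: &mut StoredBlockRange, canon_number: u32, stored_blocks: &mut Vec<u32>) {
          // Drop every block entry at or below the finalized height.
          stored_blocks.retain(|&block| block > canon_number);
          // Advance the start of the range, but never past its end.
          range.0 = (canon_number + 1).clamp(range.0, range.1);
      }
      
      fn main() {
          let mut range = StoredBlockRange(4, 7);
          let mut stored = vec![4, 5, 6];
          // Finalizing block 4 must clean it up immediately instead of waiting for block 5.
          canonicalize(&mut range, 4, &mut stored);
          assert_eq!(range, StoredBlockRange(5, 7));
          assert_eq!(stored, vec![5, 6]);
      }
      ```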
      
      The consequence of this is that sometimes we keep block entries around
      after they are finalized, so at restart we consider these blocks and send
      them to approval-distribution.
      
      In most cases this is not a problem, but in the case when finality is
      lagging on restart, approval-distribution will receive 4 as the
      oldest block it needs to work on, and since BlockFinalized is never
      resent for block 4 after restart it won't get the opportunity to clean
      that up. Therefore it will end up running approval-distribution aggression
      on block 4, because that is the oldest block it received from
      approval-voting for which it did not see a BlockFinalized signal.
      
      ---------
      
      Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
    • Collation fetching fairness (#4880) · 5153e2b5
      Tsvetomir Dimitrov authored
      Related to https://github.com/paritytech/polkadot-sdk/issues/1797
      
      # The problem
      When fetching collations in collator protocol/validator side we need to
      ensure that each parachain has got a fair core time share depending on
      its assignments in the claim queue. This means that the number of
      collations fetched per parachain should ideally be equal to (but
      definitely not bigger than) the number of claims for the particular
      parachain in the claim queue.
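      As a rough illustration of this rule (a hypothetical helper, not the
      collator-protocol code), a fetch for a parachain would only be allowed
      while its number of already fetched collations stays below its number of
      claims in the claim queue:
      
      ```rust
      use std::collections::HashMap;
      
      type ParaId = u32;
      
      /// Hypothetical check: allow fetching another collation for `para` only if
      /// it still has unused claims in the claim queue.
      fn can_fetch(claim_queue: &[ParaId], fetched: &HashMap<ParaId, usize>, para: ParaId) -> bool {
          let claims = claim_queue.iter().filter(|&&p| p == para).count();
          let already_fetched = fetched.get(&para).copied().unwrap_or(0);
          already_fetched < claims
      }
      
      fn main() {
          // Two parachains share a core: para 1 has two claims, para 2 has one.
          let claim_queue = vec![1, 2, 1];
          let fetched = HashMap::from([(1, 2)]);
          assert!(!can_fetch(&claim_queue, &fetched, 1)); // para 1 used up its claims
          assert!(can_fetch(&claim_queue, &fetched, 2));  // para 2 still has one left
      }
      ```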
      
      # Why the current implementation is not good enough
      The current implementation doesn't guarantee such fairness. For each
      relay parent there is a `waiting_queue` (PerRelayParent -> Collations ->
      waiting_queue) which holds any unfetched collations advertised to the
      validator. The collations are fetched on a first-in, first-out basis,
      which means that if two parachains share a core and one of the
      parachains is more aggressive it might starve the second parachain. How?
      At each relay parent up to `max_candidate_depth` candidates ...
  14. Dec 12, 2024
  15. Dec 11, 2024
    • Add aliasers to westend chains (#6814) · 48c6574b
      Francisco Aguirre authored
      
      `InitiateTransfer`, the new instruction introduced in XCMv5, allows
      preserving the origin after a cross-chain transfer via the usage of the
      `AliasOrigin` instruction. The receiving chain needs to be configured to
      allow this instruction to have its intended effect and not just
      throw an error.
      
      In this PR, I add the alias rules specified in the [RFC for origin
      preservation](https://github.com/polkadot-fellows/RFCs/blob/main/text/0122-alias-origin-on-asset-transfers.md)
      to westend chains so we can test these scenarios in the testnet.
      
      The new scenarios include:
      - Sending a cross-chain transfer from one system chain to another and
      doing a Transact on the same message (1 hop)
      - Sending a reserve asset transfer from one chain to another going
      through asset hub and doing Transact on the same message (2 hops)
      
      The updated chains are:
      - Relay: added `AliasChildLocation`
      - Collectives: added `AliasChildLocation` and
      `AliasOriginRootUsingFilter<AssetHubLocation, Everything>`
      - People: added `AliasChildLocation` and
      `AliasOriginRootUsingFilter<AssetHubLocation, Everything>`
      - Coretime: added `AliasChildLocation` and
      `AliasOriginRootUsingFilter<AssetHubLocation, Everything>`
      
      AssetHub already has `AliasChildLocation` and doesn't need the other
      config item.
      BridgeHub is not intended to be used by end users, so I didn't add any
      config item there.
      I only added `AliasChildLocation` to the relay since we intend for it to
      be used less.
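      For reference, a hedged sketch of what the added configuration looks like;
      the `AssetHubLocation` value here is an assumption for illustration, while
      the two aliaser types are the ones named above from `xcm_builder`:
      
      ```rust
      use frame_support::{parameter_types, traits::Everything};
      use xcm::latest::prelude::*;
      use xcm_builder::{AliasChildLocation, AliasOriginRootUsingFilter};
      
      parameter_types! {
          // Assumed sibling Asset Hub location (para id 1000), for illustration only.
          pub AssetHubLocation: Location = Location::new(1, [Parachain(1000)]);
      }
      
      // Allow any child location to alias into the chain, plus Asset Hub's root
      // origin (filtered by `Everything`), as described for Collectives, People
      // and Coretime above.
      pub type Aliasers = (
          AliasChildLocation,
          AliasOriginRootUsingFilter<AssetHubLocation, Everything>,
      );
      ```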
      
      ---------
      
      Co-authored-by: GitHub Action <action@github.com>
      Co-authored-by: command-bot <>
    • Make approval-distribution aggression a bit more robust and less spammy (#6696) · 85dd228d
      Alexandru Gheorghe authored
      
      After finality started lagging on kusama around `2024-11-25 15:55:40`,
      nodes started being overloaded with messages and some restarted with
      ```
      Subsystem approval-distribution-subsystem appears unresponsive when sending a message of type polkadot_node_subsystem_types::messages::ApprovalDistributionMessage. origin=polkadot_service::relay_chain_selection::SelectRelayChainInner<sc_client_db::Backend<sp_runtime::generic::block::Block<sp_runtime::generic::header::Header<u32, sp_runtime::traits::BlakeTwo256>, sp_runtime::OpaqueExtrinsic>>, polkadot_overseer::Handle>
      ```
      
      I think this happened because our aggression in the current form is way
      too spammy and creates problems in situations where we have already
      constructed blocks with a load of candidates to check, which is what
      happened around `#25933682` before and after. However, aggression does
      help in the nightmare scenario where the network is segmented and sparsely
      connected, so I tend to think we shouldn't completely remove it.
      
      The current configuration is:
      ```
      l1_threshold: Some(16),
      l2_threshold: Some(28),
      resend_unfinalized_period: Some(8),
      ```
      The way aggression works right now:
      1. After L1 is triggered all nodes send all messages they created to all
      the other nodes, in addition to the messages they would have already sent
      according to the topology.
      2. Because of resend_unfinalized_period, for each block all messages at
      step 1) are sent every 8 blocks. For example, let's say we have blocks
      1 to 24 unfinalized; then at block 25 all messages for blocks 1 and 9 will
      be resent, and consequently at block 26 all messages for blocks 2 and 10
      will be resent. This becomes worse as more blocks are created if backing
      backpressure did not kick in yet. In total this logic makes each
      node receive 3 * total_number_of_messages_per_block.
      3. L2 aggression is way too spammy. When L2 aggression is enabled all
      nodes send all messages of a block on GridXY, which means that all
      messages are received and sent by each node at least 2*sqrt(num_validators)
      times, so on kusama that would be 66 * NUM_MESSAGES_AT_FIRST_UNFINALIZED_BLOCK.
      So even with a reasonable number of messages like 10K, which you can have
      if you escalated because of no-shows, you end up sending and receiving
      ~660k messages at once; I think that's what makes approval-distribution
      appear unresponsive on some nodes (see the quick check after this list).
      4. Duplicate messages are received by the nodes, which in turn mark the
      sending node as banned, which may create more no-shows.
      ## Proposed improvements:
      1. Make L2 trigger way later, 28 blocks instead of 64; this should
      literally be the last resort. Until then we should try to let the
      approval-voting escalation mechanism do its thing and cover the
      no-shows.
      2. On L1 aggression, don't send messages for blocks too far from the
      first_unfinalized; there is no point in sending the messages for block
      20 if block 1 is still unfinalized.
      3. On L1 aggression, send messages then back off for 3 *
      resend_unfinalized_period to give time for everyone to clear up their
      queues (see the sketch after this list).
      4. If aggression is enabled, accept duplicate messages from validators
      and don't punish them by reducing their reputation, which may
      create no-shows.
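      A hedged sketch of improvements 2 and 3 combined (the distance cap is an
      assumed constant, not taken from this PR):
      
      ```rust
      const RESEND_UNFINALIZED_PERIOD: u32 = 8; // from the configuration above
      const MAX_AGGRESSION_DISTANCE: u32 = 8;   // assumed cap on "too far from first_unfinalized"
      
      /// Only resend L1 aggression messages for blocks close to the first
      /// unfinalized block, and only after backing off long enough.
      fn should_resend(block_number: u32, first_unfinalized: u32, blocks_since_last_resend: u32) -> bool {
          let close_enough = block_number.saturating_sub(first_unfinalized) <= MAX_AGGRESSION_DISTANCE;
          let backed_off = blocks_since_last_resend >= 3 * RESEND_UNFINALIZED_PERIOD;
          close_enough && backed_off
      }
      
      fn main() {
          // Block 20 is too far from an unfinalized block 1, so it is skipped.
          assert!(!should_resend(20, 1, 24));
          // Block 3 is close enough, but we only resend once the back-off elapsed.
          assert!(!should_resend(3, 1, 10));
          assert!(should_resend(3, 1, 24));
      }
      ```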
      
      ---------
      
      Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
      Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com>
    • Migration of polkadot-runtime-common auctions benchmarking to v2 (#6613) · 9dcdf813
      Ludovic_Domingues authored
      
      # Description
      Migrated polkadot-runtime-common auctions benchmarking to the new
      benchmarking syntax v2.
      This is part of #6202
      
      ---------
      
      Co-authored-by: Giuseppe Re <giuseppe.re@parity.io>
      Co-authored-by: Bastian Köcher <git@kchr.de>
  16. Dec 10, 2024
    • XCMv5: Fix for compatibility with V4 (#6503) · fe4846f5
      Ron authored
      ## Description
      
      Our smoke test that transfers `WETH` from Sepolia to Westend-AssetHub
      breaks: it tries to re-register `WETH` on AH but fails as follows:
      
      
      https://bridgehub-westend.subscan.io/xcm_message/westend-4796d6b3600aca32ef63b9953acf6a456cfd2fbe
      
      https://assethub-westend.subscan.io/extrinsic/9731267-0?event=9731267-2
      
      The reason is that, in the transact call encoded on BH to register the asset
      
      https://github.com/paritytech/polkadot-sdk/blob/a77940ba/bridges/snowbridge/primitives/router/src/inbound/mod.rs#L282-L289
      ```
      0x3500020209079edaa8020300fff9976782d46cc05630d1f6ebab18b2324d6b1400ce796ae65569a670d0c1cc1ac12515a3ce21b5fbf729d63d7b289baad070139d01000000000000000000000000000000
      ```
      
      the `asset_id`, which is the XCM location, can't be decoded on AH in V5.
      
      Initial post about the issue:
      https://matrix.to/#/!qUtSTcfMJzBdPmpFKa:parity.io/$RNMAxIIOKGtBAqkgwiFuQf4eNaYpmOK-Pfw4d6vv1aU?via=parity.io&via=matrix.org&via=web3.foundation
      
      ---------
      
      Co-authored-by: Adrian Catangiu <adrian@parity.io>
      Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com>
    • Fix order of resending messages after restart (#6729) · 65a4e5ee
      Alexandru Gheorghe authored
      
      The way we build the messages we need to send to approval-distribution
      can result in a situation where, if we have multiple assignments covered
      by a coalesced approval, the messages are sent in this order:
      
      ASSIGNMENT1, APPROVAL, ASSIGNMENT2, because we iterate over each
      candidate and add to the queue of messages both the assignment and the
      approval for that candidate, and when the approval reaches the
      approval-distribution subsystem it won't be imported and gossiped
      because one of the assignments for it is not known.
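      A minimal sketch of the fix as described (assumed message shape, not the
      real subsystem types): queue every assignment before any approval so the
      approval can always be imported:
      
      ```rust
      #[derive(Debug, PartialEq)]
      enum Message {
          Assignment(u32),    // candidate index, simplified
          Approval(Vec<u32>), // coalesced approval covering several candidates
      }
      
      /// Reorder the queued messages so that all assignments come before approvals,
      /// keeping the relative order within each kind (stable sort).
      fn order_for_sending(mut messages: Vec<Message>) -> Vec<Message> {
          messages.sort_by_key(|m| matches!(m, Message::Approval(_)));
          messages
      }
      
      fn main() {
          let queued = vec![
              Message::Assignment(1),
              Message::Approval(vec![1, 2]),
              Message::Assignment(2),
          ];
          let ordered = order_for_sending(queued);
          // ASSIGNMENT1, ASSIGNMENT2, APPROVAL instead of ASSIGNMENT1, APPROVAL, ASSIGNMENT2.
          assert_eq!(
              ordered,
              vec![
                  Message::Assignment(1),
                  Message::Assignment(2),
                  Message::Approval(vec![1, 2]),
              ]
          );
      }
      ```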
      
      So in a network where a lot of nodes are restarting at the same time we
      could end up in a situation where one set of nodes correctly received
      the assignments and approvals before the restart, approve their
      blocks and don't trigger their assignments. The other set of nodes
      should receive the assignments and approvals after the restart, but
      because the approvals never get broadcast anymore because of this bug,
      the only way they could approve is if other nodes start broadcasting
      their assignments.
      
      I think this bug contributed to the reason the network did not recover
      on `2024-11-25 15:55:40` after the restarts.
      
      Tested this scenario with a `zombienet` where nodes are finalising
      blocks because of aggression and all nodes are restarted at once, and
      confirmed the network lags and doesn't recover before the fix but does
      after it.
      
      ---------
      
      Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
    • Remove AccountKeyring everywhere (#5899) · 311ea438
      Joseph Zhao authored
      
      Close: #5858
      
      ---------
      
      Co-authored-by: Bastian Köcher <git@kchr.de>
  17. Dec 09, 2024
    • xcm-executor: take transport fee from transferred assets if necessary (#4834) · e79fd2bb
      Adrian Catangiu authored
      
      # Description
      
      Sending XCM messages to other chains requires paying a "transport fee".
      This can be paid either:
      - from the `origin` local account if `jit_withdraw = true`,
      - from the Holding register otherwise.
      
      This currently works for the following hops/scenarios:
      1. On the destination, no transport fee is needed (only sending costs, not
      receiving),
      2. Local/originating chain: just set JIT=true and the fee will be paid from
      the signed account,
      3. Intermediary hops - only if the intermediary is acting as reserve between
      two untrusted chains (aka only for the `DepositReserveAsset` instruction) -
      this was fixed in https://github.com/paritytech/polkadot-sdk/pull/3142
      
      But now we're seeing more complex asset transfers that are mixing
      reserve transfers with teleports depending on the involved chains.
      
      # Example
      
      E.g. transferring DOT between Relay and parachain, but through AH (using
      AH instead of the Relay chain as parachain's DOT reserve).
      
      In the `Parachain --1--> AssetHub --2--> Relay` scenario, DOT has to be
      reserve-withdrawn in leg `1`, then teleported in leg `2`.
      On the intermediary hop (AssetHub), `InitiateTeleport` fails to send
      onward message because of missing transport fees. We also can't rely on
      `jit_withdraw` because the original origin is lost on the way, and even
      if it weren't we can't rely on the user having funded accounts on each
      hop along the way.
      
      # Solution/Changes
      
      - Charge the transport fee in the executor from the transferred assets
      (if available),
      - Only charge from transferred assets if JIT_WITHDRAW was not set,
      - Only charge from transferred assets unless using XCMv5 `PayFees`,
      where we do not have this problem (see the sketch below).
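      A hedged sketch of that rule (simplified single-fungible assets, not the
      xcm-executor types): the transport fee is only taken from the transferred
      assets when JIT withdrawal was not requested and the assets can cover it:
      
      ```rust
      /// Simplified stand-in for a fungible asset amount held by the executor.
      struct TransferredAssets(u128);
      
      /// Take the transport fee from the transferred assets if allowed, returning
      /// the amount taken; with `jit_withdraw = true` the fee is paid from the
      /// origin's local account instead, so the assets are left untouched.
      fn take_transport_fee(assets: &mut TransferredAssets, fee: u128, jit_withdraw: bool) -> Option<u128> {
          if jit_withdraw || assets.0 < fee {
              return None;
          }
          assets.0 -= fee;
          Some(fee)
      }
      
      fn main() {
          let mut assets = TransferredAssets(1_000);
          assert_eq!(take_transport_fee(&mut assets, 100, false), Some(100));
          assert_eq!(assets.0, 900);
          // JIT withdrawal set: leave the transferred assets alone.
          assert_eq!(take_transport_fee(&mut assets, 100, true), None);
          assert_eq!(assets.0, 900);
      }
      ```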
      
      # Testing
      
      Added regression tests in emulated transfers.
      
      Fixes https://github.com/paritytech/polkadot-sdk/issues/4832
      Fixes https://github.com/paritytech/polkadot-sdk/issues/6637
      
      ---------
      
      Signed-off-by: Adrian Catangiu <adrian@parity.io>
      Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com>
    • Fix `Possible bug: Vote import failed` after aggression is enabled (#6690) · da953454
      Alexandru Gheorghe authored
      
      After finality started lagging on kusama around 2024-11-25 15:55:40,
      validators occasionally started seeing this log when importing votes
      covering more than one assignment.
      ```
      Possible bug: Vote import failed
      ```
      
      That happens because the assumption that assignments from the same
      validator would have the same required routing doesn't hold after you
      enable aggression: you might end up receiving the first assignment,
      then you modify the routing for it in `enable_aggression`, then you
      receive the second assignment and the vote covering both assignments, so
      the routing for the first and second assignment wouldn't match and we
      would fail to import the vote.
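      A small sketch of the broken assumption (assumed routing type, not the real
      approval-distribution types): the import only succeeded when every
      assignment covered by the vote carried the same required routing:
      
      ```rust
      #[derive(Clone, Copy, Debug, PartialEq)]
      enum RequiredRouting {
          GridXY, // normal topology-based routing
          All,    // routing after aggression upgraded it
      }
      
      /// The old assumption: a vote covering several assignments can be imported
      /// only if all of those assignments share one required routing.
      fn can_import_vote(assignment_routings: &[RequiredRouting]) -> bool {
          assignment_routings.windows(2).all(|w| w[0] == w[1])
      }
      
      fn main() {
          // Aggression changed the first assignment's routing to `All` before the
          // second assignment arrived with the topology routing, so the import fails.
          assert!(!can_import_vote(&[RequiredRouting::All, RequiredRouting::GridXY]));
          assert!(can_import_vote(&[RequiredRouting::GridXY, RequiredRouting::GridXY]));
      }
      ```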
      
      From the logs I've seen, I don't think this is the reason the network
      didn't fully recover until the failsafe kicked in, because the votes had
      already been imported in approval-voting before this error.
      
      ---------
      
      Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
    • Mak cmd swap omnibench (#6769) · 81b979ae
      Maksym H authored
      
      - change bench to default to old CLI
      - fix profile to production
      
      ---------
      
      Co-authored-by: GitHub Action <action@github.com>
      Co-authored-by: command-bot <>
  18. Dec 08, 2024
  19. Dec 06, 2024
  20. Dec 05, 2024
    • Added fallback_max_weight to Transact for sending messages to V4 chains (#6643) · f31c70aa
      Francisco Aguirre authored
      
      Closes: https://github.com/paritytech/polkadot-sdk/issues/6585
      
      Removing the `require_weight_at_most` parameter in V5 Transact had only
      one problem: converting a message from V5 to V4 to send to chains that
      haven't upgraded yet. The conversion would not know what weight to give to
      the Transact, since V4 and below require it.
      
      To fix this, I added back the weight in the form of an `Option<Weight>`
      called `fallback_max_weight`. This can be set to `None` if you don't
      intend to deal with a chain that hasn't upgraded yet. If you set it to
      `Some(_)`, the behaviour is the same. The plan is to totally remove this
      in V6 since there will be a good conversion path from V6 to V5.
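      A hedged sketch (simplified enums, not the real XCM types or conversion
      code) of how the V5-to-V4 conversion can use `fallback_max_weight` to fill
      the field that V4 still requires:
      
      ```rust
      #[derive(Debug, Clone, Copy)]
      struct Weight {
          ref_time: u64,
          proof_size: u64,
      }
      
      enum V5Instruction {
          Transact { fallback_max_weight: Option<Weight>, call: Vec<u8> },
      }
      
      enum V4Instruction {
          Transact { require_weight_at_most: Weight, call: Vec<u8> },
      }
      
      /// Converting a V5 `Transact` down to V4 only works if a fallback weight was
      /// provided, because V4 and below still require `require_weight_at_most`.
      fn downgrade(instruction: V5Instruction) -> Result<V4Instruction, ()> {
          match instruction {
              V5Instruction::Transact { fallback_max_weight, call } => {
                  let require_weight_at_most = fallback_max_weight.ok_or(())?;
                  Ok(V4Instruction::Transact { require_weight_at_most, call })
              }
          }
      }
      
      fn main() {
          let v5 = V5Instruction::Transact {
              fallback_max_weight: Some(Weight { ref_time: 1_000_000, proof_size: 1_024 }),
              call: vec![],
          };
          assert!(downgrade(v5).is_ok());
      
          // `None` means the sender does not care about pre-V5 chains; the
          // conversion has nothing to put into `require_weight_at_most` and fails.
          let v5_no_fallback = V5Instruction::Transact { fallback_max_weight: None, call: vec![] };
          assert!(downgrade(v5_no_fallback).is_err());
      }
      ```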
      
      ---------
      
      Co-authored-by: GitHub Action <action@github.com>
      Co-authored-by: Adrian Catangiu <adrian@parity.io>
  21. Dec 03, 2024