  1. Dec 11, 2024
    • pallet-revive: Statically verify imports on code deployment (#6759) · f0b5c3e6
      Alexander Theißen authored
      
      Previously, we failed at runtime if an unknown or unstable host
      function was called. This required us to keep track of when each host
      function was added and when each code was deployed. We used the
      `api_version` to track the API version at which each code was
      deployed. This made sure that when a new host function was added, old
      code wouldn't have access to it. This is necessary because otherwise
      the behavior of a contract that made calls to a previously
      non-existent host function would change from "trap" to "do something".
      
      In this PR we remove the API version. Instead, we statically verify on
      upload that no non-existent host function is ever used in the code
      (see the sketch below). This allows us to add new host functions later
      without needing to keep track of when they were added.
      
      This simplifies the code and also gives immediate feedback if unknown
      host functions are used.
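      A minimal sketch of the idea (all names here are assumptions, not the
      pallet's actual code, which works on the uploaded blob's import table):
      ```
      /// Conceptual sketch only: reject code at upload time if it imports a
      /// host function the runtime does not know about, instead of trapping
      /// later at call time.
      fn validate_imports(
          imports: &[&str],
          known_host_fns: &[&str],
      ) -> Result<(), &'static str> {
          for import in imports {
              if !known_host_fns.contains(import) {
                  return Err("code imports an unknown or unstable host function");
              }
          }
          Ok(())
      }
      ```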
      
      ---------
      
      Co-authored-by: GitHub Action <action@github.com>
    • Add aliasers to westend chains (#6814) · 48c6574b
      Francisco Aguirre authored
      
      `InitiateTransfer`, the new instruction introduced in XCMv5, allows
      preserving the origin after a cross-chain transfer via the usage of
      the `AliasOrigin` instruction. The receiving chain needs to be
      configured to allow this instruction to have its intended effect and
      not just throw an error.
      
      In this PR, I add the alias rules specified in the [RFC for origin
      preservation](https://github.com/polkadot-fellows/RFCs/blob/main/text/0122-alias-origin-on-asset-transfers.md)
      to westend chains so we can test these scenarios in the testnet.
      
      The new scenarios include:
      - Sending a cross-chain transfer from one system chain to another and
      doing a Transact on the same message (1 hop)
      - Sending a reserve asset transfer from one chain to another going
      through asset hub and doing Transact on the same message (2 hops)
      
      The updated chains are:
      - Relay: added `AliasChildLocation`
      - Collectives: added `AliasChildLocation` and
      `AliasOriginRootUsingFilter<AssetHubLocation, Everything>`
      - People: added `AliasChildLocation` and
      `AliasOriginRootUsingFilter<AssetHubLocation, Everything>`
      - Coretime: added `AliasChildLocation` and
      `AliasOriginRootUsingFilter<AssetHubLocation, Everything>`
      
      AssetHub already has `AliasChildLocation` and doesn't need the other
      config item.
      BridgeHub is not intended to be used by end users, so I didn't add any
      config item there.
      I only added `AliasChildLocation` to the Relay, since we intend for it
      to be used less.
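      As a hedged sketch, the config item added on these chains looks
      roughly like this (`AssetHubLocation` and the import paths are
      assumptions, not copied from the PR):
      ```
      use frame_support::traits::Everything;
      use xcm_builder::{AliasChildLocation, AliasOriginRootUsingFilter};

      // `AssetHubLocation` is assumed to be defined elsewhere in the runtime.
      pub type Aliasers = (
          // Any location may alias into its own child locations.
          AliasChildLocation,
          // Asset Hub's root may alias into anything matching `Everything`.
          AliasOriginRootUsingFilter<AssetHubLocation, Everything>,
      );
      ```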
      
      ---------
      
      Co-authored-by: GitHub Action <action@github.com>
      Co-authored-by: command-bot <>
    • snowbridge: Update alloy-core (#6808) · da2dd9b7
      Alexander Theißen authored
      I am planning to use `alloy_core` to implement precompile support in
      `pallet_revive`. I noticed that it is already used by snowbridge. In
      order to unify the dependencies I did the following:
      
      1. Switched to the `alloy-core` umbrella crate so that we have fewer
      individual dependencies to update.
      2. Bumped to the latest version and fixed up the resulting compile
      errors.
    • Make approval-distribution aggression a bit more robust and less spammy (#6696) · 85dd228d
      Alexandru Gheorghe authored
      
      After finality started lagging on Kusama around `2024-11-25 15:55:40`,
      nodes started being overloaded with messages and some restarted with
      ```
      Subsystem approval-distribution-subsystem appears unresponsive when sending a message of type polkadot_node_subsystem_types::messages::ApprovalDistributionMessage. origin=polkadot_service::relay_chain_selection::SelectRelayChainInner<sc_client_db::Backend<sp_runtime::generic::block::Block<sp_runtime::generic::header::Header<u32, sp_runtime::traits::BlakeTwo256>, sp_runtime::OpaqueExtrinsic>>, polkadot_overseer::Handle>
      ```
      
      I think this happened because our aggression in its current form is
      way too spammy and creates problems in situations where we have
      already constructed blocks with a load of candidates to check, which
      is what happened around `#25933682` before and after. However,
      aggression does help in the nightmare scenario where the network is
      segmented and sparsely connected, so I tend to think we shouldn't
      completely remove it.
      
      The current configuration is:
      ```
      l1_threshold: Some(16),
      l2_threshold: Some(28),
      resend_unfinalized_period: Some(8),
      ```
      The way aggression works right now:
      1. After L1 is triggered, all nodes send all messages they created to
      all the other nodes, on top of the messages they already send
      according to the topology.
      2. Because of resend_unfinalized_period, all messages from step 1) are
      re-sent every 8 blocks for each block. For example, if blocks 1 to 24
      are unfinalized, then at block 25 all messages for blocks 1 and 9 will
      be resent, at block 26 all messages for blocks 2 and 10, and so on.
      This becomes worse as more blocks are created if backing backpressure
      has not kicked in yet. In total, this logic makes each node receive
      about 3 * total_number_of_messages_per_block.
      3. L2 aggression is way too spammy: when it is enabled, all nodes send
      all messages of a block on grid X and Y, which means every message is
      sent and received by at least 2*sqrt(num_validators) nodes. On Kusama
      that would be 66 * NUM_MESSAGES_AT_FIRST_UNFINALIZED_BLOCK, so even
      with a reasonable number of messages like 10K, which you can reach
      after escalating because of no-shows, you end up sending and receiving
      ~660k messages at once. I think that is what makes
      approval-distribution appear unresponsive on some nodes.
      4. Duplicate messages are received by the nodes, which in turn mark
      the sending node as banned, which may create more no-shows.
      
      ## Proposed improvements:
      1. Make L2 trigger way later, at 64 blocks instead of 28; it should
      literally be the last resort. Until then we should let the
      approval-voting escalation mechanism do its thing and cover the
      no-shows.
      2. On L1 aggression, don't send messages for blocks too far from the
      first unfinalized one: there is no point in sending the messages for
      block 20 if block 1 is still unfinalized.
      3. On L1 aggression, send messages, then back off for 3 *
      resend_unfinalized_period to give everyone time to clear up their
      queues (see the sketch below).
      4. If aggression is enabled, accept duplicate messages from validators
      and don't punish them by reducing their reputation, which may create
      no-shows.
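      As a rough illustration of improvements 2 and 3 combined (constants
      and names are assumptions, not the actual subsystem code):
      ```
      const L1_THRESHOLD: u32 = 16; // blocks of finality lag before L1 triggers
      const RESEND_UNFINALIZED_PERIOD: u32 = 8;
      const MAX_DISTANCE_FROM_FIRST_UNFINALIZED: u32 = 10; // assumed cut-off

      /// Whether to resend a block's messages under L1 aggression.
      fn should_resend_l1(
          finality_lag: u32,
          distance_from_first_unfinalized: u32,
          blocks_since_last_resend: u32,
      ) -> bool {
          finality_lag >= L1_THRESHOLD
              // Improvement 2: skip blocks far from the first unfinalized one.
              && distance_from_first_unfinalized <= MAX_DISTANCE_FROM_FIRST_UNFINALIZED
              // Improvement 3: back off between resends to let queues drain.
              && blocks_since_last_resend >= 3 * RESEND_UNFINALIZED_PERIOD
      }
      ```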
      
      ---------
      
      Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
      Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com>
    • Migration of polkadot-runtime-common auctions benchmarking to v2 (#6613) · 9dcdf813
      Ludovic_Domingues authored
      
      # Description
      Migrated polkadot-runtime-common auctions benchmarking to the new
      benchmarking syntax v2.
      This is part of #6202
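      For reference, a hedged sketch of the v2 syntax shape (the benchmark
      name and arguments here are illustrative, not the actual auctions
      benchmarks):
      ```
      use frame_benchmarking::v2::*;
      use frame_system::RawOrigin;

      #[benchmarks]
      mod benchmarks {
          use super::*;

          #[benchmark]
          fn new_auction() {
              let duration = 10u32;
              let lease_period_index = 1u32;

              // In v2 syntax the call under benchmark is marked explicitly.
              #[extrinsic_call]
              _(RawOrigin::Root, duration, lease_period_index);
          }
      }
      ```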
      
      ---------
      
      Co-authored-by: Giuseppe Re <giuseppe.re@parity.io>
      Co-authored-by: Bastian Köcher <git@kchr.de>
    • [pallet-revive] eth-rpc add missing tests (#6728) · 99be9b1e
      PG Herveou authored
      Add tests for #6608 
      
      Fixes https://github.com/paritytech/contract-issues/issues/12
      
      ---------
      
      Co-authored-by: command-bot <>
  2. Dec 10, 2024
    • omni-node: --dev sets manual seal and allows --chain to be set (#6646) · 48c28d4c
      Iulian Barbu authored
      # Description
      
      This PR changes a few things:
      * The `--dev` flag no longer conflicts with `--chain`; if `--chain` is
      not given, it implies `--chain=dev`.
      * `--dev-block-time` is optional and defaults to 3000ms when `--dev`
      is set.
      * To start OmniNode with manual seal it is enough to pass just `--dev`
      (e.g. `polkadot-omni-node --chain ./parachain-spec.json --dev`, with
      the spec path being a placeholder).
      * `--dev-block-time` can still be used on its own to start a node with
      manual seal, but it will not set things up the way `--dev` does (it
      will not set the bunch of flags which are enabled by default when
      `--dev` is set, e.g. `--tmp`, `--alice` and `--force-authoring`).
      
      Closes: #6537
      
      ## Integration
      
      Relevant for node/runtime developers that use the OmniNode lib,
      including the `polkadot-omni-node` binary, although the recommended
      way for runtime development is to use `chopsticks`.
      
      ## Review Notes
      
      * Decided to focus only on OmniNode & templates docs in relation to it,
      and leave the `parachain-template-node` as is (meaning `--dev` isn't
      usable and te...
    • XCMv5: Fix for compatibility with V4 (#6503) · fe4846f5
      Ron authored
      ## Description
      
      Our smoke test that transfers `WETH` from Sepolia to Westend-AssetHub
      breaks: it tries to reregister `WETH` on AH but fails as follows:
      
      
      https://bridgehub-westend.subscan.io/xcm_message/westend-4796d6b3600aca32ef63b9953acf6a456cfd2fbe
      
      https://assethub-westend.subscan.io/extrinsic/9731267-0?event=9731267-2
      
      The reason is that the transact call encoded on BH to register the asset
      
      https://github.com/paritytech/polkadot-sdk/blob/a77940ba/bridges/snowbridge/primitives/router/src/inbound/mod.rs#L282-L289
      ```
      0x3500020209079edaa8020300fff9976782d46cc05630d1f6ebab18b2324d6b1400ce796ae65569a670d0c1cc1ac12515a3ce21b5fbf729d63d7b289baad070139d01000000000000000000000000000000
      ```
      
      includes an `asset_id` (the XCM location) which can't be decoded on AH
      in V5.
      
      Initial post about the issue:
      https://matrix.to/#/!qUtSTcfMJzBdPmpFKa:parity.io/$RNMAxIIOKGtBAqkgwiFuQf4eNaYpmOK-Pfw4d6vv1aU?via=parity.io&via=matrix.org&via=web3.foundation
      
      ---------
      
      Co-au...
    • Fix order of resending messages after restart (#6729) · 65a4e5ee
      Alexandru Gheorghe authored
      
      The way we build the messages we need to send to approval-distribution
      can result in a situation where, if we have multiple assignments
      covered by a coalesced approval, the messages are sent in this order:
      
      ASSIGNMENT1, APPROVAL, ASSIGNMENT2. This happens because we iterate
      over each candidate and add both the assignment and the approval for
      that candidate to the queue of messages, so when the approval reaches
      the approval-distribution subsystem it won't be imported and gossiped,
      because one of the assignments for it is not known yet.
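      A simplified sketch of the fixed ordering (the types here are
      stand-ins for the real subsystem messages):
      ```
      #[derive(Clone, Debug)]
      enum Message {
          Assignment(u32),    // assignment for one candidate (simplified)
          Approval(Vec<u32>), // coalesced approval covering several candidates
      }

      /// Queue every assignment covered by a coalesced approval before the
      /// approval itself, so approval-distribution can import the vote.
      fn queue_messages(covered_candidates: &[u32]) -> Vec<Message> {
          let mut queue: Vec<Message> = covered_candidates
              .iter()
              .map(|c| Message::Assignment(*c))
              .collect();
          // ASSIGNMENT1, ASSIGNMENT2, ..., APPROVAL - never the approval in
          // between its assignments.
          queue.push(Message::Approval(covered_candidates.to_vec()));
          queue
      }
      ```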
      
      So in a network where a lot of nodes are restarting at the same time,
      we could end up in a situation where one set of nodes correctly
      received the assignments and approvals before the restart, approve
      their blocks and don't trigger their assignments. The other set of
      nodes would receive the assignments and approvals only after the
      restart, but because the approvals never get broadcast anymore due to
      this bug, the only way they can approve is if other nodes start
      broadcasting their assignments.
      
      I think this bug contributed to the reason the network did not recover
      on `2024-11-25 15:55:40` after the restarts.
      
      Tested this scenario with a `zombienet` where nodes are finalising
      blocks because of aggression and all nodes are restarted at once, and
      confirmed that the network lags and doesn't recover before the fix and
      does recover after it.
      
      ---------
      
      Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
    • polkadot-sdk-docs: Use command_macro! (#6624) · 19bc578e
      Kazunobu Ndong authored
      
      # Description
      
      **Understood assignment:**
      The initial assignment description is in #6194.
      In order to simplify the display of commands and ensure they are
      tested for chain spec builder's `polkadot-sdk` reference docs, find
      every occurrence of `#[docify::export]` where `process::Command` is
      used, and replace the use of `process::Command` with `run_cmd!` from
      the `cmd_lib` crate.
      
      ---------
      
      Co-authored-by: Iulian Barbu <14218860+iulianbarbu@users.noreply.github.com>
    • Let cmd bot to trigger ci on commit (#6813) · c808a009
      Maksym H authored
      Fixes: https://github.com/paritytech/ci_cd/issues/1079
      Improvements:
      - switch to the GitHub-native token creation action
      - refresh the branch in CI from HEAD, to prevent failures
      - add an APP token when pushing, to allow CI to be re-triggered by the
      bot
    • Remove AccountKeyring everywhere (#5899) · 311ea438
      Joseph Zhao authored
      
      Close: #5858
      
      ---------
      
      Co-authored-by: Bastian Köcher <git@kchr.de>
    • Bridges - revert-back congestion mechanism (#6781) · 8f4b99cf
      Branislav Kontur authored
      
      Closes: https://github.com/paritytech/polkadot-sdk/issues/5551
      
      ## Description
      
      With [permissionless lanes
      PR#4949](https://github.com/paritytech/polkadot-sdk/pull/4949), the
      congestion mechanism based on sending
      `Transact(report_bridge_status(is_congested))` from
      `pallet-xcm-bridge-hub` to `pallet-xcm-bridge-hub-router` was replaced
      with a congestion mechanism that relied on monitoring XCMP queues.
      However, this approach could cause issues, such as suspending the entire
      XCMP queue instead of isolating the affected bridge. This PR reverts
      back to using `report_bridge_status` as before.
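      As a rough sketch of the mechanism being reverted to (types compressed
      for illustration; the real call travels as an XCM `Transact`):
      ```
      /// Illustrative only: the bridge hub reports per-bridge congestion to
      /// the router on the sending chain, instead of the whole XCMP queue
      /// being suspended.
      #[derive(Clone, Copy, Debug)]
      struct BridgeId(u32);

      enum RouterCall {
          ReportBridgeStatus { bridge_id: BridgeId, is_congested: bool },
      }

      fn on_congestion_change(bridge_id: BridgeId, is_congested: bool) -> RouterCall {
          // Wrapped as `Transact(report_bridge_status(is_congested))` and sent
          // from `pallet-xcm-bridge-hub` to `pallet-xcm-bridge-hub-router`.
          RouterCall::ReportBridgeStatus { bridge_id, is_congested }
      }
      ```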
      
      ## TODO
      - [x] benchmarks
      - [x] prdoc
      
      ## Follow-up
      
      https://github.com/paritytech/polkadot-sdk/pull/6231
      
      ---------
      
      Co-authored-by: GitHub Action <action@github.com>
      Co-authored-by: command-bot <>
      Co-authored-by: Adrian Catangiu <adrian@parity.io>
    • Add fallback_max_weight to snowbridge Transact (#6792) · 4fc92486
      Francisco Aguirre authored
      
      We removed the `require_weight_at_most` field and later changed it to
      `fallback_max_weight`.
      This was to have a fallback when sending a message to v4 chains, which
      happens in the small time window when chains are upgrading.
      We originally put no fallback for the message in snowbridge's inbound
      queue, but we should have one.
      This PR adds it (see the sketch below).
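      A hedged sketch of the shape of the change (the weight value is a
      placeholder, not taken from the PR):
      ```
      use xcm::latest::prelude::*;

      /// Build the register-token Transact with a fallback weight. The
      /// `fallback_max_weight` is only used if the message has to be
      /// converted down to XCM v4, where `require_weight_at_most` is still
      /// mandatory.
      fn register_token_instruction(encoded_call: Vec<u8>) -> Instruction<()> {
          Transact {
              origin_kind: OriginKind::Xcm,
              fallback_max_weight: Some(Weight::from_parts(400_000_000_000, 8_000)),
              call: encoded_call.into(),
          }
      }
      ```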
      
      ---------
      
      Co-authored-by: GitHub Action <action@github.com>
  3. Dec 09, 2024
    • xcm-executor: take transport fee from transferred assets if necessary (#4834) · e79fd2bb
      Adrian Catangiu authored
      
      # Description
      
      Sending XCM messages to other chains requires paying a "transport
      fee". This can be paid either:
      - from the `origin` local account if `jit_withdraw = true`,
      - from the Holding register otherwise.
      
      This currently works for the following hops/scenarios:
      1. On the destination, where no transport fee is needed (there are
      only sending costs, not receiving costs),
      2. On the local/originating chain: just set JIT=true and the fee will
      be paid from the signed account,
      3. On intermediary hops, but only if the intermediary acts as a
      reserve between two untrusted chains (i.e. only for the
      `DepositReserveAsset` instruction) - this was fixed in
      https://github.com/paritytech/polkadot-sdk/pull/3142
      
      But now we're seeing more complex asset transfers that mix reserve
      transfers with teleports depending on the involved chains.
      
      # Example
      
      E.g. transferring DOT between the Relay and a parachain, but through
      AH (using AH instead of the Relay chain as the parachain's DOT
      reserve).
      
      In the `Parachain --1--> AssetHub --2--> Relay` scenario, DOT has to
      be reserve-withdrawn in leg `1`, then teleported in leg `2`.
      On the intermediary hop (AssetHub), `InitiateTeleport` fails to send
      the onward message because of missing transport fees. We also can't
      rely on `jit_withdraw` because the original origin is lost on the way,
      and even if it weren't, we can't rely on the user having funded
      accounts on each hop along the way.
      
      # Solution/Changes
      
      - Charge the transport fee in the executor from the transferred assets
      (if available),
      - Only charge from the transferred assets if JIT_WITHDRAW was not set,
      - Don't charge from the transferred assets when using XCMv5 `PayFees`,
      where we do not have this problem (see the sketch below).
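      In pseudocode form, the new rule is roughly (a sketch of the rule, not
      the executor's actual code):
      ```
      /// Whether the executor should take the transport fee from the assets
      /// being transferred (parameter names are illustrative).
      fn charge_fee_from_transferred_assets(
          jit_withdraw: bool,  // user asked to pay fees from the origin account
          uses_pay_fees: bool, // XCMv5 `PayFees` already covers fees
      ) -> bool {
          !jit_withdraw && !uses_pay_fees
      }
      ```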
      
      # Testing
      
      Added regression tests in emulated transfers.
      
      Fixes https://github.com/paritytech/polkadot-sdk/issues/4832
      Fixes https://github.com/paritytech/polkadot-sdk/issues/6637
      
      ---------
      
      Signed-off-by: Adrian Catangiu <adrian@parity.io>
      Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com>
    • pallet-revive: Remove unused dependencies (#6796) · 4198dc95
      Alexander Theißen authored
      
      The dependency on `pallet_balances` doesn't seem to be necessary. At
      least everything compiles for me without it. Removed this dependency
      and a few others that seem to be leftovers.
      
      ---------
      
      Co-authored-by: GitHub Action <action@github.com>
    • Fix `Possible bug: Vote import failed` after aggression is enabled (#6690) · da953454
      Alexandru Gheorghe authored
      
      After finality started lagging on Kusama around 2024-11-25 15:55:40,
      validators started occasionally seeing this log when importing votes
      covering more than one assignment:
      ```
      Possible bug: Vote import failed
      ```
      
      That happens because the assumption that assignments from the same
      validator would have the same required routing doesn't hold after you
      enable aggression: you might receive the first assignment, then modify
      the routing for it in `enable_aggression`, then receive the second
      assignment and the vote covering both assignments. The routing for the
      first and second assignment then wouldn't match, and we would fail to
      import the vote.
      
      From the logs I've seen, I don't think this is the reason the network
      didn't fully recover until the failsafe kicked in, because the votes
      had already been imported in approval-voting before this error.
      
      ---------
      
      Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
    • Mak cmd swap omnibench (#6769) · 81b979ae
      Maksym H authored
      
      - change bench to default to the old CLI
      - fix the profile to production
      
      ---------
      
      Co-authored-by: GitHub Action <action@github.com>
      Co-authored-by: command-bot <>
    • [CI/CD]Revert the token changes in backport flow (#6794) · b2e1e592
      Egor_P authored
      Set back the token for the cmd_bot in the backport flow so that it
      works again, until the new setup is figured out with the security team.
  4. Dec 03, 2024
    • `fatxpool`: handling limits and priorities improvements (#6405) · 41a5d8ec
      Michal Kucharczyk authored
      This PR provides a number of improvements around handling limits and
      priorities in the fork-aware transaction pool.
      
      
      #### Notes to reviewers.
      #### Following are the notable changes:
      1. #### [Better
      support](https://github.com/paritytech/polkadot-sdk/pull/6405/commits/414ec3cc)
      for `Usurped` transactions
      
      When any view reports an `Usurped` transaction (replaced by another
      one with higher priority), it is removed from all the views (including
      inactive ones). Removal is implemented by simply submitting the
      usurper transaction to all the views. It is also ensured that a
      usurped tx will not sneak into the `view_store` in a newly created
      view (this is why `ViewStore::pending_txs_replacements` was added).
      
      1. ####
      [`TimedTransactionSource`](https://github.com/paritytech/polkadot-sdk/pull/6405/commits/f10590f3)
      introduced:
      
      Every view now records when the transaction entered the pool.
      Enforcing limits (for now only for future txs) uses this timestamp to
      find the worst transactions. Having a common timestamp ensures a
      coherent assessment of the transaction's importance across different
      views. It could also later be used to select which ready transaction
      shall be dropped (see the sketch after this list).
      
      1. #### `DroppedWatcher`: [improved
      logic](https://github.com/paritytech/polkadot-sdk/pull/6405/commits/560db28c)
      for future transactions
      For a future transaction, if the last referencing view is removed, the
      transaction will be dropped from the pool. This prevents future
      unincluded and un-promoted transactions from staying in the pool for a
      long time.
      
      #### And some minor changes:
      
      1.
      [simplified](https://github.com/paritytech/polkadot-sdk/pull/6405/commits/2d0bbf83)
      the flow in `update_view_with_mempool` (code duplication + minor bug
      fix),
      2. `graph::BasePool`: [handling
      priorities](https://github.com/paritytech/polkadot-sdk/pull/6405/commits/c9f2d393)
      for future transactions improved (previously a transaction with lower
      prio was reported as failed),
      3. `graph::listener`: dedicated `limit_enforced`/`usurped`/`dropped`
      [calls
      added](https://github.com/paritytech/polkadot-sdk/pull/6405/commits/7b58a68c),
      4. flaky test
      [fixed](https://github.com/paritytech/polkadot-sdk/pull/6405/commits/e0a7bc6c),
      5. new tests added.
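      A hypothetical shape for the timed source wrapper mentioned above (the
      real type lives in the transaction pool crate and may differ):
      ```
      use std::time::Instant;

      #[derive(Clone, Debug)]
      enum TransactionSource {
          External, // received over the network
          Local,    // submitted through the local RPC
      }

      /// Pairs a transaction's source with the moment it entered the pool,
      /// so every view ranks transactions by the same timestamp when
      /// enforcing limits or picking which transaction to drop.
      #[derive(Clone, Debug)]
      struct TimedTransactionSource {
          source: TransactionSource,
          timestamp: Instant,
      }
      ```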
      
      related to: #5809
      
      ---------
      
      Co-authored-by: GitHub Action <action@github.com>
      Co-authored-by: Iulian Barbu <14218860+iulianbarbu@users.noreply.github.com>
    • Add publish-check-compile workflow (#6556) · 896c8144
      Lulu authored
      Add publish-check-compile workflow
      
      This applies staged prdocs, then configures crate deps to pull from
      crates.io for our already published crates and from local paths for
      the things to be published. Then it runs cargo check on the result.
      
      This results in a build state consistent with that of publish time and
      should catch compile errors that we would otherwise have run into
      mid-publish.
      
      This acts as a supplement to the check-semver job. check-semver works
      on a high level and judges which changes are incorrect and why. This
      job just runs the change, sees if it compiles, and if not spits out a
      compile error.
    • Bump Westend AH (#6583) · d1d92ab7
      PG Herveou authored
      
      Bump the Asset-Hub Westend spec version
      
      ---------
      
      Co-authored-by: GitHub Action <action@github.com>
    • pallet-revive-fixtures: Try not to re-create fixture dir (#6735) · c56a98b9
      Alexander Theißen authored
      
      On some systems, trying to re-create the output directory leads to an
      error.
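      A minimal sketch of the approach (assumed, not the actual fixtures
      build script):
      ```
      use std::{fs, io, path::Path};

      /// Create the fixtures output directory only if it is missing;
      /// re-creating an existing directory fails on some systems.
      fn ensure_out_dir(path: &Path) -> io::Result<()> {
          if !path.exists() {
              fs::create_dir_all(path)?;
          }
          Ok(())
      }
      ```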
      
      Fixes https://github.com/paritytech/subxt/issues/1876
      
      ---------
      
      Co-authored-by: Bastian Köcher <git@kchr.de>