  1. Oct 02, 2024
  2. Sep 30, 2024
  3. Sep 27, 2024
  4. Sep 26, 2024
• [ci] Update CI image with rust 1.81.0 and 2024-09-11 (#5676) · 6c3219eb
      Alexander Samusev authored
      
      cc https://github.com/paritytech/ci_cd/issues/1035
      
      cc https://github.com/paritytech/ci_cd/issues/1023
      
      ---------
      
Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: command-bot <>
Co-authored-by: Maksym H <1177472+mordamax@users.noreply.github.com>
Co-authored-by: gui <gui.thiolliere@gmail.com>
Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: Bastian Köcher <git@kchr.de>
Co-authored-by: ggwpez <ggwpez@users.noreply.github.com>
• [5 / 5] Introduce approval-voting-parallel (#4849) · b16237ad
      Alexandru Gheorghe authored
      This is the implementation of the approach described here:
      https://github.com/paritytech/polkadot-sdk/issues/1617#issuecomment-2150321612
      &
      https://github.com/paritytech/polkadot-sdk/issues/1617#issuecomment-2154357547
      &
      https://github.com/paritytech/polkadot-sdk/issues/1617#issuecomment-2154721395.
      
      ## Description of changes
      
The end goal is to have an architecture where we have a single subsystem
(`approval-voting-parallel`) and multiple worker types that fulfil the work
currently done by the `approval-distribution` and `approval-voting`
subsystems. The main loop of the new subsystem just distributes work to the
workers.
      
The new subsystem will have:
- N approval-distribution workers: these do the work currently done by the
approval-distribution subsystem and, in addition, perform the crypto checks
that an assignment is valid and that a vote is correctly signed. Work is
assigned via the following formula: `worker_index = msg.validator %
WORKER_COUNT`, which guarantees that all assignments and approvals from the
same validator reach the same worker (a minimal sketch of this routing
follows the list).
- 1 approval-voting worker: this receives already-validated messages and
does everything approval-voting currently does, except the crypto checks
that have been moved to the approval-distribution workers.
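
A minimal sketch of the routing rule above, with assumed names
(`WORKER_COUNT`, `worker_for`) used purely for illustration:

```rust
// Assumed constant; the real subsystem configures its own worker count.
const WORKER_COUNT: u32 = 4;

// All assignments and approvals from the same validator land on the same
// worker, so no cross-worker synchronisation is needed on the hot path.
fn worker_for(validator_index: u32) -> u32 {
    validator_index % WORKER_COUNT
}

fn main() {
    for validator_index in [0u32, 3, 4, 1000] {
        println!("validator {validator_index} -> worker {}", worker_for(validator_index));
    }
}
```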
      
      On the hot path of processing messages **no** synchronisation and
      waiting is needed between approval-distribution and approval-voting
      workers.
      
      <img width="1431" alt="Screenshot 2024-06-07 at 11 28 08"
      src="https://github.com/paritytech/polkadot-sdk/assets/49718502/a196199b-b705-4140-87d4-c6900ba8595e">
      
      
      
      ## Guidelines for reading
      
The full implementation is broken into 5 PRs, all of which are
self-contained and improve things incrementally even without the
parallelisation being implemented/enabled. The reason this approach was
taken instead of a big-bang PR is to make things easier to review and to
reduce the risk of breaking these critical subsystems.
      
      After reading the full description of this PR, the changes should be
      read in the following order:
1. https://github.com/paritytech/polkadot-sdk/pull/4848, some
micro-optimizations for networks with a high number of validators. This
change gives us a speed-up by itself, without any other changes.
      2. https://github.com/paritytech/polkadot-sdk/pull/4845 , this contains
      only interface changes to decouple the subsystem from the `Context` and
      be able to run multiple instances of the subsystem on different threads.
      **No functional changes**
3. https://github.com/paritytech/polkadot-sdk/pull/4928, moving the
crypto checks from approval-voting into approval-distribution, so that
approval-distribution no longer has any reason to wait on approval-voting.
This change gives us a speed-up by itself, without any other changes.
      4. https://github.com/paritytech/polkadot-sdk/pull/4846, interface
      changes to make approval-voting runnable on a separate thread. **No
      functional changes**
5. This PR, where we instantiate an `approval-voting-parallel` subsystem
that runs the logic currently in `approval-distribution` and
`approval-voting` on different workers.
6. The next step, after these changes get merged and deployed, would be to
bring all the files from approval-distribution, approval-voting and
approval-voting-parallel into a single Rust crate, to make the structure
easier to maintain and understand.
      
      ## Results
Running subsystem-benchmarks with 1000 validators, 100 fully occupied cores
and triggering all assignments and approvals for all tranches.

#### Approval does not lag behind.
       Master
      ```
      Chain selection approved  after 72500 ms hash=0x0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a
      ```
      With this PoC
      ```
      Chain selection approved  after 3500 ms hash=0x0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a
      ```
      
      #### Gathering enough assignments
       
Enough assignments are gathered in less than 500 ms, which guarantees that
unnecessary work does not get triggered; on master, on the same benchmark,
that number goes above 32 seconds because the subsystems fall behind on
work.
       
      <img width="2240" alt="Screenshot 2024-06-20 at 15 48 22"
      src="https://github.com/paritytech/polkadot-sdk/assets/49718502/d2f2b29c-5ff6-44b4-a245-5b37ab8e58bc">
      
      
      #### Cpu usage:
      Master
      ```
      CPU usage, seconds                     total   per block
      approval-distribution                96.9436      9.6944
      approval-voting                     117.4676     11.7468
      test-environment                     44.0092      4.4009
      ```
      With this PoC
      ```
      CPU usage, seconds                     total   per block
      approval-distribution                 0.0014      0.0001 --- unused
approval-voting                       0.0437      0.0044   --- unused
      approval-voting-parallel              5.9560      0.5956
      approval-voting-parallel-0           22.9073      2.2907
      approval-voting-parallel-1           23.0417      2.3042
      approval-voting-parallel-2           22.0445      2.2045
      approval-voting-parallel-3           22.7234      2.2723
      approval-voting-parallel-4           21.9788      2.1979
      approval-voting-parallel-5           23.0601      2.3060
      approval-voting-parallel-6           22.4805      2.2481
      approval-voting-parallel-7           21.8330      2.1833
approval-voting-parallel-db          37.1954      3.7195   --- the approval-voting thread.
      ```
      
      # Enablement strategy
      
Because only some trivial plumbing is needed in approval-distribution
and approval-voting to be able to run things in parallel, and because
these subsystems play a critical part in the system, this PR proposes
that we keep both ways of running the approval work: as separate
subsystems, and as a single subsystem (`approval-voting-parallel`) which
has multiple workers for the distribution work and one worker for the
approval-voting work, switching between them with a command-line flag.
      
The benefits of this are twofold.
1. With the same polkadot binary we can easily switch just a few
validators to use the parallel approach and gradually make this the
default way of running, if no issues arise.
2. In the worst-case scenario, where it becomes the default way of running
things but we discover critical issues with it, we have a path to
quickly disable it by asking validators to adjust their command-line
flags.
      
      
      # Next steps
- [x] Make sure through various testing that we are not missing anything
- [x] Polish the implementations to make them production ready
- [x] Add unit tests for approval-voting-parallel.
- [x] Define and implement the strategy for rolling out this change, so that
the blast radius is minimal (a single validator) in case there are problems
with the implementation.
- [x] Versi long-running tests.
- [x] Add relevant metrics.
      
@ordian @eskimor @sandreim @AndreiEres, let me know what you think.
      
      ---------
      
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
  5. Sep 25, 2024
• xcm-executor: validate destinations for ReserveWithdraw and Teleport transfers (#5660) · b5ac7a9d
      Adrian Catangiu authored
      
      This change adds the required validation for stronger UX guarantees when
      using `InitiateReserveWithdraw` or `InitiateTeleport` XCM instructions.
      Execution of the instructions will fail if the local chain is not
      configured to trust the "destination" or "reserve" chain as a
      reserve/trusted-teleporter for the provided "assets".
      
With this change, misuse of `InitiateReserveWithdraw`/`InitiateTeleport`
fails on origin with no overall side-effects, rather than failing on
destination (with side-effects to the origin's asset issuance).
      
      The commit also makes the same validations for pallet-xcm transfers, and
      adds regression tests.
      
      ---------
      
Signed-off-by: Adrian Catangiu <adrian@parity.io>
Co-authored-by: Branislav Kontur <bkontur@gmail.com>
• MBM `try-runtime` support (#4251) · cc6a5130
      Liam Aharon authored
      
      # MBM try-runtime support
      
This MR adds support to the try-runtime trait such that the
try-runtime-CLI will be able to support MBM testing
[here](https://github.com/paritytech/try-runtime-cli/pull/90). It mainly
adds two feature-gated hooks to the `SteppedMigration` trait to
facilitate testing. These hooks are named `pre_upgrade` and
`post_upgrade` and have the same signature and implications as for
single-block migrations.
      
      ## Integration
      
      To make use of this in your Multi-Block-Migration, just implement the
      two new hooks and test pre- and post-conditions in them:
      
      ```rust
      #[cfg(feature = "try-runtime")]
      fn pre_upgrade() -> Result<Vec<u8>, frame_support::sp_runtime::TryRuntimeError> {
      	// ...
      }
      
      #[cfg(feature = "try-runtime")]
      fn post_upgrade(prev: Vec<u8>) -> Result<(), frame_support::sp_runtime::TryRuntimeError> {
          // ...
      }
      ```
      
      You may return an error or panic in these functions to indicate failure.
      This will then show up in the try-runtime-CLI and can be used in CI for
      testing.
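
As a concrete illustration of the pattern (the storage items and the pallet
context below are hypothetical, not part of this MR), a migration could
snapshot a cheap invariant in `pre_upgrade`, pass it through the encoded
`Vec<u8>`, and verify it in `post_upgrade`:

```rust
use codec::{Decode, Encode};

#[cfg(feature = "try-runtime")]
fn pre_upgrade() -> Result<Vec<u8>, frame_support::sp_runtime::TryRuntimeError> {
	// Hypothetical storage item: snapshot how many entries need migrating.
	let count = OldItems::<T>::iter().count() as u32;
	Ok(count.encode())
}

#[cfg(feature = "try-runtime")]
fn post_upgrade(prev: Vec<u8>) -> Result<(), frame_support::sp_runtime::TryRuntimeError> {
	let old_count = u32::decode(&mut &prev[..])
		.map_err(|_| "failed to decode pre_upgrade state")?;
	// Hypothetical storage item: every old entry must have been migrated.
	frame_support::ensure!(
		NewItems::<T>::iter().count() as u32 == old_count,
		"migrated item count mismatch"
	);
	Ok(())
}
```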
      
      Changes:
      - Adds `try-runtime` gated methods `pre_upgrade` and `post_upgrade` on
      `SteppedMigration`
      - Adds `try-runtime` gated methods `nth_pre_upgrade` and
      `nth_post_upgrade` on `SteppedMigrations`
      - Modifies `pallet_migrations` implementation to run pre_upgrade and
      post_upgrade steps at the appropriate times, and panic in the event of
      migration failure.
      
      ---------
      
Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Signed-off-by: georgepisaltu <george.pisaltu@parity.io>
Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: claravanstaden <claravanstaden64@gmail.com>
Co-authored-by: ggwpez <ggwpez@users.noreply.github.com>
Co-authored-by: georgepisaltu <george.pisaltu@parity.io>
  6. Sep 24, 2024
• Bridges lane id agnostic for backwards compatibility (#5649) · 710e74dd
      Branislav Kontur authored
      
      This PR primarily fixes the issue with
      `zombienet-bridges-0001-asset-transfer-works` (see:
      https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/7404903).
      
      The PR looks large, but most of the changes involve splitting `LaneId`
      into `LegacyLaneId` and `HashedLaneId`. All pallets now use `LaneId` as
      a generic parameter.
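
For illustration only (the type definitions below are stand-ins, not the
actual declarations from the bridges primitives), "`LaneId` as a generic
parameter" means a pallet is written over any lane-id type and each runtime
picks the legacy or the hashed flavour:

```rust
// Stand-in lane-id flavours.
#[derive(Debug, Clone, Copy)]
struct LegacyLaneId([u8; 4]);
#[derive(Debug, Clone, Copy)]
struct HashedLaneId([u8; 32]);

// A pallet-like type that is generic over the lane-id type it stores.
#[derive(Debug)]
struct Messages<LaneId> {
    active_lanes: Vec<LaneId>,
}

fn main() {
    let legacy = Messages { active_lanes: vec![LegacyLaneId(*b"0001")] };
    let hashed = Messages { active_lanes: vec![HashedLaneId([0u8; 32])] };
    println!("{legacy:?}\n{hashed:?}");
}
```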
      
The actual bridging pallets are now backward compatible and work with
the current **substrate-relay v1.6.10**, which does not even know anything
about permissionless lanes or the new pallet changes.
      
      
      
      ## Important
      
- [x] added a migration for `pallet_bridge_relayers`; `RewardsAccountParams`
changed the order of its params, which generates different
accounts
      
      ## Deployment follow ups
      - [ ] fix monitoring for
      `at_{}_relay_{}_reward_for_msgs_from_{}_on_lane_{}`
      - [ ] check sovereign reward accounts - because of changed
      `RewardsAccountParams`
- [ ] deploy additional messages pallet instances for permissionless lanes - on
BHs or AHs?
- [ ] bring back `open_and_close_bridge_works` for another
`pallet-bridge-messages` instance
      
      ---------
      
Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: Adrian Catangiu <adrian@parity.io>
Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
• snowbridge: improve destination fee handling to avoid trapping fees dust (#5563) · 62534e53
      Adrian Catangiu authored
On messages Ethereum -> Polkadot Asset Hub, whether they are a token
transfer or a `Transact` for registering a new token, make sure to handle
unspent fees rather than trapping them.

This PR deposits them into Snowbridge's sovereign account on Asset Hub.
      
      ---------
      
      Co-authored-by: command-bot <>
  7. Sep 23, 2024
• elastic scaling: add core selector to cumulus (#5372) · b9eb68bc
      Alin Dima authored
      Partially implements
      https://github.com/paritytech/polkadot-sdk/issues/5048
      
      - adds a core selection runtime API to cumulus and a generic way of
      configuring it for a parachain
- modifies the slot-based collator to utilise the claim queue and the
generic core selection
      
      What's left to be implemented (in a follow-up PR):
      - add the UMP signal for core selection into the parachain-system pallet
      
      View the RFC for more context:
      https://github.com/polkadot-fellows/RFCs/pull/103
      
      ---------
      
      Co-authored-by: command-bot <>
  8. Sep 22, 2024
• Fix RPC relay chain interface (#5796) · 128f6c79
      Bastian Köcher authored
      Use `sp_core::Bytes` as `payload` to encode the values correctly as
      `hex` string.
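
A small illustration (assuming the `sp-core` and `serde_json` crates) of why
`sp_core::Bytes` helps here: it serializes as a `0x`-prefixed hex string
rather than a JSON array of numbers.

```rust
fn main() {
    let payload = sp_core::Bytes(vec![0xde, 0xad, 0xbe, 0xef]);
    // Prints "0xdeadbeef".
    println!("{}", serde_json::to_string(&payload).expect("serializable"));
}
```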
• Moved presets to the testnet runtimes (#5327) · 8735c663
      Branislav Kontur authored
      
This is a first step towards switching to `frame-omni-bencher` for CI.
      
      This PR includes several changes related to generating chain specs plus:
      
      - [x] pallet `assigned_slots` fix missing `#[serde(skip)]` for phantom
      - [x] pallet `paras_inherent` benchmark fix - cherry-picked from
      https://github.com/paritytech/polkadot-sdk/pull/5688
      - [x] migrates `get_preset` to the relevant runtimes
      - [x] fixes Rococo genesis presets - does not work
      https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/7317249
      - [x] fixes Rococo benchmarks for CI 
      - [x] migrate westend genesis
      - [x] remove wococo stuff
      
      Closes: https://github.com/paritytech/polkadot-sdk/issues/5680
      
      ## Follow-ups
      - Fix for frame-omni-bencher
      https://github.com/paritytech/polkadot-sdk/pull/5655
      - Enable new short-benchmarking CI -
      https://github.com/paritytech/polkadot-sdk/pull/5706
      - Remove gitlab pipelines for short benchmarking
      - refactor all Cumulus runtimes to use `get_preset` -
      https://github.com/paritytech/polkadot-sdk/issues/5704
      - https://github.com/paritytech/polkadot-sdk/issues/5705
      - https://github.com/paritytech/polkadot-sdk/issues/5700
      - [ ] Backport to the stable
      
      ---------
      
      Co-authored-by: command-bot <>
Co-authored-by: ordian <noreply@reusable.software>
  9. Sep 20, 2024
  10. Sep 19, 2024
• cumulus/minimal-node: added prometheus metrics for the RPC client (#5572) · c8d5e5a3
      Iulian Barbu authored
      
      # Description
      
      When we start a node with connections to external RPC servers (as a
      minimal node), we lack metrics around how many individual calls we're
      doing to the remote RPC servers and their duration. This PR adds metrics
      that measure durations of each RPC call made by the minimal nodes, and
      implicitly how many calls there are.
      
      Closes #5409 
      Closes #5689
      
      ## Integration
      
Node operators should be able to track minimal node metrics and decide
on appropriate actions according to how the metrics are interpreted.
The added metrics can be observed by curl'ing the Prometheus metrics
endpoint of the ~relaychain~ parachain (this was changed based on the
review). The metrics live under the
~`polkadot_parachain_relay_chain_rpc_interface`~
`relay_chain_rpc_interface` namespace (I realized lining up
`parachain_relay_chain` in the same metric name might be confusing :).
      Excerpt from the curl:
      
      ```
      relay_chain_rpc_interface_bucket{method="chain_getBlockHash",chain="rococo_local_testnet",le="0.001"} 15
      relay_chain_rpc_interface_bucket{method="chain_getBlockHash",chain="rococo_local_testnet",le="0.004"} 23
      relay_chain_rpc_interface_bucket{method="chain_getBlockHash",chain="rococo_local_testnet",le="0.016"} 23
      relay_chain_rpc_interface_bucket{method="chain_getBlockHash",chain="rococo_local_testnet",le="0.064"} 23
      relay_chain_rpc_interface_bucket{method="chain_getBlockHash",chain="rococo_local_testnet",le="0.256"} 24
      relay_chain_rpc_interface_bucket{method="chain_getBlockHash",chain="rococo_local_testnet",le="1.024"} 24
      relay_chain_rpc_interface_bucket{method="chain_getBlockHash",chain="rococo_local_testnet",le="4.096"} 24
      relay_chain_rpc_interface_bucket{method="chain_getBlockHash",chain="rococo_local_testnet",le="16.384"} 24
      relay_chain_rpc_interface_bucket{method="chain_getBlockHash",chain="rococo_local_testnet",le="65.536"} 24
      relay_chain_rpc_interface_bucket{method="chain_getBlockHash",chain="rococo_local_testnet",le="+Inf"} 24
      relay_chain_rpc_interface_sum{method="chain_getBlockHash",chain="rococo_local_testnet"} 0.11719075
      relay_chain_rpc_interface_count{method="chain_getBlockHash",chain="rococo_local_testnet"} 24
      ```
      
      ## Review Notes
      
The way we measure durations/hits is based on the `HistogramVec` struct,
which allows us to collect timings for each RPC client method called
from the minimal node. It can be extended to measure the RPCs along
other dimensions too (status codes, response sizes, etc). The timing
measurement is done at the level of the `relay-chain-rpc-interface`, in
the `RelayChainRpcClient` struct's `request_tracing` method, a single
entry point for all RPC requests done through the
relay-chain-rpc-interface. The request durations fall into exponential
buckets described by start `0.001`, factor `4` and count
`9`.
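
For reference, the resulting bucket boundaries can be reproduced with the
standard `prometheus` crate helper (used here only as an illustration; the
node has its own Prometheus utilities); they match the `le` labels in the
excerpt above:

```rust
fn main() {
    // start = 0.001, factor = 4, count = 9 -> 0.001, 0.004, 0.016, ... 65.536
    let buckets = prometheus::exponential_buckets(0.001, 4.0, 9).expect("valid parameters");
    println!("{buckets:?}");
}
```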
      
      ---------
      
Signed-off-by: Iulian Barbu <iulian.barbu@parity.io>
• [xcm-emulator] Better logs for message execution and processing (#5712) · b230b0e3
      Francisco Aguirre authored
When running XCM emulated tests and looking at the logs with `RUST_LOG=xcm`
or `RUST_LOG=xcm=trace`, it's sometimes a bit hard to figure out which
chain the logs are coming from.

I added a log whenever `execute_with` is called, so you know which chain
produces the logs that follow. Looks like so:
      
      <img width="1499" alt="Screenshot 2024-09-13 at 20 14 13"
      src="https://github.com/user-attachments/assets/a31d7aa4-11d1-4d3e-9a65-86f38347c880">
      
      There are already log targets for when UMP, DMP and HRMP messages are
      being processed. To see them, you have to use the log targets `ump`,
      `dmp`, and `hrmp` respectively. So `RUST_LOG=xcm,ump,dmp,hrmp` would let
      you see every log.
      I prefixed the targets with `xcm::` so you can get all the relevant logs
      just by filtering by `xcm`. You can always use the whole target to see
      just the messages being processed.
      
These logs showed the message as an array of bytes; I made them show a
hexadecimal string instead, since that's easier to copy in case you want
to decode it or use it in another tool. They look like this now:
      
      <img width="1499" alt="Screenshot 2024-09-13 at 20 17 15"
      src="https://github.com/user-attachments/assets/5abf4a97-1ea7-4832-b3b0-d54c54905d1a">
      
      The HRMP and UMP ones are very similar.
  11. Sep 17, 2024
• Syncing strategy refactoring (part 2) (#5666) · 43cd6fd4
      Nazar Mokrynskyi authored
      # Description
      
      Follow-up to https://github.com/paritytech/polkadot-sdk/pull/5469 and
      mostly covering https://github.com/paritytech/polkadot-sdk/issues/5333.
      
The primary change here is that the syncing strategy is no longer created
inside the syncing engine; instead, the syncing strategy is an argument of
the syncing engine, more specifically an argument to `build_network`,
which most downstream users will use. This also extracts the addition of
request-response protocols out of network construction, making sure
they are physically not present when they don't need to be (imagine, for
example, a syncing strategy that uses none of Substrate's protocols in its
implementation).

This technically allows completely replacing the syncing strategy with
whatever strategy a chain might need.
      
      There will be at least one follow-up PR that will simplify
      `SyncingStrategy` trait and other public interfaces to remove mentions
      of block/state/warp sync requests, replacing them with generic APIs,
      such that strategies where warp sync is not applicable don't have to
      provide dummy method implementations, etc.
      
      ## Integration
      
Downstream projects will have to write a bit of boilerplate, calling the
`build_polkadot_syncing_strategy` function to create the previously default
syncing strategy.
      
      ## Review Notes
      
Please review the PR through individual commits rather than the final diff;
it will be easier that way. The changes are mostly just moving code
around one step at a time.
      
      # Checklist
      
      * [x] My PR includes a detailed description as outlined in the
      "Description" and its two subsections above.
      * [x] My PR follows the [labeling requirements](
      
      https://github.com/paritytech/polkadot-sdk/blob/master/docs/contributor/CONTRIBUTING.md#Process
      ) of this project (at minimum one label for `T` required)
      * External contributors: ask maintainers to put the right label on your
      PR.
      * [x] I have made corresponding changes to the documentation (if
      applicable)
  12. Sep 16, 2024
• Ensure correct product name in license headers (#5702) · 9064fb4d
      Przemek Rzad authored
      - This will ensure that a correct product name
      (Polkadot/Cumulus/Substrate) is referenced in license headers.
      - Closes https://github.com/paritytech/license-scanner/issues/49
• Asset Hub: auto incremented asset id for trust backed assets (#5687) · 22bdc3e5
      Muharem Ismailov authored
Set up an auto-incremented asset id, starting at `50_000_000`, for trust
backed assets.

This is in order to align with Polkadot/Kusama Asset Hub -
https://github.com/polkadot-fellows/runtimes/pull/414
The next closest existing asset ID on Rococo is `69_696_969`, and on
Westend it is `88_228_866`.
      
      ### Migration
      **Stakeholders**: all clients providing asset creation functionality on
      Westend/Rococo Asset Hub
      
This change does not break the API but introduces a new constraint. It
implements an auto-incremented ID strategy for Trust-Backed Assets (pallet
instance index 50 on both networks), starting at ID 50,000,000. Each new
asset must be created with an ID that is one greater than the last asset
created. The next ID can be fetched from the `NextAssetId` storage item of
the assets pallet. An empty `NextAssetId` storage item indicates no
constraint on the next asset ID and can serve as a feature flag for this
release.
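
A minimal sketch (hypothetical helper, client-side view) of the constraint
described above:

```rust
/// `next_asset_id` mirrors the `NextAssetId` storage item: `None` (empty
/// storage) means no constraint yet, otherwise the requested id must be
/// exactly the auto-incremented one.
fn creation_allowed(next_asset_id: Option<u32>, requested_id: u32) -> bool {
    match next_asset_id {
        None => true,
        Some(next) => requested_id == next,
    }
}

fn main() {
    assert!(creation_allowed(None, 1_234));
    assert!(creation_allowed(Some(50_000_000), 50_000_000));
    assert!(!creation_allowed(Some(50_000_000), 50_000_001));
}
```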
  13. Sep 13, 2024
• Transfer Polkadot-native assets to Ethereum (#5546) · fb7300ce
      Ron authored
      # Description
      
Adds support for sending Polkadot-native assets (PNA) to the Ethereum
network through Snowbridge. Assets with the following locations, from the
point of view of AH, are included:

- Relay token `(1, Here)`
- Native assets `(0, [PalletInstance(instance), GeneralIndex(index)])`
managed by the Assets pallet
- Native assets of a parachain `(1, [Parachain(paraId)])` managed by the
Foreign Assets pallet
      
The original PR is https://github.com/Snowfork/polkadot-sdk/pull/128,
which has been internally reviewed by the Snowbridge team.
      
      # Notes
      
- This feature depends on the companion Solidity change in
https://github.com/Snowfork/snowbridge/pull/1155. Currently, registering a
PNA is only allowed from
[sudo](https://github.com/Snowfork/polkadot-sdk/blob/46cb3528/bridges/snowbridge/pallets/system/src/lib.rs#L621),
so it's effectively not enabled. It will require another runtime upgrade to
make the call permissionless, together with upgrading the Gateway contract.

- To keep things simple, multi-hop transfer (i.e. sending PNA from Ethereum
through AH to a destination chain) is not supported in this PR. For this
case, users can switch to a 2-phase transfer instead.
      
      ---------
      
Co-authored-by: Clara van Staden <claravanstaden64@gmail.com>
Co-authored-by: Alistair Singh <alistair.singh7@gmail.com>
Co-authored-by: Vincent Geddes <117534+vgeddes@users.noreply.github.com>
Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com>
Co-authored-by: Adrian Catangiu <adrian@parity.io>
  14. Sep 12, 2024
  15. Sep 10, 2024
  16. Sep 07, 2024
• Add debugging info for `StorageWeightReclaim` (#5594) · 016421ac
      José Molina Colmenero authored
      
      When inspecting the logs we often encounter the following message:
      
      `Benchmarked storage weight smaller than consumed storage weight.
      benchmarked: {benchmarked_weight} consumed: {consumed_weight} unspent:
      {unspent}`
      
However, it is very hard to guess which call is causing the issue.

With the changes proposed in this PR, information about the call is
provided, so that we can easily identify the source of the problem
without further delay and thus work more efficiently on solving the
issue.
      
      ---------
      
Co-authored-by: Bastian Köcher <git@kchr.de>
  17. Sep 05, 2024
• Add benchmark for the number of minimum cpu cores (#5127) · a947cb83
      Alexandru Gheorghe authored
      
      Fixes: https://github.com/paritytech/polkadot-sdk/issues/5122.
      
      This PR extends the existing single core `benchmark_cpu` to also build a
      score of the entire processor by spawning `EXPECTED_NUM_CORES(8)`
      threads and averaging their throughput.
      
This is better than simply checking the number of cores, because it also
covers multi-tenant environments where the OS sees a high number of
available CPUs but, because they are shared with the rest of its
neighbours, the total throughput does not satisfy the minimum
requirements.
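
A rough sketch of the multi-core scoring idea (the workload and names below
are stand-ins, not the actual `benchmark_cpu` code): spawn
`EXPECTED_NUM_CORES` threads, benchmark each one, and average the throughput.

```rust
use std::thread;
use std::time::Instant;

const EXPECTED_NUM_CORES: usize = 8;

// Stand-in single-core workload; the real benchmark hashes data instead.
fn single_core_throughput() -> f64 {
    const ITERS: u64 = 50_000_000;
    let start = Instant::now();
    let mut x: u64 = 0;
    for i in 0..ITERS {
        x = x.wrapping_add(i).rotate_left(7);
    }
    std::hint::black_box(x);
    ITERS as f64 / start.elapsed().as_secs_f64()
}

fn main() {
    let handles: Vec<_> = (0..EXPECTED_NUM_CORES)
        .map(|_| thread::spawn(single_core_throughput))
        .collect();
    let total: f64 = handles.into_iter().map(|h| h.join().expect("no panic")).sum();
    println!("average per-core throughput: {:.0} ops/s", total / EXPECTED_NUM_CORES as f64);
}
```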
      
      
      ## TODO
      - [x] Obtain reference values on the reference hardware.
      
      ---------
      
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
• Clear other messages before dry-run to get only the ones produced during (#5581) · 8d9ebcd5
      Francisco Aguirre authored
      
      The dry-run shows in `forwarded_xcms` all the messages in the queues at
      the time of calling the API.
      Each time the API is called, the result could be different.
      You could get messages even if you dry-run something that doesn't send a
      message, like a `System::remark`.
      
      This PR fixes this by clearing the message queues before doing the
      dry-run, so the only messages left are the ones the users of the API
      actually care about.
      
      ---------
      
Co-authored-by: Adrian Catangiu <adrian@parity.io>
  18. Sep 04, 2024
• Collective: dynamic deposit based on number of proposals (#3151) · cc3b7bbd
      Muharem Ismailov authored
      
      Introduce a dynamic proposal deposit mechanism influenced by the total
      number of active proposals, with the option to set the deposit to none.
      
      The potential cost (e.g., balance hold) for proposal submission and
      storage is determined by the implementation of the `Consideration`
      trait. The footprint is defined as `proposal_count`, representing the
      total number of active proposals in the system, excluding the one
      currently being proposed. This cost may vary based on the proposal
      count. The pallet also offers various types to define a cost strategy
      based on the number of proposals.
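
For intuition only (the function below is hypothetical, not one of the
pallet's actual types), one such strategy could price a submission linearly
in the number of active proposals:

```rust
// Hypothetical linear cost strategy: a base deposit plus a per-proposal step.
fn proposal_deposit(base: u128, per_active_proposal: u128, active_proposals: u32) -> u128 {
    base.saturating_add(per_active_proposal.saturating_mul(active_proposals as u128))
}

fn main() {
    // The first proposal pays only the base; later ones pay more as the
    // footprint (active proposal count) grows.
    assert_eq!(proposal_deposit(100, 25, 0), 100);
    assert_eq!(proposal_deposit(100, 25, 4), 200);
}
```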
      
Two new calls are introduced:
- `kill(origin, proposal_hash)`: cancels a proposal and burns the
associated cost/consideration ticket.
- `release_proposal_cost(origin, proposal_hash)`: releases the cost of a
non-active proposal.
      
Additional changes:
- benchmarks have been upgraded to benchmarks::v2 for the collective pallet;
- an `ensure_successful` function was added to `Consideration` under the
`runtime-benchmarks` feature.
      
      ---------
      
      Co-authored-by: command-bot <>
Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: GitHub Action <action@github.com>
  19. Sep 03, 2024
• cumulus/client: added external rpc connection retry logic (#5515) · 4d2f7932
      Iulian Barbu authored
      
      # Description
      
      Adds retry logic that makes the RPC relay chain interface more reliable
      for the cases of a collator connecting to external RPC servers.
      
      Closes #5514 
      Closes #4278
      
      Final solution still debated on #5514 , what this PR addresses might
      change (e.g. #4278 might require a more advanced approach).
      
      ## Integration
      
Users that start collators should barely observe any difference from this
logic, since the retry logic applies only when the collators fail to
connect to the RPC servers. In practice I assume the RPC servers are
already live before starting collators, so the issue isn't visible.
      
      ## Review Notes
      
The added retry logic is for retrying the connection to the RPC servers
(of which there can be multiple). It lives at the level of the
cumulus/client/relay-chain-rpc-interface module, more specifically in the
RPC clients logic (`ClientManager`). The retry logic is not configurable;
it tries to connect to the RPC servers 5 times, with an exponential
backoff between iterations, starting with a 1-second wait and ending with
16 seconds. The same logic is applied in case an existing connection to an
RPC server is dropped. There is a `ReconnectingWebsocketWorker` which
ensures there is connectivity to at least one RPC node, and the retry
logic makes this stronger by insisting on trying connections to the RPC
server list 5 times.
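
To make the timing concrete, the backoff sequence described above (1 second
doubling up to 16 seconds across 5 attempts) looks like this; the snippet is
only an illustration of the schedule, not the crate's implementation:

```rust
use std::time::Duration;

fn main() {
    // Waits between connection attempts: 1, 2, 4, 8 and 16 seconds.
    let waits: Vec<Duration> = (0..5).map(|i| Duration::from_secs(1u64 << i)).collect();
    println!("{waits:?}");
}
```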
      
      ## Testing
      
      - This was tested manually by starting zombienet natively based on
      [006-rpc_collator_builds_blocks.toml](https://github.com/paritytech/polkadot-sdk/blob/master/cumulus/zombienet/tests/0006-rpc_collator_builds_blocks.toml)
      and observing collators don't fail anymore:
      
      ```bash
      zombienet -l text --dir zbn-run -f --provider native spawn polkadot-sdk/cumulus/zombienet/tests/0006-rpc_collator_builds_blocks.toml
      ```
      
- Added a unit test that exercises the retry logic for a client connection
to a server that comes online after 10 seconds. The retry logic can wait
for as long as 30 seconds, but I thought that is too much for a unit test.
I am just being conscious of CI time if CI runs this test, but I am happy
to see suggestions around it too. I am not sure whether it runs in CI;
I haven't figured that out entirely yet. The test can be considered an
integration test too, but it exercises crate-internal implementation, not
the public API.
      
      Collators example logs after the change:
      ```
      2024-08-29 14:28:11.730  INFO tokio-runtime-worker reconnecting-websocket-client: [Parachain] Trying to connect to next external relaychain node. current_iteration=0 index=2 url="ws://127.0.0.1:37427/"
      2024-08-29 14:28:12.737  INFO tokio-runtime-worker reconnecting-websocket-client: [Parachain] Trying to connect to next external relaychain node. current_iteration=1 index=0 url="ws://127.0.0.1:43617/"
      2024-08-29 14:28:12.739  INFO tokio-runtime-worker reconnecting-websocket-client: [Parachain] Trying to connect to next external relaychain node. current_iteration=1 index=1 url="ws://127.0.0.1:37965/"
      2024-08-29 14:28:12.755  INFO tokio-runtime-worker reconnecting-websocket-client: [Parachain] Trying to connect to next external relaychain node. current_iteration=1 index=2 url="ws://127.0.0.1:37427/"
      2024-08-29 14:28:14.758  INFO tokio-runtime-worker reconnecting-websocket-client: [Parachain] Trying to connect to next external relaychain node. current_iteration=2 index=0 url="ws://127.0.0.1:43617/"
      2024-08-29 14:28:14.759  INFO tokio-runtime-worker reconnecting-websocket-client: [Parachain] Trying to connect to next external relaychain node. current_iteration=2 index=1 url="ws://127.0.0.1:37965/"
      2024-08-29 14:28:14.760  INFO tokio-runtime-worker reconnecting-websocket-client: [Parachain] Trying to connect to next external relaychain node. current_iteration=2 index=2 url="ws://127.0.0.1:37427/"
      2024-08-29 14:28:18.766  INFO tokio-runtime-worker reconnecting-websocket-client: [Parachain] Trying to connect to next external relaychain node. current_iteration=3 index=0 url="ws://127.0.0.1:43617/"
      2024-08-29 14:28:18.768  INFO tokio-runtime-worker reconnecting-websocket-client: [Parachain] Trying to connect to next external relaychain node. current_iteration=3 index=1 url="ws://127.0.0.1:37965/"
      2024-08-29 14:28:18.768  INFO tokio-runtime-worker reconnecting-websocket-client: [Parachain] Trying to connect to next external relaychain node. current_iteration=3 index=2 url="ws://127.0.0.1:37427/"
      2024-08-29 14:28:26.770  INFO tokio-runtime-worker reconnecting-websocket-client: [Parachain] Trying to connect to next external relaychain node. current_iteration=4 index=0 url="ws://127.0.0.1:43617/"
      ```
      
      ---------
      
Signed-off-by: Iulian Barbu <iulian.barbu@parity.io>
Co-authored-by: Sebastian Kunert <skunert49@gmail.com>
  20. Sep 02, 2024
• Snowbridge free consensus updates (#5201) · c8015b2e
      Clara van Staden authored
      
Allow free Snowbridge consensus updates if the header interval is larger
than the configured value (set to 32, so once an epoch).

This PR also moves the Rococo Snowbridge pallet config into its own
module.
      
      Original PR: https://github.com/Snowfork/polkadot-sdk/pull/159
      
      ---------
      
Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com>
• Elastic scaling: introduce new candidate receipt primitive (#5322) · ad2ac0db
      Andrei Sandu authored
      
      closes https://github.com/paritytech/polkadot-sdk/issues/5044
      
This PR switches the runtime to the new receipts format (vstaging
primitives). I've implemented `From` to convert from the new primitives to
`v7` primitives and used them in the node's runtime api client
implementation. Until we implement support in the node, it will continue
to use the v7 primitives, but the runtime APIs already use the new
primitives.
      
      
An expected downside of RFC103 is that decoding V2 receipts shows garbage
values if the input is V1:
      
      _![ima_9ce77de](https://github.com/user-attachments/assets/71d80e78-e238-4518-8cd1-548ae0d74b70)_
      
      TODO:
      - [x] fix tests
      - [x] A few more tests for the new primitives
      - [x] PRDoc
      
      ---------
      
Signed-off-by: Andrei Sandu <andrei-mihail@parity.io>
• [bridges-v2] Permissionless lanes (#4949) · 22100999
      Branislav Kontur authored
      Relates to:
      https://github.com/paritytech/parity-bridges-common/issues/2451
      Closes: https://github.com/paritytech/parity-bridges-common/issues/2500
      
      ## Summary
      
      Now, the bridging pallet supports only static lanes, which means lanes
      that are hard-coded in the runtime files. This PR fixes that and adds
      support for dynamic, also known as permissionless, lanes. This means
      that allowed origins (relay chain, sibling parachains) can open and
      close bridges (through BridgeHubs) with another bridged (substrate-like)
      consensus using just `xcm::Transact` and `OriginKind::Xcm`.
      
      _This PR is based on the migrated code from the Bridges V2
      [branch](https://github.com/paritytech/polkadot-sdk/pull/4427) from the
      old `parity-bridges-common`
      [repo](https://github.com/paritytech/parity-bridges-common/tree/bridges-v2)._
      
      ## Explanation
      
      Please read
      [bridges/modules/xcm-bridge-hub/src/lib.rs](https://github.com/paritytech/polkadot-sdk/blob/149b0ac2/bridg...
• Improve `sc-service` API (#5364) · da654103
      Nazar Mokrynskyi authored
      
      This improves `sc-service` API by not requiring the whole
      `&Configuration`, using specific configuration options instead.
      `RpcConfiguration` was also extracted from `Configuration` to group all
      RPC options together.
      
We don't use Substrate's CLI and would rather not use `Configuration`
either, but some key public functions require it even though they ignore
most of the fields anyway.

`RpcConfiguration` is very helpful not just for consolidating the fields,
but also for finally making RPC optional for our use case; Substrate still
runs an RPC server on localhost even if the listening address is
explicitly set to `None`, which is annoying (and I suspect there is a
reason for it, so I didn't want to change the default just yet).
      
      While this is a breaking change, most developers will not notice it if
      they use higher-level APIs.
      
      Fixes https://github.com/paritytech/polkadot-sdk/issues/2897
      
      ---------
      
Co-authored-by: Niklas Adolfsson <niklasadolfsson1@gmail.com>
• Swaps for XCM delivery fees (#5131) · 5291412e
      Francisco Aguirre authored
      # Context
      
      Fees can already be paid in other assets locally thanks to the Trader
      implementations we have.
      This doesn't work when sending messages because delivery fees go through
      a different mechanism altogether.
The idea is to fix this by leveraging the `AssetExchanger` config item,
which is able to turn the asset the user wants to pay fees in into the
asset the router expects for delivery fees.
      
      # Main addition
      
      An adapter was needed to use `pallet-asset-conversion` for exchanging
      assets in XCM.
      This was created in
      https://github.com/paritytech/polkadot-sdk/pull/5130.
      
      The XCM executor was modified to use `AssetExchanger` (when available)
      to swap assets to pay for delivery fees.
      
      ## Limitations
      
      We can only pay for delivery fees in different assets in intermediate
      hops. We can't pay in different assets locally. The first hop will
      always need the native token of the chain (or whatever is specified in
      the `XcmRouter`).
      This is a byproduct of using the `BuyExecution` instruction to know
      which asset should be used for delivery fee payment.
      Since this instruction is not present when executing an XCM locally, we
      are left with this limitation.
      To illustrate this limitation, I'll show two scenarios. All chains
      involved have pools.
      
      ### Scenario 1
      
      Parachain A --> Parachain B
      
      Here, parachain A can use any asset in a pool with its native asset to
      pay for local execution fees.
      However, as of now we can't use those for local delivery fees.
      This means transfers from A to B need some amount of A's native token to
      pay for delivery fees.
      
      ### Scenario 2
      
      Parachain A --> Parachain C --> Parachain B
      
      Here, Parachain C's remote delivery fees can be paid with any asset in a
      pool with its native asset.
      This allows a reserve asset transfer between A and B with C as the
      reserve to only need A's native token at the starting hop.
      After that, it could all be pool assets.
      
      ## Future work
      
      The fact that delivery fees go through a totally different mechanism
      results in a lot of bugs and pain points.
      Unfortunately, this is not so easy to solve in a backwards compatible
      manner.
      Delivery fees will be integrated into the language in future XCM
      versions, following
      https://github.com/polkadot-fellows/xcm-format/pull/53.
      
      Old PR: https://github.com/paritytech/polkadot-sdk/pull/4375.
  21. Aug 30, 2024
  22. Aug 28, 2024
• polkadot-parachain: Add omni-node variant with u64 block number (#5269) · c4ced11f
      Serban Iorga authored
      Related to https://github.com/paritytech/polkadot-sdk/issues/4787
      
      The main changes in this PR are the following:
      - making the NodeSpec logic generic on the Block type
      - adding an omni-node variant with u64 block number
      
Apart from this, the PR also moves some of the logic in `service.rs` to
the `common` subfolder.

The omni-node variant with u64 block number is not used yet. We have to
either expose the option in the CLI or read the block number from the
chain spec somehow. We will do this in a future PR.
• IBP Coretime Polkadot bootnodes (#5499) · ef3a0d8f
      A Ahmad authored
      
Co-authored-by: Dónal Murray <donal.murray@parity.io>
• rpc server: listen to `ipv6 socket` if available and... · 09254eb9
      Niklas Adolfsson authored
      rpc server: listen to `ipv6 socket` if available and `--experimental-rpc-endpoint` CLI option (#4792)
      
      Close https://github.com/paritytech/polkadot-sdk/issues/3488,
      https://github.com/paritytech/polkadot-sdk/issues/4331
      
      This changes/adds the following:
      
1. The default setting is that Substrate starts an RPC server that listens
on localhost, on both IPv4 and IPv6, on the same port. IPv6 is allowed to
fail because some platforms may not support it.
2. A new RPC CLI option `--experimental-rpc-endpoint` which allows
configuring arbitrary listen addresses, including the port. If this is
enabled, no other interfaces are enabled.
3. If the local address is not found for any of the sockets, the server is
not started and an error is thrown.
4. Remove `deny_unsafe` from the RPC implementations; instead this is an
extension that allows different policies for different interfaces/sockets,
such that one may enable unsafe on a local interface and safe-only on the
external interface.
      
      So for instance in this PR it's now possible to start up three RPC...