  1. Feb 12, 2024
• Fix(zombienet): allow finality_lag to be 1 or less and increase timeout (#3299) · 044d914c
      Javier Viola authored
      Change `parityDb` test assertions after a quick check with @alexggh in
      order to resolve failures in
`zombienet-polkadot-misc-0001-parachains-paritydb` (e.g.
      https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/5186277).
      Thx!
• Lift dependencies to the workspace (Part 1) (#2070) · e80c2473
      Oliver Tale-Yazdi authored
      Changes (partial https://github.com/paritytech/polkadot-sdk/issues/994):
      - Set log to `0.4.20` everywhere
      - Lift `log` to the workspace
      
      Starting with a simpler one after seeing
https://github.com/paritytech/polkadot-sdk/pull/2065 from @jsdw.

This sets the `default-features` to `false` in the root and then
overwrites that in each crate to its original value. This is necessary
since otherwise the `default` features are additive and it's impossible
to disable them in the crate again once they are enabled in the
workspace.

I am using a tool to do this, so it's mostly a test to see that it works
      as expected.
      
      ---------
      
Signed-off-by: Oliver Tale-Yazdi <[email protected]>
• statement-distribution: Fix CostMinor("Unexpected Statement") (#3223) · 8362a681
      Alexandru Gheorghe authored
      
      
On grid distribution, messages have two paths for reaching a node, so
there is the possibility of a race when two peers send each other the
same statement around the same time. The statement's local_knowledge
will tell us that the peer should not have sent the statement, because
we already sent it to that peer.

Fix it by also keeping track of the statements we received from a given
peer and penalizing it only if it sends us the same statement more than
once.
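
As a rough sketch of that idea (the `PeerId`, `StatementFingerprint` and
`Action` types below are simplified stand-ins, not the actual
statement-distribution code), we remember what each peer has sent us and
only report a cost when the same peer repeats a statement:

```rust
use std::collections::{HashMap, HashSet};

/// Hypothetical per-peer bookkeeping, illustrating the fix described above.
#[derive(Default)]
struct PeerKnowledge {
    /// Statements this peer has already sent us.
    received: HashSet<StatementFingerprint>,
}

#[derive(Clone, PartialEq, Eq, Hash)]
struct StatementFingerprint(u64);

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct PeerId(u32);

enum Action {
    /// First time the peer sends us this statement: accept it, even if we
    /// already sent the same statement to that peer (the benign race).
    Accept,
    /// The peer sent us the exact same statement again: penalize it.
    ReportDuplicate,
}

fn handle_incoming(
    knowledge: &mut HashMap<PeerId, PeerKnowledge>,
    peer: PeerId,
    statement: StatementFingerprint,
) -> Action {
    let entry = knowledge.entry(peer).or_default();
    if entry.received.insert(statement) {
        Action::Accept
    } else {
        Action::ReportDuplicate
    }
}
```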
      
      Fixes: https://github.com/paritytech/polkadot-sdk/issues/2346
      
Additionally, use different `Cost` labels for the different paths to
make it easier to debug things.
      
      ---------
      
Signed-off-by: Alexandru Gheorghe <[email protected]>
• subsystem-bench: polish imports (#3262) · cbd68467
      Andrei Eres authored
  2. Feb 11, 2024
• refactor pvf security module (#3047) · 4883e144
      maksimryndin authored
      
      
      resolve https://github.com/paritytech/polkadot-sdk/issues/2321
      
- [x] refactor the `security` module into a conditionally compiled module
- [x] rename `amd64` to `x86-64` for consistency with conditional
compilation guards and remove the reference to a particular vendor
      - [x] run unit tests and zombienet
      
      ---------
      
Co-authored-by: s0me0ne-unkn0wn <[email protected]>
  3. Feb 09, 2024
• Add forklift to remaining jobs (#3236) · edd95b37
      Eugen Snitko authored
      Add [forklift
      caching](https://gitlab.parity.io/parity/infrastructure/ci_cd/forklift/forklift)
to remaining jobs
      
      by .sh and .py scripts:
      - cargo-check-each-crate x6 (`.gitlab/check-each-crate.py`)
      - build-linux-stable (`polkadot/scripts/build-only-wasm.sh`)
      
      by before_script:
      - build-linux-substrate
      - build-subkey-linux (with `.build-subkey` job)
      - cargo-check-benches x2
      
**To disable the feature, set the FORKLIFT_BYPASS variable to true in [project
      settings in
      gitlab](https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/settings/ci_cd)**
      (forklift now handles FORKLIFT_BYPASS by itself)
• [Backport] Version bumps from 1.7.0 release (#3254) · b2c81b58
      Egor_P authored
      This PR backports version bumps from `1.7.0` release branch and moves
      related prdoc files to the appropriate folder.
  4. Feb 08, 2024
• Fixes `TotalValueLocked` out of sync in nomination pools (#3052) · aac07af0
      Gonçalo Pestana authored
The `TotalValueLocked` storage value in the nomination pools pallet may
get out of sync if the staking pallet does an implicit withdrawal of
unlocking chunks belonging to a bonded pool stash. This fix is based on
a new method in the `OnStakingUpdate` trait, `on_withdraw`, which allows
the nomination pools pallet to adjust `TotalValueLocked` every time
there is an implicit or explicit withdrawal from a bonded pool's stash.
      
      This PR also adds a migration that checks and updates the on-chain TVL
      if it got out of sync due to the bug this PR fixes.
      
      **Changes to `trait OnStakingUpdate`**
      
In order for staking to notify the nomination pools pallet that chunks
were withdrawn, we add a new method, `on_withdraw`, to the
`OnStakingUpdate` trait. The nomination pools pallet filters the
withdrawals that are related to bonded pool accounts and updates
`TotalValueLocked` accordingly.
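
A hedged sketch of the shape of such a hook (simplified account and
balance types, not the exact polkadot-sdk trait signature): staking
calls `on_withdraw` for every withdrawal, and the pools side only
adjusts its TVL when the stash belongs to a bonded pool.

```rust
/// Simplified sketch of the hook described above; the real `OnStakingUpdate`
/// trait in polkadot-sdk has more methods and different generic bounds.
trait OnStakingUpdate<AccountId, Balance> {
    /// Called by staking whenever `amount` is withdrawn from `stash`,
    /// whether the withdrawal was explicit or implicit.
    fn on_withdraw(_stash: &AccountId, _amount: Balance) {}
}

/// Hypothetical listener on the nomination-pools side.
struct PoolsListener;

impl PoolsListener {
    /// Stand-in for the storage lookup that checks whether `stash`
    /// belongs to a bonded pool.
    fn is_bonded_pool_stash(_stash: &u64) -> bool {
        true
    }
}

impl OnStakingUpdate<u64, u128> for PoolsListener {
    fn on_withdraw(stash: &u64, amount: u128) {
        // Only withdrawals from bonded-pool stashes affect the pools' TVL.
        if Self::is_bonded_pool_stash(stash) {
            // In the real pallet: TotalValueLocked -= amount (saturating).
            let _ = amount;
        }
    }
}
```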
      
      **Others**
      - Adds try-state checks to the EPM/staking e2e tests
      - Adds tests for auto withdrawing in the context of nomination pools
      
      **To-do**
      - [x] check if we need a migration to fix the current `TotalValueLocked`
      (run try-runtime)
      - [x] migrations to fix the current on-chain TVL value 
      
        **Kusama**:
      ```
      TotalValueLocked: 99.4559 kKSM
      TotalValueLocked (calculated) 99.4559 kKSM
      ```
      
      
**Westend**:
      ```
      TotalValueLocked: 18.4060 kWND
      TotalValueLocked (calculated) 18.4050 kWND
      ```
      **Polkadot**: TVL not released yet.
      
      Closes https://github.com/paritytech/polkadot-sdk/issues/3055
      
      ---------
      
      Co-authored-by: command-bot <>
Co-authored-by: Ross Bulat <[email protected]>
Co-authored-by: Dónal Murray <[email protected]>
• `bench pallet`: only require `Hash` instead of `Block` (#3244) · c36c51ca
      Oliver Tale-Yazdi authored
      
      
      Preparation for https://github.com/paritytech/polkadot-sdk/issues/2664
      
      Changes:
      - Only require `Hash` instead of `Block` for the benchmarking
      - Refactor DB types to do the same
      
      ## Integration
      
      This breaking change can easily be integrated into your node via:  
      ```patch
      - cmd.run::<Block, ()>(config)
      + cmd.run::<HashingFor<Block>, ()>(config)
      ```
      
      Status: waiting for CI checks
      
      ---------
      
Signed-off-by: Oliver Tale-Yazdi <[email protected]>
Co-authored-by: Bastian Köcher <[email protected]>
Co-authored-by: cheme <[email protected]>
• subsystem-bench: run cli benchmarks only using config files (#3239) · 07f85929
      Andrei Eres authored
This PR removes the configuration of subsystem benchmarks via CLI
arguments. After this, configurations are kept only in YAML files, which
removes unnecessary code duplication.
• Add try_state and integrity_test to XCM simulator fuzzer (#3222) · 84d89e37
      Louis Merlin authored
      This adds `try_state()` and `integrity_test()` to the four runtimes of
      the XCM-simulator fuzzer.
      
      With this, we are able to stress-test [message-queue's
      try_state](https://github.com/paritytech/polkadot-sdk/blob/7df1ae3b/substrate/frame/message-queue/src/lib.rs#L1245-L1347).
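
Roughly, the added checks run after every fuzz iteration, along these
lines (the trait below is a stand-in for FRAME's
`try_state`/`integrity_test` hooks, which the fuzzer invokes on the four
simulator runtimes):

```rust
/// Stand-in for a runtime that exposes FRAME-style invariant checks.
trait RuntimeInvariants {
    /// Mirrors `try_state`: verifies storage invariants after execution.
    fn try_state() -> Result<(), &'static str>;
    /// Mirrors `integrity_test`: verifies static pallet configuration.
    fn integrity_test();
}

/// One fuzz iteration: execute the generated XCM messages, then assert
/// that no runtime invariant was broken by them.
fn fuzz_iteration<R: RuntimeInvariants>(xcm_messages: &[Vec<u8>]) {
    for _msg in xcm_messages {
        // ... decode and execute the message against the simulated runtime ...
    }
    R::integrity_test();
    R::try_state().expect("try_state invariant violated by fuzz input");
}
```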
      
      This also adds the `Transact` block-listing from #2424 to avoid
      false-positives.
      
      Thank you @ggwpez for the help with the runtime configurations.
  5. Feb 06, 2024
• subsystem-bench: Prepare CI output (#3158) · 9e6298e7
      Andrei Eres authored
      
      
      1. Benchmark results are collected in a single struct.
      2. The output of the results is prettified.
3. The result struct is used to save the output as YAML and store it as
an artifact in a CI job (see the sketch after the examples below).
      
      ```
      $ cargo run -p polkadot-subsystem-bench --release -- test-sequence --path polkadot/node/subsystem-bench/examples/availability_read.yaml | tee output.txt
      $ cat output.txt
      
      polkadot/node/subsystem-bench/examples/availability_read.yaml #1
      
      Network usage, KiB                     total   per block
      Received from peers               510796.000  170265.333
      Sent to peers                        221.000      73.667
      
      CPU usage, s                           total   per block
      availability-recovery                 38.671      12.890
      Test environment                       0.255       0.085
      
      
      polkadot/node/subsystem-bench/examples/availability_read.yaml #2
      
      Network usage, KiB                     total   per block
      Received from peers               413633.000  137877.667
      Sent to peers                        353.000     117.667
      
      CPU usage, s                           total   per block
      availability-recovery                 52.630      17.543
      Test environment                       0.271       0.090
      
      
      polkadot/node/subsystem-bench/examples/availability_read.yaml #3
      
      Network usage, KiB                     total   per block
      Received from peers               424379.000  141459.667
      Sent to peers                        703.000     234.333
      
      CPU usage, s                           total   per block
      availability-recovery                 51.128      17.043
      Test environment                       0.502       0.167
      
      ```
      
      ```
      $ cargo run -p polkadot-subsystem-bench --release -- --ci test-sequence --path polkadot/node/subsystem-bench/examples/availability_read.yaml | tee output.txt
      $ cat output.txt
      - benchmark_name: 'polkadot/node/subsystem-bench/examples/availability_read.yaml #1'
        network:
        - resource: Received from peers
          total: 509011.0
          per_block: 169670.33333333334
        - resource: Sent to peers
          total: 220.0
          per_block: 73.33333333333333
        cpu:
        - resource: availability-recovery
          total: 31.845848445
          per_block: 10.615282815
        - resource: Test environment
          total: 0.23582828799999941
          per_block: 0.07860942933333313
      
      - benchmark_name: 'polkadot/node/subsystem-bench/examples/availability_read.yaml #2'
        network:
        - resource: Received from peers
          total: 411738.0
          per_block: 137246.0
        - resource: Sent to peers
          total: 351.0
          per_block: 117.0
        cpu:
        - resource: availability-recovery
          total: 18.93596025099999
          per_block: 6.31198675033333
        - resource: Test environment
          total: 0.2541994199999979
          per_block: 0.0847331399999993
      
      - benchmark_name: 'polkadot/node/subsystem-bench/examples/availability_read.yaml #3'
        network:
        - resource: Received from peers
          total: 424548.0
          per_block: 141516.0
        - resource: Sent to peers
          total: 703.0
          per_block: 234.33333333333334
        cpu:
        - resource: availability-recovery
          total: 16.54178526900001
          per_block: 5.513928423000003
        - resource: Test environment
          total: 0.43960946299999537
          per_block: 0.14653648766666513
      ```
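
For illustration, a result struct of roughly this shape would serialize
to the YAML above with `serde_yaml` (a sketch only; the actual struct in
`polkadot-subsystem-bench` may differ in names and types):

```rust
use serde::Serialize;

/// Sketch only: a struct of this shape serializes to YAML like the output
/// above. The real struct lives in polkadot-subsystem-bench and may differ.
#[derive(Serialize)]
struct ResourceUsage {
    resource: String,
    total: f64,
    per_block: f64,
}

#[derive(Serialize)]
struct BenchmarkOutput {
    benchmark_name: String,
    network: Vec<ResourceUsage>,
    cpu: Vec<ResourceUsage>,
}

fn main() -> Result<(), serde_yaml::Error> {
    let output = vec![BenchmarkOutput {
        benchmark_name: "example.yaml #1".into(),
        network: vec![ResourceUsage {
            resource: "Received from peers".into(),
            total: 509011.0,
            per_block: 169670.33,
        }],
        cpu: vec![ResourceUsage {
            resource: "availability-recovery".into(),
            total: 31.84,
            per_block: 10.61,
        }],
    }];
    // Prints a YAML sequence similar to the CI artifact shown above.
    println!("{}", serde_yaml::to_string(&output)?);
    Ok(())
}
```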
      
      ---------
      
Co-authored-by: Andrei Sandu <[email protected]>
• [pallet_xcm] Forgotten migration to XCMv4 + added `try-state` to the `pallet_xcm` (#3228) · 8c1c99f0
      Branislav Kontur authored
      Relates to: https://github.com/paritytech/polkadot-sdk/issues/3214
      
      ## TODO
      
      - [ ] backport to the `1.7.0` release
• Ranked collective `Add`+`Remove` origins (#3212) · c552fb54
      Oliver Tale-Yazdi authored
      
      
Supersedes https://github.com/paritytech/polkadot-sdk/pull/1245
      
This PR is a migration of
https://github.com/paritytech/substrate/pull/14577.
      
The PR adds associated types (`AddOrigin` & `RemoveOrigin`) to
`Config`. This allows you to decouple types and areas of responsibility,
since at the moment the same types are responsible for adding and
promoting (and for removing and demoting). This will improve the
flexibility of the pallet configuration.
      
      ```
      /// The origin required to add a member.
      type AddOrigin: EnsureOrigin<Self::RuntimeOrigin, Success = ()>;
      
      /// The origin required to remove a member. The success value indicates the
      /// maximum rank *from which* the removal may be.
      type RemoveOrigin: EnsureOrigin<Self::RuntimeOrigin, Success = Rank>;
      ```
To achieve backward compatibility, users of the pallet can use the old
types via the new morph:
      
      ```
      type AddOrigin = MapSuccess<Self::PromoteOrigin, Ignore>;
      type RemoveOrigin = Self::DemoteOrigin;
      ```
      
      ---------
      
Signed-off-by: Oliver Tale-Yazdi <[email protected]>
Co-authored-by: PraetorP <[email protected]>
Co-authored-by: Pavel Orlov <[email protected]>
• prospective-parachains: respond with multiple backable candidates (#3160) · 7df1ae3b
      Alin Dima authored
      Fixes https://github.com/paritytech/polkadot-sdk/issues/3129
  6. Feb 05, 2024
• Introduce approval-voting/distribution benchmark (#2621) · f9f88688
      Alexandru Gheorghe authored
      
      
      ## Summary
      Built on top of the tooling and ideas introduced in
      https://github.com/paritytech/polkadot-sdk/pull/2528, this PR introduces
      a synthetic benchmark for measuring and assessing the performance
      characteristics of the approval-voting and approval-distribution
      subsystems.
      
Currently this allows us to simulate the behaviour of these subsystems
based on the following dimensions:
      ```
      TestConfiguration:
      # Test 1
      - objective: !ApprovalsTest
          last_considered_tranche: 89
          min_coalesce: 1
          max_coalesce: 6
          enable_assignments_v2: true
          send_till_tranche: 60
          stop_when_approved: false
          coalesce_tranche_diff: 12
          workdir_prefix: "/tmp"
          num_no_shows_per_candidate: 0
          approval_distribution_expected_tof: 6.0
          approval_distribution_cpu_ms: 3.0
          approval_voting_cpu_ms: 4.30
        n_validators: 500
        n_cores: 100
        n_included_candidates: 100
        min_pov_size: 1120
        max_pov_size: 5120
        peer_bandwidth: 524288000000
        bandwidth: 524288000000
        latency:
          min_latency:
            secs: 0
            nanos: 1000000
          max_latency:
            secs: 0
            nanos: 100000000
        error: 0
        num_blocks: 10
      ```
      
      ## The approach
      1. We build a real overseer with the real implementations for
      approval-voting and approval-distribution subsystems.
2. For a given network size, for each validator we pre-compute all
potential assignments and approvals it would send; because this is a
computation-heavy operation, the result is cached in a file on disk and
re-used if the generation parameters don't change (see the sketch after
this list).
3. The messages are then sent according to the configured parameters and
are split into 3 main benchmarking scenarios.
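
A minimal sketch of the caching idea from step 2 (file name, parameter
set and encoding are illustrative, not the benchmark's actual format):
key the cache file on a hash of the generation parameters so the
expensive generation only re-runs when they change.

```rust
use std::collections::hash_map::DefaultHasher;
use std::fs;
use std::hash::{Hash, Hasher};
use std::path::PathBuf;

/// Illustrative subset of the parameters that determine the generated
/// assignments/approvals.
#[derive(Hash)]
struct GenParams {
    n_validators: u32,
    n_cores: u32,
    num_blocks: u32,
}

fn cache_path(workdir: &str, params: &GenParams) -> PathBuf {
    let mut hasher = DefaultHasher::new();
    params.hash(&mut hasher);
    PathBuf::from(workdir).join(format!("assignments-{:x}.bin", hasher.finish()))
}

/// Load the pre-computed messages if the parameters match a previous run,
/// otherwise generate them (expensive) and persist them for the next run.
fn load_or_generate(workdir: &str, params: &GenParams) -> Vec<u8> {
    let path = cache_path(workdir, params);
    if let Ok(bytes) = fs::read(&path) {
        return bytes;
    }
    let generated = expensive_generation(params);
    let _ = fs::write(&path, &generated);
    generated
}

fn expensive_generation(_params: &GenParams) -> Vec<u8> {
    // Stand-in for pre-computing all assignments and approvals.
    vec![]
}
```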
      
      ## Benchmarking scenarios
      
      ### Best case scenario *approvals_throughput_best_case.yaml*
It sends to approval-distribution only the minimum required tranches to
gather the needed_approvals, so that a candidate is approved.
      
      ### Behaviour in the presence of no-shows *approvals_no_shows.yaml*
It sends the tranches needed to approve a candidate when we have a
maximum of *num_no_shows_per_candidate* tranches with no-shows for each
candidate.
      
      ### Maximum throughput *approvals_throughput.yaml*
It sends all the tranches for each block and measures the CPU and
network bandwidth used by the approval-voting and approval-distribution
subsystems.
      
      ## How to run it
      ```
      cargo run -p polkadot-subsystem-bench --release -- test-sequence --path polkadot/node/subsystem-bench/examples/approvals_throughput.yaml
      ```
      
      ## Evaluating performance
      ### Use the real subsystems metrics
      If you follow the steps in
      https://github.com/paritytech/polkadot-sdk/tree/master/polkadot/node/subsystem-bench#install-grafana
for installing Prometheus and Grafana locally, all real metrics for the
`approval-distribution`, `approval-voting` and overseer are available.
      E.g:
      <img width="2149" alt="Screenshot 2023-12-05 at 11 07 46"
      src="https://github.com/paritytech/polkadot-sdk/assets/49718502/cb8ae2dd-178b-4922-bfa4-dc37e572ed38">
      
      <img width="2551" alt="Screenshot 2023-12-05 at 11 09 42"
      src="https://github.com/paritytech/polkadot-sdk/assets/49718502/8b4542ba-88b9-46f9-9b70-cc345366081b">
      
      <img width="2154" alt="Screenshot 2023-12-05 at 11 10 15"
      src="https://github.com/paritytech/polkadot-sdk/assets/49718502/b8874d8d-632e-443a-9840-14ad8e90c54f">
      
      <img width="2535" alt="Screenshot 2023-12-05 at 11 10 52"
      src="https://github.com/paritytech/polkadot-sdk/assets/49718502/779a439f-fd18-4985-bb80-85d5afad78e2">
      
      ### Profile with pyroscope
1. Set up pyroscope following the steps in
https://github.com/paritytech/polkadot-sdk/tree/master/polkadot/node/subsystem-bench#install-pyroscope,
then run any of the benchmark scenarios with the `--profile` argument.
      2. Open the pyroscope dashboard in grafana, e.g:
      <img width="2544" alt="Screenshot 2024-01-09 at 17 09 58"
      src="https://github.com/paritytech/polkadot-sdk/assets/49718502/58f50c99-a910-4d20-951a-8b16639303d9">
      
      
      
### Useful logs
      1. Network bandwidth requirements:
      ```
      Payload bytes received from peers: 503993 KiB total, 50399 KiB/block
      Payload bytes sent to peers: 629971 KiB total, 62997 KiB/block
      ```
      
2. CPU usage by the approval-distribution/approval-voting subsystems.
      ```
      approval-distribution CPU usage 84.061s
      approval-distribution CPU usage per block 8.406s
      approval-voting CPU usage 96.532s
      approval-voting CPU usage per block 9.653s
      ```
      
      3. Time passed until a given block is approved
      ```
       Chain selection approved  after 3500 ms hash=0x0101010101010101010101010101010101010101010101010101010101010101
      Chain selection approved  after 4500 ms hash=0x0202020202020202020202020202020202020202020202020202020202020202
      ```
      
### Using the benchmark to quantify improvements from
      https://github.com/paritytech/polkadot-sdk/pull/1178 +
      https://github.com/paritytech/polkadot-sdk/pull/1191
      
Using a versi node, we compare the scenario where all new optimisations
are disabled with a scenario where tranche0 assignments are sent in a
single message, and a conservative simulation where the coalescing of
approvals gives us just a 50% reduction in the number of messages we
send.
      
      Overall, what we see is a speedup of around 30-40% in the time it takes
      to process the necessary messages and a 30-40% reduction in the
      necessary bandwidth.
      
#### Best case scenario comparison (minimum required tranches sent)
      Unoptimised
      ```
          Number of blocks: 10
          Payload bytes received from peers: 53289 KiB total, 5328 KiB/block
          Payload bytes sent to peers: 52489 KiB total, 5248 KiB/block
          approval-distribution CPU usage 6.732s
          approval-distribution CPU usage per block 0.673s
          approval-voting CPU usage 9.523s
          approval-voting CPU usage per block 0.952s
      ```
      
      vs Optimisation enabled
      ```
         Number of blocks: 10
         Payload bytes received from peers: 32141 KiB total, 3214 KiB/block
         Payload bytes sent to peers: 37314 KiB total, 3731 KiB/block
         approval-distribution CPU usage 4.658s
         approval-distribution CPU usage per block 0.466s
         approval-voting CPU usage 6.236s
         approval-voting CPU usage per block 0.624s
      ```
      
#### Worst case: all tranches sent (very unlikely, happens when sharding breaks)
      
      Unoptimised
      ```
         Number of blocks: 10
         Payload bytes received from peers: 746393 KiB total, 74639 KiB/block
         Payload bytes sent to peers: 729151 KiB total, 72915 KiB/block
         approval-distribution CPU usage 118.681s
         approval-distribution CPU usage per block 11.868s
         approval-voting CPU usage 124.118s
         approval-voting CPU usage per block 12.412s
      ```
      
      vs optimised
      ```
          Number of blocks: 10
          Payload bytes received from peers: 503993 KiB total, 50399 KiB/block
          Payload bytes sent to peers: 629971 KiB total, 62997 KiB/block
          approval-distribution CPU usage 84.061s
          approval-distribution CPU usage per block 8.406s
          approval-voting CPU usage 96.532s
          approval-voting CPU usage per block 9.653s
      ```
      
      
## TODOs
- [x] Polish implementation.
- [x] Use what we have so far to evaluate
https://github.com/paritytech/polkadot-sdk/pull/1191 before merging.
- [x] List of features and additional dimensions we want to use for
benchmarking.
- [x] Run benchmark on hardware similar to versi and kusama nodes.
- [ ] Add benchmark to be run in CI for catching regressions in
performance.
- [ ] Rebase on latest changes for network emulation.
      
      ---------
      
Signed-off-by: Andrei Sandu <[email protected]>
Signed-off-by: Alexandru Gheorghe <[email protected]>
Co-authored-by: Andrei Sandu <[email protected]>
Co-authored-by: Andrei Sandu <[email protected]>
  7. Feb 02, 2024
  8. Jan 31, 2024
• [FRAME] Make `core-fellowship` and `salary` work for swapped members (#3156) · 07e55006
      Oliver Tale-Yazdi authored
      Fixup for https://github.com/paritytech/polkadot-sdk/pull/2587 to make
      the `core-fellowship` crate work with swapped members.
      
Adds a `MemberSwappedHandler` to the `ranked-collective` pallet that is
implemented by `core-fellowship` and `salary`.
There are exhaustive tests
[here](https://github.com/paritytech/polkadot-sdk/blob/72aa7ac1/substrate/frame/core-fellowship/src/tests/integration.rs#L338)
and
[here](https://github.com/paritytech/polkadot-sdk/blob/ab3cdb05/substrate/frame/salary/src/tests/integration.rs#L224)
to check that adding member `1` is equivalent to adding member `0` and
then swapping.
      
      ---------
      
Signed-off-by: Oliver Tale-Yazdi <[email protected]>
• [xcm] Fix `SovereignPaidRemoteExporter` and `DepositAsset` handling (#3157) · 6ea472ad
      Branislav Kontur authored
      This PR addresses two issues:
      - It modifies `DepositAsset`'s asset filter from `All` to
      `AllCounted(1)` to prevent potentially charging excessive weight/fees.
      This adjustment avoids situations where fees could be calculated based
      on the count of assets, as illustrated
      [here](https://github.com/paritytech/polkadot-sdk/blob/master/cumulus/parachains/runtimes/bridge-hubs/bridge-hub-rococo/src/weights/xcm/mod.rs#L38-L46).
- It encapsulates `DepositAsset` with `SetAppendix` to ensure that
`fees` are not trapped in any case. For instance, this prevents issues
when `ExportXcm::validate` encounters an error during the processing of
`ExportMessage`. See the sketch after this list.
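
A hedged illustration of the resulting message shape (simplified
stand-in types rather than the real `xcm` crate): the deposit of
leftover fees moves into an appendix, which the executor runs even if
the main program errors, and the filter counts a single asset instead of
matching `All`.

```rust
/// Simplified stand-ins for the relevant XCM pieces; the real crate's types
/// and the exporter's exact program differ, this only shows the structure.
#[derive(Debug)]
enum AssetFilter {
    /// Old behaviour: matches everything, so weight/fees could scale
    /// with the number of assets present.
    All,
    /// New behaviour: matches at most `n` assets, i.e. `AllCounted(1)`.
    AllCounted(u32),
}

#[derive(Debug)]
enum Instruction {
    WithdrawAsset, // details elided
    BuyExecution,  // details elided
    /// The step that may fail if `ExportXcm::validate` errors.
    ExportMessage,
    DepositAsset { assets: AssetFilter },
    /// The appendix is run after the main program, even when it errors,
    /// so leftover fees are refunded rather than trapped.
    SetAppendix(Vec<Instruction>),
}

fn remote_export_program() -> Vec<Instruction> {
    vec![
        Instruction::WithdrawAsset,
        Instruction::BuyExecution,
        // Register the fee refund before attempting the export.
        Instruction::SetAppendix(vec![Instruction::DepositAsset {
            assets: AssetFilter::AllCounted(1),
        }]),
        Instruction::ExportMessage,
    ]
}
```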
• Bump bounded-collections from 0.1.9 to 0.2.0 (#3118) · 2adf499a
      dependabot[bot] authored
      
      
      Bumps [bounded-collections](https://github.com/paritytech/parity-common)
      from 0.1.9 to 0.2.0.
      <details>
      <summary>Commits</summary>
      <ul>
      <li>See full diff in <a
      href="https://github.com/paritytech/parity-common/commits/impl-rlp-v0.2.0">compare
      view</a></li>
      </ul>
      </details>
      <br />
      
      
      [![Dependabot compatibility
      score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=bounded-collections&package-manager=cargo&previous-version=0.1.9&new-version=0.2.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
      
      Dependabot will resolve any conflicts with this PR as long as you don't
      alter it yourself. You can also trigger a rebase manually by commenting
      `@dependabot rebase`.
      
      [//]: # (dependabot-automerge-start)
      [//]: # (dependabot-automerge-end)
      
      ---
      
      <details>
      <summary>Dependabot commands and options</summary>
      <br />
      
      You can trigger Dependabot actions by commenting on this PR:
      - `@dependabot rebase` will rebase this PR
      - `@dependabot recreate` will recreate this PR, overwriting any edits
      that have been made to it
      - `@dependabot merge` will merge this PR after your CI passes on it
      - `@dependabot squash and merge` will squash and merge this PR after
      your CI passes on it
      - `@dependabot cancel merge` will cancel a previously requested merge
      and block automerging
      - `@dependabot reopen` will reopen this PR if it is closed
      - `@dependabot close` will close this PR and stop Dependabot recreating
      it. You can achieve the same result by closing it manually
      - `@dependabot show <dependency name> ignore conditions` will show all
      of the ignore conditions of the specified dependency
      - `@dependabot ignore <dependency name> major version` will close this
      group update PR and stop Dependabot creating any more for the specific
      dependency's major version (unless you unignore this specific
      dependency's major version or upgrade to it yourself)
      - `@dependabot ignore <dependency name> minor version` will close this
      group update PR and stop Dependabot creating any more for the specific
      dependency's minor version (unless you unignore this specific
      dependency's minor version or upgrade to it yourself)
      - `@dependabot ignore <dependency name>` will close this group update PR
      and stop Dependabot creating any more for the specific dependency
      (unless you unignore this specific dependency or upgrade to it yourself)
      - `@dependabot unignore <dependency name>` will remove all of the ignore
      conditions of the specified dependency
      - `@dependabot unignore <dependency name> <ignore condition>` will
      remove the ignore condition of the specified dependency and ignore
      conditions
      
      
      </details>
      
Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Liam Aharon <[email protected]>
• xcm-executor: DepositReserveAsset charges delivery fees from inner assets (#3142) · 5354097a
      Adrian Catangiu authored
      
      
      This fix aims to solve an issue in Kusama that resulted in failed
      reserve asset transfers.
      
      During multi-hop XCMs, like reserve asset transfers where the reserve is
      not the sender nor the destination, but a third remote chain, the origin
      is not available to pay for delivery fees out of their account directly,
      so delivery fees should be paid out of transferred assets.
      
      This commit also adds an xcm-emulator regression test that validates
      this scenario is now working.
      
Signed-off-by: Adrian Catangiu <[email protected]>
Co-authored-by: Francisco Aguirre <[email protected]>
• [frame] `#[pallet::composite_enum]` improved variant count handling + removed... · bb8ddc46
      Branislav Kontur authored
      [frame] `#[pallet::composite_enum]` improved variant count handling + removed `pallet_balances`'s `MaxHolds` config (#2657)
      
I started this investigation/issue based on @liamaharon's question
[here](https://github.com/paritytech/polkadot-sdk/pull/1801#discussion_r1410452499).
      
      ## Problem
      
The `pallet_balances` integrity test should correctly detect that the
runtime has the correct number of distinct `HoldReason` variants. I
assume the same situation exists for `RuntimeFreezeReason`.

It is not a critical problem: if we set `MaxHolds` to a sufficiently
large value, everything should be ok. However, in this case, the
`integrity_test` check becomes less useful.
      
      **Situation for "any" runtime:**
      - `HoldReason` enums from different pallets:
      ```rust
              /// from pallet_nis
              #[pallet::composite_enum]
      	pub enum HoldReason {
      		NftReceipt,
      	}
      
              /// from pallet_preimage
              #[pallet::composite_enum]
      	pub enum HoldReason {
      		Preimage,
      	}
      
              // from pallet_state-trie-migration
              #[pallet::composite_enum]
      	pub enum HoldReason {
      		SlashForContinueMigrate,
      		SlashForMigrateCustomTop,
      		SlashForMigrateCustomChild,
      	}
      ```
      
      - generated `RuntimeHoldReason` enum looks like:
      ```rust
      pub enum RuntimeHoldReason {
      
          #[codec(index = 32u8)]
          Preimage(pallet_preimage::HoldReason),
      
          #[codec(index = 38u8)]
          Nis(pallet_nis::HoldReason),
      
          #[codec(index = 42u8)]
          StateTrieMigration(pallet_state_trie_migration::HoldReason),
      }
      ```
      
      - composite enum `RuntimeHoldReason` variant count is detected as `3`
      - we set `type MaxHolds = ConstU32<3>`
      - `pallet_balances::integrity_test` is ok with `3`(at least 3)
      
However, the real problem can occur in a live runtime where some
functionality might stop working. This is due to a total of 5 distinct
hold reasons (for pallets with multi-instance support, there are even
more), and not all of them can be used because of an incorrect
`MaxHolds`, which is deemed acceptable according to the `integrity_test`:
        ```
        // pseudo-code - if we try to call all of these:
      
      T::Currency::hold(&pallet_nis::HoldReason::NftReceipt.into(),
      &nft_owner, deposit)?;
      T::Currency::hold(&pallet_preimage::HoldReason::Preimage.into(),
      &nft_owner, deposit)?;
      
      T::Currency::hold(&pallet_state_trie_migration::HoldReason::SlashForContinueMigrate.into(),
      &nft_owner, deposit)?;
      
        // With `type MaxHolds = ConstU32<3>` these two will fail
      
      T::Currency::hold(&pallet_state_trie_migration::HoldReason::SlashForMigrateCustomTop.into(),
      &nft_owner, deposit)?;
      
      T::Currency::hold(&pallet_state_trie_migration::HoldReason::SlashForMigrateCustomChild.into(),
      &nft_owner, deposit)?;
        ```  
      
      
      ## Solutions
      
The `#[pallet::*]` macro expansion is extended with a `VariantCount`
implementation for the `#[pallet::composite_enum]` enum type. This
expansion generates the `VariantCount` implementation for pallets'
`HoldReason`, `FreezeReason`, `LockId`, and `SlashReason`. Enum variants
must be plain enum values without fields to ensure a deterministic
count.
      
      The composite runtime enum, `RuntimeHoldReason` and
      `RuntimeFreezeReason`, now sets `VariantCount::VARIANT_COUNT` as the sum
      of pallets' enum `VariantCount::VARIANT_COUNT`:
      ```rust
      #[frame_support::pallet(dev_mode)]
      mod module_single_instance {
      
      	#[pallet::composite_enum]
      	pub enum HoldReason {
      		ModuleSingleInstanceReason1,
      		ModuleSingleInstanceReason2,
      	}
      ...
      }
      
      #[frame_support::pallet(dev_mode)]
      mod module_multi_instance {
      
      	#[pallet::composite_enum]
      	pub enum HoldReason<I: 'static = ()> {
      		ModuleMultiInstanceReason1,
      		ModuleMultiInstanceReason2,
      		ModuleMultiInstanceReason3,
      	}
      ...
      }
      
      
      impl self::sp_api_hidden_includes_construct_runtime::hidden_include::traits::VariantCount
          for RuntimeHoldReason
      {
          const VARIANT_COUNT: u32 = 0
              + module_single_instance::HoldReason::VARIANT_COUNT
              + module_multi_instance::HoldReason::<module_multi_instance::Instance1>::VARIANT_COUNT
              + module_multi_instance::HoldReason::<module_multi_instance::Instance2>::VARIANT_COUNT
              + module_multi_instance::HoldReason::<module_multi_instance::Instance3>::VARIANT_COUNT;
      }
      ```
      
      In addition, `MaxHolds` is removed (as suggested
      [here](https://github.com/paritytech/polkadot-sdk/pull/2657#discussion_r1443324573))
      from `pallet_balances`, and its `Holds` are now bounded to
      `RuntimeHoldReason::VARIANT_COUNT`. Therefore, there is no need to let
      the runtime specify `MaxHolds`.
      
      
      ## For reviewers
      
      Relevant changes can be found here:
      - `substrate/frame/support/procedural/src/lib.rs` 
      -  `substrate/frame/support/procedural/src/pallet/parse/composite.rs`
      -  `substrate/frame/support/procedural/src/pallet/expand/composite.rs`
      -
      `substrate/frame/support/procedural/src/construct_runtime/expand/composite_helper.rs`
      -
      `substrate/frame/support/procedural/src/construct_runtime/expand/hold_reason.rs`
      -
      `substrate/frame/support/procedural/src/construct_runtime/expand/freeze_reason.rs`
      - `substrate/frame/support/src/traits/misc.rs`
      
The rest of the files are just about the removal of `MaxHolds` from
`pallet_balances`.
      
      ## Next steps
      
      Do the same for `MaxFreezes`
      https://github.com/paritytech/polkadot-sdk/issues/2997.
      
      ---------
      
      Co-authored-by: command-bot <>
Co-authored-by: Bastian Köcher <[email protected]>
Co-authored-by: Dónal Murray <[email protected]>
Co-authored-by: gupnik <[email protected]>
• Add limits to XCMv4 (#3114) · cc4805b5
      Francisco Aguirre authored
      
      
Some more work regarding XCMv4. Two limits from v3 were not carried
over, namely:
- The instructions limit
- The number of assets limit

Both of these are now in v4.
      
      For some reason `AssetInstance` increased in size, don't know why CI
      didn't catch that before.
      
      ---------
      
Co-authored-by: Bastian Köcher <[email protected]>
Co-authored-by: command-bot <>
  9. Jan 30, 2024
  10. Jan 29, 2024
  11. Jan 28, 2024