1. Mar 26, 2024
    • Fix spelling mistakes across the whole repository (#3808) · 002d9260
      Dcompoze authored
      **Update:** Pushed additional changes based on the review comments.
      
      **This pull request fixes various spelling mistakes in this
      repository.**
      
      Most of the changes are contained in the first **3** commits:
      
      - `Fix spelling mistakes in comments and docs`
      
      - `Fix spelling mistakes in test names`
      
      - `Fix spelling mistakes in error messages, panic messages, logs and
      tracing`
      
      Other source code spelling mistakes are separated into individual
      commits for easier reviewing:
      
      - `Fix the spelling of 'authority'`
      
      - `Fix the spelling of 'REASONABLE_HEADERS_IN_JUSTIFICATION_ANCESTRY'`
      
      - `Fix the spelling of 'prev_enqueud_messages'`
      
      - `Fix the spelling of 'endpoint'`
      
      - `Fix the spelling of 'children'`
      
      - `Fix the spelling of 'PenpalSiblingSovereignAccount'`
      
      - `Fix the spelling of 'PenpalSudoAccount'`
      
      - `Fix the spelling of 'insufficient'`
      
      - `Fix the spelling of 'PalletXcmExtrinsicsBenchmark'`
      
      - `Fix the spelling of 'subtracted'`
      
      - `Fix the spelling of 'CandidatePendingAvailability'`
      
      - `Fix the spelling of 'exclusive'`
      
      - `Fix the spelling of 'until'`
      
      - `Fix the spelling of 'discriminator'`
      
      - `Fix the spelling of 'nonexistent'`
      
      - `Fix the spelling of 'subsystem'`
      
      - `Fix the spelling of 'indices'`
      
      - `Fix the spelling of 'committed'`
      
      - `Fix the spelling of 'topology'`
      
      - `Fix the spelling of 'response'`
      
      - `Fix the spelling of 'beneficiary'`
      
      - `Fix the spelling of 'formatted'`
      
      - `Fix the spelling of 'UNKNOWN_PROOF_REQUEST'`
      
      - `Fix the spelling of 'succeeded'`
      
      - `Fix the spelling of 'reopened'`
      
      - `Fix the spelling of 'proposer'`
      
      - `Fix the spelling of 'InstantiationNonce'`
      
      - `Fix the spelling of 'depositor'`
      
      - `Fix the spelling of 'expiration'`
      
      - `Fix the spelling of 'phantom'`
      
      - `Fix the spelling of 'AggregatedKeyValue'`
      
      - `Fix the spelling of 'randomness'`
      
      - `Fix the spelling of 'defendant'`
      
      - `Fix the spelling of 'AquaticMammal'`
      
      - `Fix the spelling of 'transactions'`
      
      - `Fix the spelling of 'PassingTracingSubscriber'`
      
      - `Fix the spelling of 'TxSignaturePayload'`
      
      - `Fix the spelling of 'versioning'`
      
      - `Fix the spelling of 'descendant'`
      
      - `Fix the spelling of 'overridden'`
      
      - `Fix the spelling of 'network'`
      
      Let me know if this structure is adequate.
      
      **Note:** The usage of the words `Merkle`, `Merkelize`, `Merklization`,
      `Merkelization`, and `Merkleization` is somewhat inconsistent, but I left
      it as it is.
      
      ~~**Note:** In some places the term `Receival` is used to refer to
      message reception, IMO `Reception` is the correct word here, but I left
      it as it is.~~
      
      ~~**Note:** In some places the term `Overlayed` is used instead of the
      more acceptable version `Overlaid` but I also left it as it is.~~
      
      ~~**Note:** In some places the term `Applyable` is used instead of the
      correct version `Applicable` but I also left it as it is.~~
      
      **Note:** Both British and American English spellings, e.g. `judgement`
      vs `judgment`, `initialise` vs `initialize`, `optimise` vs `optimize`,
      appear in different places, but I suppose that's understandable given the
      number of contributors.
      
      ~~**Note:** There is a spelling mistake in `.github/CODEOWNERS` but it
      triggers errors in CI when I make changes to it, so I left it as it
      is.~~
  2. Mar 21, 2024
  3. Mar 20, 2024
    • Fix algorithmic complexity of on-demand scheduler with regards to number of cores. (#3190) · b74353d3
      eskimor authored
      
      
      We witnessed really poor performance on Rococo, where we ended up with
      50 on-demand cores. This was because the full queue was processed for
      each core. With this change, full queue processing will happen far less
      often (most of the time the complexity is O(1) or O(log(n))), and if it
      does happen, then only for one core (in expectation).
      
      Also, the spot price is now updated before each order to ensure economic
      back pressure.
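
      A minimal sketch of the two ideas above (a hypothetical `OnDemandQueue`
      type and a made-up linear pricing formula, not the pallet code): the spot
      price is recomputed from queue occupancy before every order, and handing
      an assignment to a core is a cheap front-of-queue pop rather than a
      full-queue scan.

      ```rust
      use std::collections::VecDeque;

      type ParaId = u32;
      type Balance = u128;

      struct OnDemandQueue {
          orders: VecDeque<ParaId>, // shared FIFO of paid orders
          capacity: usize,
          base_price: Balance,
      }

      impl OnDemandQueue {
          /// Recompute the spot price from current occupancy (made-up linear formula)
          /// *before* each order, so a filling queue immediately gets more expensive.
          fn spot_price(&self) -> Balance {
              let load = self.orders.len() as u128;
              self.base_price + self.base_price * load / self.capacity as u128
          }

          fn place_order(&mut self, para: ParaId, max_amount: Balance) -> Result<Balance, &'static str> {
              let price = self.spot_price();
              if max_amount < price {
                  return Err("max_amount below current spot price");
              }
              if self.orders.len() >= self.capacity {
                  return Err("queue full");
              }
              self.orders.push_back(para); // O(1)
              Ok(price)
          }

          /// Popping the next assignment is O(1); no per-core scan of the whole queue.
          fn pop_assignment_for_core(&mut self) -> Option<ParaId> {
              self.orders.pop_front()
          }
      }
      ```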
      
      
      TODO:
      
      - [x] Implement
      - [x] Basic tests
      - [x] Add more tests (see todos)
      - [x] Run benchmark to confirm better performance, first results suggest
      > 100x faster.
      - [x] Write migrations
      - [x] Bump scale-info version and remove patch in Cargo.toml
      - [x] Write PR docs: on-demand performance improved; more on-demand
      cores are no longer problematic. If need be, the max queue size can also
      be increased again (maybe not to 10k).
      
      Optional: Performance can be improved even more if we called
      `pop_assignment_for_core()` before calling `report_processed()` (avoiding
      needless affinity drops). The effect gets smaller the larger the claim
      queue is, and I would only go for it if it does not add complexity to the
      scheduler.
      
      ---------
      
      Co-authored-by: eskimor <[email protected]>
      Co-authored-by: antonva <[email protected]>
      Co-authored-by: command-bot <>
      Co-authored-by: Anton Vilhelm Ásgeirsson <[email protected]>
      Co-authored-by: ordian <[email protected]>
    • Expose `ClaimQueue` via a runtime api and use it in `collation-generation` (#3580) · e58e854a
      Tsvetomir Dimitrov authored
      The PR adds two things:
      1. A runtime API exposing the whole claim queue.
      2. Consumption of that API in `collation-generation` to fetch the next
      scheduled `ParaEntry` for an occupied core.
      
      Related to https://github.com/paritytech/polkadot-sdk/issues/1797
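
      A rough sketch of the shape of this (simplified: a hypothetical
      `ClaimQueue` alias, and `ParaEntry` reduced to a bare `ParaId`; the actual
      API signature may differ):

      ```rust
      use std::collections::{BTreeMap, VecDeque};

      type CoreIndex = u32;
      type ParaId = u32;

      /// What the new runtime API conceptually returns: for every core, the ordered
      /// list of paras that have claims on it.
      type ClaimQueue = BTreeMap<CoreIndex, VecDeque<ParaId>>;

      /// Consumer side in `collation-generation`: for an occupied core, look up which
      /// para is scheduled next once the current candidate clears.
      fn next_scheduled_on(claim_queue: &ClaimQueue, core: CoreIndex) -> Option<ParaId> {
          claim_queue.get(&core).and_then(|queue| queue.front().copied())
      }
      ```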
  4. Mar 19, 2024
    • Implement crypto byte array newtypes in term of a shared type (#3684) · 1e9fd237
      Davide Galassi authored
      Introduces `CryptoBytes` type defined as:
      
      ```rust
      pub struct CryptoBytes<const N: usize, Tag = ()>(pub [u8; N], PhantomData<fn() -> Tag>);
      ```
      
      The type implements a bunch of methods and traits which are typically
      expected from a byte array newtype
      (NOTE: some of the methods and trait implementations IMO are a bit
      redundant, but I decided to maintain them all to not change too much
      stuff in this PR)
      
      It also introduces two (generic) typical consumers of `CryptoBytes`:
      `PublicBytes` and `SignatureBytes`.
      
      ```rust
      pub struct PublicTag;
      pub type PublicBytes<const N: usize, CryptoTag> = CryptoBytes<N, (PublicTag, CryptoTag)>;

      pub struct SignatureTag;
      pub type SignatureBytes<const N: usize, CryptoTag> = CryptoBytes<N, (SignatureTag, CryptoTag)>;
      ```
      
      Both of them use a tag to differentiate the two types at a higher level.
      Downstream specializations will further specialize using a dedicated
      crypto tag. For example in ECDSA:
      
      
      ```rust
      pub struct EcdsaTag;
      
      pub type Public = PublicBytes<PUBLIC_KEY_SERIALIZED_SIZE, EcdsaTag>;
      pub type Signature = SignatureBytes<SIGNATURE_SERIALIZED_SIZE, EcdsaTag>;
      ```
      
      Overall we have cleaner and, most importantly, **consistent** code for
      all the types involved.

      All these details are opaque to the end user, who can use `Public` and
      `Signature` for the cryptos as before.
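
      A compact, standalone illustration of why the tag parameter is useful
      (not the `sp_core` code itself; the sizes and the `from_raw` helper are
      assumptions): two byte-array newtypes of the same shape become distinct
      types that cannot be mixed up.

      ```rust
      use core::marker::PhantomData;

      pub struct CryptoBytes<const N: usize, Tag = ()>(pub [u8; N], PhantomData<fn() -> Tag>);

      impl<const N: usize, Tag> CryptoBytes<N, Tag> {
          // Hypothetical constructor, just for the sake of the example.
          pub fn from_raw(inner: [u8; N]) -> Self {
              Self(inner, PhantomData)
          }
      }

      pub struct PublicTag;
      pub struct SignatureTag;
      pub struct EcdsaTag;

      pub type PublicBytes<const N: usize, CryptoTag> = CryptoBytes<N, (PublicTag, CryptoTag)>;
      pub type SignatureBytes<const N: usize, CryptoTag> = CryptoBytes<N, (SignatureTag, CryptoTag)>;

      // Assuming 33-byte compressed public keys and 65-byte recoverable signatures for ECDSA.
      pub type Public = PublicBytes<33, EcdsaTag>;
      pub type Signature = SignatureBytes<65, EcdsaTag>;

      fn verify(_public: &Public, _signature: &Signature) {
          // A `Signature` can never be passed where a `Public` is expected (and vice
          // versa), even though both are just fixed-size byte arrays under the hood.
      }
      ```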
  5. Mar 15, 2024
  6. Mar 11, 2024
  7. Mar 08, 2024
  8. Mar 07, 2024
    • fix(pallet-benchmarking): split test functions in v2 (#3574) · f4fbddec
      Pablo Andrés Dorado Suárez authored
      Closes #376
      
      ---------
      
      Co-authored-by: command-bot <>
    • ParaInherent create: update `apply_weight_limit` wrt elastic scaling (#3573) · c16fcc47
      Andrei Sandu authored
      Changes the way we perform the random selection of backed candidates
      when there isn't enough room for all of them. Instead of picking
      individual backed candidates, `apply_weight_limit` now operates on chains
      of candidates. This is fully backwards compatible and relies on the node
      side (provisioner/prospective parachains) doing the heavy lifting and
      providing the candidates in the order they form a chain.
      
      The same approach can be implemented for bitfields random selection once
      https://github.com/paritytech/polkadot-sdk/pull/3479 is merged.
      
      The approach taken in this PR aims for reduced additional complexity at
      the cost of being less fair wrt how many backed candidates from each
      chain are picked. It favors elastic scaling parachains vs parachains not
      using elastic scaling, but from my perspective it should be fine as this
      should happen under exceptional circumstances like dispute storms.
      
      Note: to make things fairer we can consider specializing `random_sel`
      such that it tries to pick candidates one by one in the order provided by
      the provisioner, so that non-elastic-scaling parachains have the same
      chance of getting a candidate backed.
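
      An illustrative, standalone version of selection over chains (a
      hypothetical `select_chains` helper assuming the `rand` crate, not the
      runtime's `apply_weight_limit`): chains are visited in random order and
      each is taken as the longest prefix that still fits the remaining weight
      budget, so candidates are never picked out of chain order.

      ```rust
      use rand::seq::SliceRandom;

      type Weight = u64;

      /// A backed candidate with the weight it would consume in the inherent.
      struct Backed {
          weight: Weight,
      }

      /// Returns `(chain index, number of candidates taken from its front)`.
      fn select_chains(chains: &[Vec<Backed>], limit: Weight) -> Vec<(usize, usize)> {
          let mut order: Vec<usize> = (0..chains.len()).collect();
          order.shuffle(&mut rand::thread_rng());

          let mut remaining = limit;
          let mut picked = Vec::new();
          for chain_idx in order {
              // Take the longest prefix of this chain that still fits the budget.
              let mut taken = 0;
              for candidate in &chains[chain_idx] {
                  if candidate.weight <= remaining {
                      remaining -= candidate.weight;
                      taken += 1;
                  } else {
                      break;
                  }
              }
              if taken > 0 {
                  picked.push((chain_idx, taken));
              }
          }
          picked
      }
      ```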
      
      ---------
      
      Signed-off-by: Andrei Sandu <[email protected]>
  9. Feb 29, 2024
  10. Feb 28, 2024
  11. Feb 23, 2024
  12. Feb 14, 2024
  13. Feb 02, 2024
  14. Jan 31, 2024
    • [frame] `#[pallet::composite_enum]` improved variant count handling + removed... · bb8ddc46
      Branislav Kontur authored
      [frame] `#[pallet::composite_enum]` improved variant count handling + removed `pallet_balances`'s `MaxHolds` config (#2657)
      
      I started this investigation/issue based on @liamaharon question
      [here](https://github.com/paritytech/polkadot-sdk/pull/1801#discussion_r1410452499).
      
      ## Problem
      
      The `pallet_balances` integrity test should correctly detect that the
      runtime has the correct number of distinct `HoldReason` variants. I
      assume the same situation exists for `RuntimeFreezeReason`.

      It is not a critical problem: if we set `MaxHolds` to a sufficiently
      large value, everything should be OK. However, in that case the
      `integrity_test` check becomes less useful.
      
      **Situation for "any" runtime:**
      - `HoldReason` enums from different pallets:
      ```rust
      /// from pallet_nis
      #[pallet::composite_enum]
      pub enum HoldReason {
          NftReceipt,
      }

      /// from pallet_preimage
      #[pallet::composite_enum]
      pub enum HoldReason {
          Preimage,
      }

      /// from pallet_state-trie-migration
      #[pallet::composite_enum]
      pub enum HoldReason {
          SlashForContinueMigrate,
          SlashForMigrateCustomTop,
          SlashForMigrateCustomChild,
      }
      ```
      
      - generated `RuntimeHoldReason` enum looks like:
      ```rust
      pub enum RuntimeHoldReason {
      
          #[codec(index = 32u8)]
          Preimage(pallet_preimage::HoldReason),
      
          #[codec(index = 38u8)]
          Nis(pallet_nis::HoldReason),
      
          #[codec(index = 42u8)]
          StateTrieMigration(pallet_state_trie_migration::HoldReason),
      }
      ```
      
      - composite enum `RuntimeHoldReason` variant count is detected as `3`
      - we set `type MaxHolds = ConstU32<3>`
      - `pallet_balances::integrity_test` is ok with `3`(at least 3)
      
      However, the real problem can occur in a live runtime, where some
      functionality might stop working. This is due to there being a total of 5
      distinct hold reasons (even more for pallets with multi-instance
      support), and not all of them can be used because of an incorrect
      `MaxHolds`, which is deemed acceptable according to the `integrity_test`:
      ```
      // pseudo-code - if we try to call all of these:

      T::Currency::hold(&pallet_nis::HoldReason::NftReceipt.into(), &nft_owner, deposit)?;
      T::Currency::hold(&pallet_preimage::HoldReason::Preimage.into(), &nft_owner, deposit)?;
      T::Currency::hold(&pallet_state_trie_migration::HoldReason::SlashForContinueMigrate.into(), &nft_owner, deposit)?;

      // With `type MaxHolds = ConstU32<3>` these two will fail

      T::Currency::hold(&pallet_state_trie_migration::HoldReason::SlashForMigrateCustomTop.into(), &nft_owner, deposit)?;
      T::Currency::hold(&pallet_state_trie_migration::HoldReason::SlashForMigrateCustomChild.into(), &nft_owner, deposit)?;
      ```
      
      
      ## Solutions
      
      The `#[pallet::*]` macro expansion is extended with a `VariantCount`
      implementation for the `#[pallet::composite_enum]` enum type. This
      expansion generates the `VariantCount` implementation for pallets'
      `HoldReason`, `FreezeReason`, `LockId`, and `SlashReason`. Enum variants
      must be plain enum values without fields to ensure a deterministic count.
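
      For a plain composite enum like the `pallet_state-trie-migration` one
      above, the effect of that expansion amounts to something like this (a
      sketch of the outcome, not the literal macro output):

      ```rust
      impl frame_support::traits::VariantCount for HoldReason {
          // Three field-less variants, so the count is known statically.
          const VARIANT_COUNT: u32 = 3;
      }
      ```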
      
      The composite runtime enums, `RuntimeHoldReason` and
      `RuntimeFreezeReason`, now set `VariantCount::VARIANT_COUNT` to the sum
      of the pallets' enums' `VariantCount::VARIANT_COUNT`:
      ```rust
      #[frame_support::pallet(dev_mode)]
      mod module_single_instance {
      
      	#[pallet::composite_enum]
      	pub enum HoldReason {
      		ModuleSingleInstanceReason1,
      		ModuleSingleInstanceReason2,
      	}
      ...
      }
      
      #[frame_support::pallet(dev_mode)]
      mod module_multi_instance {
      
      	#[pallet::composite_enum]
      	pub enum HoldReason<I: 'static = ()> {
      		ModuleMultiInstanceReason1,
      		ModuleMultiInstanceReason2,
      		ModuleMultiInstanceReason3,
      	}
      ...
      }
      
      
      impl self::sp_api_hidden_includes_construct_runtime::hidden_include::traits::VariantCount
          for RuntimeHoldReason
      {
          const VARIANT_COUNT: u32 = 0
              + module_single_instance::HoldReason::VARIANT_COUNT
              + module_multi_instance::HoldReason::<module_multi_instance::Instance1>::VARIANT_COUNT
              + module_multi_instance::HoldReason::<module_multi_instance::Instance2>::VARIANT_COUNT
              + module_multi_instance::HoldReason::<module_multi_instance::Instance3>::VARIANT_COUNT;
      }
      ```
      
      In addition, `MaxHolds` is removed (as suggested
      [here](https://github.com/paritytech/polkadot-sdk/pull/2657#discussion_r1443324573))
      from `pallet_balances`, and its `Holds` are now bounded to
      `RuntimeHoldReason::VARIANT_COUNT`. Therefore, there is no need to let
      the runtime specify `MaxHolds`.
      
      
      ## For reviewers
      
      Relevant changes can be found here:
      - `substrate/frame/support/procedural/src/lib.rs` 
      -  `substrate/frame/support/procedural/src/pallet/parse/composite.rs`
      -  `substrate/frame/support/procedural/src/pallet/expand/composite.rs`
      -
      `substrate/frame/support/procedural/src/construct_runtime/expand/composite_helper.rs`
      -
      `substrate/frame/support/procedural/src/construct_runtime/expand/hold_reason.rs`
      -
      `substrate/frame/support/procedural/src/construct_runtime/expand/freeze_reason.rs`
      - `substrate/frame/support/src/traits/misc.rs`
      
      The rest of the files are just about the removal of `MaxHolds` from
      `pallet_balances`.
      
      ## Next steps
      
      Do the same for `MaxFreezes`:
      https://github.com/paritytech/polkadot-sdk/issues/2997
      
      ---------
      
      Co-authored-by: command-bot <>
      Co-authored-by: Bastian Köcher <[email protected]>
      Co-authored-by: Dónal Murray <[email protected]>
      Co-authored-by: gupnik <[email protected]>
  15. Jan 22, 2024
  16. Jan 18, 2024
  17. Jan 16, 2024
    • XCMv4 (#1230) · 8428f678
      Francisco Aguirre authored
      
      
      # Note for reviewer
      
      Most changes are just syntax changes necessary for the new version.
      Most important files should be the ones under the `xcm` folder.
      
      # Description 
      
      Added XCMv4.
      
      ## Removed `Multi` prefix
      The following types have been renamed:
      - `MultiLocation` -> `Location`
      - `MultiAsset` -> `Asset`
      - `MultiAssets` -> `Assets`
      - `InteriorMultiLocation` -> `InteriorLocation`
      - `MultiAssetFilter` -> `AssetFilter`
      - `VersionedMultiAsset` -> `VersionedAsset`
      - `WildMultiAsset` -> `WildAsset`
      - `VersionedMultiLocation` -> `VersionedLocation`
      
      In order to fix a name conflict, the `Assets` in `xcm-executor` were
      renamed to `HoldingAssets`, as they represent assets in holding.
      
      ## Removed `Abstract` asset id
      
      It was not being used anywhere and this simplifies the code.
      
      Now assets are just constructed as follows:
      
      ```rust
      let asset: Asset = (AssetId(Location::new(1, Here)), 100u128).into();
      ```
      
      No need for specifying `Concrete` anymore.
      
      ## Outcome is now a named fields struct
      
      Instead of
      
      ```rust
      pub enum Outcome {
        Complete(Weight),
        Incomplete(Weight, Error),
        Error(Error),
      }
      ```
      
      we now have
      
      ```rust
      pub enum Outcome {
        Complete { used: Weight },
        Incomplete { used: Weight, error: Error },
        Error { error: Error },
      }
      ```
      
      ## Added Reanchorable trait
      
      Now both locations and assets implement this trait, making it easier to
      reanchor both.
      
      ## New syntax for building locations and junctions
      
      Now junctions are built using the following methods:
      
      ```rust
      let location = Location {
          parents: 1,
          interior: [Parachain(1000), PalletInstance(50), GeneralIndex(1984)].into()
      };
      ```
      
      or
      
      ```rust
      let location = Location::new(1, [Parachain(1000), PalletInstance(50), GeneralIndex(1984)]);
      ```
      
      And they are matched like so:
      
      ```rust
      match location.unpack() {
        (1, [Parachain(id)]) => ...,
        (0, Here) => ...,
        (1, [_]) => ...,
      }
      ```
      
      This syntax is mandatory in v4, and has also been implemented for v2 and
      v3 for easier migration.
      
      This was needed to make all sizes smaller.
      
      # TODO
      - [x] Scaffold v4
      - [x] Port github.com/paritytech/polkadot/pull/7236
      - [x] Remove `Multi` prefix
      - [x] Remove `Abstract` asset id
      
      ---------
      
      Co-authored-by: command-bot <>
      Co-authored-by: Keith Yeung <[email protected]>
  18. Jan 12, 2024
    • Add More HRMP Channel Opening Tests (#2915) · f7306d32
      joe petrowski authored
      A recent referendum failed because the parachain's sovereign account had
      enough funds for the deposit (the runtime at the time required a
      deposit) but not enough left over for the existential deposit, which
      must remain free.
      
      The current logic does not require a deposit for channels with system
      chains, and the current tests check that none is taken. This new test
      simply ensures that even without any balance in a parachain's sovereign
      account, opening a channel with a system chain will be successful.
  19. Jan 11, 2024
  20. Jan 08, 2024
  21. Dec 27, 2023
  22. Dec 22, 2023
  23. Dec 21, 2023
  24. Dec 20, 2023
    • Fix clippy lints behind feature gates and add new CI step all features (#2569) · d68868f6
      Dónal Murray authored
      
      
      Many clippy lints usually enforced by `-Dcomplexity` and `-Dcorrectness`
      are not caught by CI as they are gated by `features`, like
      `runtime-benchmarks`, while the clippy CI job runs with only the default
      features for all targets.
      
      This PR also adds a CI step to run clippy with `--all-features` to
      ensure the code quality is maintained behind feature gates from now on.
      
      To improve local development, clippy lints are downgraded to warnings,
      but they will still result in an error in CI due to the `-Dwarnings`
      rustflag.
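
      For illustration (a made-up snippet, not taken from the repository): code
      like the following is not compiled by a default-features clippy run, so
      lints inside it only surface with `--all-features` or with the feature
      enabled.

      ```rust
      #[cfg(feature = "runtime-benchmarks")]
      mod benchmarking_helpers {
          // Only compiled (and therefore only linted) when `runtime-benchmarks` is on;
          // taking `&Vec<u8>` as a parameter is a typical pattern clippy flags
          // (`clippy::ptr_arg`).
          pub fn setup_input(input: &Vec<u8>) -> Vec<u8> {
              input.clone()
          }
      }
      ```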
      
      ---------
      
      Co-authored-by: Liam Aharon <[email protected]>
  25. Dec 18, 2023
  26. Dec 14, 2023
  27. Dec 13, 2023
    • Approve multiple candidates with a single signature (#1191) · a84dd0db
      Alexandru Gheorghe authored
      Initial implementation for the plan discussed here: https://github.com/paritytech/polkadot-sdk/issues/701
      Built on top of https://github.com/paritytech/polkadot-sdk/pull/1178
      v0: https://github.com/paritytech/polkadot/pull/7554,
      
      ## Overall idea
      
      When approval-voting checks a candidate and is ready to advertise the
      approval, we defer it per relay-chain block until we either have
      MAX_APPROVAL_COALESCE_COUNT candidates to sign or a candidate has stayed
      MAX_APPROVALS_COALESCE_TICKS in the queue; in both cases we sign
      whatever candidates we have available.
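
      A rough sketch of that buffering logic (hypothetical types and constant
      values, not the approval-voting code): approvals accumulate per
      relay-chain block and are flushed for signing when the batch is full or
      the oldest entry has waited long enough.

      ```rust
      type CandidateHash = [u8; 32];
      type Tick = u64;

      // Hypothetical tuning values, mirroring the names in the description.
      const MAX_APPROVAL_COALESCE_COUNT: usize = 4;
      const MAX_APPROVALS_COALESCE_TICKS: Tick = 12;

      /// Per-relay-block buffer of approvals waiting to be signed together.
      struct PendingApprovals {
          candidates: Vec<(CandidateHash, Tick)>, // (candidate, tick when it became ready)
      }

      impl PendingApprovals {
          /// Defer a freshly checked candidate; return a batch to sign if either trigger fires.
          fn push(&mut self, candidate: CandidateHash, now: Tick) -> Option<Vec<CandidateHash>> {
              self.candidates.push((candidate, now));
              self.flush_if_due(now)
          }

          /// Sign whatever is available once the batch is full or the oldest entry waited too long.
          fn flush_if_due(&mut self, now: Tick) -> Option<Vec<CandidateHash>> {
              let oldest_due = self
                  .candidates
                  .first()
                  .map_or(false, |(_, t)| now.saturating_sub(*t) >= MAX_APPROVALS_COALESCE_TICKS);
              if self.candidates.len() >= MAX_APPROVAL_COALESCE_COUNT || oldest_due {
                  Some(self.candidates.drain(..).map(|(c, _)| c).collect())
              } else {
                  None
              }
          }
      }
      ```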
      
      This should allow us to reduce the number of approvals messages we have
      to create/send/verify. The parameters are configurable, so we should
      find some values that balance:
      
      - Security of the network: Delaying broadcasting of an approval
      shouldn't put finality at risk, and to make sure that never happens we
      won't delay sending a vote if we are past 2/3 of the no-show time.
      - Scalability of the network: MAX_APPROVAL_COALESCE_COUNT = 1 &
      MAX_APPROVALS_COALESCE_TICKS = 0 is what we have now, and we know from
      the measurements we did on Versi that it bottlenecks
      approval-distribution/approval-voting when the number of validators and
      parachains increases significantly.
      - Block storage: In case of disputes we have to import these votes on
      chain, and that increases the necessary storage by
      MAX_APPROVAL_COALESCE_COUNT * CandidateHash per vote. Given that
      disputes are not the normal mode of operation for the network and we will
      limit MAX_APPROVAL_COALESCE_COUNT to single-digit numbers, this should be
      good enough. Alternatively, we could try to create a better way to store
      this on-chain through indirection, if that's needed.
      
      ## Other fixes:
      - Fixed the fact that we were sending random assignments to
      non-validators; that was wrong because those won't do anything with them
      and won't gossip them either, since they do not have a grid topology set,
      so we would waste the random assignments.
      - Added metrics to be able to debug potential no-shows and
      mis-processing of approvals/assignments.
      
      ## TODO:
      - [x] Get feedback, that this is moving in the right direction. @ordian
      @sandreim @eskimor @burdges, let me know what you think.
      - [x] More and more testing.
      - [x]  Test in versi.
      - [x] Make MAX_APPROVAL_COALESCE_COUNT &
      MAX_APPROVAL_COALESCE_WAIT_MILLIS a parachain host configuration.
      - [x] Make sure the backwards compatibility works correctly
      - [x] Make sure this direction is compatible with other streams of work:
      https://github.com/paritytech/polkadot-sdk/issues/635 &
      https://github.com/paritytech/polkadot-sdk/issues/742
      
      
      - [x] Final versi burn-in before merging
      
      ---------
      
      Signed-off-by: Alexandru Gheorghe <[email protected]>
  28. Dec 07, 2023
  29. Nov 28, 2023
  30. Nov 20, 2023
  31. Nov 14, 2023
  32. Nov 13, 2023
    • pallet-xcm: enhance `reserve_transfer_assets` to support remote reserves (#1672) · 18257373
      Adrian Catangiu authored
      ## Motivation
      
      `pallet-xcm` is the main user-facing interface for XCM functionality,
      including assets manipulation functions like `teleportAssets()` and
      `reserve_transfer_assets()` calls.
      
      While `teleportAssets()` works both ways, `reserve_transfer_assets()`
      works only for sending reserve-based assets to a remote destination and
      beneficiary when the reserve is the _local chain_.
      
      ## Solution
      
      This PR enhances `pallet_xcm::(limited_)reserve_transfer_assets` to
      support transfers when the reserve is another chain.
      This will allow complete, **bi-directional** reserve-based asset
      transfer user stories using `pallet-xcm`.
      
      Enables following scenarios:
      - transferring assets with local reserve (was previously supported iff
      asset used as fee also had local reserve - now it works in all cases),
      - transferring assets with reserve on destination,
      - transferring assets with reserve on remote/third-party chain (iff
      assets and fees have same remote reserve),
      - transferring assets with reserve different than the reserve of the
      asset to be used as fees - meaning can be used to transfer random asset
      with local/dest reserve while using DOT for fees on all involved chains,
      even if DOT local/dest reserve doesn't match asset reserve,
      - transferring assets with any type of local/dest reserve while using
      fees which can be teleported between involved chains.
      
      All of the above is done by the pallet's inner logic, without the user
      having to specify the scenario/reserves/teleports/etc. The correct
      scenario is identified and the corresponding XCM programs are built
      automatically, based on the runtime configuration of trusted teleporters
      and trusted reserves.
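
      A stripped-down sketch of that classification (hypothetical `Location`
      and `TransferKind` types and a simplified `classify` helper; the actual
      pallet logic and naming may differ):

      ```rust
      #[derive(Clone, PartialEq)]
      enum Location {
          Here,
          Parachain(u32),
      }

      enum TransferKind {
          /// The asset can be teleported between origin and destination.
          Teleport,
          /// The reserve is the local chain.
          LocalReserve,
          /// The reserve is the destination chain.
          DestinationReserve,
          /// The reserve is some third chain; route the transfer through it.
          RemoteReserve(Location),
      }

      fn classify(asset_reserve: &Location, dest: &Location, trusted_teleporter: bool) -> TransferKind {
          if trusted_teleporter {
              TransferKind::Teleport
          } else if *asset_reserve == Location::Here {
              TransferKind::LocalReserve
          } else if asset_reserve == dest {
              TransferKind::DestinationReserve
          } else {
              TransferKind::RemoteReserve(asset_reserve.clone())
          }
      }
      ```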
      
      #### Current limitations:
      - while `fees` and "non-fee" `assets` CAN have different reserves (or
      fees CAN be teleported), the remaining "non-fee" `assets` CANNOT, among
      themselves, have different reserve locations (this is also implicitly
      enforced by `MAX_ASSETS_FOR_TRANSFER=2`, but this can be safely
      increased in the future).
      - `fees` and "non-fee" `assets` CANNOT have **different remote**
      reserves (this could also be supported in the future, but adds even more
      complexity while possibly not being worth it - we'll see what the future
      holds).
      
      Fixes https://github.com/paritytech/polkadot-sdk/issues/1584
      Fixes https://github.com/paritytech/polkadot-sdk/issues/2055
      
      
      
      ---------
      
      Co-authored-by: Francisco Aguirre <[email protected]>
      Co-authored-by: Branislav Kontur <[email protected]>
  33. Nov 05, 2023
  34. Nov 02, 2023
    • Use `Message Queue` as DMP and XCMP dispatch queue (#1246) · e1c033eb
      Oliver Tale-Yazdi authored
      (imported from https://github.com/paritytech/cumulus/pull/2157)
      
      ## Changes
      
      This MR refactors the XCMP, Parachains System and DMP pallets to use
      the [MessageQueue](https://github.com/paritytech/substrate/pull/12485)
      for delayed execution of incoming messages. The DMP pallet is entirely
      replaced by the MQ and thereby removed. This allows for PoV-bounded
      execution and resolves a number of issues that stem from the current
      work-around.
      
      All System Parachains adopt this change.  
      The most important changes are in `primitives/core/src/lib.rs`,
      `parachains/common/src/process_xcm_message.rs`,
      `pallets/parachain-system/src/lib.rs`, `pallets/xcmp-queue/src/lib.rs`
      and the runtime configs.
      
      ### DMP Queue Pallet
      
      The pallet got removed and its logic refactored into parachain-system.
      Overweight message management can be done directly through the MQ
      pallet.
      
      Final undeployment migrations are provided by
      `cumulus_pallet_dmp_queue::UndeployDmpQueue` and `DeleteDmpQueue` that
      can be configured with an aux config trait like:
      
      ```rust
      parameter_types! {
      	pub const DmpQueuePalletName: &'static str = "DmpQueue"; // < CHANGE ME
      	pub const RelayOrigin: AggregateMessageOrigin = AggregateMessageOrigin::Parent;
      }
      
      impl cumulus_pallet_dmp_queue::MigrationConfig for Runtime {
      	type PalletName = DmpQueuePalletName;
      	type DmpHandler = frame_support::traits::EnqueueWithOrigin<MessageQueue, RelayOrigin>;
      	type DbWeight = <Runtime as frame_system::Config>::DbWeight;
      }
      
      // And adding them to your Migrations tuple:
      pub type Migrations = (
      	...
      	cumulus_pallet_dmp_queue::UndeployDmpQueue<Runtime>,
      	cumulus_pallet_dmp_queue::DeleteDmpQueue<Runtime>,
      );
      ```
      
      ### XCMP Queue pallet
      
      Removed all dispatch queue functionality. Incoming XCMP messages are now
      either: Immediately handled if they are Signals, enqueued into the MQ
      pallet otherwise.
      
      New config items for the XCMP queue pallet:
      ```rust
      /// The actual queue implementation that retains the messages for later processing.
      type XcmpQueue: EnqueueMessage<ParaId>;
      
      /// How a XCM over HRMP from a sibling parachain should be processed.
      type XcmpProcessor: ProcessMessage<Origin = ParaId>;
      
      /// The maximal number of suspended XCMP channels at the same time.
      #[pallet::constant]
      type MaxInboundSuspended: Get<u32>;
      ```
      
      How to configure those:
      
      ```rust
      // Use the MessageQueue pallet to store messages for later processing. The `TransformOrigin` is needed since
      // the MQ pallet itself operates on `AggregateMessageOrigin` but we want to enqueue `ParaId`s.
      type XcmpQueue = TransformOrigin<MessageQueue, AggregateMessageOrigin, ParaId, ParaIdToSibling>;
      
      // Process XCMP messages from siblings. This is type-safe to only accept `ParaId`s. They will be dispatched
      // with origin `Junction::Sibling(…)`.
      type XcmpProcessor = ProcessFromSibling<
      	ProcessXcmMessage<
      		AggregateMessageOrigin,
      		xcm_executor::XcmExecutor<xcm_config::XcmConfig>,
      		RuntimeCall,
      	>,
      >;
      
      // Not really important what to choose here. Just something larger than the maximal number of channels.
      type MaxInboundSuspended = sp_core::ConstU32<1_000>;
      ```
      
      The `InboundXcmpStatus` storage item was replaced by
      `InboundXcmpSuspended` since it now only tracks inbound queue suspension
      and no message indices anymore.
      
      Now only sends the most recent channel `Signals`, as all prior ones are
      outdated anyway.
      
      ### Parachain System pallet
      
      For `DMP` messages instead of forwarding them to the `DMP` pallet, it
      now pushes them to the configured `DmpQueue`. The message processing
      which was triggered in `set_validation_data` is now being done by the MQ
      pallet `on_initialize`.
      
      XCMP messages are still handed off to the `XcmpMessageHandler`
      (XCMP-Queue pallet) - no change here.
      
      New config items for the parachain system pallet:
      ```rust
      /// Queues inbound downward messages for delayed processing. 
      ///
      /// Analogous to the `XcmpQueue` of the XCMP queue pallet.
      type DmpQueue: EnqueueMessage<AggregateMessageOrigin>;
      ``` 
      
      How to configure:
      ```rust
      /// Use the MQ pallet to store DMP messages for delayed processing.
      type DmpQueue = MessageQueue;
      ``` 
      
      ## Message Flow
      
      The flow of messages on the parachain side. Messages come in from the
      left via the `Validation Data` and finally end up at the `Xcm Executor`
      on the right.
      
      ![Untitled
      (1)](https://github.com/paritytech/cumulus/assets/10380170/6cf8b377-88c9-4aed-96df-baace266e04d)
      
      ## Further changes
      
      - Bumped the default suspension, drop and resume thresholds in
      `QueueConfigData::default()`.
      - `XcmpQueue::{suspend_xcm_execution, resume_xcm_execution}` now error
      when they would be a no-op.
      - Properly validate the `QueueConfigData` before setting it.
      - Marked weight files as auto-generated so they won't auto-expand in the
      MR files view.
      - Move the `hypothetical` asserts to `frame_support` under the name
      `experimental_hypothetically`.
      
      Questions:
      - [ ] What about the ugly `#[cfg(feature = "runtime-benchmarks")]` in
      the runtimes? Not sure how to best fix. Just having them like this makes
      tests fail that rely on the real message processor when the feature is
      enabled.
      - [ ] Need a good weight for `MessageQueueServiceWeight`. The scheduler
      already takes 80% so I put it to 10% but that is quite low.
      
      TODO:
      - [x] Remove c&p code after
      https://github.com/paritytech/polkadot/pull/6271
      - [x] Use `HandleMessage` once it is public in Substrate
      - [x] fix `runtime-benchmarks` feature
      https://github.com/paritytech/polkadot/pull/6966
      
      
      - [x] Benchmarks
      - [x] Tests
      - [ ] Migrate `InboundXcmpStatus` to `InboundXcmpSuspended`
      - [x] Possibly cleanup Migrations (DMP+XCMP)
      - [x] optional: create `TransformProcessMessageOrigin` in Substrate and
      replace `ProcessFromSibling`
      - [ ] Rerun weights on ref HW
      
      ---------
      
      Signed-off-by: Oliver Tale-Yazdi <[email protected]>
      Co-authored-by: Liam Aharon <[email protected]>
      Co-authored-by: joe petrowski <[email protected]>
      Co-authored-by: Kian Paimani <[email protected]>
      Co-authored-by: command-bot <>
  35. Nov 01, 2023