  1. Feb 18, 2025
  2. Feb 17, 2025
  3. Feb 14, 2025
    • [AHM] Multi-block staking election pallet (#7282) · a025562b
      Kian Paimani authored
      ## Multi Block Election Pallet
      
      This PR adds the first iteration of the multi-block staking election pallet.
      
      From this point onwards, the staking pallet and its election provider
      pallets are being customized to work in AssetHub. While usage in
      solo-chains is still possible, it is no longer the main focus of this
      pallet. For safer usage, please fork and use an older version of this
      pallet.
      
      ---
      
      ## Replaces
      
      - [x] https://github.com/paritytech/polkadot-sdk/pull/6034 
      - [x] https://github.com/paritytech/polkadot-sdk/pull/5272
      
      ## Related PRs: 
      
      - [x] https://github.com/paritytech/polkadot-sdk/pull/7483
      - [ ] https://github.com/paritytech/polkadot-sdk/pull/7357
      - [ ] https://github.com/paritytech/polkadot-sdk/pull/7424
      - [ ] https://github.com/paritytech/polkadot-staking-miner/pull/955
      
      This branch can be periodically merged into
      https://github.com/paritytech/polkadot-sdk/pull/7358 ->
      https://github.com/paritytech/polkadot-sdk/pull/6996
      
      ## TODOs: 
      
      - [x] rebase to master 
      - Benchmarking for staking critical path
        - [x] snapshot
        - [x] election result
      - Benchmarking for EPMB critical path
        - [x] snapshot
        - [x] verification
        - [x] submission
        - [x] unsigned submission
        - [ ] election results fetching
      - [ ] Fix deletion weights. Either of:
        - [ ] Garbage collector + lazy removal of all paged storage items
        - [ ] Confirm that deletion has a small PoV footprint.
      - [ ] Move election prediction to be push based. @tdimitrov 
      - [ ] integrity checks for bounds 
      - [ ] Properly benchmark this as a part of CI -- for now I will remove
      them as they are too slow
      - [x] add try-state to all pallets
      - [x] Staking to allow genesis dev accounts to be created internally
      - [x] Decouple miner config so @niklasad1 can work on the miner (72841b73)
      - [x] duplicate snapshot page reported by @niklasad1
      
       
      - [ ] https://github.com/paritytech/polkadot-sdk/pull/6520 or equivalent
      -- during snapshot, `VoterList` must be locked
      - [ ] Move target snapshot to a separate block
      
      ---------
      
      Co-authored-by: Gonçalo Pestana <g6pestana@gmail.com>
      Co-authored-by: Ankan <10196091+Ank4n@users.noreply.github.com>
      Co-authored-by: command-bot <>
      Co-authored-by: Guillaume Thiolliere <gui.thiolliere@gmail.com>
      Co-authored-by: Giuseppe Re <giuseppe.re@parity.io>
      Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
    • Bastian Köcher
    • [mq pallet] Custom next queue selectors (#6059) · 7aac8861
      Oliver Tale-Yazdi authored
      
      Changes:
      - Expose a `force_set_head` function from the `MessageQueue` pallet via
      a new trait: `ForceSetHead`. This can be used to force the MQ pallet to
      process this queue next.
      - The change only exposes an internal function through a trait, no audit
      is required.
      
      ## Context
      
      For the Asset Hub Migration (AHM) we need a mechanism to prioritize the
      inbound upward messages and the inbound downward messages on the AH. To
      achieve this, a minimal (and non-breaking) change is made to the MQ
      pallet in the form of adding the `force_set_head` function.
      
      An example of how to achieve prioritization is then demonstrated in
      `integration_test.rs::AhmPrioritizer`. Normally, all queues are
      scheduled round-robin like this:
      
      `| Relay | Para(1) | Para(2) | ... | Relay | ... `
      
      The prioritizer listens to changes to its queue and triggers if either:
      - The queue was processed in the last block (to keep the general
      round-robin scheduling)
      - The queue has not been processed for `n` blocks (to prevent starvation
      if there are too many other queues)
      
      In either situation, it schedules the queue for a streak of three
      consecutive blocks, such that it would become:
      
      `| Relay | Relay | Relay | Para(1) | Para(2) | ... | Relay | Relay |
      Relay | ... `
      
      It basically transforms the round-robin into an elongated round-robin.
      Although different strategies can be injected into the pallet at
      runtime, this one seems to strike a good balance between general service
      level and prioritization.
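
      The trigger logic can be summarized in a small, framework-free sketch.
      It is illustrative only: the struct, its field names and the helper are
      assumptions, not the `AhmPrioritizer` code itself; in the runtime the
      `true` branch would result in a `force_set_head` call on the
      `MessageQueue` pallet via the new `ForceSetHead` trait.

      ```rust
      /// Illustrative per-queue state for a prioritizer as described above.
      struct PrioritizerSketch {
      	/// Remaining blocks in the current three-block priority streak.
      	streak_left: u32,
      	/// Blocks elapsed since the favoured queue was last serviced.
      	idle_blocks: u32,
      	/// Starvation threshold `n` from the description.
      	starvation_limit: u32,
      }

      impl PrioritizerSketch {
      	/// Returns `true` when the favoured queue should be forced to the head
      	/// of the scheduling in this block.
      	fn should_force_head(&mut self, serviced_last_block: bool) -> bool {
      		if serviced_last_block {
      			self.idle_blocks = 0;
      		} else {
      			self.idle_blocks += 1;
      		}
      		// Either trigger (re)starts a streak of three consecutive blocks.
      		if serviced_last_block || self.idle_blocks >= self.starvation_limit {
      			self.streak_left = 3;
      		}
      		if self.streak_left > 0 {
      			self.streak_left -= 1;
      			true
      		} else {
      			false
      		}
      	}
      }
      ```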
      
      ---------
      
      Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
      Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
      Co-authored-by: muharem <ismailov.m.h@gmail.com>
  4. Feb 13, 2025
  5. Feb 12, 2025
  6. Feb 10, 2025
    • transfer function Preservation is changed to Expendable (#7243) · f96da6f3
      Dhiraj Sah authored
      # Description
      
      Fixes #7039
      
      The `Preservation` used by the `transfer` method of the fungible and
      fungibles adapters is changed from `Preserve` to `Expendable`, so the
      behavior of `TransferAsset` is consistent with the `WithdrawAsset`
      function, as in the
      [fungible](https://github.com/paritytech/polkadot-sdk/blob/f3ab3854/polkadot/xcm/xcm-builder/src/fungible_adapter.rs#L217)
      and [fungibles](https://github.com/paritytech/polkadot-sdk/issues/url)
      adapters.
      
      This pull request includes changes to the `fungible_adapter.rs` and
      `fungibles_adapter.rs` files in the `polkadot/xcm/xcm-builder`
      directory. The main change involves modifying the transfer method to use
      the `Expendable` strategy instead of the `Preserve` strategy.
      
      Changes to transfer strategy:
      
      *
      [`polkadot/xcm/xcm-builder/src/fungible_adapter.rs`](diffhunk://#diff-6ebd77385441f2c8b023c480e818a01c4b43ae892c73ca30144cd64ee960bd66L67-R67):
      Changed the transfer method to use `Expendable` instead of `Preserve`.
      *
      [`polkadot/xcm/xcm-builder/src/fungibles_adapter.rs`](diffhunk://#diff-82221429de4c4c88be3d2976ece6475ef4fa56a32abc70290911bd47191f8e17L61-R61):
      Changed the transfer method to use `Expendable` instead of `Preserve`.
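
      As a minimal sketch of what the switch means at the trait level (the
      wrapper function and its generics below are illustrative, not the
      adapter code itself): with `Expendable` the source account may be
      reaped by the transfer, consistent with `WithdrawAsset`, whereas
      `Preserve` refuses to drop it below the existential deposit.

      ```rust
      use frame_support::traits::{fungible::Mutate, tokens::Preservation};
      use sp_runtime::DispatchError;

      /// Illustrative wrapper around the fungible `transfer` call.
      fn transfer_like_adapter<F, AccountId>(
      	from: &AccountId,
      	to: &AccountId,
      	amount: F::Balance,
      ) -> Result<F::Balance, DispatchError>
      where
      	F: Mutate<AccountId>,
      	AccountId: Eq,
      {
      	// Before this change the adapters passed `Preservation::Preserve` here.
      	F::transfer(from, to, amount, Preservation::Expendable)
      }
      ```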
      
      ---------
      
      Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com>
      Co-authored-by: Adrian Catangiu <adrian@parity.io>
  7. Feb 08, 2025
  8. Feb 07, 2025
  9. Feb 04, 2025
    • revive: Include immutable storage deposit into the contracts `storage_base_deposit` (#7230) · 4c28354b
      Alexander Theißen authored
      
      This PR is centered around a main fix regarding the base deposit and a
      bunch of drive-by or related fixes that make sense to resolve in one
      go. It could be broken down further, but I am constantly rebasing this PR
      and would appreciate getting those fixes in as one.
      
      **This adds a multi block migration to Westend AssetHub that wipes the
      pallet state clean. This is necessary because of the changes to the
      `ContractInfo` storage item. It will not delete the child storage
      though. This will leave a tiny bit of garbage behind but won't cause any
      problems. They will just be orphaned.**
      
      ## Record the deposit for immutable data into the `storage_base_deposit`
      
      The `storage_base_deposit` is the total deposit a contract has to pay for
      existing. It includes the deposit for its own metadata and a deposit
      proportional (< 1.0x) to the size of its code. However, the immutable
      data size was not recorded there. This led to the situation where, on
      terminate, this portion wouldn't be refunded and stayed locked into the
      contract. It would also make the calculation of the deposit changes on
      `set_code_hash` more complicated when it updates the immutable data (to
      be done in #6985), because it wouldn't know how much was paid before,
      since the storage prices could have changed in the meantime.
      
      In order for this solution to work I needed to delay the deposit
      calculation for a new contract until after the contract is done executing
      its constructor, as only then do we know the immutable data size. Before,
      we just charged this eagerly in `charge_instantiate` before executing the
      constructor. Now, we merely send the ED as free balance before the
      constructor in order to create the account. After the constructor is
      done we calculate the contract base deposit and charge it. This will
      make `set_code_hash` much easier to implement.
      
      As a side effect it is now legal to call `set_immutable_data` multiple
      times per constructor (even though I see no reason to do so). It simply
      overrides the immutable data with the new value. The deposit accounting
      will be done after the constructor returns (as mentioned above) instead
      of when setting the immutable data.
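
      A rough order-of-operations sketch of the new instantiate flow (all
      names and constants here are placeholders rather than pallet_revive
      internals, and the code-size share of the deposit is omitted):

      ```rust
      type Balance = u128;

      const ED: Balance = 1;
      const METADATA_DEPOSIT: Balance = 100;
      const DEPOSIT_PER_BYTE: Balance = 2;

      /// Placeholder for the constructor run; it may call `set_immutable_data`
      /// several times, only the final length counts.
      fn run_constructor() -> u32 {
      	64 // final immutable data length in bytes
      }

      /// Returns the total amount charged to the origin for the new contract.
      fn instantiate_sketch() -> Balance {
      	// 1. Send only the existential deposit so the contract account exists.
      	let mut charged = ED;
      	// 2. Execute the constructor; only now is the immutable data size known.
      	let immutable_len = run_constructor();
      	// 3. Compute and charge the full `storage_base_deposit`, which now also
      	//    covers the immutable data (the fix in this PR).
      	charged += METADATA_DEPOSIT + DEPOSIT_PER_BYTE * immutable_len as Balance;
      	charged
      }
      ```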
      
      ## Don't pre-charge for reading immutable data
      
      I noticed that we were pre-charging weight for the max allowable
      immutable data when reading those values and then refunding after the
      read. This is not necessary: we know its length without reading the
      storage, since we store it out of band in the contract metadata. This
      makes reading it free. Less pre-charging, fewer problems.
      
      ## Remove delegate locking
      
      Fixes #7092
      
      This is also in the spirit of making #6985 easier to implement. The
      locking complicates `set_code_hash` as we might need to block setting
      the code hash when locks exist. Check #7092 for further rationale.
      
      ## Enforce "no terminate in constructor" eagerly
      
      We used to enforce this rule after the contract execution returned. Now
      we error out early in the host call. This makes it easier to argue that
      a contract info still exists (wasn't terminated) when a constructor
      successfully returns. All around, this is just much simpler than dealing
      with this check after the fact.
      
      ## Moved refcount functions to `CodeInfo`
      
      They never really made sense to exist on `Stack`. But now with the
      locking gone this makes even less sense. The refcount is stored inside
      `CodeInfo`, so let's just move them there.
      
      ## Set `CodeHashLockupDepositPercent` for test runtime
      
      The test runtime was setting `CodeHashLockupDepositPercent` to zero.
      This was trivializing many code paths and excluded them from testing. I
      set it to `30%` which is our default value and fixed up all the tests
      that broke. This should give us confidence that the lockup deposit
      collection works properly.
      
      ## Reworked the `MockExecutable` to have both a `deploy` and a `call`
      entry point
      
      This type used for testing could only have either entry point but not
      both. In order to fix the `immutable_data_set_overrides` test I needed to
      add a new function `add_both` to `MockExecutable` that allows it to have
      both entry points. Make sure to make use of it in the future :)
      
      ---------
      
      Co-authored-by: command-bot <>
      Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
      Co-authored-by: PG Herveou <pgherveou@gmail.com>
      Co-authored-by: Bastian Köcher <git@kchr.de>
      Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
  10. Feb 03, 2025
    • deprecate AsyncBackingParams (#7254) · 4cd07c56
      Alin Dima authored
      
      Part of https://github.com/paritytech/polkadot-sdk/issues/5079.
      
      Removes all usage of the static async backing params, replacing them
      with dynamically computed equivalent values (based on the claim queue
      and scheduling lookahead).
      
      Adds a new runtime API for querying the scheduling lookahead value. If
      not present, it falls back to 3 (the default value that is backwards
      compatible with the values we have on production networks for
      `allowed_ancestry_len`).
      
      Also resolves most of
      https://github.com/paritytech/polkadot-sdk/issues/4447, removing code
      that handles async backing not yet being enabled.
      While doing this, I removed the support for collation protocol version 1
      on collators, as it only worked for leaves not supporting async backing
      (of which there are none).
      I also unhooked the legacy v1 statement-distribution (for the same
      reason as above). That subsystem is basically dead code now, so I had to
      remove some of its tests as they would no longer pass (since the
      subsystem no longer sends messages to the legacy variant). I did not
      remove the entire legacy subsystem yet, as that would pollute this PR
      too much. We can remove the entire v1 and v2 validation protocols in a
      follow up PR.
      
      In another PR: remove the test files named `prospective_parachains`
      (it'd pollute this PR if we did it now)
      
      TODO:
      - [x] add deprecation warnings
      - [x] prdoc
      
      ---------
      
      Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
  11. Jan 31, 2025
  12. Jan 30, 2025
    • fix pre-dispatch PoV underweight for ParasInherent (#7378) · 0d35be7b
      ordian authored
      
      This should fix the error log related to PoV pre-dispatch weight being
      lower than post-dispatch for `ParasInherent`:
      ```
      ERROR tokio-runtime-worker runtime::frame-support: Post dispatch weight is greater than pre dispatch weight. Pre dispatch weight may underestimating the actual weight. Greater post dispatch weight components are ignored.
                                              Pre dispatch weight: Weight { ref_time: 47793353978, proof_size: 1019 },
                                              Post dispatch weight: Weight { ref_time: 5030321719, proof_size: 135395 }
      ```
      
      ---------
      
      Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
    • malus-collator: implement malicious collator submitting same collation to all... · 48f69cca
      Stephane Gurgenidze authored
      malus-collator: implement malicious collator submitting same collation to all backing groups (#6924)
      
      ## Issues
      - [[#5049] Elastic scaling: zombienet
      tests](https://github.com/paritytech/polkadot-sdk/issues/5049)
      - [[#4526] Add zombienet tests for malicious
      collators](https://github.com/paritytech/polkadot-sdk/issues/4526)
      
      ## Description
      Modified the undying collator to include a malus mode, in which it
      submits the same collation to all assigned backing groups.
      
      ## TODO
      * [X] Implement malicious collator that submits the same collation to
      all backing groups;
      * [X] Avoid the core index check in the collation generation subsystem:
      https://github.com/paritytech/polkadot-sdk/blob/master/polkadot/node/collation-generation/src/lib.rs#L552-L553;
      * [X] Resolve the mismatch between the descriptor and the commitments
      core index: https://github.com/paritytech/polkadot-sdk/pull/7104
      * [X] Implement `duplicate_collations` test with zombienet-sdk;
      * [X] Add PRdoc.
    • Replace derivative dependency with derive-where (#7324) · 0d644ca0
      Jeeyong Um authored
      
      # Description
      
      Closes #7122.
      
      This PR replaces the unmaintained `derivative` dependency with
      `derive-where`.
      
      ## Integration
      
      This PR doesn't change the public interfaces.
      
      ## Review Notes
      
      The `derivative` crate, previously used to derive basic traits for
      structs with generics or enums, is no longer actively maintained. It has
      been replaced with the `derive-where` crate, which offers a more
      straightforward syntax while providing the same features as
      `derivative`.
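
      A minimal sketch of what the swap looks like in practice (the type name
      below is illustrative, not taken from this PR):

      ```rust
      use core::marker::PhantomData;
      use derive_where::derive_where;

      // `derive_where` implements Clone/Debug/Default for any `T` without
      // adding `T: Clone` (etc.) bounds; previously the same effect required
      // `derivative`'s `#[derivative(Clone(bound = ""))]`-style annotations.
      #[derive_where(Clone, Debug, Default)]
      pub struct Marker<T>(PhantomData<T>);
      ```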
      
      ---------
      
      Co-authored-by: Guillaume Thiolliere <gui.thiolliere@gmail.com>
  13. Jan 28, 2025
    • Implement pallet view function queries (#4722) · 0b8d7441
      Andrew Jones authored
      
      Closes #216.
      
      This PR allows pallets to define a `view_functions` impl like so:
      
      ```rust
      #[pallet::view_functions]
      impl<T: Config> Pallet<T>
      where
      	T::AccountId: From<SomeType1> + SomeAssociation1,
      {
      	/// Query value no args.
      	pub fn get_value() -> Option<u32> {
      		SomeValue::<T>::get()
      	}
      
      	/// Query value with args.
      	pub fn get_value_with_arg(key: u32) -> Option<u32> {
      		SomeMap::<T>::get(key)
      	}
      }
      ```
      ### `QueryId`
      
      Each view function is uniquely identified by a `QueryId`, which for this
      implementation is generated by:
      
      ```
      twox_128(pallet_name) ++ twox_128("fn_name(fnarg_types) -> return_ty")
      ```
      
      The prefix `twox_128(pallet_name)` is the same as the storage prefix for pallets and takes into account multiple instances of the same pallet.
      
      The suffix is generated from the fn type signature so is guaranteed to be unique for that pallet impl. For one of the view fns in the example above it would be `twox_128("get_value_with_arg(u32) -> Option<u32>")`. It is a known limitation that only the type names themselves are taken into account: in the case of type aliases the signature may have the same underlying types but a different id; for generics the concrete types may be different but the signatures will remain the same.
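
      For illustration, the derivation described above could be computed
      roughly as follows (a sketch using `twox_128` from `sp_core`; the
      example pallet name and signature string are placeholders):

      ```rust
      use sp_core::hashing::twox_128;

      /// Illustrative only: concatenate the pallet-name hash and the hash of
      /// the normalized function signature into the 32-byte query id.
      fn view_function_id(pallet_name: &str, fn_signature: &str) -> [u8; 32] {
      	let mut id = [0u8; 32];
      	id[..16].copy_from_slice(&twox_128(pallet_name.as_bytes()));
      	id[16..].copy_from_slice(&twox_128(fn_signature.as_bytes()));
      	id
      }

      // e.g. view_function_id("SomePallet", "get_value_with_arg(u32) -> Option<u32>")
      ```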
      
      The existing Runtime `Call` dispatchables are addressed by their concatenated indices `pallet_index ++ call_index`, and the dispatching is handled by the SCALE decoding of the `RuntimeCallEnum::PalletVariant(PalletCallEnum::dispatchable_variant(payload))`. For `view_functions` the runtime/pallet generated enum structure is replaced by implementing the `DispatchQuery` trait on the outer (runtime) scope, dispatching to a pallet based on the id prefix, and the inner (pallet) scope dispatching to the specific function based on the id suffix.
      
      Future implementations could also modify/extend this scheme and its routing to support pallet-agnostic queries.
      
      ### Executing externally
      
      These view functions can be executed externally via the system runtime api:
      
      ```rust
      pub trait ViewFunctionsApi<QueryId, Query, QueryResult, Error> where
      	QueryId: codec::Codec,
      	Query: codec::Codec,
      	QueryResult: codec::Codec,
      	Error: codec::Codec,
      {
      	/// Execute a view function query.
      	fn execute_query(query_id: QueryId, query: Query) -> Result<QueryResult, Error>;
      }
      ```
      ### `XCQ`
      Currently there is work going on by @xlc to implement [`XCQ`](https://github.com/open-web3-stack/XCQ/) which may eventually supersede this work.
      
      It may be that we still need the fixed function local query dispatching in addition to XCQ, in the same way that we have chain specific runtime dispatchables and XCM.
      
      I have kept this in mind and the high level query API is agnostic to the underlying query dispatch and execution. I am just providing the implementation for the `view_function` definition.
      
      ### Metadata
      Currently I am utilizing the `custom` section of the frame metadata, to avoid modifying the official metadata format until this is standardized.
      
      ### vs `runtime_api`
      There are similarities with `runtime_apis`, some differences being:
      - queries can be defined directly on pallets, so no need for boilerplate declarations and implementations
      - no versioning, the `QueryId` will change if the signature changes. 
      - possibility for queries to be executed from smart contracts (see below)
      
      ### Calling from contracts
      Future work would be to add `weight` annotations to the view function queries, and a host function to `pallet_contracts` to allow executing these queries from contracts.
      
      ### TODO
      
      - [x] Consistent naming (view functions pallet impl, queries, high level api?)
      - [ ] End to end tests via `runtime_api`
      - [ ] UI tests
      - [x] Metadata tests
      - [ ] Docs
      
      ---------
      
      Co-authored-by: kianenigma <kian@parity.io>
      Co-authored-by: James Wilson <james@jsdw.me>
      Co-authored-by: Giuseppe Re <giuseppe.re@parity.io>
      Co-authored-by: Guillaume Thiolliere <guillaume.thiolliere@parity.io>
    • cumulus: bump PARENT_SEARCH_DEPTH and add test for 12-core elastic scaling (#6983) · e6aad5b0
      Alin Dima authored
      On top of https://github.com/paritytech/polkadot-sdk/pull/6757
      
      Fixes https://github.com/paritytech/polkadot-sdk/issues/6858 by bumping
      the `PARENT_SEARCH_DEPTH` constant to a larger value (30) and adds a
      zombienet-sdk test that exercises the 12-core scenario.
      
      This is a node-side limit that restricts the number of allowed pending
      availability candidates when choosing the parent parablock during
      authoring.
      This limit is rather redundant, as the parachain runtime already
      restricts the unincluded segment length to the configured value in the
      [FixedVelocityConsensusHook](https://github.com/paritytech/polkadot-sdk/blob/88d900af/cumulus/pallets/aura-ext/src/consensus_hook.rs#L35)
      (which ideally should be equal to this `PARENT_SEARCH_DEPTH`).
      
      For 12 cores, a value of 24 should be enough, but I bumped it to 30 to
      have some extra buffer.
      
      There are two other potential ways of fixing this:
      - remove this constant altogether, as the parachain runtime already
      makes those guarantees. I chose not to do this, as it can't hurt to have
      an extra safeguard
      - set this value to be equal to the unincluded segment size. This value
      however is not exposed to the node-side and would require a new runtime
      API, which seems overkill for a redundant check.
      
      ---------
      
      Co-authored-by: Javier Viola <javier@parity.io>
  14. Jan 27, 2025
  15. Jan 26, 2025
  16. Jan 24, 2025
  17. Jan 23, 2025
    • Deprecate ParaBackingState API (#6867) · e9393a9a
      Andrei Sandu authored
      
      Currently the `para_backing_state` API is used only by the prospective
      parachains subsystem and returns two things: the constraints for
      parachain blocks and the candidates pending availability.
      
      This PR deprecates `para_backing_state` and introduces a new
      `backing_constraints` API that can be used together with
      `candidates_pending_availability` to get the same information provided
      by `para_backing_state`.
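
      A framework-free sketch of the replacement pattern (the types and the
      closure-based representation of the two calls are illustrative
      assumptions, not the actual runtime API bindings):

      ```rust
      /// What callers previously obtained from a single `para_backing_state` call.
      struct BackingState<C, P> {
      	constraints: C,
      	pending_availability: Vec<P>,
      }

      /// Combine the new `backing_constraints` call with the existing
      /// `candidates_pending_availability` call to rebuild the same view.
      fn backing_state<C, P>(
      	backing_constraints: impl FnOnce() -> Option<C>,
      	candidates_pending_availability: impl FnOnce() -> Vec<P>,
      ) -> Option<BackingState<C, P>> {
      	Some(BackingState {
      		constraints: backing_constraints()?,
      		pending_availability: candidates_pending_availability(),
      	})
      }
      ```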
      
      TODO:
      - [x] PRDoc
      
      ---------
      
      Signed-off-by: Andrei Sandu <andrei-mihail@parity.io>
      Co-authored-by: command-bot <>
    • bump lookahead to 3 for testnet genesis (#7252) · cfc5b6f5
      Alin Dima authored
      This is the right value after
      https://github.com/paritytech/polkadot-sdk/pull/4880, which corresponds
      to an `allowedAncestryLen` of 2 (which is the default).
      
      Will fix https://github.com/paritytech/polkadot-sdk/issues/7105
    • Bridges small nits/improvements (#7307) · 085da479
      Branislav Kontur authored
      This PR contains small fixes identified during work on the larger PR:
      https://github.com/paritytech/polkadot-sdk/issues/6906.
      
      ---------
      
      Co-authored-by: command-bot <>