  1. Jan 22, 2024
  2. Jan 18, 2024
  3. Jan 17, 2024
  4. Jan 16, 2024
    • XCMv4 (#1230) · 8428f678
      Francisco Aguirre authored
      
      
      # Note for reviewer
      
      Most changes are just syntax changes necessary for the new version.
      The most important files are the ones under the `xcm` folder.
      
      # Description 
      
      Added XCMv4.
      
      ## Removed `Multi` prefix
      The following types have been renamed:
      - MultiLocation -> Location
      - MultiAsset -> Asset
      - MultiAssets -> Assets
      - InteriorMultiLocation -> InteriorLocation
      - MultiAssetFilter -> AssetFilter
      - VersionedMultiAsset -> VersionedAsset
      - WildMultiAsset -> WildAsset
      - VersionedMultiLocation -> VersionedLocation
      
      In order to fix a name conflict, the `Assets` in `xcm-executor` were
      renamed to `HoldingAssets`, as they represent assets in holding.
      
      ## Removed `Abstract` asset id
      
      It was not being used anywhere and this simplifies the code.
      
      Now assets are just constructed as follows:
      
      ```rust
      let asset: Asset = (AssetId(Location::new(1, Here)), 100u128).into();
      ```
      
      No need for specifying `Concrete` anymore.
      
      ## Outcome is now a named fields struct
      
      Instead of
      
      ```rust
      pub enum Outcome {
        Complete(Weight),
        Incomplete(Weight, Error),
        Error(Error),
      }
      ```
      
      we now have
      
      ```rust
      pub enum Outcome {
        Complete { used: Weight },
        Incomplete { used: Weight, error: Error },
        Error { error: Error },
      }
      ```
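
      For code that previously tuple-matched `Outcome`, the switch is mechanical.
      A minimal sketch of matching the new named-fields variants (illustrative
      only, not taken from the PR):
      
      ```rust
      // Illustrative only: extract the weight used, if any, from an `Outcome`.
      fn weight_used(outcome: &Outcome) -> Option<Weight> {
          match outcome {
              Outcome::Complete { used } => Some(*used),
              Outcome::Incomplete { used, .. } => Some(*used),
              Outcome::Error { .. } => None,
          }
      }
      ```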
      
      ## Added Reanchorable trait
      
      Now both locations and assets implement this trait, making it easier to
      reanchor both.
      
      ## New syntax for building locations and junctions
      
      Now junctions are built using the following methods:
      
      ```rust
      let location = Location {
          parents: 1,
          interior: [Parachain(1000), PalletInstance(50), GeneralIndex(1984)].into()
      };
      ```
      
      or
      
      ```rust
      let location = Location::new(1, [Parachain(1000), PalletInstance(50), GeneralIndex(1984)]);
      ```
      
      And they are matched like so:
      
      ```rust
      match location.unpack() {
        (1, [Parachain(id)]) => ...
        (0, Here) => ...,
        (1, [_]) => ...,
      }
      ```
      
      This syntax is mandatory in v4, and has also been implemented for v2 and
      v3 to ease migration.
      
      This change was needed to reduce the size of these types.
      
      # TODO
      - [x] Scaffold v4
      - [x] Port github.com/paritytech/polkadot/pull/7236
      - [x] Remove `Multi` prefix
      - [x] Remove `Abstract` asset id
      
      ---------
      
      Co-authored-by: command-bot <>
      Co-authored-by: Keith Yeung <[email protected]>
  5. Jan 10, 2024
    • Unique Usernames in Identity Pallet (#2651) · d1f678c0
      joe petrowski authored
      
      
      This PR allows _username authorities_ to issue unique usernames that
      correspond with an account. It also provides a two-way lookup: from an
      `AccountId` to a single, "primary" `Username` (alongside the
      `Registration`), and from multiple unique `Username`s back to an `AccountId`.
      
      Key features:
      
      - Username Authorities added (and removed) via privileged origin.
      - Authorities have a `suffix` and an `allocation`. They can grant up to
      `allocation` usernames. Their `suffix` will be appended to the usernames
      that they issue. A suffix may be up to 7 characters long.
      - Users can ask an authority to grant them a username. This will take
      the form `myusername.suffix`. The entire name (including suffix) must be
      less than or equal to 32 alphanumeric characters (a rough validity check
      is sketched after this list).
      - Users can approve a username for themselves in one of two ways (that
      is, authorities cannot grant them arbitrarily):
        - Pre-sign the entire username (including suffix) with a secret key
      that corresponds to their `AccountId` (for keyed accounts, obviously); or
        - Accept the username after it has been granted by an authority (it
      will be queued until accepted) (for non-keyed accounts like pure proxies
      or multisigs).
      - The system does not require any funds or deposits. Users without an
      identity will be given a default one (presumably all fields set to
      `None`). If they update this info, they will need to place the normal
      storage deposit.
      - If a user does not have any username, their first one will be set as
      `Primary`, and their `AccountId` will map to that one. If they get
      subsequent usernames, they can choose which one to be their primary via
      `set_primary_username`.
      - There are some state cleanup functions to remove expired usernames
      that have not been accepted and dangling usernames whose owners have
      called `clear_identity`.
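
      The format rules above (suffix of at most 7 characters, full name of at
      most 32 alphanumeric characters) can be sanity-checked with a small
      helper. This is an illustrative sketch, not the pallet's validation code,
      and the exact character rules live in the pallet:
      
      ```rust
      // Illustrative limits mirroring the description above, not the pallet's constants.
      const MAX_SUFFIX_LEN: usize = 7;
      const MAX_USERNAME_LEN: usize = 32;
      
      /// Rough validity check for a full username of the form `myusername.suffix`.
      fn is_valid_username(full: &str, suffix: &str) -> bool {
          let Some(body) = full.strip_suffix(suffix) else { return false };
          let Some(name) = body.strip_suffix('.') else { return false };
          full.len() <= MAX_USERNAME_LEN
              && suffix.len() <= MAX_SUFFIX_LEN
              && !name.is_empty()
              && name.chars().all(|c| c.is_ascii_alphanumeric())
              && suffix.chars().all(|c| c.is_ascii_alphanumeric())
      }
      ```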
      
      TODO:
      
      - [x] Add migration to runtimes
      - [x] Probably do off-chain migration into People Chain genesis
      - [x] Address a few TODO questions in code (please review)
      
      ---------
      
      Co-authored-by: Liam Aharon <[email protected]>
      Co-authored-by: Gonçalo Pestana <[email protected]>
      Co-authored-by: Oliver Tale-Yazdi <[email protected]>
      Co-authored-by: Dónal Murray <[email protected]>
  6. Jan 06, 2024
  7. Dec 21, 2023
  8. Dec 15, 2023
  9. Dec 14, 2023
  10. Dec 13, 2023
    • Approve multiple candidates with a single signature (#1191) · a84dd0db
      Alexandru Gheorghe authored
      Initial implementation for the plan discussed here: https://github.com/paritytech/polkadot-sdk/issues/701
      Built on top of https://github.com/paritytech/polkadot-sdk/pull/1178
      v0: https://github.com/paritytech/polkadot/pull/7554,
      
      ## Overall idea
      
      When approval-voting checks a candidate and is ready to advertise the
      approval, defer it per relay-chain block until we either have
      MAX_APPROVAL_COALESCE_COUNT candidates to sign or a candidate has stayed
      MAX_APPROVALS_COALESCE_TICKS in the queue; in both cases we sign whatever
      candidates we have available.
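
      A rough sketch of that deferral rule (types and values are illustrative,
      not the actual approval-voting implementation):
      
      ```rust
      // Illustrative stand-ins; the real types live in the approval-voting subsystem.
      type CandidateHash = [u8; 32];
      const MAX_APPROVAL_COALESCE_COUNT: usize = 6;
      const MAX_APPROVALS_COALESCE_TICKS: u64 = 12;
      
      struct PendingApprovals {
          candidates: Vec<CandidateHash>,
          oldest_tick: u64,
      }
      
      /// Sign a coalesced approval once we have enough candidates or the oldest
      /// pending candidate has waited long enough.
      fn should_sign_now(pending: &PendingApprovals, current_tick: u64) -> bool {
          pending.candidates.len() >= MAX_APPROVAL_COALESCE_COUNT
              || current_tick.saturating_sub(pending.oldest_tick) >= MAX_APPROVALS_COALESCE_TICKS
      }
      ```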
      
      This should allow us to reduce the number of approval messages we have
      to create/send/verify. The parameters are configurable, so we should
      find values that balance:
      
      - Security of the network: delaying the broadcast of an approval
      shouldn't put finality at risk, and to make sure that never happens we
      won't delay sending a vote if we are past 2/3 of the no-show time.
      - Scalability of the network: MAX_APPROVAL_COALESCE_COUNT = 1 &
      MAX_APPROVALS_COALESCE_TICKS = 0 is what we have now, and we know from
      the measurements we did on Versi that it bottlenecks
      approval-distribution/approval-voting when the number of validators and
      parachains increases significantly.
      - Block storage: in case of disputes we have to import these votes on
      chain, and that increases the necessary storage by
      MAX_APPROVAL_COALESCE_COUNT * CandidateHash per vote. Given that
      disputes are not the normal mode of operation and we will limit
      MAX_APPROVAL_COALESCE_COUNT to single-digit numbers, this should be good
      enough. Alternatively, we could try to create a better way to store this
      on-chain through indirection, if that's needed.
      
      ## Other fixes:
      - Fixed the fact that we were sending random assignments to
      non-validators; that was wrong because they won't do anything with them
      and won't gossip them either, since they do not have a grid topology
      set, so the random assignments were wasted.
      - Added metrics to be able to debug potential no-shows and
      mis-processing of approvals/assignments.
      
      ## TODO:
      - [x] Get feedback, that this is moving in the right direction. @ordian
      @sandreim @eskimor @burdges, let me know what you think.
      - [x] More and more testing.
      - [x]  Test in versi.
      - [x] Make MAX_APPROVAL_COALESCE_COUNT &
      MAX_APPROVAL_COALESCE_WAIT_MILLIS a parachain host configuration.
      - [x] Make sure the backwards compatibility works correctly
      - [x] Make sure this direction is compatible with other streams of work:
      https://github.com/paritytech/polkadot-sdk/issues/635 &
      https://github.com/paritytech/polkadot-sdk/issues/742
      
      
      - [x] Final versi burn-in before merging
      
      ---------
      
      Signed-off-by: Alexandru Gheorghe <[email protected]>
  11. Dec 12, 2023
    • Changelogs local generation (#1411) · 42a3afba
      Chevdor authored
      
      
      This PR introduces a script and some templates that use the prdoc files
      involved in a release to build:
      - the changelog
      - a simple draft of the audience documentation
      
      Since prdoc presence was only enforced in the middle of version 1.5.0,
      not all PRs came with a `prdoc` file.
      This PR creates all the missing `prdoc` files with some minimal content
      that allows the changelog to be generated properly.
      The generated content is **not** suitable for the audience
      documentation.
      
      The audience documentation will be possible with the next version, when
      all PRs come with a proper `prdoc`.
      
      ## Assumptions
      
      - the prdoc files for release `vX.Y.Z` have been moved under
      `prdoc/X.Y.Z`
      - for now, the changelog relies on the prdoc files containing the author
      + topic fields. Those fields are optional.
      
      The build script can be called as:
      ```
      VERSION=X.Y.Z ./scripts/release/build-changelogs.sh
      ```
      
      Related:
      -  #1408
      
      ---------
      
      Co-authored-by: EgorPopelyaev <[email protected]>
    • Staking: Add `deprecate_controller_batch` AdminOrigin call (#2589) · 048a9c27
      Ross Bulat authored
      
      
      Partially Addresses #2500
      
      Adds a `deprecate_controller_batch` call to the staking pallet that is
      callable by `Root` and `StakingAdmin`. To be used for controller account
      deprecation and removed thereafter. Adds a
      `MaxControllersInDeprecationBatch` pallet constant that defines the
      maximum possible deprecations per call.
      
      - [x] Add `deprecate_controller_batch` call and
      `MaxControllersInDeprecationBatch` constant.
      - [x] Add tests, benchmark, weights. Tests ensure that weight is only
      consumed for unique pairs.
      - [x] Add `StakingAdmin` origin to staking's `AdminOrigin` type in the
      Westend runtime.
      - [x] Determined that the worst case of 5,900 deprecations fits into
      `maxBlock` `proofSize` and `refTime` in both normal and operational
      thresholds, meaning we can deprecate all controllers for each network in
      one call.
      
      ## Block Weights
      
      By querying `consts.system.blockWeights` we can see that the
      `deprecate_controller_batch` weights fit within the `normal` threshold
      on Polkadot.
      
      #### `deprecate_controller_batch` where i = 5,900:
      - Ref time: 69,933,325,300
      - Proof size: 21,040,390
      
      ### Polkadot 
      
      ```
      // consts.system.blockWeights
      
      maxBlock: {
        refTime: 2,000,000,000,000
        proofSize: 18,446,744,073,709,551,615
      }
      normal: {
        maxExtrinsic: {
          refTime: 1,479,873,955,000
          proofSize: 13,650,590,614,545,068,195
        }
        maxTotal: {
          refTime: 1,500,000,000,000
          proofSize: 13,835,058,055,282,163,711
        }
      }
      ```
      
      ### Kusama
      
      ```
      // consts.system.blockWeights
      
      maxBlock: {
        refTime: 2,000,000,000,000
        proofSize: 18,446,744,073,709,551,615
      }
      normal: {
        maxExtrinsic: {
          refTime: 1,479,875,294,000
          proofSize: 13,650,590,614,545,068,195
        }
        maxTotal: {
          refTime: 1,500,000,000,000
          proofSize: 13,835,058,055,282,163,711
        }
      }
      ```
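
      As a quick sanity check on the claim above, the worst-case batch uses
      only a small fraction of the `normal` per-extrinsic limits (plain
      arithmetic on the numbers quoted in this description):
      
      ```rust
      fn main() {
          // `deprecate_controller_batch` with i = 5,900, from the benchmark above.
          let batch_ref_time: u128 = 69_933_325_300;
          let batch_proof_size: u128 = 21_040_390;
      
          // Polkadot `normal.maxExtrinsic` limits quoted above.
          let max_ref_time: u128 = 1_479_873_955_000;
          let max_proof_size: u128 = 13_650_590_614_545_068_195;
      
          // Roughly 4.7% of the allowed refTime and a negligible share of proofSize.
          println!("refTime share: {:.2}%", 100.0 * batch_ref_time as f64 / max_ref_time as f64);
          println!("proofSize share: {:e}", batch_proof_size as f64 / max_proof_size as f64);
      }
      ```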
      
      ---------
      
      Co-authored-by: command-bot <>
      Co-authored-by: Gonçalo Pestana <[email protected]>
  12. Dec 11, 2023
  13. Dec 07, 2023
  14. Dec 06, 2023
    • pallet-xcm: add new flexible `transfer_assets()` call/extrinsic (#2388) · e7651cf4
      Adrian Catangiu authored
      # Motivation (+testing)
      
      ### Enable easy `ForeignAssets` transfers using `pallet-xcm` 
      
      We had just previously added capabilities to teleport fees during
      reserve-based transfers, but what about reserve-transferring fees when
      needing to teleport some non-fee asset?
      
      This PR aligns everything under either explicit reserve-transfer,
      explicit teleport, or this new flexible `transfer_assets()` which can
      mix and match as needed, with fewer artificial constraints imposed on
      the user.
      
      This will enable, for example, a (non-system) parachain to teleport its
      `ForeignAssets` assets to AssetHub while using DOT to pay fees (the
      assets are teleported, as foreign assets should be from their owner
      chain, while the DOT used for fees can only be reserve-transferred
      between said parachain and AssetHub).
      
      Added `xcm-emulator` tests for this scenario ^.
      
      # Description
      
      Reverts `(limited_)reserve_transfer_assets` to only allow reserve-based
      transfers for all `assets` including fees.
      
      Similarly `(limited_)teleport_assets` only allows teleports for all
      `assets` including fees.
          
      For complex combinations of asset transfers where assets and fees may
      have different reserves or different reserve/teleport trust
      configurations, users can use the newly added `transfer_assets()`
      extrinsic, which is more flexible and allows more complex scenarios.
      
      `assets` (excluding `fees`) must have the same reserve location or
      otherwise be teleportable to `dest`.
      No limitations are imposed on `fees`. The supported behaviours are
      (summarised in the sketch after this list):
      
      - for local reserve: transfer assets to sovereign account of destination
      chain and forward a notification XCM to `dest` to mint and deposit
      reserve-based assets to `beneficiary`.
      - for destination reserve: burn local assets and forward a notification
      to `dest` chain to withdraw the reserve assets from this chain's
      sovereign account and deposit them to `beneficiary`.
      - for remote reserve: burn local assets, forward XCM to reserve chain to
      move reserves from this chain's SA to `dest` chain's SA, and forward
      another XCM to `dest` to mint and deposit reserve-based assets to
      `beneficiary`.
      - for teleports: burn local assets and forward XCM to `dest` chain to
      mint/teleport assets and deposit them to `beneficiary`.
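
      The four behaviours above can be summarised as a per-transfer strategy;
      the sketch below is purely illustrative (simplified types, not the
      pallet's internals):
      
      ```rust
      // Illustrative stand-in for the XCM location type.
      struct Location;
      
      // Simplified summary of the four cases listed above.
      enum TransferKind {
          LocalReserve,
          DestinationReserve,
          RemoteReserve(Location),
          Teleport,
      }
      
      fn describe(kind: &TransferKind) -> &'static str {
          match kind {
              TransferKind::LocalReserve =>
                  "move assets into dest's sovereign account locally, notify dest to mint to beneficiary",
              TransferKind::DestinationReserve =>
                  "burn locally, notify dest to withdraw from this chain's sovereign account and deposit",
              TransferKind::RemoteReserve(_) =>
                  "burn locally, move reserves between sovereign accounts on the reserve chain, then deposit on dest",
              TransferKind::Teleport =>
                  "burn locally, notify dest to teleport-mint and deposit to beneficiary",
          }
      }
      ```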
      
      ## Review notes
      
      Only around 500 lines are prod code (see `pallet_xcm/src/lib.rs`), the
      rest of the PR is new tests and improving existing tests.
      
      ---------
      
      Co-authored-by: command-bot <>
  15. Dec 05, 2023
  16. Nov 29, 2023
  17. Nov 28, 2023
    • Pools: Add ability to configure commission claiming permissions (#2474) · 75062717
      Ross Bulat authored
      Addresses #409.
      
      This request has been raised by multiple community members, namely the
      ability for the nomination pool root role to configure permissionless
      commission claiming:
      
      > Would it be possible to have a claim_commission_other extrinsic for
      claiming commission of nomination pools permissionless?
      
      This PR does not quite introduce this additional call, but amends
      `do_claim_commission` to check a new `claim_permission` field in the
      `Commission` struct, configured by an enum:
      
      ```
      enum CommissionClaimPermission {
         Permissionless,
         Account(AccountId),
      }
      ```
      This can be optionally set in a bonded pool's
      `commission.claim_permission` field:
      
      ```
      struct BondedPool {
         commission: {
            <snip>
            claim_permission: Option<CommissionClaimPermission<T::AccountId>>,
         },
         <snip>
      }
      ```
      
      This is a new field and requires a migration to add it to existing
      pools. It will be `None` on pool creation; if it is not set, the `root`
      role retains sole access to claim commission, which is the behaviour
      today. Once set, the field _can_ be set back to `None`.
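
      A simplified sketch of the resulting permission check (illustrative
      only, not the pallet's `do_claim_commission` logic):
      
      ```rust
      // As shown above, made generic for this standalone sketch.
      enum CommissionClaimPermission<AccountId> {
          Permissionless,
          Account(AccountId),
      }
      
      /// Illustrative only; mirrors the behaviour described above.
      fn can_claim_commission<AccountId: PartialEq>(
          who: &AccountId,
          root: &AccountId,
          claim_permission: &Option<CommissionClaimPermission<AccountId>>,
      ) -> bool {
          match claim_permission {
              Some(CommissionClaimPermission::Permissionless) => true,
              // Assumption in this sketch: the named account may claim; whether
              // `root` also retains access when this is set is a pallet detail.
              Some(CommissionClaimPermission::Account(account)) => who == account,
              // Not set: today's behaviour, only the `root` role may claim.
              None => who == root,
          }
      }
      ```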
      
      #### Changes
      - [x] Add `commission.claim_permission` field.
      - [x] Add `can_claim_commission` and amend `do_claim_commission`.
      - [x] Add `set_commission_claim_permission` call.
      - [x] Test to cover new configs and call.
      - [x] Add and amend benchmarks.
      - [x] Generate new weights + slot into call
      `set_commission_claim_permission`.
      - [x] Add migration to introduce `commission.claim_permission`, bump
      storage version.
      - [x] Update Westend weights.
      - [x] Migration working.
      
      ---------
      
      Co-authored-by: command-bot <>
  18. Nov 27, 2023
  19. Nov 17, 2023
  20. Nov 16, 2023
  21. Nov 15, 2023
    • Identity Deposits Relay to Parachain Migration (#1814) · c79b234b
      joe petrowski authored
      The goal of this PR is to migrate Identity deposits from the Relay Chain
      to a system parachain.
      
      The problem I want to solve is that `IdentityOf` and `SubsOf` both store
      an amount that's held in reserve as a storage deposit. When migrating to
      a parachain, we can take a snapshot of the actual `IdentityInfo` and
      sub-account mappings, but should migrate (off chain) the `deposit`s to
      zero, since the chain (and by extension, accounts) won't have any funds
      at genesis.
      
      The good news is that we expect deposits to be significantly lower
      (possibly 100x lower) on the parachain. That is, a deposit of 21 DOT on
      the Relay Chain would need only 0.21 DOT on a parachain. This PR
      proposes to migrate the deposits in the following way:
      
      1. Introduces a new pallet with two extrinsics: 
      - `reap_identity`: Has a configurable `ReapOrigin`, which would be set
      to `EnsureSigned` on the Relay Chain (i.e. callable by anyone) and
      `EnsureRoot` on the parachain (we don't want identities reaped from
      there).
      - `poke_deposit`: Checks what deposit the pallet holds (at genesis,
      zero) and attempts to update the amount based on the calculated deposit
      for storage data.
      2. `reap_identity` clears all storage data for a `target` account and
      unreserves their deposit.
      3. A `ReapIdentityHandler` teleports the necessary DOT to the parachain
      and calls `poke_deposit`. Since the parachain deposit is much lower, and
      was just unreserved, we know we have enough.
      
      One awkwardness I ran into was that the XCMv3 instruction set does not
      provide a way for the system to teleport assets without a fee being
      deducted on reception. Users shouldn't have to pay a fee for the system
      to migrate their info to a more efficient location. So I wrote my own
      program and did the `InitiateTeleport` accounting on my own to send a
      program with `UnpaidExecution`. I have discussed an
      `InitiateUnpaidTeleport` instruction with @franciscoaguirre. Obviously,
      any chain executing this would have to pass a `Barrier` for free
      execution.
      
      TODO:
      
      - [x] Confirm People Chain ParaId
      - [x] Confirm People Chain deposit rates (determined in
      https://github.com/paritytech/polkadot-sdk/pull/2281)
      - [x] Add pallet to Westend
      
      ---------
      
      Co-authored-by: Bastian Köcher <[email protected]>
  22. Nov 14, 2023
  23. Nov 13, 2023
    • pallet-xcm: enhance `reserve_transfer_assets` to support remote reserves (#1672) · 18257373
      Adrian Catangiu authored
      ## Motivation
      
      `pallet-xcm` is the main user-facing interface for XCM functionality,
      including asset manipulation functions like the `teleport_assets()` and
      `reserve_transfer_assets()` calls.
      
      While `teleport_assets()` works both ways, `reserve_transfer_assets()`
      works only for sending reserve-based assets to a remote destination and
      beneficiary when the reserve is the _local chain_.
      
      ## Solution
      
      This PR enhances `pallet_xcm::(limited_)reserve_transfer_assets` to
      support transfers when the reserve is another chain.
      This allows complete, **bi-directional** reserve-based asset transfer
      user stories using `pallet-xcm`.
      
      Enables the following scenarios:
      - transferring assets with local reserve (was previously supported iff
      asset used as fee also had local reserve - now it works in all cases),
      - transferring assets with reserve on destination,
      - transferring assets with reserve on remote/third-party chain (iff
      assets and fees have same remote reserve),
      - transferring assets with reserve different than the reserve of the
      asset to be used as fees - meaning can be used to transfer random asset
      with local/dest reserve while using DOT for fees on all involved chains,
      even if DOT local/dest reserve doesn't match asset reserve,
      - transferring assets with any type of local/dest reserve while using
      fees which can be teleported between involved chains.
      
      All of the above is done by the pallet's inner logic, without the user
      having to specify the scenario/reserves/teleports/etc. The correct
      scenario is identified and the corresponding XCM programs are built
      automatically, based on the runtime configuration of trusted teleporters
      and trusted reserves.
      
      #### Current limitations:
      - while `fees` and "non-fee" `assets` CAN have different reserves (or
      fees CAN be teleported), the remaining "non-fee" `assets` CANNOT, among
      themselves, have different reserve locations (this is also implicitly
      enforced by `MAX_ASSETS_FOR_TRANSFER=2`, but this can be safely
      increased in the future).
      - `fees` and "non-fee" `assets` CANNOT have **different remote**
      reserves (this could also be supported in the future, but adds even more
      complexity while possibly not being worth it - we'll see what the future
      holds).
      
      Fixes https://github.com/paritytech/polkadot-sdk/issues/1584
      Fixes https://github.com/paritytech/polkadot-sdk/issues/2055
      
      
      
      ---------
      
      Co-authored-by: Francisco Aguirre <[email protected]>
      Co-authored-by: Branislav Kontur <[email protected]>
    • pallet-grandpa: Remove `GRANDPA_AUTHORITIES_KEY` (#2181) · ebcf0a0f
      Bastian Köcher authored
      
      
      Remove the `GRANDPA_AUTHORITIES_KEY` key and its usage. Apparently this
      was used in the early days to communicate the GRANDPA authorities to the
      node. However, we now have a runtime API that does this for us. So, this
      pull request moves from the custom managed storage item to a
      FRAME-managed storage item.
      
      This PR also includes a migration for doing the switch on a running
      chain.
      
      ---------
      
      Co-authored-by: Davide Galassi <[email protected]>
  24. Nov 10, 2023
  25. Nov 03, 2023
    • Identity pallet improvements (#2048) · 21fbc00d
      georgepisaltu authored
      This PR is a follow-up to #1661.
      
      - [x] rename the `simple` module to `legacy`
      - [x] fix benchmarks to disregard the number of additional fields
      - [x] change the storage deposits to charge per encoded byte of the
      identity information instance, removing the need for `fn
      additional(&self) -> usize` in `IdentityInformationProvider` (a rough
      sketch of the byte-based deposit follows this list)
      - [x] ~add an extrinsic to rejig deposits to account for the change
      above~
      - [ ] ~ensure through proper configuration that the new byte-based
      deposit is always lower than whatever is reserved now~
      - [x] remove `IdentityFields` from the `set_fields` extrinsic signature,
      as per [this
      discussion](https://github.com/paritytech/polkadot-sdk/pull/1661#discussion_r1371703403)
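
      A rough sketch of the byte-based deposit mentioned in the list above
      (the constant names and values are assumptions for illustration; the
      real ones come from the pallet `Config`):
      
      ```rust
      // Assumed parameters for illustration only.
      const BASIC_DEPOSIT: u128 = 1_000_000_000;
      const BYTE_DEPOSIT: u128 = 10_000_000;
      
      /// Deposit for an identity: a base amount plus a charge per encoded byte of
      /// the identity information instance.
      fn identity_deposit(encoded_info_len: usize) -> u128 {
          BASIC_DEPOSIT + BYTE_DEPOSIT * encoded_info_len as u128
      }
      ```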
      
      > ensure through proper configuration that the new byte-based deposit is
      always lower than whatever is reserved now
      
      Not sure this is needed anymore. If the new deposits are higher than
      what is currently on chain and users don't have enough funds to reserve
      what is needed, the extrinsic fails and they're basically grandfathered
      and frozen until they add more funds and/or make a change to their
      identity. This behavior seems fine to me. Original idea
      [here](https://github.com/paritytech/polkadot-sdk/pull/1661#issuecomment-1779606319).
      
      > add an extrinsic to rejig deposits to account for the change above
      
      This was initially implemented but now removed from this PR in favor of
      the implementation detailed
      [here](https://github.com/paritytech/polkadot-sdk/pull/2088).
      
      ---------
      
      Signed-off-by: georgepisaltu <[email protected]>
      Co-authored-by: joepetrowski <[email protected]>
  26. Nov 01, 2023
    • [NPoS] Paging reward payouts in order to scale rewardable nominators (#1189) · 00b85c51
      Ankan authored
      helps https://github.com/paritytech/polkadot-sdk/issues/439.
      closes https://github.com/paritytech/polkadot-sdk/issues/473.
      
      PR link in the older substrate repository:
      https://github.com/paritytech/substrate/pull/13498.
      
      # Context
      Rewards payout is processed today in a single block and limited to
      `MaxNominatorRewardedPerValidator`. This number is currently 512 on both
      Kusama and Polkadot.
      
      This PR tries to scale the nominator payouts to an unlimited count in a
      multi-block fashion. Exposures are stored in pages, with each page
      capped to a certain number (`MaxExposurePageSize`). Starting out, this
      number would be the same as `MaxNominatorRewardedPerValidator`, but
      eventually it can be lowered through new runtime upgrades to limit the
      rewardable nominators per dispatched call.
      
      The changes in the PR are backward compatible.
      
      ## How payouts would work like after this change
      Staking exposes two calls: 1) the existing `payout_stakers` and 2) the
      new `payout_stakers_by_page`.
      
      ### payout_stakers
      This remains backward compatible with no signature change. If for a
      given era a validator has multiple pages, they can call `payout_stakers`
      multiple times. The pages are executed in an ascending sequence and the
      runtime takes care of preventing double claims.
      
      ### payout_stakers_by_page
      Very similar to `payout_stakers` but also accepts an extra param
      `page_index`. An account can choose to pay out rewards only for an
      explicitly passed `page_index`.
      
      **Let's look at an example scenario**
      Suppose an active validator on Kusama has 1,100 nominators and
      `MaxExposurePageSize` is set to 512 for era `e`. In order to pay out
      rewards to all nominators, the caller would need to call
      `payout_stakers` 3 times.
      
      - `payout_stakers(origin, stash, e)` => will pay the first 512
      nominators.
      - `payout_stakers(origin, stash, e)` => will pay the second set of 512
      nominators.
      - `payout_stakers(origin, stash, e)` => will pay the last set of 76
      nominators.
      ...
      - `payout_stakers(origin, stash, e)` => calling it the 4th time would
      return an error `InvalidPage`.
      
      The above calls can also be replaced by `payout_stakers_by_page` and
      passing a `page_index` explicitly.
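
      The page count in the example is just the exposure size divided by the
      page size, rounded up:
      
      ```rust
      fn main() {
          let nominators: u32 = 1_100;
          let max_exposure_page_size: u32 = 512;
          // Ceiling division: 1,100 nominators at 512 per page => 3 pages, hence 3 calls.
          let pages = (nominators + max_exposure_page_size - 1) / max_exposure_page_size;
          assert_eq!(pages, 3);
      }
      ```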
      
      ## Commission note
      Validator commission is paid out in chunks across all the pages, where
      each commission chunk is proportional to the total stake of the current
      page. This implies that the higher the total stake of a page, the higher
      the commission will be. If all the pages of a validator for a single era
      are paid out, the sum of the commission paid to the validator across all
      pages should be equal to what the commission would have been with a
      non-paged exposure.
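
      In other words, each page's commission chunk is proportional to that
      page's share of the total stake; roughly (ignoring integer rounding):
      
      ```rust
      /// Illustrative only: the commission paid out together with a given page.
      /// Summed over all pages this equals `total_commission`, matching the
      /// non-paged behaviour described above.
      fn page_commission(total_commission: u128, page_stake: u128, total_stake: u128) -> u128 {
          total_commission * page_stake / total_stake
      }
      ```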
      
      ### Migration Note
      Strictly speaking, we did not need to bump our storage version since
      there is no migration of storage in this PR. But it is still useful to
      mark a storage upgrade for the following reasons:
      
      - New storage items are introduced in this PR while some older storage
      items are deprecated.
      - For the next `HistoryDepth` eras, the exposure would be incrementally
      migrated to its corresponding paged storage item.
      - Runtimes using the staking pallet would strictly need to wait at least
      `HistoryDepth` eras on the current upgraded version (14) for the
      migration to complete. At some era `E` such that `E >
      era_at_which_V14_gets_into_effect + HistoryDepth`, we will upgrade to
      version X, which will remove the deprecated storage items.
      In other words, it is a strict requirement that E<sub>x</sub> -
      E<sub>14</sub> > `HistoryDepth`, where
      E<sub>x</sub> = era at which the deprecated storage items are removed
      from the runtime,
      E<sub>14</sub> = era at which the runtime is upgraded to version 14.
      - For Polkadot and Kusama, there is a [tracker
      ticket](https://github.com/paritytech/polkadot-sdk/issues/433) to clean
      up the deprecated storage items.
      
      ### Storage Changes
      
      #### Added
      - ErasStakersOverview
      - ClaimedRewards
      - ErasStakersPaged
      
      #### Deprecated
      The following can be cleaned up after 84 eras which is tracked
      [here](https://github.com/paritytech/polkadot-sdk/issues/433).
      
      - ErasStakers.
      - ErasStakersClipped.
      - StakingLedger.claimed_rewards, renamed to
      StakingLedger.legacy_claimed_rewards.
      
      ### Config Changes
      - Renamed MaxNominatorRewardedPerValidator to MaxExposurePageSize.
      
      ### TODO
      - [x] Tracker ticket for cleaning up the old code after 84 eras.
      - [x] Add companion.
      - [x] Redo benchmarks before merge.
      - [x] Add Changelog for pallet_staking.
      - [x] Pallet should be configurable to enable/disable paged rewards.
      - [x] Commission payouts are distributed across pages.
      - [x] Review documentation thoroughly.
      - [x] Rename `MaxNominatorRewardedPerValidator` ->
      `MaxExposurePageSize`.
      - [x] NMap for `ErasStakersPaged`.
      - [x] Deprecate ErasStakers.
      - [x] Integrity tests.
      
      ### Followup issues
      [Runtime api for deprecated ErasStakers storage
      item](https://github.com/paritytech/polkadot-sdk/issues/426)
      
      ---------
      
      Co-authored-by: Javier Viola <[email protected]>
      Co-authored-by: Ross Bulat <[email protected]>
      Co-authored-by: command-bot <>
  27. Oct 31, 2023
    • 1953 defensive testing extrinsic (#1998) · 6e2f94f8
      Adel Arja authored
      
      
      # Description
      
      The `trigger_defensive` call has been added to the `root-testing`
      pallet. The idea is to have this pallet running on `Rococo/Westend` and
      use it to verify if the runtime monitoring works end-to-end.
      
      To accomplish this, `trigger_defensive` dispatches an event when it is
      called.
      
      Closes #1953
      
      # Checklist
      
      - [x] My PR includes a detailed description as outlined in the
      "Description" section above
      - [ ] My PR follows the [labeling requirements](CONTRIBUTING.md#Process)
      of this project (at minimum one label for `T`
        required)
      - [ ] I have made corresponding changes to the documentation (if
      applicable)
      - [ ] I have added tests that prove my fix is effective or that my
      feature works (if applicable)
      
      
      ---------
      
      Signed-off-by: Oliver Tale-Yazdi <[email protected]>
      Co-authored-by: Oliver Tale-Yazdi <[email protected]>
  28. Oct 24, 2023
    • Make `IdentityInfo` generic in `pallet-identity` (#1661) · 91851951
      georgepisaltu authored
      Fixes #179 
      
      # Description
      
      This PR makes the structure containing identity information used in
      `pallet-identity` generic through the pallet `Config`. Additionally, the
      old structure is now available in a separate module called `simple`
      (pending rename) and is compatible with the new interface.
      
      Another change in this PR is that while the `additional` field in
      `IdentityInfo` stays for backwards compatibility reasons, the associated
      costs are still present in the pallet through the `additional` function
      in the `IdentityInformationProvider` interface. This function is marked
      as deprecated as it is only a temporary solution to the backwards
      compatibility problem we had. In short, we could have removed the
      additional fields in the struct and done a migration, but we chose to
      wait and do it off-chain through the genesis of the system parachain.
      After we move the identity pallet to the parachain, additional fields
      will be migrated into the existing fields and the `additional` key-value
      store will be removed. Until that happens, this interface will provide
      the necessary information to properly account for the associated costs.
      
      Additionally, this PR fixes an unrelated issue: the `IdentityField` enum
      used to represent the fields as bitflags couldn't store more than 8
      fields, even though it was marked as `#[repr(u64)]`. This was because of
      the `derive` implementation of `TypeInfo`, which assumed `u8` semantics.
      The custom implementation of this trait in
      https://github.com/paritytech/polkadot-sdk/commit/0105cc0396b7a53d0b290f48b1225847f6d17321
      fixes the issue.
      
      ---------
      
      Signed-off-by: georgepisaltu <[email protected]>
      Co-authored-by: Sam Johnson <[email protected]>
      Co-authored-by: joe petrowski <[email protected]>
    • Ensure correct variant count in `Runtime[Hold/Freeze]Reason` (#1900) · 35eb133b
      Kian Paimani authored
      closes https://github.com/paritytech/polkadot-sdk/issues/1882
      
      
      
      ## Breaking Changes
      
      This PR introduces a new item to `pallet_balances::Config`:
      
      ```diff
      trait Config {
      ++    type RuntimeFreezeReasons;
      }
      ```
      
      This value is only used to check it against `type MaxFreeze`. A similar
      check has been added for `MaxHolds` against `RuntimeHoldReasons`, which
      is already given to `pallet_balances`.
      
      In all contexts, you should pass the real `RuntimeFreezeReasons`
      generated by `construct_runtime` to `type RuntimeFreezeReasons`. Passing
      `()` would also work, but it would imply that the runtime uses no
      freezes at all.
      
      ---------
      
      Signed-off-by: Oliver Tale-Yazdi <[email protected]>
      Co-authored-by: Oliver Tale-Yazdi <[email protected]>
  29. Oct 23, 2023
  30. Oct 18, 2023
    • Introduce XcmFeesToAccount fee manager (#1234) · 3dece311
      Keith Yeung authored
      
      
      Combination of paritytech/polkadot#7005, its addon PR
      paritytech/polkadot#7585 and its companion paritytech/cumulus#2433.
      
      This PR introduces a new XcmFeesToAccount struct which implements the
      `FeeManager` trait, and assigns this struct as the `FeeManager` in the
      XCM config for all runtimes.
      
      The struct simply deposits all fees handled by the XCM executor to a
      specified account. In all runtimes, the specified account is configured
      as the treasury account.
      
      XCM __delivery__ fees are now being introduced (unless the root origin
      is sending a message to a system parachain on behalf of the originating
      chain).
      
      # Note for reviewers
      
      Most file changes are tests that had to be modified to account for the
      new fees.
      Main changes are in:
      - cumulus/pallets/xcmp-queue/src/lib.rs <- To make it track the delivery
      fees exponential factor
      - polkadot/xcm/xcm-builder/src/fee_handling.rs <- Added. Has the
      FeeManager implementation
      - All runtime xcm_config files <- To add the FeeManager to the XCM
      configuration
      
      # Important note
      
      After this change, instructions that create and send a new XCM (Query*,
      Report*, ExportMessage, InitiateReserveWithdraw, InitiateTeleport,
      DepositReserveAsset, TransferReserveAsset, LockAsset and RequestUnlock)
      will require the corresponding origin account in the origin register to
      pay for transport delivery fees, and the onward message will fail to be
      sent if the origin account does not have the required amount. This
      delivery fee is on top of what we already collect as tx fees in
      pallet-xcm and XCM BuyExecution fees!
      
      Wallet UIs that want to expose the new delivery fee can do so using the
      formula:
      
      ```
      delivery_fee_factor * (base_fee + encoded_msg_len * per_byte_fee)
      ```
      
      where the delivery fee factor can be obtained from the corresponding
      pallet based on which transport you are using (UMP, HRMP or bridges),
      the base fee is a constant, the encoded message length comes from the
      message itself, and the per-byte fee is the same as the configured
      per-byte fee for transactions (i.e. `TransactionByteFee`).
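
      A worked example of that formula with made-up numbers (the real factor,
      base fee and per-byte fee are runtime- and transport-specific):
      
      ```rust
      fn main() {
          // Illustrative values only.
          let delivery_fee_factor: f64 = 1.0;
          let base_fee: u128 = 100_000_000;
          let per_byte_fee: u128 = 1_000;
          let encoded_msg_len: u128 = 200;
      
          let delivery_fee =
              (delivery_fee_factor * (base_fee + encoded_msg_len * per_byte_fee) as f64) as u128;
          // 1.0 * (100_000_000 + 200 * 1_000) = 100_200_000
          assert_eq!(delivery_fee, 100_200_000);
      }
      ```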
      
      ---------
      
      Co-authored-by: Branislav Kontur <[email protected]>
      Co-authored-by: joe petrowski <[email protected]>
      Co-authored-by: Giles Cope <[email protected]>
      Co-authored-by: command-bot <>
      Co-authored-by: Francisco Aguirre <[email protected]>
      Co-authored-by: Liam Aharon <[email protected]>
      Co-authored-by: Kian Paimani <[email protected]>
  31. Oct 12, 2023
  32. Oct 07, 2023
    • Treasury spends various asset kinds (#1333) · cb944dc5
      Muharem Ismailov authored
      
      
      ### Summary 
      
      This PR introduces new dispatchables to the treasury pallet, allowing
      spends of various asset types. The enhanced features of the treasury
      pallet, in conjunction with the asset-rate pallet, are set up and
      enabled for Westend and Rococo.
      
      ### Westend and Rococo runtimes.
      
      The Polkadot/Kusama/Rococo Treasury can accept proposals for `spends` of
      various asset kinds by specifying the asset's location and ID.
      
      #### Treasury Instance New Dispatchables:
      - `spend(AssetKind, AssetBalance, Beneficiary, Option<ValidFrom>)` -
      propose and approve a spend;
      - `payout(SpendIndex)` - pay out an approved spend or retry a failed
      payout;
      - `check_status(SpendIndex)` - check the status of a payout;
      - `void_spend(SpendIndex)` - void a previously approved spend;
      > the existing spend dispatchable has been renamed to `spend_local`
      
      In this context, the `AssetKind` parameter contains the asset's location
      and its corresponding `asset_id`; for example, for `USDT` on `AssetHub`:
      ``` rust
      location = MultiLocation(0, X1(Parachain(1000)))
      asset_id = MultiLocation(0, X2(PalletInstance(50), GeneralIndex(1984)))
      ```
      
      the `Beneficiary` parameter is a `MultiLocation` in the context of the
      asset's location, for example
      ``` rust
      // the Fellowship salary pallet's location / account
      FellowshipSalaryPallet = MultiLocation(1, X2(Parachain(1001), PalletInstance(64)))
      // or custom `AccountId`
      Alice = MultiLocation(0, AccountId32(network: None, id: [1,...]))
      ```
      
      the `AssetBalance` represents the amount of the `AssetKind` to be
      transferred to the `Beneficiary`. For permission checks, the asset
      amount is converted to the native amount and compared against the
      maximum spendable amount determined by the commanding spend origin.
      
      the `spend` dispatchable allows for batching spends with different
      `ValidFrom` arguments, enabling milestone-based spending. If the
      expectations tied to an approved spend are not met, it is possible to
      void the spend later using the `void_spend` dispatchable.
      
      Asset Rate Pallet provides the conversion rate from the `AssetKind` to
      the native balance.
      
      #### Asset Rate Instance Dispatchables:
      - `create(AssetKind, Rate)` - initialize a conversion rate to the native
      balance for the given asset
      - `update(AssetKind, Rate)` - update the conversion rate to the native
      balance for the given asset
      - `remove(AssetKind)` - remove an existing conversion rate to the native
      balance for the given asset
      
      the pallet's dispatchables can be executed by the Root or Treasurer
      origins.
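
      Putting the two pallets together, the permission check described earlier
      boils down to converting the asset amount into native units via the
      configured rate and comparing the result against the limit of the
      commanding spend origin; a rough sketch (illustrative only):
      
      ```rust
      // Illustrative only; the real conversion goes through the asset-rate pallet's
      // conversion interface and the runtime's origin checks.
      fn is_within_spend_limit(asset_amount: u128, rate_to_native: f64, origin_limit: u128) -> bool {
          let native_value = (asset_amount as f64 * rate_to_native) as u128;
          native_value <= origin_limit
      }
      ```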
      
      ### Treasury Pallet
      
      Treasury Pallet can accept proposals for `spends` of various asset kinds
      and pay them out through the implementation of the `Pay` trait.
      
      New Dispatchables:
      - `spend(Config::AssetKind, AssetBalance, Config::Beneficiary,
      Option<ValidFrom>)` - propose and approve a spend;
      - `payout(SpendIndex)` - pay out an approved spend or retry a failed
      payout;
      - `check_status(SpendIndex)` - check the status of a payout;
      - `void_spend(SpendIndex)` - void a previously approved spend;
      > the existing spend dispatchable has been renamed to `spend_local`
      
      The parameter types of the `spend` dispatchable are exposed via the
      pallet's `Config` and allow proposing and accepting a spend of a certain
      amount.
      
      An approved spend can be claimed via the `payout` within the
      `Config::SpendPeriod`. Clients provide an implementation of the `Pay`
      trait which can pay an asset of the `AssetKind` to the `Beneficiary` in
      `AssetBalance` units.
      
      The implementation of the Pay trait might not have an immediate final
      payment status, for example if implemented over `XCM` and the actual
      transfer happens on a remote chain.
      
      The `check_status` dispatchable can be executed to update the spend's
      payment state; if the payment has failed, the `payout` can be retried.
      
      ---------
      
      Co-authored-by: joe petrowski <[email protected]>
      Co-authored-by: command-bot <>
  33. Oct 01, 2023
  34. Sep 29, 2023
    • [NPoS] Fix for Reward Deficit in the pool (#1255) · f820dc0a
      Ankan authored
      closes https://github.com/paritytech/polkadot-sdk/issues/158.
      partially addresses
      https://github.com/paritytech/polkadot-sdk/issues/226.
      
      Instead of the fragile calculation of the current balance by looking at
      `free balance - ED`, the Nomination Pool now freezes the ED in the pool
      reward account to restrict the account from going below the minimum
      balance. This also has a nice side effect: if the ED changes, we know
      the imbalance between the ED frozen in the pool and the currently
      required ED. A pool operator can diligently top up the pool with the ED
      deficit or, vice versa, withdraw the excess they transferred to the pool.
      
      ## Notable changes
      - New call `adjust_pool_deposit`: allows topping up the deficit or
      withdrawing the excess funds deposited to the pool (sketched below).
      - Uses the Fungible trait (instead of the Currency trait). Since the
      nomination pools pallet was not doing any locking/reserving previously,
      no migration is needed for this.
      - One-time migration freezing the ED in each of the existing pools (not
      very PoV-friendly, but fine for the relay chain).
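
      A rough sketch of the adjustment that `adjust_pool_deposit` performs, as
      described above (illustrative only, not the pallet code):
      
      ```rust
      /// Illustrative only: how far the reward account's frozen deposit is from the
      /// currently required existential deposit (ED).
      enum DepositAdjustment {
          TopUp(u128),    // ED increased: the deficit needs to be topped up.
          Withdraw(u128), // ED decreased: the excess can be withdrawn.
          None,
      }
      
      fn required_adjustment(frozen: u128, current_ed: u128) -> DepositAdjustment {
          if current_ed > frozen {
              DepositAdjustment::TopUp(current_ed - frozen)
          } else if frozen > current_ed {
              DepositAdjustment::Withdraw(frozen - current_ed)
          } else {
              DepositAdjustment::None
          }
      }
      ```
      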
  35. Sep 27, 2023