1. Nov 22, 2023
    • Deprecate `RewardDestination::Controller` (#2380) · 7a32f4be
      Ross Bulat authored
      
      
      Deprecates `RewardDestination::Controller` variant.
      
      - [x] `RewardDestination::Controller` annotated with `#[deprecated]`.
      - [x] `Controller` variant is now handled the same way as `Stash` in
      `payout_stakers`.
      - [x] `set_payee` errors if `RewardDestination::Controller` is provided.
       - [x] Added `update_payee` call to lazily migrate
       `RewardDestination::Controller` `Payee` storage entries to
       `RewardDestination::Account(controller)` (see the sketch after this list).
      - [x] `payout_stakers_dead_controller` has been removed from benches &
      weights - was not used.
      - [x] Tests no longer use `RewardDestination::Controller`.
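
       For illustration, here is a minimal, self-contained sketch of the lazy
       migration that the `update_payee` item describes; the enum below only mirrors
       `pallet_staking`'s `RewardDestination` and is not the pallet's actual code.
       
       ```rust
       type AccountId = u64;
       
       enum RewardDestination {
           Staked,
           Stash,
           Controller, // deprecated
           Account(AccountId),
           None,
       }
       
       /// Fold the deprecated `Controller` variant into the equivalent
       /// `Account(controller)` destination; every other variant passes through.
       fn migrate_payee_entry(payee: RewardDestination, controller: AccountId) -> RewardDestination {
           match payee {
               RewardDestination::Controller => RewardDestination::Account(controller),
               other => other,
           }
       }
       ```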
      
      ---------
      
      Co-authored-by: command-bot <>
       Co-authored-by: Gonçalo Pestana <[email protected]>
       Co-authored-by: georgepisaltu <[email protected]>
  2. Nov 21, 2023
  3. Nov 20, 2023
  4. Nov 17, 2023
  5. Nov 15, 2023
    • frame-system: Add `last_runtime_upgrade_spec_version` (#2351) · ea4085ab
      Bastian Köcher authored
      
      
       Adds a function for querying the last runtime upgrade's spec version. This
       can be useful when writing runtime-level migrations to ensure that
       they are not executed multiple times. An example would be a session key
       migration.
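
       As a rough sketch (not code from this PR) of how a runtime-level migration
       could guard on the new helper: the spec version `1_005_000` below is a
       hypothetical threshold chosen only for the example, and the helper is assumed
       to be exposed as `frame_system::Pallet::<T>::last_runtime_upgrade_spec_version()`.
       
       ```rust
       use frame_support::{traits::OnRuntimeUpgrade, weights::Weight};
       
       pub struct SessionKeysMigration<T>(core::marker::PhantomData<T>);
       
       impl<T: frame_system::Config> OnRuntimeUpgrade for SessionKeysMigration<T> {
           fn on_runtime_upgrade() -> Weight {
               // Only run if the previous runtime upgrade was below the version
               // that introduced the new session keys.
               if frame_system::Pallet::<T>::last_runtime_upgrade_spec_version() < 1_005_000 {
                   // ... perform the one-off migration here ...
               }
               Weight::zero()
           }
       }
       ```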
      
      ---------
      
       Co-authored-by: Liam Aharon <[email protected]>
       Co-authored-by: Oliver Tale-Yazdi <[email protected]>
    • Identity Deposits Relay to Parachain Migration (#1814) · c79b234b
      joe petrowski authored
      The goal of this PR is to migrate Identity deposits from the Relay Chain
      to a system parachain.
      
      The problem I want to solve is that `IdentityOf` and `SubsOf` both store
      an amount that's held in reserve as a storage deposit. When migrating to
      a parachain, we can take a snapshot of the actual `IdentityInfo` and
      sub-account mappings, but should migrate (off chain) the `deposit`s to
      zero, since the chain (and by extension, accounts) won't have any funds
      at genesis.
      
       The good news is that we expect deposits on the parachain to be
       significantly lower (possibly 100x lower). That is, a deposit of 21 DOT on
       the Relay Chain would need only 0.21 DOT on a parachain. This PR proposes to
       migrate the deposits in the following way:
      
       1. Introduces a new pallet with two extrinsics (see the sketch after this
       list for the origin split):
       - `reap_identity`: Has a configurable `ReapOrigin`, which would be set
       to `EnsureSigned` on the Relay Chain (i.e. callable by anyone) and
       `EnsureRoot` on the parachain (we don't want identities reaped from
       there).
      - `poke_deposit`: Checks what deposit the pallet holds (at genesis,
      zero) and attempts to update the amount based on the calculated deposit
      for storage data.
      2. `reap_identity` clears all storage data for a `target` account and
      unreserves their deposit.
      3. A `ReapIdentityHandler` teleports the necessary DOT to the parachain
      and calls `poke_deposit`. Since the parachain deposit is much lower, and
      was just unreserved, we know we have enough.
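
       As a hedged sketch of the origin split described in step 1 (the pallet and
       associated type names below are assumptions for illustration, not the actual
       configuration added by this PR):
       
       ```rust
       // Relay Chain: anyone may trigger reaping of an already-migrated identity.
       impl pallet_identity_migrator::Config for RelayRuntime {
           type ReapOrigin = frame_system::EnsureSigned<AccountId>;
           // ...
       }
       
       // People parachain: only root may reap; identities should not be reaped there.
       impl pallet_identity_migrator::Config for PeopleRuntime {
           type ReapOrigin = frame_system::EnsureRoot<AccountId>;
           // ...
       }
       ```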
      
       One awkwardness I ran into was that the XCMv3 instruction set does not
       provide a way for the system to teleport assets without a fee being
       deducted on reception. Users shouldn't have to pay a fee for the system
       to migrate their info to a more efficient location. So I wrote my own
       program and did the `InitiateTeleport` accounting on my own to send a
       program with `UnpaidExecution`. I have discussed an
       `InitiateUnpaidTeleport` instruction with @franciscoaguirre. Obviously,
       any chain executing this would have to pass a `Barrier` for free
       execution.
      
      TODO:
      
      - [x] Confirm People Chain ParaId
       - [x] Confirm People Chain deposit rates (determined in
       https://github.com/paritytech/polkadot-sdk/pull/2281)
      - [x] Add pallet to Westend
      
      ---------
      
       Co-authored-by: Bastian Köcher <[email protected]>
  6. Nov 14, 2023
  7. Nov 13, 2023
    • Adds syntax for marking calls feeless (#1926) · 60c77a2e
      gupnik authored
      Fixes https://github.com/paritytech/polkadot-sdk/issues/1725
      
      
      
      This PR adds the following changes:
      1. An attribute `pallet::feeless_if` that can be optionally attached to
      a call like so:
      ```rust
      #[pallet::feeless_if(|_origin: &OriginFor<T>, something: &u32| -> bool {
      	*something == 0
      })]
      pub fn do_something(origin: OriginFor<T>, something: u32) -> DispatchResult {
           ....
      }
      ```
       The closure passed in accepts references to the arguments, as specified in
       the call fn. It returns a boolean denoting whether the conditions required
       for this call to be "feeless" are met.
      
      2. A signed extension `SkipCheckIfFeeless<T: SignedExtension>` that
      wraps a transaction payment processor such as
      `pallet_transaction_payment::ChargeTransactionPayment`. It checks for
      all calls annotated with `pallet::feeless_if` to see if the conditions
      are met. If so, the wrapped signed extension is not called, essentially
      making the call feeless.
      
      In order to use this, you can simply replace your existing signed
      extension that manages transaction payment like so:
      ```diff
      - pallet_transaction_payment::ChargeTransactionPayment<Runtime>,
      + pallet_skip_feeless_payment::SkipCheckIfFeeless<
      +	Runtime,
      +	pallet_transaction_payment::ChargeTransactionPayment<Runtime>,
      + >,
      ```
      
      ### Todo
      - [x] Tests
      - [x] Docs
      - [x] Prdoc
      
      ---------
      
      Co-authored-by: Nikhil Gupta <>
       Co-authored-by: Oliver Tale-Yazdi <[email protected]>
       Co-authored-by: Francisco Aguirre <[email protected]>
       Co-authored-by: Liam Aharon <[email protected]>
    • pallet-grandpa: Remove `GRANDPA_AUTHORITIES_KEY` (#2181) · ebcf0a0f
      Bastian Köcher authored
      
      
       Remove the `GRANDPA_AUTHORITIES_KEY` key and its usage. Apparently this
       was used in the early days to communicate the GRANDPA authorities to the
       node. However, we now have a runtime API that does this for us. So, this
       pull request moves from the custom managed storage item to a FRAME-managed
       storage item.
       
       This PR also includes a migration for doing the switch on a running
       chain.
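
       For orientation, the FRAME-managed side of the change looks roughly like the
       declaration below; the exact item name and value type are assumptions here
       and may differ from `pallet-grandpa`'s actual code.
       
       ```rust
       // A FRAME `#[pallet::storage]` item replaces direct reads/writes of the raw
       // `GRANDPA_AUTHORITIES_KEY` well-known key.
       #[pallet::storage]
       pub(super) type Authorities<T: Config> =
           StorageValue<_, BoundedAuthorityList<T::MaxAuthorities>, ValueQuery>;
       ```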
      
      ---------
      
       Co-authored-by: Davide Galassi <[email protected]>
  8. Nov 10, 2023
  9. Nov 09, 2023
  10. Nov 08, 2023
  11. Nov 07, 2023
    • Fix "slashaed" typo (#2205) · 44c7a5eb
      Bill Laboon authored
      # Description
      
       This merely fixes a typo in the documentation, replacing "slashaed" with
       "slashed". Since external entities use the comments for
       explanations of events, this will then be shown externally. I noticed
      this when reviewing [this
      event](https://polkadot.subscan.io/extrinsic/0xb6bc1e3abde0c2ed9c500c74cfc64cdb8179e5d9af97f4bf53242ce4cdd15a1d?event=18064194-6)
      on Subscan.
      
      This is not related to any other issues or PRs.
    • docs: fix typos (#2193) · 4caa3d8d
      vuittont60 authored
    • Initialise on-chain `StorageVersion` for pallets added after genesis (#1297) · c4211b65
      Liam Aharon authored
      Original PR https://github.com/paritytech/substrate/pull/14641
      
      ---
      
      Closes https://github.com/paritytech/polkadot-sdk/issues/109
      
      
      
      ### Problem
      Quoting from the above issue:
      
      > When adding a pallet to chain after genesis we currently don't set the
      StorageVersion. So, when calling on_chain_storage_version it returns 0
      while the pallet is maybe already at storage version 9 when it was added
      to the chain. This could lead to issues when running migrations.
      
      ### Solution
      
       - Create a new trait `BeforeAllRuntimeMigrations` with a single method,
       `fn before_all_runtime_migrations() -> Weight` (sketched below), that has a
       noop default implementation
      - Modify `Executive` to call
      `BeforeAllRuntimeMigrations::before_all_runtime_migrations` for all
      pallets before running any other hooks
      - Implement `BeforeAllRuntimeMigrations` in the pallet proc macro to
      initialize the on-chain version to the current pallet version if the
      pallet has no storage set (indicating it has been recently added to the
      runtime and needs to have its version initialised).
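
       A minimal sketch of the hook described above; the real trait lives in
       `frame_support::traits`, and this stand-alone version is for illustration only.
       
       ```rust
       use frame_support::weights::Weight;
       
       /// Runs for every pallet before any other runtime-upgrade hook.
       pub trait BeforeAllRuntimeMigrations {
           /// Defaults to a no-op that consumes no weight.
           fn before_all_runtime_migrations() -> Weight {
               Weight::zero()
           }
       }
       ```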
      
      ### Other changes in this PR
      
      - Abstracted repeated boilerplate to access the `pallet_name` in the
      pallet expand proc macro.
      
      ### FAQ
      
      #### Why create a new hook instead of adding this logic to the pallet
      `pre_upgrade`?
      
      `Executive` currently runs `COnRuntimeUpgrade` (custom migrations)
      before `AllPalletsWithSystem` migrations. We need versions to be
      initialized before the `COnRuntimeUpgrade` migrations are run, because
      `COnRuntimeUpgrade` migrations may use the on-chain version for critical
      logic. e.g. `VersionedRuntimeUpgrade` uses it to decide whether or not
      to execute.
      
      We cannot reorder `COnRuntimeUpgrade` and `AllPalletsWithSystem` so
      `AllPalletsWithSystem` runs first, because `AllPalletsWithSystem` have
      some logic in their `post_upgrade` hooks to verify that the on-chain
      version and current pallet version match. A common use case of
      `COnRuntimeUpgrade` migrations is to perform a migration which will
      result in the versions matching, so if they were reordered these
      `post_upgrade` checks would fail.
      
      #### Why init the on-chain version for pallets without a current storage
      version?
      
       We must init the on-chain version for pallets even if they don't have a
       defined storage version, so that if there is a future version bump, the
       on-chain version is not automatically set to that new version without a
       proper migration.
      
      e.g. bad scenario:
      
      1. A pallet with no 'current version' is added to the runtime
      2. Later, the pallet is upgraded with the 'current version' getting set
      to 1 and a migration is added to Executive Migrations to migrate the
      storage from 0 to 1
          a. Runtime upgrade occurs
          b. `before_all` hook initializes the on-chain version to 1
           c. `on_runtime_upgrade` of the migration executes, sees that the
       on-chain version is already 1, therefore thinks storage is already
       migrated, and does not execute the storage migration
       
       Now, the on-chain version is 1 but storage is still at version 0.
      
      By always initializing the on-chain version when the pallet is added to
      the runtime we avoid that scenario.
      
      ---------
      
       Co-authored-by: Kian Paimani <[email protected]>
       Co-authored-by: Bastian Köcher <[email protected]>
  12. Nov 06, 2023
  13. Nov 05, 2023
  14. Nov 04, 2023
  15. Nov 03, 2023
    • Identity pallet improvements (#2048) · 21fbc00d
      georgepisaltu authored
       This PR is a follow-up to #1661.
      
      - [x] rename the `simple` module to `legacy`
      - [x] fix benchmarks to disregard the number of additional fields
      - [x] change the storage deposits to charge per encoded byte of the
      identity information instance, removing the need for `fn
      additional(&self) -> usize` in `IdentityInformationProvider`
      - [x] ~add an extrinsic to rejig deposits to account for the change
      above~
      - [ ] ~ensure through proper configuration that the new byte-based
      deposit is always lower than whatever is reserved now~
      - [x] remove `IdentityFields` from the `set_fields` extrinsic signature,
      as per [this
      discussion](https://github.com/paritytech/polkadot-sdk/pull/1661#discussion_r1371703403)
      
      > ensure through proper configuration that the new byte-based deposit is
      always lower than whatever is reserved now
      
       Not sure this is needed anymore. If the new deposits are higher than
       what is currently on chain and users don't have enough funds to reserve
       what is needed, the extrinsic fails and they're basically grandfathered
       and frozen until they add more funds and/or make a change to their
       identity. This behavior seems fine to me. Original idea
       [here](https://github.com/paritytech/polkadot-sdk/pull/1661#issuecomment-1779606319).
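
       To make the byte-based deposit concrete, here is a hedged arithmetic sketch;
       the function and parameter names are illustrative, not the pallet's actual
       API.
       
       ```rust
       use parity_scale_codec::Encode;
       
       /// The deposit scales with the encoded size of the identity information,
       /// rather than with a count of additional fields.
       fn identity_deposit(basic_deposit: u128, byte_deposit: u128, info: &impl Encode) -> u128 {
           basic_deposit + byte_deposit * info.encoded_size() as u128
       }
       ```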
      
      > add an extrinsic to rejig deposits to account for the change above
      
       This was initially implemented but now removed from this PR in favor of
       the implementation detailed
       [here](https://github.com/paritytech/polkadot-sdk/pull/2088).
      
      ---------
      
       Signed-off-by: georgepisaltu <[email protected]>
       Co-authored-by: joepetrowski <[email protected]>
  16. Nov 02, 2023
    • Create new trait for non-dedup storage decode (#1932) · 15a34838
      Richard Melkonian authored
       - This adds the new trait `StorageDecodeNonDedupLength` and implements
       it for `BTreeSet` and its bounded types.
       - A new unit test has been added to cover the case.
       - See the linked
       [issue](https://github.com/paritytech/polkadot-sdk/issues/126), which
       outlines the original issue.
      
      Note that the added trait here doesn't add new logic but improves
      semantics.
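
       As a small, self-contained illustration of the dedup vs non-dedup length
       distinction the trait is named for (this is plain SCALE behaviour, not the
       new trait itself):
       
       ```rust
       use parity_scale_codec::{Compact, Decode, Encode};
       use std::collections::BTreeSet;
       
       fn main() {
           // The raw encoding of two appended, identical items carries a length prefix of 2...
           let raw: Vec<u8> = vec![7u32, 7u32].encode();
           let encoded_len = Compact::<u32>::decode(&mut &raw[..]).unwrap().0;
           assert_eq!(encoded_len, 2);
       
           // ...while the deduplicated `BTreeSet` semantically holds only 1 element.
           let set: BTreeSet<u32> = [7u32, 7u32].into_iter().collect();
           assert_eq!(set.len(), 1);
       }
       ```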
      
      ---------
      
       Co-authored-by: joe petrowski <[email protected]>
       Co-authored-by: Kian Paimani <[email protected]>
       Co-authored-by: Oliver Tale-Yazdi <[email protected]>
       Co-authored-by: command-bot <>
    • Use `Message Queue` as DMP and XCMP dispatch queue (#1246) · e1c033eb
      Oliver Tale-Yazdi authored
      (imported from https://github.com/paritytech/cumulus/pull/2157)
      
      ## Changes
      
       This MR refactors the XCMP, Parachains System, and DMP pallets to use
       the [MessageQueue](https://github.com/paritytech/substrate/pull/12485)
       for delayed execution of incoming messages. The DMP pallet is entirely
       replaced by the MQ and thereby removed. This allows for PoV-bounded
       execution and resolves a number of issues that stem from the current
       work-around.
      
      All System Parachains adopt this change.  
      The most important changes are in `primitives/core/src/lib.rs`,
      `parachains/common/src/process_xcm_message.rs`,
      `pallets/parachain-system/src/lib.rs`, `pallets/xcmp-queue/src/lib.rs`
      and the runtime configs.
      
      ### DMP Queue Pallet
      
      The pallet got removed and its logic refactored into parachain-system.
      Overweight message management can be done directly through the MQ
      pallet.
      
      Final undeployment migrations are provided by
      `cumulus_pallet_dmp_queue::UndeployDmpQueue` and `DeleteDmpQueue` that
      can be configured with an aux config trait like:
      
      ```rust
      parameter_types! {
       	pub const DmpQueuePalletName: &'static str = "DmpQueue"; // <-- CHANGE ME to your deployed pallet name
      	pub const RelayOrigin: AggregateMessageOrigin = AggregateMessageOrigin::Parent;
      }
      
      impl cumulus_pallet_dmp_queue::MigrationConfig for Runtime {
      	type PalletName = DmpQueuePalletName;
      	type DmpHandler = frame_support::traits::EnqueueWithOrigin<MessageQueue, RelayOrigin>;
      	type DbWeight = <Runtime as frame_system::Config>::DbWeight;
      }
      
      // And adding them to your Migrations tuple:
      pub type Migrations = (
      	...
      	cumulus_pallet_dmp_queue::UndeployDmpQueue<Runtime>,
      	cumulus_pallet_dmp_queue::DeleteDmpQueue<Runtime>,
      );
      ```
      
      ### XCMP Queue pallet
      
       Removed all dispatch queue functionality. Incoming XCMP messages are now
       either immediately handled if they are Signals, or otherwise enqueued into
       the MQ pallet.
      
      New config items for the XCMP queue pallet:
      ```rust
      /// The actual queue implementation that retains the messages for later processing.
      type XcmpQueue: EnqueueMessage<ParaId>;
      
       /// How an XCM over HRMP from a sibling parachain should be processed.
      type XcmpProcessor: ProcessMessage<Origin = ParaId>;
      
      /// The maximal number of suspended XCMP channels at the same time.
      #[pallet::constant]
      type MaxInboundSuspended: Get<u32>;
      ```
      
      How to configure those:
      
      ```rust
       // Use the MessageQueue pallet to store messages for later processing. The `TransformOrigin` is needed since
       // the MQ pallet itself operates on `AggregateMessageOrigin` but we want to enqueue `ParaId`s.
      type XcmpQueue = TransformOrigin<MessageQueue, AggregateMessageOrigin, ParaId, ParaIdToSibling>;
      
      // Process XCMP messages from siblings. This is type-safe to only accept `ParaId`s. They will be dispatched
      // with origin `Junction::Sibling(…)`.
      type XcmpProcessor = ProcessFromSibling<
      	ProcessXcmMessage<
      		AggregateMessageOrigin,
      		xcm_executor::XcmExecutor<xcm_config::XcmConfig>,
      		RuntimeCall,
      	>,
      >;
      
      // Not really important what to choose here. Just something larger than the maximal number of channels.
      type MaxInboundSuspended = sp_core::ConstU32<1_000>;
      ```
      
       The `InboundXcmpStatus` storage item was replaced by
       `InboundXcmpSuspended`, since it now only tracks inbound queue suspension
       and no longer tracks message indices.
       
       The pallet now only sends the most recent channel `Signals`, as all prior
       ones are outdated anyway.
      
      ### Parachain System pallet
      
       For `DMP` messages, instead of forwarding them to the `DMP` pallet, the
       pallet now pushes them to the configured `DmpQueue`. The message processing
       that was triggered in `set_validation_data` is now done by the MQ
       pallet in `on_initialize`.
      
      XCMP messages are still handed off to the `XcmpMessageHandler`
      (XCMP-Queue pallet) - no change here.
      
      New config items for the parachain system pallet:
      ```rust
      /// Queues inbound downward messages for delayed processing. 
      ///
      /// Analogous to the `XcmpQueue` of the XCMP queue pallet.
      type DmpQueue: EnqueueMessage<AggregateMessageOrigin>;
      ``` 
      
      How to configure:
      ```rust
      /// Use the MQ pallet to store DMP messages for delayed processing.
      type DmpQueue = MessageQueue;
      ``` 
      
      ## Message Flow
      
      The flow of messages on the parachain side. Messages come in from the
      left via the `Validation Data` and finally end up at the `Xcm Executor`
      on the right.
      
      ![Untitled
      (1)](https://github.com/paritytech/cumulus/assets/10380170/6cf8b377-88c9-4aed-96df-baace266e04d)
      
      ## Further changes
      
      - Bumped the default suspension, drop and resume thresholds in
      `QueueConfigData::default()`.
      - `XcmpQueue::{suspend_xcm_execution, resume_xcm_execution}` errors when
      they would be a noop.
      - Properly validate the `QueueConfigData` before setting it.
       - Marked weight files as auto-generated so they won't auto-expand in the
       MR files view.
       - Moved the `hypothetical` asserts to `frame_support` under the name
       `experimental_hypothetically`.
      
      Questions:
       - [ ] What about the ugly `#[cfg(feature = "runtime-benchmarks")]` in
       the runtimes? Not sure how to best fix. Just having them like this makes
       tests that rely on the real message processor fail when the feature is
       enabled.
       - [ ] Need a good weight for `MessageQueueServiceWeight`. The scheduler
       already takes 80%, so I set it to 10%, but that is quite low.
      
      TODO:
      - [x] Remove c&p code after
      https://github.com/paritytech/polkadot/pull/6271
      - [x] Use `HandleMessage` once it is public in Substrate
      - [x] fix `runtime-benchmarks` feature
      https://github.com/paritytech/polkadot/pull/6966
      
      
      - [x] Benchmarks
      - [x] Tests
      - [ ] Migrate `InboundXcmpStatus` to `InboundXcmpSuspended`
      - [x] Possibly cleanup Migrations (DMP+XCMP)
      - [x] optional: create `TransformProcessMessageOrigin` in Substrate and
      replace `ProcessFromSibling`
      - [ ] Rerun weights on ref HW
      
      ---------
      
       Signed-off-by: Oliver Tale-Yazdi <[email protected]>
       Co-authored-by: Liam Aharon <[email protected]>
       Co-authored-by: joe petrowski <[email protected]>
       Co-authored-by: Kian Paimani <[email protected]>
       Co-authored-by: command-bot <>
    • Make `ExecResult` encodable (#1809) · 10857d0b
      Piotr Mikołajczyk authored
      # Description
       We derive a few useful traits on `ErrorOrigin` and `ExecError`, including
      `codec::Encode` and `codec::Decode`, so that `ExecResult` is
      en/decodable as well. This is required for a contract mocking feature
      (already prepared in drink:
      https://github.com/Cardinal-Cryptography/drink/pull/61). In more detail:
      `ExecResult` must be passed from runtime extension, through runtime
      interface, back to the pallet, which requires that it is serializable to
      bytes in some form (or implements some rare, auxiliary traits).
      
       **Impact on runtime size**: Since most of these traits are not used directly
       in the pallet itself, the compiler should be able to throw them out (and thus
       we bring no new overhead). However, they are very useful in secondary tools
       like drink or other testing libraries.
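
       As a self-contained sketch of what the new derives enable (the types below
       are stand-ins, not `pallet_contracts`' actual definitions): an
       `ExecResult`-like value can cross a byte boundary such as a runtime interface
       by SCALE-encoding it on one side and decoding it on the other.
       
       ```rust
       use parity_scale_codec::{Decode, Encode};
       
       #[derive(Encode, Decode, Debug, PartialEq)]
       struct ExecReturnValue {
           flags: u32,
           data: Vec<u8>,
       }
       
       #[derive(Encode, Decode, Debug, PartialEq)]
       enum ExecError {
           Caller,
           Callee,
       }
       
       type ExecResult = Result<ExecReturnValue, ExecError>;
       
       fn main() {
           let ok: ExecResult = Ok(ExecReturnValue { flags: 0, data: vec![1, 2, 3] });
           let bytes = ok.encode();
           let back = ExecResult::decode(&mut &bytes[..]).unwrap();
           assert_eq!(back, ok);
       }
       ```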
      
      # Checklist
      
      - [x] My PR includes a detailed description as outlined in the
      "Description" section above
       - [ ] My PR follows the [labeling requirements](CONTRIBUTING.md#Process)
       of this project (at minimum one label for `T` required)
      - [x] I have made corresponding changes to the documentation (if
      applicable)
      - [x] I have added tests that prove my fix is effective or that my
      feature works (if applicable)
    • Branislav Kontur
  17. Nov 01, 2023
    • [FRAME] Short-circuit fungible self transfer (#2118) · c66ae375
      Oliver Tale-Yazdi authored
      
      
       Changes:
       - Change the fungible(s) logic to treat a self-transfer as a no-op (as
       long as all pre-checks pass).
       
       Note that the self-transfer case will not emit an event, since no state
       was changed.
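
       A toy, self-contained sketch of the short-circuit (not the actual `fungible`
       implementation, just the behaviour described above):
       
       ```rust
       use std::collections::BTreeMap;
       
       type AccountId = u8;
       type Balance = u128;
       
       fn transfer(
           ledger: &mut BTreeMap<AccountId, Balance>,
           from: AccountId,
           to: AccountId,
           amount: Balance,
       ) -> Result<(), &'static str> {
           // Pre-checks still run, so an over-draw fails even for a self-transfer.
           if ledger.get(&from).copied().unwrap_or(0) < amount {
               return Err("insufficient balance");
           }
           // Self-transfer: no state change (and therefore no event would be emitted).
           if from == to {
               return Ok(());
           }
           *ledger.entry(from).or_insert(0) -= amount;
           *ledger.entry(to).or_insert(0) += amount;
           Ok(())
       }
       ```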
      
      ---------
      
       Signed-off-by: Oliver Tale-Yazdi <[email protected]>
    • Improve FRAME storage docs (#1714) · b6965af4
      Kevin Krone authored
      This is a port (and hopefully a small improvement) of @Kianenigma's PR
      from the old Substrate repo:
      https://github.com/paritytech/substrate/pull/13987. Following #1689 I
      moved the documentation of all macros relevant to this PR from
      `frame_support_procedural` to `pallet_macros` while including a hint for
      RA users.
      
      Question: Again with respect to #1689: Is there a good reason why we
      should *not* enhance paths with links to our current rustdocs? For
      example, instead of
      ```rust
      /// **Rust-Analyzer users**: See the documentation of the Rust item in
      /// `frame_support::pallet_macros::storage`.
      ```
      we could write
      ```rust
      /// **Rust-Analyzer users**: See the documentation of the Rust item in
      /// [`frame_support::pallet_macros::storage`](https://paritytech.github.io/polkadot-sdk/master/frame_support/pallet_macros/attr.storage.html).
      ```
      This results in a clickable link like this:
      <img width="674" alt="image"
      src="https://github.com/paritytech/polkadot-sdk/assets/10713977/c129e622-3942-4eeb-8acf-93ee4efdc99d
      
      ">
      I don't really expect the links to become outdated any time soon, but I
      think this would be a great UX improvement over just having paths.
      
      TODOs:
      - [ ] Add documentation for `constant_name` macro
      - [x] Add proper documentation for different `QueryKinds`, i.e.
      `OptionQuery`, `ValueQuery`, `ResultQuery`. One example for each. Custom
      `OnEmpty` should be moved to `QueryKinds` trait doc page.
      - [ ] Rework `type_value` docs
      
      ---------
      
       Co-authored-by: kianenigma <[email protected]>
    • [NPoS] Paging reward payouts in order to scale rewardable nominators (#1189) · 00b85c51
      Ankan authored
      helps https://github.com/paritytech/polkadot-sdk/issues/439.
      closes https://github.com/paritytech/polkadot-sdk/issues/473.
      
      PR link in the older substrate repository:
      https://github.com/paritytech/substrate/pull/13498.
      
      # Context
      Rewards payout is processed today in a single block and limited to
      `MaxNominatorRewardedPerValidator`. This number is currently 512 on both
      Kusama and Polkadot.
      
      This PR tries to scale the nominators payout to an unlimited count in a
      multi-block fashion. Exposures are stored in pages, with each page
      capped to a certain number (`MaxExposurePageSize`). Starting out, this
      number would be the same as `MaxNominatorRewardedPerValidator`, but
      eventually, this number can be lowered through new runtime upgrades to
       limit the rewardable nominators per dispatched call instruction.
      
      The changes in the PR are backward compatible.
      
      ## How payouts would work like after this change
      Staking exposes two calls, 1) the existing `payout_stakers` and 2)
      `payout_stakers_by_page`.
      
      ### payout_stakers
      This remains backward compatible with no signature change. If for a
      given era a validator has multiple pages, they can call `payout_stakers`
      multiple times. The pages are executed in an ascending sequence and the
      runtime takes care of preventing double claims.
      
      ### payout_stakers_by_page
       Very similar to `payout_stakers`, but it also accepts an extra param
       `page_index`. An account can choose to pay out rewards only for an
       explicitly passed `page_index`.
      
       **Let's look at an example scenario**
       Given an active validator on Kusama with 1100 nominators and
       `MaxExposurePageSize` set to 512 for era e, the caller would need to call
       `payout_stakers` 3 times in order to pay out rewards to all nominators.
      
      - `payout_stakers(origin, stash, e)` => will pay the first 512
      nominators.
      - `payout_stakers(origin, stash, e)` => will pay the second set of 512
      nominators.
      - `payout_stakers(origin, stash, e)` => will pay the last set of 76
      nominators.
      ...
      - `payout_stakers(origin, stash, e)` => calling it the 4th time would
      return an error `InvalidPage`.
      
      The above calls can also be replaced by `payout_stakers_by_page` and
      passing a `page_index` explicitly.
      
       ## Commission note
       Validator commission is paid out in chunks across all the pages, where
       each commission chunk is proportional to the total stake of the current
       page. This implies that the higher the total stake of a page, the higher
       the commission will be. If all the pages of a validator's single era are
       paid out, the sum of commission paid to the validator across all pages
       should be equal to what the commission would have been with a non-paged
       exposure.
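
       A hedged arithmetic sketch of the per-page commission split described above
       (not the pallet's code, just the stated proportionality):
       
       ```rust
       /// Commission attributed to one exposure page, proportional to its stake.
       fn page_commission(total_commission: u128, page_stake: u128, total_stake: u128) -> u128 {
           total_commission * page_stake / total_stake
       }
       // Since the page stakes sum to the total stake, summing `page_commission`
       // over all pages recovers the full commission (up to integer rounding).
       ```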
      
      ### Migration Note
      Strictly speaking, we did not need to bump our storage version since
      there is no migration of storage in this PR. But it is still useful to
      mark a storage upgrade for the following reasons:
      
      - New storage items are introduced in this PR while some older storage
      items are deprecated.
      - For the next `HistoryDepth` eras, the exposure would be incrementally
      migrated to its corresponding paged storage item.
      - Runtimes using staking pallet would strictly need to wait at least
      `HistoryDepth` eras with current upgraded version (14) for the migration
      to complete. At some era `E` such that `E >
      era_at_which_V14_gets_into_effect + HistoryDepth`, we will upgrade to
      version X which will remove the deprecated storage items.
      In other words, it is a strict requirement that E<sub>x</sub> -
      E<sub>14</sub> > `HistoryDepth`, where
      E<sub>x</sub> = Era at which deprecated storages are removed from
      runtime,
      E<sub>14</sub> = Era at which runtime is upgraded to version 14.
      - For Polkadot and Kusama, there is a [tracker
      ticket](https://github.com/paritytech/polkadot-sdk/issues/433) to clean
      up the deprecated storage items.
      
      ### Storage Changes
      
      #### Added
      - ErasStakersOverview
      - ClaimedRewards
      - ErasStakersPaged
      
      #### Deprecated
      The following can be cleaned up after 84 eras which is tracked
      [here](https://github.com/paritytech/polkadot-sdk/issues/433).
      
      - ErasStakers.
      - ErasStakersClipped.
      - StakingLedger.claimed_rewards, renamed to
      StakingLedger.legacy_claimed_rewards.
      
      ### Config Changes
      - Renamed MaxNominatorRewardedPerValidator to MaxExposurePageSize.
      
      ### TODO
      - [x] Tracker ticket for cleaning up the old code after 84 eras.
      - [x] Add companion.
      - [x] Redo benchmarks before merge.
      - [x] Add Changelog for pallet_staking.
      - [x] Pallet should be configurable to enable/disable paged rewards.
      - [x] Commission payouts are distributed across pages.
      - [x] Review documentation thoroughly.
      - [x] Rename `MaxNominatorRewardedPerValidator` ->
      `MaxExposurePageSize`.
      - [x] NMap for `ErasStakersPaged`.
      - [x] Deprecate ErasStakers.
      - [x] Integrity tests.
      
      ### Followup issues
       [Runtime API for deprecated ErasStakers storage
       item](https://github.com/paritytech/polkadot-sdk/issues/426)
      
      ---------
      
       Co-authored-by: Javier Viola <[email protected]>
       Co-authored-by: Ross Bulat <[email protected]>
       Co-authored-by: command-bot <>
  18. Oct 31, 2023