  1. Dec 20, 2023
    • Fix clippy lints behind feature gates and add new CI step for all features (#2569) · d68868f6
      Dónal Murray authored
      
      
      Many clippy lints usually enforced by `-Dcomplexity` and `-Dcorrectness`
      are not caught by CI as they are gated by `features`, like
      `runtime-benchmarks`, while the clippy CI job runs with only the default
      features for all targets.
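      
      As a minimal, hypothetical illustration (the function, feature usage and
      lint shown are made up for this example), a lint hidden behind a feature
      gate is invisible to a default-features clippy run:
      
      ```rust
      // Hypothetical example: this item only exists when `runtime-benchmarks`
      // is enabled, so `cargo clippy` with default features never analyses it.
      #[cfg(feature = "runtime-benchmarks")]
      fn copy_accounts(accounts: &[u64]) -> Vec<u64> {
          let mut out = Vec::new();
          // A clippy lint (e.g. `clippy::needless_range_loop`) fires here, but
          // only under `cargo clippy --all-features`.
          for i in 0..accounts.len() {
              out.push(accounts[i]);
          }
          out
      }
      ```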
      
      This PR also adds a CI step to run clippy with `--all-features` to
      ensure the code quality is maintained behind feature gates from now on.
      
      To improve local development, clippy lints are downgraded to warnings,
      but they will still result in an error in CI due to the `-Dwarnings`
      rustflag.
      
      ---------
      
      Co-authored-by: Liam Aharon <[email protected]>
  2. Dec 18, 2023
  3. Dec 14, 2023
  4. Dec 13, 2023
    • Approve multiple candidates with a single signature (#1191) · a84dd0db
      Alexandru Gheorghe authored
      Initial implementation for the plan discussed here: https://github.com/paritytech/polkadot-sdk/issues/701
      Built on top of https://github.com/paritytech/polkadot-sdk/pull/1178
      v0: https://github.com/paritytech/polkadot/pull/7554,
      
      ## Overall idea
      
      When approval-voting checks a candidate and is ready to advertise the
      approval, we defer it in a per-relay-chain-block queue until we either
      have MAX_APPROVAL_COALESCE_COUNT candidates to sign or a candidate has
      stayed MAX_APPROVALS_COALESCE_TICKS in the queue; in both cases we sign
      whatever candidates we have available.
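      
      A rough sketch of that decision logic, under assumed names and example
      values (illustrative only, not the actual approval-voting code):
      
      ```rust
      // Illustrative only: names and constants are assumptions, not the real code.
      type Tick = u64;
      type CandidateHash = [u8; 32];
      
      const MAX_APPROVAL_COALESCE_COUNT: usize = 4; // example value
      const MAX_APPROVALS_COALESCE_TICKS: Tick = 12; // example value
      
      struct PendingApprovals {
          candidates: Vec<CandidateHash>,
          oldest_tick: Option<Tick>,
      }
      
      impl PendingApprovals {
          /// Should the queued approvals for this relay-chain block be signed now?
          fn should_sign_now(&self, now: Tick, no_show_deadline: Tick) -> bool {
              let enough_candidates = self.candidates.len() >= MAX_APPROVAL_COALESCE_COUNT;
              let waited_long_enough = self
                  .oldest_tick
                  .map_or(false, |t| now.saturating_sub(t) >= MAX_APPROVALS_COALESCE_TICKS);
              // Never put finality at risk: sign once we are past 2/3 of the
              // no-show window (see the trade-offs listed below).
              let close_to_no_show = 3 * now >= 2 * no_show_deadline;
              enough_candidates || waited_long_enough || close_to_no_show
          }
      }
      ```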
      
      This should allow us to reduce the number of approval messages we have
      to create/send/verify. The parameters are configurable, so we should
      find values that balance:
      
      - Security of the network: Delaying the broadcast of an approval
      shouldn't put finality at risk, and to make sure that never happens we
      won't delay sending a vote if we are past 2/3 of the no-show time.
      - Scalability of the network: MAX_APPROVAL_COALESCE_COUNT = 1 &
      MAX_APPROVALS_COALESCE_TICKS = 0 is what we have now, and we know from
      the measurements we did on versi that it bottlenecks
      approval-distribution/approval-voting when the number of validators and
      parachains increases significantly.
      - Block storage: In case of disputes we have to import these votes on
      chain, and that increases the necessary storage by
      MAX_APPROVAL_COALESCE_COUNT * CandidateHash per vote. Given that
      disputes are not the normal mode of operation for the network and we will
      limit MAX_APPROVAL_COALESCE_COUNT to single-digit numbers, this
      should be good enough. Alternatively, we could try to create a better
      way to store this on-chain through indirection, if that's needed.
      
      ## Other fixes:
      - Fixed the fact that we were sending random assignments to
      non-validators; that was wrong because they won't do anything with them
      and won't gossip them either, since they do not have a grid topology
      set, so the random assignments were wasted.
      - Added metrics to be able to debug potential no-shows and
      mis-processing of approvals/assignments.
      
      ## TODO:
      - [x] Get feedback that this is moving in the right direction. @ordian
      @sandreim @eskimor
      
       @burdges, let me know what you think.
      - [x] More and more testing.
      - [x]  Test in versi.
      - [x] Make MAX_APPROVAL_COALESCE_COUNT &
      MAX_APPROVAL_COALESCE_WAIT_MILLIS a parachain host configuration.
      - [x] Make sure the backwards compatibility works correctly
      - [x] Make sure this direction is compatible with other streams of work:
      https://github.com/paritytech/polkadot-sdk/issues/635 &
      https://github.com/paritytech/polkadot-sdk/issues/742
      - [x] Final versi burn-in before merging
      
      ---------
      
      Signed-off-by: Alexandru Gheorghe <[email protected]>
  5. Dec 07, 2023
    • [FRAME] Make MQ pallet re-entrancy safe (#2356) · 7e7fe990
      Oliver Tale-Yazdi authored
      
      
      Closes https://github.com/paritytech/polkadot-sdk/issues/2319
      
      Changes:
      - Ensure that only `enqueue_message(s)` is callable from within the
      message processor. This prevents the storage corruption that can
      currently happen when the pallet is called into recursively.
      - Use `H256` instead of `[u8; 32]` for clearer API.
      
      ## Details
      
      The re-entrancy check is done with the `environmental` crate by adding a
      `with_service_mutex(f)` function that runs the closure exclusively. This
      works since the MQ pallet is not instantiable.
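      
      A minimal sketch of that pattern with the `environmental` crate; the
      pallet's actual `with_service_mutex` may differ in detail:
      
      ```rust
      // Sketch only: a global re-entrancy guard (fine because the pallet is
      // not instantiable). `environmental!` gives a stack-scoped flag.
      environmental::environmental!(inside_service: bool);
      
      pub fn with_service_mutex<F: FnOnce() -> R, R>(f: F) -> Result<R, ()> {
          // `with` only yields `Some` while an outer `using` call is active,
          // i.e. when we are being re-entered from inside the message processor.
          if inside_service::with(|_| ()).is_some() {
              return Err(()); // refuse to run re-entrantly
          }
          let mut guard = true;
          Ok(inside_service::using(&mut guard, f))
      }
      ```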
      
      ---------
      
      Signed-off-by: Oliver Tale-Yazdi <[email protected]>
      Co-authored-by: Francisco Aguirre <[email protected]>
  6. Nov 28, 2023
  7. Nov 20, 2023
  8. Nov 14, 2023
    • add NodeFeatures field to HostConfiguration and runtime API (#2177) · fc12f435
      Alin Dima authored
      Adds a `NodeFeatures` bitfield value to the runtime `HostConfiguration`,
      with the purpose of coordinating the enabling of node-side features,
      such as: https://github.com/paritytech/polkadot-sdk/issues/628 and
      https://github.com/paritytech/polkadot-sdk/issues/598.
      These are features that require all validators to enable them at the same
      time, assuming all/most nodes have upgraded their node versions.
      
      This PR doesn't add any feature yet. These are coming in future PRs.
      
      Also adds a runtime API for querying the state of the client features
      and an extrinsic for setting/unsetting a feature by its index in the bitfield.
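      
      As a hedged sketch of the idea (the type and helper below are stand-ins,
      not the actual primitives), such a bitfield can be modelled with `bitvec`
      and toggled by index:
      
      ```rust
      // Illustrative stand-in for the runtime's `NodeFeatures` bitfield.
      use bitvec::order::Msb0;
      use bitvec::vec::BitVec;
      
      pub type NodeFeatures = BitVec<u8, Msb0>;
      
      /// Set or unset the feature at `index`, growing the bitfield if needed
      /// (roughly what the new extrinsic is described to do).
      pub fn set_node_feature(features: &mut NodeFeatures, index: usize, value: bool) {
          if features.len() <= index {
              features.resize(index + 1, false);
          }
          features.set(index, value);
      }
      ```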
      
      Note: originally part of:
      https://github.com/paritytech/polkadot-sdk/pull/1644, but posted as
      standalone to be reused by other PRs until the initial PR is merged
  9. Nov 13, 2023
    • pallet-xcm: enhance `reserve_transfer_assets` to support remote reserves (#1672) · 18257373
      Adrian Catangiu authored
      
      
      ## Motivation
      
      `pallet-xcm` is the main user-facing interface for XCM functionality,
      including asset manipulation functions like the `teleport_assets()` and
      `reserve_transfer_assets()` calls.
      
      While `teleport_assets()` works both ways, `reserve_transfer_assets()`
      works only for sending reserve-based assets to a remote destination and
      beneficiary when the reserve is the _local chain_.
      
      ## Solution
      
      This PR enhances `pallet_xcm::(limited_)reserve_transfer_assets` to
      support transfers when the reserves are other chains.
      This will allow complete, **bi-directional** reserve-based asset
      transfer user stories using `pallet-xcm`.
      
      Enables following scenarios:
      - transferring assets with local reserve (was previously supported iff
      asset used as fee also had local reserve - now it works in all cases),
      - transferring assets with reserve on destination,
      - transferring assets with reserve on remote/third-party chain (iff
      assets and fees have same remote reserve),
      - transferring assets whose reserve is different from the reserve of the
      asset used as fees - meaning it can be used to transfer an arbitrary asset
      with a local/dest reserve while using DOT for fees on all involved chains,
      even if DOT's local/dest reserve doesn't match the asset's reserve,
      - transferring assets with any type of local/dest reserve while using
      fees which can be teleported between involved chains.
      
      All of the above is done by the pallet's inner logic without the user
      having to specify which scenario/reserves/teleports/etc. apply. The
      correct scenario is identified, and the corresponding XCM programs are
      built, automatically based on the runtime configuration of trusted
      teleporters and trusted reserves.
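      
      A simplified sketch of that decision; the types and helpers below are
      stand-ins for illustration, not the pallet's API:
      
      ```rust
      // Illustrative only: simplified stand-ins for XCM locations and trust config.
      #[derive(Clone, Debug, PartialEq)]
      enum Location { Here, Parachain(u32) }
      
      #[derive(Debug, PartialEq)]
      enum TransferKind {
          LocalReserve,            // reserve is the local chain
          DestinationReserve,      // reserve is the destination chain
          RemoteReserve(Location), // reserve is a third, remote chain
          Teleport,                // asset is trusted for teleport between the chains
      }
      
      struct TrustConfig;
      impl TrustConfig {
          fn is_teleport_trusted(&self, _asset: &str, _dest: &Location) -> bool { false }
          fn reserve_of(&self, asset: &str) -> Location {
              if asset == "DOT" { Location::Here } else { Location::Parachain(1000) }
          }
      }
      
      /// Pick the transfer scenario for one asset based only on configured trust.
      fn classify(cfg: &TrustConfig, asset: &str, dest: &Location) -> TransferKind {
          if cfg.is_teleport_trusted(asset, dest) {
              TransferKind::Teleport
          } else {
              match cfg.reserve_of(asset) {
                  Location::Here => TransferKind::LocalReserve,
                  r if r == *dest => TransferKind::DestinationReserve,
                  r => TransferKind::RemoteReserve(r),
              }
          }
      }
      ```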
      
      #### Current limitations:
      - while `fees` and "non-fee" `assets` CAN have different reserves (or
      fees CAN be teleported), the remaining "non-fee" `assets` CANNOT, among
      themselves, have different reserve locations (this is also implicitly
      enforced by `MAX_ASSETS_FOR_TRANSFER=2`, but this can be safely
      increased in the future).
      - `fees` and "non-fee" `assets` CANNOT have **different remote**
      reserves (this could also be supported in the future, but adds even more
      complexity while possibly not being worth it - we'll see what the future
      holds).
      
      Fixes https://github.com/paritytech/polkadot-sdk/issues/1584
      Fixes https://github.com/paritytech/polkadot-sdk/issues/2055
      
      ---------
      
      Co-authored-by: Francisco Aguirre <[email protected]>
      Co-authored-by: Branislav Kontur <[email protected]>
  10. Nov 05, 2023
    • `chain-spec`: getting ready for native-runtime-free world (#1256) · 8ba7a6ab
      Michal Kucharczyk authored
      
      
      This PR prepares chain specs for a _native-runtime-free_ world.
      
      This PR has the following changes:
      - `substrate`:
        - adds support for:
          - a JSON-based `GenesisConfig` in `ChainSpec`, allowing interaction with the
            runtime `GenesisBuilder` API,
          - interacting with an arbitrary runtime wasm blob in the
            [`chain-spec-builder`](https://github.com/paritytech/substrate/blob/3ef576eaeb3f42610e85daecc464961cf1295570/bin/utils/chain-spec-builder/src/lib.rs#L46)
            command line util,
        - removes
          [`code`](https://github.com/paritytech/substrate/blob/3ef576eaeb3f42610e85daecc464961cf1295570/frame/system/src/lib.rs#L660)
          from `system_pallet`,
        - adds `code` to the `ChainSpec`,
        - deprecates
          [`ChainSpec::from_genesis`](https://github.com/paritytech/substrate/blob/3ef576eaeb3f42610e85daecc464961cf1295570/client/chain-spec/src/chain_spec.rs#L263),
          but also changes the signature of this method, extending it with a `code`
          argument.
          [`ChainSpec::builder()`](https://github.com/paritytech/substrate/blob/20bee680ed098be7239cf7a6b804cd4de267983e/client/chain-spec/src/chain_spec.rs#L507)
          should be used instead (see the usage sketch after this list).
      - `polkadot`:
        - all references to `RuntimeGenesisConfig` in `node/service` are removed,
        - all `(kusama|polkadot|versi|rococo|wococo)_(staging|dev)_genesis_config`
          functions now return the JSON patch for the default runtime `GenesisConfig`,
        - `ChainSpecBuilder` is used, `ChainSpec::from_genesis` is removed,
      
      - `cumulus`:
        - `ChainSpecBuilder` is used, `ChainSpec::from_genesis` is removed,
        - _JSON_ patch configuration is used instead of the `RuntimeGenesisConfig`
          struct in all chain specs.
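      
      A hedged usage sketch of the builder flow described above; the exact
      `sc-chain-spec` generics and method signatures are assumptions based on
      this description:
      
      ```rust
      // Sketch only: build a chain spec from a runtime wasm blob plus a JSON
      // genesis patch, instead of the deprecated `ChainSpec::from_genesis`.
      use sc_chain_spec::{ChainType, GenericChainSpec};
      
      fn development_chain_spec(wasm_binary: &[u8]) -> GenericChainSpec {
          GenericChainSpec::builder(wasm_binary, None)
              .with_name("Development")
              .with_id("dev")
              .with_chain_type(ChainType::Development)
              // JSON patch applied over the runtime's default `GenesisConfig`,
              // served through the `GenesisBuilder` runtime API.
              .with_genesis_config_patch(serde_json::json!({
                  "balances": { "balances": [] }
              }))
              .build()
      }
      ```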
        
      ---------
      
      Co-authored-by: command-bot <>
      Co-authored-by: Javier Viola <[email protected]>
      Co-authored-by: Davide Galassi <[email protected]>
      Co-authored-by: Francisco Aguirre <[email protected]>
      Co-authored-by: Kevin Krone <[email protected]>
      Co-authored-by: Bastian Köcher <[email protected]>
  11. Nov 02, 2023
    • Use `Message Queue` as DMP and XCMP dispatch queue (#1246) · e1c033eb
      Oliver Tale-Yazdi authored
      
      
      (imported from https://github.com/paritytech/cumulus/pull/2157)
      
      ## Changes
      
      This MR refactors the XCMP, Parachains System and DMP pallets to use
      the [MessageQueue](https://github.com/paritytech/substrate/pull/12485)
      for delayed execution of incoming messages. The DMP pallet is entirely
      replaced by the MQ and thereby removed. This allows for PoV-bounded
      execution and resolves a number of issues that stem from the current
      work-around.
      
      All System Parachains adopt this change.  
      The most important changes are in `primitives/core/src/lib.rs`,
      `parachains/common/src/process_xcm_message.rs`,
      `pallets/parachain-system/src/lib.rs`, `pallets/xcmp-queue/src/lib.rs`
      and the runtime configs.
      
      ### DMP Queue Pallet
      
      The pallet got removed and its logic refactored into parachain-system.
      Overweight message management can be done directly through the MQ
      pallet.
      
      Final undeployment migrations are provided by
      `cumulus_pallet_dmp_queue::UndeployDmpQueue` and `DeleteDmpQueue` that
      can be configured with an aux config trait like:
      
      ```rust
      parameter_types! {
      	pub const DmpQueuePalletName: &'static str = "DmpQueue" < CHANGE ME;
      	pub const RelayOrigin: AggregateMessageOrigin = AggregateMessageOrigin::Parent;
      }
      
      impl cumulus_pallet_dmp_queue::MigrationConfig for Runtime {
      	type PalletName = DmpQueuePalletName;
      	type DmpHandler = frame_support::traits::EnqueueWithOrigin<MessageQueue, RelayOrigin>;
      	type DbWeight = <Runtime as frame_system::Config>::DbWeight;
      }
      
      // And adding them to your Migrations tuple:
      pub type Migrations = (
      	...
      	cumulus_pallet_dmp_queue::UndeployDmpQueue<Runtime>,
      	cumulus_pallet_dmp_queue::DeleteDmpQueue<Runtime>,
      );
      ```
      
      ### XCMP Queue pallet
      
      Removed all dispatch queue functionality. Incoming XCMP messages are now
      either handled immediately if they are Signals, or otherwise enqueued
      into the MQ pallet.
      
      New config items for the XCMP queue pallet:
      ```rust
      /// The actual queue implementation that retains the messages for later processing.
      type XcmpQueue: EnqueueMessage<ParaId>;
      
      /// How an XCM over HRMP from a sibling parachain should be processed.
      type XcmpProcessor: ProcessMessage<Origin = ParaId>;
      
      /// The maximal number of suspended XCMP channels at the same time.
      #[pallet::constant]
      type MaxInboundSuspended: Get<u32>;
      ```
      
      How to configure those:
      
      ```rust
      // Use the MessageQueue pallet to store messages for later processing. The `TransformOrigin` is needed since
      // the MQ pallet itself operates on `AggregateMessageOrigin` but we want to enqueue `ParaId`s.
      type XcmpQueue = TransformOrigin<MessageQueue, AggregateMessageOrigin, ParaId, ParaIdToSibling>;
      
      // Process XCMP messages from siblings. This is type-safe to only accept `ParaId`s. They will be dispatched
      // with origin `Junction::Sibling(…)`.
      type XcmpProcessor = ProcessFromSibling<
      	ProcessXcmMessage<
      		AggregateMessageOrigin,
      		xcm_executor::XcmExecutor<xcm_config::XcmConfig>,
      		RuntimeCall,
      	>,
      >;
      
      // Not really important what to choose here. Just something larger than the maximal number of channels.
      type MaxInboundSuspended = sp_core::ConstU32<1_000>;
      ```
      
      The `InboundXcmpStatus` storage item was replaced by
      `InboundXcmpSuspended` since it now only tracks inbound queue suspension
      and no message indices anymore.
      
      Now only the most recent channel `Signals` are sent, as all prior ones
      are outdated anyway.
      
      ### Parachain System pallet
      
      For `DMP` messages, instead of forwarding them to the `DMP` pallet, it
      now pushes them to the configured `DmpQueue`. The message processing
      which was triggered in `set_validation_data` is now done by the MQ
      pallet's `on_initialize`.
      
      XCMP messages are still handed off to the `XcmpMessageHandler`
      (XCMP-Queue pallet) - no change here.
      
      New config items for the parachain system pallet:
      ```rust
      /// Queues inbound downward messages for delayed processing. 
      ///
      /// Analogous to the `XcmpQueue` of the XCMP queue pallet.
      type DmpQueue: EnqueueMessage<AggregateMessageOrigin>;
      ``` 
      
      How to configure:
      ```rust
      /// Use the MQ pallet to store DMP messages for delayed processing.
      type DmpQueue = MessageQueue;
      ``` 
      
      ## Message Flow
      
      The flow of messages on the parachain side. Messages come in from the
      left via the `Validation Data` and finally end up at the `Xcm Executor`
      on the right.
      
      ![Untitled (1)](https://github.com/paritytech/cumulus/assets/10380170/6cf8b377-88c9-4aed-96df-baace266e04d)
      
      ## Further changes
      
      - Bumped the default suspension, drop and resume thresholds in
      `QueueConfigData::default()`.
      - `XcmpQueue::{suspend_xcm_execution, resume_xcm_execution}` errors when
      they would be a noop.
      - Properly validate the `QueueConfigData` before setting it.
      - Marked weight files as auto-generated so they won't auto-expand in the
      MR files view.
      - Move the `hypothetical` asserts to `frame_support` under the name
      `experimental_hypothetically`
      
      Questions:
      - [ ] What about the ugly `#[cfg(feature = "runtime-benchmarks")]` in
      the runtimes? Not sure how to best fix. Just having them like this makes
      tests fail that rely on the real message processor when the feature is
      enabled.
      - [ ] Need a good weight for `MessageQueueServiceWeight`. The scheduler
      already takes 80% so I put it to 10% but that is quite low.
      
      TODO:
      - [x] Remove c&p code after
      https://github.com/paritytech/polkadot/pull/6271
      - [x] Use `HandleMessage` once it is public in Substrate
      - [x] fix `runtime-benchmarks` feature
      https://github.com/paritytech/polkadot/pull/6966
      - [x] Benchmarks
      - [x] Tests
      - [ ] Migrate `InboundXcmpStatus` to `InboundXcmpSuspended`
      - [x] Possibly cleanup Migrations (DMP+XCMP)
      - [x] optional: create `TransformProcessMessageOrigin` in Substrate and
      replace `ProcessFromSibling`
      - [ ] Rerun weights on ref HW
      
      ---------
      
      Signed-off-by: Oliver Tale-Yazdi <[email protected]>
      Co-authored-by: Liam Aharon <[email protected]>
      Co-authored-by: joe petrowski <[email protected]>
      Co-authored-by: Kian Paimani <[email protected]>
      Co-authored-by: command-bot <>
  12. Nov 01, 2023
  13. Oct 31, 2023
  14. Oct 24, 2023
  15. Oct 23, 2023
  16. Oct 18, 2023
    • Introduce XcmFeesToAccount fee manager (#1234) · 3dece311
      Keith Yeung authored
      
      
      Combination of paritytech/polkadot#7005, its addon PR
      paritytech/polkadot#7585 and its companion paritytech/cumulus#2433.
      
      This PR introduces a new XcmFeesToAccount struct which implements the
      `FeeManager` trait, and assigns this struct as the `FeeManager` in the
      XCM config for all runtimes.
      
      The struct simply deposits all fees handled by the XCM executor to a
      specified account. In all runtimes, the specified account is configured
      as the treasury account.
      
      XCM __delivery__ fees are now being introduced (unless the root origin
      is sending a message to a system parachain on behalf of the originating
      chain).
      
      # Note for reviewers
      
      Most file changes are tests that had to be modified to account for the
      new fees.
      Main changes are in:
      - cumulus/pallets/xcmp-queue/src/lib.rs <- To make it track the delivery
      fees exponential factor
      - polkadot/xcm/xcm-builder/src/fee_handling.rs <- Added. Has the
      FeeManager implementation
      - All runtime xcm_config files <- To add the FeeManager to the XCM
      configuration
      
      # Important note
      
      After this change, instructions that create and send a new XCM (Query*,
      Report*, ExportMessage, InitiateReserveWithdraw, InitiateTeleport,
      DepositReserveAsset, TransferReserveAsset, LockAsset and RequestUnlock)
      will require the corresponding origin account in the origin register to
      pay for transport delivery fees, and the onward message will fail to be
      sent if the origin account does not have the required amount. This
      delivery fee is on top of what we already collect as tx fees in
      pallet-xcm and XCM BuyExecution fees!
      
      Wallet UIs that want to expose the new delivery fee can do so using the
      formula:
      
      ```
      delivery_fee_factor * (base_fee + encoded_msg_len * per_byte_fee)
      ```
      
      where the delivery fee factor can be obtained from the corresponding
      pallet based on which transport you are using (UMP, HRMP or bridges),
      the base fee is a constant, the encoded message length comes from the
      message itself, and the per-byte fee is the same as the configured
      per-byte fee for txs (i.e. `TransactionByteFee`).
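      
      For example, treating the factor as a plain multiplier (the real pallets
      use a fixed-point type), the formula works out as:
      
      ```rust
      // Worked sketch of the delivery fee formula above; numbers are illustrative.
      fn delivery_fee(factor: u128, base_fee: u128, encoded_msg_len: u128, per_byte_fee: u128) -> u128 {
          factor.saturating_mul(base_fee.saturating_add(encoded_msg_len.saturating_mul(per_byte_fee)))
      }
      
      fn main() {
          // factor 1, base fee 1_000_000, a 100-byte message, 1_000 per byte:
          let fee = delivery_fee(1, 1_000_000, 100, 1_000);
          assert_eq!(fee, 1_100_000); // 1 * (1_000_000 + 100 * 1_000)
          println!("delivery fee: {fee}");
      }
      ```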
      
      ---------
      
      Co-authored-by: Branislav Kontur <[email protected]>
      Co-authored-by: joe petrowski <[email protected]>
      Co-authored-by: Giles Cope <[email protected]>
      Co-authored-by: command-bot <>
      Co-authored-by: Francisco Aguirre <[email protected]>
      Co-authored-by: Liam Aharon <[email protected]>
      Co-authored-by: Kian Paimani <[email protected]>
  17. Oct 17, 2023
  18. Oct 15, 2023
    • fix: GoAhead signal only set when runtime upgrade is enacted from parachain side (#1176) · 91c4360c
      Daan van der Plas authored
      The runtime code of a parachain can be replaced on the relay-chain via:
      
      [cumulus]:
      [enact_authorized_upgrade](https://github.com/paritytech/polkadot-sdk/blob/1a38d6d6/cumulus/pallets/parachain-system/src/lib.rs#L661);
      this is used for a runtime upgrade when a parachain is not bricked.
      
      [polkadot] (these are used when a parachain is bricked):
      -
      [force_set_current_code](https://github.com/paritytech/polkadot-sdk/blob/1a38d6d6/polkadot/runtime/parachains/src/paras/mod.rs#L823):
      immediately changes the runtime code of a given para without a pvf check
      (root).
      -
      [force_schedule_code_upgrade](https://github.com/paritytech/polkadot-sdk/blob/1a38d6d6/polkadot/runtime/parachains/src/paras/mod.rs#L864):
      schedules a change to the runtime code of a given para including a pvf
      check of the new code (root).
      -
      [schedule_code_upgrade](https://github.com/paritytech/polkadot-sdk/blob/1a38d6d6/polkadot/runtime/common/src/paras_registrar.rs#L395):
      schedules a change to the runtime code of a given para including a pvf
      check of the new code. Besides root, the parachain or parachain manager
      can call this extrinsic given that the parachain is unlocked.
      
      Polkadot signals a parachain to be ready for a runtime upgrade through
      the
      [GoAhead](https://github.com/paritytech/polkadot-sdk/blob/e4949344/polkadot/primitives/src/v5/mod.rs#L1229)
      signal.
      
      When `enact_authorized_upgrade` is executed in cumulus, the same
      underlying helper function used by `force_schedule_code_upgrade` &
      `schedule_code_upgrade`,
      [schedule_code_upgrade](https://github.com/paritytech/polkadot/blob/09b61286da11921a3dda0a8e4015ceb9ef9cffca/runtime/parachains/src/paras/mod.rs#L1778),
      is called on the relay-chain, which sets the `GoAhead` signal (if the
      pvf is accepted).
      
      If Cumulus receives the `GoAhead` signal from polkadot without having
      the `PendingValidationCode` ready, it will panic
      ([ref](https://github.com/paritytech/polkadot/pull/7412)). For
      `enact_authorized_upgrade` we know for sure the `PendingValidationCode`
      is set. By contrast, for `force_schedule_code_upgrade` &
      `schedule_code_upgrade` this is not the case.
      
      This PR includes a flag such that the `GoAhead` signal will only be set
      when a runtime upgrade is enacted by the parachain
      (`enact_authorized_upgrade`).
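      
      Roughly, the gate looks like the following sketch; the enum and function
      shape are assumptions for illustration, not the exact `paras` pallet code:
      
      ```rust
      // Illustrative sketch of gating the `GoAhead` signal on who initiated the upgrade.
      enum SetGoAhead {
          // Enacted from the parachain side (`enact_authorized_upgrade`):
          // `PendingValidationCode` is known to be set, so signalling is safe.
          Yes,
          // Forced from the relay-chain side: don't signal, because cumulus may
          // lack `PendingValidationCode` and would panic on `GoAhead`.
          No,
      }
      
      fn schedule_code_upgrade(_new_code: Vec<u8>, set_go_ahead: SetGoAhead) -> bool {
          // ... PVF pre-checking and scheduling of the code change elided ...
          // Returns whether the `GoAhead` signal gets set for the para.
          matches!(set_go_ahead, SetGoAhead::Yes)
      }
      ```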
      
      additional info: https://github.com/paritytech/polkadot/pull/7412
      
      Closes #641
      
      ---------
      
      Co-authored-by: Bastian Köcher <[email protected]>
      Co-authored-by: Bastian Köcher <[email protected]>
    • Refactor staking ledger (#1484) · 8ee4042c
      Gonçalo Pestana authored
      This PR refactors the staking ledger logic to encapsulate all reads and
      mutations of `Ledger`, `Bonded`, `Payee` and stake locks within the
      `StakingLedger` struct implementation.
      
      With these changes, all reads and mutations of the `Ledger`, `Payee`
      and `Bonded` storage maps should be done through the methods exposed by
      `StakingLedger` to ensure the data and lock consistency of the operations.
      The newly introduced methods that mutate and read `Ledger` are:
      
      - `ledger.update()`: inserts/updates a staking ledger in storage and
      updates staking locks accordingly (plus `ledger.bond()`, which is syntactic
      sugar for `ledger.update()`);
      - `ledger.kill()`: removes all `Bonded` and `StakingLedger` related data for
      a given ledger and updates staking locks accordingly;
      - `StakingLedger::get(account)`: queries both the `Bonded` and `Ledger`
      storages and returns an `Option<StakingLedger>`. The pallet impl exposes
      `fn ledger(account)` as syntactic sugar for `StakingLedger::get(account)`.
      
      Retrieving a ledger with `StakingLedger::get()` can be done by providing
      either a stash or controller account. The input must be wrapped in a
      `StakingAccount` variant (Stash or Controller) which is treated
      accordingly. This simplifies the caller API but will eventually be
      deprecated once we completely get rid of the controller account in
      staking. However, this refactor will help with the work necessary when
      completely removing the controller.
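      
      A self-contained toy model of that encapsulation, where in-memory maps
      stand in for the storage items; the method names mirror the description
      above but the signatures are illustrative:
      
      ```rust
      use std::collections::HashMap;
      
      type AccountId = u64;
      type Balance = u128;
      
      enum StakingAccount { Stash(AccountId), Controller(AccountId) }
      
      #[derive(Clone)]
      struct StakingLedger { stash: AccountId, total: Balance }
      
      #[derive(Default)]
      struct Storage {
          ledger: HashMap<AccountId, StakingLedger>, // `Ledger`: controller -> ledger
          bonded: HashMap<AccountId, AccountId>,     // `Bonded`: stash -> controller
          locks: HashMap<AccountId, Balance>,        // stake locks: stash -> amount
      }
      
      impl StakingLedger {
          /// `StakingLedger::get(account)`: works with either stash or controller.
          fn get(store: &Storage, account: StakingAccount) -> Option<StakingLedger> {
              let controller = match account {
                  StakingAccount::Controller(c) => c,
                  StakingAccount::Stash(s) => *store.bonded.get(&s)?,
              };
              store.ledger.get(&controller).cloned()
          }
      
          /// `ledger.update()`: insert/update the ledger and keep the lock consistent.
          fn update(self, store: &mut Storage, controller: AccountId) {
              store.bonded.insert(self.stash, controller);
              store.locks.insert(self.stash, self.total);
              store.ledger.insert(controller, self);
          }
      
          /// `ledger.kill()`: remove all `Bonded`/`Ledger` data and release the lock.
          fn kill(store: &mut Storage, stash: AccountId) {
              if let Some(controller) = store.bonded.remove(&stash) {
                  store.ledger.remove(&controller);
              }
              store.locks.remove(&stash);
          }
      }
      ```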
      
      Other goals:
      
      - No logical changes have been introduced in this PR;
      - No breaking changes or updates in wallets required;
      - No new storage items or need to perform storage migrations;
      - Centralise the changes to bonds and ledger updates to simplify the
      OnStakingUpdate updates to the target list (related to
      https://github.com/paritytech/polkadot-sdk/issues/443)
      
      Note: it would be great to prevent or at least raise a warning if
      `Ledger<T>`, `Payee<T>` and `Bonded<T>` storage types are accessed
      outside the `StakingLedger` implementation. This PR should not get
      blocked by that feature, but there's a tracking issue here
      https://github.com/paritytech/polkadot-sdk/issues/149
      
      Related and step towards
      https://github.com/paritytech/polkadot-sdk/issues/443
  19. Oct 13, 2023
  20. Oct 12, 2023
    • Fix links to implementers' guide (#1865) · d2fc1d7c
      Anton Vilhelm Ásgeirsson authored
      # Description
      
      In a couple of cases, there were links pointing to the w3f github pages
      domain. In other instances, there were links pointing to the old
      polkadot repo's github pages. Both of these are now pointing to the
      relevant links in
      https://paritytech.github.io/polkadot-sdk/book/index.html.
      
      These changes were made specifically because the w3f github pages
      returns a 404, and while fixing the links, the old polkadot repo links
      were touched up as well even if they do redirect properly.
      
      This shouldn't affect anything as these are documentation link changes
      only.
    • Disabled validators runtime API (#1257) · 7aace06b
      Tsvetomir Dimitrov authored
      
      
      Exposes the disabled validators list via a runtime API.
      
      ---------
      
      Co-authored-by: ordian <[email protected]>
      Co-authored-by: ordian <[email protected]>
  21. Oct 10, 2023
    • [FRAME] Warn on unchecked weight witness (#1818) · 64877492
      Oliver Tale-Yazdi authored
      Adds a warning to FRAME pallets when a function argument that starts
      with `_` is used in the weight formula.
      This is in most cases an error since the weight witness needs to be
      checked.
      
      Example:
      
      ```rust
      #[pallet::call_index(0)]
      #[pallet::weight(T::SystemWeightInfo::remark(_remark.len() as u32))]
      pub fn remark(_origin: OriginFor<T>, _remark: Vec<u8>) -> DispatchResultWithPostInfo {
      	Ok(().into())
      }
      ```
      
      Produces this warning:
      
      ```pre
      warning: use of deprecated constant `pallet::warnings::UncheckedWeightWitness_0::_w`: 
                       It is deprecated to not check weight witness data.
                       Please instead ensure that all witness data for weight calculation is checked before usage.
               
                       For more info see:
                           <https://github.com/paritytech/polkadot-sdk/pull/1818>
         --> substrate/frame/system/src/lib.rs:424:40
          |
      424 |         pub fn remark(_origin: OriginFor<T>, _remark: Vec<u8>) -> DispatchResultWithPostInfo {
          |                                              ^^^^^^^
          |
          = note: `#[warn(deprecated)]` on by default
      ```
      
      Can be suppressed like this, since in this case it is legit:
      
      ```rust
      #[pallet::call_index(0)]
      #[pallet::weight(T::SystemWeightInfo::remark(remark.len() as u32))]
      pub fn remark(_origin: OriginFor<T>, remark: Vec<u8>) -> DispatchResultWithPostInfo {
      	let _ = remark; // We don't need to check the weight witness.
      	Ok(().into())
      }
      ```
      
      Changes:
      - Add warning on unchecked weight witness
      - Respect `subkeys` limit in `System::kill_prefix`
      - Fix HRMP pallet and other warnings
      - Update `proc_macro_warning` dependency
      - Delete random folder `substrate/src/src` 🙈
      - Add prdoc
      
      ---------
      
      Signed-off-by: Oliver Tale-Yazdi <[email protected]>
      Co-authored-by: joe petrowski <[email protected]>
  22. Sep 28, 2023
    • Add event field names to HRMP Event variants (#1695) · 4bc97e48
      Dónal Murray authored
      Update the HRMP pallet to use field names for Event variants to improve
      metadata for a better client experience.
      Event variants are now structs instead of unnamed tuples.
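      
      Purely illustrative of the shape of the change (the variant and field
      names below are made up, not the HRMP pallet's real events):
      
      ```rust
      type ParaId = u32;
      
      // Before: tuple variant, fields appear in metadata without names.
      enum EventBefore {
          ChannelOpened(ParaId, ParaId, u32),
      }
      
      // After: struct variant, field names are carried in the metadata for clients.
      enum EventAfter {
          ChannelOpened { sender: ParaId, recipient: ParaId, proposed_max_capacity: u32 },
      }
      ```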
      
      Partially implements Substrate issue
      [9903](https://github.com/paritytech/substrate/issues/9903) which
      doesn't appear to have been moved to the monorepo.
  23. Sep 27, 2023
  24. Sep 22, 2023
  25. Sep 19, 2023
  26. Sep 14, 2023
  27. Sep 06, 2023
  28. Aug 31, 2023
    • Rename `polkadot-parachain` to `polkadot-parachain-primitives` (#1334) · a33d7922
      Bastian Köcher authored
      * Rename `polkadot-parachain` to `polkadot-parachain-primitives`
      
      While doing this it also fixes some last `rustdoc` issues and fixes
      another Cargo warning related to `pallet-paged-list`.
      
      * Fix compilation
      
      * ".git/.scripts/commands/fmt/fmt.sh"
      
      * Fix XCM docs
      
      ---------
      
      Co-authored-by: command-bot <>
    • backing: move the min votes threshold to the runtime (#1200) · d6af073a
      Alin Dima authored
      
      
      * move min backing votes const to runtime
      
      also cache it per-session in the backing subsystem
      
      Signed-off-by: alindima <[email protected]>
      
      * add runtime migration
      
      * introduce api versioning for min_backing votes
      
      also enable it for rococo/versi for testing
      
      * also add min_backing_votes runtime calls to statement-distribution
      
      this dependency has been recently introduced by async backing
      
      * remove explicit version runtime API call
      
      this is not needed, as the RuntimeAPISubsystem already takes care
      of versioning and will return NotSupported if the version is not
      right.
      
      * address review comments
      
      - parametrise backing votes runtime API with session index
      - remove RuntimeInfo usage in backing subsystem, as runtime API
      caches the min backing votes by session index anyway.
      - move the logic for adjusting the configured needed backing votes by the size of the backing group
      to a primitives helper (see the sketch after this list).
      - move the legacy min backing votes value to a primitives helper.
      - mark JoinMultiple error as fatal, since the Canceled (non-multiple) counterpart is also fatal.
      - make backing subsystem handle fatal errors for new leaves update.
      - add HostConfiguration consistency check for zeroed backing votes threshold
      - add cumulus accompanying change
      
      * fix cumulus test compilation
      
      * fix tests
      
      * more small fixes
      
      * fix merge
      
      * bump runtime api version for westend and rollback version for rococo
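      
      A hedged sketch of that primitives helper for adjusting the configured
      minimum by the backing group size (the real name and signature may differ):
      
      ```rust
      /// A group can never supply more votes than it has members, so cap the
      /// configured minimum at the group size.
      fn effective_minimum_backing_votes(group_len: usize, configured_minimum: u32) -> usize {
          std::cmp::min(group_len, configured_minimum as usize)
      }
      ```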
      
      ---------
      
      Signed-off-by: alindima <[email protected]>
      Co-authored-by: Javier Viola <[email protected]>
  29. Aug 30, 2023
  30. Aug 29, 2023
  31. Aug 24, 2023
  32. Aug 22, 2023
  33. Aug 20, 2023