  1. Feb 18, 2025
  2. Feb 17, 2025
    • [Staking] Bounded Slashing: Paginated Offence Processing & Slash Application (#7424) · dda2cb59
      Ankan authored
      
      closes https://github.com/paritytech/polkadot-sdk/issues/3610.
      
      helps https://github.com/paritytech/polkadot-sdk/issues/6344, but need
      to migrate storage `Offences::Reports` before we can remove exposure
      dependency in RC pallets.
      
      replaces https://github.com/paritytech/polkadot-sdk/issues/6788.
      
      ## Context  
      Slashing in staking is currently unbounded, which is a major blocker for
      moving staking to a parachain (AH).
      
      ### Current Slashing Process (Unbounded)

      1. **Offence Reported**
         - Offences include multiple validators, each with potentially large exposure pages.
         - Slashes are **computed immediately** and scheduled for application after **28 eras**.

      2. **Slash Applied**
         - All unapplied slashes are executed in **one block** at the start of the **28th era**. This is an **unbounded operation**.
      
      
      ### Proposed Slashing Process (Bounded)

      1. **Offence Queueing**
         - Offences are **queued** after basic sanity checks.

      2. **Paged Offence Processing (Computing Slash)**
         - Slashes are **computed one validator exposure page at a time**.
         - **Unapplied slashes** are stored in a **double map**:
           - **Key 1 (k1):** `EraIndex`
           - **Key 2 (k2):** `(Validator, SlashFraction, PageIndex)` — a unique identifier for each slash page

      3. **Paged Slash Application**
         - Slashes are **applied one page at a time** across multiple blocks.
         - Slash application starts at the **27th era** (one era earlier than before) to ensure all slashes are applied **before stakers can unbond** (which starts from era 28 onwards).
         - A rough per-block sketch of steps 2 and 3 follows below.
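      The bounded flow above can be summarised with a small sketch (hypothetical names and simplified types, not the pallet's actual code): each block computes the slash for at most one exposure page, and from era 27 onwards each block applies at most one stored page.

      ```rust
      // Illustrative sketch only; `process_one_page` / `apply_one_page` are
      // hypothetical names, not the pallet's real functions.
      struct QueuedOffence { validator: u32, slash_fraction_parts: u32, remaining_pages: u32 }
      struct UnappliedSlashPage { validator: u32, slash_fraction_parts: u32, page_index: u32 }

      /// Compute the slash for a single exposure page of one queued offence per block.
      /// Assumes every queued offence has at least one page.
      fn process_one_page(queue: &mut Vec<QueuedOffence>, unapplied: &mut Vec<UnappliedSlashPage>) {
          let Some(offence) = queue.last_mut() else { return };
          offence.remaining_pages = offence.remaining_pages.saturating_sub(1);
          unapplied.push(UnappliedSlashPage {
              validator: offence.validator,
              slash_fraction_parts: offence.slash_fraction_parts,
              page_index: offence.remaining_pages,
          });
          if offence.remaining_pages == 0 {
              queue.pop();
          }
      }

      /// From era 27 onwards, apply a single stored slash page per block.
      fn apply_one_page(unapplied: &mut Vec<UnappliedSlashPage>) -> Option<UnappliedSlashPage> {
          unapplied.pop()
      }
      ```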
      
      ---
      
      ## Worst-Case Block Calculation for Slash Application  
      
      ### Polkadot:  
      - **1 era = 24 hours**, **1 block = 6s** → **14,400 blocks/era**  
      - On parachains (**12s blocks**) → **7,200 blocks/era**  
      
      ### Kusama:  
      - **1 era = 6 hours**, **1 block = 6s** → **3,600 blocks/era**  
      - On parachains (**12s blocks**) → **1,800 blocks/era**  
      
      ### Worst-Case Assumptions:
      - **Total stakers:** 40,000 nominators, 1,000 validators (Polkadot currently has ~23k nominators and 500 validators).
      - **Max slashed:** 50%, i.e. 20k nominators and 250 validators.
      - **Page size:** validators with multiple pages: (512 + 1)/2 ≈ 256; validators with a single page: 1.
      
      ### Calculation:
      There might be a more accurate way to calculate this worst-case number,
      and this estimate could be significantly higher than necessary, but the
      real number shouldn't exceed it.

      Blocks needed: 250 + 20,000/256 ≈ 330 blocks.
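      For reference, the arithmetic behind this estimate, using the assumptions listed above:

      ```rust
      fn main() {
          // Worst case from above: 250 slashed validators with a single page each,
          // plus 20,000 slashed nominators spread over pages of ~256 entries.
          let blocks = 250 + 20_000u32.div_ceil(256);
          println!("worst-case blocks to apply all slashes: ~{blocks}"); // 329, i.e. ~330
      }
      ```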
      
      ## Potential Improvements
      - Consider adding an **Offchain Worker (OCW)** task to further optimize
      slash application in future updates.
      - Dynamically batch unapplied slashes based on number of nominators in
      the page, or process until reserved weight limit is exhausted.
      
      ----
      ## Summary of Changes  
      
      ### Storage  
      - **New:**  
        - `OffenceQueue` *(StorageDoubleMap)*  
          - **K1:** Era  
          - **K2:** Offending validator account  
          - **V:** `OffenceRecord`  
        - `OffenceQueueEras` *(StorageValue)*  
          - **V:** `BoundedVec<EraIndex, BoundingDuration>`  
        - `ProcessingOffence` *(StorageValue)*  
          - **V:** `(Era, offending validator account, OffenceRecord)`  
      
      - **Changed:**
        - `UnappliedSlashes`:
          - **Old:** `StorageMap<K -> Era, V -> Vec<UnappliedSlash>>`
          - **New:** `StorageDoubleMap<K1 -> Era, K2 -> (validator_acc, perbill, page_index), V -> UnappliedSlash>`
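      A rough FRAME-style sketch of this layout (hashers, bounds and the `OffenceRecord` / `UnappliedSlash` value types are placeholders, not the exact pallet declarations):

      ```rust
      // Sketch only: illustrates the keys and values summarised above.
      #[pallet::storage]
      pub type OffenceQueue<T: Config> = StorageDoubleMap<
          _,
          Twox64Concat, EraIndex,     // K1: era of the offence
          Twox64Concat, T::AccountId, // K2: offending validator
          OffenceRecord<T>,
      >;

      #[pallet::storage]
      pub type OffenceQueueEras<T: Config> =
          StorageValue<_, BoundedVec<EraIndex, BoundingDuration>, ValueQuery>;

      #[pallet::storage]
      pub type ProcessingOffence<T: Config> =
          StorageValue<_, (EraIndex, T::AccountId, OffenceRecord<T>)>;

      #[pallet::storage]
      pub type UnappliedSlashes<T: Config> = StorageDoubleMap<
          _,
          Twox64Concat, EraIndex,                     // K1: era the slash applies in
          Twox64Concat, (T::AccountId, Perbill, u32), // K2: (validator, slash fraction, page index)
          UnappliedSlash<T>,
      >;
      ```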
      
      ### Events  
      - **New:**  
        - `SlashComputed { offence_era, slash_era, offender, page }`  
        - `SlashCancelled { slash_era, slash_key, payout }`  
      
      ### Error  
      - **Changed:**  
        - `InvalidSlashIndex` → Renamed to `InvalidSlashRecord`  
      - **Removed:**  
        - `NotSortedAndUnique`  
      - **Added:**  
        - `EraNotStarted`  
      
      ### Call  
      - **Changed:**  
        - `cancel_deferred_slash(era, slash_indices: Vec<u32>)`  
          → Now takes `Vec<(validator_acc, slash_fraction, page_index)>`  
      - **New:**
        - `apply_slash(slash_era, slash_key: (validator_acc, slash_fraction, page_index))`
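      In signature terms, the call-level change roughly looks like this (a simplified sketch based on the summary above, not the exact extrinsic definitions):

      ```rust
      use sp_arithmetic::Perbill;

      // (validator account, slash fraction, page index) uniquely identifies one unapplied slash page.
      type SlashKey<AccountId> = (AccountId, Perbill, u32);

      // Before: indices into the era's Vec<UnappliedSlash>.
      // fn cancel_deferred_slash(origin, era: EraIndex, slash_indices: Vec<u32>)

      // After: the keys of the unapplied slash pages to cancel.
      // fn cancel_deferred_slash(origin, era: EraIndex, slash_keys: Vec<SlashKey<AccountId>>)

      // New: apply a single stored slash page.
      // fn apply_slash(origin, slash_era: EraIndex, slash_key: SlashKey<AccountId>)
      ```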
      
      ### Runtime Config  
      - `FullIdentification` is now set to a unit type (`()`) / null identity,
      replacing the previous exposure type for all runtimes using
      `pallet_session::historical`.
      
      ## TODO
      - [x] Fixed broken `CancelDeferredSlashes`.
      - [x] Ensure on_offence called only with validator account for
      identification everywhere.
      - [ ] Ensure we never need to read full exposure.
      - [x] Tests for multi block processing and application of slash.
      - [x] Migrate UnappliedSlashes 
      - [x] Bench (crude, needs proper bench as followup)
        - [x] on_offence()
        - [x] process_offence()
        - [x] apply_slash()
       
       
      ## Followups (tracker
      [link](https://github.com/paritytech/polkadot-sdk/issues/7596))
      - [ ] OCW task to process offence + apply slashes.
      - [ ] Minimum time for governance to cancel deferred slash.
      - [ ] Allow root or staking admin to add a custom slash.
      - [ ] Test HistoricalSession proof works fine with eras before removing
      exposure as full identity.
      - [ ] Properly bench offence processing and slashing.
      - [ ] Handle Offences::Reports migration when removing validator
      exposure as identity.
      
      ---------
      
      Co-authored-by: Gonçalo Pestana <g6pestana@gmail.com>
      Co-authored-by: command-bot <>
      Co-authored-by: Kian Paimani <5588131+kianenigma@users.noreply.github.com>
      Co-authored-by: Guillaume Thiolliere <gui.thiolliere@gmail.com>
      Co-authored-by: kianenigma <kian@parity.io>
      Co-authored-by: Giuseppe Re <giuseppe.re@parity.io>
      Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
      dda2cb59
    • Remove `yamux_window_size` from network config (#7014) · 6b6dae87
      Qiwei Yang authored
      # Description
      
      resolve #6468
      
      
      
      # Checklist
      
      * [x] My PR includes a detailed description as outlined in the
      "Description" and its two subsections above.
      * [x] My PR follows the [labeling requirements](
      
      https://github.com/paritytech/polkadot-sdk/blob/master/docs/contributor/CONTRIBUTING.md#Process
      ) of this project (at minimum one label for `T` required)
      * External contributors: ask maintainers to put the right label on your
      PR.
      * [x] I have made corresponding changes to the documentation (if
      applicable)
      * [x] I have added tests that prove my fix is effective or that my
      feature works (if applicable)
      
      ---------
      
      Co-authored-by: command-bot <>
      6b6dae87
    • implement web3_clientVersion (#7580) · d61032b9
      nprt authored
      
      Implements the `web3_clientVersion` method. This is a common requirement
      for external Ethereum libraries when querying a client.
      
      Fixes paritytech/contract-issues#26.
      
      ---------
      
      Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
      d61032b9
    • libp2p: Enhance logging targets for granular control (#7494) · 09d37543
      Alexandru Vasile authored
      
      This PR modifies the libp2p networking-specific log targets for granular
      control (e.g., just enabling trace for req-resp).
      
      Previously, all logs were emitted to the `sub-libp2p` target, flooding the
      logs on busy validators.
      
      ### Changes
      - Discovery: `sub-libp2p::discovery`
      - Notification/behaviour: `sub-libp2p::notification::behaviour`
      - Notification/handler: `sub-libp2p::notification::handler`
      - Notification/service: `sub-libp2p::notification::service`
      - Notification/upgrade: `sub-libp2p::notification::upgrade`
      - Request response: `sub-libp2p::request-response`
      
      cc @paritytech/networking
      
      ---------
      
      Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
      Co-authored-by: Dmitry Markin <dmitry@markin.tech>
      09d37543
    • [AHM] Make pallet types public (#7579) · ca91d4b5
      Oliver Tale-Yazdi authored
      
      Preparation for AHM and making stuff public.
      
      ---------
      
      Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
      Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
      Co-authored-by: Dónal Murray <donal.murray@parity.io>
      ca91d4b5
    • Change pallet referenda TracksInfo::tracks to return an iterator (#2072) · c078d2f4
      Daniel Olano authored
      
      Returning an iterator from `TracksInfo::tracks()` instead of a static
      slice allows more flexible implementations of `TracksInfo` that can use
      chain storage, without paying most of the performance/memory penalty we
      would incur by returning an owned `Vec` instead.
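      A toy illustration of the shape of this change (simplified types; not the actual `pallet_referenda` trait or track struct):

      ```rust
      // Toy sketch: a static slice forces all tracks to be baked into the code,
      // while an iterator lets an implementation stream tracks from storage or
      // compute them lazily.
      struct Track { id: u16, name: &'static str }

      trait TracksInfoBefore {
          fn tracks() -> &'static [Track];
      }

      trait TracksInfoAfter {
          fn tracks() -> impl Iterator<Item = Track>;
      }
      ```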
      
      ---------
      
      Co-authored-by: Pablo Andrés Dorado Suárez <hola@pablodorado.com>
      c078d2f4
    • [pallet-revive] rpc add --earliest-receipt-block (#7589) · 8cca727f
      PG Herveou authored
      
      Add a CLI option to skip searching for receipts in blocks older than the
      specified limit.
      
      ---------
      
      Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
      8cca727f
    • Bump frame-metadata v16 to 19.0.0 (#7563) · 9015a0fc
      Giuseppe Re authored
      
      Update to latest version of `frame-metadata` in order to support pallet
      view function metadata.
      
      ---------
      
      Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
      9015a0fc
    • Pablo Andrés Dorado Suárez · 83db0474
    • `pallet-utility: if_else` (#6321) · ead8fbdf
      rainb0w-pr0mise authored
      
      # Utility Call Fallback
      
      This introduces a new extrinsic: **`if_else`**, which first attempts to
      dispatch the `main` call(s). If the `main` call(s) fail, the `fallback`
      call(s) are dispatched instead. Both calls are executed with the same
      origin.

      If the fallback also fails, the whole call fails, with the weights of
      both attempts returned.
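      Conceptually, the dispatch semantics behave like this standalone sketch (hypothetical types, not the pallet's implementation):

      ```rust
      // Try the main call; on failure dispatch the fallback with the same origin;
      // fail the whole call only if the fallback fails too.
      #[derive(Debug)]
      struct DispatchError(&'static str);

      fn if_else(
          main: impl FnOnce() -> Result<(), DispatchError>,
          fallback: impl FnOnce() -> Result<(), DispatchError>,
      ) -> Result<(), DispatchError> {
          match main() {
              Ok(()) => Ok(()),
              Err(_) => fallback(),
          }
      }
      ```

      Either closure could stand in for a `batch`-style call, matching the use case below.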
      
      ## Use Case
      Some use cases might involve submitting a `batch` type call in either
      main, fallback or both.
      
      Resolves #6000
      
      Polkadot Address: 1HbdqutFR8M535LpbLFT41w3j7v9ptEYGEJKmc6PKpqthZ8
      
      ---------
      
      Co-authored-by: rainbow-promise <154476501+rainbow-promise@users.noreply.github.com>
      Co-authored-by: Guillaume Thiolliere <gui.thiolliere@gmail.com>
      Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
      ead8fbdf
  3. Feb 15, 2025
  4. Feb 14, 2025
    • [AHM] Multi-block staking election pallet (#7282) · a025562b
      Kian Paimani authored
      ## Multi Block Election Pallet
      
      This PR adds the first iteration of the multi-block staking pallet. 
      
      From this point onwards, the staking and its election provider pallets
      are being customized to work in AssetHub. While usage in solo-chains is
      still possible, it is no longer the main focus of this pallet. For safer
      usage, please fork and use an older version of this pallet.
      
      ---
      
      ## Replaces
      
      - [x] https://github.com/paritytech/polkadot-sdk/pull/6034 
      - [x] https://github.com/paritytech/polkadot-sdk/pull/5272
      
      ## Related PRs: 
      
      - [x] https://github.com/paritytech/polkadot-sdk/pull/7483
      - [ ] https://github.com/paritytech/polkadot-sdk/pull/7357
      - [ ] https://github.com/paritytech/polkadot-sdk/pull/7424
      - [ ] https://github.com/paritytech/polkadot-staking-miner/pull/955
      
      This branch can be periodically merged into
      https://github.com/paritytech/polkadot-sdk/pull/7358 ->
      https://github.com/paritytech/polkadot-sdk/pull/6996
      
      ## TODOs: 
      
      - [x] rebase to master 
      - Benchmarking for staking critical path
        - [x] snapshot
        - [x] election result
      - Benchmarking for EPMB critical path
        - [x] snapshot
        - [x] verification
        - [x] submission
        - [x] unsigned submission
        - [ ] election results fetching
      - [ ] Fix deletion weights. Either of:
        - [ ] Garbage collector + lazy removal of all paged storage items
        - [ ] Confirm that deletion has a small PoV footprint.
      - [ ] Move election prediction to be push based. @tdimitrov 
      - [ ] integrity checks for bounds 
      - [ ] Properly benchmark this as a part of CI -- for now I will remove
      them as they are too slow
      - [x] add try-state to all pallets
      - [x] Staking to allow genesis dev accounts to be created internally
      - [x] Decouple miner config so @niklasad1 can work on the miner
      72841b73
      - [x] duplicate snapshot page reported by @niklasad1
      
       
      - [ ] https://github.com/paritytech/polkadot-sdk/pull/6520 or equivalent
      -- during snapshot, `VoterList` must be locked
      - [ ] Move target snapshot to a separate block
      
      ---------
      
      Co-authored-by: Gonçalo Pestana <g6pestana@gmail.com>
      Co-authored-by: Ankan <10196091+Ank4n@users.noreply.github.com>
      Co-authored-by: command-bot <>
      Co-authored-by: Guillaume Thiolliere <gui.thiolliere@gmail.com>
      Co-authored-by: Giuseppe Re <giuseppe.re@parity.io>
      Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
      a025562b
    • `txpool api`: `remove_invalid` call improved (#6661) · c94df1bc
      Michal Kucharczyk authored
      #### Description 
      Currently, a transaction reported as invalid by a block builder (or via
      `remove_invalid` by other components) is silently skipped.
      
      This PR improves this behavior. The transaction pool `report_invalid`
      function now accepts an optional error associated with every reported
      transaction, and also an optional block hash, which provides hints on how
      the reported transaction shall be handled. The following API change is
      proposed:
      
      https://github.com/paritytech/polkadot-sdk/blob/8be5ef3e/substrate/client/transaction-pool/api/src/lib.rs#L297-L318
      Depending on the error, the transaction pool can decide whether the
      transaction shall be removed from the view only or from the entire pool.
      An invalid event will be dispatched if required.
      
      
      #### Notes for reviewers
      
      - The actual logic of removing invalid txs is implemented in
      [`ViewStore::report_invalid`](https://github.com/paritytech/polkadot-sdk/blob/0fad26c4...
      c94df1bc
    • pallet-revive: Fix the contract size related benchmarks (#7568) · 60146ba5
      Alexander Theißen authored
      
      Partly addresses https://github.com/paritytech/polkadot-sdk/issues/6157
      
      The benchmarks measuring the impact of contract sizes on calling or
      instantiating a contract were bogus: they need to be written in assembly
      in order to tightly control the basic block size.
      
      This fixes the benchmarks for:
      - call_with_code_per_byte
      - upload_code
      - instantiate_with_code
      
      And adds a new benchmark that accounts for the fact that the interpreter
      will always compile whole basic blocks:
      - basic_block_compilation
      
      After this PR, only the weight we assign to instructions needs to be
      addressed.
      
      ---------
      
      Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
      Co-authored-by: PG Herveou <pgherveou@gmail.com>
      60146ba5
    • refactor: Move `T:Config` into where clause in `#[benchmarks]` macro if needed (#7418) · 4b2ca118
      Tomás Senovilla Polo authored
      # Description
      
      Currently, the `#[benchmarks]` macro always adds `<T: Config>` to the
      expanded code even if a where clause is used. A where clause which also
      includes a trait bound for the generic `T` triggers [this clippy
      warning](https://rust-lang.github.io/rust-clippy/master/index.html#multiple_bound_locations)
      from Rust 1.78 onwards. We hit that
      [here](https://github.com/freeverseio/laos/blob/main/pallets/precompiles-benchmark/src/precompiles/vesting/benchmarking.rs#L126-L132)
      in LAOS, as we need to include `T: pallet_vesting::Config` in the where
      clause; here's the outcome:
      
      ```rust
      error: bound is defined in more than one place
         --> pallets/precompiles-benchmark/src/precompiles/vesting/benchmarking.rs:130:1
          |
      130 | / #[benchmarks(
      131 | |     where
      132 | |         T: Config + pallet_vesting::Config,
          | |         ^
      133 | |         T::AccountIdToH160: ConvertBack<T::AccountId, H160>,
      134 | |   ...
      4b2ca118
    • pallet-revive: Add env var to allow skipping of validation for testing (#7562) · b44dc3a5
      Alexander Theißen authored
      
      When trying to reproduce bugs we sometimes need to deploy code that
      wouldn't pass validation. This PR adds a new environment variable
      `REVIVE_SKIP_VALIDATION` that when set will skip all validation except
      the contract blob size limit.
      
      Please note that this only applies when the pallet is compiled for
      `std` and hence will never affect the on-chain runtime.
      
      ---------
      
      Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
      b44dc3a5
    • [mq pallet] Custom next queue selectors (#6059) · 7aac8861
      Oliver Tale-Yazdi authored
      
      Changes:
      - Expose a `force_set_head` function from the `MessageQueue` pallet via
      a new trait: `ForceSetHead`. This can be used to force the MQ pallet to
      process this queue next.
      - The change only exposes an internal function through a trait, no audit
      is required.
      
      ## Context
      
      For the Asset Hub Migration (AHM) we need a mechanism to prioritize the
      inbound upward messages and the inbound downward messages on the AH. To
      achieve this, a minimal (and non-breaking) change is made to the MQ
      pallet in the form of adding the `force_set_head` function.
      
      An example of how to achieve prioritization is demonstrated in
      `integration_test.rs::AhmPrioritizer`. Normally, all queues are
      scheduled round-robin like this:
      
      `| Relay | Para(1) | Para(2) | ... | Relay | ... `
      
      The prioritizer listens to changes to its queue and triggers if either:
      - The queue was processed in the last block (to keep the general
      round-robin scheduling)
      - The queue has not been processed for `n` blocks (to prevent starvation
      if there are too many other queues)
      
      In either situation, it schedules the queue for a streak of three
      consecutive blocks, such that it would become:
      
      `| Relay | Relay | Relay | Para(1) | Para(2) | ... | Relay | Relay |
      Relay | ... `
      
      It basically transforms the round-robin into an elongated round robin.
      Although different strategies can be injected into the pallet at
      runtime, this one seems to strike a good balance between general service
      level and prioritization.
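      A sketch of that trigger logic (hypothetical names; the real example lives in `integration_test.rs::AhmPrioritizer`):

      ```rust
      // Conceptual sketch: favour our queue for a 3-block streak whenever it was
      // serviced last block (keeps the round-robin cadence) or has been starved
      // for `n` blocks; the real code would then call `force_set_head` on it.
      struct AhmPrioritizer {
          starvation_limit: u32,     // `n` blocks without service
          blocks_since_serviced: u32,
          streak_left: u32,          // remaining blocks of the current streak
      }

      impl AhmPrioritizer {
          /// Returns true when our queue should be forced to the head this block.
          fn on_block(&mut self, serviced_last_block: bool) -> bool {
              if serviced_last_block {
                  self.blocks_since_serviced = 0;
              } else {
                  self.blocks_since_serviced += 1;
              }
              if self.streak_left == 0
                  && (serviced_last_block || self.blocks_since_serviced >= self.starvation_limit)
              {
                  self.streak_left = 3;
              }
              if self.streak_left > 0 {
                  self.streak_left -= 1;
                  return true; // here the real code would call `force_set_head(our_queue)`
              }
              false
          }
      }
      ```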
      
      ---------
      
      Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
      Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
      Co-authored-by: muharem <ismailov.m.h@gmail.com>
      7aac8861
  5. Feb 13, 2025
    • [pallet-revive] fix subxt version (#7570) · d1140047
      PG Herveou authored
      
      The Cargo.lock change to subxt was rolled back.
      This fixes it and updates Cargo.toml so it does not happen again.
      
      ---------
      
      Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
      d1140047
    • Shorter availability data retention period for testnets (#7353) · 1866c3b4
      s0me0ne-unkn0wn authored
      Closes #3270
      
      ---------
      
      Co-authored-by: command-bot <>
      1866c3b4
    • sc-informant: Print full hash when debug logging is enabled (#7554) · 9d14b3b5
      Bastian Köcher authored
      
      When debugging stuff, it is useful to see the full hashes and not only
      the "short form". This makes it easier to read logs and follow blocks.
      
      ---------
      
      Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
      9d14b3b5
    • `fatxpool`: transaction statuses metrics added (#7505) · e5df3306
      Michal Kucharczyk authored
      #### Overview
      
      This PR introduces a new mechanism to capture and report metrics related
      to timings of transaction lifecycle events, which are currently not
      available. By exposing these timings, we aim to augment transaction-pool
      reliability dashboards and extend existing Grafana boards.
      
      A new `unknown_from_block_import_txs` metric is also introduced. It
      provides the number of transactions in an imported block which are not
      known to the node's transaction pool, making it possible to monitor the
      alignment of transaction pools across the nodes in the network.
      
      #### Notes for reviewers
      - **[Per-event
      Metrics](https://github.com/paritytech/polkadot-sdk/blob/8a53992e/substrate/client/transaction-pool/src/fork_aware_txpool/metrics.rs#L84-L105)
      Collection**: implemented by
      [`EventsMetricsCollector`](https://github.com/paritytech/polkadot-sdk/blob/8a53992e/substrate/client/transaction-pool/src/fork_aware_txpool/metrics.rs#L353-L358),
      which captures both submission timestamps and transaction status
      updates. An asynchronous
      [`EventsMetricsCollectorTask`](https://github.com/paritytech/polkadot-sdk/blob/8a53992e/substrate/client/transaction-pool/src/fork_aware_txpool/metrics.rs#L503-L526)
      processes the metrics-related messages sent by the
      `EventsMetricsCollector` and reports the timings of transaction status
      updates to Prometheus. This task implements event
      [de-duplication](https://github.com/paritytech/polkadot-sdk/blob/8a53992e/substrate/client/transaction-pool/src/fork_aware_txpool/metrics.rs#L458)
      using a `HashMap` of
      [`TransactionEventMetricsData`](https://github.com/paritytech/polkadot-sdk/blob/8a53992e/substrate/client/transaction-pool/src/fork_aware_txpool/metrics.rs#L424-L435)
      entries, which also holds the transaction submission timestamps used to
      [compute
      timings](https://github.com/paritytech/polkadot-sdk/blob/8a53992e/substrate/client/transaction-pool/src/fork_aware_txpool/metrics.rs#L489-L495).
      Transaction-related items are removed when the transaction's final status is
      [reported](https://github.com/paritytech/polkadot-sdk/blob/8a53992e/substrate/client/transaction-pool/src/fork_aware_txpool/metrics.rs#L496).
      - The transaction submission timestamp reuses the timestamp of
      `TimedTransactionSource` kept in the mempool. It is reported to
      `EventsMetricsCollector` in the
      [`submit_at`](https://github.com/paritytech/polkadot-sdk/blob/8a53992e/substrate/client/transaction-pool/src/fork_aware_txpool/fork_aware_txpool.rs#L735)
      and
      [`submit_and_watch`](https://github.com/paritytech/polkadot-sdk/blob/8a53992e/substrate/client/transaction-pool/src/fork_aware_txpool/fork_aware_txpool.rs#L836)
      methods of `ForkAwareTxPool`.
      - Transaction updates are reported to `EventsMetricsCollector` from the
      `MultiViewListener`
      [task](https://github.com/paritytech/polkadot-sdk/blob/8a53992e/substrate/client/transaction-pool/src/fork_aware_txpool/multi_view_listener.rs#L494).
      This makes it possible to gather metrics for _watched_ and _non-watched_
      transactions (which enables metrics on non-RPC-enabled collators).
      - The new metric
      ([`unknown_from_block_import_txs`](https://github.com/paritytech/polkadot-sdk/blob/8a53992e/substrate/client/transaction-pool/src/fork_aware_txpool/metrics.rs#L59-L60))
      for checking the alignment of pools across the network is
      [reported](https://github.com/paritytech/polkadot-sdk/blob/8a53992e/substrate/client/transaction-pool/src/fork_aware_txpool/fork_aware_txpool.rs#L1288-L1292)
      using a new `TxMemPool`
      [method](https://github.com/paritytech/polkadot-sdk/blob/8a53992e/substrate/client/transaction-pool/src/fork_aware_txpool/tx_mem_pool.rs#L605-L611).
      
      fixes: #7355, #7448
      
      ---------
      
      Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
      Co-authored-by: Sebastian Kunert <skunert49@gmail.com>
      Co-authored-by: Iulian Barbu <14218860+iulianbarbu@users.noreply.github.com>
      e5df3306
    • Update Scheduler to have a configurable block provider #7434 (#7441) · 645a6f40
      seemantaggarwal authored
      
      Follow up from
      https://github.com/paritytech/polkadot-sdk/pull/6362#issuecomment-2629744365
      
      The goal of this PR is to have the scheduler pallet work on a parachain
      which does not produce blocks on a regular schedule, and thus can use the
      relay chain as a block provider.

      Because blocks are not produced regularly, we cannot assume that the
      block number advances by exactly one per block, and thus need new logic
      to handle multiple spend periods passing between blocks.
      
      Requirement:

      Instead of using the hard-coded system block number, we add an associated
      type `BlockNumberProvider`, sketched below.
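      In config terms this is the usual `BlockNumberProvider` pattern; a sketch (the exact bounds in the scheduler's real `Config` may differ):

      ```rust
      // Sketch: the pallet reads "now" from a configurable provider instead of
      // frame_system, so a parachain can plug in the relay-chain block number.
      use sp_runtime::traits::BlockNumberProvider;

      pub trait Config: frame_system::Config {
          /// Source of the block number the scheduler operates on.
          type BlockNumberProvider: BlockNumberProvider;
      }

      fn now<T: Config>() -> <T::BlockNumberProvider as BlockNumberProvider>::BlockNumber {
          T::BlockNumberProvider::current_block_number()
      }
      ```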
      
      ---------
      
      Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
      Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
      645a6f40
  6. Feb 12, 2025
  7. Feb 10, 2025
  8. Feb 09, 2025
    • feat(wasm-builder): add support for new `wasm32v1-none` target (#7008) · 2970ab15
      StackOverflowExcept1on authored
      
      # Description
      
      Resolves #5777
      
      Previously, `wasm-builder` used hacks such as `-Zbuild-std` (which
      required the `rust-src` component) and `RUSTC_BOOTSTRAP=1` to build the
      WASM runtime without the WASM features `sign-ext`, `multivalue` and
      `reference-types`. Since Rust 1.84 (stable on 9 January 2025) the
      situation has improved: there is a new
      [`wasm32v1-none`](https://doc.rust-lang.org/beta/rustc/platform-support/wasm32v1-none.html)
      target that disables all "post-MVP" WASM features except
      `mutable-globals`.
      
      Previously, your `rust-toolchain.toml` looked like this:
      
      ```toml
      [toolchain]
      channel = "stable"
      components = ["rust-src"]
      targets = ["wasm32-unknown-unknown"]
      profile = "default"
      ```
      
      It should now be updated to something like this:
      
      ```toml
      [toolchain]
      channel = "stable"
      targets = ["wasm32v1-none"]
      profile = "default"
      ```
      
      To build the runtime:
      
      ```bash
      cargo build --package minimal-template-runtime --release
      ```
      
      ## Integration
      
      If you are using Rust 1.84 and above, then install the `wasm32v1-none`
      target instead of `wasm32-unknown-unknown` as shown above. You can also
      remove the unnecessary `rust-src` component.
      
      Also note the slight differences in conditional compilation:
      - `wasm32-unknown-unknown`: `#[cfg(all(target_family = "wasm", target_os
      = "unknown"))]`
      - `wasm32v1-none`: `#[cfg(all(target_family = "wasm", target_os =
      "none"))]`
      
      Avoid using `target_os = "unknown"` in `#[cfg(...)]` or
      `#[cfg_attr(...)]` and instead use `target_family = "wasm"` or
      `target_arch = "wasm32"` in the runtime code.
      
      ## Review Notes
      
      Wasm builder requires the following prerequisites for building the WASM
      binary:
      - Rust >= 1.68 and Rust < 1.84:
        - `wasm32-unknown-unknown` target
        - `rust-src` component
      - Rust >= 1.84:
        - `wasm32v1-none` target
      - no more `-Zbuild-std` and `RUSTC_BOOTSTRAP=1` hacks and `rust-src`
      component requirements!
      
      ---------
      
      Co-authored-by: Bastian Köcher <git@kchr.de>
      Co-authored-by: Bastian Köcher <info@kchr.de>
      2970ab15
  9. Feb 08, 2025
  10. Feb 07, 2025