  2. Feb 14, 2025
    • [AHM] Multi-block staking election pallet (#7282) · a025562b
      Kian Paimani authored
      ## Multi Block Election Pallet
      
      This PR adds the first iteration of the multi-block staking pallet. 
      
      From this point onwards, the staking and its election provider pallets
      are being customized to work in AssetHub. While usage in solo-chains is
      still possible, it is no longer the main focus of this pallet. For safer
      usage, please fork and use an older version of this pallet.
      
      ---
      
      ## Replaces
      
      - [x] https://github.com/paritytech/polkadot-sdk/pull/6034 
      - [x] https://github.com/paritytech/polkadot-sdk/pull/5272
      
      ## Related PRs: 
      
      - [x] https://github.com/paritytech/polkadot-sdk/pull/7483
      - [ ] https://github.com/paritytech/polkadot-sdk/pull/7357
      - [ ] https://github.com/paritytech/polkadot-sdk/pull/7424
      - [ ] https://github.com/paritytech/polkadot-staking-miner/pull/955
      
      This branch can be periodically merged into
      https://github.com/paritytech/polkadot-sdk/pull/7358 ->
      https://github.com/paritytech/polkadot-sdk/pull/6996
      
      ## TODOs: 
      
      - [x] rebase to master 
      - Benchmarking for staking critical path
        - [x] snapshot
        - [x] election result
      - Benchmarking for EPMB critical path
        - [x] snapshot
        - [x] verification
        - [x] submission
        - [x] unsigned submission
        - [ ] election results fetching
      - [ ] Fix deletion weights. Either of:
        - [ ] Garbage collector + lazy removal of all paged storage items
        - [ ] Confirm that deletion has a small PoV footprint.
      - [ ] Move election prediction to be push based. @tdimitrov 
      - [ ] integrity checks for bounds 
      - [ ] Properly benchmark this as a part of CI -- for now I will remove
      them as they are too slow
      - [x] add try-state to all pallets
      - [x] Staking to allow genesis dev accounts to be created internally
      - [x] Decouple miner config so @niklasad1 can work on the miner (72841b73)
      - [x] duplicate snapshot page reported by @niklasad1
      
       
      - [ ] https://github.com/paritytech/polkadot-sdk/pull/6520 or equivalent
      -- during snapshot, `VoterList` must be locked
      - [ ] Move target snapshot to a separate block
      
      ---------
      
      Co-authored-by: Gonçalo Pestana <g6pestana@gmail.com>
      Co-authored-by: Ankan <10196091+Ank4n@users.noreply.github.com>
      Co-authored-by: command-bot <>
      Co-authored-by: Guillaume Thiolliere <gui.thiolliere@gmail.com>
      Co-authored-by: Giuseppe Re <giuseppe.re@parity.io>
      Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
    • `txpool api`: `remove_invalid` call improved (#6661) · c94df1bc
      Michal Kucharczyk authored
      #### Description 
      Currently, a transaction which is reported as invalid by a block
      builder (or removed via `remove_invalid` by other components) is silently skipped.
      
      This PR improves this behavior. The transaction pool `report_invalid`
      function now accepts an optional error associated with every reported
      transaction, and also an optional block hash which provides hints on how
      the reported transaction shall be handled. The following API change is
      proposed:
      
      https://github.com/paritytech/polkadot-sdk/blob/8be5ef3e/substrate/client/transaction-pool/api/src/lib.rs#L297-L318
      Depending on the error, the transaction pool can decide whether the
      transaction shall be removed from the view only or entirely from the
      pool. An `Invalid` event will be dispatched if required.
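
      For orientation, a minimal sketch of the shape of such an API (the actual signature is in the linked `lib.rs`; the trait, hint enum and parameter types below are illustrative assumptions, not the real definitions):

      ```rust
      // Illustrative sketch only; the real API change lives in
      // substrate/client/transaction-pool/api/src/lib.rs (linked above).
      // All names and types here are assumptions used to convey the shape.

      /// Hypothetical hint describing how a reported transaction should be handled.
      pub enum InvalidityHint {
          /// Invalid only in the context of a single view (e.g. one block).
          ViewOnly,
          /// Invalid everywhere; remove it from the entire pool.
          PoolWide,
      }

      pub trait ReportInvalid {
          type BlockHash;
          type TxHash;

          /// Report transactions as invalid, each with an optional hint, together
          /// with an optional block hash at which the invalidity was observed.
          /// Depending on the hint, the pool removes the transaction from a single
          /// view or from the whole pool and dispatches an `Invalid` event if needed.
          fn report_invalid(
              &self,
              at: Option<Self::BlockHash>,
              invalid: Vec<(Self::TxHash, Option<InvalidityHint>)>,
          );
      }
      ```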
      
      
      #### Notes for reviewers
      
      - The actual logic of removing invalid txs is implemented in
      [`ViewStore::report_invalid`](https://github.com/paritytech/polkadot-sdk/blob/0fad26c4...
    • pallet-revive: Fix the contract size related benchmarks (#7568) · 60146ba5
      Alexander Theißen authored
      
      Partly addresses https://github.com/paritytech/polkadot-sdk/issues/6157
      
      The benchmarks measuring the impact of contract sizes on calling or
      instantiating a contract were bogus: they need to be written in
      assembly in order to tightly control the basic block size.
      
      This fixes the benchmarks for:
      - call_with_code_per_byte
      - upload_code
      - instantiate_with_code
      
      And adds a new benchmark that accounts for the fact that the interpreter
      will always compile whole basic blocks:
      - basic_block_compilation
      
      After this PR, only the weight we assign to instructions needs to be
      addressed.
      
      ---------
      
      Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
      Co-authored-by: PG Herveou <pgherveou@gmail.com>
    • refactor: Move `T:Config` into where clause in `#[benchmarks]` macro if needed (#7418) · 4b2ca118
      Tomás Senovilla Polo authored
      # Description
      
      Currently, the `#[benchmarks]` macro always adds `<T: Config>` to the
      expanded code even if a where clause is used. Using a where clause which
      also includes a trait bound for the generic `T` triggers [this
      clippy
      warning](https://rust-lang.github.io/rust-clippy/master/index.html#multiple_bound_locations)
      from Rust 1.78 onwards. We ran into this
      [here](https://github.com/freeverseio/laos/blob/main/pallets/precompiles-benchmark/src/precompiles/vesting/benchmarking.rs#L126-L132)
      in LAOS, as we need to include `T: pallet_vesting::Config` in the where
      clause; here's the outcome:
      
      ```rust
      error: bound is defined in more than one place
         --> pallets/precompiles-benchmark/src/precompiles/vesting/benchmarking.rs:130:1
          |
      130 | / #[benchmarks(
      131 | |     where
      132 | |         T: Config + pallet_vesting::Config,
          | |         ^
      133 | |         T::AccountIdToH160: ConvertBack<T::AccountId, H160>,
      134 | |         BalanceOf<T>: Into<U256>,
      135 | |         BlockNumberFor<T>: Into<U256>
      136 | | )]
          | |__^
          |
          = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#multiple_bound_locations
          = note: `-D clippy::multiple-bound-locations` implied by `-D warnings`
          = help: to override `-D warnings` add `#[allow(clippy::multiple_bound_locations)]`
          = note: this error originates in the attribute macro `benchmarks` (in Nightly builds, run with -Z macro-backtrace for more info)
      ```
      
      While this is a harmless warning, thrown only because a trait bound for `T`
      is defined twice in expanded code that nobody will ever see, and
      while annotating the benchmarks module with
      `#[allow(clippy::multiple_bound_locations)]` is enough to get rid of it,
      it might cause unnecessary concern.
      
      Hence, I think it's worth slightly modifying the macro to avoid this.
      
      ## Review Notes
      
      What I propose is to include `<T: Config>` (or its instance version) in
      the expanded code only if no where clause was specified, and include
      that trait bound in the where clause if one is present.
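
      For illustration, the two expansion shapes look roughly like this (dummy traits stand in for the pallet and `pallet_vesting` configs; this is a simplified sketch, not the macro's literal output):

      ```rust
      // Simplified sketch of the expanded code, not the macro's literal output.
      trait Config {}
      trait VestingConfig {}
      struct Pallet<T>(core::marker::PhantomData<T>);

      // Old expansion: `T: Config` appears both inline and in the where clause,
      // which is what triggers `clippy::multiple_bound_locations`.
      #[allow(clippy::multiple_bound_locations)]
      impl<T: Config> Pallet<T>
      where
          T: Config + VestingConfig,
      {
          fn benchmarks_old() {}
      }

      // New expansion: when a where clause is present, the `T: Config` bound is
      // folded into it and the inline bound is dropped.
      impl<T> Pallet<T>
      where
          T: Config + VestingConfig,
      {
          fn benchmarks_new() {}
      }
      ```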
      
      I considered always creating a where clause which includes `<T: Config>`
      even if the macro doesn't specify a where clause and totally getting rid
      of `<T: Config>`, but discarded the idea for simplicity.
      
      I also considered checking whether `T: Config` is already present in the
      provided where clause (as is the case in the LAOS example above) before
      adding it, but discarded that idea as well because it implies a bit more
      computation, and having `T: Config` defined twice in the where clause is
      harmless: the compiler ignores the second occurrence and nobody will
      see it.
      
      If you think this change is worth it and one of the discarded ideas
      would be a better approach, I'm happy to push that code instead.
    • pallet-revive: Add env var to allow skipping of validation for testing (#7562) · b44dc3a5
      Alexander Theißen authored
      
      When trying to reproduce bugs we sometimes need to deploy code that
      wouldn't pass validation. This PR adds a new environment variable
      `REVIVE_SKIP_VALIDATION` that when set will skip all validation except
      the contract blob size limit.
      
      Please note that this only applies when the pallet is compiled for
      `std` and hence will never affect on-chain execution.
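
      For instance, a local reproduction test might opt in like this (the environment variable name comes from this PR; the test scaffolding around it is a hypothetical sketch):

      ```rust
      #[test]
      fn reproduce_bug_with_unvalidated_blob() {
          // Honoured only when the pallet is built for `std`; on-chain validation
          // is unaffected. The contract blob size limit is still enforced.
          std::env::set_var("REVIVE_SKIP_VALIDATION", "1");

          // ... set up externalities and upload the (intentionally non-conforming)
          // code blob using whatever test helpers are available locally ...
      }
      ```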
      
      ---------
      
      Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
    • [mq pallet] Custom next queue selectors (#6059) · 7aac8861
      Oliver Tale-Yazdi authored
      Changes:
      - Expose a `force_set_head` function from the `MessageQueue` pallet via
      a new trait: `ForceSetHead`. This can be used to force the MQ pallet to
      process this queue next.
      - The change only exposes an internal function through a trait, no audit
      is required.
      
      ## Context
      
      For the Asset Hub Migration (AHM) we need a mechanism to prioritize the
      inbound upward messages and the inbound downward messages on the AH. To
      achieve this, a minimal (and non-breaking) change is made to the MQ
      pallet in the form of adding the `force_set_head` function.
      
      An example of how to achieve prioritization is then demonstrated in
      `integration_test.rs::AhmPrioritizer`. Normally, all queues are
      scheduled round-robin like this:
      
      `| Relay | Para(1) | Para(2) | ... | Relay | ... `
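
      A rough sketch of how a consumer could bump a queue to the front via the new trait (the trait and function name are from this PR; the exact signature and the origin type here are assumptions):

      ```rust
      /// Illustrative shape of the new trait; the real definition lives with the
      /// message-queue pallet and may differ in signature.
      pub trait ForceSetHead<Origin> {
          /// Make `origin` the queue that the pallet services next.
          /// Returns `Ok(true)` if the head was changed.
          fn force_set_head(origin: &Origin) -> Result<bool, ()>;
      }

      /// Hypothetical AHM prioritizer hook: before servicing starts, push the
      /// queue carrying the migration messages (e.g. the relay chain's UMP/DMP
      /// origin) to the head so it is processed before the parachain queues.
      pub fn prioritize_migration_queue<Origin, Q: ForceSetHead<Origin>>(origin: &Origin) {
          // If the queue is empty or already at the head this is a no-op.
          let _ = Q::force_set_head(origin);
      }
      ```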
      
      The prioritizer listens to changes to its queue and triggers if either:
      - the queue was processed in the last block (to keep the general round-robin
      scheduling), or
      - the queue has not been processed for `n` blocks...
  3. Feb 13, 2025
    • [pallet-revive] fix subxt version (#7570) · d1140047
      PG Herveou authored
      
      The Cargo.lock changes to subxt were rolled back.
      This fixes it and updates it in Cargo.toml so it does not happen again.
      
      ---------
      
      Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
    • Shorter availability data retention period for testnets (#7353) · 1866c3b4
      s0me0ne-unkn0wn authored
      Closes #3270
      
      ---------
      
      Co-authored-by: command-bot <>
    • sc-informant: Print full hash when debug logging is enabled (#7554) · 9d14b3b5
      Bastian Köcher authored
      
      When debugging stuff, it is useful to see the full hashes and not only
      the "short form". This makes it easier to read logs and follow blocks.
      
      ---------
      
      Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
    • `fatxpool`: transaction statuses metrics added (#7505) · e5df3306
      Michal Kucharczyk authored
      #### Overview
      
      This PR introduces a new mechanism to capture and report metrics related
      to timings of transaction lifecycle events, which are currently not
      available. By exposing these timings, we aim to augment transaction-pool
      reliability dashboards and extend existing Grafana boards.
      
      A new `unknown_from_block_import_txs` metric is also introduced. It
      provides the number of transactions in an imported block which are not
      known to the node's transaction pool, allowing the alignment of
      transaction pools across the nodes in the network to be monitored.
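
      Conceptually, each timing metric is just the elapsed time between the recorded submission timestamp and the first occurrence of a given status; a minimal sketch of that bookkeeping (the names and types below are illustrative, not the pool's actual ones):

      ```rust
      use std::collections::HashMap;
      use std::time::{Duration, Instant};

      /// Illustrative stand-in for the pool's transaction hash type.
      type TxHash = [u8; 32];

      /// Hypothetical per-transaction record: submission time plus the statuses
      /// already reported, used to de-duplicate repeated events.
      struct TxTimingEntry {
          submitted_at: Instant,
          seen: Vec<&'static str>,
      }

      #[derive(Default)]
      struct EventsTimings(HashMap<TxHash, TxTimingEntry>);

      impl EventsTimings {
          fn on_submitted(&mut self, tx: TxHash) {
              self.0
                  .entry(tx)
                  .or_insert_with(|| TxTimingEntry { submitted_at: Instant::now(), seen: Vec::new() });
          }

          /// Returns the submission-to-status duration the first time a given
          /// status is observed for `tx`; repeated statuses are ignored. In the
          /// real collector the entry is dropped once a final status is reported.
          fn on_status(&mut self, tx: TxHash, status: &'static str) -> Option<Duration> {
              let entry = self.0.get_mut(&tx)?;
              if entry.seen.contains(&status) {
                  return None;
              }
              entry.seen.push(status);
              Some(entry.submitted_at.elapsed())
          }
      }
      ```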
      
      #### Notes for reviewers
      - **[Per-event Metrics](https://github.com/paritytech/polkadot-sdk/blob/8a53992e/substrate/client/transaction-pool/src/fork_aware_txpool/metrics.rs#L84-L105) Collection**: implemented by
      [`EventsMetricsCollector`](https://github.com/paritytech/polkadot-sdk/blob/8a53992e/substrate/client/transaction-pool/src/fork_aware_txpool/metrics.rs#L353-L358),
      which allows capturing both submission timestamps and transaction
      status updates. An asynchronous
      [`EventsMetricsCollectorTask`](https://github.com/paritytech/polkadot-sdk/blob/8a53992e/substrate/client/transaction-pool/src/fork_aware_txpool/metrics.rs#L503-L526)
      processes the metrics-related messages sent by the
      `EventsMetricsCollector` and reports the timings of transaction status
      updates to Prometheus. This task implements event
      [de-duplication](https://github.com/paritytech/polkadot-sdk/blob/8a53992e/substrate/client/transaction-pool/src/fork_aware_txpool/metrics.rs#L458)
      using a `HashMap` of
      [`TransactionEventMetricsData`](https://github.com/paritytech/polkadot-sdk/blob/8a53992e/substrate/client/transaction-pool/src/fork_aware_txpool/metrics.rs#L424-L435)
      entries, which also hold the transaction submission timestamps used to
      [compute timings](https://github.com/paritytech/polkadot-sdk/blob/8a53992e/substrate/client/transaction-pool/src/fork_aware_txpool/metrics.rs#L489-L495).
      Transaction-related items are removed when a transaction's final status is
      [reported](https://github.com/paritytech/polkadot-sdk/blob/8a53992e/substrate/client/transaction-pool/src/fork_aware_txpool/metrics.rs#L496).
      - The transaction submission timestamp reuses the timestamp of
      `TimedTransactionSource` kept in the mempool. It is reported to
      `EventsMetricsCollector` in the
      [`submit_at`](https://github.com/paritytech/polkadot-sdk/blob/8a53992e/substrate/client/transaction-pool/src/fork_aware_txpool/fork_aware_txpool.rs#L735)
      and
      [`submit_and_watch`](https://github.com/paritytech/polkadot-sdk/blob/8a53992e/substrate/client/transaction-pool/src/fork_aware_txpool/fork_aware_txpool.rs#L836)
      methods of `ForkAwareTxPool`.
      - Transaction updates are reported to `EventsMetricsCollector` from the
      `MultiViewListener`
      [task](https://github.com/paritytech/polkadot-sdk/blob/8a53992e/substrate/client/transaction-pool/src/fork_aware_txpool/multi_view_listener.rs#L494).
      This allows gathering metrics for _watched_ and _non-watched_
      transactions (which enables metrics on non-RPC-enabled collators).
      - The new metric
      ([`unknown_from_block_import_txs`](https://github.com/paritytech/polkadot-sdk/blob/8a53992e/substrate/client/transaction-pool/src/fork_aware_txpool/metrics.rs#L59-L60)),
      which allows checking the alignment of pools across the network, is
      [reported](https://github.com/paritytech/polkadot-sdk/blob/8a53992e/substrate/client/transaction-pool/src/fork_aware_txpool/fork_aware_txpool.rs#L1288-L1292)
      using a new `TxMemPool`
      [method](https://github.com/paritytech/polkadot-sdk/blob/8a53992e/substrate/client/transaction-pool/src/fork_aware_txpool/tx_mem_pool.rs#L605-L611).
      
      fixes: #7355, #7448
      
      ---------
      
      Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
      Co-authored-by: Sebastian Kunert <skunert49@gmail.com>
      Co-authored-by: Iulian Barbu <14218860+iulianbarbu@users.noreply.github.com>
    • Update Scheduler to have a configurable block provider #7434 (#7441) · 645a6f40
      seemantaggarwal authored
      
      Follow up from
      https://github.com/paritytech/polkadot-sdk/pull/6362#issuecomment-2629744365
      
      The goal of this PR is to have the scheduler pallet work on a parachain
      which does not produce blocks on a regular schedule and can thus use the
      relay chain as a block provider.
      
      Because blocks are not produced regularly, we cannot assume that the
      block number increases monotonically, and thus new logic is added to
      handle multiple spend periods passing between blocks.
      
      Requirement: instead of using the hard-coded system block number, we add
      an associated type `BlockNumberProvider`.
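
      A minimal sketch of the idea (the `BlockNumberProvider` trait restated here mirrors the one in `frame_support::traits`; the relay-chain-backed provider and the scheduler usage are illustrative assumptions):

      ```rust
      /// Simplified restatement of the `BlockNumberProvider` idea from
      /// `frame_support::traits`; the real trait has the same basic shape.
      pub trait BlockNumberProvider {
          type BlockNumber: Copy + Ord;
          fn current_block_number() -> Self::BlockNumber;
      }

      /// Hypothetical provider backed by the relay chain block number, which a
      /// parachain with irregular block production would plug in (e.g. via
      /// `cumulus_pallet_parachain_system::RelaychainDataProvider`).
      pub struct RelayChainBlockNumber;
      impl BlockNumberProvider for RelayChainBlockNumber {
          type BlockNumber = u32;
          fn current_block_number() -> u32 {
              // A real implementation reads the relay parent number from the
              // parachain-system pallet; a constant stands in for it here.
              42
          }
      }

      /// Scheduler-style check: agendas are keyed by the provider's block number
      /// rather than `frame_system::Pallet::<T>::block_number()`.
      pub fn is_due<P: BlockNumberProvider>(scheduled_at: P::BlockNumber) -> bool {
          P::current_block_number() >= scheduled_at
      }
      ```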
      
      ---------
      
      Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
      Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
  6. Feb 09, 2025
    • feat(wasm-builder): add support for new `wasm32v1-none` target (#7008) · 2970ab15
      StackOverflowExcept1on authored
      
      # Description
      
      Resolves #5777
      
      Previously, `wasm-builder` used hacks such as `-Zbuild-std` (which required
      the `rust-src` component) and `RUSTC_BOOTSTRAP=1` to build the WASM runtime
      without the WASM features `sign-ext`, `multivalue` and `reference-types`.
      Since Rust 1.84 (stable since 9 January 2025) the situation
      has improved, as there is a new
      [`wasm32v1-none`](https://doc.rust-lang.org/beta/rustc/platform-support/wasm32v1-none.html)
      target that disables all "post-MVP" WASM features except
      `mutable-globals`.
      
      Previously, your `rust-toolchain.toml` looked like this:
      
      ```toml
      [toolchain]
      channel = "stable"
      components = ["rust-src"]
      targets = ["wasm32-unknown-unknown"]
      profile = "default"
      ```
      
      It should now be updated to something like this:
      
      ```toml
      [toolchain]
      channel = "stable"
      targets = ["wasm32v1-none"]
      profile = "default"
      ```
      
      To build the runtime:
      
      ```bash
      cargo build --package minimal-template-runtime --release
      ```
      
      ## Integration
      
      If you are using Rust 1.84 and above, then install the `wasm32v1-none`
      target instead of `wasm32-unknown-unknown` as shown above. You can also
      remove the unnecessary `rust-src` component.
      
      Also note the slight differences in conditional compilation:
      - `wasm32-unknown-unknown`: `#[cfg(all(target_family = "wasm", target_os
      = "unknown"))]`
      - `wasm32v1-none`: `#[cfg(all(target_family = "wasm", target_os =
      "none"))]`
      
      Avoid using `target_os = "unknown"` in `#[cfg(...)]` or
      `#[cfg_attr(...)]` and instead use `target_family = "wasm"` or
      `target_arch = "wasm32"` in the runtime code.
      
      ## Review Notes
      
      Wasm builder requires the following prerequisites for building the WASM
      binary:
      - Rust >= 1.68 and Rust < 1.84:
        - `wasm32-unknown-unknown` target
        - `rust-src` component
      - Rust >= 1.84:
        - `wasm32v1-none` target
      - no more `-Zbuild-std` and `RUSTC_BOOTSTRAP=1` hacks and `rust-src`
      component requirements!
      
      ---------
      
      Co-authored-by: Bastian Köcher <git@kchr.de>
      Co-authored-by: Bastian Köcher <info@kchr.de>
  10. Feb 05, 2025
    • omni-node: add offchain worker (#7479) · 87f4f3f0
      Iulian Barbu authored
      
      # Description
      
      Copy-pasted the `parachain-template-node` offchain worker setup into
      omni-node-lib for both the aura and manual seal nodes.
      
      Closes #7447 
      
      ## Integration
      
      Enabled offchain workers for both `polkadot-omni-node` and
      `polkadot-parachain` nodes. This allows executing offchain logic in
      the runtime and handling it on the node side.
      
      ---------
      
      Signed-off-by: Iulian Barbu <iulian.barbu@parity.io>
      Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
    • omni-node: Adjust manual seal parameters (#7451) · 9c474d54
      Sebastian Kunert authored
      This PR will make omni-node dev-mode once again compatible with older
      runtimes.
      
      The changes introduced in
      https://github.com/paritytech/polkadot-sdk/pull/6825 changed constraints
      that are enforced in the runtime. For normal chains this should work
      fine, since we have real parameters there, like relay chain slots and
      parachain slots.
      
      For manual seal we need to respect these constraints while faking all
      the parameters. This PR should fix manual seal in omni-node to work
      with runtimes built before and after
      https://github.com/paritytech/polkadot-sdk/pull/6825 (I tested that).
      
      In the future, we should look into improving the parameterization here,
      possibly by introducing proper aura pre-digests so that the parachain
      slot moves forward. This will require quite a bit of refactoring on the
      manual seal node side however. Issue:
      https://github.com/paritytech/polkadot-sdk/issues/7453
      
      Also, the dev chain spec in parachain template is updated. This m...
  11. Feb 04, 2025
    • revive: Include immutable storage deposit into the contracts `storage_base_deposit` (#7230) · 4c28354b
      Alexander Theißen authored
      
      This PR is centered around a main fix regarding the base deposit and a
      bunch of drive-by or related fixes that make sense to resolve in one
      go. It could be broken down further, but I am constantly rebasing this PR
      and would appreciate getting those fixes in as one.
      
      **This adds a multi block migration to Westend AssetHub that wipes the
      pallet state clean. This is necessary because of the changes to the
      `ContractInfo` storage item. It will not delete the child storage
      though. This will leave a tiny bit of garbage behind but won't cause any
      problems. They will just be orphaned.**
      
      ## Record the deposit for immutable data into the `storage_base_deposit`
      
      The `storage_base_deposit` is the total deposit a contract has to pay for
      existing. It includes the deposit for its own metadata and a deposit
      proportional (< 1.0x) to the size of its code. However, the size of the
      immutable data was not recorded there. This led to the situation where,
      on terminate, this portion wouldn't be refunded and stayed locked into the
      contract. It would also make the calculation of the deposit changes on
      `set_code_hash` more complicated when it updates the immutable data (to
      be done in #6985), because it wouldn't be known how much was paid
      before, since the storage prices could have changed in the meantime.
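
      Conceptually, after this change the base deposit is the sum of three parts; a small sketch with made-up names and prices (the real calculation lives in `pallet-revive` and uses the configured deposit prices and `CodeHashLockupDepositPercent`):

      ```rust
      /// Conceptual sketch only; names, units and prices are illustrative.
      fn storage_base_deposit(
          metadata_deposit: u128,   // deposit for the contract's own metadata
          code_len: u128,           // size of the contract code in bytes
          immutable_data_len: u128, // size of the immutable data in bytes
          deposit_per_byte: u128,   // configured price per stored byte
          lockup_percent: u128,     // e.g. 30, the `CodeHashLockupDepositPercent`
      ) -> u128 {
          // Deposit proportional (< 1.0x) to the code size.
          let code_deposit = code_len * deposit_per_byte * lockup_percent / 100;
          // New in this PR: the immutable-data deposit is recorded in the base
          // deposit too, so it is refunded on `terminate` and easy to account for
          // when `set_code_hash` changes the immutable data.
          let immutable_deposit = immutable_data_len * deposit_per_byte;
          metadata_deposit + code_deposit + immutable_deposit
      }
      ```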
      
      In order for this solution to work, I needed to delay the deposit
      calculation for a new contract until after the contract has finished
      executing its constructor, as only then do we know the immutable data
      size. Before, we charged this eagerly in `charge_instantiate` before
      executing the constructor. Now, we merely send the ED as free balance
      before the constructor in order to create the account. After the
      constructor is done we calculate the contract base deposit and charge it.
      This will make `set_code_hash` much easier to implement.
      
      As a side effect it is now legal to call `set_immutable_data` multiple
      times per constructor (even though I see no reason to do so). It simply
      overrides the immutable data with the new value. The deposit accounting
      will be done after the constructor returns (as mentioned above) instead
      of when setting the immutable data.
      
      ## Don't pre-charge for reading immutable data
      
      I noticed that we were pre-charging weight for the max allowable
      immutable data when reading those values and then refunding after the
      read. This is not necessary, as we know its length without reading the
      storage because we store it out of band in the contract metadata. This
      makes reading it free. Less pre-charging, fewer problems.
      
      ## Remove delegate locking
      
      Fixes #7092
      
      This is also in the spirit of making #6985 easier to implement. The
      locking complicates `set_code_hash` as we might need to block setting
      the code hash when locks exist. Check #7092 for further rationale.
      
      ## Enforce "no terminate in constructor" eagerly
      
      We used to enforce this rule after the contract execution returned. Now
      we error out early in the host call. This makes it easier to argue that
      a contract info still exists (wasn't terminated) when a constructor
      successfully returns. All around, this is just much simpler than
      dealing with this check.
      
      ## Moved refcount functions to `CodeInfo`
      
      They never really made sense to exist on `Stack`, and now with the
      locking gone this makes even less sense. The refcount is stored inside
      `CodeInfo`, so let's just move them there.
      
      ## Set `CodeHashLockupDepositPercent` for test runtime
      
      The test runtime was setting `CodeHashLockupDepositPercent` to zero.
      This trivialized many code paths and excluded them from testing. I
      set it to `30%`, which is our default value, and fixed up all the tests
      that broke. This should give us confidence that the lockup deposit
      collection works properly.
      
      ## Reworked the `MockExecutable` to have both a `deploy` and a `call`
      entry point
      
      This type, used for testing, could only have one of these entry points
      but not both. In order to fix `immutable_data_set_overrides` I needed to
      add a new function `add_both` to `MockExecutable` that allows having both
      entry points. Make sure to make use of it in the future :)
      
      ---------
      
      Co-authored-by: command-bot <>
      Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
      Co-authored-by: PG Herveou <pgherveou@gmail.com>
      Co-authored-by: Bastian Köcher <git@kchr.de>
      Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
    • Add missing events to nomination pool extrinsics (#7377) · a8834759
      Alexandre R. Baldé authored
      
      Found via
      https://github.com/open-web3-stack/polkadot-ecosystem-tests/pull/165.
      
      Closes #7370 .
      
      # Description
      
      Some extrinsics from `pallet_nomination_pools` were not emitting events:
      * `set_configs`
      * `set_claim_permission`
      * `set_metadata`
      * `chill`
      * `nominate`
      
      ## Integration
      
      N/A
      
      ## Review Notes
      
      N/A
      
      ---------
      
      Co-authored-by: Ankan <10196091+Ank4n@users.noreply.github.com>
  12. Feb 03, 2025
    • [pallet-revive] do not trap the caller on instantiations with duplicate contracts (#7414) · 274a781e
      xermicus authored
      
      This PR changes the behavior of `instantiate` when the resulting
      contract address already exists (because the caller tried to instantiate
      the same contract with the same salt multiple times): instead of
      trapping the caller, an error code is now returned.
      
      Solidity allows `catch`ing this, which doesn't work if we are trapping
      the caller. For example, the change makes the following snippet work:
      
      ```Solidity
      try new Foo{salt: hex"00"}() returns (Foo) {
          // Instantiation was successful (contract address was free and constructor did not revert)
      } catch {
          // This branch is expected to be taken if the instantiation failed because of a duplicate salt
      }
      ```
      
      `revive` PR: https://github.com/paritytech/revive/pull/188
      
      ---------
      
      Signed-off-by: Cyrill Leutwiler <bigcyrill@hotmail.com>
      Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
    • deprecate AsyncBackingParams (#7254) · 4cd07c56
      Alin Dima authored
      
      Part of https://github.com/paritytech/polkadot-sdk/issues/5079.
      
      Removes all usage of the static async backing params, replacing them
      with dynamically computed equivalent values (based on the claim queue
      and scheduling lookahead).
      
      Adds a new runtime API for querying the scheduling lookahead value. If
      not present, it falls back to 3 (the default value that is backwards
      compatible with the values we have on production networks for
      `allowed_ancestry_len`).
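
      Roughly, consumers fall back to the legacy constant when the runtime does not expose the new API (a simplified sketch; the actual runtime-API plumbing and error handling differ):

      ```rust
      /// Backwards-compatible default used when the runtime does not yet expose
      /// the scheduling-lookahead runtime API (matches `allowed_ancestry_len` on
      /// current production networks).
      const DEFAULT_SCHEDULING_LOOKAHEAD: u32 = 3;

      /// `query` stands in for the actual versioned runtime-API call, which
      /// returns `None` on runtimes that predate the new API.
      fn scheduling_lookahead_or_default(query: impl FnOnce() -> Option<u32>) -> u32 {
          query().unwrap_or(DEFAULT_SCHEDULING_LOOKAHEAD)
      }
      ```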
      
      Also resolves most of
      https://github.com/paritytech/polkadot-sdk/issues/4447, removing code
      that handles async backing not yet being enabled.
      While doing this, I removed the support for collation protocol version 1
      on collators, as it only worked for leaves not supporting async backing
      (of which there are none).
      I also unhooked the legacy v1 statement-distribution (for the same
      reason as above). That subsystem is basically dead code now, so I had to
      remove some of its tests as they would no longer pass (since the
      subsystem no longer sends messages to the legacy variant). I did not
      remove the entire legacy subsystem yet, as that would pollute this PR
      too much. We can remove the entire v1 and v2 validation protocols in a
      follow up PR.
      
      In another PR: remove test files with names `prospective_parachains`
      (it'd pollute this PR if we do now)
      
      TODO:
      - [x] add deprecation warnings
      - [x] prdoc
      
      ---------
      
      Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>