  1. Mar 07, 2025
• `fatxpool`: improved handling of finality stalls (#7639) · 24a4b55e
      Michal Kucharczyk authored
      #### PR Description
      
This pull request introduces measures to handle finality stalls by:
      - notifying outdated transactions with a
      [`FinalityTimeout`](https://github.com/paritytech/polkadot-sdk/blob/d821c84d/substrate/client/transaction-pool/api/src/lib.rs#L145-L147)
      event.
      - removing outdated views from the `view_store`
      
      An item is considered _outdated_ when the difference between its
      associated block and the current block exceeds a pre-defined threshold.
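A tiny sketch of that criterion (the names and threshold value are assumptions for illustration, not the pool's actual constants or API):

```rust
// Illustrative only: an item whose block lags the current block by more than
// a pre-defined threshold is treated as outdated.
const FINALITY_STALL_THRESHOLD: u64 = 128; // assumed value

fn is_outdated(item_block_number: u64, current_block_number: u64) -> bool {
    current_block_number.saturating_sub(item_block_number) > FINALITY_STALL_THRESHOLD
}
```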
      
      #### Note for Reviewers
      The core logic is provided in the following small commits:
      - `ViewStore`: new method
      [`finality_stall_view_cleanup`](https://github.com/paritytech/polkadot-sdk/blob/d821c84d/substrate/client/transaction-pool/src/fork_aware_txpool/view_store.rs#L869-L903)
      for removing stale views was added: 64267000
      - `ForkAwareTransactionPool`: core logic for tracking finality stalls
      added here: 7b37ea6f. Entry point in
      [`finality_stall_cleanup`](https://github.com/paritytech/polkadot-sdk/blob/d821c84d/substrate/client/transaction-pool/src/fork_aware_txpool/fork_aware_txpool.rs#L1096-L1136)
- Some related renaming was made to better reflect purpose and shorten the
names: 1a3a1284, a511601f. A new method
[`transactions_finality_timeout`](https://github.com/paritytech/polkadot-sdk/blob/a511601f/substrate/client/transaction-pool/src/fork_aware_txpool/multi_view_listener.rs#L771-L790)
for triggering external events was also added to `MultiViewListener`.
- `included_transactions`, which is essentially a mapping of `block hash ->
included transaction hashes`, is also used to find the included
transactions (see the sketch below).
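A rough sketch of that mapping's shape (the type names below are assumptions, not the pool's actual types):

```rust
use std::collections::HashMap;

// Conceptual shape only: for every known block, the hashes of the
// transactions it included.
type BlockHash = [u8; 32];
type TxHash = [u8; 32];
type IncludedTransactions = HashMap<BlockHash, Vec<TxHash>>;

// Transactions included in a block that is now considered outdated can be
// looked up here and notified with the `FinalityTimeout` event.
fn txs_to_time_out(map: &IncludedTransactions, outdated_block: &BlockHash) -> Vec<TxHash> {
    map.get(outdated_block).cloned().unwrap_or_default()
}
```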
      
      I also sneaked in some minor improvements:
      - fixed per-transaction logging: 1572f721
- the `handle_pre_finalized` method was removed; it was an old leftover
which is no longer needed: a6f84ad0
      
      closes: #5482
      
      ---------
      
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Sebastian Kunert <skunert49@gmail.com>
Co-authored-by: Iulian Barbu <14218860+iulianbarbu@users.noreply.github.com>
• `fatxpool`: add heavy load tests based on zombienet (#7257) · ff2e5091
      Iulian Barbu authored
      
      # Description
      
Builds up towards addressing #5497 by creating some zombienet-sdk code
infra that can be used to spin up regular networks, as described in the
fork-aware transaction pool testing setup added in #7100. It will be
used for developing tests against such networks, and also to spawn them
on demand locally through tooling that will be developed in follow-ups.
      
      ## Integration
      
Node/runtime developers can run tests based on the zombienet-sdk infra
that spins up frequently used networks, which can be used for analyzing
the behavior of various node-related components, like the fork-aware
transaction pool.
      
      ## Review Notes
      
      - Uses ttxt API implemented here:
      https://github.com/michalkucharczyk/tx-test-tool/pull/22/files
- Currently, only two test scenarios are considered: 10k future & 10k
ready txs are sent to two separate networks (one parachain and one
relay chain), asserting at the end on the finalization of all 20k txs on
both networks.
      
      ---------
      
Signed-off-by: Iulian Barbu <iulian.barbu@parity.io>
Co-authored-by: Javier Viola <363911+pepoviola@users.noreply.github.com>
Co-authored-by: Michal Kucharczyk <1728078+michalkucharczyk@users.noreply.github.com>
• `apply_authorized_upgrade`: Remove authorization if the version check fails (#7812) · 6ce9948e
      Bastian Köcher authored
      
This PR ensures that we remove the `authorization` for a runtime upgrade
if the version check fails. If that check fails, it means that the
runtime upgrade is invalid and the check will never succeed.
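A standalone sketch of that behaviour (all names and types below are hypothetical stand-ins, not the actual frame-system code):

```rust
struct Authorization {
    code_hash: [u8; 32],
    check_version: bool,
}

enum UpgradeError {
    NothingAuthorized,
    HashMismatch,
    VersionCheckFailed,
}

fn apply_authorized_upgrade(
    authorization: &mut Option<Authorization>,
    code: &[u8],
) -> Result<(), UpgradeError> {
    let auth = authorization.as_ref().ok_or(UpgradeError::NothingAuthorized)?;
    if hash(code) != auth.code_hash {
        // A mismatching blob is rejected, but the authorization stays in place.
        return Err(UpgradeError::HashMismatch);
    }
    if auth.check_version && !version_check_passes(code) {
        // The check can never succeed for this authorization, so drop it.
        *authorization = None;
        return Err(UpgradeError::VersionCheckFailed);
    }
    schedule_code_upgrade(code);
    Ok(())
}

// Placeholders so the sketch is self-contained.
fn hash(_code: &[u8]) -> [u8; 32] { [0; 32] }
fn version_check_passes(_code: &[u8]) -> bool { false }
fn schedule_code_upgrade(_code: &[u8]) {}
```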
      
Besides that, the PR does some cleanups.
      
      ---------
      
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
  2. Mar 06, 2025
  3. Mar 05, 2025
  4. Mar 04, 2025
  5. Mar 03, 2025
• Add Serialize & Deserialize to umbrella crate derive module (#7764) · 9d9ae348
      polka.dom authored
      
When working with storage types that are to be set in the genesis block,
deriving serde::Serialize & serde::Deserialize is necessary (to my
knowledge). This PR introduces Serialize and Deserialize into the
umbrella crate derive (and indirectly prelude) module, allowing access
similar to the other storage value derives.
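A minimal sketch of the use case, written with plain `serde` paths (the struct is made up for illustration; the PR makes the same derives reachable from the umbrella crate's derive/prelude modules):

```rust
use serde::{Deserialize, Serialize};

// A storage value meant to be set in the genesis block, so it needs to be
// (de)serializable for the genesis config JSON.
#[derive(Serialize, Deserialize, Clone, PartialEq, Eq, Debug)]
pub struct AuctionConfig {
    pub start_block: u32,
    pub duration: u32,
}
```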
      
      ---------
      
Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: Bastian Köcher <git@kchr.de>
• [kitchensink] migrate to use genesis presets (#7741) · f9707b72
      clangenb authored
      
      Last subtask from
      https://github.com/paritytech/polkadot-sdk/issues/5704.
      
      Closes #5704.
      
      The substrate-node is not 100% free of the native runtime yet, but the
      code has become less convoluted and better documented. The final cleanup
      needs https://github.com/paritytech/polkadot-sdk/issues/7748.
      
      ---------
      
Co-authored-by: Michal Kucharczyk <1728078+michalkucharczyk@users.noreply.github.com>
• Fix unspecified Hash in NodeBlock (#7756) · df99fb94
      Andrei Eres authored
      # Description
      
Working with https://github.com/paritytech/polkadot-sdk/pull/7556 I
encountered an internal compiler error on polkadot-omni-node-lib: see
below or [in
CI](https://github.com/paritytech/polkadot-sdk/actions/runs/13521547633/job/37781894640).
      
      ```
      error: internal compiler error: compiler/rustc_traits/src/codegen.rs:45:13: Encountered error `SignatureMismatch(SignatureMismatchData { found_trait_ref: <{closure@sp_state_machine::trie_backend_essence::TrieBackendEssence<sc_service::Arc<dyn sp_state_machine::trie_backend_essence::Storage<<<<NodeSpec as common::spec::BaseNodeSpec>::Block as common::NodeBlock>::BoundedHeader as sp_runtime::traits::Header>::Hashing>>, <<<NodeSpec as common::spec::BaseNodeSpec>::Block as common::NodeBlock>::BoundedHeader as sp_runtime::traits::Header>::Hashing, sp_trie::cache::LocalTrieCache<<<<NodeSpec as common::spec::BaseNodeSpec>::Block as common::NodeBlock>::BoundedHeader as sp_runtime::traits::Header>::Hashing>, sp_trie::recorder::Recorder<<<<NodeSpec as common::spec::BaseNodeSpec>::Block as common::NodeBlock>::BoundedHeader as sp_runtime::traits::Header>::Hashing>>::storage::{closure#1}} as std::ops::FnOnce<(std::option::Option<&mut dyn trie_db::TrieRecorder<sp_core::H256>>, std::option::Option<&mut dyn trie_db::TrieCache<sp_trie::node_codec::NodeCodec<<<<NodeSpec as common::spec::BaseNodeSpec>::Block as common::NodeBlock>::BoundedHeader as sp_runtime::traits::Header>::Hashing>>>)>>, expected_trait_ref: <{closure@sp_state_machine::trie_backend_essence::TrieBackendEssence<sc_service::Arc<dyn sp_state_machine::trie_backend_essence::Storage<<<<NodeSpec as common::spec::BaseNodeSpec>::Block as common::NodeBlock>::BoundedHeader as sp_runtime::traits::Header>::Hashing>>, <<<NodeSpec as common::spec::BaseNodeSpec>::Block as common::NodeBlock>::BoundedHeader as sp_runtime::traits::Header>::Hashing, sp_trie::cache::LocalTrieCache<<<<NodeSpec as common::spec::BaseNodeSpec>::Block as common::NodeBlock>::BoundedHeader as sp_runtime::traits::Header>::Hashing>, sp_trie::recorder::Recorder<<<<NodeSpec as common::spec::BaseNodeSpec>::Block as common::NodeBlock>::BoundedHeader as sp_runtime::traits::Header>::Hashing>>::storage::{closure#1}} as std::ops::FnOnce<(std::option::Option<&mut dyn trie_db::TrieRecorder<<<<NodeSpec as common::spec::BaseNodeSpec>::Block as common::NodeBlock>::BoundedHeader as sp_runtime::traits::Header>::Hash>>, std::option::Option<&mut dyn trie_db::TrieCache<sp_trie::node_codec::NodeCodec<<<<NodeSpec as common::spec::BaseNodeSpec>::Block as common::NodeBlock>::BoundedHeader as sp_runtime::traits::Header>::Hashing>>>)>>, terr: Sorts(ExpectedFound { expected: Alias(Projection, AliasTy { args: [Alias(Projection, AliasTy { args: [Alias(Projection, AliasTy { args: [NodeSpec/#0], def_id: DefId(0:410 ~ polkadot_omni_node_lib[7cce]::common::spec::BaseNodeSpec::Block), .. })], def_id: DefId(0:507 ~ polkadot_omni_node_lib[7cce]::common::NodeBlock::BoundedHeader), .. })], def_id: DefId(229:1706 ~ sp_runtime[5da1]::traits::Header::Hash), .. 
}), found: sp_core::H256 }) })` selecting `<{closure@sp_state_machine::trie_backend_essence::TrieBackendEssence<sc_service::Arc<dyn sp_state_machine::trie_backend_essence::Storage<<<<NodeSpec as common::spec::BaseNodeSpec>::Block as common::NodeBlock>::BoundedHeader as sp_runtime::traits::Header>::Hashing>>, <<<NodeSpec as common::spec::BaseNodeSpec>::Block as common::NodeBlock>::BoundedHeader as sp_runtime::traits::Header>::Hashing, sp_trie::cache::LocalTrieCache<<<<NodeSpec as common::spec::BaseNodeSpec>::Block as common::NodeBlock>::BoundedHeader as sp_runtime::traits::Header>::Hashing>, sp_trie::recorder::Recorder<<<<NodeSpec as common::spec::BaseNodeSpec>::Block as common::NodeBlock>::BoundedHeader as sp_runtime::traits::Header>::Hashing>>::storage::{closure#1}} as std::ops::FnOnce<(std::option::Option<&mut dyn trie_db::TrieRecorder<<<<NodeSpec as common::spec::BaseNodeSpec>::Block as common::NodeBlock>::BoundedHeader as sp_runtime::traits::Header>::Hash>>, std::option::Option<&mut dyn trie_db::TrieCache<sp_trie::node_codec::NodeCodec<<<<NodeSpec as common::spec::BaseNodeSpec>::Block as common::NodeBlock>::BoundedHeader as sp_runtime::traits::Header>::Hashing>>>)>>` during codegen
      ```
      
      Trying to parse the error I found that TrieRecorder was not supposed to
      work with H256:
      - Expected: `&mut dyn TrieRecorder<<<<NodeSpec as
      common::spec::BaseNodeSpec>::Block as common::NodeBlock>::BoundedHeader
      as Header>::Hash>>`
      - Found: `&mut dyn TrieRecorder<sp_core::H256>>`
      
The error happened because I added interaction with the Trie Cache to
`new_full_parts_with_genesis_builder`, which eventually uses
`TrieRecorder<H256>`. Here is the path:
- In polkadot-omni-node-lib, the trait BaseNodeSpec is defined with Block as
`NodeBlock: BlockT<Hash = DbHash>`, where DbHash is H256.
- BaseNodeSpec calls [new_full_parts_record_import::<Self::Block, …
>](https://github.com/paritytech/polkadot-sdk/blob/75726c65/cumulus/polkadot-omni-node/lib/src/common/spec.rs#L184-L189)
and eventually it goes to
[new_full_parts_with_genesis_builder](https://github.com/paritytech/polkadot-sdk/blob/08b30246/substrate/client/service/src/builder.rs#L195).
- In `new_full_parts_with_genesis_builder` we accessed storage,
initiating a TrieRecorder with H256, which never happened before.
      
      I believe the compiler found a mismatch checking types for TrieRecorder:
      NodeBlock inherits from the trait `Block<Hash = DbHash>`, but it uses
      BoundedHeader, which inherits from the trait Header with the default
      Hash.
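A minimal, simplified reproduction of that idea (the traits below are stand-ins, not the real polkadot-omni-node-lib definitions): unless the header's `Hash` is pinned to the block's `Hash`, the compiler has no reason to unify the two.

```rust
struct H256;

trait Header {
    type Hash;
}

trait NodeBlock {
    type Hash;
    // With only `type BoundedHeader: Header;`, `BoundedHeader::Hash` stays an
    // opaque associated type. Pinning it to the block's own hash removes the
    // ambiguity:
    type BoundedHeader: Header<Hash = Self::Hash>;
}

fn header_hash_is_h256<B: NodeBlock<Hash = H256>>(h: <B::BoundedHeader as Header>::Hash) -> H256 {
    // Thanks to the equality bound, this projection normalizes to `H256`.
    h
}
```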
      
      ---------
      
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
• revive: Rework the instruction benchmark (#7721) · 4b39ff00
      Alexander Theißen authored
      
      Fixes https://github.com/paritytech/polkadot-sdk/issues/6157
      
This fixes the last remaining benchmark that was not correct, since it
was too low level to be written in Rust; instead, a lower-level approach was taken.
      
      This PR changes the benchmark that determines the scaling from
      `ref_time` to PolkaVM `Gas` by benchmarking the absolute worst case of
      an instruction: One that causes two cache misses by touching two cache
      lines.
      
      The Contract itself is designed to be as simple as possible. It does
      random unaligned reads in a loop until the `r` (repetition) number is
      reached. The randomness is fully generated by the host and written to
      the guests memory before the benchmark is run. This allows the benchmark
      to determine the influence of one loop iteration via linear regression.
      
      ---------
      
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: xermicus <cyrill@parity.io>
Co-authored-by: PG Herveou <pgherveou@gmail.com>
• Remove `pallet::getter` usage from `pallet-nft-fractionalization` (#7124) · 3798ff7f
      Matteo Muraca authored
      
      Description
      Part of #3326
      As per title, `pallet::getter` usage has been removed from
      `pallet-nft-fractionalization`.
      
      ---------
      
Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
  6. Feb 28, 2025
• notifications/libp2p: Terminate the outbound notification substream on `std::io::Errors` (#7724) · 1bc6ca60
      Alexandru Vasile authored
      
This PR handles a case where we call `poll_next` on an outbound
notification substream to check if the stream is closed. It is entirely
possible that `poll_next` would return an `io::Error`, for example
end of file.
      
This PR ensures that we make the distinction between unexpected incoming
data and errors originating from `poll_next`.
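A conceptual sketch of that distinction (simplified, not the actual Substrate notification code): on our outbound substream, incoming data is unexpected, while an `io::Error` means the substream should be terminated.

```rust
use std::io;

enum OutboundState {
    StillOpen,
    Closed,
    Terminated(io::Error),
}

fn classify_poll_next(result: Option<Result<Vec<u8>, io::Error>>) -> OutboundState {
    match result {
        // The remote is not supposed to send data on our outbound substream.
        Some(Ok(_unexpected_data)) => OutboundState::StillOpen,
        // e.g. an unexpected end of file: terminate the substream.
        Some(Err(error)) => OutboundState::Terminated(error),
        // Graceful end of stream.
        None => OutboundState::Closed,
    }
}
```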
      
While at it, the bulk of the PR change propagates the PeerId from the
network behavior, through the notification handler, to the outbound
notification stream for logging purposes.
      
      cc @paritytech/networking 
      
      Part of: https://github.com/paritytech/polkadot-sdk/issues/7722
      
      ---------
      
Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
• Remove leftovers of leftovers of contracts-rococo (#7750) · 9adb8d28
      Sebastian Kunert authored
      Follow-up of https://github.com/paritytech/polkadot-sdk/pull/7638, which
      attempted to remove contracts-rococo.
      
      But there were some leftover weight files still chilling in the repo.
• pallet_revive: Change address derivation to use hashing (#7662) · 4087e2d9
      Alexander Theißen authored
      
      Fixes https://github.com/paritytech/polkadot-sdk/issues/6723
      
      ## Motivation
      
Internal auditors recommended not truncating Polkadot addresses when
deriving Ethereum addresses from them. The reasoning is that they are raw
public keys, where truncating could lead to collisions when weaknesses in
those curves are discovered in the future. Additionally, some pallets
generate account addresses in a way where only the suffix we were
truncating contains any entropy. The changes in this PR act as a
safeguard against those two points.
      
      ## Changes made
      
We change the `to_address` function to first hash the AccountId32 and
then use the trailing 20 bytes as the `AccountId20`. If the `AccountId32`
ends with 12x 0xEE we keep our current behaviour of just truncating those
trailing bytes.
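A sketch of that rule, assuming keccak-256 via the `sha3` crate (illustrative, not the pallet's exact `to_address` implementation):

```rust
use sha3::{Digest, Keccak256};

fn to_address(account_id: &[u8; 32]) -> [u8; 20] {
    let mut address = [0u8; 20];
    if account_id[20..] == [0xEE; 12] {
        // Fallback account of an Ethereum-derived id: keep truncating.
        address.copy_from_slice(&account_id[..20]);
    } else {
        // Any other account id might be a raw public key: hash first, then
        // take the trailing 20 bytes.
        let hash = Keccak256::digest(account_id);
        address.copy_from_slice(&hash[12..]);
    }
    address
}
```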
      
      ## Security Discussion
      
      This will allow us to still recover the original `AccountId20` because
      those are constructed by just adding those 12 bytes. Please note that
      generating an ed25519 key pair where the trailing 12 bytes are 0xEE is
      theoretically possible as 96bits is not a huge search space. However,
this cannot be used as an attack vector. It will merely allow this
address to interact with `pallet_revive` without registering, as the
fallback account is the same as the actual address. The ultimate vanity
      address. In practice, this is not relevant since the 0xEE addresses are
      not valid public keys for sr25519 which is used almost everywhere.
      
tl;dr: We keep truncating in case of an Ethereum-address-derived account
id. This is safe as those are already derived via keccak. In every other
case, we have to assume that the account id might be a public key;
therefore we first hash and then take the trailing bytes.
      
## Do we need a Migration for Westend?
      
      No. We changed the name of the mapping. This means the runtime will not
      try to read the old data. Ethereum keys are unaffected by this change.
      We just advise people to re-register their AccountId32 in case they need
      to use it as it is a very small circle of users (just 3 addresses
      registered). This will not cause disturbance on Westend.
      
      ---------
      
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
• [Release|CI/CD] Update pgpgkms to the latest version (#7745) · 95be69ce
      Egor_P authored
This PR updates `pgpgkms`, the tool used to sign releases, to the latest version
to fix the issue with the `debian` publishing from the pipeline.
Address: https://github.com/paritytech/release-engineering/issues/248
• Derive `DecodeWithMemTracking` for `Block` (#7655) · c11b1f85
      Serban Iorga authored
      Related to https://github.com/paritytech/polkadot-sdk/issues/7360
      
      This PR adds `DecodeWithMemTracking` as a trait bound for `Header`,
      `Block` and `TransactionExtension` and
      derives it for all the types that implement these traits in
      `polkadot-sdk`.
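A small sketch of satisfying the new bound on a custom type, assuming a parity-scale-codec version (imported as `codec`) that ships the `DecodeWithMemTracking` derive; the struct itself is made up for illustration:

```rust
use codec::{Decode, DecodeWithMemTracking, Encode};

// A hypothetical transaction extension payload that now also derives
// `DecodeWithMemTracking` to satisfy the new trait bound.
#[derive(Encode, Decode, DecodeWithMemTracking, Clone, PartialEq, Eq, Debug)]
pub struct CustomExtension {
    pub tip: u128,
}
```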
  7. Feb 27, 2025
• Simplify event assertion with predicate-based check (#7734) · 6a3d10b3
      Raymond Cheung authored
      
      A follow-up PR to simplify event assertions by introducing
      `contains_event`, allowing event checks without needing exact field
      matches. This reduces redundancy and makes tests more flexible.
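A generic sketch in the spirit of such a predicate-based check (the signature and event type here are assumptions, not the helper's actual API):

```rust
fn contains_event<E>(events: &[E], predicate: impl Fn(&E) -> bool) -> bool {
    events.iter().any(|event| predicate(event))
}

#[derive(Debug, PartialEq)]
enum Event {
    Transferred { from: u8, to: u8, amount: u128 },
    Minted { to: u8, amount: u128 },
}

fn main() {
    let events = vec![Event::Minted { to: 1, amount: 10 }];
    // Assert on the variant only, without matching every field exactly.
    assert!(contains_event(&events, |e| matches!(e, Event::Minted { .. })));
}
```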
      
      Partially addresses #6119 by providing an alternative way to assert
      events.
      
      Reference: [PR #7594 -
      Discussion](https://github.com/paritytech/polkadot-sdk/pull/7594#discussion_r1965566349)
      
      ---------
      
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Branislav Kontur <bkontur@gmail.com>
Co-authored-by: Bastian Köcher <git@kchr.de>
• [AHM] Poke deposits: Multisig pallet (#7700) · cc83fba1
      Utkarsh Bhardwaj authored
      
      # Description
      
      * This PR adds a new extrinsic `poke_deposit` to `pallet-multisig`. This
      extrinsic will be used to re-adjust the deposits made in the pallet to
      create a multisig operation after AHM.
      * Part of #5591 
      
      ## Review Notes
      
      * Added a new extrinsic `poke_deposit` in `pallet-multisig`.
      * Added a new event `DepositPoked` to be emitted upon a successful call
      of the extrinsic.
* Although the immediate use of the extrinsic will be to give back some
of the deposit after the AH-migration, the extrinsic is written such
that it works whether the deposit decreases or increases.
* The call to the extrinsic is `free` if an actual adjustment is
made to the deposit and `paid` otherwise (see the sketch after this list).
      * Added tests to test all scenarios.
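An illustrative-only sketch of that fee rule (plain Rust, not the pallet's actual extrinsic): the call is free only when the held deposit actually changes.

```rust
#[derive(Debug, PartialEq)]
enum Pays {
    Yes,
    No,
}

fn poke_deposit(current_deposit: &mut u128, required_deposit: u128) -> Pays {
    if *current_deposit == required_deposit {
        // Nothing to adjust: the caller pays the regular fee.
        return Pays::Yes;
    }
    // Works for both a decrease and an increase of the required deposit.
    *current_deposit = required_deposit;
    Pays::No
}

fn main() {
    let mut deposit = 100;
    assert_eq!(poke_deposit(&mut deposit, 40), Pays::No); // adjusted -> free
    assert_eq!(poke_deposit(&mut deposit, 40), Pays::Yes); // unchanged -> paid
}
```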
      
      ## TO-DOs
      * [x] Add Benchmark
      * [x] Run CI cmd bot to benchmark
      
      ---------
      
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Giuseppe Re <giuseppe.re@parity.io>
• Add README.md to umbrella (#7600) · d734e79e
      huntbounty authored
      
      Resolves #7536
      
      ---------
      
Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
• [Backport] Version bumps from stable2412-2 (#7744) · 84b3ae9b
      Egor_P authored
This PR backports version bumps and the prdocs reorg from the latest stable
branch back to master.
• notifications/tests: Check compatibility between litep2p and libp2p (#7484) · e3e3f481
      Alexandru Vasile authored
      
      This PR ensures compatibility in terms of expectations between the
      libp2p and litep2p network backends at the notification protocol level.
      
      The libp2p node is tested with the `Notification` behavior that contains
      the protocol controller, while litep2p is tested at the lowest level API
      (without substrate shim layers).
      
      ## Notification Behavior
      
      (I) Libp2p protocol controller will eagerly reopen a closed substream,
      even if it is the one that closed it:
      - When a node (libp2p or litep2p) closes the substream with **libp2p**,
      the **libp2p** controller will reopen the substream
      - When **libp2p** closes the substream with a node (either litep2p with
      no controller or libp2p), the **libp2p** controller will reopen the
      substream
- However, in this case, libp2p was the one closing the substream,
signaling it is no longer interested in communicating with the other
side
      
      (II) Notifications are lost and not reported to the higher level in the
      following scenario:
      - T0: Node A opens a substream with Node B
      - T1: Node A closes the substream or the connection with Node B
      - T2: Node B sends a notification to Node A => *notification is lost*
      and never reported
      - T3: Node B detects the closed substream or connection
      
      
      ## Testing
      
      This PR effectively checks:
      - connectivity at the notification level
      - litep2p rejecting libp2p substream and keep-alive mechanism
      functionality
      - libp2p disconnecting libp2p and connection re-establishment (and all
      the other permutations)
      - idling of connections with active substreams and keep-alive mechanism
      is not enforced
      
      
      Prior work:
      - https://github.com/paritytech/polkadot-sdk/pull/7361
      
      cc @paritytech/networking
      
      ---------
      
Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
Co-authored-by: Dmitry Markin <dmitry@markin.tech>