  1. Jan 20, 2025
• Migrate pallet-mmr to umbrella crate (#7081) · 569ce71e
      Ron authored
      Part of https://github.com/paritytech/polkadot-sdk/issues/6504
• Apply a few minor fixes found while addressing the fellows PR for weights. (#7098) · 115ff4e9
      Branislav Kontur authored
      This PR addresses a few minor issues found while working on the
      polkadot-fellows PR
      [https://github.com/polkadot-fellows/runtimes/pull/522](https://github.com/polkadot-fellows/runtimes/pull/522):
      - Incorrect generic type for `InboundLaneData` in
      `check_message_lane_weights`.
      - Renaming leftovers: `assigner_on_demand` -> `on_demand`.
• [pallet-revive] eth-rpc error logging (#7251) · ea27696a
      PG Herveou authored
Log the error instead of failing when block processing fails
      
      ---------
      
      Co-authored-by: command-bot <>
• Stabilize `ensure_execute_processes_have_correct_num_threads` test (#7253) · d5d9b127
      Sebastian Kunert authored
      Saw this test flake a few times, last time
      [here](https://github.com/paritytech/polkadot-sdk/actions/runs/12834432188/job/35791830215).
      
We first fetch all processes in the test, then query `/proc/<pid>/stat`
for every one of them. Previously, when the file was not found, we would
error; now we tolerate it missing. Ran 200 times locally without error,
whereas before it would fail a few times, probably depending on process
fluctuation (which I expect to be high on CI runners).
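
A minimal sketch of the tolerant read, assuming the test enumerates PIDs first and then stats each one (hypothetical helper, not the test's actual code):

```rust
use std::{fs, io};

// Tolerate a process exiting between enumeration and the stat read:
// a missing /proc/<pid>/stat is reported as None instead of an error.
fn read_proc_stat(pid: u32) -> io::Result<Option<String>> {
    match fs::read_to_string(format!("/proc/{pid}/stat")) {
        Ok(contents) => Ok(Some(contents)),
        Err(e) if e.kind() == io::ErrorKind::NotFound => Ok(None),
        Err(e) => Err(e),
    }
}
```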
• Use docify export for parachain template hardcoded configuration and embed it... · 4937f779
      seemantaggarwal authored
Use docify export for parachain template hardcoded configuration and embed it in its README #6333 (#7093)
      
Docify currently has a limitation: it cannot embed a variable/const in
its code without embedding its definition, even if you do something in a
string like

"this is a sample string ${sample_variable}"

It will embed the entire string
"this is a sample string ${sample_variable}"
without replacing the value of sample_variable from the code.
      
Hence, the goal was just to make it obvious in the README where the
PARACHAIN_ID value is coming from. A note has been added at the start
for this, so whenever somebody runs these commands, they will be aware
of the value and can replace it accordingly.

To make it simpler, we added a rust-ignore block (sketched below) so the
user can just look it up in the README itself and does not have to scan
through the runtime directory for the value.
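
As a rough illustration of the pattern (docify's `#[docify::export]` attribute is real, but the item below is a hypothetical stand-in, not the template's exact code):

```rust
/// docify can embed this whole item into the README, definition and all,
/// but it cannot splice just the value `1000` into an arbitrary string.
#[docify::export]
pub const PARACHAIN_ID: u32 = 1000;
```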
      
      ---------
      
Co-authored-by: Iulian Barbu <14218860+iulianbarbu@users.noreply.github.com>
• Collator: Fix `can_build_upon` by always allowing to build on included block (#7205) · 06f5d486
      Sebastian Kunert authored
      Follow-up to #6825, which introduced this bug.
      
      We use the `can_build_upon` method to ask the runtime if it is fine to
      build another block. The runtime checks this based on the
      [`ConsensusHook`](https://github.com/paritytech/polkadot-sdk/blob/c1b7c302/cumulus/pallets/aura-ext/src/consensus_hook.rs#L110-L110)
implementation, the most popular one being the `FixedVelocityConsensusHook`.
      
      In #6825 I removed a check that would always allow us to build when we
      are building on an included block. Turns out this check is still
      required when:
1. The [`UnincludedSegment`](https://github.com/paritytech/polkadot-sdk/blob/c1b7c302/cumulus/pallets/parachain-system/src/lib.rs#L758-L758)
storage item in pallet-parachain-system is equal to or larger than the
allowed capacity of the unincluded segment.
2. We are calling the `can_build_upon` runtime API where the included
block has progressed offchain to the current parent block (i.e. the last
entry in the `UnincludedSegment` storage item).
      
      In this scenario the last entry in `UnincludedSegment` does not have a
      hash assigned yet (because it was not available in `on_finalize` of the
      previous block). So the unincluded segment will be reported at its
maximum length, which will forbid building another block.
      
      Ideally we would have a more elegant solution than to rely on the
      node-side here. But for now the check is reintroduced and a test is
      added to not break it again by accident.
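
A condensed sketch of the reintroduced node-side check (hypothetical names and shape, not the actual collator code):

```rust
type Hash = [u8; 32];

// Building directly on the included block is always allowed, even if the
// runtime would report the unincluded segment at maximum length (the last
// entry has no hash assigned yet in the scenario described above).
fn can_build_upon(parent: Hash, included: Hash, runtime_allows: bool) -> bool {
    parent == included || runtime_allows
}
```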
      
      ---------
      
      Co-authored-by: command-bot <>
Co-authored-by: Michal Kucharczyk <1728078+michalkucharczyk@users.noreply.github.com>
  2. Jan 17, 2025
  3. Jan 16, 2025
• [Staking] Currency <> Fungible migration (#5501) · f5673cf2
      Ankan authored
      Migrate staking currency from `traits::LockableCurrency` to
      `traits::fungible::holds`.
      
      Resolves part of https://github.com/paritytech/polkadot-sdk/issues/226.
      
      ## Changes
      ### Nomination Pool
`TransferStake` is now incompatible with the fungible migration, as old
pools were not meant to have additional ED. Since it is deprecated
anyway, its usage has been removed from all test runtimes.
      
      ### Staking
      - Config: `Currency` becomes of type `Fungible` while `OldCurrency` is
      the `LockableCurrency` used before.
- Lazy migration of accounts. Any ledger update will create a new hold
with no extra reads/writes. A permissionless extrinsic
`migrate_currency()` releases the old `lock` along with some
housekeeping (a sketch follows this list).
      - Staking now requires ED to be left free. It also adds no consumer to
      staking accounts.
- If the hold cannot be applied to all stake, the un-holdable part is
force-withdrawn from the ledger.
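
A self-contained sketch of the per-ledger migration step described above (simplified balance model with hypothetical types; not the pallet's actual code):

```rust
type Balance = u128;

struct Account {
    free: Balance,         // free balance (holds live inside it)
    locked_stake: Balance, // old LockableCurrency lock
    held_stake: Balance,   // new fungible hold
}

const ED: Balance = 1; // existential deposit, must stay free under holds

// Returns the amount force-withdrawn from the ledger.
fn migrate_currency(acc: &mut Account) -> Balance {
    // Release the old lock entirely...
    let stake = acc.locked_stake;
    acc.locked_stake = 0;
    // ...and re-apply the stake as a hold, keeping ED free.
    let holdable = stake.min(acc.free.saturating_sub(ED));
    acc.held_stake = holdable;
    // Whatever cannot be held is force-withdrawn from the ledger.
    stake - holdable
}
```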
      
      ### Delegated Staking
The pallet no longer adds a provider for agents.
      
      ## Migration stats
      ### Polkadot
      Total accounts that can be migrated: 59564
      Accounts failing to migrate: 0
      Accounts with stake force withdrawn greater than ED: 59
      Total force withdrawal: 29591.26 DOT
      
      ### Kusama
      Total accounts that can be migrated: 26311
      Accounts failing to migrate: 0
      Accounts with stake force withdrawn greater than ED: 48
      Total force withdrawal: 1036.05 KSM
      
      
      [Full logs here](https://hackmd.io/@ak0n/BklDuFra0).
      
      ## Note about locks (freeze) vs holds
With locks or freezes, staking could use the total balance of an
account. But with holds, the account needs to be left with at least the
Existential Deposit in free balance. This also affects nomination pools,
which until now have been able to stake all funds contributed to them.
An alternate version of this PR is
https://github.com/paritytech/polkadot-sdk/pull/5658, where the staking
pallet does not add any provider, but that means the pools and
delegated-staking pallets have to provide for these accounts, which
makes the end-to-end logic (of provider and consumer refs) a lot less
intuitive and more prone to bugs.
      
This PR now introduces a requirement for stakers to maintain ED in their
free balance. This helps with removing the bug-prone incrementing and
decrementing of consumers and providers.
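
Illustratively, with a simplified balance model (plain arithmetic, not pallet code):

```rust
type Balance = u128;

// With a lock or freeze, the entire balance (including ED) could back the stake.
fn stakeable_with_lock(total: Balance) -> Balance {
    total
}

// With a hold, the existential deposit must remain free.
fn stakeable_with_hold(total: Balance, ed: Balance) -> Balance {
    total.saturating_sub(ed)
}
```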
      
      ## TODO
      - [x] Test: Vesting + governance locked funds can be staked.
      - [ ] can `Call::restore_ledger` be removed? @gpestana 
- [x] Ensure unclaimed withdrawals are not affected by the lack of a
provider for pool accounts.
      - [x] Investigate kusama accounts with balance between 0 and ED.
      - [x] Permissionless call to release lock.
      - [x] Migration of consumer (dec) and provider (inc) for direct stakers.
- [x] Force unstake if the hold cannot be applied to all stake.
- [x] Fix try state checks (they think nothing is staked for unmigrated
ledgers).
- [x] Bench `migrate_currency`.
- [x] Virtual Staker migration test.
- [x] Ensure total issuance is up to date when minting rewards.
      
      ## Followup
      - https://github.com/paritytech/polkadot-sdk/issues/5742
      
      ---------
      
      Co-authored-by: command-bot <>
• chore: fix typos (#6999) · e056586b
      chloefeal authored
      # Description
      
Hello, I fixed some typos in logs and comments. Thank you very much.
      
Signed-off-by: chloefeal <188809157+chloefeal@users.noreply.github.com>
• Migrate substrate zombienet test poc (#7178) · 77ad8abb
      Javier Viola authored
      Zombienet substrate tests PoC (using native provider).
      
      cc: @emamihe @alvicsam
• [FRAME] `pallet_asset_tx_payment`: replace `AssetId` bound from `Copy` to `Clone` (#7194) · f7baa84f
      Dastan authored
      closes https://github.com/paritytech/polkadot-sdk/issues/6911
• Update `parity-publish` to v0.10.4 (#7193) · 64abc745
      Giuseppe Re authored
The changes from v0.10.3 are only related to dependency versions. This
should fix some failing CI jobs.
      
      This PR also updates the Rust cache version in CI.
• Implement `pallet-asset-rewards` (#3926) · be2404cc
      Liam Aharon authored
      
      Closes #3149 
      
      ## Description
      
      This PR introduces `pallet-asset-rewards`, which allows accounts to be
      rewarded for freezing `fungible` tokens. The motivation for creating
      this pallet is to allow incentivising LPs.
      
      See the pallet docs for more info about the pallet.
      
      ## Runtime changes
      
      The pallet has been added to
      - `asset-hub-rococo`
      - `asset-hub-westend`
      
The `NativeAndAssets` `fungibles` Union did not contain `PoolAssets`,
so it has been renamed to `NativeAndNonPoolAssets`.
      
      A new `fungibles` Union `NativeAndAllAssets` was created to encompass
      all assets and the native token.
      
      ## TODO
      - [x] Emulation tests
- [x] Fill in Freeze logic (blocked by
https://github.com/paritytech/polkadot-sdk/issues/3342) and re-run
benchmarks
      
      ---------
      
      Co-authored-by: command-bot <>
Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: muharem <ismailov.m.h@gmail.com>
Co-authored-by: Guillaume Thiolliere <gui.thiolliere@gmail.com>
  4. Jan 15, 2025
  5. Jan 14, 2025
• Parachains: Use relay chain slot for velocity measurement (#6825) · d5539aa6
      Sebastian Kunert authored
      
      closes #3967 
      
      ## Changes
We now use relay chain slots to measure velocity on chain. Previously we
were storing the current parachain slot. Then in `on_state_proof` of the
`ConsensusHook` we were checking how many blocks were authored in the
current parachain slot. This works well when the parachain slot time and
relay chain slot time are the same. With elastic scaling, we can have
parachain slot times lower than that of the relay chain. In these cases
we want to measure velocity in relation to the relay chain. This PR
adjusts that.
      
      
## Migration
      This PR includes a migration. Storage item `SlotInfo` of pallet
      `aura-ext` is renamed to `RelaySlotInfo` to better reflect its new
      content. A migration has been added that just kills the old storage
      item. `RelaySlotInfo` will be `None` initially but its value will be
      adjusted after one new relay chain slot arrives.
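
A condensed sketch of the new bookkeeping (hypothetical shape; the real logic lives in pallet `aura-ext`'s consensus hook):

```rust
type RelaySlot = u64;

// `RelaySlotInfo` conceptually stores (relay slot, blocks authored in it).
// Velocity is measured against the relay chain slot, not the parachain slot.
fn note_authored(info: &mut Option<(RelaySlot, u32)>, relay_slot: RelaySlot) -> u32 {
    match info {
        Some((slot, authored)) if *slot == relay_slot => {
            *authored += 1; // same relay slot: one more block towards velocity
            *authored
        }
        _ => {
            *info = Some((relay_slot, 1)); // new relay slot: reset the count
            1
        }
    }
}
```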
      
      ---------
      
      Co-authored-by: command-bot <>
Co-authored-by: Bastian Köcher <git@kchr.de>
• xcm: convert properly assets in xcmpayment apis (#7134) · 85c244f6
      Carlo Sala authored
      
      Port #6459 changes to relays as well, which were probably forgotten in
      that PR.
      Thanks!
      
      ---------
      
Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com>
      Co-authored-by: command-bot <>
• CI: Only format umbrella crate during umbrella check (#7139) · ba36b2d2
      Sebastian Kunert authored
The umbrella crate quick-check was always failing whenever there was
something misformatted anywhere in the codebase.
This led to an error indicating that a new crate was added, even when it
was not.
      
After this PR we only apply `cargo fmt` to the newly generated umbrella
crate `polkadot-sdk`. This makes the check independent of the fmt job,
which checks the entire codebase.
• approval-voting: Fix sending of assignments after restart (#6973) · d38bb953
      Alexandru Gheorghe authored
There is a problem on restart where nodes will not trigger their needed
assignment if they were offline when the time of the assignment passed.

That happens because after restart we will hit this condition
https://github.com/paritytech/polkadot-sdk/blob/4e805ca0/polkadot/node/core/approval-voting/src/lib.rs#L2495
and the tick considered will be `tick_now`, which is already higher than
the tick of our assignment.
      
The fix is to schedule a wakeup for untriggered assignments at restart
and let the wakeup-processing logic decide whether the assignment needs
to be triggered or not.

One thing we need to be careful about here is to make sure we don't
schedule the wakeup immediately after restart, because the node would
still be behind on all the assignments it should have received and
might wrongfully decide it needs to trigger its assignment, so I added a
`RESTART_WAKEUP_DELAY: Tick = 12` which should be more than enough for
the node to catch up.
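
A rough sketch of the scheduling decision (hypothetical helper; the actual subsystem code is more involved):

```rust
type Tick = u64;

const RESTART_WAKEUP_DELAY: Tick = 12;

// On restart, wake an untriggered assignment no earlier than
// `tick_now + RESTART_WAKEUP_DELAY`, so the node has time to catch up on
// missed assignments before deciding whether to trigger its own.
fn restart_wakeup_tick(tick_now: Tick, assignment_tick: Tick) -> Tick {
    assignment_tick.max(tick_now + RESTART_WAKEUP_DELAY)
}
```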
      
      ---------
      
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
Co-authored-by: ordian <write@reusable.software>
Co-authored-by: Andrei Eres <eresav@me.com>
• Retry approval on availability failure if the check is still needed (#6807) · 6878ba1f
      Alexandru Gheorghe authored
      
Recovering the PoV can fail in situations where the node has just
restarted and the DHT topology wasn't fully discovered yet, so the
current node can't connect to most of its peers. This is bad because
gossiping the assignment requires being connected to just a few peers,
so the assignment goes out, but we can't approve the candidate, and
other nodes will see this as a no-show.

This becomes bad in the scenario where a lot of nodes restart at the
same time: you end up with a lot of no-shows in the network that are
never covered. In that case it makes sense for nodes to actually retry
approving the candidate at a later point in time, and to retry several
times if the block containing the candidate wasn't approved.
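
A simplified sketch of the retry policy (hypothetical names and bound, not the subsystem's actual code):

```rust
const MAX_RETRIES: u32 = 16; // hypothetical bound

// Retry availability recovery later only while the containing block still
// needs the approval and attempts remain.
fn should_retry(attempts: u32, block_still_unapproved: bool) -> bool {
    block_still_unapproved && attempts < MAX_RETRIES
}
```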
      
      ## TODO
      - [x] Add a subsystem test.
      
      ---------
      
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
• [pallet-revive-eth-rpc] persist eth transaction hash (#6836) · 023763da
      PG Herveou authored
      
Add an option to persist the EVM transaction hash to a SQL DB.
This should make it possible to run a full archive ETH RPC node
(assuming the substrate node is also a full archive node).

Some queries, such as eth_getTransactionByHash,
eth_getBlockTransactionCountByHash, and others, need to work with a
transaction hash index, which is not stored in Substrate and needs to be
stored by the eth-rpc proxy.
      
The refactoring breaks down the Client into a `BlockInfoProvider` and a
`ReceiptProvider`:
- `BlockInfoProvider` does not need any persisted data, as we can fetch
all block info from the source substrate chain.
- `ReceiptProvider` comes in two flavors:
  - An in-memory cache implementation - this is the one we had so far.
  - A DB implementation - this one persists rows with the block_hash,
the transaction_index and the transaction_hash, so that we can later
fetch the block and extrinsic for that receipt and reconstruct the
ReceiptInfo object.

This PR also adds a new binary, eth-indexer, that iterates over past and
new blocks and writes the receipt hashes to the DB using the new
ReceiptProvider.
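
A self-contained sketch of the row the DB flavor persists (hypothetical types; the real code uses the node's hash types and a SQL backend):

```rust
type H256 = [u8; 32]; // stand-in for the real 256-bit hash type

// The minimal mapping persisted per transaction: enough to locate the
// extrinsic again and rebuild the full ReceiptInfo on demand.
struct ReceiptRow {
    block_hash: H256,
    transaction_index: u32,
    transaction_hash: H256,
}

// Conceptually: resolve a tx hash to (block_hash, transaction_index),
// then fetch the block and extrinsic to reconstruct the receipt.
fn find(rows: &[ReceiptRow], tx_hash: H256) -> Option<(H256, u32)> {
    rows.iter()
        .find(|r| r.transaction_hash == tx_hash)
        .map(|r| (r.block_hash, r.transaction_index))
}
```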
      
      ---------
      
Co-authored-by: GitHub Action <action@github.com>
      Co-authored-by: command-bot <>
• litep2p: Suffix litep2p to the identify agent version for visibility (#7133) · 105c5b94
      Alexandru Vasile authored
      This PR adds the `(litep2p)` suffix to the agent version (user agent) of
      the identify protocol.
      
The change is needed to gain visibility into network backends and to
determine exactly how many validators are running litep2p. Using tools
like subp2p-explorer, we can determine whether validators are running
litep2p nodes.
      
      This reflects on the identify protocol:
      
      ```
      info=Identify {
        protocol_version: Some("/substrate/1.0"),
        agent_version: Some("polkadot-parachain/v1.17.0-967989c5
      
       (kusama-node-name-01) (litep2p)")
        ...
      }
      ```
      
      cc @paritytech/networking
      
      ---------
      
Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
• `fatxpool`: proper handling of priorities when mempool is full (#6647) · f4743b00
      Michal Kucharczyk authored
      
      Higher-priority transactions can now replace lower-priority transactions
      even when the internal _tx_mem_pool_ is full.
      
      **Notes for reviewers:**
      - The _tx_mem_pool_ now maintains information about transaction
      priority. Although _tx_mem_pool_ itself is stateless, transaction
      priority is updated after submission to the view. An alternative
      approach could involve validating transactions at the `at` block, but
      this is computationally expensive. To avoid additional validation
      overhead, I opted to use the priority obtained from runtime during
      submission to the view. This is the rationale behind introducing the
      `SubmitOutcome` struct, which synchronously communicates transaction
      priority from the view to the pool. This results in a very brief window
during which the transaction priority remains unknown - such
transactions are not taken into consideration while dropping takes place.
      In the future, if needed, we could update transaction priority using
      view revalidation results to keep this information fully up-to-date (as
      priority of transaction may change with chain-state evolution).
      - When _tx_mem_pool_ becomes full (an event anticipated to be rare),
      transaction priority must be known to perform priority-based removal. In
      such cases, the most recent block known is utilized for validation. I
      think that speculative submission to the view and re-using the priority
      from this submission would be an unnecessary complication.
- Once the priority is determined, lower-priority transactions whose
cumulative size meets or exceeds the size of the new transaction are
collected to ensure the pool size limit is not exceeded (see the sketch
after this list).
- A transaction removed from _tx_mem_pool_ also needs to be removed from
all the views with an appropriate event (which is done by
`remove_transaction_subtree`). To ensure complete removal, the
`PendingTxReplacement` struct was refactored into the more generic
`PendingPreInsertTask` (introduced in #6405), which covers removal and
submission of transactions in views that may be created in the
background. This ensures that a removed transaction will not re-enter
the newly created view.
- The `submit_local` implementation was also improved to properly handle
priorities when the mempool is full. Some missing tests for this method
were also added.
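
A simplified sketch of that eviction rule (hypothetical structures; the pool's actual bookkeeping is more involved):

```rust
struct Tx {
    priority: u64,
    size: usize,
}

// Collect lowest-priority transactions until their cumulative size covers
// the incoming transaction; never evict anything of equal or higher priority.
fn to_evict(mut pool: Vec<Tx>, incoming: &Tx) -> Option<Vec<Tx>> {
    pool.sort_by_key(|t| t.priority); // lowest priority first
    let (mut evicted, mut freed) = (Vec::new(), 0usize);
    for tx in pool {
        if freed >= incoming.size {
            break;
        }
        if tx.priority >= incoming.priority {
            return None; // cannot free enough from lower-priority txs alone
        }
        freed += tx.size;
        evicted.push(tx);
    }
    (freed >= incoming.size).then_some(evicted)
}
```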
      
      Closes: #5809
      
      ---------
      
      Co-authored-by: command-bot <>
Co-authored-by: Iulian Barbu <14218860+iulianbarbu@users.noreply.github.com>
  6. Jan 13, 2025