  1. Aug 14, 2024
    • [Pools] Fix issues with member migration to `DelegateStake` (#4822) · feacf2f3
      Ankan authored
      
      ## Context
      Pool members using the old `TransferStake` strategy were able to
      transfer all their funds to the pool. With `DelegateStake` changes, we
      want to ensure similar behaviour by allowing members to delegate all
      their stake to the pool.
      
      ## Changes
      - Ensure the entire balance of an account, including the ED, can be
      delegated (and used in the pool) by adding a provider for delegators.
      - Gate calls that mutate the pool or a pool member while they are in an
      unmigrated state. Closes
      https://github.com/paritytech-secops/srlabs_findings/issues/409.
      - Add a remote test that migrates all pools and members to `DelegateStake`,
      which can be used with `Kusama` and `Polkadot` runtime state. Closes
      https://github.com/paritytech/polkadot-sdk/issues/4629.
      - Add new runtime APIs to read pool and member balances (a sketch follows
      below).
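
      A minimal sketch, under assumed trait and method names, of how such
      runtime APIs could be declared; the actual API added by this PR may be
      shaped differently:

      ```rust
      // Hedged sketch: runtime APIs exposing pool and member balances.
      // `NominationPoolsApi`, `member_total_balance` and `pool_balance` are
      // illustrative names, not necessarily the ones used in the PR.
      sp_api::decl_runtime_apis! {
          pub trait NominationPoolsApi<AccountId, Balance>
          where
              AccountId: codec::Codec,
              Balance: codec::Codec,
          {
              /// Total balance a member has contributed to pools.
              fn member_total_balance(who: AccountId) -> Balance;

              /// Total balance held by the given pool.
              fn pool_balance(pool_id: u32) -> Balance;
          }
      }
      ```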
      
      ## Addressing possible migration errors 
      Pool members migrating can run into two types of errors:
      - Already staking: if the pool member is already staking directly, we
      cannot migrate them to `DelegateStake`, since that could allow them to
      use the same staked funds in the pool. These users need to withdraw all
      their funds from staking in order to migrate their pool funds.
      - Pool contribution below ED: in these cases the transfer from the pool
      account to the member account would fail. The affected users can top up
      their accounts and redo the migration.
      
      Another error that was previously possible occurred when a member's free
      balance was below the ED. This PR adds a provider for delegators, so that
      the whole user balance, including the ED, can be contributed towards the
      pool. This allows `1095` accounts on Polkadot and `41` accounts on Kusama
      to migrate now, which would previously have failed.
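
      A minimal sketch of that idea; `frame_system::Pallet::<T>::inc_providers`
      is the real FRAME call, while the surrounding function and the hold logic
      it alludes to are assumptions for illustration:

      ```rust
      // Hedged sketch: give the delegator a provider reference so the account
      // stays alive even when its whole free balance (including the ED) is
      // held for the pool. The provider must be released again when the
      // delegation is removed.
      fn provide_for_delegator<T: frame_system::Config>(who: &T::AccountId) {
          let _ = frame_system::Pallet::<T>::inc_providers(who);
          // ... then place a hold on the full balance and record the delegation.
      }
      ```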
      
      ## Results from RemoteExternalities Tests
      
      ### Kusama
      `Migration stats: success: 3017, direct_stakers: 361, unexpected_errors:
      0`
      
      ### Polkadot
      `Migration stats: success: 42859, direct_stakers: 643,
      unexpected_errors: 0`
      
      ## TODO
      - [x] Add runtime api for member total balance.
      - [x] New
      [issue](https://github.com/paritytech/polkadot-sdk/issues/5009) to reap
      pool members with contribution below ED.
      - [x] Add provider for delegators so whole balance including ED can be
      held while contributing to pools.
      - [x] Gate all pool extrinsics if pool/member is in non-migrated state.
      
      ---------
      
      Co-authored-by: Gonçalo Pestana <g6pestana@gmail.com>
    • Unify `no_genesis` check (#5360) · 5a9396f4
      Nazar Mokrynskyi authored
      
      The exact same `matches!()` expression was duplicated in the
      `Configuration::no_genesis()` method and inline in full node parts
      creation. Since the logic and the reasoning behind it are identical, it
      makes sense to de-duplicate them.
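
      A small illustrative sketch of the de-duplication, with simplified
      stand-in types (the real `Configuration` and sync modes live in
      `sc-service` and differ in detail): the inline `matches!()` at the call
      site is replaced by a call to the existing method.

      ```rust
      // Illustration only; not the actual sc-service types.
      enum SyncMode { Full, Fast, Warp }

      struct Configuration { sync_mode: SyncMode }

      impl Configuration {
          /// Whether the node skips importing the genesis state (fast/warp sync).
          fn no_genesis(&self) -> bool {
              matches!(self.sync_mode, SyncMode::Fast | SyncMode::Warp)
          }
      }

      fn build_full_parts(config: &Configuration) {
          // Before: the same `matches!()` expression was repeated inline here.
          // After: reuse the single source of truth.
          if config.no_genesis() {
              // ... construct the genesis-less block import setup ...
          }
      }
      ```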
      
      ---------
      
      Co-authored-by: Bastian Köcher <git@kchr.de>
    • Beefy: add benchmarks for `report_fork_voting()` (#5188) · 81d8f0c0
      Serban Iorga authored
      
      Related to #4523 
      
      This PR adds benchmarks for `report_fork_voting()`.
      
      **Important: even though the benchmarks are now available, we still use
      `Weight::MAX`. That's because, while working on this PR, I realized that
      there's still one missing piece: we should also check that the ancestry
      proof is optimal. I plan to do this in a future PR, hopefully the last
      one related to #4523.**
      
      ---------
      
      Co-authored-by: Branislav Kontur <bkontur@gmail.com>
      Co-authored-by: command-bot <>
    • Make ticket non-optional and add ensure_successful method to Consideration trait (#5359) · 00946b10
      Muharem Ismailov authored
      
      Reverts the optional return ticket type for the new function introduced
      in
      [polkadot-sdk/4596](https://github.com/paritytech/polkadot-sdk/pull/4596)
      and adds a helper `ensure_successful` function for the runtime
      benchmarks.
      Since the existing FRAME pallet represents zero cost with a zero balance
      rather than `None` in an option, maintaining the ticket type as a
      non-optional balance is beneficial for backward compatibility and helps
      avoid unnecessary migrations.
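
      A minimal sketch, from memory and under assumptions, of the shape this
      describes: a `Consideration`-style trait whose creation function returns
      a non-optional ticket, plus a benchmark-only `ensure_successful` helper.
      The real trait in `frame_support::traits` differs in its exact methods
      and generics:

      ```rust
      // Hedged sketch; illustrative only, not the exact frame_support trait.
      use sp_runtime::DispatchError;

      pub trait Consideration<AccountId, Footprint>: Sized {
          /// Create a ticket for `who` covering `footprint`. A zero footprint
          /// yields a zero-cost ticket rather than `None`, keeping the type
          /// non-optional and avoiding storage migrations.
          fn new(who: &AccountId, footprint: Footprint) -> Result<Self, DispatchError>;

          /// Release the ticket previously created for `who`.
          fn drop(self, who: &AccountId) -> Result<(), DispatchError>;

          /// Benchmark-only helper: put `who` into a state in which `new` is
          /// guaranteed to succeed (e.g. by funding a deposit).
          #[cfg(feature = "runtime-benchmarks")]
          fn ensure_successful(who: &AccountId, footprint: Footprint);
      }
      ```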
    • Migrate foreign assets v3::Location to v4::Location (#4129) · be74fe92
      Francisco Aguirre authored
      
      In the move from XCMv3 to XCMv4, the `AssetId` for `ForeignAssets` in
      `asset-hub-rococo` and `asset-hub-westend` was left as `v3::Location` to
      be later migrated to `v4::Location`.
      
      This is that migration PR.
      
      Because the encoding of `v3::Location` and `v4::Location` is the same,
      no data migration is needed; the keys will still be decodable.
      The [original idea by
      Jan](https://github.com/paritytech/polkadot/pull/7236) was to make the
      v4 changes in v3, since the ABI (the encoding/decoding) didn't change.
      We corroborated that the ABI is the same by iterating over all storage;
      the code is on [another
      branch](https://github.com/paritytech/polkadot-sdk/blob/cisco-assert-v3-v4-encodings-equal/cumulus/parachains/runtimes/assets/migrations/src/foreign_assets_to_v4/mod.rs).
      
      We will need a data migration when we want to update from `v4::Location`
      to `v5::Location` because of [the accepted RFC changing the NetworkId
      enum](https://github.com/polkadot-fellows/RFCs/pull/108).
      I'll configure MBMs (Multi-Block Migrations) then and make the actual
      migration.
      
      Fixes https://github.com/paritytech/polkadot-sdk/issues/4128
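
      A minimal sketch of the encoding-equivalence check mentioned above,
      assuming `xcm::v3::MultiLocation` and `xcm::v4::Location` are the
      relevant types; the actual verification is on the linked branch:

      ```rust
      // Hedged sketch: a key written as v3 should round-trip through the v4
      // type and re-encode to exactly the same bytes.
      use codec::{Decode, Encode};
      use xcm::{v3::MultiLocation as V3Location, v4::Location as V4Location};

      fn v3_key_still_decodable(v3: &V3Location) -> bool {
          let bytes = v3.encode();
          match V4Location::decode(&mut &bytes[..]) {
              Ok(v4) => v4.encode() == bytes,
              Err(_) => false,
          }
      }
      ```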
      
      ---------
      
      Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
      Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
      Co-authored-by: command-bot <>
  2. Aug 13, 2024
    • Minor clean up (#5284) · 0cd577ba
      Jeeyong Um authored
      
      This PR performs minor code cleanup to reduce verbosity. Since the
      compiler has already optimized out indirect calls in the existing code,
      these changes improve readability but do not affect performance.
      
      ---------
      
      Co-authored-by: Bastian Köcher <git@kchr.de>
    • [Pools] Ensure members can always exit the pool gracefully (#4998) · 42eb4ec0
      Ankan authored
      
      Resolves https://github.com/paritytech-secops/srlabs_findings/issues/412
      
      ## Changes
      - Clear any dangling delegation when a member is removed.
      - Kill agents explicitly when pools are destroyed.
      - A member's withdrawable amount is the maximum of their locked funds and
      the value of their points (see the sketch below).
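
      A tiny sketch of the withdrawal rule from the last bullet, with
      hypothetical names and a plain integer balance:

      ```rust
      // Hedged sketch: a member can withdraw the larger of their locked funds
      // and the current value of their pool points.
      fn member_withdrawable(locked_funds: u128, points_value: u128) -> u128 {
          locked_funds.max(points_value)
      }
      ```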
      
      ---------
      
      Co-authored-by: Gonçalo Pestana <g6pestana@gmail.com>
      Co-authored-by: command-bot <>
    • StorageWeightReclaim: set to node pov size if higher (#5281) · 055eb537
      Sebastian Kunert authored
      This PR adds an additional defensive check to the reclaim SE.

      Since we can miss some storage accesses of other SEs pre-dispatch, we
      should double-check that the runtime's bookkeeping stays ahead of the
      node-side PoV size.

      If we discover a mismatch and the node-side PoV size is indeed higher,
      we should set the runtime bookkeeping to the node-side value. In cases
      such as #5229, we would then at least stop including extrinsics and not
      run `on_idle`.
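
      A minimal sketch of the rule described above, with hypothetical names;
      the real check lives in the reclaim signed extension and works on weight
      bookkeeping rather than bare integers:

      ```rust
      // Hedged sketch: if the node-side proof size is larger than what the
      // runtime has recorded, trust the node and bump the runtime bookkeeping.
      fn reconcile_proof_size(runtime_recorded: u64, node_side: u64) -> u64 {
          if node_side > runtime_recorded {
              // Some storage accesses were missed pre-dispatch; catch up.
              node_side
          } else {
              runtime_recorded
          }
      }
      ```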
      
      cc @gui1117
      
      ---------
      
      Co-authored-by: command-bot <>
  3. Aug 05, 2024
    • Coretime auto-renew (#4424) · f170af61
      Sergej Sakac authored
      
      This PR adds functionality that allows tasks to enable auto-renewal.
      Each task eligible for renewal can enable auto-renewal.
      
      A new storage value is added to track all the cores with auto-renewal
      enabled and the associated task running on the core. The `BoundedVec` is
      sorted by `CoreIndex` to make disabling auto-renewal more efficient.
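
      A rough sketch of keeping those records sorted by core index so that
      disabling is a cheap binary search; the types and names here are
      hypothetical, and a plain `Vec` stands in for the `BoundedVec`:

      ```rust
      // Hedged sketch of the sorted auto-renewal records.
      #[derive(Clone)]
      struct AutoRenewalRecord {
          core: u16, // CoreIndex
          task: u32, // task running on the core
      }

      fn enable(records: &mut Vec<AutoRenewalRecord>, rec: AutoRenewalRecord) {
          match records.binary_search_by_key(&rec.core, |r| r.core) {
              Ok(pos) => records[pos] = rec,        // already enabled: update task
              Err(pos) => records.insert(pos, rec), // keep the vector sorted
          }
      }

      fn disable(records: &mut Vec<AutoRenewalRecord>, core: u16) {
          if let Ok(pos) = records.binary_search_by_key(&core, |r| r.core) {
              records.remove(pos);
          }
      }
      ```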
      
      Cores are renewed at the start of a new bulk sale. If auto-renewal
      fails (e.g. because the sovereign account of the task does not hold
      sufficient balance), an event is emitted and renewal continues for the
      other cores.
      
      The two added extrinsics are:
      - `enable_auto_renew`: Extrinsic for enabling auto renewal.
      - `disable_auto_renew`: Extrinsic for disabling auto renewal.
      
      TODOs:
      - [x] Write benchmarks for the newly added extrinsics.
      
      Closes: #4351
      
      ---------
      
      Co-authored-by: Dónal Murray <donalm@seadanda.dev>
    • network/strategy: Backoff and ban overloaded peers to avoid submitting the... · 6619277b
      Alexandru Vasile authored
      network/strategy: Backoff and ban overloaded peers to avoid submitting the same request multiple times (#5029)
      
      This PR avoids submitting the same block or state request multiple times
      to the same slow peer.
      
      Previously, we submitted the same request to the same slow peer, which
      resulted in reputation bans on the slow peer side.
      Furthermore, the strategy selected the same slow peer multiple times to
      submit queries to, although a better candidate may exist.
      
      Instead, in this PR we:
      - introduce a `DisconnectedPeers` LRU with a 512-peer capacity to only
      track the state of disconnected peers that had a request in flight
      - when `DisconnectedPeers` detects a peer that disconnected with a
      request in flight, the peer is backed off
        - on the first disconnection: 60 seconds
        - on the second disconnection: 120 seconds
      - on the third disconnection the peer is banned, and it remains banned
      until the peerstore decays its reputation (a rough sketch of this
      backoff schedule follows below)
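
      A minimal sketch of that backoff schedule, with hypothetical names; the
      real `DisconnectedPeers` logic also tracks timing and peer identity:

      ```rust
      // Hedged sketch: backoff applied after each disconnection that happened
      // with a request in flight. `None` means "ban instead of backing off".
      use std::time::Duration;

      fn backoff_for(num_disconnects: u32) -> Option<Duration> {
          match num_disconnects {
              1 => Some(Duration::from_secs(60)),
              2 => Some(Duration::from_secs(120)),
              // From the third disconnection onwards the peer is banned and
              // stays banned until the peerstore decays its reputation.
              _ => None,
          }
      }
      ```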
        
      This PR lifts the pressure from overloaded nodes that cannot process
      requests in due time, and if a peer is still detected to be slow after
      the backoffs, the peer is banned.
      
      Theoretically, submitting the same request multiple times can still
      happen when:
      - (a) we back off and ban the peer
      - (b) the network does not discover other peers -- this may also be a
      test net
      - (c) the peer gets reconnected after the reputation decay and is still
      slow to respond
      
      
      
      Aims to improve:
      - https://github.com/paritytech/polkadot-sdk/issues/4924
      - https://github.com/paritytech/polkadot-sdk/issues/531
      
      Next Steps:
      - Investigate the network after this is deployed, possibly bumping the
      keep-alive timeout or seeing if there's something else misbehaving
      
      
      
      
      This PR builds on top of:
      - https://github.com/paritytech/polkadot-sdk/pull/4987
      
      
      ### Testing Done
      - Added a couple of unit tests where a test harness was already set in place
      
      - Local testnet
      
      ```bash
      13:13:25.102 DEBUG tokio-runtime-worker sync::persistent_peer_state: Added first time peer 12D3KooWHdiAxVd8uMQR1hGWXccidmfCwLqcMpGwR6QcTP6QRMuD
      
      13:14:39.102 DEBUG tokio-runtime-worker sync::persistent_peer_state: Remove known peer 12D3KooWHdiAxVd8uMQR1hGWXccidmfCwLqcMpGwR6QcTP6QRMuD state: DisconnectedPeerState { num_disconnects: 2, last_disconnect: Instant { tv_sec: 93355, tv_nsec: 942016062 } }, should ban: false
      
      13:16:49.107 DEBUG tokio-runtime-worker sync::persistent_peer_state: Remove known peer 12D3KooWHdiAxVd8uMQR1hGWXccidmfCwLqcMpGwR6QcTP6QRMuD state: DisconnectedPeerState { num_disconnects: 3, last_disconnect: Instant { tv_sec: 93485, tv_nsec: 947551051 } }, should ban: true
      
      13:16:49.108  WARN tokio-runtime-worker peerset: Report 12D3KooWHdiAxVd8uMQR1hGWXccidmfCwLqcMpGwR6QcTP6QRMuD: -2147483648 to -2147483648. Reason: Slow peer after backoffs. Banned, disconnecting.
      ```
      
      cc @paritytech/networking
      
      ---------
      
      Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
    • Fix frame crate usage doc (#5222) · ad1e556e
      Kian Paimani authored
    • beefy: Tolerate pruned state on runtime API call (#5197) · 2abd03ef
      Sebastian Kunert authored
      While working on #5129 I noticed that after warp sync, nodes would
      print:
      ```
      2024-07-29 17:59:23.898 ERROR ⋮beefy: 🥩 Error: ConsensusReset. Restarting voter.    
      ```
      
      After some debugging I found that we enter the following loop:
      1. Wait for the beefy pallet to be available: the pallet is detected as
      available directly after warp sync, since we are at the tip.
      2. Wait for the headers from the tip back to beefy genesis to be
      available: during this time we don't process finality notifications,
      since we later want to inspect all of the headers for authority set
      changes.
      3. Gap sync finishes; the route to beefy genesis is available.
      4. The worker starts acting and tries to fetch the beefy genesis block.
      This fails, since we are acting on old finality notifications whose
      state is already pruned.
      5. The whole beefy subsystem is restarted, loading the state from the db
      again and iterating over a lot of headers.
      
      This already happened before #5129.
  4. Aug 02, 2024
    • rpc: Enable ChainSpec for polkadot-parachain (#5205) · ce6938ae
      Alexandru Vasile authored
      
      This PR enables the `chainSpec_v1` class for the polkadot-parachain.
      The chainSpec class is part of rpc-v2, which is specified at:
      https://github.com/paritytech/json-rpc-interface-spec/blob/main/src/api/chainSpec.md.
      
      This also paves the way for enabling a future `chainSpec_unstable_spec`
      on all nodes.
      
      Closes: https://github.com/paritytech/polkadot-sdk/issues/5191
      
      cc @paritytech/subxt-team
      
      ---------
      
      Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
    • Add an adapter for configuring AssetExchanger (#5130) · 8ccb6b33
      Francisco Aguirre authored
      
      Added a new adapter to xcm-builder, the `SingleAssetExchangeAdapter`.
      This adapter makes it easy to use `pallet-asset-conversion` for
      configuring the `AssetExchanger` XCM config item.
      
      I also took the liberty of adding a new function to the `AssetExchange`
      trait, with the following signature:
      
      ```rust
      fn quote_exchange_price(give: &Assets, want: &Assets, maximal: bool) -> Option<Assets>;
      ```
      
      The signature is meant to be fairly symmetric to that of
      `exchange_asset`.
      The way they interact can be seen in the doc comment for it in the
      `AssetExchange` trait.
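
      A hedged sketch of that symmetry, with placeholder types standing in for
      the XCM types; the exact `exchange_asset` parameters are recalled from
      memory and should be treated as an assumption:

      ```rust
      // Placeholder types for illustration; the real ones come from xcm /
      // xcm-executor.
      pub struct Location;
      pub struct Assets;
      pub struct AssetsInHolding;

      pub trait AssetExchange {
          /// Swap `give` for `want` (all of `give` if `maximal`, otherwise only
          /// as much as needed to obtain `want`).
          fn exchange_asset(
              origin: Option<&Location>,
              give: AssetsInHolding,
              want: &Assets,
              maximal: bool,
          ) -> Result<AssetsInHolding, AssetsInHolding>;

          /// The new function: quote what an exchange would return, without
          /// actually performing it.
          fn quote_exchange_price(give: &Assets, want: &Assets, maximal: bool) -> Option<Assets>;
      }
      ```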
      
      This is a breaking change but is needed for
      https://github.com/paritytech/polkadot-sdk/pull/5131.
      Another idea is to create a new trait for this, but that would require
      setting it in the XCM config, which is also breaking.
      
      Old PR: https://github.com/paritytech/polkadot-sdk/pull/4375.
      
      ---------
      
      Co-authored-by: Adrian Catangiu <adrian@parity.io>
  5. Jul 31, 2024
    • litep2p/discovery: Publish authority records with external addresses only (#5176) · 7d0aa896
      Alexandru Vasile authored
      This PR reduces the number of occurrences required before an
      identify-observed address is accepted as external.
      
      Litep2p discovers its external addresses by inspecting the
      `IdentifyInfo::ObservedAddress` field reported by other peers.
      After we get 5 confirmations of the same external observed address (the
      address the peer dialed to reach us), the address is reported through
      the network layer.
      
      The PR effectively changes this from 5 to 2.
      This has a subtle implication for freshly started nodes with respect to
      authority-discovery, discussed below.
      
      The PR also makes the authority discovery a bit more robust by not
      publishing records if the node doesn't have addresses yet to report.
      This aims to fix a scenario where:
      - the litep2p node has started and has some pending observed addresses,
      but fewer than 5
      - the authority-discovery publishes a record, but at this time the node
      doesn't have any discovered addresses and the record is published
      without addresses -> this means other nodes w...
    • Run UI tests in CI for some other crates (#5167) · 39daa61e
      thiolliere authored
      
      The test name is `test-frame-ui`. I don't know whether I can change it to
      `test-ui` without breaking other stuff, so I kept the name unchanged.
      
      ---------
      
      Co-authored-by: Bastian Köcher <git@kchr.de>