  1. Oct 17, 2024
  2. Oct 15, 2024
    • fork-aware transaction pool added (#4639) · 26c11fc5
      Michal Kucharczyk authored
      ### Fork-Aware Transaction Pool Implementation
      
      This PR introduces a fork-aware transaction pool (fatxpool), enhancing
      transaction management by maintaining a valid txpool state for
      different forks.
      
      ### High-level overview
      A high-level overview was added to the
      [`sc_transaction_pool::fork_aware_txpool`](https://github.com/paritytech/polkadot-sdk/blob/3ad0a1b7/substrate/client/transaction-pool/src/fork_aware_txpool/mod.rs#L21)
      module. Use:
      ```
      cargo doc --document-private-items -p sc-transaction-pool --open
      ```
      to build the docs. They should give a good overview and a nice entry
      point into the new pool's mechanics.
      
      <details>
        <summary>Quick overview (documentation excerpt)</summary>
      
      #### View
      For every fork, a view is created. The view is a persisted state of the
      transaction pool computed and updated at the tip of the fork. The view
      is built around the existing `ValidatedPool` structure.
      
      A view is created on every new best block notification. To create a
      view, one of the existing views is chosen and cloned.
      
      When the chain progresses, the view is kept in the cache
      (`retracted_views`) to allow building blocks upon intermediary blocks in
      the fork.
      
      The views are deleted on finalization: views lower than the finalized
      block are removed.
      
      The views are updated with the transactions from the mempool—all
      transactions are sent to the newly created views.
      A maintain process is also executed for the newly created
      views—basically resubmitting and pruning transactions from the
      appropriate tree route.
      
      ##### View store
      The view store is a helper structure that acts as a container for all
      the views. It provides some convenience methods.
      
      ##### Submitting transactions
      Every transaction is submitted to every view at the tips of the forks.
      Retracted views are not updated.
      Every transaction also goes into the mempool.
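
      A minimal sketch of the relationship between views, the view store,
      and submission (all names here are illustrative stand-ins, not the
      actual `sc-transaction-pool` structures):
      ```rust
      use std::collections::HashMap;

      type BlockHash = [u8; 32];
      type Transaction = Vec<u8>;

      struct View {
          at: BlockHash,          // tip of the fork this view is built on
          pool: Vec<Transaction>, // stands in for the real `ValidatedPool`
      }

      struct ViewStore {
          active: HashMap<BlockHash, View>,    // views at the tips of the forks
          retracted: HashMap<BlockHash, View>, // kept for building on intermediary blocks
      }

      struct ForkAwarePool {
          views: ViewStore,
          mempool: Vec<Transaction>, // internal mempool, so no transaction is lost
      }

      impl ForkAwarePool {
          fn submit(&mut self, tx: Transaction) {
              // Every transaction goes to every view at a fork tip...
              for view in self.views.active.values_mut() {
                  view.pool.push(tx.clone());
              }
              // ...retracted views are not updated. Every transaction also
              // goes into the mempool.
              self.mempool.push(tx);
          }
      }
      ```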
      
      ##### Internal mempool
      In short, the main purpose of the internal mempool is to prevent a
      transaction from being lost. That could happen when a transaction is
      invalid on one fork but could be valid on another. It also allows the
      txpool to accept transactions when no blocks have been reported yet.
      
      The mempool removes its transactions when they get finalized.
      Transactions are also periodically verified on every finalized event and
      removed from the mempool if no longer valid.
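
      A minimal sketch of that pruning step (illustrative types, not the
      actual implementation):
      ```rust
      use std::collections::HashSet;

      // On every finalized event the mempool drops finalized transactions
      // and re-checks the validity of the remainder.
      fn on_finalized(
          mempool: &mut Vec<u64>,                 // tx hashes, stand-in type
          finalized: &HashSet<u64>,               // txs included in finalized blocks
          is_still_valid: impl Fn(&u64) -> bool,  // periodic revalidation check
      ) {
          mempool.retain(|tx| !finalized.contains(tx) && is_still_valid(tx));
      }
      ```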
      
      #### Events
      Transaction events from multiple views are merged and filtered to avoid
      duplicate events.
      `Ready` / `Future` / `InBlock` events originate in the views and
      are de-duplicated and forwarded to external listeners.
      `Finalized` events originate in the fork-aware-txpool logic.
      `Invalid` events require special care and can originate in both the
      view and the fork-aware-txpool logic.
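
      A toy sketch of that de-duplication (types and names are illustrative,
      not the actual listener plumbing): an event is forwarded only the
      first time it is seen for a given transaction.
      ```rust
      use std::collections::HashSet;

      #[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
      enum TxStatus {
          Ready,
          Future,
          InBlock,
      }

      #[derive(Default)]
      struct EventAggregator {
          seen: HashSet<(u64, TxStatus)>, // (tx id, status) pairs already forwarded
      }

      impl EventAggregator {
          /// Returns `Some(event)` only if this (tx, status) pair was not yet emitted.
          fn on_view_event(&mut self, tx: u64, status: TxStatus) -> Option<TxStatus> {
              self.seen.insert((tx, status)).then_some(status)
          }
      }
      ```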
      
      #### Light maintain
      Sometimes the transaction pool does not have enough time to prepare a
      fully maintained view with all retracted transactions revalidated. To
      avoid providing an empty ready-transactions set to the block builder
      (which would result in an empty block), light maintain was implemented.
      It simply removes the already-imported transactions from the ready
      iterator.
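
      A minimal sketch of the idea (illustrative types):
      ```rust
      use std::collections::HashSet;

      // Instead of a full maintenance pass, filter already-imported
      // transactions out of the ready iterator so the block builder never
      // sees a stale, empty set.
      fn light_maintain<'a>(
          ready: impl Iterator<Item = u64> + 'a, // ready txs from the stale view
          imported: &'a HashSet<u64>,            // txs already imported in new blocks
      ) -> impl Iterator<Item = u64> + 'a {
          ready.filter(move |tx| !imported.contains(tx))
      }
      ```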
      
      #### Revalidation
      Revalidation is performed for every view. The revalidation process is
      started after a trigger is executed. The revalidation work is terminated
      just after a new best block / finalized event is notified to the
      transaction pool.
      The revalidation result is applied to the newly created view which is
      built upon the revalidated view.
      
      Additionally, parts of the mempool are also revalidated to make sure
      that no transactions are stuck in the mempool.
      
      
      #### Logs
      The most important log for understanding the state of the txpool
      is:
      ```
                    maintain: txs:(0, 92) views:[2;[(327, 76, 0), (326, 68, 0)]] event:Finalized { hash: 0x8...f, tree_route: [] }  took:3.463522ms
                                   ^   ^         ^     ^   ^  ^      ^   ^  ^        ^                                                   ^
      unwatched txs in mempool ────┘   │         │     │   │  │      │   │  │        │                                                   │
         watched txs in mempool ───────┘         │     │   │  │      │   │  │        │                                                   │
                           views  ───────────────┘     │   │  │      │   │  │        │                                                   │
                            1st view block # ──────────┘   │  │      │   │  │        │                                                   │
                                 number of ready tx ───────┘  │      │   │  │        │                                                   │
                                      number of future tx ─────┘      │   │  │        │                                                   │
                                              2nd view block # ──────┘   │  │        │                                                   │
                                            number of ready tx ──────────┘  │        │                                                   │
                                                 number of future tx ───────┘        │                                                   │
                                                                       event ────────┘                                                   │
                                                                             duration  ──────────────────────────────────────────────────┘
      ```
      It is logged after the maintenance is done.
      
      The `debug` level enables per-transaction logging, making it possible
      to keep track of all transaction-related actions that happen in the
      txpool.
      </details>
      
      
      ### Integration notes
      
      For teams with a custom node, the new txpool needs to be instantiated,
      typically in the `service.rs` file. Here is an example:
      
      https://github.com/paritytech/polkadot-sdk/blob/9c547ff3/cumulus/polkadot-omni-node/lib/src/common/spec.rs#L152-L161
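
      As a rough sketch only (the exact builder methods should be verified
      against the linked file; this is an approximation, not copied code),
      the instantiation looks something like:
      ```rust
      // Approximate shape only -- verify names against the linked `spec.rs`.
      // `config`, `client` and `task_manager` come from the surrounding
      // service code.
      let transaction_pool = sc_transaction_pool::Builder::new(
          task_manager.spawn_essential_handle(),
          client.clone(),
          config.role.is_authority().into(),
      )
      .with_options(config.transaction_pool.clone())
      .with_prometheus(config.prometheus_registry())
      .build();
      ```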
      
      To enable the new transaction pool, the following CLI argument shall be
      specified: `--pool-type=fork-aware`. If it works, the following
      information shall be printed in the log:
      ```
      2024-09-20 21:28:17.528  INFO main txpool: [Parachain]  creating ForkAware txpool.
      ```
      
      For debugging, the following log targets shall be enabled:
      ```
            "-lbasic-authorship=debug",
            "-ltxpool=debug",
      ```
      *note:* `trace` level for txpool enables per-transaction logging.
      
      ### Future work
      The current implementation seems to be stable; however, further
      improvements are required.
      Here is the umbrella issue for future work:
      - https://github.com/paritytech/polkadot-sdk/issues/5472
      
      
      Partially fixes: #1202
      
      ---------
      
      Co-authored-by: Bastian Köcher <[email protected]>
      Co-authored-by: Sebastian Kunert <[email protected]>
      Co-authored-by: Iulian Barbu <[email protected]>
    • pallet-xcm: added useful error logs (#2408) (#4982) · b20be7c1
      Ayevbeosa Iyamu authored
      
      
      Added error logs in pallet-xcm to help with debugging; fixes #2408
      
      ## TODO
      
      - [x] change `log::error` to `tracing::error` format for `xcm-executor`
      - [x] check existing logs, e.g. this one can be extended with more info
      `tracing::error!(target: "xcm::reanchor", ?error, "Failed reanchoring
      with error");`
      - [x] use `tracing` instead of `log` for `pallet-xcm/src/lib.rs`
      
      ---------
      
      Co-authored-by: Ayevbeosa Iyamu <[email protected]>
      Co-authored-by: Adrian Catangiu <[email protected]>
      Co-authored-by: Francisco Aguirre <[email protected]>
      Co-authored-by: Branislav Kontur <[email protected]>
      Co-authored-by: Bastian Köcher <[email protected]>
  3. Oct 14, 2024
  4. Oct 11, 2024
    • Rename QueueEvent::StartWork (#6015) · e5ccc008
      Andrei Eres authored
      # Description
      
      When we send `QueueEvent::StartWork`, we have already completed the
      execution. This may be a leftover of a previous logic change. Currently,
      the name is misleading, so it would be better to rename it to
      `FinishWork`.
      
      
      https://github.com/paritytech/polkadot-sdk/blob/c52675ef/polkadot/node/core/pvf/src/execute/queue.rs#L632-L646
      
      
      https://github.com/paritytech/polkadot-sdk/blob/c52675ef/polkadot/node/core/pvf/src/execute/queue.rs#L361-L363
      
      Fixes https://github.com/paritytech/polkadot-sdk/issues/5910
      
      ## Integration
      
      Shouldn't affect downstream projects.
      
      ---------
      
      Co-authored-by: GitHub Action <[email protected]>
  5. Oct 10, 2024
    • Fix `0003-beefy-and-mmr` test (#6003) · cba7d13b
      Serban Iorga authored
      Resolves https://github.com/paritytech/polkadot-sdk/issues/5972
      
      Only needed to increase some timeouts
    • Remove redundant XCMs from dry run's forwarded xcms (#5913) · 4a70b2cf
      Francisco Aguirre authored
      # Description
      
      This PR addresses
      https://github.com/paritytech/polkadot-sdk/issues/5878.
      
      After dry-running an XCM on Asset Hub, redundant XCMs showed up in the
      `forwarded_xcms` field of the returned dry-run effects.
      These were caused by two things:
      - The `UpwardMessageSender` router always added an element even if there
      were no messages.
      - The two routers on Asset Hub Westend related to bridging (to Rococo
      and Sepolia) were getting the message from their queues, when their
      queue is actually the same XCMP queue that was already accounted for.
      
      In order to fix this, we check for no messages in UMP and clear the
      implementation of `InspectMessageQueues` for these bridging routers.
      Keep in mind that the bridged message is still sent, as normal, via the
      xcmp-queue to Bridge Hub.
      To keep dry-running the journey of the message, the next hop to
      dry-run is Bridge Hub.
      That will be tackled in a different PR.
      
      Added tests in `bridge-hub-westend-integration-tests` and
      `bridge-hub-rococo-integration-tests` that show that dry-running a
      transfer across the bridge from Asset Hub results in one and only one
      message sent to Bridge Hub.
      
      ## TODO
      - [x] Functionality
      - [x] Test
      
      ---------
      
      Co-authored-by: command-bot <>
  6. Oct 09, 2024
    • Add PVF execution priority (#4837) · e294d628
      Andrei Eres authored
      
      
      Resolves https://github.com/paritytech/polkadot-sdk/issues/4632
      
      The new logic optimizes the distribution of execution jobs for disputes,
      approvals, and backings. Testing shows improved finality lag and
      candidate checking times, especially under heavy network load.
      
      ### Approach
      
      This update adds prioritization to the PVF execution queue. The logic
      partially implements the suggestions from
      https://github.com/paritytech/polkadot-sdk/issues/4632#issuecomment-2209188695.
      
      We use thresholds to determine how much a current priority can "steal"
      from lower ones:
      -  Disputes: 70%
      -  Approvals: 80%
      -  Backing System Parachains: 100%
      -  Backing: 100%
      
      A threshold indicates the portion of the current priority that can be
      allocated from lower priorities.
      
      For example:
      -  Disputes take 70%, leaving 30% for approvals and all backings.
      - 80% of the remaining goes to approvals, which is 30% * 80% = 24% of
      the original 100%.
      - If we used parts of the original 100%, approvals couldn't take more
      than 24%, even if there are no disputes.
      
      Assuming a maximum of 12 executions per block, with a 6-second window, 2
      CPU cores, and a 2-second run time, we get these distributions:
      
      -  With disputes: 8 disputes, 3 approvals, 1 backing
      -  Without disputes: 9 approvals, 3 backings
      
      It's worth noting that when there are no disputes, if there's only one
      backing job, we continue processing approvals regardless of their
      fulfillment status.
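
      A small sketch reproducing that arithmetic (the function itself is
      illustrative, not the queue implementation):
      ```rust
      // Each priority claims at most its threshold share of the capacity
      // *remaining* after the higher priorities took theirs, as described
      // above; the lowest priority (backing, threshold 100%) takes the rest.
      fn distribute(capacity: u32, thresholds_pct: &[u32]) -> Vec<u32> {
          let mut left = capacity;
          let mut out = Vec::new();
          for pct in thresholds_pct {
              let take = left * pct / 100; // integer division == flooring
              out.push(take);
              left -= take;
          }
          out.push(left); // whatever is left goes to the lowest priority
          out
      }

      fn main() {
          // Capacity of 12 executions; disputes 70%, approvals 80%.
          assert_eq!(distribute(12, &[70, 80]), vec![8, 3, 1]); // 8 disputes, 3 approvals, 1 backing
          // No disputes: approvals can use 80% of the full capacity.
          assert_eq!(distribute(12, &[80]), vec![9, 3]);        // 9 approvals, 3 backings
      }
      ```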
      
      ### Versi Testing 40/20
      
      Testing showed a slight difference in finality lag and candidate
      checking time between this pull request and its base on the master
      branch. The more loaded the network, the greater the observed
      difference.
      
      Testing Parameters:
      -  40 validators (4 malicious)
      -  20 gluttons with 2 seconds of PVF execution time
      -  6 VRF modulo samples
      -  12 required approvals
      
      ![Pasted Graphic
      3](https://github.com/user-attachments/assets/8b6163a4-a1c9-44c2-bdba-ce1ef4b1eba7)
      ![Pasted Graphic
      4](https://github.com/user-attachments/assets/9f016647-7727-42e8-afe9-04f303e6c862)
      
      ### Versi Testing 80/40
      
      For this test, we compared the master branch with the branch from
      https://github.com/paritytech/polkadot-sdk/pull/5616. The second branch
      is based on the current one but removes backing jobs that have exceeded
      their time limits. We excluded malicious nodes to reduce noise from
      disputing and banning validators. The results show that, under the same
      load, nodes experience less finality lag and reduced recovery and check
      time. Even parachains are functioning with a shorter block time,
      although it remains over 6 seconds.
      
      Testing Parameters:
      -  80 validators (0 malicious)
      -  40 gluttons with 2 seconds of PVF execution time
      -  6 VRF modulo samples
      -  30 required approvals
      
      
      ![image](https://github.com/user-attachments/assets/42bcc845-9115-4ae3-9910-286b77a60bbf)
      
      ---------
      
      Co-authored-by: Andrei Sandu <[email protected]>
    • Bump PoV request timeout (#5924) · 3ad12919
      Andrei Eres authored
      # Description
      
      We previously set the PoV request timeout to 1.2s based on synchronous
      backing, which allowed for 5 PoVs per relay block. With asynchronous
      backing, we no longer have a time budget and can increase the value to
      2s.
      
      Fixes https://github.com/paritytech/polkadot-sdk/issues/5885
      
      ## Integration
      
      This PR shouldn't affect downstream projects.
      
      ## Review Notes
      
      This PR can be followed by experiments with Gluttons on Kusama to
      confirm that the timeout is sufficient.
  7. Oct 08, 2024
  8. Oct 07, 2024
    • Generic slashing side-effects (#5623) · c0ddfbae
      Juan Ignacio Rios authored
      # Description
      ## What?
      Make it possible for other pallets to implement their own logic when a
      slash on a balance occurs.
      
      ## Why?
      In the [introduction of
      holds](https://github.com/paritytech/substrate/pull/12951) @gavofyork
      said:
      > Since Holds are designed to be infallibly slashed, this means that any
      logic using a Freeze must handle the possibility of the frozen amount
      being reduced, potentially to zero. A permissionless function should be
      provided in order to allow bookkeeping to be updated in this instance.
      
      At Polimec we needed to find a way to reduce the vesting schedules of
      our users after a slash was made, and after talking to @Kianenigma at
      the Web3Summit, we realized there was no easy way to implement this with
      the current traits, so we came up with this solution.
      
      
      
      ## How?
      - First we abstract the `done_slash` function of `holds::Balanced` to
      its own trait that any pallet can implement.
      - Then we add a config type in pallet-balances that accepts a callback
      tuple of all the pallets that implement this trait.
      - Finally, we implement `done_slash` for pallet-balances such that it
      calls the config type.
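
      A condensed sketch of the resulting pattern (simplified signatures;
      the actual trait in the PR is also generic over the hold reason):
      ```rust
      trait DoneSlash<AccountId, Balance> {
          /// Called by pallet-balances after a slash has been applied.
          fn done_slash(who: &AccountId, amount: Balance);
      }

      // Default no-op, so setting the config type to `()` changes nothing.
      impl<AccountId, Balance> DoneSlash<AccountId, Balance> for () {
          fn done_slash(_: &AccountId, _: Balance) {}
      }

      // A tuple implementation lets multiple pallets react to the same slash.
      impl<AccountId, Balance: Copy, A, B> DoneSlash<AccountId, Balance> for (A, B)
      where
          A: DoneSlash<AccountId, Balance>,
          B: DoneSlash<AccountId, Balance>,
      {
          fn done_slash(who: &AccountId, amount: Balance) {
              A::done_slash(who, amount);
              B::done_slash(who, amount);
          }
      }
      ```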
      
      ## Integration
      The default implementation of `done_slash` is still an empty function,
      and the new config type of pallet-balances can be set to an empty
      tuple, so nothing changes by default.
      
      ## Review Notes
      - I suggest focusing on the first commit, which contains the main logic
      changes.
      - I also have a working implementation of `done_slash` for
      pallet_vesting; should I add it to this PR?
      - If I run `cargo +nightly fmt --all` then I get changes to a lot of
      unrelated crates, so I'm not sure if I should run it to avoid the fmt
      failure of the CI.
      - Should I hunt down references to fungible/fungibles documentation and
      update it accordingly?
      
      **Polkadot address:** `15fj1UhQp8Xes7y7LSmDYTy349mXvUwrbNmLaP5tQKBxsQY1`
      
      # Checklist
      
      * [x] My PR includes a detailed description as outlined in the
      "Description" and its two subsections above.
      * [x] My PR follows the [labeling requirements](
      
      https://github.com/paritytech/polkadot-sdk/blob/master/docs/contributor/CONTRIBUTING.md#Process
      ) of this project (at minimum one label for `T` required)
      * External contributors: ask maintainers to put the right label on your
      PR.
      * [ ] I have made corresponding changes to the documentation (if
      applicable)
      
      ---------
      
      Co-authored-by: Kian Paimani <[email protected]>
      Co-authored-by: command-bot <>
      Co-authored-by: Francisco Aguirre <[email protected]>
    • Elastic scaling: runtime v2 descriptor support (#5423) · 4b356c4b
      Andrei Sandu authored
      
      
      Closes https://github.com/paritytech/polkadot-sdk/issues/5045 and
      https://github.com/paritytech/polkadot-sdk/issues/5046
      
      <del>On top of
      https://github.com/paritytech/polkadot-sdk/pull/5362</del>
      
      TODO:
      - [x] storage migration for allowed relay parents tracker 
      - [x] check session index
      - [x] PRdoc
      - [x] tests
      - [x] ensure UMP queue cannot be abused with this change
      - [x] Zombienet runtime upgrade test
      
      ---------
      
      Signed-off-by: Andrei Sandu <[email protected]>
  9. Oct 06, 2024
  10. Oct 05, 2024
    • XCM paid execution barrier supports more origin altering instructions (#5917) · d968c941
      Adrian Catangiu authored
      
      
      The `AllowTopLevelPaidExecutionFrom` barrier allows `ClearOrigin`
      instructions before the expected `BuyExecution` instruction; it also
      allows messages without any origin-altering instructions.

      This commit enhances the barrier to also support messages that use
      `AliasOrigin` or `DescendOrigin`. This is sometimes desired in asset
      transfer XCM programs that need to run the inbound assets instructions
      using the origin chain's root origin, but then want to drop privileges
      for the rest of the program. Currently these programs drop privileges
      by clearing the origin completely, but that also unnecessarily limits
      the range of actions available to the rest of the program. Using
      `DescendOrigin` or `AliasOrigin` allows the sending chain to instruct
      the receiving chain what the deprivileged real origin is.
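
      Schematically, the barrier now accepts programs shaped like any of the
      following (illustrative instruction sequences, not exact XCM syntax):
      ```
      WithdrawAsset(..)        WithdrawAsset(..)        WithdrawAsset(..)
      ClearOrigin              DescendOrigin(..)        AliasOrigin(..)
      BuyExecution { .. }      BuyExecution { .. }      BuyExecution { .. }
      ...                      ...                      ...
      ```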
      
      See https://github.com/polkadot-fellows/RFCs/pull/109 and
      https://github.com/polkadot-fellows/RFCs/pull/122 for more details on
      how DescendOrigin and AliasOrigin could be used instead of ClearOrigin.
      
      ---------
      
      Signed-off-by: Adrian Catangiu <[email protected]>
  11. Oct 04, 2024
  12. Oct 02, 2024
  13. Oct 01, 2024
    • Remove ValidateFromChainState (#5707) · 1617852a
      Andrei Eres authored
      # Description
      
      This PR removes
      `CandidateValidationMessage::ValidateFromChainState`, which was
      previously used by backing but is no longer relevant since the initial
      async backing implementation
      (https://github.com/paritytech/polkadot/pull/5557).
      
      Fixes https://github.com/paritytech/polkadot-sdk/issues/5643
      
      ## Integration
      
      This change should not affect downstream projects since
      `ValidateFromChainState` was already unused.
      
      ## Review Notes
      
      - Removed all occurrences of `ValidateFromChainState`.
      - Moved utility functions, previously used in candidate validation tests
      and malus, exclusively to candidate validation tests as they are no
      longer used in malus.
      - Deleted the
      `polkadot_parachain_candidate_validation_validate_from_chain_state`
      metric from Prometheus.
      - Removed `Spawner` from `ReplaceValidationResult` in malus’
      interceptors.
      - `fake_validation_error` was only used for `ValidateFromChainState`
      handling, while other cases directly used
      `InvalidCandidate::InvalidOutputs`. It has been replaced with
      `fake_validation_error`, with a fallback to
      `InvalidCandidate::InvalidOutputs`.
      - Updated overseer’s minimal example to replace `ValidateFromChainState`
      with `ValidateFromExhaustive`.
  14. Sep 30, 2024
  15. Sep 27, 2024
  16. Sep 26, 2024
    • [ci] Update CI image with rust 1.81.0 and 2024-09-11 (#5676) · 6c3219eb
      Alexander Samusev authored
      
      
      cc https://github.com/paritytech/ci_cd/issues/1035
      
      cc https://github.com/paritytech/ci_cd/issues/1023
      
      ---------
      
      Signed-off-by: Oliver Tale-Yazdi <[email protected]>
      Co-authored-by: command-bot <>
      Co-authored-by: Maksym H <[email protected]>
      Co-authored-by: gui <[email protected]>
      Co-authored-by: Oliver Tale-Yazdi <[email protected]>
      Co-authored-by: Bastian Köcher <[email protected]>
      Co-authored-by: ggwpez <[email protected]>
    • [5 / 5] Introduce approval-voting-parallel (#4849) · b16237ad
      Alexandru Gheorghe authored
      This is the implementation of the approach described here:
      https://github.com/paritytech/polkadot-sdk/issues/1617#issuecomment-2150321612
      &
      https://github.com/paritytech/polkadot-sdk/issues/1617#issuecomment-2154357547
      &
      https://github.com/paritytech/polkadot-sdk/issues/1617#issuecomment-2154721395.
      
      ## Description of changes
      
      The end goal is to have an architecture where we have a single
      subsystem (`approval-voting-parallel`) and multiple worker types that
      fulfill the work currently done by the `approval-distribution` and
      `approval-voting` subsystems. The main loop of the new subsystem just
      distributes work to the workers.
      
      The new subsystem will have:
      - N approval-distribution workers: these do the work currently done by
      the approval-distribution subsystem and, in addition, perform the
      crypto-checks that an assignment is valid and that a vote is correctly
      signed. Work is assigned via the following formula:
      `worker_index = msg.validator % WORKER_COUNT`; this guarantees that all
      assignments and approvals from the same validator reach the same worker
      (see the sketch below).
      - 1 approval-voting worker: this receives an already-validated message
      and does everything approval-voting currently does, except the
      crypto-checking, which has already been moved to the
      approval-distribution worker.
      
      On the hot path of processing messages **no** synchronisation and
      waiting is needed between approval-distribution and approval-voting
      workers.
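
      A minimal sketch of that routing rule (the `WORKER_COUNT` value here is
      illustrative):
      ```rust
      // Messages from the same validator always land on the same
      // approval-distribution worker, so no cross-worker synchronisation is
      // needed for a validator's assignments and approvals.
      const WORKER_COUNT: usize = 4; // illustrative value

      fn worker_for(validator_index: u32) -> usize {
          validator_index as usize % WORKER_COUNT
      }
      ```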
      
      <img width="1431" alt="Screenshot 2024-06-07 at 11 28 08"
      src="https://github.com/paritytech/polkadot-sdk/assets/49718502/a196199b-b705-4140-87d4-c6900ba8595e">
      
      
      
      ## Guidelines for reading
      
      The full implementation is broken into 5 PRs, all of which are
      self-contained and improve things incrementally even without the
      parallelisation being implemented/enabled. This approach was taken
      instead of a big-bang PR to make things easier to review and to reduce
      the risk of breaking these critical subsystems.
      
      After reading the full description of this PR, the changes should be
      read in the following order:
      1. https://github.com/paritytech/polkadot-sdk/pull/4848, some other
      micro-optimizations for networks with a high number of validators. This
      change gives us a speed up by itself without any other changes.
      2. https://github.com/paritytech/polkadot-sdk/pull/4845 , this contains
      only interface changes to decouple the subsystem from the `Context` and
      be able to run multiple instances of the subsystem on different threads.
      **No functional changes**
      3. https://github.com/paritytech/polkadot-sdk/pull/4928, moving of the
      crypto checks from approval-voting in approval-distribution, so that the
      approval-distribution has no reason to wait after approval-voting
      anymore. This change gives us a speed up by itself without any other
      changes.
      4. https://github.com/paritytech/polkadot-sdk/pull/4846, interface
      changes to make approval-voting runnable on a separate thread. **No
      functional changes**
      5. This PR, where we instantiate an `approval-voting-parallel` subsystem
      that runs on different workers the logic currently in
      `approval-distribution` and `approval-voting`.
      6. The next step after these changes get merged and deployed would be
      to bring all the files from approval-distribution, approval-voting, and
      approval-voting-parallel into a single Rust crate, to make the
      structure easier to maintain and understand.
      
      ## Results
      Running subsystem-benchmarks with 1000 validators, 100 fully occupied
      cores, and triggering all assignments and approvals for all tranches.
      
      #### Approval does not lag behind
       Master
      ```
      Chain selection approved  after 72500 ms hash=0x0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a
      ```
      With this PoC
      ```
      Chain selection approved  after 3500 ms hash=0x0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a
      ```
      
      #### Gathering enough assignments
       
      Enough assignments are gathered in less than 500 ms, which gives us a
      guarantee that unnecessary work does not get triggered; on master, on
      the same benchmark, because the subsystems fall behind on work, that
      number goes above 32 seconds.
       
      <img width="2240" alt="Screenshot 2024-06-20 at 15 48 22"
      src="https://github.com/paritytech/polkadot-sdk/assets/49718502/d2f2b29c-5ff6-44b4-a245-5b37ab8e58bc">
      
      
      #### Cpu usage:
      Master
      ```
      CPU usage, seconds                     total   per block
      approval-distribution                96.9436      9.6944
      approval-voting                     117.4676     11.7468
      test-environment                     44.0092      4.4009
      ```
      With this PoC
      ```
      CPU usage, seconds                     total   per block
      approval-distribution                 0.0014      0.0001 --- unused
      approval-voting                       0.0437      0.0044 --- unused
      approval-voting-parallel              5.9560      0.5956
      approval-voting-parallel-0           22.9073      2.2907
      approval-voting-parallel-1           23.0417      2.3042
      approval-voting-parallel-2           22.0445      2.2045
      approval-voting-parallel-3           22.7234      2.2723
      approval-voting-parallel-4           21.9788      2.1979
      approval-voting-parallel-5           23.0601      2.3060
      approval-voting-parallel-6           22.4805      2.2481
      approval-voting-parallel-7           21.8330      2.1833
      approval-voting-parallel-db          37.1954      3.7195 --- the approval-voting thread
      ```
      
      # Enablement strategy
      
      Because only some trivial plumbing is needed in approval-distribution
      and approval-voting to be able to run things in parallel, and because
      these subsystems play a critical part in the system, this PR proposes
      that we keep both ways of running the approval work: as separate
      subsystems, and as a single subsystem (`approval-voting-parallel`)
      which has multiple workers for the distribution work and one worker for
      the approval-voting work, switching between them with a command-line
      flag.

      The benefits of this are twofold:
      1. With the same polkadot binary we can easily switch just a few
      validators to use the parallel approach and gradually make this the
      default way of running, if no issues arise.
      2. In the worst-case scenario, where it becomes the default way of
      running things but we discover critical issues with it, we have the
      path to quickly disable it by asking validators to adjust their
      command-line flags.
      
      
      # Next steps
      - [x] Make sure through various testing we are not missing anything 
      - [x] Polish the implementations to make them production ready
      - [x] Add unit tests for approval-voting-parallel.
      - [x] Define and implement the strategy for rolling out this change, so
      that the blast radius is minimal (single validator) in case there are
      problems with the implementation.
      - [x]  Versi long running tests.
      - [x] Add relevant metrics.
      
      @ordian @eskimor @sandreim @AndreiEres, let me know what you think.
      
      ---------
      
      Signed-off-by: Alexandru Gheorghe <[email protected]>
    • chore: bump runtime api version to v11 (#5824) · 1f3e3978
      Andrei Sandu authored
      
      
      A change that I missed adding in
      https://github.com/paritytech/polkadot-sdk/pull/5525.
      
      ---------
      
      Signed-off-by: Andrei Sandu <[email protected]>
  17. Sep 25, 2024
  18. Sep 24, 2024
    • Bridges lane id agnostic for backwards compatibility (#5649) · 710e74dd
      Branislav Kontur authored
      This PR primarily fixes the issue with
      `zombienet-bridges-0001-asset-transfer-works` (see:
      https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/7404903).
      
      The PR looks large, but most of the changes involve splitting `LaneId`
      into `LegacyLaneId` and `HashedLaneId`. All pallets now use `LaneId` as
      a generic parameter.

      The actual bridging pallets are now backward compatible and work with
      the actual **substrate-relay v1.6.10**, which does not even know
      anything about permissionless lanes or the new pallet changes.
      
      
      ## Important
      
      - [x] added migration for `pallet_bridge_relayers` and the
      `RewardsAccountParams` change of param order, which generates different
      accounts
      
      ## Deployment follow ups
      - [ ] fix monitoring for
      `at_{}_relay_{}_reward_for_msgs_from_{}_on_lane_{}`
      - [ ] check sovereign reward accounts - because of changed
      `RewardsAccountParams`
      - [ ] deploy another messages instances for permissionless lanes - on
      BHs or AHs?
      - [ ] return bac...
    • Fix parachain-template-test (#5821) · 9294572a
      Javier Viola authored
      Fix `parachain-template-test` (bump `zombienet` version).
      Thx!
  19. Sep 23, 2024
  20. Sep 22, 2024
    • Moved presets to the testnet runtimes (#5327) · 8735c663
      Branislav Kontur authored
      
      
      It is the first step in switching to the `frame-omni-bencher` for CI.
      
      This PR includes several changes related to generating chain specs plus:
      
      - [x] pallet `assigned_slots` fix missing `#[serde(skip)]` for phantom
      - [x] pallet `paras_inherent` benchmark fix - cherry-picked from
      https://github.com/paritytech/polkadot-sdk/pull/5688
      - [x] migrates `get_preset` to the relevant runtimes
      - [x] fixes Rococo genesis presets - does not work
      https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/7317249
      - [x] fixes Rococo benchmarks for CI 
      - [x] migrate westend genesis
      - [x] remove wococo stuff
      
      Closes: https://github.com/paritytech/polkadot-sdk/issues/5680
      
      ## Follow-ups
      - Fix for frame-omni-bencher
      https://github.com/paritytech/polkadot-sdk/pull/5655
      - Enable new short-benchmarking CI -
      https://github.com/paritytech/polkadot-sdk/pull/5706
      - Remove gitlab pipelines for short benchmarking
      - refactor all Cumulus runtimes to use `get_preset` -
      https://github.com/paritytech/polkadot-sdk/issues/5704
      - https://github.com/paritytech/polkadot-sdk/issues/5705
      - https://github.com/paritytech/polkadot-sdk/issues/5700
      - [ ] Backport to the stable
      
      ---------
      
      Co-authored-by: command-bot <>
      Co-authored-by: ordian <[email protected]>
  21. Sep 19, 2024
    • Use maximum allowed response size for request/response protocols (#5753) · 0c9d8fed
      Andrei Eres authored
      # Description
      
      Adjust the PoV response size to the default values used in Substrate.
      Fixes https://github.com/paritytech/polkadot-sdk/issues/5503
      
      ## Integration
      
      The changes shouldn't impact downstream projects since we are only
      increasing the limit.
      
      ## Review Notes
      
      You can't see it from the changes, but it affects all protocols that use
      the `POV_RESPONSE_SIZE` constant.
      - Protocol::ChunkFetchingV1
      - Protocol::ChunkFetchingV2
      - Protocol::CollationFetchingV1
      - Protocol::CollationFetchingV2
      - Protocol::PoVFetchingV1
      - Protocol::AvailableDataFetchingV1
      
      ## Increasing timeouts
      
      
      https://github.com/paritytech/polkadot-sdk/blob/fae15379/polkadot/node/network/protocol/src/request_response/mod.rs#L126-L129
      
      I assume the current PoV request timeout is set to 1.2s to handle 5
      consecutive requests during a 6s block. This setting does not relate to
      the PoV response size. I see no reason to change the current timeouts
      after adjusting the response size.
      
      However, we should consider networking speed limitations if we want to
      increase the maximum PoV size to 10 MB. With the number of parallel
      requests set to 10, validators will need the following networking
      speeds:
      - 5 MB PoV: at least 42 MB/s, ideally 50 MB/s.  
      - 10 MB PoV: at least 84 MB/s, ideally 100 MB/s.
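
      The arithmetic behind these figures, as a small sketch (assuming all
      10 parallel requests must complete within the request window):
      ```rust
      // Worst case: all parallel requests complete within the 1.2 s request
      // timeout; the "ideal" figures assume a ~1 s window instead.
      fn required_mb_per_sec(pov_mb: f64, window_secs: f64) -> f64 {
          const PARALLEL_REQUESTS: f64 = 10.0;
          pov_mb * PARALLEL_REQUESTS / window_secs
      }

      fn main() {
          println!("5 MB PoV:        {:.0} MB/s", required_mb_per_sec(5.0, 1.2));  // ~42 MB/s
          println!("10 MB PoV:       {:.0} MB/s", required_mb_per_sec(10.0, 1.2)); // ~84 MB/s
          println!("5 MB PoV, ideal: {:.0} MB/s", required_mb_per_sec(5.0, 1.0));  // 50 MB/s
      }
      ```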
      
      The current required speed of 50 MB/s aligns with the 62.5 MB/s
      specified [in the reference hardware
      requirements](https://wiki.polkadot.network/docs/maintain-guides-how-to-validate-polkadot#reference-hardware).
      Increasing the PoV size to 10 MB may require a higher networking speed.
      
      ---------
      
      Co-authored-by: Andrei Sandu <[email protected]>
  22. Sep 18, 2024
  23. Sep 17, 2024
    • Syncing strategy refactoring (part 2) (#5666) · 43cd6fd4
      Nazar Mokrynskyi authored
      # Description
      
      Follow-up to https://github.com/paritytech/polkadot-sdk/pull/5469 and
      mostly covering https://github.com/paritytech/polkadot-sdk/issues/5333.
      
      The primary change here is that the syncing strategy is no longer
      created inside the syncing engine; instead, the syncing strategy is an
      argument of the syncing engine, more specifically an argument to
      `build_network` that most downstream users will use. This also extracts
      the addition of request-response protocols out of network construction,
      making sure they are physically not present when they don't need to be
      (imagine, for example, a syncing strategy that uses none of Substrate's
      protocols in its implementation).

      This technically allows completely replacing the syncing strategy with
      whatever strategy a chain might need.
      
      There will be at least one follow-up PR that will simplify the
      `SyncingStrategy` trait and other public interfaces to remove mentions
      of block/state/warp sync requests, replacing them with generic APIs,
      such that strategies where warp sync is not applicable don't have to
      provide dummy method implementations, etc.
      
      ## Integration
      
      Downstream projects will have to write a bit of boilerplate, calling
      the `build_polkadot_syncing_strategy` function to create the previously
      default syncing strategy.
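
      Roughly (the parameter list here is illustrative; consult the actual
      function signature rather than this sketch):
      ```rust
      // Illustrative only: check the real signature of
      // `build_polkadot_syncing_strategy` before copying. `config`, `client`
      // and `net_config` come from the surrounding service code.
      let syncing_strategy = build_polkadot_syncing_strategy(
          config.protocol_id(),
          config.chain_spec.fork_id(),
          &mut net_config,
          warp_sync_config,
          client.clone(),
          &task_manager.spawn_handle(),
          config.prometheus_registry(),
      )?;

      // The strategy is then passed into `build_network` instead of being
      // created internally by the syncing engine.
      ```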
      
      ## Review Notes
      
      Please review the PR through individual commits rather than the final
      diff; it will be easier that way. The changes are mostly just moving
      code around one step at a time.
      
      # Checklist
      
      * [x] My PR includes a detailed description as outlined in the
      "Description" and its two subsections above.
      * [x] My PR follows the [labeling requirements](
      
      https://github.com/paritytech/polkadot-sdk/blob/master/docs/contributor/CONTRIBUTING.md#Process
      ) of this project (at minimum one label for `T` required)
      * External contributors: ask maintainers to put the right label on your
      PR.
      * [x] I have made corresponding changes to the documentation (if
      applicable)
  24. Sep 16, 2024
  25. Sep 12, 2024
    • [4 / 5] Make approval-voting runnable on a worker thread (#4846) · a34cc8df
      Alexandru Gheorghe authored
      
      
      This is part of the work to further optimize the approval subsystems;
      if you want to understand the full context, start by reading
      https://github.com/paritytech/polkadot-sdk/pull/4849#issue-2364261568.
      
      # Description
      This PR contains the changes needed to make it possible to run a single
      approval-voting instance on a worker thread, so that it can be
      instantiated by the approval-voting-parallel subsystem.

      This does not contain any functional changes; it just decouples the
      subsystem from the subsystem Context and introduces more specific trait
      dependencies for each function instead of all of them requiring a
      context.

      This change can be merged independently of the follow-up PRs.
      
      ---------
      
      Signed-off-by: Alexandru Gheorghe <[email protected]>
      Co-authored-by: Andrei Sandu <[email protected]>
  26. Sep 10, 2024
  27. Sep 06, 2024
    • Fix PVF precompilation for Kusama (#5606) · 5040b3c2
      Andrei Eres authored
      ![image](https://github.com/user-attachments/assets/2deaee85-67c3-4119-b0c0-d2e7f818b4ea)
      
      Because on Kusama `validators.len() < discovery_keys.len()`, we can
      tweak the PVF precompilation to allow preparing PVFs when the node is
      an authority but not a validator.