  1. Oct 05, 2024
    • bump zombienet version `v1.3.113` (#5935) · a4abcbdd
      Javier Viola authored
      Bump zombienet version, including fixes for `ci` failures like:
      
      https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/7511363
      https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/7511379
    • update runners for cmd and docs (#5938) · cb8f4665
      Maksym H authored
      Updated runners for CMD and Docs
    • [pallet-revive] immutable data storage (#5861) · a8ebe9af
      Cyrill Leutwiler authored
      
      This PR introduces the concept of immutable storage data, used for
      [Solidity immutable
      variables](https://docs.soliditylang.org/en/latest/contracts.html#immutable).
      
      This is a minimal implementation. Immutable data is attached to a
      contract; to keep `ContractInfo` fixed in size, we only store the length
      there and store the immutable data in a dedicated storage map instead.
      This comes at the cost of an additional (costly) storage read for
      contracts using this feature.
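      
      The split might look roughly like the following dependency-free sketch
      (struct, field and map names are illustrative, not the actual
      `pallet-revive` definitions): the fixed-size `ContractInfo` record keeps
      only the length, while the bytes themselves live in a separate map and
      cost one extra read.
      
      ```rust
      use std::collections::HashMap;
      
      /// Fixed-size per-contract metadata; only the *length* of the immutable
      /// data is recorded here so the struct size stays constant.
      struct ContractInfo {
          immutable_data_len: u32,
      }
      
      /// Toy "chain state": the immutable bytes live in their own map, so
      /// fetching them costs one additional (relatively expensive) read.
      struct State {
          contract_info: HashMap<[u8; 32], ContractInfo>,
          immutable_data: HashMap<[u8; 32], Vec<u8>>,
      }
      
      impl State {
          fn immutable_data(&self, contract: &[u8; 32]) -> Option<&[u8]> {
              let info = self.contract_info.get(contract)?;
              if info.immutable_data_len == 0 {
                  // Contracts without immutable data skip the second read.
                  return None;
              }
              // The additional storage read mentioned above.
              self.immutable_data.get(contract).map(|v| v.as_slice())
          }
      }
      
      fn main() {
          let addr = [0u8; 32];
          let mut state = State {
              contract_info: HashMap::new(),
              immutable_data: HashMap::new(),
          };
          state.contract_info.insert(addr, ContractInfo { immutable_data_len: 4 });
          state.immutable_data.insert(addr, b"data".to_vec());
          assert_eq!(state.immutable_data(&addr), Some(&b"data"[..]));
      }
      ```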
      
      We discussed more optimal solutions internally that would not require any
      additional storage accesses, but they turned out to be non-trivial to
      implement. Another optimization, benefiting multiple calls to the same
      contract in a single call stack, would be to cache the immutable data in
      `Stack`. However, this potentially creates a DoS vulnerability: the
      attack vector is to call into as many contracts as possible in a single
      stack, each carrying the maximum amount of immutable data, to fill the
      cache as efficiently as possible. So this either has to be guaranteed to
      be a non-issue by limits or, more likely, needs some logic to bound the
      cache. Eventually, we should think about introducing the concept of warm
      and cold storage reads (akin to the EVM). Since immutable variables are
      commonly used in contracts, this change is blocking our initial launch;
      we should optimize it properly in follow-ups.
      
      This PR also disables the `set_code_hash` API (which isn't usable for
      Solidity contracts without pre-compiles anyway). With immutable storage
      attached to contracts, we now want to run the constructor of the new code
      hash to collect the immutable data during `set_code_hash`. This will be
      implemented in a follow-up PR.
      
      ---------
      
      Signed-off-by: Cyrill Leutwiler <bigcyrill@hotmail.com>
      Signed-off-by: xermicus <cyrill@parity.io>
      Co-authored-by: command-bot <>
      Co-authored-by: Alexander Theißen <alex.theissen@me.com>
      Co-authored-by: PG Herveou <pgherveou@gmail.com>
    • Bridge relayer backwards compatibility for reading storage InboundLaneData/OutboundLaneData (#5921) · 73bf37ab
      Branislav Kontur authored
      For permissionless lanes, we add `lane_state` to the `InboundLaneData`
      and `OutboundLaneData` structs. However, for a period of time (until
      both BHK and BHP are upgraded to the same version), we need the relayer
      to function with runtimes where one has been migrated with `lane_state`
      and the other has not. This PR addresses the incompatibility by
      introducing wrapper structs for decoding without `lane_state`.
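      
      A minimal sketch of the wrapper-struct idea, using `parity-scale-codec`
      (imported as `codec`); the field names follow `bp-messages`, but treat
      the exact types and the default `state` value as assumptions rather than
      the definitions introduced by this PR:
      
      ```rust
      use codec::Decode;
      
      /// Post-migration layout (permissionless lanes add a lane state field;
      /// `u8` is a stand-in for the real `LaneState` type).
      #[derive(Decode, Debug)]
      pub struct OutboundLaneData {
          pub oldest_unpruned_nonce: u64,
          pub latest_received_nonce: u64,
          pub latest_generated_nonce: u64,
          pub state: u8,
      }
      
      /// Wrapper that decodes the legacy layout *without* `lane_state`, so the
      /// relayer can still read storage of a runtime that was not migrated yet.
      #[derive(Decode, Debug)]
      pub struct LegacyOutboundLaneData {
          pub oldest_unpruned_nonce: u64,
          pub latest_received_nonce: u64,
          pub latest_generated_nonce: u64,
      }
      
      impl From<LegacyOutboundLaneData> for OutboundLaneData {
          fn from(legacy: LegacyOutboundLaneData) -> Self {
              OutboundLaneData {
                  oldest_unpruned_nonce: legacy.oldest_unpruned_nonce,
                  latest_received_nonce: legacy.latest_received_nonce,
                  latest_generated_nonce: legacy.latest_generated_nonce,
                  // Assume a sensible default state for pre-migration lanes.
                  state: 0,
              }
          }
      }
      ```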
    • XCM paid execution barrier supports more origin altering instructions (#5917) · d968c941
      Adrian Catangiu authored
      
      The `AllowTopLevelPaidExecutionFrom` barrier allows `ClearOrigin`
      instructions before the expected `BuyExecution` instruction; it also
      allows messages without any origin-altering instructions.
      
      This commit enhances the barrier to also support messages that use
      `AliasOrigin` or `DescendOrigin`. This is sometimes desired in asset
      transfer XCM programs that need to run the inbound asset instructions
      using the origin chain's root origin, but then want to drop privileges
      for the rest of the program. Currently these programs drop privileges by
      clearing the origin completely, but that also unnecessarily limits the
      range of actions available to the rest of the program. Using
      `DescendOrigin` or `AliasOrigin` allows the sending chain to instruct the
      receiving chain what the deprivileged real origin is.
      
      See https://github.com/polkadot-fellows/RFCs/pull/109 and
      https://github.com/polkadot-fellows/RFCs/pull/122 for more details on
      how DescendOrigin and AliasOrigin could be used instead of ClearOrigin.
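      
      A dependency-free sketch of the relaxed check (the instruction set and
      helper below are stand-ins; the real barrier in `staging-xcm-builder`
      also handles further instructions, weight accounting and more):
      
      ```rust
      /// Stand-ins for the XCM instructions the barrier inspects.
      #[derive(Clone, Copy, PartialEq)]
      enum Instruction {
          ReceiveTeleportedAsset,
          ReserveAssetDeposited,
          ClearOrigin,
          DescendOrigin,
          AliasOrigin,
          BuyExecution,
          Transact,
      }
      
      /// Before `BuyExecution`, the message may now alter its origin with
      /// `ClearOrigin`, `DescendOrigin` or `AliasOrigin` (previously only
      /// `ClearOrigin` was tolerated).
      fn passes_paid_execution_barrier(program: &[Instruction]) -> bool {
          use Instruction::*;
          let mut iter = program.iter().copied();
          // Skip the leading asset-loading instruction(s).
          let mut next = iter.next();
          while matches!(next, Some(ReceiveTeleportedAsset | ReserveAssetDeposited)) {
              next = iter.next();
          }
          // Allow any number of origin-altering instructions...
          while matches!(next, Some(ClearOrigin | DescendOrigin | AliasOrigin)) {
              next = iter.next();
          }
          // ...as long as execution is then paid for.
          matches!(next, Some(BuyExecution))
      }
      
      fn main() {
          use Instruction::*;
          // Drop privileges to a sub-origin instead of clearing it entirely.
          assert!(passes_paid_execution_barrier(&[
              ReserveAssetDeposited, DescendOrigin, BuyExecution, Transact,
          ]));
          assert!(passes_paid_execution_barrier(&[
              ReceiveTeleportedAsset, AliasOrigin, BuyExecution,
          ]));
          // Unpaid execution is still rejected.
          assert!(!passes_paid_execution_barrier(&[ReserveAssetDeposited, Transact]));
      }
      ```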
      
      ---------
      
      Signed-off-by: Adrian Catangiu <adrian@parity.io>
    • templates: add genesis config presets for minimal/solochain (#5868) · f8807d1e
      Iulian Barbu authored
      # Description
      
      Closes [#5790](https://github.com/paritytech/polkadot-sdk/issues/5790).
      Useful for starting nodes based on minimal/solochain when doing
      development, or for testing the omni-node with less happy code paths. It
      reuses the presets defined for the nodes' chain specs.
      
      ## Integration
      
      Specifically useful for development/testing when generating chain specs
      for the `minimal` or `solochain` runtimes from the `templates` directory.
      
      ## Review Notes
      
      Added `genesis_config_presets` modules for both minimal and solochain. I
      reused the presets defined in each node's `chain_spec` module.
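      
      A rough, dependency-free sketch of what such a `genesis_config_presets`
      module can look like (the preset id, the JSON patch and the `&str`-based
      signature are illustrative; the real modules build typed patches and are
      wired into the runtime's `GenesisBuilder` API):
      
      ```rust
      pub mod genesis_config_presets {
          /// Patch applied on top of the runtime's default `GenesisConfig`
          /// (kept as a raw string here to stay dependency-free).
          fn development_config_genesis() -> String {
              r#"{ "balances": { "balances": [] } }"#.to_string()
          }
      
          /// Entry point the chain-spec builder / omni-node can call to get a
          /// named preset.
          pub fn get_preset(id: &str) -> Option<Vec<u8>> {
              match id {
                  "development" => Some(development_config_genesis().into_bytes()),
                  _ => None,
              }
          }
      
          /// Names of all presets exposed by the runtime.
          pub fn preset_names() -> Vec<&'static str> {
              vec!["development"]
          }
      }
      
      fn main() {
          assert!(genesis_config_presets::get_preset("development").is_some());
          assert!(genesis_config_presets::get_preset("unknown").is_none());
      }
      ```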
      
      ### PRDOC
      
      Not sure who uses templates, maybe node devs and runtime devs at start
      of their learning journey, but happy to get some guidance on how to
      write the prdoc if needed.
      
      ### Thinking out loud
      
      I saw concerns around sharing functionality for such genesis config
      presets between the template chains. I think there might be a case for
      doing that, on ...
  2. Oct 03, 2024
    • Simplify bridges relayer cli configuration (#5912) · a995caf7
      Branislav Kontur authored
      This PR removes the requirement to set the `LaneId` in the relayer CLI
      configuration where it was not really necessary.
      
      ---------
      
      Co-authored-by: command-bot <>
    • Re-establish pallet_revive weights baseline (#5845) · 72309bd8
      Maksym H authored
      - update baseline for pallet_revive
      - update cmd pipeline name
      - Fix compilation after renaming some benchmarks in pallet_revive.
      [Runtime Dev] Changed the "instr" benchmark so that it no longer returns
      too little weight. It is still bogus, but at least benchmarking should
      now work. (by @athei)
      
      ---------
      
      Co-authored-by: GitHub Action <action@github.com>
      Co-authored-by: Alexander Theißen <alex.theissen@me.com>
      Co-authored-by: Alexander Samusev <41779041+alvicsam@users.noreply.github.com>
      Co-authored-by: command-bot <>
    • rpc v2: backpressure chainHead_v1_storage (#5741) · 33131634
      Niklas Adolfsson authored
      
      Close https://github.com/paritytech/polkadot-sdk/issues/5589
      
      This PR makes it possible for `rpc_v2::Storage::query_iter_paginated` to
      be "backpressured". This is achieved by having a channel where the result
      is sent back; when this channel is full, we pause the iteration.
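      
      The mechanism is essentially a bounded channel whose `send` pauses the
      producer when the consumer lags behind. A dependency-free illustration
      (not the actual RPC code) using a blocking `sync_channel`:
      
      ```rust
      use std::sync::mpsc::sync_channel;
      use std::thread;
      
      fn main() {
          // Small buffer, in the spirit of the 16-slot internal channel.
          let (tx, rx) = sync_channel::<u32>(16);
      
          let producer = thread::spawn(move || {
              for item in 0..1_000u32 {
                  // `send` blocks while the channel is full, pausing the
                  // iteration instead of buffering unbounded storage results.
                  tx.send(item).expect("receiver alive");
              }
          });
      
          // The consumer drains items at its own pace; the producer only makes
          // progress when there is room in the channel.
          let received = rx.iter().count();
          producer.join().unwrap();
          assert_eq!(received, 1_000);
      }
      ```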
      
      `chainHead_follow` has an internal channel which doesn't represent the
      actual connection, and it is set to a very small size (16). Recall that
      the JSON-RPC server has a dedicated buffer of 64 messages per connection
      by default.
      
      #### Notes
      
      - Because `archive_storage` also depends on
      `rpc_v2::Storage::query_iter_paginated`, I had to tweak the method to
      support limits as well. The reason is that `archive_storage` won't get
      backpressured properly because it's not a subscription. (It would be much
      easier if it were a subscription in the rpc v2 spec, because then there
      would be nothing against querying a huge number of storage keys.)
      - `query_iter_paginated` doesn't necessarily return the storage "in
      order". For example, `query_iter_paginated(vec![("key1", hash), ("key2",
      value)], ...)` could return the results in arbitrary order because it's
      wrapped in `FuturesUnordered`, but I could change that if we want to
      process it in order (it's slower).
      - There is technically no limit on the number of storage queries in each
      `chainHead_v1_storage` call other than the rpc max message limit, which
      is 10MB, and at most 16 concurrent `chainHead_v1_x` calls are allowed
      (this should be fine).
      
      #### Benchmarks using subxt on localhost
      
      - Iterate over 10 accounts on westend-dev -> ~2-3x faster 
      - Fetch 1024 storage values (i.e., not descendant values) -> ~50x faster
      - Fetch 1024 descendant values -> ~500x faster
      
      The reason for this, as Josep explained in the issue, is that one is only
      allowed to query five storage items per call, so clients have to make
      lots of calls to drive the iteration forward.
      
      ---------
      
      Co-authored-by: command-bot <>
      Co-authored-by: James Wilson <james@jsdw.me>
    • bump zombienet version `v1.3.112` (#5916) · 00f7104c
      Javier Viola authored
      Bump `zombienet` version, including `ci` fixes and the latest embedded
      version of `pjs`.
      Thx!
  3. Oct 01, 2024
    • Beefy equivocation: check all the MMR roots (#5857) · 3de2a925
      Serban Iorga authored
      
      Normally, the BEEFY protocol only accepts a single MMR Root entry in a
      commitment's payload. But to be extra careful, when validating
      equivocation reports, let's check all the MMR roots, if there are more.
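      
      A toy model of that check (the real types are `sp_consensus_beefy::Payload`
      and `known_payloads::MMR_ROOT_ID`; the two-byte id below is only a
      stand-in): instead of stopping at the first MMR root entry, collect every
      entry carrying the MMR root id and validate all of them.
      
      ```rust
      type PayloadId = [u8; 2];
      const MMR_ROOT_ID: PayloadId = *b"mh";
      
      /// Return every payload entry claiming to be an MMR root, not just the
      /// first one.
      fn all_mmr_roots(payload: &[(PayloadId, Vec<u8>)]) -> Vec<&[u8]> {
          payload
              .iter()
              .filter(|(id, _)| *id == MMR_ROOT_ID)
              .map(|(_, root)| root.as_slice())
              .collect()
      }
      
      fn main() {
          let payload = vec![
              (MMR_ROOT_ID, vec![1u8; 32]),
              (*b"xx", vec![0u8; 4]),
              // A second (suspicious) MMR root entry must also be checked.
              (MMR_ROOT_ID, vec![2u8; 32]),
          ];
          assert_eq!(all_mmr_roots(&payload).len(), 2);
      }
      ```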
      
      ---------
      
      Co-authored-by: Adrian Catangiu <adrian@parity.io>
    • Remove ValidateFromChainState (#5707) · 1617852a
      Andrei Eres authored
      # Description
      
      This PR removes
      `CandidateValidationMessage::ValidateFromChainState`, which was
      previously used by backing, but is no longer relevant since the initial
      async backing implementation
      (https://github.com/paritytech/polkadot/pull/5557).
      
      Fixes https://github.com/paritytech/polkadot-sdk/issues/5643
      
      ## Integration
      
      This change should not affect downstream projects since
      `ValidateFromChainState` was already unused.
      
      ## Review Notes
      
      - Removed all occurrences of `ValidateFromChainState`.
      - Moved utility functions, previously used in candidate validation tests
      and malus, exclusively to candidate validation tests as they are no
      longer used in malus.
      - Deleted the
      `polkadot_parachain_candidate_validation_validate_from_chain_state`
      metric from Prometheus.
      - Removed `Spawner` from `ReplaceValidationResult` in malus’
      interceptors.
      - `fake_validation_error` was only used for `ValidateFromChainState`
      handling, while other cases directly used
      `InvalidCandidate::InvalidOutputs`. Those direct usages have been
      replaced with `fake_validation_error`, with a fallback to
      `InvalidCandidate::InvalidOutputs`.
      - Updated overseer’s minimal example to replace `ValidateFromChainState`
      with `ValidateFromExhaustive`.
  4. Sep 29, 2024
    • Improve APIs for Tries in Runtime (#5756) · 05b5fb2b
      Shawn Tabrizi authored
      
      This is a refactor and improvement from:
      https://github.com/paritytech/polkadot-sdk/pull/3881
      
      - `sp_runtime::proving_trie` now exposes a `BasicProvingTrie` for both
      `base2` and `base16`.
      - APIs for `base16` are more focused on single-value proofs, aligning
      them with the `base2` trie APIs.
      - A `ProvingTrie` trait is included which wraps both the `base2` and
      `base16` tries and exposes all APIs needed for an end-to-end scenario.
      - A `ProofToHashes` trait is exposed which allows us to write proper
      benchmarks for the merkle trie.
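      
      For orientation, the end-to-end surface looks roughly like the traits
      sketched below. This is a local illustration only; the real `ProvingTrie`
      and `ProofToHashes` traits live in `sp_runtime::proving_trie` and their
      exact signatures may differ.
      
      ```rust
      /// A trie that can commit to a set of key/value pairs and prove single
      /// values against the resulting root.
      pub trait ProvingTrie<Key, Value, Root>: Sized {
          type Error;
      
          /// Build the trie from all key/value pairs.
          fn generate_for(items: Vec<(Key, Value)>) -> Result<Self, Self::Error>;
          /// The root committing to every item.
          fn root(&self) -> Root;
          /// Produce a proof for a single key (the focus of the `base16` APIs).
          fn create_proof(&self, key: &Key) -> Result<Vec<u8>, Self::Error>;
          /// Check a single-value proof against a previously published root.
          fn verify_proof(
              root: &Root,
              proof: &[u8],
              key: &Key,
              value: &Value,
          ) -> Result<(), Self::Error>;
      }
      
      /// Companion trait for benchmarking: bound the verification cost by the
      /// number of hashes implied by a proof.
      pub trait ProofToHashes {
          fn proof_to_hashes(proof: &[u8]) -> u32;
      }
      ```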
      
      ---------
      
      Co-authored-by: Ankan <10196091+Ank4n@users.noreply.github.com>
      Co-authored-by: Adrian Catangiu <adrian@parity.io>
  5. Sep 28, 2024
    • Clarify firing of `import_notification_stream` in doc comment (#5811) · df12fd34
      Facundo Farall authored
      
      # Description
      
      Updates the doc comment on the `import_notification_stream` to make its
      behaviour clearer.
      
      Closes [Unexpected behaviour of block
      `import_notification_stream`](https://github.com/paritytech/polkadot-sdk/issues/5596).
      
      ## Integration
      
      Doesn't apply.
      
      ## Review Notes
      
      The old doc comment caused some confusion to me and some members of my
      team about when this notification stream is triggered. This is reflected
      in the linked
      [issue](https://github.com/paritytech/polkadot-sdk/issues/5596), and as
      discussed there, this PR aims to prevent this confusion for future devs
      looking to make use of this functionality.
      
      # Checklist
      
      * [x] My PR includes a detailed description as outlined in the
      "Description" and its two subsections above.
      * [ ] My PR follows the [labeling requirements](https://github.com/paritytech/polkadot-sdk/blob/master/docs/contributor/CONTRIBUTING.md#Process)
      of this project (at minimum one label for `T` required)
      * External contributors: ask maintainers to put the right label on your
      PR.
      * [x] I have made corresponding changes to the documentation (if
      applicable)
      * [x] I have added tests that prove my fix is effective or that my
      feature works (if applicable)
      
      You can remove the "Checklist" section once all have been checked. Thank
      you for your contribution!
      
      ---------
      
      Co-authored-by: Michal Kucharczyk <1728078+michalkucharczyk@users.noreply.github.com>
      Co-authored-by: Bastian Köcher <git@kchr.de>
    • Update runtimes-matrix.json (#5829) · 0a569963
      Maksym H authored
      
      Just a tiny config fix
      
      Co-authored-by: Bastian Köcher <git@kchr.de>
    • substrate/utils: enable wasm builder diagnostics propagation (#5838) · 58ade7a6
      Iulian Barbu authored
      
      # Description
      
      `substrate-wasm-builder` can be a build dependency for crates which
      develop FRAME runtimes. I had a tough time seeing errors happening in
      such crates (e.g. runtimes from the `templates` directory) in my IDE. I
      use a combination of rust-analyzer + nvim-lsp + nvim-lspconfig +
      rustacean.vim, and none of this stack is able to correctly parse errors
      emitted during the `build` phase.
      
      As a matter of fact, there is also a cargo issue tracking specifically
      this problem, where cargo doesn't propagate the `--message-format` flag
      to the build phase:
      [here](https://github.com/rust-lang/cargo/issues/14246) initially and
      then [here](https://github.com/rust-lang/cargo/issues/8283). It feels
      like a solution for this use case isn't very close, so when it comes to
      runtime development (both as an SDK user and developer), enabling the
      wasm builder to emit diagnostic messages friendly to IDEs would be useful
      for regular workflows where IDEs are used for finding errors instead of
      manually running `cargo` commands.
      
      ## Integration
      
      It can be an issue if Substrate/FRAME SDK users and developers rely on
      the runtime crates' build-phase output in certain ways. Emitting
      compilation messages as JSON will pollute the regular compilation output,
      so people who manually run `cargo build` or `cargo check` on their crates
      will have a tougher time extracting the non-JSON output.
      
      ## Review Notes
      
      Rust IDEs based on rust-analyzer rely on cargo check/clippy to extract
      diagnostic information. The information is generated by passing flags
      like `--message-format=json` to the `cargo` commands. The messages are
      extracted by rust-analyzer and published to LSP clients, which populate
      their UIs accordingly.
      
      We need to build against the wasm target with `--message-format=json`
      too, so that IDEs can show the errors for crates that have a build
      dependency on `substrate-wasm-builder`.
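      
      For reference, the runtime crate's `build.rs` keeps the usual
      `substrate-wasm-builder` invocation (the long-standing template pattern
      shown below). The change lives in the builder itself; the assumption here
      is that the nested wasm build now forwards its compiler diagnostics in
      JSON form when the outer `cargo check` (e.g. the one rust-analyzer runs
      with `--message-format=json`) asks for it.
      
      ```rust
      // build.rs of a runtime crate depending on `substrate-wasm-builder`.
      fn main() {
          #[cfg(feature = "std")]
          {
              substrate_wasm_builder::WasmBuilder::new()
                  .with_current_project()
                  .export_heap_base()
                  .import_memory()
                  .build();
          }
      }
      ```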
      
      ---------
      
      Signed-off-by: Iulian Barbu <iulian.barbu@parity.io>
      Co-authored-by: Bastian Köcher <git@kchr.de>