  1. Aug 24, 2024
  2. Aug 18, 2024
  3. Aug 15, 2024
  4. Jul 29, 2024
  5. Jul 26, 2024
    • Replace homepage in all TOML files (#5118) · d3d1542c
      Kian Paimani authored
      A bit of a controversial move, but a good preparation for even further
      reducing the traffic on outdated content of `substrate.io`. Current
      status:
      
      [Screenshot 2024-07-15 at 11 32 48]
      
      Previously, I was in favor of changing the domain of the rust-docs to
      something like `polkadot-sdk.parity.io` or similar, but I think the
      current format is pretty standard and has a higher chance of staying put
      over the course of time:
      
      `<org-name>.github.io/<repo-name>` ->
      `https://paritytech.github.io/polkadot-sdk/`
      
      part of https://github.com/paritytech/eng-automation/issues/10
  6. Jul 15, 2024
    • Remove most all usage of `sp-std` (#5010) · 7ecf3f75
      Jun Jiang authored
      
      This should remove nearly all usage of `sp-std`, except:
      - bridge and bridge-hubs
      - a few frame crates re-export `sp-std`; keep them for now
      - there is one usage of `sp_std::Writer` that I don't yet know how to
      move
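      A minimal sketch of what the replacement typically looks like in a
      `no_std` crate, assuming the usual pattern of swapping the `sp-std`
      facade for direct `core`/`alloc` imports (the exact imports vary per
      crate; the struct is just an illustration):
      
      ```rust
      // Hypothetical before/after for a typical no_std crate in the workspace.
      #![cfg_attr(not(feature = "std"), no_std)]
      
      extern crate alloc;
      
      // Before: items were pulled in through the `sp_std` facade.
      // use sp_std::{collections::btree_map::BTreeMap, marker::PhantomData, vec::Vec};
      
      // After: the same items come directly from `core` and `alloc`.
      use alloc::{collections::BTreeMap, vec::Vec};
      use core::marker::PhantomData;
      
      /// Dummy type just to show the imports in use.
      pub struct Example<T>(PhantomData<T>, Vec<u8>, BTreeMap<u32, u32>);
      ```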
      
      Please review the proc-macro changes carefully; I'm not sure I'm doing
      them the right way.
      
      Note: need `/bot fmt`
      
      ---------
      
      Co-authored-by: Bastian Köcher <git@kchr.de>
      Co-authored-by: command-bot <>
  7. Jul 05, 2024
    • Introduce basic slot-based collator (#4097) · e44f61af
      Sebastian Kunert authored
      Part of #3168 
      On top of #3568
      
      ### Changes Overview
      - Introduces a new collator variant in
      `cumulus/client/consensus/aura/src/collators/slot_based/mod.rs`
      - Two tasks are part of that module, one for block building and one for
      collation building and submission.
      - Introduces a new variant of `cumulus-test-runtime` which has 2s slot
      duration, used for zombienet testing
      - Zombienet tests for the new collator
      
      **Note:** This collator is considered experimental and should only be
      used for testing and exploration for now.
      
      ### Comparison with `lookahead` collator
      - The new variant is slot based, meaning it waits for the next slot of
      the parachain, then starts authoring
      - The search for potential parents remains mostly unchanged from
      lookahead
      - As anchor, we use the current best relay parent
      - In general, the new collator tends to be anchored to one relay parent
      earlier. `lookahead` generally waits for a new relay block to arrive
      before it attempts to build a block. ...
  8. Jun 24, 2024
    • Lift all dependencies (the big one) (#4716) · 8efa0544
      Oliver Tale-Yazdi authored
      
      After preparing in https://github.com/paritytech/polkadot-sdk/pull/4633,
      we can lift also all internal dependencies up to the workspace.
      
      This does not actually change anything, but uses `workspace = true` for
      all dependencies. You can check it with:
      ```bash
      git checkout -q $(git merge-base oty-lift-all-deps origin/master)
      cargo tree -e features > master.out
      
      git checkout -q oty-lift-all-deps
      cargo tree -e features > new.out
      diff master.out new.out
      ```
      
      It does not yet lift 100% of the dependencies; some inside `target.*`
      sections, or some with conflicting aliases introduced recently, are left
      as they were. But I will do these together in a follow-up with CI checks.
      
      Can be reproduced with [zepter](https://github.com/ggwpez/zepter/):
      `zepter transpose d lift-to-workspace "regex:.*" --version-resolver
      highest --skip-package "polkadot-sdk" --ignore-errors --fix`.
      
      ---------
      
      Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
  9. Jun 21, 2024
  10. Jun 13, 2024
  11. Jun 07, 2024
  12. Jun 05, 2024
    • Unify dependency aliases (#4633) · d2fd5364
      Oliver Tale-Yazdi authored
      
      Inherited workspace dependencies cannot be renamed by the crate using
      them (see [1](https://github.com/rust-lang/cargo/issues/12546),
      [2](https://stackoverflow.com/questions/76792343/can-inherited-dependencies-in-rust-be-aliased-in-the-cargo-toml-file)).
      Since we want to use inherited workspace dependencies everywhere, we
      first need to unify all aliases that we use for a dependency throughout
      the workspace.
      The umbrella crate is currently excluded from this procedure, since it
      should be able to export the crates by their original name without much
      hassle.
      
      For example: one crate may alias `parity-scale-codec` to `codec`, while
      another crate does not alias it at all. After this change, all crates
      have to use `codec` as the name. The problematic combinations were:
      - conflicting aliases: most crates alias a dependency as `A`, but some use `B`.
      - missing alias: most crates alias a dependency, but some don't.
      - superfluous alias: most crates don't alias a dependency, but some do.
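      Illustrative effect at the Rust level, assuming a crate whose
      `Cargo.toml` declares `parity-scale-codec` under the unified `codec`
      alias with the `derive` feature (the struct is just an example):
      
      ```rust
      // After unification, every crate refers to the dependency by the same name.
      use codec::{Decode, Encode};
      
      #[derive(Encode, Decode)]
      pub struct Transfer {
          pub to: [u8; 32],
          pub amount: u128,
      }
      ```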
      
      The script that I used first determines whether most crates opted to
      alias a dependency or not. From that info it decides whether to use an
      alias. If it decides to use an alias, the most common one is used
      everywhere.
      
      To reproduce, I used
      [this](https://github.com/ggwpez/substrate-scripts/blob/master/uniform-crate-alias.py)
      Python script in combination with
      [this](https://github.com/ggwpez/zepter/blob/38ad10585fe98a5a86c1d2369738bc763a77057b/renames.json)
      error output from Zepter.
      
      ---------
      
      Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
      Co-authored-by: Bastian Köcher <git@kchr.de>
  13. Jun 03, 2024
  14. May 27, 2024
  15. May 24, 2024
    • Polkadot-SDK Umbrella Crate (#3935) · 1c7a1a58
      Oliver Tale-Yazdi authored
      
      # Umbrella Crate
      
      The Polkadot-SDK "umbrella" is a crate that re-exports all other
      published crates. This makes it
      possible to have a very small `Cargo.toml` file that only has one
      dependency, the umbrella
      crate. This helps with selecting the right combination of crate
      versions, since otherwise 3rd
      party tools are needed to select a compatible set of versions.
      
      ## Features
      
      The umbrella crate supports no-std builds and can therefore be used in
      the runtime and node.
      There are two main features: `runtime` and `node`. The `runtime` feature
      enables all `no-std`
      crates, while the `node` feature enables all `std` crates. It should be
      used like any other
      crate in the repo, with `default-features = false`.
      
      For more fine-grained control, additionally, each crate can be enabled
      selectively. The umbrella
      exposes one feature per dependency. For example, if you only want to use
      the `frame-support`
      crate, you can enable the `frame-support` feature.
      
      The umbrella exposes a few more general features:
      - `tuples-96`: Needs to be enabled for runtimes that have more than 64
      pallets.
      - `serde`: Specifically enable `serde` en/decoding support.
      - `experimental`: Enable experimental features - these should not
      yet be used in production.
      - `with-tracing`: Enable tracing support.
      - `try-runtime`, `runtime-benchmarks` and `std`: These follow the
      standard conventions.
      - `runtime`: As described above, enable all `no-std` crates.
      - `node`: As described above, enable all `std` crates.
      - There does *not* exist a dedicated docs feature. To generate docs,
      enable the `runtime` and `node` features. For docs.rs, the manifest
      contains specific configuration to make it show all re-exports.
      
      There is a specific `zepter` check in place to ensure that the features
      of the umbrella are
      correctly configured. This check is run in CI and locally when running
      `zepter`.
      
      ## Generation
      
      The umbrella crate needs to be updated every time a new crate is added
      to or removed from the workspace. It is checked in CI by calling its
      generation script. The generation script is located in
      `./scripts/generate-umbrella.py` and needs the `cargo_workspace`
      dependency.
      
      Example: `python3 scripts/generate-umbrella.py --sdk . --version 1.9.0`
      
      ## Usage
      
      > Note: You can see a live example in the `staging-node-cli` and
      `kitchensink-runtime` crates.
      
      The umbrella crate can be added to your runtime crate like this:
      
      `polkadot-sdk = { path = "../../../../umbrella", features = ["runtime"],
      default-features =
      false}`
      
      or for a node:
      
      `polkadot-sdk = { path = "../../../../umbrella", features = ["node"],
      default-features = false
      }`
      
      In the code, it is then possible to bring all dependencies into scope
      via:
      
      `use polkadot_sdk::*;`
      
      ### Known Issues
      
      The only known issue so far is the fact that the `use` statement brings
      the dependencies only
      into the outer module scope - not the global crate scope. For example,
      the following code would
      need to be adjusted:
      
      ```rust
      use polkadot_sdk::*;
      
      mod foo {
         // This does sadly not compile:
         frame_support::parameter_types! { }
      
         // Instead, we need to do this (or add an equivalent `use` statement):
         polkadot_sdk::frame_support::parameter_types! { }
      }
      ```
      
      Apart from this, no issues are known. There could be some bugs with how
      macros locate their own re-exports. Please report issues that arise from
      using this crate.
      
      ## Dependencies
      
      The umbrella crate re-exports all published crates, with a few
      exceptions:
      - Runtime crates like `rococo-runtime` etc are not exported. This
      otherwise leads to very weird
        compile errors and should not be needed anyway.
      - Example and fuzzing crates are not exported. This is currently
      detected by checking the name
      of the crate for these magic words. In the future, it will utilize
      custom metadata, as it is
        done in the `rococo-runtime` crate.
      - The umbrella crate itself. Should be obvious :)
      
      ## Follow Ups
      - [ ] Re-writing the generator in Rust - the Python script is at its
      limit.
      - [ ] Using custom metadata to exclude some crates instead of filtering
      by names.
      - [ ] Finding a way to set the version properly. Currently it is
      locked in the CI script.
      
      ---------
      
      Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
    • Attempt to avoid specifying `BlockHashCount` for different... · ef144b1a
      Branislav Kontur authored
      Attempt to avoid specifying `BlockHashCount` for different `mocking::{MockBlock, MockBlockU32, MockBlockU128}` (#4543)
      
      While doing some migration/rebase work I came into a situation where I
      needed to change `mocking::MockBlock` to `mocking::MockBlockU32`:
      ```
      #[derive_impl(frame_system::config_preludes::TestDefaultConfig)]
      impl frame_system::Config for TestRuntime {
      	type Block = frame_system::mocking::MockBlockU32<TestRuntime>;
      	type AccountData = pallet_balances::AccountData<ThisChainBalance>;
      }
      ```
      But the actual `TestDefaultConfig` for `frame_system` uses `ConstU64`
      for `type BlockHashCount = frame_support::traits::ConstU64<10>;`
      [here](https://github.com/paritytech/polkadot-sdk/blob/master/substrate/frame/system/src/lib.rs#L303).
      Because of this, it forced me to specify an override for `type
      BlockHashCount = ConstU32<10>`.
      
      This PR tries to fix this with a `TestBlockHashCount` implementation for
      `TestDefaultConfig` that supports `u32`, `u64` and `u128` as the
      `BlockNumber`.
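      A hedged sketch of the approach (names and bounds are illustrative and
      may differ from the final upstream code; assumes `frame_support` as a
      dependency): a single getter that satisfies `Get<I>` for any block
      number type convertible from `u32`:
      
      ```rust
      use core::marker::PhantomData;
      use frame_support::traits::{ConstU32, Get};
      
      /// Block-hash-count getter that works for `u32`, `u64` and `u128` block
      /// numbers alike, because all of them implement `From<u32>`.
      pub struct TestBlockHashCount<C: Get<u32>>(PhantomData<C>);
      
      impl<I: From<u32>, C: Get<u32>> Get<I> for TestBlockHashCount<C> {
          fn get() -> I {
              C::get().into()
          }
      }
      
      /// Example default of 10 that satisfies `Get<u32>`, `Get<u64>` and `Get<u128>`.
      pub type DefaultBlockHashCount = TestBlockHashCount<ConstU32<10>>;
      ```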
      
      ### How to simulate the error
      Just remove `type BlockHashCount = ConstU32<250>;`
      [here](https://github.com/paritytech/polkadot-sdk/blob/master/substrate/frame/multisig/src/tests.rs#L44):
      ```
      :~/parity/olkadot-sdk$ cargo test -p pallet-multisig
         Compiling pallet-multisig v28.0.0 (/home/bparity/parity/aaa/polkadot-sdk/substrate/frame/multisig)
      error[E0277]: the trait bound `ConstU64<10>: frame_support::traits::Get<u32>` is not satisfied
         --> substrate/frame/multisig/src/tests.rs:41:1
          |
      41  | #[derive_impl(frame_system::config_preludes::TestDefaultConfig)]
          | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the trait `frame_support::traits::Get<u32>` is not implemented for `ConstU64<10>`
          |
          = help: the following other types implement trait `frame_support::traits::Get<T>`:
                    <ConstU64<T> as frame_support::traits::Get<u64>>
                    <ConstU64<T> as frame_support::traits::Get<std::option::Option<u64>>>
      note: required by a bound in `frame_system::Config::BlockHashCount`
         --> /home/bparity/parity/aaa/polkadot-sdk/substrate/frame/system/src/lib.rs:535:24
          |
      535 |         type BlockHashCount: Get<BlockNumberFor<Self>>;
          |                              ^^^^^^^^^^^^^^^^^^^^^^^^^ required by this bound in `Config::BlockHashCount`
          = note: this error originates in the attribute macro `derive_impl` which comes from the expansion of the macro `frame_support::macro_magic::forward_tokens_verbatim` (in Nightly builds, run with -Z macro-backtrace for more info)
      
      For more information about this error, try `rustc --explain E0277`.
      error: could not compile `pallet-multisig` (lib test) due to 1 previous error 
      ```
      
      
      
      
      ## For reviewers:
      
      (If there is a better solution, please let me know!)
      
      The first commit contains the actual attempt to fix the problem:
      https://github.com/paritytech/polkadot-sdk/commit/3c5499e5
      The second commit just removes `BlockHashCount` from all other
      places where it is not needed by default.
      
      Closes: https://github.com/paritytech/polkadot-sdk/issues/1657
      
      ---------
      
      Co-authored-by: Bastian Köcher <git@kchr.de>
  16. May 22, 2024
  17. May 16, 2024
    • [Runtime] Bound XCMP queue (#3952) · 4adfa37d
      Oliver Tale-Yazdi authored
      
      Re-applying #2302 after increasing the `MaxPageSize`.  
      
      Remove `without_storage_info` from the XCMP queue pallet. Part of
      https://github.com/paritytech/polkadot-sdk/issues/323
      
      Changes:
      - Limit the number of messages and signals an HRMP channel can hold at
      most.
      - Limit the number of HRMP channels.
      
      A no-op migration is put in place to ensure that all `BoundedVec`s still
      decode and do not truncate after the upgrade. The storage version is
      bumped to 5 so that our tooling reminds us to deploy that migration.
      
      ## Integration
      
      If you see this error in your try-runtime-cli:  
      ```pre
      Max message size for channel is too large. This means that the V5 migration can be front-run and an
      attacker could place a large message just right before the migration to make other messages un-decodable.
      Please either increase `MaxPageSize` or decrease the `max_message_size` for this channel. Channel max:
      102400, MaxPageSize: 65535
      ```
      
      Then increase the `MaxPageSize` of the `cumulus_pallet_xcmp_queue` to
      something like this:
      ```rust
      type MaxPageSize = ConstU32<{ 103 * 1024 }>;
      ```
      
      There is currently no easy way for on-chain governance to adjust the
      HRMP max message size of all channels, but it could be done:
      https://github.com/paritytech/polkadot-sdk/issues/3145.
      
      ---------
      
      Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
      Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com>
    • Deprecate `dmp-queue` pallet (#4475) · 76230a15
      Oliver Tale-Yazdi authored
      `cumulus-pallet-dmp-queue` is not needed anymore since
      https://github.com/paritytech/polkadot-sdk/pull/1246.
      
      The only logic that remains in the pallet is a lazy migration in the
      [`on_idle`](https://github.com/paritytech/polkadot-sdk/blob/8d62c13b/cumulus/pallets/dmp-queue/src/lib.rs#L158)
      hook.
      
      ---------
      
      Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
  18. May 15, 2024
    • Make vscode rustanalyzer fast again (#4470) · e31fcffb
      Alexandru Gheorghe authored
      
      This bump of versions:
      
      https://github.com/paritytech/polkadot-sdk/pull/4409/files#diff-13ee4b2252c9e516a0547f2891aa2105c3ca71c6d7a1e682c69be97998dfc87eR11936
      
      reintroduced a dependency on proc-macro-crate 2.0.0, which suffers
      from https://github.com/bkchr/proc-macro-crate/pull/42, so bump
      parity-scale-codec to a newer version to eliminate the bad
      proc-macro-crate 2.0.0 dependency.
      
      ---------
      
      Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
      Co-authored-by: command-bot <>
  19. May 08, 2024
    • XcmDryRunApi - Dry-running extrinsics to get their XCM effects (#3872) · 7213e363
      Francisco Aguirre authored
      
      # Context
      
      Estimating fees for XCM execution and sending has been an area with bad
      UX.
      The addition of the
      [XcmPaymentApi](https://github.com/paritytech/polkadot-sdk/pull/3607)
      exposed the necessary components to estimate XCM fees
      correctly; however, that was not the full story.
      The `XcmPaymentApi` works for estimating fees only if you know the
      specific XCM you want to execute or send.
      This is necessary, but most UIs want to estimate the fees for extrinsics;
      they don't necessarily know the XCM program that's executed by them.
      
      # Main addition
      
      A new runtime API is introduced, the `XcmDryRunApi`, that given an
      extrinsic, or an XCM program, returns its effects:
      - Execution result
      - Local XCM (in the case of an extrinsic)
      - Forwarded XCMs
      - List of events
      
      This API can be used on its own for dry-running purposes, for
      double-checking or testing, but it mainly shines when used in
      conjunction with the `XcmPaymentApi`.
      UIs can use these two APIs to estimate transfers.
      
      # How it works
      
      New tests are added to exemplify how to incorporate both APIs.
      There's a mock test just to make sure everything works under
      `xcm-fee-payment-runtime-api`.
      There's a real-world test using Westend and AssetHubWestend under
      `cumulus/parachains/integration-tests/emulated/tests/assets/asset-hub-westend/src/tests/xcm_fee_estimation.rs`.
      Added tests for both a simple teleport between chains and a reserve
      asset transfer between two parachains going through a reserve.
      
      The steps to follow (see the sketch after this list):
      - Use `XcmDryRunApi::dry_run_extrinsic` to get the local XCM program and
      the forwarded messages
      - For each forwarded message:
        - Use `XcmPaymentApi::query_delivery_fee` LOCALLY to get the delivery
        fees
        - Use `XcmPaymentApi::query_xcm_weight` ON THE DESTINATION to get the
        remote execution weight
        - (optional) Use `XcmPaymentApi::query_acceptable_payment_assets` ON
        THE DESTINATION to know in which assets the execution fees can be paid
        - Use `XcmPaymentApi::query_weight_to_asset_fee` ON THE DESTINATION to
        convert weight to the actual remote execution fees
        - Use `XcmDryRunApi::dry_run_xcm` ON THE DESTINATION to know if a new
        message will be forwarded; if so, continue
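      A self-contained sketch of that loop. All types and trait methods here
      are simplified stand-ins for the real runtime APIs (`XcmDryRunApi` /
      `XcmPaymentApi`); they only exist to make the control flow concrete, not
      to mirror the actual signatures:
      
      ```rust
      type Xcm = Vec<u8>;   // stand-in for a real XCM program
      type Location = u32;  // stand-in for an XCM location
      type Balance = u128;
      
      trait LocalChain {
          /// Stand-in for `XcmDryRunApi::dry_run_extrinsic`: returns the messages
          /// that would be forwarded to other chains.
          fn dry_run_extrinsic(&self, xt: Vec<u8>) -> Vec<(Location, Xcm)>;
          /// Stand-in for `XcmPaymentApi::query_delivery_fee`, queried LOCALLY.
          fn query_delivery_fee(&self, dest: Location, msg: &Xcm) -> Balance;
      }
      
      trait RemoteChain {
          /// Stand-in for `query_xcm_weight` + `query_weight_to_asset_fee`,
          /// queried ON THE DESTINATION.
          fn query_execution_fee(&self, msg: &Xcm) -> Balance;
          /// Stand-in for `XcmDryRunApi::dry_run_xcm` ON THE DESTINATION:
          /// returns any further forwarded messages.
          fn dry_run_xcm(&self, msg: &Xcm) -> Vec<(Location, Xcm)>;
      }
      
      /// One round of fee estimation: local delivery fees plus remote execution
      /// fees for every directly forwarded message. A real implementation would
      /// recurse into whatever `dry_run_xcm` forwards further.
      fn estimate_total_fee(local: &impl LocalChain, remote: &impl RemoteChain, xt: Vec<u8>) -> Balance {
          let mut total: Balance = 0;
          for (dest, msg) in local.dry_run_extrinsic(xt) {
              total += local.query_delivery_fee(dest, &msg);
              total += remote.query_execution_fee(&msg);
              let _further_forwarded = remote.dry_run_xcm(&msg);
          }
          total
      }
      ```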
      
      # Dear reviewer
      
      The changes in this PR are grouped as follows, and in order of
      importance:
      - Addition of the new runtime API
        - Definition, mock and simple tests:
        polkadot/xcm/xcm-fee-payment-runtime-api/*
        - Implemented on Westend, Asset Hub Westend and Penpal, will implement
        on every runtime in a following PR
      - Addition of a new config item to the XCM executor for recording xcms
      about to be executed
        - Definition: polkadot/xcm/xcm-executor/*
        - Implementation: polkadot/xcm/pallet-xcm/*
        - Had to update all runtime xcm_config.rs files with `type XcmRecorder =
        XcmPallet;`
      - Addition of a new trait for inspecting the messages in queues
        - Definition: polkadot/xcm/xcm-builder/src/routing.rs
        - Implemented it on all routers:
          - ChildParachainRouter: polkadot/runtime/common/src/xcm_sender.rs
          - ParentAsUmp: cumulus/primitives/utility/src/lib.rs (piggybacked on
          the implementation in cumulus/pallets/parachain-system/src/lib.rs)
          - XcmpQueue: cumulus/pallets/xcmp-queue/src/lib.rs
          - Bridge: bridges/modules/xcm-bridge-hub-router/src/lib.rs
      - More complicated and useful tests:
        - cumulus/parachains/integration-tests/emulated/tests/assets/asset-hub-westend/src/tests/xcm_fee_estimation.rs
      
      ## Next steps
      
      With this PR, Westend, AssetHubWestend, Rococo and AssetHubRococo have
      the new API.
      UIs can test on these runtimes to create better experiences around
      cross-chain operations.
      
      Next:
      - Add XcmDryRunApi to all system parachains
      - Integrate xcm fee estimation in all emulated tests
      - Get this on the fellowship runtimes
      
      ---------
      
      Co-authored-by: Adrian Catangiu <adrian@parity.io>
  20. Apr 23, 2024
  21. Apr 17, 2024
    • Fix nostd build of several crates (#4060) · 7a2c9d4a
      Oliver Tale-Yazdi authored
      
      Preparation for https://github.com/paritytech/polkadot-sdk/pull/3935
      
      Changes:
      - Add some `default-features = false` for cases where both a crate and
      its dependency support no-std builds.
      - Shuffle files around in some benchmarking-only crates. These
      conditionally disabled the `cfg_attr` for no-std and pulled in libstd.
      Example [here](https://github.com/ggwpez/zepter/pull/95). The actual
      logic is moved into an `inner.rs` to preserve the no-std capability of
      the crate in case the benchmarking feature is disabled (see the sketch
      below).
      - Add some `use sp_std::vec` where needed.
      - Remove some `optional = true` in cases where it was not optional.
      - Removed one superfluous `cfg_attr(not(feature = "std"), no_std..`.
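      A sketch of that file shuffle, assuming a typical benchmarking feature
      gate (the module is shown inline so the snippet is self-contained; in
      the actual crates it is a separate `inner.rs` file):
      
      ```rust
      // Crate root: the no_std attribute is kept unconditionally.
      #![cfg_attr(not(feature = "std"), no_std)]
      
      // The std-dependent benchmarking logic only compiles when its feature is
      // on, so the crate stays no_std-capable otherwise.
      #[cfg(feature = "runtime-benchmarks")]
      mod inner {
          // ...benchmarking helpers that may pull in libstd go here...
      }
      
      #[cfg(feature = "runtime-benchmarks")]
      pub use inner::*;
      ```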
      
      All in all, this should be a logical no-op.
      
      ---------
      
      Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
  22. Apr 12, 2024
    • pallet-xcm: add new extrinsic for asset transfers using explicit XCM transfer types (#3695) · 1e971b8d
      Adrian Catangiu authored
      
      # Description
      
      Add `transfer_assets_using()` for transferring assets from the local chain
      to a destination chain using explicit XCM transfer types (sketched as an
      enum after this list) such as:
      - `TransferType::LocalReserve`: transfer assets to sovereign account of
      destination chain and forward a notification XCM to `dest` to mint and
      deposit reserve-based assets to `beneficiary`.
      - `TransferType::DestinationReserve`: burn local assets and forward a
      notification to `dest` chain to withdraw the reserve assets from this
      chain's sovereign account and deposit them to `beneficiary`.
      - `TransferType::RemoteReserve(reserve)`: burn local assets, forward XCM
      to `reserve` chain to move reserves from this chain's SA to `dest`
      chain's SA, and forward another XCM to `dest` to mint and deposit
      reserve-based assets to `beneficiary`. Typically the remote `reserve` is
      Asset Hub.
      - `TransferType::Teleport`: burn local assets and forward XCM to `dest`
      chain to mint/teleport assets and deposit them to `beneficiary`.
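      For orientation, the four transfer types sketched as a plain enum
      (simplified: in the real pallet-xcm/XCM-executor types, `RemoteReserve`
      carries a versioned location rather than the bare stand-in used here):
      
      ```rust
      /// Stand-in for an XCM location.
      pub type Location = u32;
      
      pub enum TransferType {
          /// Keep the assets in `dest`'s sovereign account on the local chain.
          LocalReserve,
          /// Burn locally; withdraw from this chain's sovereign account on `dest`.
          DestinationReserve,
          /// Burn locally; move reserves on a third `reserve` chain (typically Asset Hub).
          RemoteReserve(Location),
          /// Burn locally; mint/teleport on `dest`.
          Teleport,
      }
      ```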
      
      By default, an asset's reserve is its origin chain. But sometimes we may
      want to explicitly use another chain as reserve (as long as allowed by
      runtime `IsReserve` filter).
      
      This is very helpful for transferring assets with multiple configured
      reserves (such as Asset Hub ForeignAssets), when the transfer strictly
      depends on the used reserve.
      
      E.g. For transferring Foreign Assets over a bridge, Asset Hub must be
      used as the reserve location.
      
      # Example usage scenarios
      
      ## Transfer bridged ethereum ERC20-tokenX between ecosystem parachains.
      
      ERC20-tokenX is registered on AssetHub as a ForeignAsset by the
      Polkadot<>Ethereum bridge (Snowbridge). Its asset_id is something like
      `(parents:2, (GlobalConsensus(Ethereum), Address(tokenX_contract)))`.
      Its _original_ reserve is Ethereum (except that we can't use Ethereum as
      a reserve in local transfers); but since tokenX is also registered on
      AssetHub as a ForeignAsset, we can use AssetHub as a reserve.
      
      With this PR we can transfer tokenX from ParaA to ParaB while using
      AssetHub as a reserve.
      
      ## Transfer AssetHub ForeignAssets between parachains
      
      AssetA is created on ParaA but also registered as a foreign asset on
      Asset Hub, so Asset Hub can be used as a reserve.
      
      And all of the above can be done while still controlling the transfer
      type for `fees`, so mixing assets in the same transfer is supported.
      
      # Tests
      
      Added integration tests for showcasing:
      - transferring local (not bridged) assets from parachain over bridge
      using local Asset Hub reserve,
      - transferring foreign assets from parachain to Asset Hub,
      - transferring foreign assets from Asset Hub to parachain,
      - transferring foreign assets from parachain to parachain using local
      Asset Hub reserve.
      
      ---------
      
      Co-authored-by: Branislav Kontur <bkontur@gmail.com>
      Co-authored-by: command-bot <>
  23. Apr 09, 2024
    • Move cumulus zombienet tests to aura & async backing (#3568) · df818d29
      Sebastian Kunert authored
      The Cumulus test-parachain node and test runtime were still using relay
      chain consensus and 12s block times. With async backing around the corner
      on the major chains, we should switch our tests too.
      
      This is also needed to nicely test the changes coming to collators in #3168.
      
      ### Changes Overview
      - Followed the [migration
      guide](https://wiki.polkadot.network/docs/maintain-guides-async-backing)
      for async backing for the cumulus-test-runtime
      - Adjusted the cumulus-test-service to use the correct import-queue,
      lookahead collator etc.
      - The block validation function now uses the Aura Ext Executor so that
      the seal of the block is validated
      - The previous point requires that we seal the block before calling into
      `validate_block`; I introduced a helper function for that
      - Test client adjusted to provide a slot to the relay chain proof and
      the aura pre-digest
    • Upgrade `trie-db` from `0.28.0` to `0.29.0` (#3982) · 4e73c0fc
      Facundo Farall authored
      
      # Description
      - What does this PR do?
      1. Upgrades `trie-db`'s version to the latest release. This release
      includes, among others, an implementation of `DoubleEndedIterator` for
      the `TrieDB` struct, allowing iteration both backwards and forwards
      over the leaves of a trie.
      2. Upgrades `trie-bench` to `0.39.0` for compatibility.
      3. Upgrades `criterion` to `0.5.1` for compatibility.
      - Why are these changes needed?
      Besides keeping up with the upgrade of `trie-db`, this specifically adds
      the functionality of iterating backwards over the leaves of a trie with
      `sp-trie`. In a project we're currently working on, this comes in very
      handy to verify a Merkle proof that is the response to a challenge. The
      challenge is a random hash that (most likely) will not be an existing
      leaf in the trie. So the challenged user has to provide a Merkle proof
      of the previous and next existing leaves in the trie, which surround the
      random challenged hash.
      
      Without double-ended iterators, we're forced to iterate until we
      find the first existing leaf, like so:
      ```rust
              // ************* VERIFIER (RUNTIME) *************
              // Verify proof. This generates a partial trie based on the proof and
              // checks that the root hash matches the `expected_root`.
              let (memdb, root) = proof.to_memory_db(Some(&root)).unwrap();
              let trie = TrieDBBuilder::<LayoutV1<RefHasher>>::new(&memdb, &root).build();
      
              // Print all leaf node keys and values.
              println!("\nPrinting leaf nodes of partial tree...");
              for key in trie.key_iter().unwrap() {
                  if key.is_ok() {
                      println!("Leaf node key: {:?}", key.clone().unwrap());
      
                      let val = trie.get(&key.unwrap());
      
                      if val.is_ok() {
                          println!("Leaf node value: {:?}", val.unwrap());
                      } else {
                          println!("Leaf node value: None");
                      }
                  }
              }
      
              println!("RECONSTRUCTED TRIE {:#?}", trie);
      
              // Create an iterator over the leaf nodes.
              let mut iter = trie.iter().unwrap();
      
              // First element with a value should be the previous existing leaf to the challenged hash.
              let mut prev_key = None;
              for element in &mut iter {
                  if element.is_ok() {
                      let (key, _) = element.unwrap();
                      prev_key = Some(key);
                      break;
                  }
              }
              assert!(prev_key.is_some());
      
              // Since hashes are `Vec<u8>` ordered in big-endian, we can compare them directly.
              assert!(prev_key.unwrap() <= challenge_hash.to_vec());
      
              // The next element should exist (meaning there is no other existing leaf between the
              // previous and next leaf) and it should be greater than the challenged hash.
              let next_key = iter.next().unwrap().unwrap().0;
              assert!(next_key >= challenge_hash.to_vec());
      ```
      
      With double-ended iterators, we can avoid that, like this:
      ```rust
              // ************* VERIFIER (RUNTIME) *************
              // Verify proof. This generates a partial trie based on the proof and
              // checks that the root hash matches the `expected_root`.
              let (memdb, root) = proof.to_memory_db(Some(&root)).unwrap();
              let trie = TrieDBBuilder::<LayoutV1<RefHasher>>::new(&memdb, &root).build();
      
              // Print all leaf node keys and values.
              println!("\nPrinting leaf nodes of partial tree...");
              for key in trie.key_iter().unwrap() {
                  if key.is_ok() {
                      println!("Leaf node key: {:?}", key.clone().unwrap());
      
                      let val = trie.get(&key.unwrap());
      
                      if val.is_ok() {
                          println!("Leaf node value: {:?}", val.unwrap());
                      } else {
                          println!("Leaf node value: None");
                      }
                  }
              }
      
              // println!("RECONSTRUCTED TRIE {:#?}", trie);
              println!("\nChallenged key: {:?}", challenge_hash);
      
              // Create an iterator over the leaf nodes.
              let mut double_ended_iter = trie.into_double_ended_iter().unwrap();
      
              // First element with a value should be the previous existing leaf to the challenged hash.
              double_ended_iter.seek(&challenge_hash.to_vec()).unwrap();
              let next_key = double_ended_iter.next_back().unwrap().unwrap().0;
              let prev_key = double_ended_iter.next_back().unwrap().unwrap().0;
      
              // Since hashes are `Vec<u8>` ordered in big-endian, we can compare them directly.
              println!("Prev key: {:?}", prev_key);
              assert!(prev_key <= challenge_hash.to_vec());
      
              println!("Next key: {:?}", next_key);
              assert!(next_key >= challenge_hash.to_vec());
      ```
      - How were these changes implemented and what do they affect?
      All that is needed for this functionality to be exposed is changing the
      version number of `trie-db` in all the `Cargo.toml`s applicable, and
      re-exporting some additional structs from `trie-db` in `sp-trie`.
      
      ---------
      
      Co-authored-by: Bastian Köcher <git@kchr.de>
  24. Apr 08, 2024
  25. Apr 04, 2024
    • Migrate fee payment from `Currency` to `fungible` (#2292) · bda4e75a
      Liam Aharon authored
      Part of https://github.com/paritytech/polkadot-sdk/issues/226 
      Related https://github.com/paritytech/polkadot-sdk/issues/1833
      
      - Deprecate `CurrencyAdapter` and introduce `FungibleAdapter`
      - Deprecate `ToStakingPot` and replace usage with `ResolveTo`
      - Required creating a new `StakingPotAccountId` struct that implements
      `TypedGet` for the staking pot account ID
      - Update parachain common utils `DealWithFees`, `ToAuthor` and
      `AssetsToBlockAuthor` implementations to use `fungible`
      - Update runtime XCM Weight Traders to use `ResolveTo` instead of
      `ToStakingPot`
      - Update runtime Transaction Payment pallets to use `FungibleAdapter`
      instead of `CurrencyAdapter`
      - [x] Blocked by https://github.com/paritytech/polkadot-sdk/pull/1296,
      needs the `Unbalanced::decrease_balance` fix
  26. Apr 02, 2024
    • migrations: prevent accidentally using unversioned migrations instead of... · e5427969
      Dastan authored
      migrations: prevent accidentally using unversioned migrations instead of `VersionedMigration` (#3835)
      
      closes #1324 
      
      #### Problem
      Currently, it is possible to accidentally use the inner unversioned
      migration instead of `VersionedMigration`, since both implement
      `OnRuntimeUpgrade`.
      
      #### Solution
      
      With this change, we make it clear that the value of `Inner` is not
      intended to be used directly. This is achieved by bounding `Inner` to the
      new trait `UncheckedOnRuntimeUpgrade`, which has the same interface
      (except for the `unchecked_` prefix) as `OnRuntimeUpgrade`.
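      A hedged sketch of the resulting pattern, written from memory (paths,
      generic-parameter order and method names may differ slightly from the
      released `frame_support` API; `SomePallet` is a placeholder and
      `frame_support`/`frame_system` are assumed as dependencies):
      
      ```rust
      use core::marker::PhantomData;
      use frame_support::{
          migrations::VersionedMigration, traits::UncheckedOnRuntimeUpgrade, weights::Weight,
      };
      
      /// The inner, unversioned migration. Bounding it to `UncheckedOnRuntimeUpgrade`
      /// means it can no longer be plugged into the runtime directly by accident.
      pub struct InnerMigrateV1ToV2<T>(PhantomData<T>);
      
      impl<T: frame_system::Config> UncheckedOnRuntimeUpgrade for InnerMigrateV1ToV2<T> {
          fn on_runtime_upgrade() -> Weight {
              // ...the actual storage translation would go here...
              Weight::zero()
          }
      }
      
      /// Placeholder pallet type, only so the alias below is self-contained.
      pub struct SomePallet<T>(PhantomData<T>);
      
      /// Only this versioned wrapper implements `OnRuntimeUpgrade` and is meant
      /// to be listed in the runtime's migration tuple.
      pub type MigrateV1ToV2<T> = VersionedMigration<
          1,                     // from storage version
          2,                     // to storage version
          InnerMigrateV1ToV2<T>, // the unversioned inner migration above
          SomePallet<T>,         // pallet whose storage version gets bumped
          <T as frame_system::Config>::DbWeight,
      >;
      ```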
      
      #### `try-runtime` functions
      
      Since developers can implement `try-runtime` functions for the `Inner`
      value in `VersionedMigration` and have custom logic for it, I added the
      same `try-runtime` functions to `UncheckedOnRuntimeUpgrade`. I looked for
      ways to not duplicate the functions, but couldn't find anything that
      doesn't significantly change the codebase. So I would appreciate it if
      you have any suggestions to improve this.
      
      cc @liamaharon
      
       @xlc 
      
      polkadot address: 16FqwPZ8GRC5U5D4Fu7W33nA55ZXzXGWHwmbnE1eT6pxuqcT
      
      ---------
      
      Co-authored-by: Liam Aharon <liam.aharon@hotmail.com>
    • Align dependencies with `parity-bridges-common` (#3937) · 8e95a3e1
      Serban Iorga authored
      Working towards migrating the `parity-bridges-common` repo inside
      `polkadot-sdk`. This PR upgrades some dependencies in order to align
      them with the versions used in `parity-bridges-common`
      
      Related to
      https://github.com/paritytech/parity-bridges-common/issues/2538
  27. Mar 27, 2024
  28. Mar 26, 2024
    • Fix spelling mistakes across the whole repository (#3808) · 002d9260
      Dcompoze authored
      **Update:** Pushed additional changes based on the review comments.
      
      **This pull request fixes various spelling mistakes in this
      repository.**
      
      Most of the changes are contained in the first **3** commits:
      
      - `Fix spelling mistakes in comments and docs`
      
      - `Fix spelling mistakes in test names`
      
      - `Fix spelling mistakes in error messages, panic messages, logs and
      tracing`
      
      Other source code spelling mistakes are separated into individual
      commits for easier reviewing:
      
      - `Fix the spelling of 'authority'`
      
      - `Fix the spelling of 'REASONABLE_HEADERS_IN_JUSTIFICATION_ANCESTRY'`
      
      - `Fix the spelling of 'prev_enqueud_messages'`
      
      - `Fix the spelling of 'endpoint'`
      
      - `Fix the spelling of 'children'`
      
      - `Fix the spelling of 'PenpalSiblingSovereignAccount'`
      
      - `Fix the spelling of 'PenpalSudoAccount'`
      
      - `Fix the spelling of 'insufficient'`
      
      - `Fix the spelling of 'PalletXcmExtrinsicsBenchmark'`
      
      - `Fix the spelling of 'subtracted'`
      
      - `Fix the spelling of 'CandidatePendingAvailability'`
      
      - `Fix the spelling of 'exclusive'`
      
      - `Fix the spelling of 'until'`
      
      - `Fix the spelling of 'discriminator'`
      
      - `Fix the spelling of 'nonexistent'`
      
      - `Fix the spelling of 'subsystem'`
      
      - `Fix the spelling of 'indices'`
      
      - `Fix the spelling of 'committed'`
      
      - `Fix the spelling of 'topology'`
      
      - `Fix the spelling of 'response'`
      
      - `Fix the spelling of 'beneficiary'`
      
      - `Fix the spelling of 'formatted'`
      
      - `Fix the spelling of 'UNKNOWN_PROOF_REQUEST'`
      
      - `Fix the spelling of 'succeeded'`
      
      - `Fix the spelling of 'reopened'`
      
      - `Fix the spelling of 'proposer'`
      
      - `Fix the spelling of 'InstantiationNonce'`
      
      - `Fix the spelling of 'depositor'`
      
      - `Fix the spelling of 'expiration'`
      
      - `Fix the spelling of 'phantom'`
      
      - `Fix the spelling of 'AggregatedKeyValue'`
      
      - `Fix the spelling of 'randomness'`
      
      - `Fix the spelling of 'defendant'`
      
      - `Fix the spelling of 'AquaticMammal'`
      
      - `Fix the spelling of 'transactions'`
      
      - `Fix the spelling of 'PassingTracingSubscriber'`
      
      - `Fix the spelling of 'TxSignaturePayload'`
      
      - `Fix the spelling of 'versioning'`
      
      - `Fix the spelling of 'descendant'`
      
      - `Fix the spelling of 'overridden'`
      
      - `Fix the spelling of 'network'`
      
      Let me know if this structure is adequate.
      
      **Note:** The usage of the words `Merkle`, `Merkelize`, `Merklization`,
      `Merkelization`, `Merkleization`, is somewhat inconsistent but I left it
      as it is.
      
      ~~**Note:** In some places the term `Receival` is used to refer to
      message reception, IMO `Reception` is the correct word here, but I left
      it as it is.~~
      
      ~~**Note:** In some places the term `Overlayed` is used instead of the
      more acceptable version `Overlaid` but I also left it as it is.~~
      
      ~~**Note:** In some places the term `Applyable` is used instead of the
      correct version `Applicable` but I also left it as it is.~~
      
      **Note:** Some usage of British vs American English, e.g. `judgement` vs
      `judgment`, `initialise` vs `initialize`, `optimise` vs `optimize` etc.,
      is present in different places, but I suppose that's
      understandable given the number of contributors.
      
      ~~**Note:** There is a spelling mistake in `.github/CODEOWNERS` but it
      triggers errors in CI when I make changes to it, so I left it as it
      is.~~
  29. Mar 19, 2024
    • Add HRMP notification handlers to the xcm-executor (#3696) · 8b3bf39a
      Juan Ignacio Rios authored
      
      Currently the xcm-executor returns an `Unimplemented` error if it
      receives any HRMP-related instruction.
      What I propose here, which is what we are currently doing in our forked
      executor at polimec, is to introduce a trait implemented by the executor
      which will handle those instructions.
      
      This way, if parachains want to keep the default behavior, they just use
      `()` and it will return `Unimplemented`, but they can also implement their
      own logic to establish HRMP channels with other chains in an automated
      fashion, without having to go through governance.
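      A hypothetical sketch of the shape of such a handler (the names and the
      exact signature here are illustrative, not necessarily the trait added
      by this PR):
      
      ```rust
      /// Stand-in for the XCM error type.
      #[derive(Debug)]
      pub enum Error {
          Unimplemented,
      }
      
      /// Handler the executor delegates HRMP channel-open requests to.
      pub trait HandleHrmpNewChannelOpenRequest {
          fn handle(sender: u32, max_message_size: u32, max_capacity: u32) -> Result<(), Error>;
      }
      
      /// The default `()` keeps today's behaviour and rejects the instruction.
      impl HandleHrmpNewChannelOpenRequest for () {
          fn handle(_sender: u32, _max_message_size: u32, _max_capacity: u32) -> Result<(), Error> {
              Err(Error::Unimplemented)
          }
      }
      ```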
      
      Our implementation is mentioned in the [polkadot HRMP
      docs](https://arc.net/l/quote/hduiivbu), and it was suggested to us to
      submit a PR to add these changes to polkadot-sdk.
      
      ---------
      
      Co-authored-by: Branislav Kontur <bkontur@gmail.com>
      Co-authored-by: command-bot <>
  30. Mar 18, 2024
  31. Mar 17, 2024
  32. Mar 15, 2024
  33. Mar 11, 2024
  34. Mar 07, 2024