Commits on Source (43)
  • Bump thiserror from 1.0.57 to 1.0.58 · 9be3e0b6
    dependabot[bot] authored
    
    
    Bumps [thiserror](https://github.com/dtolnay/thiserror) from 1.0.57 to 1.0.58.
    - [Release notes](https://github.com/dtolnay/thiserror/releases)
    - [Commits](https://github.com/dtolnay/thiserror/compare/1.0.57...1.0.58)
    
    ---
    updated-dependencies:
    - dependency-name: thiserror
      dependency-type: direct:production
      update-type: version-update:semver-patch
    ...
    
    Signed-off-by: dependabot[bot] <[email protected]>
  • Bump scale-info from 2.10.0 to 2.11.0 · d9c5e410
    dependabot[bot] authored
    
    
    Bumps [scale-info](https://github.com/paritytech/scale-info) from 2.10.0 to 2.11.0.
    - [Release notes](https://github.com/paritytech/scale-info/releases)
    - [Changelog](https://github.com/paritytech/scale-info/blob/master/CHANGELOG.md)
    - [Commits](https://github.com/paritytech/scale-info/compare/v2.10.0...v2.11.0)
    
    ---
    updated-dependencies:
    - dependency-name: scale-info
      dependency-type: direct:production
      update-type: version-update:semver-minor
    ...
    
    Signed-off-by: dependabot[bot] <[email protected]>
  • Bump anyhow from 1.0.80 to 1.0.81 · 61e865bc
    dependabot[bot] authored
    
    
    Bumps [anyhow](https://github.com/dtolnay/anyhow) from 1.0.80 to 1.0.81.
    - [Release notes](https://github.com/dtolnay/anyhow/releases)
    - [Commits](https://github.com/dtolnay/anyhow/compare/1.0.80...1.0.81)
    
    ---
    updated-dependencies:
    - dependency-name: anyhow
      dependency-type: direct:production
      update-type: version-update:semver-patch
    ...
    
    Signed-off-by: dependabot[bot] <[email protected]>
  • Bump async-trait from 0.1.77 to 0.1.78 · 62372e74
    dependabot[bot] authored
    
    
    Bumps [async-trait](https://github.com/dtolnay/async-trait) from 0.1.77 to 0.1.78.
    - [Release notes](https://github.com/dtolnay/async-trait/releases)
    - [Commits](https://github.com/dtolnay/async-trait/compare/0.1.77...0.1.78)
    
    ---
    updated-dependencies:
    - dependency-name: async-trait
      dependency-type: direct:production
      update-type: version-update:semver-patch
    ...
    
    Signed-off-by: dependabot[bot] <[email protected]>
  • Svyatoslav Nikolsky · 9cb8a2ca
  • Move generic CLI logic to different crate (#2885) · 2a76cbbb
    Serban Iorga authored
    * Move generic CLI logic to separate crate
    
    * Move and rename `CliChain` trait definition
    
    Move it to `relay-substrate-client`
    
    * Move generic cli logic to substrate-relay-helper
    
    * Fix docs warnings
  • Backport changes from `polkadot-sdk/master` (#2887) · 28c459be
    Serban Iorga authored
    
    
    * Add two new zombienet tests for bridges (manual run) (#3072)
    
    extracted useful code from #2982
    
    This PR:
    - adds test 2 for the Rococo <> Westend bridge: checks that the relayer doesn't
    submit any extra headers while there are no messages;
    - adds test 3 for the Rococo <> Westend bridge: checks that the relayer doesn't
    submit any extra headers when there are messages;
    - fixes most of the comments from #2439 (like: log names, the ability to
    specify which test number to run when calling `run-tests.sh`).
    
    Right now, of all our tests, only test 2 is working (until the BHs are
    upgraded to use async backing), so you can run it locally with
    `./bridges/zombienet/run-tests.sh --test 2`.
    
    (cherry picked from commit 2e6067d768a84e780258aa4580116f7180e24290)
    
    * [cumulus] Improved check for sane bridge fees calculations (#3175)
    
    - [x] change constants when CI fails (should fail :) )
    
    On the AssetHubRococo: 1701175800126 -> 1700929825257, a decrease of
    roughly 0.014 %.
    ```
    Feb 02 12:59:05.520 ERROR bridges::estimate: `bridging::XcmBridgeHubRouterBaseFee` actual value: 1701175800126 for runtime: statemine-1006000 (statemine-0.tx14.au1)
    
    Feb 02 13:02:40.647 ERROR bridges::estimate: `bridging::XcmBridgeHubRouterBaseFee` actual value: 1700929825257 for runtime: statemine-1006000 (statemine-0.tx14.au1)
    
    ```
    
    On the AssetHubWestend: 2116038876326 -> 1641718372993, a decrease of
    roughly 22.4 %.
    ```
    Feb 02 12:56:00.880 ERROR bridges::estimate: `bridging::XcmBridgeHubRouterBaseFee` actual value: 2116038876326 for runtime: westmint-1006000 (westmint-0.tx14.au1)
    
    Feb 02 13:04:42.515 ERROR bridges::estimate: `bridging::XcmBridgeHubRouterBaseFee` actual value: 1641718372993 for runtime: westmint-1006000 (westmint-0.tx14.au1)
    ```
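
    For reference, the two quoted relative decreases can be re-derived from the
    raw values in the logs with a quick stand-alone calculation (not part of the
    original commit):

    ```rust
    // Recompute the relative decreases of `XcmBridgeHubRouterBaseFee` quoted above.
    fn pct_decrease(old: u128, new: u128) -> f64 {
        (old - new) as f64 / old as f64 * 100.0
    }

    fn main() {
        // AssetHubRococo: 1701175800126 -> 1700929825257 (~0.014 % decrease)
        println!("{:.3} %", pct_decrease(1_701_175_800_126, 1_700_929_825_257));
        // AssetHubWestend: 2116038876326 -> 1641718372993 (~22.4 % decrease)
        println!("{:.1} %", pct_decrease(2_116_038_876_326, 1_641_718_372_993));
    }
    ```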
    
    (cherry picked from commit 74b597fcaf143d8dd7f8d40e59f51065514f21d7)
    
    * Enable async backing on all testnet system chains (#2949)
    
    Built on top of https://github.com/paritytech/polkadot-sdk/pull/2826/
    which was a trial run.
    
    Guide:
    https://github.com/w3f/polkadot-wiki/blob/master/docs/maintain/maintain-guides-async-backing.md
    
    ---------
    
    Signed-off-by: georgepisaltu <[email protected]>
    Co-authored-by: Branislav Kontur <[email protected]>
    Co-authored-by: Dónal Murray <[email protected]>
    Co-authored-by: Dmitry Sinyavin <[email protected]>
    Co-authored-by: s0me0ne-unkn0wn <[email protected]>
    Co-authored-by: Svyatoslav Nikolsky <[email protected]>
    Co-authored-by: Bastian Köcher <[email protected]>
    Co-authored-by: georgepisaltu <[email protected]>
    (cherry picked from commit 700d5f85b768fe1867660938aa5edfcf4b26f632)
    
    * Introduce submit_finality_proof_ex call to bridges GRANDPA pallet (#3225)
    
    backport of
    https://github.com/paritytech/parity-bridges-common/pull/2821 (see
    detailed description there)
    
    (cherry picked from commit a462207158360b162228d9877fed7b9ca1f23fc2)
    
    * Bridge zombienet tests refactoring (#3260)
    
    Related to https://github.com/paritytech/polkadot-sdk/issues/3242
    
    Reorganizing the bridge zombienet tests in order to:
    - separate the environment spawning from the actual tests
    - offer better control over the tests and some possibility to
    orchestrate them as opposed to running everything from the zndsl file
    
    Only rewrote the asset transfer test using this new "framework". The old
    logic and old tests weren't functionally modified or deleted. The plan
    is to get feedback on this approach first and if this is agreed upon,
    migrate the other 2 tests later in separate PRs and also do other
    improvements later.
    
    (cherry picked from commit dfc8e4696c6edfb76ccb05f469a221ebb5b270ff)
    
    * Bridges: add test 0002 to CI (#3310)
    
    Bridges: add test 0002 to CI
    (cherry picked from commit 1b66bb51b52d3e6cacf155bd3e038b6ef44ac5da)
    
    * Bridge zombienet tests - move all test scripts to the same folder (#3333)
    
    Related to https://github.com/paritytech/polkadot-sdk/issues/3242
    
    (cherry picked from commit 5fc7622cb312f2d32ec8365012ee0a49622db8c8)
    
    * Lift dependencies to the workspace (Part 2/x) (#3366)
    
    Lifting some more dependencies to the workspace. Just using the
    most-often updated ones for now.
    It can be reproduced locally.
    
    ```sh
    $ zepter transpose dependency lift-to-workspace --ignore-errors syn quote thiserror "regex:^serde.*"
    
    $ zepter transpose dependency lift-to-workspace --version-resolver=highest syn quote thiserror "regex:^serde.*" --fix
    
    $ taplo format --config .config/taplo.toml
    ```
    
    ---------
    
    Signed-off-by: Oliver Tale-Yazdi <[email protected]>
    (cherry picked from commit e89d0fca351de0712f104c55fe45ed124b5c6968)
    
    * Add support for BHP local and BHK local (#3443)
    
    Related to https://github.com/paritytech/polkadot-sdk/issues/3400
    
    Extracting small parts of
    https://github.com/paritytech/polkadot-sdk/pull/3429 into separate PR:
    
    - Add support for BHP local and BHK local
    - Increase the timeout for the bridge zombienet tests
    
    (cherry picked from commit e4b6b8cd7973633f86d1b92a56abf2a946b7be84)
    
    * Bridge zombienet tests: move all "framework" files under one folder (#3462)
    
    Related to https://github.com/paritytech/polkadot-sdk/issues/3400
    
    Moving all bridges testing "framework" files under one folder in order
    to be able to download the entire folder when we want to add tests in
    other repos
    
    No significant functional changes
    
    (cherry picked from commit 6fc1d41d4487b9164451cd8214674ce195ab06a0)
    
    * Bridge zombienet tests: Check amount received at destination (#3490)
    
    Related to https://github.com/paritytech/polkadot-sdk/issues/3475
    
    (cherry picked from commit 2cdda0e62dd3088d2fd09cea627059674070c277)
    
    * FRAME: Create `TransactionExtension` as a replacement for `SignedExtension` (#2280)
    
    Closes #2160
    
    First part of [Extrinsic
    Horizon](https://github.com/paritytech/polkadot-sdk/issues/2415)
    
    Introduces a new trait `TransactionExtension` to replace
    `SignedExtension`. Introduce the idea of transactions which obey the
    runtime's extensions and have according Extension data (né Extra data)
    yet do not have hard-coded signatures.
    
    Deprecate the terminology of "Unsigned" when used for
    transactions/extrinsics owing to there now being "proper" unsigned
    transactions which obey the extension framework and "old-style" unsigned
    which do not. Instead we have __*General*__ for the former and
    __*Bare*__ for the latter. (Ultimately, the latter will be phased out as
    a type of transaction, and Bare will only be used for Inherents.)
    
    Types of extrinsic are now therefore:
    - Bare (no hardcoded signature, no Extra data; used to be known as
    "Unsigned")
    - Bare transactions (deprecated): Gossiped, validated with
    `ValidateUnsigned` (deprecated) and the `_bare_compat` bits of
    `TransactionExtension` (deprecated).
      - Inherents: Not gossiped, validated with `ProvideInherent`.
    - Extended (Extra data): Gossiped, validated via `TransactionExtension`.
      - Signed transactions (with a hardcoded signature).
      - General transactions (without a hardcoded signature).
    
    `TransactionExtension` differs from `SignedExtension` because:
    - A signature on the underlying transaction may validly not be present.
    - It may alter the origin during validation.
    - `pre_dispatch` is renamed to `prepare` and need not contain the checks
    present in `validate`.
    - `validate` and `prepare` is passed an `Origin` rather than a
    `AccountId`.
    - `validate` may pass arbitrary information into `prepare` via a new
    user-specifiable type `Val`.
    - `AdditionalSigned`/`additional_signed` is renamed to
    `Implicit`/`implicit`. It is encoded *for the entire transaction* and
    passed in to each extension as a new argument to `validate`. This
    facilitates the ability of extensions to act as underlying crypto.
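
    The `Val` hand-off in the points above is the key mechanical difference. A
    deliberately simplified, std-only sketch of the idea (toy trait and names,
    not the actual polkadot-sdk `TransactionExtension` API):

    ```rust
    // Toy model only: NOT the real trait, just the shape of the
    // `validate` -> `prepare` hand-off via an extension-chosen `Val` type.
    trait ToyTransactionExtension {
        /// Data computed in `validate` and reused in `prepare`.
        type Val;
        /// Data computed in `prepare` (would feed `post_dispatch`; unused here).
        type Pre;

        fn validate(&self, origin: &str) -> Result<Self::Val, &'static str>;
        fn prepare(&self, val: Self::Val) -> Result<Self::Pre, &'static str>;
    }

    /// Checks the transaction nonce against a (pretend) on-chain nonce.
    struct ToyCheckNonce {
        tx_nonce: u32,
    }

    impl ToyTransactionExtension for ToyCheckNonce {
        type Val = u32; // the on-chain nonce looked up during `validate`
        type Pre = ();

        fn validate(&self, _origin: &str) -> Result<Self::Val, &'static str> {
            // Pretend this is a storage read; with `SignedExtension` the same
            // read had to be repeated in `pre_dispatch`.
            let on_chain_nonce = 7;
            if self.tx_nonce != on_chain_nonce {
                return Err("stale nonce");
            }
            Ok(on_chain_nonce)
        }

        fn prepare(&self, on_chain_nonce: Self::Val) -> Result<Self::Pre, &'static str> {
            // Reuse the value from `validate` instead of re-reading storage.
            println!("bumping nonce {} -> {}", on_chain_nonce, on_chain_nonce + 1);
            Ok(())
        }
    }

    fn main() {
        let ext = ToyCheckNonce { tx_nonce: 7 };
        let val = ext.validate("//Alice").expect("valid");
        ext.prepare(val).expect("prepared");
    }
    ```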
    
    There is a new `DispatchTransaction` trait which contains only default
    function impls and is impl'ed for any `TransactionExtension` impler. It
    provides several utility functions which reduce some of the tedium from
    using `TransactionExtension` (indeed, none of its regular functions
    should now need to be called directly).
    
    Three transaction version discriminators ("versions") are now
    permissible:
    - 0b00000100: Bare (used to be called "Unsigned"): contains no Signature
    and no Extra (extension data). Once bare transactions are no longer
    supported, this will strictly identify Inherents only.
    - 0b10000100: Old-school "Signed" Transaction: contains Signature and
    Extra (extension data).
    - 0b01000100: New-school "General" Transaction: contains Extra
    (extension data), but no Signature.
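
    A small illustrative decoder for that leading byte, assuming the 8-bit
    layout implied above (bit 7 = old-school signed, bit 6 = general, neither
    bit = bare, low bits = extrinsic format version); an illustration only,
    not code from the PR:

    ```rust
    #[derive(Debug, PartialEq)]
    enum ExtrinsicKind {
        Bare,
        Signed,
        General,
    }

    /// Split the leading "version" byte into (kind, format version).
    fn decode_version_byte(b: u8) -> (ExtrinsicKind, u8) {
        let kind = match b & 0b1100_0000 {
            0b1000_0000 => ExtrinsicKind::Signed,
            0b0100_0000 => ExtrinsicKind::General,
            _ => ExtrinsicKind::Bare,
        };
        (kind, b & 0b0011_1111)
    }

    fn main() {
        assert_eq!(decode_version_byte(0b0000_0100), (ExtrinsicKind::Bare, 4));
        assert_eq!(decode_version_byte(0b1000_0100), (ExtrinsicKind::Signed, 4));
        assert_eq!(decode_version_byte(0b0100_0100), (ExtrinsicKind::General, 4));
    }
    ```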
    
    For the New-school General Transaction, it becomes trivial for authors
    to publish extensions to the mechanism for authorizing an Origin, e.g.
    through new kinds of key-signing schemes, ZK proofs, pallet state,
    mutations over pre-authenticated origins or any combination of the
    above.
    
    Wrap your `SignedExtension`s in `AsTransactionExtension`. This should be
    accompanied by renaming your aggregate type in line with the new
    terminology. E.g. Before:
    
    ```rust
    /// The SignedExtension to the basic transaction logic.
    pub type SignedExtra = (
    	/* snip */
    	MySpecialSignedExtension,
    );
    /// Unchecked extrinsic type as expected by this runtime.
    pub type UncheckedExtrinsic =
    	generic::UncheckedExtrinsic<Address, RuntimeCall, Signature, SignedExtra>;
    ```
    
    After:
    
    ```rust
    /// The extension to the basic transaction logic.
    pub type TxExtension = (
    	/* snip */
    	AsTransactionExtension<MySpecialSignedExtension>,
    );
    /// Unchecked extrinsic type as expected by this runtime.
    pub type UncheckedExtrinsic =
    	generic::UncheckedExtrinsic<Address, RuntimeCall, Signature, TxExtension>;
    ```
    
    You'll also need to alter any transaction building logic to add a
    `.into()` to make the conversion happen. E.g. Before:
    
    ```rust
    fn construct_extrinsic(
    		/* snip */
    ) -> UncheckedExtrinsic {
    	let extra: SignedExtra = (
    		/* snip */
    		MySpecialSignedExtension::new(/* snip */),
    	);
    	let payload = SignedPayload::new(call.clone(), extra.clone()).unwrap();
    	let signature = payload.using_encoded(|e| sender.sign(e));
    	UncheckedExtrinsic::new_signed(
    		/* snip */
    		Signature::Sr25519(signature),
    		extra,
    	)
    }
    ```
    
    After:
    
    ```rust
    fn construct_extrinsic(
    		/* snip */
    ) -> UncheckedExtrinsic {
    	let tx_ext: TxExtension = (
    		/* snip */
    		MySpecialSignedExtension::new(/* snip */).into(),
    	);
    	let payload = SignedPayload::new(call.clone(), tx_ext.clone()).unwrap();
    	let signature = payload.using_encoded(|e| sender.sign(e));
    	UncheckedExtrinsic::new_signed(
    		/* snip */
    		Signature::Sr25519(signature),
    		tx_ext,
    	)
    }
    ```
    
    Most `SignedExtension`s can be trivially converted to become a
    `TransactionExtension`. There are a few things to know.
    
    - Instead of a single trait like `SignedExtension`, you should now
    implement two traits individually: `TransactionExtensionBase` and
    `TransactionExtension`.
    - Weights are now a thing and must be provided via the new function `fn
    weight`.
    
    This trait takes care of anything which is not dependent on types
    specific to your runtime, most notably `Call`.
    
    - `AdditionalSigned`/`additional_signed` is renamed to
    `Implicit`/`implicit`.
    - Weight must be returned by implementing the `weight` function. If your
    extension is associated with a pallet, you'll probably want to do this
    via the pallet's existing benchmarking infrastructure.
    
    Generally:
    - `pre_dispatch` is now `prepare` and you *should not reexecute the
    `validate` functionality in there*!
    - You don't get an account ID any more; you get an origin instead. If
    you need to presume an account ID, then you can use the trait function
    `AsSystemOriginSigner::as_system_origin_signer`.
    - You get an additional ticket, similar to `Pre`, called `Val`. This
    defines data which is passed from `validate` into `prepare`. This is
    important since you should not be duplicating logic from `validate` to
    `prepare`, you need a way of passing your working from the former into
    the latter. This is it.
    - This trait takes two type parameters: `Call` and `Context`. `Call` is
    the runtime call type which used to be an associated type; you can just
    move it to become a type parameter for your trait impl. `Context` is not
    currently used and you can safely implement over it as an unbounded
    type.
    - There's no `AccountId` associated type any more. Just remove it.
    
    Regarding `validate`:
    - You get three new parameters in `validate`; all can be ignored when
    migrating from `SignedExtension`.
    - `validate` returns a tuple on success; the second item in the tuple is
    the new ticket type `Self::Val` which gets passed in to `prepare`. If
    you use any information extracted during `validate` (off-chain and
    on-chain, non-mutating) in `prepare` (on-chain, mutating) then you can
    pass it through with this. For the tuple's last item, just return the
    `origin` argument.
    
    Regarding `prepare`:
    - This is renamed from `pre_dispatch`, but there is one change:
    - FUNCTIONALITY TO VALIDATE THE TRANSACTION NEED NOT BE DUPLICATED FROM
    `validate`!!
    - (This is different to `SignedExtension` which was required to run the
    same checks in `pre_dispatch` as in `validate`.)
    
    Regarding `post_dispatch`:
    - Since there are no unsigned transactions handled by
    `TransactionExtension`, `Pre` is always defined, so the first parameter
    is `Self::Pre` rather than `Option<Self::Pre>`.
    
    If you make use of `SignedExtension::validate_unsigned` or
    `SignedExtension::pre_dispatch_unsigned`, then:
    - Just use the regular versions of these functions instead.
    - Have your logic execute in the case that the `origin` is `None`.
    - Ensure your transaction creation logic creates a General Transaction
    rather than a Bare Transaction; this means having to include all
    `TransactionExtension`s' data.
    - `ValidateUnsigned` can still be used (for now) if you need to be able
    to construct transactions which contain none of the extension data,
    however these will be phased out in stage 2 of the Transactions Horizon,
    so you should consider moving to an extension-centric design.
    
    - [x] Introduce `CheckSignature` impl of `TransactionExtension` to
    ensure it's possible to have crypto be done wholly in a
    `TransactionExtension`.
    - [x] Deprecate `SignedExtension` and move all uses in codebase to
    `TransactionExtension`.
      - [x] `ChargeTransactionPayment`
      - [x] `DummyExtension`
      - [x] `ChargeAssetTxPayment` (asset-tx-payment)
      - [x] `ChargeAssetTxPayment` (asset-conversion-tx-payment)
      - [x] `CheckWeight`
      - [x] `CheckTxVersion`
      - [x] `CheckSpecVersion`
      - [x] `CheckNonce`
      - [x] `CheckNonZeroSender`
      - [x] `CheckMortality`
      - [x] `CheckGenesis`
      - [x] `CheckOnlySudoAccount`
      - [x] `WatchDummy`
      - [x] `PrevalidateAttests`
      - [x] `GenericSignedExtension`
      - [x] `SignedExtension` (chain-polkadot-bulletin)
      - [x] `RefundSignedExtensionAdapter`
    - [x] Implement `fn weight` across the board.
    - [ ] Go through all pre-existing extensions which assume an account
    signer and explicitly handle the possibility of another kind of origin.
    - [x] `CheckNonce` should probably succeed in the case of a non-account
    origin.
    - [x] `CheckNonZeroSender` should succeed in the case of a non-account
    origin.
    - [x] `ChargeTransactionPayment` and family should fail in the case of a
    non-account origin.
      - [ ]
    - [x] Fix any broken tests.
    
    ---------
    
    Signed-off-by: georgepisaltu <[email protected]>
    Signed-off-by: Alexandru Vasile <[email protected]>
    Signed-off-by: dependabot[bot] <[email protected]>
    Signed-off-by: Oliver Tale-Yazdi <[email protected]>
    Signed-off-by: Alexandru Gheorghe <[email protected]>
    Signed-off-by: Andrei Sandu <[email protected]>
    Co-authored-by: Nikhil Gupta <[email protected]>
    Co-authored-by: georgepisaltu <[email protected]>
    Co-authored-by: Chevdor <[email protected]>
    Co-authored-by: Bastian Köcher <[email protected]>
    Co-authored-by: Maciej <[email protected]>
    Co-authored-by: Javier Viola <[email protected]>
    Co-authored-by: Marcin S. <[email protected]>
    Co-authored-by: Tsvetomir Dimitrov <[email protected]>
    Co-authored-by: Javier Bullrich <[email protected]>
    Co-authored-by: Koute <[email protected]>
    Co-authored-by: Adrian Catangiu <[email protected]>
    Co-authored-by: Vladimir Istyufeev <[email protected]>
    Co-authored-by: Ross Bulat <[email protected]>
    Co-authored-by: Gonçalo Pestana <[email protected]>
    Co-authored-by: Liam Aharon <[email protected]>
    Co-authored-by: Svyatoslav Nikolsky <[email protected]>
    Co-authored-by: André Silva <[email protected]>
    Co-authored-by: Oliver Tale-Yazdi <[email protected]>
    Co-authored-by: s0me0ne-unkn0wn <[email protected]>
    Co-authored-by: ordian <[email protected]>
    Co-authored-by: Sebastian Kunert <[email protected]>
    Co-authored-by: Aaro Altonen <[email protected]>
    Co-authored-by: Dmitry Markin <[email protected]>
    Co-authored-by: Alexandru Vasile <[email protected]>
    Co-authored-by: Alexander Samusev <[email protected]>
    Co-authored-by: Julian Eager <[email protected]>
    Co-authored-by: Michal Kucharczyk <[email protected]>
    Co-authored-by: Davide Galassi <[email protected]>
    Co-authored-by: Dónal Murray <[email protected]>
    Co-authored-by: yjh <[email protected]>
    Co-authored-by: Tom Mi <[email protected]>
    Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
    Co-authored-by: Will | Paradox | ParaNodes.io <[email protected]>
    Co-authored-by: Bastian Köcher <[email protected]>
    Co-authored-by: Joshy Orndorff <[email protected]>
    Co-authored-by: Joshy Orndorff <[email protected]>
    Co-authored-by: PG Herveou <[email protected]>
    Co-authored-by: Alexander Theißen <[email protected]>
    Co-authored-by: Kian Paimani <[email protected]>
    Co-authored-by: Juan Girini <[email protected]>
    Co-authored-by: bader y <[email protected]>
    Co-authored-by: James Wilson <[email protected]>
    Co-authored-by: joe petrowski <[email protected]>
    Co-authored-by: asynchronous rob <[email protected]>
    Co-authored-by: Parth <[email protected]>
    Co-authored-by: Andrew Jones <[email protected]>
    Co-authored-by: Jonathan Udd <[email protected]>
    Co-authored-by: Serban Iorga <[email protected]>
    Co-authored-by: Egor_P <[email protected]>
    Co-authored-by: Branislav Kontur <[email protected]>
    Co-authored-by: Evgeny Snitko <[email protected]>
    Co-authored-by: Just van Stam <[email protected]>
    Co-authored-by: Francisco Aguirre <[email protected]>
    Co-authored-by: gupnik <[email protected]>
    Co-authored-by: dzmitry-lahoda <[email protected]>
    Co-authored-by: zhiqiangxu <[email protected]>
    Co-authored-by: Nazar Mokrynskyi <[email protected]>
    Co-authored-by: Anwesh <[email protected]>
    Co-authored-by: cheme <[email protected]>
    Co-authored-by: Sam Johnson <[email protected]>
    Co-authored-by: kianenigma <[email protected]>
    Co-authored-by: Jegor Sidorenko <[email protected]>
    Co-authored-by: Muharem <[email protected]>
    Co-authored-by: joepetrowski <[email protected]>
    Co-authored-by: Alexandru Gheorghe <[email protected]>
    Co-authored-by: Gabriel Facco de Arruda <[email protected]>
    Co-authored-by: Squirrel <[email protected]>
    Co-authored-by: Andrei Sandu <[email protected]>
    Co-authored-by: georgepisaltu <[email protected]>
    Co-authored-by: command-bot <>
    (cherry picked from commit fd5f9292f500652e1d4792b09fb8ac60e1268ce4)
    
    * Revert "FRAME: Create `TransactionExtension` as a replacement for `SignedExtension` (#2280)" (#3665)
    
    This PR reverts #2280 which introduced `TransactionExtension` to replace
    `SignedExtension`.
    
    As a result of the discussion
    [here](https://github.com/paritytech/polkadot-sdk/pull/3623#issuecomment-1986789700),
    the changes will be reverted for now with plans to reintroduce the
    concept in the future.
    
    ---------
    
    Signed-off-by: georgepisaltu <[email protected]>
    (cherry picked from commit bbd51ce867967f71657b901f1a956ad4f75d352e)
    
    * Increase timeout for assertions (#3680)
    
    Prevents timeouts in ci like
    https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/5516019
    
    (cherry picked from commit c4c9257386036a9e27e7ee001fe8eadb80958cc0)
    
    * Removes `as [disambiguation_path]` from `derive_impl` usage (#3652)
    
    Step in https://github.com/paritytech/polkadot-sdk/issues/171
    
    This PR removes `as [disambiguation_path]` syntax from `derive_impl`
    usage across the polkadot-sdk as introduced in
    https://github.com/paritytech/polkadot-sdk/pull/3505
    
    (cherry picked from commit 7099f6e1b1fa3c8cd894693902263d9ed0e38978)
    
    * Fix typo (#3691)
    
    (cherry picked from commit 6b1179f13b4815685769c9f523720ec9ed0e2ff4)
    
    * Bridge zombienet tests: remove unneeded accounts (#3700)
    
    Bridge zombienet tests: remove unneeded accounts
    
    (cherry picked from commit 0c6c837f689a287583508506e342ba07687e8d26)
    
    * Fix typos (#3753)
    
    (cherry picked from commit 7241a8db7b3496816503c6058dae67f66c666b00)
    
    * Update polkadot-sdk refs
    
    * Fix dependency conflicts
    
    * Fix build
    
    * cargo fmt
    
    * Fix spellcheck test
    
    ---------
    
    Co-authored-by: Svyatoslav Nikolsky <[email protected]>
    Co-authored-by: Branislav Kontur <[email protected]>
    Co-authored-by: Marcin S <[email protected]>
    Co-authored-by: Oliver Tale-Yazdi <[email protected]>
    Co-authored-by: Gavin Wood <[email protected]>
    Co-authored-by: georgepisaltu <[email protected]>
    Co-authored-by: Javier Viola <[email protected]>
    Co-authored-by: gupnik <[email protected]>
    Co-authored-by: jokess123 <[email protected]>
    Co-authored-by: slicejoke <[email protected]>
  • Bump async-trait from 0.1.78 to 0.1.79 · 8e58eb92
    dependabot[bot] authored
    
    
    Bumps [async-trait](https://github.com/dtolnay/async-trait) from 0.1.78 to 0.1.79.
    - [Release notes](https://github.com/dtolnay/async-trait/releases)
    - [Commits](https://github.com/dtolnay/async-trait/compare/0.1.78...0.1.79)
    
    ---
    updated-dependencies:
    - dependency-name: async-trait
      dependency-type: direct:production
      update-type: version-update:semver-patch
    ...
    
    Signed-off-by: dependabot[bot] <[email protected]>
  • [Backport from `polkadot-sdk`] Move chain definitions to separate folder (#2892) · 4bc73d80
    Serban Iorga authored
    * [Bridges] Move chain definitions to separate folder (#3822)
    
    Related to
    https://github.com/paritytech/parity-bridges-common/issues/2538
    
    This PR doesn't contain any functional changes.
    
    The PR moves specific bridged chain definitions from
    `bridges/primitives` to `bridges/chains` folder in order to facilitate
    the migration of the `parity-bridges-repo` into `polkadot-sdk` as
    discussed in https://hackmd.io/LprWjZ0bQXKpFeveYHIRXw?view
    
    Apart from this it also includes some cosmetic changes to some
    `Cargo.toml` files as a result of running `diener workspacify`.
    
    (cherry picked from commit 0711729d251efebf3486db602119ecfa67d98366)
    
    * diener workspacify
  • Serban Iorga · a6bac6bc
  • relayer waits until chain spec version matches the configured in Client... · 47b4c48c
    Svyatoslav Nikolsky authored
    relayer waits until the chain spec version matches the one configured in the Client constructor / on reconnect (#2894)
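
    In concept (a minimal std-only sketch with hypothetical names, not the
    actual relayer code): the client polls the connected node's runtime
    version and only proceeds once the reported `spec_version` equals the one
    it was configured with.

    ```rust
    use std::{thread, time::Duration};

    // Stand-in for an RPC call such as `state_getRuntimeVersion` (hypothetical here).
    fn fetch_spec_version() -> u32 {
        1_002_000
    }

    /// Block until the chain reports the spec version the client expects.
    fn wait_for_spec_version(expected: u32) {
        loop {
            let actual = fetch_spec_version();
            if actual == expected {
                return;
            }
            println!("spec_version {actual} != expected {expected}; waiting for upgrade...");
            thread::sleep(Duration::from_secs(6));
        }
    }

    fn main() {
        wait_for_spec_version(1_002_000);
        println!("spec version matches; relaying can start");
    }
    ```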
    
  • Relayer v1.2.1 (#2895) · 1022b6d4
    Svyatoslav Nikolsky authored
    * bump relayer version
    
    * bump supported chain versions
    
    * updated lock file
  • polkadot-sdk backport leftovers (#2896) · b9acdabb
    Serban Iorga authored
  • Serban Iorga · e4e1ea60
  • Backport changes from polkadot-sdk (#2899) · a7a47eae
    Serban Iorga authored
    
    
    * Fix spelling mistakes across the whole repository (#3808)
    
    **Update:** Pushed additional changes based on the review comments.
    
    **This pull request fixes various spelling mistakes in this
    repository.**
    
    Most of the changes are contained in the first **3** commits:
    
    - `Fix spelling mistakes in comments and docs`
    
    - `Fix spelling mistakes in test names`
    
    - `Fix spelling mistakes in error messages, panic messages, logs and
    tracing`
    
    Other source code spelling mistakes are separated into individual
    commits for easier reviewing:
    
    - `Fix the spelling of 'authority'`
    
    - `Fix the spelling of 'REASONABLE_HEADERS_IN_JUSTIFICATION_ANCESTRY'`
    
    - `Fix the spelling of 'prev_enqueud_messages'`
    
    - `Fix the spelling of 'endpoint'`
    
    - `Fix the spelling of 'children'`
    
    - `Fix the spelling of 'PenpalSiblingSovereignAccount'`
    
    - `Fix the spelling of 'PenpalSudoAccount'`
    
    - `Fix the spelling of 'insufficient'`
    
    - `Fix the spelling of 'PalletXcmExtrinsicsBenchmark'`
    
    - `Fix the spelling of 'subtracted'`
    
    - `Fix the spelling of 'CandidatePendingAvailability'`
    
    - `Fix the spelling of 'exclusive'`
    
    - `Fix the spelling of 'until'`
    
    - `Fix the spelling of 'discriminator'`
    
    - `Fix the spelling of 'nonexistent'`
    
    - `Fix the spelling of 'subsystem'`
    
    - `Fix the spelling of 'indices'`
    
    - `Fix the spelling of 'committed'`
    
    - `Fix the spelling of 'topology'`
    
    - `Fix the spelling of 'response'`
    
    - `Fix the spelling of 'beneficiary'`
    
    - `Fix the spelling of 'formatted'`
    
    - `Fix the spelling of 'UNKNOWN_PROOF_REQUEST'`
    
    - `Fix the spelling of 'succeeded'`
    
    - `Fix the spelling of 'reopened'`
    
    - `Fix the spelling of 'proposer'`
    
    - `Fix the spelling of 'InstantiationNonce'`
    
    - `Fix the spelling of 'depositor'`
    
    - `Fix the spelling of 'expiration'`
    
    - `Fix the spelling of 'phantom'`
    
    - `Fix the spelling of 'AggregatedKeyValue'`
    
    - `Fix the spelling of 'randomness'`
    
    - `Fix the spelling of 'defendant'`
    
    - `Fix the spelling of 'AquaticMammal'`
    
    - `Fix the spelling of 'transactions'`
    
    - `Fix the spelling of 'PassingTracingSubscriber'`
    
    - `Fix the spelling of 'TxSignaturePayload'`
    
    - `Fix the spelling of 'versioning'`
    
    - `Fix the spelling of 'descendant'`
    
    - `Fix the spelling of 'overridden'`
    
    - `Fix the spelling of 'network'`
    
    Let me know if this structure is adequate.
    
    **Note:** The usage of the words `Merkle`, `Merkelize`, `Merklization`,
    `Merkelization`, `Merkleization`, is somewhat inconsistent but I left it
    as it is.
    
    ~~**Note:** In some places the term `Receival` is used to refer to
    message reception, IMO `Reception` is the correct word here, but I left
    it as it is.~~
    
    ~~**Note:** In some places the term `Overlayed` is used instead of the
    more acceptable version `Overlaid` but I also left it as it is.~~
    
    ~~**Note:** In some places the term `Applyable` is used instead of the
    correct version `Applicable` but I also left it as it is.~~
    
    **Note:** Some usage of British vs American english e.g. `judgement` vs
    `judgment`, `initialise` vs `initialize`, `optimise` vs `optimize` etc.
    are both present in different places, but I suppose that's
    understandable given the number of contributors.
    
    ~~**Note:** There is a spelling mistake in `.github/CODEOWNERS` but it
    triggers errors in CI when I make changes to it, so I left it as it
    is.~~
    
    (cherry picked from commit 002d9260f9a0f844f87eefd0abce8bd95aae351b)
    
    * Fix
    
    ---------
    
    Co-authored-by: Dcompoze <[email protected]>
  • Leftover (#2900) · 1e4fd28e
    Serban Iorga authored
  • Fix polkadot-sdk CI failures (#2901) · 95660136
    Serban Iorga authored
    * taplo
    
    * markdown
    
    * publish = false
    
    * feature propagation
  • Bump serde_json from 1.0.114 to 1.0.115 · 35474455
    dependabot[bot] authored
    
    
    Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.114 to 1.0.115.
    - [Release notes](https://github.com/serde-rs/json/releases)
    - [Commits](https://github.com/serde-rs/json/compare/v1.0.114...v1.0.115)
    
    ---
    updated-dependencies:
    - dependency-name: serde_json
      dependency-type: direct:production
      update-type: version-update:semver-patch
    ...
    
    Signed-off-by: dependabot[bot] <[email protected]>
  • Bump scale-info from 2.11.0 to 2.11.1 · 051d6ed6
    dependabot[bot] authored
    
    
    Bumps [scale-info](https://github.com/paritytech/scale-info) from 2.11.0 to 2.11.1.
    - [Release notes](https://github.com/paritytech/scale-info/releases)
    - [Changelog](https://github.com/paritytech/scale-info/blob/master/CHANGELOG.md)
    - [Commits](https://github.com/paritytech/scale-info/compare/v2.11.0...v2.11.1)
    
    ---
    updated-dependencies:
    - dependency-name: scale-info
      dependency-type: direct:production
      update-type: version-update:semver-patch
    ...
    
    Signed-off-by: dependabot[bot] <[email protected]>
  • Bump tokio from 1.36.0 to 1.37.0 · f7f983c4
    dependabot[bot] authored
    
    
    Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.36.0 to 1.37.0.
    - [Release notes](https://github.com/tokio-rs/tokio/releases)
    - [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.36.0...tokio-1.37.0)
    
    ---
    updated-dependencies:
    - dependency-name: tokio
      dependency-type: direct:production
      update-type: version-update:semver-minor
    ...
    
    Signed-off-by: dependabot[bot] <[email protected]>
  • Some relayer improvements (#2902) · 34817d81
    Svyatoslav Nikolsky authored
    * added CLI arguments: full WS URI + a separate argument for the WS URI path component + additional logging
    
    * URI -> URL?
    
    * added TODO
    
    * fmt
  • Address migration comments (#2910) · 8c4c99d1
    Serban Iorga authored
    * Use workspace.[authors|edition]
    
    * Add repository.workspace = true
    
    * Upgrade dependencies to the polkadot-sdk versions
    
    * Upgrade async-std version
    
    * Update jsonrpsee version
    
    * cargo update
    
    * use ci-unified image
  • ckb-merkle-mountain-range -> 0.5.2 (#2911) · bea13eab
    Serban Iorga authored
  • Backport changes from polkadot-sdk (#2920) · cfe1e7de
    Serban Iorga authored
    
    
    * Migrate fee payment from `Currency` to `fungible` (#2292)
    
    Part of https://github.com/paritytech/polkadot-sdk/issues/226
    Related https://github.com/paritytech/polkadot-sdk/issues/1833
    
    - Deprecate `CurrencyAdapter` and introduce `FungibleAdapter`
    - Deprecate `ToStakingPot` and replace usage with `ResolveTo`
    - Required creating a new `StakingPotAccountId` struct that implements
    `TypedGet` for the staking pot account ID
    - Update parachain common utils `DealWithFees`, `ToAuthor` and
    `AssetsToBlockAuthor` implementations to use `fungible`
    - Update runtime XCM Weight Traders to use `ResolveTo` instead of
    `ToStakingPot`
    - Update runtime Transaction Payment pallets to use `FungibleAdapter`
    instead of `CurrencyAdapter`
    - [x] Blocked by https://github.com/paritytech/polkadot-sdk/pull/1296,
    needs the `Unbalanced::decrease_balance` fix
    
    (cherry picked from commit bda4e75ac49786a7246531cf729b25c208cd38e6)
    
    * Upgrade `trie-db` from `0.28.0` to `0.29.0` (#3982)
    
    - What does this PR do?
    1. Upgrades `trie-db`'s version to the latest release. This release
    includes, among others, an implementation of `DoubleEndedIterator` for
    the `TrieDB` struct, allowing iteration both backwards and forwards
    over the leaves of a trie.
    2. Upgrades `trie-bench` to `0.39.0` for compatibility.
    3. Upgrades `criterion` to `0.5.1` for compatibility.
    - Why are these changes needed?
    Besides keeping up with the upgrade of `trie-db`, this specifically adds
    the functionality of iterating backwards over the leaves of a trie with
    `sp-trie`. In a project we're currently working on, this comes in very
    handy to verify a Merkle proof that is the response to a challenge. The
    challenge is a random hash that (most likely) will not be an existing
    leaf in the trie. So the challenged user has to provide a Merkle proof
    of the previous and next existing leaves in the trie that surround the
    random challenged hash.
    
    Without having DoubleEnded iterators, we're forced to iterate until we
    find the first existing leaf, like so:
    ```rust
            // ************* VERIFIER (RUNTIME) *************
            // Verify proof. This generates a partial trie based on the proof and
            // checks that the root hash matches the `expected_root`.
            let (memdb, root) = proof.to_memory_db(Some(&root)).unwrap();
            let trie = TrieDBBuilder::<LayoutV1<RefHasher>>::new(&memdb, &root).build();
    
            // Print all leaf node keys and values.
            println!("\nPrinting leaf nodes of partial tree...");
            for key in trie.key_iter().unwrap() {
                if key.is_ok() {
                    println!("Leaf node key: {:?}", key.clone().unwrap());
    
                    let val = trie.get(&key.unwrap());
    
                    if val.is_ok() {
                        println!("Leaf node value: {:?}", val.unwrap());
                    } else {
                        println!("Leaf node value: None");
                    }
                }
            }
    
            println!("RECONSTRUCTED TRIE {:#?}", trie);
    
            // Create an iterator over the leaf nodes.
            let mut iter = trie.iter().unwrap();
    
            // First element with a value should be the previous existing leaf to the challenged hash.
            let mut prev_key = None;
            for element in &mut iter {
                if element.is_ok() {
                    let (key, _) = element.unwrap();
                    prev_key = Some(key);
                    break;
                }
            }
            assert!(prev_key.is_some());
    
            // Since hashes are `Vec<u8>` ordered in big-endian, we can compare them directly.
            assert!(prev_key.unwrap() <= challenge_hash.to_vec());
    
            // The next element should exist (meaning there is no other existing leaf between the
            // previous and next leaf) and it should be greater than the challenged hash.
            let next_key = iter.next().unwrap().unwrap().0;
            assert!(next_key >= challenge_hash.to_vec());
    ```
    
    With DoubleEnded iterators, we can avoid that, like this:
    ```rust
            // ************* VERIFIER (RUNTIME) *************
            // Verify proof. This generates a partial trie based on the proof and
            // checks that the root hash matches the `expected_root`.
            let (memdb, root) = proof.to_memory_db(Some(&root)).unwrap();
            let trie = TrieDBBuilder::<LayoutV1<RefHasher>>::new(&memdb, &root).build();
    
            // Print all leaf node keys and values.
            println!("\nPrinting leaf nodes of partial tree...");
            for key in trie.key_iter().unwrap() {
                if key.is_ok() {
                    println!("Leaf node key: {:?}", key.clone().unwrap());
    
                    let val = trie.get(&key.unwrap());
    
                    if val.is_ok() {
                        println!("Leaf node value: {:?}", val.unwrap());
                    } else {
                        println!("Leaf node value: None");
                    }
                }
            }
    
            // println!("RECONSTRUCTED TRIE {:#?}", trie);
            println!("\nChallenged key: {:?}", challenge_hash);
    
            // Create an iterator over the leaf nodes.
            let mut double_ended_iter = trie.into_double_ended_iter().unwrap();
    
            // First element with a value should be the previous existing leaf to the challenged hash.
            double_ended_iter.seek(&challenge_hash.to_vec()).unwrap();
            let next_key = double_ended_iter.next_back().unwrap().unwrap().0;
            let prev_key = double_ended_iter.next_back().unwrap().unwrap().0;
    
            // Since hashes are `Vec<u8>` ordered in big-endian, we can compare them directly.
            println!("Prev key: {:?}", prev_key);
            assert!(prev_key <= challenge_hash.to_vec());
    
            println!("Next key: {:?}", next_key);
            assert!(next_key >= challenge_hash.to_vec());
    ```
    - How were these changes implemented and what do they affect?
    All that is needed for this functionality to be exposed is changing the
    version number of `trie-db` in all the applicable `Cargo.toml`s, and
    re-exporting some additional structs from `trie-db` in `sp-trie`.
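
    The neighbour-leaf argument above relies on byte-wise comparison of
    fixed-length big-endian keys matching their numeric order. A tiny std-only
    illustration of the seek-then-take-neighbours pattern (unrelated to the
    real `trie-db` API):

    ```rust
    fn main() {
        // Sorted fixed-length big-endian keys, like trie leaves iterated in order.
        let leaves: Vec<Vec<u8>> = vec![vec![0x01, 0x10], vec![0x01, 0x20], vec![0x01, 0x40]];
        let challenge = vec![0x01, 0x30];

        // For equal-length big-endian byte strings, lexicographic order equals
        // numeric order, so `<=` / `>=` on the raw bytes is enough.
        let next = leaves.iter().find(|k| k.as_slice() >= challenge.as_slice());
        let prev = leaves.iter().rev().find(|k| k.as_slice() <= challenge.as_slice());

        assert_eq!(prev, Some(&vec![0x01, 0x20]));
        assert_eq!(next, Some(&vec![0x01, 0x40]));
        println!("prev = {prev:?}, next = {next:?}");
    }
    ```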
    
    ---------
    
    Co-authored-by: Bastian Köcher <[email protected]>
    (cherry picked from commit 4e73c0fcd37e4e8c14aeb83b5c9e680981e16079)
    
    * Update polkadot-sdk refs
    
    * Fix Cargo.lock
    
    ---------
    
    Co-authored-by: Liam Aharon <[email protected]>
    Co-authored-by: Facundo Farall <[email protected]>
  • Consume migrated crates from `polkadot-sdk` (#2921) · a174cfa9
    Serban Iorga authored
    * Remove migrated crates
    
    * Reference polkadot-sdk for the migrated crates
    
    * Leftovers
    
    * Fixes
  • Delete the testing folder (#2922) · 11b56b74
    Serban Iorga authored
    The testing folder has also been moved to polkadot-sdk
  • Bump quote from 1.0.35 to 1.0.36 · 92a722ab
    dependabot[bot] authored
    
    
    Bumps [quote](https://github.com/dtolnay/quote) from 1.0.35 to 1.0.36.
    - [Release notes](https://github.com/dtolnay/quote/releases)
    - [Commits](https://github.com/dtolnay/quote/compare/1.0.35...1.0.36)
    
    ---
    updated-dependencies:
    - dependency-name: quote
      dependency-type: direct:production
      update-type: version-update:semver-patch
    ...
    
    Signed-off-by: dependabot[bot] <[email protected]>
  • [dependabot] ignore migrated crates (#2943) · 581b81dc
    Serban Iorga authored
    * [dependabot] ignore migrated crates
    
    * ignore more migrated crates
  • Bump anyhow from 1.0.81 to 1.0.82 · df0d367e
    dependabot[bot] authored
    
    
    Bumps [anyhow](https://github.com/dtolnay/anyhow) from 1.0.81 to 1.0.82.
    - [Release notes](https://github.com/dtolnay/anyhow/releases)
    - [Commits](https://github.com/dtolnay/anyhow/compare/1.0.81...1.0.82)
    
    ---
    updated-dependencies:
    - dependency-name: anyhow
      dependency-type: direct:production
      update-type: version-update:semver-patch
    ...
    
    Signed-off-by: dependabot[bot] <[email protected]>
  • Bump subxt from 0.32.1 to 0.35.2 · 0364a0aa
    dependabot[bot] authored
    
    
    Bumps [subxt](https://github.com/paritytech/subxt) from 0.32.1 to 0.35.2.
    - [Release notes](https://github.com/paritytech/subxt/releases)
    - [Changelog](https://github.com/paritytech/subxt/blob/master/CHANGELOG.md)
    - [Commits](https://github.com/paritytech/subxt/compare/v0.32.1...v0.35.2)
    
    ---
    updated-dependencies:
    - dependency-name: subxt
      dependency-type: direct:production
      update-type: version-update:semver-minor
    ...
    
    Signed-off-by: dependabot[bot] <[email protected]>
  • Update CI image (#2951) · c752f16c
    Serban Iorga authored
    * Update CI image
    
    * cargo update -p [email protected]
  • Bump async-trait from 0.1.79 to 0.1.80 · 4dd72be5
    dependabot[bot] authored
    
    
    Bumps [async-trait](https://github.com/dtolnay/async-trait) from 0.1.79 to 0.1.80.
    - [Release notes](https://github.com/dtolnay/async-trait/releases)
    - [Commits](https://github.com/dtolnay/async-trait/compare/0.1.79...0.1.80)
    
    ---
    updated-dependencies:
    - dependency-name: async-trait
      dependency-type: direct:production
      update-type: version-update:semver-patch
    ...
    
    Signed-off-by: dependabot[bot] <[email protected]>
  • Bump subxt from 0.35.2 to 0.35.3 · f80f01d4
    dependabot[bot] authored
    
    
    Bumps [subxt](https://github.com/paritytech/subxt) from 0.35.2 to 0.35.3.
    - [Release notes](https://github.com/paritytech/subxt/releases)
    - [Changelog](https://github.com/paritytech/subxt/blob/v0.35.3/CHANGELOG.md)
    - [Commits](https://github.com/paritytech/subxt/compare/v0.35.2...v0.35.3)
    
    ---
    updated-dependencies:
    - dependency-name: subxt
      dependency-type: direct:production
      update-type: version-update:semver-patch
    ...
    
    Signed-off-by: dependabot[bot] <[email protected]>
  • Relayer 1.3.0 (#2959) · 83193de0
    Svyatoslav Nikolsky authored
    * updated RELEASE.md
    
    * bump relay version
    
    * bump BHK version to 1_002_000
  • exported P<>K dashboards (#2960) · bacc6bfd
    Svyatoslav Nikolsky authored
  • Svyatoslav Nikolsky · 7f41e098
  • Bump rustls from 0.21.8 to 0.21.11 in /tools/runtime-codegen (#2965) · 5c8d4df5
    dependabot[bot] authored
    
    
    Bumps [rustls](https://github.com/rustls/rustls) from 0.21.8 to 0.21.11.
    - [Release notes](https://github.com/rustls/rustls/releases)
    - [Changelog](https://github.com/rustls/rustls/blob/main/CHANGELOG.md)
    - [Commits](https://github.com/rustls/rustls/compare/v/0.21.8...v/0.21.11)
    
    ---
    updated-dependencies:
    - dependency-name: rustls
      dependency-type: indirect
    ...
    
    Signed-off-by: dependabot[bot] <[email protected]>
    Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
  • Svyatoslav Nikolsky
  • Bump thiserror from 1.0.58 to 1.0.59 · f14e95c4
    dependabot[bot] authored
    
    
    Bumps [thiserror](https://github.com/dtolnay/thiserror) from 1.0.58 to 1.0.59.
    - [Release notes](https://github.com/dtolnay/thiserror/releases)
    - [Commits](https://github.com/dtolnay/thiserror/compare/1.0.58...1.0.59)
    
    ---
    updated-dependencies:
    - dependency-name: thiserror
      dependency-type: direct:production
      update-type: version-update:semver-patch
    ...
    
    Signed-off-by: dependabot[bot] <[email protected]>
  • Svyatoslav Nikolsky · 69616d69
  • deleted moved files (merge conflict) · ee09b007
    Svyatoslav Nikolsky authored
  • Svyatoslav Nikolsky · a1cbcc75
  • removed another moved file · d9105160
    Svyatoslav Nikolsky authored
@@ -84,7 +84,7 @@ SS58Prefix
 STALL_SYNC_TIMEOUT
 SURI
 ServiceFactory/MS
-TransactionExtension
+SignedExtension
 Stringified
 Submitter1
 S|N
...
@@ -8,7 +8,35 @@ updates:
     timezone: Europe/Berlin
     open-pull-requests-limit: 20
     ignore:
-      # Substrate (+ Polkadot/Cumulus pallets) dependencies
+      # Bridges polkadot-sdk dependencies
+      - dependency-name: bp-*
+        versions:
+          - ">= 0"
+      - dependency-name: bridge-runtime-common
+        versions:
+          - ">= 0"
+      - dependency-name: equivocation-detector
+        versions:
+          - ">= 0"
+      - dependency-name: finality-relay
+        versions:
+          - ">= 0"
+      - dependency-name: messages-relay
+        versions:
+          - ">= 0"
+      - dependency-name: parachains-relay
+        versions:
+          - ">= 0"
+      - dependency-name: relay-substrate-client
+        versions:
+          - ">= 0"
+      - dependency-name: relay-utils
+        versions:
+          - ">= 0"
+      - dependency-name: substrate-relay-helper
+        versions:
+          - ">= 0"
+      # Substrate polkadot-sdk (+ Polkadot/Cumulus pallets) dependencies
       - dependency-name: beefy-*
         versions:
           - ">= 0"
@@ -42,7 +70,7 @@ updates:
       - dependency-name: binary-merkle-tree
         versions:
           - ">= 0"
-      # Polkadot dependencies
+      # Polkadot polkadot-sdk dependencies
       - dependency-name: kusama-*
         versions:
           - ">= 0"
@@ -52,7 +80,7 @@ updates:
       - dependency-name: xcm*
         versions:
           - ">= 0"
-      # Cumulus dependencies
+      # Cumulus polkadot-sdk dependencies
       - dependency-name: cumulus-*
         versions:
           - ">= 0"
...
@@ -10,7 +10,7 @@ variables:
   GIT_DEPTH: 100
   CARGO_INCREMENTAL: 0
   ARCH: "x86_64"
-  CI_IMAGE: "paritytech/bridges-ci:production"
+  CI_IMAGE: "paritytech/ci-unified:bullseye-1.77.0-2024-04-10-v20240408"
   RUST_BACKTRACE: full
   BUILDAH_IMAGE: "quay.io/buildah/stable:v1.29"
   BUILDAH_COMMAND: "buildah --storage-driver overlay2"
@@ -121,7 +121,7 @@ check:
   <<: *docker-env
   <<: *test-refs
   script: &check-script
-    - SKIP_WASM_BUILD=1 time cargo check --locked --verbose --workspace --features runtime-benchmarks
+    - SKIP_WASM_BUILD=1 time cargo check --locked --verbose --workspace
 check-nightly:
   stage: test
@@ -142,7 +142,7 @@ test:
   # Enable this, when you see: "`cargo metadata` can not fail on project `Cargo.toml`"
   #- time cargo fetch --manifest-path=`cargo metadata --format-version=1 | jq --compact-output --raw-output ".packages[] | select(.name == \"polkadot-runtime\").manifest_path"`
   #- time cargo fetch --manifest-path=`cargo metadata --format-version=1 | jq --compact-output --raw-output ".packages[] | select(.name == \"kusama-runtime\").manifest_path"`
-    - CARGO_NET_OFFLINE=true SKIP_WASM_BUILD=1 time cargo test --verbose --workspace --features runtime-benchmarks
+    - CARGO_NET_OFFLINE=true SKIP_WASM_BUILD=1 time cargo test --verbose --workspace
 test-nightly:
   stage: test
...
@@ -6,54 +6,17 @@ license = "GPL-3.0-only"
 [workspace]
 resolver = "2"
 members = [
-    "bin/runtime-common",
-    "modules/beefy",
-    "modules/grandpa",
-    "modules/messages",
-    "modules/parachains",
-    "modules/relayers",
-    "modules/xcm-bridge-hub",
-    "modules/xcm-bridge-hub-router",
-    "primitives/beefy",
-    "primitives/chain-asset-hub-rococo",
-    "primitives/chain-asset-hub-westend",
-    "primitives/chain-bridge-hub-cumulus",
-    "primitives/chain-bridge-hub-kusama",
-    "primitives/chain-bridge-hub-polkadot",
-    "primitives/chain-bridge-hub-rococo",
-    "primitives/chain-bridge-hub-westend",
-    "primitives/chain-kusama",
-    "primitives/chain-polkadot",
-    "primitives/chain-polkadot-bulletin",
-    "primitives/chain-rococo",
-    "primitives/chain-westend",
-    "primitives/header-chain",
-    "primitives/messages",
-    "primitives/parachains",
-    "primitives/polkadot-core",
-    "primitives/relayers",
-    "primitives/runtime",
-    "primitives/test-utils",
-    "primitives/xcm-bridge-hub-router",
-    "relays/bin-substrate",
-    "relays/client-bridge-hub-kusama",
-    "relays/client-bridge-hub-polkadot",
-    "relays/client-bridge-hub-rococo",
-    "relays/client-bridge-hub-westend",
-    "relays/client-kusama",
-    "relays/client-polkadot",
-    "relays/client-polkadot-bulletin",
-    "relays/client-rococo",
-    "relays/client-substrate",
-    "relays/client-westend",
-    "relays/equivocation",
-    "relays/finality",
-    "relays/lib-substrate-relay",
-    "relays/messages",
-    "relays/parachains",
-    "relays/utils",
+    "relay-clients/client-bridge-hub-kusama",
+    "relay-clients/client-bridge-hub-polkadot",
+    "relay-clients/client-bridge-hub-rococo",
+    "relay-clients/client-bridge-hub-westend",
+    "relay-clients/client-kusama",
+    "relay-clients/client-polkadot",
+    "relay-clients/client-polkadot-bulletin",
+    "relay-clients/client-rococo",
+    "relay-clients/client-westend",
+    "substrate-relay",
 ]
 # Setup clippy lints as `polkadot-sdk`,
@@ -89,7 +52,7 @@ complexity = { level = "deny", priority = 1 }
 [workspace.dependencies]
 log = { version = "0.4.20", default-features = false }
-quote = { version = "1.0.33" }
+quote = { version = "1.0.36" }
 serde = { version = "1.0.197", default-features = false }
-serde_json = { version = "1.0.114", default-features = false }
+serde_json = { version = "1.0.115", default-features = false }
-thiserror = { version = "1.0.48" }
+thiserror = { version = "1.0.59" }
@@ -8,7 +8,7 @@
 #
 # See the `deployments/README.md` for all the available `PROJECT` values.
-FROM docker.io/paritytech/bridges-ci:production as builder
+FROM docker.io/paritytech/ci-unified:bullseye-1.77.0-2024-04-10-v20240408 as builder
 USER root
 WORKDIR /parity-bridges-common
...
@@ -38,10 +38,10 @@ cargo test --all
 ```
 Also you can build the repo with [Parity CI Docker
-image](https://github.com/paritytech/scripts/tree/master/dockerfiles/bridges-ci):
+image](https://github.com/paritytech/scripts/tree/master/dockerfiles/ci-unified):
 ```bash
-docker pull paritytech/bridges-ci:production
+docker pull paritytech/ci-unified:bullseye-1.77.0-2024-04-10-v20240408
 mkdir ~/cache
 chown 1000:1000 ~/cache #processes in the container runs as "nonroot" user with UID 1000
 docker run --rm -it -w /shellhere/parity-bridges-common \
@@ -49,7 +49,7 @@ docker run --rm -it -w /shellhere/parity-bridges-common \
   -v "$(pwd)":/shellhere/parity-bridges-common \
   -e CARGO_HOME=/cache/cargo/ \
   -e SCCACHE_DIR=/cache/sccache/ \
-  -e CARGO_TARGET_DIR=/cache/target/ paritytech/bridges-ci:production cargo build --all
+  -e CARGO_TARGET_DIR=/cache/target/ paritytech/ci-unified:bullseye-1.77.0-2024-04-10-v20240408 cargo build --all
 #artifacts can be found in ~/cache/target
 ```
...
@@ -6,16 +6,16 @@ come first and details come in the last sections.
 ### Making a Release
 All releases are supposed to be done from the
-[`polkadot-staging` branch](https://github.com/paritytech/parity-bridges-common/tree/polkadot-staging).
+[`master` branch](https://github.com/paritytech/parity-bridges-common/tree/master).
 This branch is assumed to contain changes, that are reviewed and audited.
 To prepare a release:
 1. Make sure all required changes are merged to the
-   [`polkadot-staging` branch](https://github.com/paritytech/parity-bridges-common/tree/polkadot-staging);
+   [`master` branch](https://github.com/paritytech/parity-bridges-common/tree/master);
 2. Select release version: go to the `Cargo.toml` of `substrate-relay` crate
-   ([here](https://github.com/paritytech/parity-bridges-common/blob/polkadot-staging/relays/bin-substrate/Cargo.toml#L3))
+   ([here](https://github.com/paritytech/parity-bridges-common/blob/master/relays/bin-substrate/Cargo.toml#L3))
 to look for the latest version. Then increment the minor or major version.
 **NOTE**: we are not going to properly support [semver](https://semver.org)
@@ -28,11 +28,11 @@ To prepare a release:
 It could be combined with the (1) if changes are not large. Make sure to
 add the [`A-release`](https://github.com/paritytech/parity-bridges-common/labels/A-release)
 label to your PR - in the future we'll add workflow to make pre-releases
-when such PR is merged to the `polkadot-staging` branch;
+when such PR is merged to the `master` branch;
 4. Wait for approvals and merge PR, mentioned in (3);
-5. Checkout updated `polkadot-staging` branch and do `git pull`;
+5. Checkout updated `master` branch and do `git pull`;
 6. Make a new git tag with the `substrate-relay` version:
 ```sh
@@ -123,15 +123,15 @@ support it. Normally it means:
 1. Bumping bundled chain versions in following places:
-   - for `Rococo` and `RBH`: [here](https://github.com/paritytech/parity-bridges-common/blob/polkadot-staging/relays/bin-substrate/src/chains/rococo.rs);
+   - for `Rococo` and `RBH`: [here](https://github.com/paritytech/parity-bridges-common/blob/master/relays/bin-substrate/src/chains/rococo.rs);
-   - for `Westend` and `WBH`: [here](https://github.com/paritytech/parity-bridges-common/blob/polkadot-staging/relays/bin-substrate/src/chains/westend.rs);
+   - for `Westend` and `WBH`: [here](https://github.com/paritytech/parity-bridges-common/blob/master/relays/bin-substrate/src/chains/westend.rs);
-   - for `Kusama` and `KBH`: [here](https://github.com/paritytech/parity-bridges-common/blob/polkadot-staging/relays/bin-substrate/src/chains/polkadot.rs)
+   - for `Kusama` and `KBH`: [here](https://github.com/paritytech/parity-bridges-common/blob/master/relays/bin-substrate/src/chains/polkadot.rs)
-   - for `Polkadot` and `PBH`: [here](https://github.com/paritytech/parity-bridges-common/blob/polkadot-staging/relays/bin-substrate/src/chains/polkadot.rs);
+   - for `Polkadot` and `PBH`: [here](https://github.com/paritytech/parity-bridges-common/blob/master/relays/bin-substrate/src/chains/polkadot.rs);
-   - for `PBC`: [here](https://github.com/paritytech/parity-bridges-common/blob/polkadot-staging/relays/bin-substrate/src/chains/polkadot_bulletin.rs).
+   - for `PBC`: [here](https://github.com/paritytech/parity-bridges-common/blob/master/relays/bin-substrate/src/chains/polkadot_bulletin.rs).
 2. Regenerating bundled runtime wrapper code using `runtime-codegen` binary:
......
[package]
name = "bridge-runtime-common"
version = "0.7.0"
description = "Common types and functions that may be used by substrate-based runtimes of all bridged chains"
authors.workspace = true
edition.workspace = true
repository.workspace = true
license = "GPL-3.0-or-later WITH Classpath-exception-2.0"
[lints]
workspace = true
[dependencies]
codec = { package = "parity-scale-codec", version = "3.1.5", default-features = false, features = ["derive"] }
hash-db = { version = "0.16.0", default-features = false }
log = { workspace = true }
scale-info = { version = "2.10.0", default-features = false, features = ["derive"] }
static_assertions = { version = "1.1", optional = true }
tuplex = { version = "0.1", default-features = false }
# Bridge dependencies
bp-header-chain = { path = "../../primitives/header-chain", default-features = false }
bp-messages = { path = "../../primitives/messages", default-features = false }
bp-parachains = { path = "../../primitives/parachains", default-features = false }
bp-polkadot-core = { path = "../../primitives/polkadot-core", default-features = false }
bp-relayers = { path = "../../primitives/relayers", default-features = false }
bp-runtime = { path = "../../primitives/runtime", default-features = false }
bp-xcm-bridge-hub = { path = "../../primitives/xcm-bridge-hub", default-features = false }
bp-xcm-bridge-hub-router = { path = "../../primitives/xcm-bridge-hub-router", default-features = false }
pallet-bridge-grandpa = { path = "../../modules/grandpa", default-features = false }
pallet-bridge-messages = { path = "../../modules/messages", default-features = false }
pallet-bridge-parachains = { path = "../../modules/parachains", default-features = false }
pallet-bridge-relayers = { path = "../../modules/relayers", default-features = false }
# Substrate dependencies
frame-support = { git = "https://github.com/paritytech/polkadot-sdk", branch = "master", default-features = false }
frame-system = { git = "https://github.com/paritytech/polkadot-sdk", branch = "master", default-features = false }
pallet-transaction-payment = { git = "https://github.com/paritytech/polkadot-sdk", branch = "master", default-features = false }
pallet-utility = { git = "https://github.com/paritytech/polkadot-sdk", branch = "master", default-features = false }
sp-api = { git = "https://github.com/paritytech/polkadot-sdk", branch = "master", default-features = false }
sp-core = { git = "https://github.com/paritytech/polkadot-sdk", branch = "master", default-features = false }
sp-io = { git = "https://github.com/paritytech/polkadot-sdk", branch = "master", default-features = false }
sp-runtime = { git = "https://github.com/paritytech/polkadot-sdk", branch = "master", default-features = false }
sp-std = { git = "https://github.com/paritytech/polkadot-sdk", branch = "master", default-features = false }
sp-trie = { git = "https://github.com/paritytech/polkadot-sdk", branch = "master", default-features = false }
# Polkadot dependencies
xcm = { package = "staging-xcm", git = "https://github.com/paritytech/polkadot-sdk", default-features = false , branch = "master" }
xcm-builder = { package = "staging-xcm-builder", git = "https://github.com/paritytech/polkadot-sdk", default-features = false , branch = "master" }
[dev-dependencies]
bp-test-utils = { path = "../../primitives/test-utils" }
pallet-balances = { git = "https://github.com/paritytech/polkadot-sdk", branch = "master" }
[features]
default = ["std"]
std = [
"bp-header-chain/std",
"bp-messages/std",
"bp-parachains/std",
"bp-polkadot-core/std",
"bp-relayers/std",
"bp-runtime/std",
"bp-xcm-bridge-hub-router/std",
"bp-xcm-bridge-hub/std",
"codec/std",
"frame-support/std",
"frame-system/std",
"hash-db/std",
"log/std",
"pallet-bridge-grandpa/std",
"pallet-bridge-messages/std",
"pallet-bridge-parachains/std",
"pallet-bridge-relayers/std",
"pallet-transaction-payment/std",
"pallet-utility/std",
"scale-info/std",
"sp-api/std",
"sp-core/std",
"sp-io/std",
"sp-runtime/std",
"sp-std/std",
"sp-trie/std",
"tuplex/std",
"xcm-builder/std",
"xcm/std",
]
runtime-benchmarks = [
"frame-support/runtime-benchmarks",
"frame-system/runtime-benchmarks",
"pallet-balances/runtime-benchmarks",
"pallet-bridge-grandpa/runtime-benchmarks",
"pallet-bridge-messages/runtime-benchmarks",
"pallet-bridge-parachains/runtime-benchmarks",
"pallet-bridge-relayers/runtime-benchmarks",
"pallet-transaction-payment/runtime-benchmarks",
"pallet-utility/runtime-benchmarks",
"sp-runtime/runtime-benchmarks",
"xcm-builder/runtime-benchmarks",
]
integrity-test = ["static_assertions"]
// Copyright (C) Parity Technologies (UK) Ltd.
// This file is part of Parity Bridges Common.
// Parity Bridges Common is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity Bridges Common is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity Bridges Common. If not, see <http://www.gnu.org/licenses/>.
//! Bridge-specific transaction extensions.
pub mod check_obsolete_extension;
pub mod priority_calculator;
pub mod refund_relayer_extension;
// Copyright (C) Parity Technologies (UK) Ltd.
// This file is part of Parity Bridges Common.
// Parity Bridges Common is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity Bridges Common is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity Bridges Common. If not, see <http://www.gnu.org/licenses/>.
//! Bridge transaction priority calculator.
//!
//! We want to prioritize message delivery transactions with more messages over
//! transactions with fewer messages. That's because we reject a delivery transaction
//! if it contains an already-delivered message. So if some transaction delivers a
//! single message with nonce `N`, then a later transaction with nonces `N..=N+100`
//! will be rejected. This can lower bridge throughput down to one message per block.
use bp_messages::MessageNonce;
use frame_support::traits::Get;
use sp_runtime::transaction_validity::TransactionPriority;
// reexport everything from `integrity_tests` module
#[allow(unused_imports)]
pub use integrity_tests::*;
/// Compute priority boost for message delivery transaction that delivers
/// given number of messages.
pub fn compute_priority_boost<PriorityBoostPerMessage>(
messages: MessageNonce,
) -> TransactionPriority
where
PriorityBoostPerMessage: Get<TransactionPriority>,
{
// we don't want any boost for a transaction with a single message => hence the minus one
PriorityBoostPerMessage::get().saturating_mul(messages.saturating_sub(1))
}
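For intuition, here is a minimal, self-contained sketch of the same arithmetic with plain `u64` values in place of `TransactionPriority` and the `Get` parameter; the concrete boost value is hypothetical:

```rust
/// Mirrors `compute_priority_boost` with plain integers: no boost for a
/// single-message transaction, `per_message` extra priority for every
/// additional message.
fn boost(per_message: u64, messages: u64) -> u64 {
    per_message.saturating_mul(messages.saturating_sub(1))
}

fn main() {
    // hypothetical stand-in for the runtime's `PriorityBoostPerMessage`
    let per_message = 921_900_294;
    assert_eq!(boost(per_message, 1), 0); // single message => no boost
    assert_eq!(boost(per_message, 2), per_message); // one extra message
    assert_eq!(boost(per_message, 101), per_message * 100); // 100 extra messages
}
```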
#[cfg(not(feature = "integrity-test"))]
mod integrity_tests {}
#[cfg(feature = "integrity-test")]
mod integrity_tests {
use super::compute_priority_boost;
use bp_messages::MessageNonce;
use bp_runtime::PreComputedSize;
use frame_support::{
dispatch::{DispatchClass, DispatchInfo, Pays, PostDispatchInfo},
traits::Get,
};
use pallet_bridge_messages::WeightInfoExt;
use pallet_transaction_payment::OnChargeTransaction;
use sp_runtime::{
traits::{Dispatchable, UniqueSaturatedInto, Zero},
transaction_validity::TransactionPriority,
FixedPointOperand, SaturatedConversion, Saturating,
};
type BalanceOf<T> =
<<T as pallet_transaction_payment::Config>::OnChargeTransaction as OnChargeTransaction<
T,
>>::Balance;
/// Ensures that the value of `PriorityBoostPerMessage` matches the value of
/// `tip_boost_per_message`.
///
/// We want two transactions, `TX1` with `N` messages and `TX2` with `N+1` messages, to have
/// almost the same priority if we add a `tip_boost_per_message` tip to `TX1`. We want to be
/// sure that if we instead add a plain `PriorityBoostPerMessage` priority boost to `TX1`, its
/// priority stays close to `TX2` as well.
pub fn ensure_priority_boost_is_sane<Runtime, MessagesInstance, PriorityBoostPerMessage>(
tip_boost_per_message: BalanceOf<Runtime>,
) where
Runtime:
pallet_transaction_payment::Config + pallet_bridge_messages::Config<MessagesInstance>,
MessagesInstance: 'static,
PriorityBoostPerMessage: Get<TransactionPriority>,
Runtime::RuntimeCall: Dispatchable<Info = DispatchInfo, PostInfo = PostDispatchInfo>,
BalanceOf<Runtime>: Send + Sync + FixedPointOperand,
{
let priority_boost_per_message = PriorityBoostPerMessage::get();
let maximal_messages_in_delivery_transaction =
Runtime::MaxUnconfirmedMessagesAtInboundLane::get();
for messages in 1..=maximal_messages_in_delivery_transaction {
let base_priority = estimate_message_delivery_transaction_priority::<
Runtime,
MessagesInstance,
>(messages, Zero::zero());
let priority_boost = compute_priority_boost::<PriorityBoostPerMessage>(messages);
let priority_with_boost = base_priority + priority_boost;
let tip = tip_boost_per_message.saturating_mul((messages - 1).unique_saturated_into());
let priority_with_tip =
estimate_message_delivery_transaction_priority::<Runtime, MessagesInstance>(1, tip);
const ERROR_MARGIN: TransactionPriority = 5; // 5%
if priority_with_boost.abs_diff(priority_with_tip).saturating_mul(100) /
priority_with_tip >
ERROR_MARGIN
{
panic!(
"The PriorityBoostPerMessage value ({}) must be fixed to: {}",
priority_boost_per_message,
compute_priority_boost_per_message::<Runtime, MessagesInstance>(
tip_boost_per_message
),
);
}
}
}
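The `ERROR_MARGIN` comparison above is just a relative-difference check; here is a standalone mirror of that single step, with toy numbers in place of real priority estimates:

```rust
/// Returns true if `with_boost` is within `margin_percent` of `with_tip`,
/// mirroring the `ERROR_MARGIN` check above.
fn is_within_margin(with_boost: u64, with_tip: u64, margin_percent: u64) -> bool {
    with_boost.abs_diff(with_tip).saturating_mul(100) / with_tip <= margin_percent
}

fn main() {
    // toy numbers: a 3% deviation passes the 5% margin, a 10% deviation fails
    assert!(is_within_margin(103, 100, 5));
    assert!(!is_within_margin(110, 100, 5));
}
```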
/// Compute the priority boost that we give to a message delivery transaction for each additional message.
#[cfg(feature = "integrity-test")]
fn compute_priority_boost_per_message<Runtime, MessagesInstance>(
tip_boost_per_message: BalanceOf<Runtime>,
) -> TransactionPriority
where
Runtime:
pallet_transaction_payment::Config + pallet_bridge_messages::Config<MessagesInstance>,
MessagesInstance: 'static,
Runtime::RuntimeCall: Dispatchable<Info = DispatchInfo, PostInfo = PostDispatchInfo>,
BalanceOf<Runtime>: Send + Sync + FixedPointOperand,
{
// estimate priority of a transaction that delivers one message and has a large tip
let maximal_messages_in_delivery_transaction =
Runtime::MaxUnconfirmedMessagesAtInboundLane::get();
let small_with_tip_priority =
estimate_message_delivery_transaction_priority::<Runtime, MessagesInstance>(
1,
tip_boost_per_message
.saturating_mul(maximal_messages_in_delivery_transaction.saturated_into()),
);
// estimate priority of transaction that delivers maximal number of messages, but has no tip
let large_without_tip_priority = estimate_message_delivery_transaction_priority::<
Runtime,
MessagesInstance,
>(maximal_messages_in_delivery_transaction, Zero::zero());
small_with_tip_priority
.saturating_sub(large_without_tip_priority)
.saturating_div(maximal_messages_in_delivery_transaction - 1)
}
/// Estimate message delivery transaction priority.
#[cfg(feature = "integrity-test")]
fn estimate_message_delivery_transaction_priority<Runtime, MessagesInstance>(
messages: MessageNonce,
tip: BalanceOf<Runtime>,
) -> TransactionPriority
where
Runtime:
pallet_transaction_payment::Config + pallet_bridge_messages::Config<MessagesInstance>,
MessagesInstance: 'static,
Runtime::RuntimeCall: Dispatchable<Info = DispatchInfo, PostInfo = PostDispatchInfo>,
BalanceOf<Runtime>: Send + Sync + FixedPointOperand,
{
// just an estimate of the extra bytes that are added to every transaction
// (signature, signed extension extras and so on; in our case it also includes
// all call arguments except the proof itself)
let base_tx_size = 512;
// let's say we are relaying similar small messages and for every message we add more trie
// nodes to the proof (x0.5 because we expect some nodes to be reused)
let estimated_message_size = 512;
// let's say all our messages have the same dispatch weight
let estimated_message_dispatch_weight = <Runtime as pallet_bridge_messages::Config<
MessagesInstance,
>>::WeightInfo::message_dispatch_weight(
estimated_message_size
);
// the messages proof argument size is (for every message) the message size + some additional
// trie nodes. Some of those nodes are reused by different messages, so let's take 2/3 of the
// default "overhead" constant
let messages_proof_size = <Runtime as pallet_bridge_messages::Config<MessagesInstance>>::WeightInfo::expected_extra_storage_proof_size()
.saturating_mul(2)
.saturating_div(3)
.saturating_add(estimated_message_size)
.saturating_mul(messages as _);
// finally we are able to estimate transaction size and weight
let transaction_size = base_tx_size.saturating_add(messages_proof_size);
let transaction_weight = <Runtime as pallet_bridge_messages::Config<MessagesInstance>>::WeightInfo::receive_messages_proof_weight(
&PreComputedSize(transaction_size as _),
messages as _,
estimated_message_dispatch_weight.saturating_mul(messages),
);
pallet_transaction_payment::ChargeTransactionPayment::<Runtime>::get_priority(
&DispatchInfo {
weight: transaction_weight,
class: DispatchClass::Normal,
pays_fee: Pays::Yes,
},
transaction_size as _,
tip,
Zero::zero(),
)
}
}
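The transaction-size estimate above reduces to simple arithmetic. Here is a standalone mirror using the same 512-byte base-transaction and message sizes; the `1024` standing in for `expected_extra_storage_proof_size()` is a made-up value:

```rust
/// Mirrors the transaction-size estimate: base bytes + per-message proof bytes,
/// where each message contributes its size plus 2/3 of the storage-proof overhead.
fn estimated_tx_size(base_tx_size: u32, extra_proof_size: u32, message_size: u32, messages: u32) -> u32 {
    let per_message = extra_proof_size
        .saturating_mul(2)
        .saturating_div(3)
        .saturating_add(message_size);
    base_tx_size.saturating_add(per_message.saturating_mul(messages))
}

fn main() {
    // 512-byte base tx and 512-byte messages, as in the estimate above;
    // 1024 is a toy stand-in for `expected_extra_storage_proof_size()`
    assert_eq!(estimated_tx_size(512, 1024, 512, 1), 512 + (682 + 512));
    assert_eq!(estimated_tx_size(512, 1024, 512, 10), 512 + (682 + 512) * 10);
}
```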
// Copyright (C) Parity Technologies (UK) Ltd.
// This file is part of Parity Bridges Common.
// Parity Bridges Common is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity Bridges Common is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity Bridges Common. If not, see <http://www.gnu.org/licenses/>.
//! Integrity tests for chain constants and pallets configuration.
//!
//! Most of the tests in this module assume that the bridge is using the standard configuration
//! (see the `crate::messages` module for details).
use crate::{messages, messages::MessageBridge};
use bp_messages::{InboundLaneData, MessageNonce};
use bp_runtime::{Chain, ChainId};
use codec::Encode;
use frame_support::{storage::generator::StorageValue, traits::Get, weights::Weight};
use frame_system::limits;
use pallet_bridge_messages::WeightInfoExt as _;
/// Macro that ensures that the runtime configuration and chain primitives crate are sharing
/// the same types (nonce, block number, hash, hasher, account id and header).
#[macro_export]
macro_rules! assert_chain_types(
( runtime: $r:path, this_chain: $this:path ) => {
{
// if one of these asserts fails, then either the bridge isn't configured properly (or a non-standard
// configuration is used), or something has broken the existing configuration (meaning that all bridged
// chains and relays will stop functioning)
use frame_system::{Config as SystemConfig, pallet_prelude::{BlockNumberFor, HeaderFor}};
use static_assertions::assert_type_eq_all;
assert_type_eq_all!(<$r as SystemConfig>::Nonce, bp_runtime::NonceOf<$this>);
assert_type_eq_all!(BlockNumberFor<$r>, bp_runtime::BlockNumberOf<$this>);
assert_type_eq_all!(<$r as SystemConfig>::Hash, bp_runtime::HashOf<$this>);
assert_type_eq_all!(<$r as SystemConfig>::Hashing, bp_runtime::HasherOf<$this>);
assert_type_eq_all!(<$r as SystemConfig>::AccountId, bp_runtime::AccountIdOf<$this>);
assert_type_eq_all!(HeaderFor<$r>, bp_runtime::HeaderOf<$this>);
}
}
);
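These macros build on `static_assertions::assert_type_eq_all!`, which turns a type mismatch into a compile-time error. A minimal standalone illustration, assuming only the `static_assertions` crate; the aliases are made-up stand-ins for a runtime type and its primitives-crate counterpart:

```rust
use static_assertions::assert_type_eq_all;

// two aliases that are supposed to name the same underlying type,
// the way a runtime `Nonce` and a primitives-crate `NonceOf<Chain>` would
type RuntimeNonce = u32;
type ChainNonce = u32;

// compiles only while both aliases resolve to the same type;
// changing one of them to `u64` turns this into a compile-time error
assert_type_eq_all!(RuntimeNonce, ChainNonce);

fn main() {}
```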
/// Macro that ensures that the bridge GRANDPA pallet is configured properly to bridge with given
/// chain.
#[macro_export]
macro_rules! assert_bridge_grandpa_pallet_types(
( runtime: $r:path, with_bridged_chain_grandpa_instance: $i:path, bridged_chain: $bridged:path ) => {
{
// if one of these asserts fails, then either the bridge isn't configured properly (or a non-standard
// configuration is used), or something has broken the existing configuration (meaning that all bridged
// chains and relays will stop functioning)
use pallet_bridge_grandpa::Config as GrandpaConfig;
use static_assertions::assert_type_eq_all;
assert_type_eq_all!(<$r as GrandpaConfig<$i>>::BridgedChain, $bridged);
}
}
);
/// Macro that ensures that the bridge messages pallet is configured properly to bridge using given
/// configuration.
#[macro_export]
macro_rules! assert_bridge_messages_pallet_types(
(
runtime: $r:path,
with_bridged_chain_messages_instance: $i:path,
bridge: $bridge:path
) => {
{
// if one of these asserts fails, then either the bridge isn't configured properly (or a non-standard
// configuration is used), or something has broken the existing configuration (meaning that all bridged
// chains and relays will stop functioning)
use $crate::messages::{
source::{FromThisChainMessagePayload, TargetHeaderChainAdapter},
target::{FromBridgedChainMessagePayload, SourceHeaderChainAdapter},
AccountIdOf, BalanceOf, BridgedChain, ThisChain,
};
use pallet_bridge_messages::Config as MessagesConfig;
use static_assertions::assert_type_eq_all;
assert_type_eq_all!(<$r as MessagesConfig<$i>>::OutboundPayload, FromThisChainMessagePayload);
assert_type_eq_all!(<$r as MessagesConfig<$i>>::InboundRelayer, AccountIdOf<BridgedChain<$bridge>>);
assert_type_eq_all!(<$r as MessagesConfig<$i>>::TargetHeaderChain, TargetHeaderChainAdapter<$bridge>);
assert_type_eq_all!(<$r as MessagesConfig<$i>>::SourceHeaderChain, SourceHeaderChainAdapter<$bridge>);
}
}
);
/// Macro that combines three other macro calls - `assert_chain_types`,
/// `assert_bridge_grandpa_pallet_types` and `assert_bridge_messages_pallet_types`. It may be used
/// at a chain that implements a complete standard messages bridge (i.e. with the bridge GRANDPA
/// and messages pallets deployed).
#[macro_export]
macro_rules! assert_complete_bridge_types(
(
runtime: $r:path,
with_bridged_chain_grandpa_instance: $gi:path,
with_bridged_chain_messages_instance: $mi:path,
bridge: $bridge:path,
this_chain: $this:path,
bridged_chain: $bridged:path,
) => {
$crate::assert_chain_types!(runtime: $r, this_chain: $this);
$crate::assert_bridge_grandpa_pallet_types!(
runtime: $r,
with_bridged_chain_grandpa_instance: $gi,
bridged_chain: $bridged
);
$crate::assert_bridge_messages_pallet_types!(
runtime: $r,
with_bridged_chain_messages_instance: $mi,
bridge: $bridge
);
}
);
/// Parameters for asserting chain-related constants.
#[derive(Debug)]
pub struct AssertChainConstants {
/// Block length limits of the chain.
pub block_length: limits::BlockLength,
/// Block weight limits of the chain.
pub block_weights: limits::BlockWeights,
}
/// Test that our hardcoded, chain-related constants match the chain runtime configuration.
///
/// In particular, this test ensures that:
///
/// 1) block weight limits are matching;
/// 2) block size limits are matching.
pub fn assert_chain_constants<R>(params: AssertChainConstants)
where
R: frame_system::Config,
{
// we don't check the runtime version here, because in our case the relay is built from one
// repo while the runtime lives in another repo (possibly bundling an outdated relay version).
// To avoid unneeded commits, let's not raise an error in case of a version mismatch.
// if one of the following asserts fails, it means that we may need to upgrade the bridged chain
// and the relay to use updated constants. If the constants are now smaller than before, it may
// lead to undeliverable messages.
// `BlockLength` struct is not implementing `PartialEq`, so we compare encoded values here.
assert_eq!(
R::BlockLength::get().encode(),
params.block_length.encode(),
"BlockLength from runtime ({:?}) differs from the hardcoded value: {:?}",
R::BlockLength::get(),
params.block_length,
);
// `BlockWeights` struct is not implementing `PartialEq`, so we compare encoded values here
assert_eq!(
R::BlockWeights::get().encode(),
params.block_weights.encode(),
"BlockWeights from runtime ({:?}) differs from the hardcoded value: {:?}",
R::BlockWeights::get(),
params.block_weights,
);
}
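Since `BlockLength` and `BlockWeights` don't implement `PartialEq`, the asserts above compare SCALE-encoded bytes. A minimal standalone sketch of that encode-and-compare trick, assuming only `codec` (`parity-scale-codec` with the `derive` feature, as in the manifest above); `Limits` is a made-up stand-in type:

```rust
use codec::Encode;

// stand-in for a configuration struct that doesn't implement `PartialEq`
#[derive(Debug, Encode)]
struct Limits {
    max_block_size: u32,
    max_block_weight: u64,
}

fn main() {
    let from_runtime = Limits { max_block_size: 5 * 1024 * 1024, max_block_weight: 2_000_000 };
    let hardcoded = Limits { max_block_size: 5 * 1024 * 1024, max_block_weight: 2_000_000 };
    // compare the encoded bytes instead of the values themselves
    assert_eq!(from_runtime.encode(), hardcoded.encode());
}
```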
/// Test that the constants used in the GRANDPA pallet configuration are valid.
pub fn assert_bridge_grandpa_pallet_constants<R, GI>()
where
R: pallet_bridge_grandpa::Config<GI>,
GI: 'static,
{
assert!(
R::HeadersToKeep::get() > 0,
"HeadersToKeep ({}) must be larger than zero",
R::HeadersToKeep::get(),
);
}
/// Parameters for asserting messages pallet constants.
#[derive(Debug)]
pub struct AssertBridgeMessagesPalletConstants {
/// Maximal number of unrewarded relayer entries in a confirmation transaction at the bridged
/// chain.
pub max_unrewarded_relayers_in_bridged_confirmation_tx: MessageNonce,
/// Maximal number of unconfirmed messages in a confirmation transaction at the bridged chain.
pub max_unconfirmed_messages_in_bridged_confirmation_tx: MessageNonce,
/// Identifier of the bridged chain.
pub bridged_chain_id: ChainId,
}
/// Test that the constants used in the messages pallet configuration are valid.
pub fn assert_bridge_messages_pallet_constants<R, MI>(params: AssertBridgeMessagesPalletConstants)
where
R: pallet_bridge_messages::Config<MI>,
MI: 'static,
{
assert!(
!R::ActiveOutboundLanes::get().is_empty(),
"ActiveOutboundLanes ({:?}) must not be empty",
R::ActiveOutboundLanes::get(),
);
assert!(
R::MaxUnrewardedRelayerEntriesAtInboundLane::get() <= params.max_unrewarded_relayers_in_bridged_confirmation_tx,
"MaxUnrewardedRelayerEntriesAtInboundLane ({}) must be <= the hardcoded value for the bridged chain: {}",
R::MaxUnrewardedRelayerEntriesAtInboundLane::get(),
params.max_unrewarded_relayers_in_bridged_confirmation_tx,
);
assert!(
R::MaxUnconfirmedMessagesAtInboundLane::get() <= params.max_unconfirmed_messages_in_bridged_confirmation_tx,
"MaxUnconfirmedMessagesAtInboundLane ({}) must be <= the hardcoded value for the bridged chain: {}",
R::MaxUnconfirmedMessagesAtInboundLane::get(),
params.max_unconfirmed_messages_in_bridged_confirmation_tx,
);
assert_eq!(R::BridgedChainId::get(), params.bridged_chain_id);
}
/// Parameters for asserting bridge pallet names.
#[derive(Debug)]
pub struct AssertBridgePalletNames<'a> {
/// Name of the messages pallet, deployed at the bridged chain and used to bridge with this
/// chain.
pub with_this_chain_messages_pallet_name: &'a str,
/// Name of the GRANDPA pallet, deployed at this chain and used to bridge with the bridged
/// chain.
pub with_bridged_chain_grandpa_pallet_name: &'a str,
/// Name of the messages pallet, deployed at this chain and used to bridge with the bridged
/// chain.
pub with_bridged_chain_messages_pallet_name: &'a str,
}
/// Tests that the bridge pallet names used in the `construct_runtime!()` macro call match the
/// constants from the chain primitives crates.
pub fn assert_bridge_pallet_names<B, R, GI, MI>(params: AssertBridgePalletNames)
where
B: MessageBridge,
R: pallet_bridge_grandpa::Config<GI> + pallet_bridge_messages::Config<MI>,
GI: 'static,
MI: 'static,
{
assert_eq!(B::BRIDGED_MESSAGES_PALLET_NAME, params.with_this_chain_messages_pallet_name);
assert_eq!(
pallet_bridge_grandpa::PalletOwner::<R, GI>::storage_value_final_key().to_vec(),
bp_runtime::storage_value_key(params.with_bridged_chain_grandpa_pallet_name, "PalletOwner",).0,
);
assert_eq!(
pallet_bridge_messages::PalletOwner::<R, MI>::storage_value_final_key().to_vec(),
bp_runtime::storage_value_key(
params.with_bridged_chain_messages_pallet_name,
"PalletOwner",
)
.0,
);
}
/// Parameters for asserting complete standard messages bridge.
#[derive(Debug)]
pub struct AssertCompleteBridgeConstants<'a> {
/// Parameters to assert this chain constants.
pub this_chain_constants: AssertChainConstants,
/// Parameters to assert messages pallet constants.
pub messages_pallet_constants: AssertBridgeMessagesPalletConstants,
/// Parameters to assert pallet names constants.
pub pallet_names: AssertBridgePalletNames<'a>,
}
/// All bridge-related constants tests for the complete standard messages bridge (i.e. with bridge
/// GRANDPA and messages pallets deployed).
pub fn assert_complete_bridge_constants<R, GI, MI, B>(params: AssertCompleteBridgeConstants)
where
R: frame_system::Config
+ pallet_bridge_grandpa::Config<GI>
+ pallet_bridge_messages::Config<MI>,
GI: 'static,
MI: 'static,
B: MessageBridge,
{
assert_chain_constants::<R>(params.this_chain_constants);
assert_bridge_grandpa_pallet_constants::<R, GI>();
assert_bridge_messages_pallet_constants::<R, MI>(params.messages_pallet_constants);
assert_bridge_pallet_names::<B, R, GI, MI>(params.pallet_names);
}
/// Check that the message lane weights are correct.
pub fn check_message_lane_weights<
C: Chain,
T: frame_system::Config + pallet_bridge_messages::Config<MessagesPalletInstance>,
MessagesPalletInstance: 'static,
>(
bridged_chain_extra_storage_proof_size: u32,
this_chain_max_unrewarded_relayers: MessageNonce,
this_chain_max_unconfirmed_messages: MessageNonce,
// whether the `RefundBridgedParachainMessages` extension is deployed at the runtime and is used
// for refunding transactions of this bridge
//
// in other words: pass `true` for all known production chains
runtime_includes_refund_extension: bool,
) {
type Weights<T, MI> = <T as pallet_bridge_messages::Config<MI>>::WeightInfo;
// check basic weight assumptions
pallet_bridge_messages::ensure_weights_are_correct::<Weights<T, MessagesPalletInstance>>();
// check that weights allow us to receive messages
let max_incoming_message_proof_size = bridged_chain_extra_storage_proof_size
.saturating_add(messages::target::maximal_incoming_message_size(C::max_extrinsic_size()));
pallet_bridge_messages::ensure_able_to_receive_message::<Weights<T, MessagesPalletInstance>>(
C::max_extrinsic_size(),
C::max_extrinsic_weight(),
max_incoming_message_proof_size,
messages::target::maximal_incoming_message_dispatch_weight(C::max_extrinsic_weight()),
);
// check that weights allow us to receive delivery confirmations
let max_incoming_inbound_lane_data_proof_size =
InboundLaneData::<()>::encoded_size_hint_u32(this_chain_max_unrewarded_relayers as _);
pallet_bridge_messages::ensure_able_to_receive_confirmation::<Weights<T, MessagesPalletInstance>>(
C::max_extrinsic_size(),
C::max_extrinsic_weight(),
max_incoming_inbound_lane_data_proof_size,
this_chain_max_unrewarded_relayers,
this_chain_max_unconfirmed_messages,
);
// check that extra weights of delivery/confirmation transactions include the weight
// of `RefundBridgedParachainMessages` operations. This signed extension assumes the worst case
// (i.e. slashing if the delivery transaction was invalid) and refunds some weight if the
// assumption was wrong (i.e. if we refunded instead of slashing). This check
// ensures the extension will not refund weight when it doesn't need to (i.e. if the pallet
// weights do not account for the weights of the refund extension).
if runtime_includes_refund_extension {
assert_ne!(
Weights::<T, MessagesPalletInstance>::receive_messages_proof_overhead_from_runtime(),
Weight::zero()
);
assert_ne!(
Weights::<T, MessagesPalletInstance>::receive_messages_delivery_proof_overhead_from_runtime(),
Weight::zero()
);
}
}
// Copyright (C) Parity Technologies (UK) Ltd.
// This file is part of Parity Bridges Common.
// Parity Bridges Common is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity Bridges Common is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity Bridges Common. If not, see <http://www.gnu.org/licenses/>.
//! Common types/functions that may be used by runtimes of all bridged chains.
#![warn(missing_docs)]
#![cfg_attr(not(feature = "std"), no_std)]
use bp_runtime::{Parachain, ParachainIdOf};
use sp_runtime::traits::{Get, PhantomData};
pub mod extensions;
pub mod messages;
pub mod messages_api;
pub mod messages_benchmarking;
pub mod messages_call_ext;
pub mod messages_generation;
pub mod messages_xcm_extension;
pub mod parachains_benchmarking;
mod mock;
#[cfg(feature = "integrity-test")]
pub mod integrity;
const LOG_TARGET_BRIDGE_DISPATCH: &str = "runtime::bridge-dispatch";
/// Trait identifying a bridged parachain. A relayer might be refunded for delivering messages
/// coming from this parachain.
pub trait RefundableParachainId {
/// The instance of the bridge parachains pallet.
type Instance: 'static;
/// The parachain Id.
type Id: Get<u32>;
}
/// Default implementation of `RefundableParachainId`.
pub struct DefaultRefundableParachainId<Instance, Id>(PhantomData<(Instance, Id)>);
impl<Instance, Id> RefundableParachainId for DefaultRefundableParachainId<Instance, Id>
where
Instance: 'static,
Id: Get<u32>,
{
type Instance = Instance;
type Id = Id;
}
/// Implementation of `RefundableParachainId` for `trait Parachain`.
pub struct RefundableParachain<Instance, Para>(PhantomData<(Instance, Para)>);
impl<Instance, Para> RefundableParachainId for RefundableParachain<Instance, Para>
where
Instance: 'static,
Para: Parachain,
{
type Instance = Instance;
type Id = ParachainIdOf<Para>;
}
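A toy, self-contained mirror of this adapter pattern (the real `Get` trait and the pallet `Instance` parameter are omitted for brevity; the parachain id is hypothetical): a zero-sized generic struct forwards a `Get`-style id provider through an associated type, just like `DefaultRefundableParachainId` does.

```rust
use std::marker::PhantomData;

// toy stand-in for `frame_support::traits::Get`
trait Get<T> {
    fn get() -> T;
}

// hypothetical parachain id provider, as a runtime's `parameter_types!` would produce
struct BridgeHubParaId;
impl Get<u32> for BridgeHubParaId {
    fn get() -> u32 {
        1002
    }
}

// mirrors `RefundableParachainId` / `DefaultRefundableParachainId`
trait RefundableParachainId {
    type Id: Get<u32>;
}

struct DefaultRefundableParachainId<Id>(PhantomData<Id>);
impl<Id: Get<u32>> RefundableParachainId for DefaultRefundableParachainId<Id> {
    type Id = Id;
}

fn para_id_of<P: RefundableParachainId>() -> u32 {
    P::Id::get()
}

fn main() {
    assert_eq!(para_id_of::<DefaultRefundableParachainId<BridgeHubParaId>>(), 1002);
}
```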
This diff is collapsed.
// Copyright (C) Parity Technologies (UK) Ltd.
// This file is part of Parity Bridges Common.
// Parity Bridges Common is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity Bridges Common is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity Bridges Common. If not, see <http://www.gnu.org/licenses/>.
//! Helpers for implementing various message-related runtime API methods.
use bp_messages::{
InboundMessageDetails, LaneId, MessageNonce, MessagePayload, OutboundMessageDetails,
};
use sp_std::vec::Vec;
/// Implementation of the `To*OutboundLaneApi::message_details`.
pub fn outbound_message_details<Runtime, MessagesPalletInstance>(
lane: LaneId,
begin: MessageNonce,
end: MessageNonce,
) -> Vec<OutboundMessageDetails>
where
Runtime: pallet_bridge_messages::Config<MessagesPalletInstance>,
MessagesPalletInstance: 'static,
{
(begin..=end)
.filter_map(|nonce| {
let message_data =
pallet_bridge_messages::Pallet::<Runtime, MessagesPalletInstance>::outbound_message_data(lane, nonce)?;
Some(OutboundMessageDetails {
nonce,
// dispatch message weight is always zero at the source chain, since we're paying for
// dispatch at the target chain
dispatch_weight: frame_support::weights::Weight::zero(),
size: message_data.len() as _,
})
})
.collect()
}
/// Implementation of the `To*InboundLaneApi::message_details`.
pub fn inbound_message_details<Runtime, MessagesPalletInstance>(
lane: LaneId,
messages: Vec<(MessagePayload, OutboundMessageDetails)>,
) -> Vec<InboundMessageDetails>
where
Runtime: pallet_bridge_messages::Config<MessagesPalletInstance>,
MessagesPalletInstance: 'static,
{
messages
.into_iter()
.map(|(payload, details)| {
pallet_bridge_messages::Pallet::<Runtime, MessagesPalletInstance>::inbound_message_data(
lane, payload, details,
)
})
.collect()
}
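A standalone sketch of the lookup pattern used by `outbound_message_details`, with a `HashMap` standing in for the pallet's outbound message storage and the details reduced to nonce and size:

```rust
use std::collections::HashMap;

// toy stand-in for `OutboundMessageDetails`
#[derive(Debug, PartialEq)]
struct Details {
    nonce: u64,
    size: u32,
}

// walk the inclusive nonce range and skip nonces that were already pruned,
// like the pallet's `outbound_message_data` returning `None`
fn message_details(stored: &HashMap<u64, Vec<u8>>, begin: u64, end: u64) -> Vec<Details> {
    (begin..=end)
        .filter_map(|nonce| {
            let payload = stored.get(&nonce)?;
            Some(Details { nonce, size: payload.len() as u32 })
        })
        .collect()
}

fn main() {
    let stored = HashMap::from([(2u64, vec![0u8; 10]), (3, vec![0; 20])]);
    // nonce 1 was pruned, so only nonces 2 and 3 are reported
    assert_eq!(
        message_details(&stored, 1, 3),
        vec![Details { nonce: 2, size: 10 }, Details { nonce: 3, size: 20 }]
    );
}
```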
// Copyright (C) Parity Technologies (UK) Ltd.
// This file is part of Parity Bridges Common.
// Parity Bridges Common is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity Bridges Common is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity Bridges Common. If not, see <http://www.gnu.org/licenses/>.
//! Everything required to run benchmarks of messages module, based on
//! `bridge_runtime_common::messages` implementation.
#![cfg(feature = "runtime-benchmarks")]
use crate::{
messages::{
source::FromBridgedChainMessagesDeliveryProof, target::FromBridgedChainMessagesProof,
AccountIdOf, BridgedChain, HashOf, MessageBridge, ThisChain,
},
messages_generation::{
encode_all_messages, encode_lane_data, prepare_message_delivery_storage_proof,
prepare_messages_storage_proof,
},
};
use bp_messages::MessagePayload;
use bp_polkadot_core::parachains::ParaHash;
use bp_runtime::{Chain, Parachain, StorageProofSize, UnderlyingChainOf};
use codec::Encode;
use frame_support::weights::Weight;
use pallet_bridge_messages::benchmarking::{MessageDeliveryProofParams, MessageProofParams};
use sp_runtime::traits::{Header, Zero};
use sp_std::prelude::*;
use xcm::latest::prelude::*;
/// Prepare inbound bridge message according to given message proof parameters.
fn prepare_inbound_message(
params: &MessageProofParams,
successful_dispatch_message_generator: impl Fn(usize) -> MessagePayload,
) -> MessagePayload {
// we only care about **this** message size when message proof needs to be `Minimal`
let expected_size = match params.size {
StorageProofSize::Minimal(size) => size as usize,
_ => 0,
};
// if we don't need a correct message, then we may just return some random blob
if !params.is_successful_dispatch_expected {
return vec![0u8; expected_size]
}
// else let's prepare successful message.
let msg = successful_dispatch_message_generator(expected_size);
assert!(
msg.len() >= expected_size,
"msg.len(): {} must be at least expected_size: {}",
msg.len(),
expected_size
);
msg
}
/// Prepare proof of messages for the `receive_messages_proof` call.
///
/// In addition to returning a valid messages proof, the environment is prepared to verify this
/// message proof.
///
/// This method is intended to be used when benchmarking a pallet linked to a chain that
/// uses GRANDPA finality. For parachains, please use the `prepare_message_proof_from_parachain`
/// function.
pub fn prepare_message_proof_from_grandpa_chain<R, FI, B>(
params: MessageProofParams,
message_generator: impl Fn(usize) -> MessagePayload,
) -> (FromBridgedChainMessagesProof<HashOf<BridgedChain<B>>>, Weight)
where
R: pallet_bridge_grandpa::Config<FI, BridgedChain = UnderlyingChainOf<BridgedChain<B>>>,
FI: 'static,
B: MessageBridge,
{
// prepare storage proof
let (state_root, storage_proof) = prepare_messages_storage_proof::<B>(
params.lane,
params.message_nonces.clone(),
params.outbound_lane_data.clone(),
params.size,
prepare_inbound_message(&params, message_generator),
encode_all_messages,
encode_lane_data,
);
// update runtime storage
let (_, bridged_header_hash) = insert_header_to_grandpa_pallet::<R, FI>(state_root);
(
FromBridgedChainMessagesProof {
bridged_header_hash,
storage_proof,
lane: params.lane,
nonces_start: *params.message_nonces.start(),
nonces_end: *params.message_nonces.end(),
},
Weight::MAX / 1000,
)
}
/// Prepare proof of messages for the `receive_messages_proof` call.
///
/// In addition to returning a valid messages proof, the environment is prepared to verify this
/// message proof.
///
/// This method is intended to be used when benchmarking a pallet linked to a chain that
/// uses parachain finality. For GRANDPA chains, please use the
/// `prepare_message_proof_from_grandpa_chain` function.
pub fn prepare_message_proof_from_parachain<R, PI, B>(
params: MessageProofParams,
message_generator: impl Fn(usize) -> MessagePayload,
) -> (FromBridgedChainMessagesProof<HashOf<BridgedChain<B>>>, Weight)
where
R: pallet_bridge_parachains::Config<PI>,
PI: 'static,
B: MessageBridge,
UnderlyingChainOf<BridgedChain<B>>: Chain<Hash = ParaHash> + Parachain,
{
// prepare storage proof
let (state_root, storage_proof) = prepare_messages_storage_proof::<B>(
params.lane,
params.message_nonces.clone(),
params.outbound_lane_data.clone(),
params.size,
prepare_inbound_message(&params, message_generator),
encode_all_messages,
encode_lane_data,
);
// update runtime storage
let (_, bridged_header_hash) =
insert_header_to_parachains_pallet::<R, PI, UnderlyingChainOf<BridgedChain<B>>>(state_root);
(
FromBridgedChainMessagesProof {
bridged_header_hash,
storage_proof,
lane: params.lane,
nonces_start: *params.message_nonces.start(),
nonces_end: *params.message_nonces.end(),
},
Weight::MAX / 1000,
)
}
/// Prepare proof of messages delivery for the `receive_messages_delivery_proof` call.
///
/// This method is intended to be used when benchmarking a pallet linked to a chain that
/// uses GRANDPA finality. For parachains, please use the
/// `prepare_message_delivery_proof_from_parachain` function.
pub fn prepare_message_delivery_proof_from_grandpa_chain<R, FI, B>(
params: MessageDeliveryProofParams<AccountIdOf<ThisChain<B>>>,
) -> FromBridgedChainMessagesDeliveryProof<HashOf<BridgedChain<B>>>
where
R: pallet_bridge_grandpa::Config<FI, BridgedChain = UnderlyingChainOf<BridgedChain<B>>>,
FI: 'static,
B: MessageBridge,
{
// prepare storage proof
let lane = params.lane;
let (state_root, storage_proof) = prepare_message_delivery_storage_proof::<B>(
params.lane,
params.inbound_lane_data,
params.size,
);
// update runtime storage
let (_, bridged_header_hash) = insert_header_to_grandpa_pallet::<R, FI>(state_root);
FromBridgedChainMessagesDeliveryProof {
bridged_header_hash: bridged_header_hash.into(),
storage_proof,
lane,
}
}
/// Prepare proof of messages delivery for the `receive_messages_delivery_proof` call.
///
/// This method is intended to be used when benchmarking a pallet linked to a chain that
/// uses parachain finality. For GRANDPA chains, please use the
/// `prepare_message_delivery_proof_from_grandpa_chain` function.
pub fn prepare_message_delivery_proof_from_parachain<R, PI, B>(
params: MessageDeliveryProofParams<AccountIdOf<ThisChain<B>>>,
) -> FromBridgedChainMessagesDeliveryProof<HashOf<BridgedChain<B>>>
where
R: pallet_bridge_parachains::Config<PI>,
PI: 'static,
B: MessageBridge,
UnderlyingChainOf<BridgedChain<B>>: Chain<Hash = ParaHash> + Parachain,
{
// prepare storage proof
let lane = params.lane;
let (state_root, storage_proof) = prepare_message_delivery_storage_proof::<B>(
params.lane,
params.inbound_lane_data,
params.size,
);
// update runtime storage
let (_, bridged_header_hash) =
insert_header_to_parachains_pallet::<R, PI, UnderlyingChainOf<BridgedChain<B>>>(state_root);
FromBridgedChainMessagesDeliveryProof {
bridged_header_hash: bridged_header_hash.into(),
storage_proof,
lane,
}
}
/// Insert header to the bridge GRANDPA pallet.
pub(crate) fn insert_header_to_grandpa_pallet<R, GI>(
state_root: bp_runtime::HashOf<R::BridgedChain>,
) -> (bp_runtime::BlockNumberOf<R::BridgedChain>, bp_runtime::HashOf<R::BridgedChain>)
where
R: pallet_bridge_grandpa::Config<GI>,
GI: 'static,
R::BridgedChain: bp_runtime::Chain,
{
let bridged_block_number = Zero::zero();
let bridged_header = bp_runtime::HeaderOf::<R::BridgedChain>::new(
bridged_block_number,
Default::default(),
state_root,
Default::default(),
Default::default(),
);
let bridged_header_hash = bridged_header.hash();
pallet_bridge_grandpa::initialize_for_benchmarks::<R, GI>(bridged_header);
(bridged_block_number, bridged_header_hash)
}
/// Insert header to the bridge parachains pallet.
pub(crate) fn insert_header_to_parachains_pallet<R, PI, PC>(
state_root: bp_runtime::HashOf<PC>,
) -> (bp_runtime::BlockNumberOf<PC>, bp_runtime::HashOf<PC>)
where
R: pallet_bridge_parachains::Config<PI>,
PI: 'static,
PC: Chain<Hash = ParaHash> + Parachain,
{
let bridged_block_number = Zero::zero();
let bridged_header = bp_runtime::HeaderOf::<PC>::new(
bridged_block_number,
Default::default(),
state_root,
Default::default(),
Default::default(),
);
let bridged_header_hash = bridged_header.hash();
pallet_bridge_parachains::initialize_for_benchmarks::<R, PI, PC>(bridged_header);
(bridged_block_number, bridged_header_hash)
}
/// Returns a callback which generates a `BridgeMessage` using the Polkadot XCM builder, based on
/// the given `expected_message_size`, for benchmarks.
pub fn generate_xcm_builder_bridge_message_sample(
destination: InteriorLocation,
) -> impl Fn(usize) -> MessagePayload {
move |expected_message_size| -> MessagePayload {
// For XCM bridge hubs, it is the message that
// will be pushed further to some XCM queue (XCMP/UMP)
let location = xcm::VersionedInteriorLocation::V4(destination.clone());
let location_encoded_size = location.encoded_size();
// we don't need to be super-precise with `expected_size` here
let xcm_size = expected_message_size.saturating_sub(location_encoded_size);
let xcm_data_size = xcm_size.saturating_sub(
// minus empty instruction size
Instruction::<()>::ExpectPallet {
index: 0,
name: vec![],
module_name: vec![],
crate_major: 0,
min_crate_minor: 0,
}
.encoded_size(),
);
log::trace!(
target: "runtime::bridge-benchmarks",
"generate_xcm_builder_bridge_message_sample with expected_message_size: {}, location_encoded_size: {}, xcm_size: {}, xcm_data_size: {}",
expected_message_size, location_encoded_size, xcm_size, xcm_data_size,
);
let xcm = xcm::VersionedXcm::<()>::V4(
vec![Instruction::<()>::ExpectPallet {
index: 0,
name: vec![42; xcm_data_size],
module_name: vec![],
crate_major: 0,
min_crate_minor: 0,
}]
.into(),
);
// this is the `BridgeMessage` from the polkadot xcm builder, but it has no constructor
// or public fields, so we just use a tuple
// (double encoding, because `.encode()` is called on the original XCM blob when it is pushed
// to the storage)
(location, xcm).encode().encode()
}
}
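The size bookkeeping in the callback above (subtract the encoded destination and the empty-instruction overhead from the requested size, then pad the instruction body) can be mirrored standalone; the overhead numbers below are toy values, the real ones come from `encoded_size()`:

```rust
/// Pad a message body so that the whole sample lands close to the requested size,
/// mirroring the arithmetic in `generate_xcm_builder_bridge_message_sample`.
fn padded_body_len(expected_message_size: usize, location_len: usize, empty_instruction_len: usize) -> usize {
    expected_message_size
        .saturating_sub(location_len)
        .saturating_sub(empty_instruction_len)
}

fn main() {
    // toy overheads: 20 bytes for the encoded destination, 8 for an empty instruction
    let body = padded_body_len(1024, 20, 8);
    assert_eq!(body, 996);
    // the generated payload then lands back at roughly the requested size
    assert_eq!(body + 20 + 8, 1024);
}
```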
This diff is collapsed.