- Mar 29, 2024
-
-
s0me0ne-unkn0wn authored
Removes transient code introduced to clean up offchain database after `im-online` pallet removal. Should be merged after #2290 has been enacted.
-
- Mar 27, 2024
-
-
Gonçalo Pestana authored
This PR adds a new extrinsic `Call::restore_ledger`, gated by the `StakingAdmin` origin, that restores a corrupted staking ledger. This extrinsic will be used to recover ledgers that were affected by the issue discussed in https://github.com/paritytech/polkadot-sdk/issues/3245. The extrinsic re-writes the storage items associated with a stash account provided as an input parameter. The data used to reset the ledger can be either i) fetched on-chain or ii) partially/totally set by the input parameters of the call. In order to use on-chain data to restore the staking locks, we need a way to read the current lock in the balances pallet. This PR adds an `InspectLockableCurrency` trait and implements it in pallet-balances. An alternative would be to tightly couple staking with pallet-balances, but that's inelegant (an example of how it would look in [this branch](https://github.com/paritytech/polkadot-sdk/tree/gpestana/ledger-badstate-clean_tightly)). More details on the types of corruption and the corresponding fixes: https://hackmd.io/DLb5jEYWSmmvqXC9ae4yRg?view#/ We verified that `Call::restore_ledger` does fix all currently corrupted ledgers in Polkadot and Kusama. You can verify it here: https://hackmd.io/v-XNrEoGRpe7APR-EZGhOA. **Changes introduced** - Adds the `Call::restore_ledger` extrinsic to recover a corrupted ledger; - Adds the trait `frame_support::traits::currency::InspectLockableCurrency` to allow external pallets to read current locks given an account and lock ID; - Implements `InspectLockableCurrency` in pallet-balances. - Adds staking locks try-runtime checks (https://github.com/paritytech/polkadot-sdk/issues/3751) **Todo** - [x] benchmark `Call::restore_ledger` - [x] thorough testing of all ledger recovery cases - [x] consider adding the staking locks try-runtime checks to this PR (https://github.com/paritytech/polkadot-sdk/issues/3751) - [x] simulate restoring all ledgers (https://hackmd.io/Dsa2tvhISNSs7zcqriTaxQ?view) in Polkadot and Kusama using chopsticks -- https://hackmd.io/v-XNrEoGRpe7APR-EZGhOA Related to https://github.com/paritytech/polkadot-sdk/issues/3245 Closes https://github.com/paritytech/polkadot-sdk/issues/3751 --------- Co-authored-by: command-bot <>
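The new trait added here is small; a minimal sketch of its shape, assuming the `LockableCurrency` bound and the single lock accessor described above (the exact definition lives in `frame_support::traits::currency`):

```rust
use frame_support::traits::{LockIdentifier, LockableCurrency};

/// A lockable currency whose individual locks can be inspected.
///
/// Sketch only: the real trait is `frame_support::traits::currency::InspectLockableCurrency`.
pub trait InspectLockableCurrency<AccountId>: LockableCurrency<AccountId> {
    /// Amount of funds locked under `id` for `who`.
    fn balance_locked(id: LockIdentifier, who: &AccountId) -> Self::Balance;
}
```

With something of this shape implemented by pallet-balances, pallet-staking can read the existing balances lock under its own `LockIdentifier` and use that value when rebuilding a corrupted ledger.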
-
Ermal Kaleci authored
This will make it possible to use remaining weight on idle for processing enqueued messages. More context here https://github.com/paritytech/polkadot-sdk/issues/3709 --------- Co-authored-by: Adrian Catangiu <[email protected]>
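A minimal sketch of what this enables, assuming the idle weight is forwarded to something implementing `frame_support::traits::ServiceQueues` (the concrete wiring in this PR differs; this is illustrative only):

```rust
use frame_support::{traits::ServiceQueues, weights::Weight};

/// Illustrative only: hand whatever weight is left at the end of the block
/// to the message queue so it can work through enqueued messages.
fn service_on_idle<Queues: ServiceQueues>(remaining_weight: Weight) -> Weight {
    // `service_queues` consumes up to the given limit and returns the
    // weight it actually used, which the caller reports back.
    Queues::service_queues(remaining_weight)
}
```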
-
Andrei Sandu authored
This works only for collators that implement the `collator_fn`, allowing the `collation-generation` subsystem to pull collations triggered on new heads. Also enables `request_v2::CollationFetchingResponse::CollationWithParentHeadData` for the test adder/undying collators. TODO: - [x] fix tests - [x] new tests - [x] PR doc --------- Signed-off-by: Andrei Sandu <[email protected]>
-
Francisco Aguirre authored
`execute` and `send` try to decode the XCM in the parameters before reaching the filter line. The new extrinsics decode only after the filter line. These should be used instead of the old ones. ## TODO - [x] Tests - [x] Generate weights - [x] Deprecation issue -> https://github.com/paritytech/polkadot-sdk/issues/3771 - [x] PRDoc - [x] Handle error in pallet-contracts This would make writing XCMs in PJS Apps more difficult, but here's the fix for that: https://github.com/polkadot-js/apps/pull/10350. Already deployed! https://polkadot.js.org/apps/#/utilities/xcm Supersedes https://github.com/paritytech/polkadot-sdk/pull/1798/ --------- Co-authored-by: PG Herveou <[email protected]> Co-authored-by: command-bot <> Co-authored-by: Adrian Catangiu <[email protected]>
-
- Mar 26, 2024
-
-
Pavel Orlov authored
The PR provides an API for obtaining: - the weight required to execute an XCM message, - a list of acceptable `AssetId`s for message execution payment, - the cost of the weight in the specified acceptable `AssetId`. It is meant to address an issue where one has to guess how much fee to pay for execution. Also, at the moment, a client has to guess which assets are acceptable for fee execution payment. See the related issue https://github.com/paritytech/polkadot-sdk/issues/690. With this API, a client is supposed to query the list of the supported asset IDs (in the XCM version format the client understands), weigh the XCM program the client wants to execute and convert the weight into one of the acceptable assets. Note that the client is supposed to know what program will be executed on what chains. However, having a small companion JS library for the pallet-xcm and xtokens should be enough to determine what XCM programs will be executed and where (since these pallets compose a known small set of programs). ```Rust pub trait XcmPaymentApi<Call> where Call: Codec, { /// Returns a list of acceptable payment assets. /// /// # Arguments /// /// * `xcm_version`: Version. fn query_acceptable_payment_assets(xcm_version: Version) -> Result<Vec<VersionedAssetId>, Error>; /// Returns a weight needed to execute an XCM. /// /// # Arguments /// /// * `message`: `VersionedXcm`. fn query_xcm_weight(message: VersionedXcm<Call>) -> Result<Weight, Error>; /// Converts a weight into a fee for the specified `AssetId`. /// /// # Arguments /// /// * `weight`: convertible `Weight`. /// * `asset`: `VersionedAssetId`. fn query_weight_to_asset_fee(weight: Weight, asset: VersionedAssetId) -> Result<u128, Error>; /// Get delivery fees for sending a specific `message` to a `destination`. /// These always come in a specific asset, defined by the chain. /// /// # Arguments /// * `message`: The message that'll be sent, necessary because most delivery fees are based on the /// size of the message. /// * `destination`: The destination to send the message to. Different destinations may use /// different senders that charge different fees. fn query_delivery_fees(destination: VersionedLocation, message: VersionedXcm<()>) -> Result<VersionedAssets, Error>; } ``` An [example](https://gist.github.com/PraetorP/4bc323ff85401abe253897ba990ec29d) of client-side code. --------- Co-authored-by: Francisco Aguirre <[email protected]> Co-authored-by: Adrian Catangiu <[email protected]> Co-authored-by: Daniel Shiposha <[email protected]>
-
Tsvetomir Dimitrov authored
This PR notifies the broker pallet of any parachain slot swaps performed on the relay chain. This is achieved by registering an `OnSwap` hook for the `coretime` pallet. The hook sends an XCM message to the broker chain and invokes a new extrinsic `swap_leases`, which updates the `Leases` storage item (which keeps the legacy parachain leases). I made two assumptions in this PR: 1. [`Leases`](https://github.com/paritytech/polkadot-sdk/blob/4987d798/substrate/frame/broker/src/lib.rs#L120) in the `broker` pallet and [`Leases`](https://github.com/paritytech/polkadot-sdk/blob/4987d798/polkadot/runtime/common/src/slots/mod.rs#L118) in the `slots` pallet are in sync. 2. The `swap_leases` extrinsic of the `broker` pallet can be triggered only by root or by an XCM message from the relay chain. If not, the extrinsic generates an error and does nothing. As a side effect of these changes, the `OnSwap` trait is moved from runtime/common/traits.rs to runtime/parachains; otherwise it is not accessible from the `broker` pallet. Closes https://github.com/paritytech/polkadot-sdk/issues/3552 TODOs: - [x] Weights - [x] Tests --------- Co-authored-by: command-bot <> Co-authored-by: eskimor <[email protected]> Co-authored-by: Bastian Köcher <[email protected]>
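For reference, the `OnSwap` hook being registered is a very small trait; a sketch of its shape (as described above, with the doc text paraphrased):

```rust
use polkadot_parachain_primitives::primitives::Id as ParaId;

/// Hook for when two parachains swap their slots/leases.
///
/// Sketch only: implementations are typically chained via a tuple, so the
/// coretime pallet's implementation can be appended to the existing ones.
pub trait OnSwap {
    /// Update any state needed to enact a logical swap of the two parachains.
    fn on_swap(one: ParaId, other: ParaId);
}
```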
-
Dcompoze authored
**Update:** Pushed additional changes based on the review comments. **This pull request fixes various spelling mistakes in this repository.** Most of the changes are contained in the first **3** commits: - `Fix spelling mistakes in comments and docs` - `Fix spelling mistakes in test names` - `Fix spelling mistakes in error messages, panic messages, logs and tracing` Other source code spelling mistakes are separated into individual commits for easier reviewing: - `Fix the spelling of 'authority'` - `Fix the spelling of 'REASONABLE_HEADERS_IN_JUSTIFICATION_ANCESTRY'` - `Fix the spelling of 'prev_enqueud_messages'` - `Fix the spelling of 'endpoint'` - `Fix the spelling of 'children'` - `Fix the spelling of 'PenpalSiblingSovereignAccount'` - `Fix the spelling of 'PenpalSudoAccount'` - `Fix the spelling of 'insufficient'` - `Fix the spelling of 'PalletXcmExtrinsicsBenchmark'` - `Fix the spelling of 'subtracted'` - `Fix the spelling of 'CandidatePendingAvailability'` - `Fix the spelling of 'exclusive'` - `Fix the spelling of 'until'` - `Fix the spelling of 'discriminator'` - `Fix the spelling of 'nonexistent'` - `Fix the spelling of 'subsystem'` - `Fix the spelling of 'indices'` - `Fix the spelling of 'committed'` - `Fix the spelling of 'topology'` - `Fix the spelling of 'response'` - `Fix the spelling of 'beneficiary'` - `Fix the spelling of 'formatted'` - `Fix the spelling of 'UNKNOWN_PROOF_REQUEST'` - `Fix the spelling of 'succeeded'` - `Fix the spelling of 'reopened'` - `Fix the spelling of 'proposer'` - `Fix the spelling of 'InstantiationNonce'` - `Fix the spelling of 'depositor'` - `Fix the spelling of 'expiration'` - `Fix the spelling of 'phantom'` - `Fix the spelling of 'AggregatedKeyValue'` - `Fix the spelling of 'randomness'` - `Fix the spelling of 'defendant'` - `Fix the spelling of 'AquaticMammal'` - `Fix the spelling of 'transactions'` - `Fix the spelling of 'PassingTracingSubscriber'` - `Fix the spelling of 'TxSignaturePayload'` - `Fix the spelling of 'versioning'` - `Fix the spelling of 'descendant'` - `Fix the spelling of 'overridden'` - `Fix the spelling of 'network'` Let me know if this structure is adequate. **Note:** The usage of the words `Merkle`, `Merkelize`, `Merklization`, `Merkelization`, `Merkleization`, is somewhat inconsistent but I left it as it is. ~~**Note:** In some places the term `Receival` is used to refer to message reception, IMO `Reception` is the correct word here, but I left it as it is.~~ ~~**Note:** In some places the term `Overlayed` is used instead of the more acceptable version `Overlaid` but I also left it as it is.~~ ~~**Note:** In some places the term `Applyable` is used instead of the correct version `Applicable` but I also left it as it is.~~ **Note:** Some usage of British vs American english e.g. `judgement` vs `judgment`, `initialise` vs `initialize`, `optimise` vs `optimize` etc. are both present in different places, but I suppose that's understandable given the number of contributors. ~~**Note:** There is a spelling mistake in `.github/CODEOWNERS` but it triggers errors in CI when I make changes to it, so I left it as it is.~~
-
- Mar 23, 2024
-
-
eskimor authored
Fixes https://github.com/paritytech/polkadot-sdk/issues/3762. --------- Co-authored-by: eskimor <[email protected]> Co-authored-by: Adrian Catangiu <[email protected]>
-
- Mar 21, 2024
-
-
Alejandro Martinez Andres authored
Based on issue [#2512](https://github.com/paritytech/polkadot-sdk/issues/2512), it seems that some ecosystem teams are using these networks to set up their staging environments and test certain use cases, some of them involving sending XCMs from the relay with origins not allowed in the current configuration. This change reverts the configuration of `SendXcmOrigin`. --------- Co-authored-by: Adrian Catangiu <[email protected]>
-
Alin Dima authored
Changes needed to implement the runtime part of elastic scaling: https://github.com/paritytech/polkadot-sdk/issues/3131, https://github.com/paritytech/polkadot-sdk/issues/3132, https://github.com/paritytech/polkadot-sdk/issues/3202 Also fixes https://github.com/paritytech/polkadot-sdk/issues/3675 TODOs: - [x] storage migration - [x] optimise process_candidates from O(N^2) - [x] drop backable candidates which form cycles - [x] fix unit tests - [x] add more unit tests - [x] check the runtime APIs which use the pending availability storage. We need to expose all of them, see https://github.com/paritytech/polkadot-sdk/issues/3576 - [x] optimise the candidate selection. we're currently picking randomly until we satisfy the weight limit. we need to be smart about not breaking candidate chains while being fair to all paras - https://github.com/paritytech/polkadot-sdk/pull/3573 Relies on the changes made in https://github.com/paritytech/polkadot-sdk/pull/3233 in terms of the inclusion policy and the candidate ordering --------- Signed-off-by: alindima <[email protected]> Co-authored-by: command-bot <> Co-authored-by: eskimor <[email protected]>
-
Egor_P authored
This PR backports: - node version bump - `spec_version` bump - reordering of the `prdocs` to the appropriate folder from the `1.9.0` release branch
-
gupnik authored
Step in https://github.com/paritytech/polkadot-sdk/issues/3688
-
- Mar 20, 2024
-
-
eskimor authored
We witnessed really poor performance on Rococo, where we ended up with 50 on-demand cores. This was because the full queue was processed for each core. With this change, full queue processing happens far less often (most of the time the complexity is O(1) or O(log(n))), and when it does happen, only for one core (in expectation). The spot price is also now updated before each order to ensure economic back pressure. TODO: - [x] Implement - [x] Basic tests - [x] Add more tests (see todos) - [x] Run benchmark to confirm better performance; first results suggest > 100x faster. - [x] Write migrations - [x] Bump scale-info version and remove patch in Cargo.toml - [x] Write PR docs: on-demand performance is improved, and a larger number of on-demand cores is no longer problematic. If needed, the max queue size can also be increased again (maybe not to 10k). Optional: Performance can be improved even more if we called `pop_assignment_for_core()` before calling `report_processed()` (avoiding needless affinity drops). The effect gets smaller the larger the claim queue, and I would only go for it if it does not add complexity to the scheduler. --------- Co-authored-by: eskimor <[email protected]> Co-authored-by: antonva <[email protected]> Co-authored-by: command-bot <> Co-authored-by: Anton Vilhelm Ásgeirsson <[email protected]> Co-authored-by: ordian <[email protected]>
-
Tsvetomir Dimitrov authored
The PR adds two things: 1. Runtime API exposing the whole claim queue 2. Consumes the API in `collation-generation` to fetch the next scheduled `ParaEntry` for an occupied core. Related to https://github.com/paritytech/polkadot-sdk/issues/1797
-
- Mar 19, 2024
-
-
Davide Galassi authored
Introduces the `CryptoBytes` type, defined as: ```rust pub struct CryptoBytes<const N: usize, Tag = ()>(pub [u8; N], PhantomData<fn() -> Tag>); ``` The type implements a bunch of methods and traits which are typically expected from a byte-array newtype (NOTE: some of the methods and trait implementations IMO are a bit redundant, but I decided to maintain them all to not change too much stuff in this PR). It also introduces two (generic) typical consumers of `CryptoBytes`: `PublicBytes` and `SignatureBytes`. ```rust pub struct PublicTag; pub type PublicBytes<const N: usize, CryptoTag> = CryptoBytes<N, (PublicTag, CryptoTag)>; pub struct SignatureTag; pub type SignatureBytes<const N: usize, CryptoTag> = CryptoBytes<N, (SignatureTag, CryptoTag)>; ``` Both of them use a tag to differentiate the two types at a higher level. Downstream specializations further specialize using a dedicated crypto tag. For example, in ECDSA: ```rust pub struct EcdsaTag; pub type Public = PublicBytes<PUBLIC_KEY_SERIALIZED_SIZE, EcdsaTag>; pub type Signature = SignatureBytes<SIGNATURE_SERIALIZED_SIZE, EcdsaTag>; ``` Overall we have cleaner and, most importantly, **consistent** code for all the types involved. All these details are opaque to the end user, who can use `Public` and `Signature` for the cryptos as before.
-
Matteo Muraca authored
Part of #3326 cc @Kianenigma @ggwpez @liamaharon polkadot address: 12poSUQPtcF1HUPQGY3zZu2P8emuW9YnsPduA4XG3oCEfJVp --------- Signed-off-by: Matteo Muraca <[email protected]>
-
Juan Ignacio Rios authored
Currently the xcm-executor returns an `Unimplemented` error if it receives any HRMP-related instruction. What I propose here, which is what we are currently doing in our forked executor at Polimec, is to introduce a trait, implemented by the executor, which will handle those instructions. This way, if parachains want to keep the default behavior, they just use `()` and it will return unimplemented, but they can also implement their own logic to establish HRMP channels with other chains in an automated fashion, without having to go through governance. Our implementation is mentioned in the [polkadot HRMP docs](https://arc.net/l/quote/hduiivbu), and it was suggested to us to submit a PR to add these changes to polkadot-sdk. --------- Co-authored-by: Branislav Kontur <[email protected]> Co-authored-by: command-bot <>
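A rough sketch of the pattern being described; the trait and method names here are illustrative, not the exact polkadot-sdk API, and the default `()` implementation is what preserves today's `Unimplemented` behaviour:

```rust
use xcm::latest::Error as XcmError;

/// Sketch only: a handler for the `HrmpNewChannelOpenRequest` XCM instruction.
pub trait HandleHrmpNewChannelOpenRequest {
    fn handle(sender: u32, max_message_size: u32, max_capacity: u32) -> Result<(), XcmError>;
}

/// Default behaviour: keep rejecting the instruction, as the executor does today.
impl HandleHrmpNewChannelOpenRequest for () {
    fn handle(_sender: u32, _max_message_size: u32, _max_capacity: u32) -> Result<(), XcmError> {
        Err(XcmError::Unimplemented)
    }
}
```

A runtime that wants automated channel management would plug its own implementation into the executor config instead of `()`.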
-
- Mar 17, 2024
-
-
Oliver Tale-Yazdi authored
The crowdloan account should burn all funds after a crowdloan is dissolved, to ensure that the account is reaped correctly. --------- Signed-off-by: Oliver Tale-Yazdi <[email protected]> Co-authored-by: Francisco Aguirre <[email protected]>
-
- Mar 15, 2024
-
-
Andrei Sandu authored
Extracted `BenchBuilder` enhancements used in https://github.com/paritytech/polkadot-sdk/pull/3644. They might still require some work to fully support all scenarios when disputing elastic scaling parachains, but they should be useful for writing elastic scaling runtime tests. --------- Signed-off-by: Andrei Sandu <[email protected]>
-
gupnik authored
Step in https://github.com/paritytech/polkadot-sdk/issues/171 This PR removes `as [disambiguation_path]` syntax from `derive_impl` usage across the polkadot-sdk as introduced in https://github.com/paritytech/polkadot-sdk/pull/3505
-
- Mar 14, 2024
-
-
Dino Pačandi authored
Make the Rococo & Westend XCM location converter `HashedDescription` more in line with the Polkadot/Kusama runtimes. Co-authored-by: Muharem <[email protected]>
-
- Mar 13, 2024
-
-
georgepisaltu authored
Revert "FRAME: Create `TransactionExtension` as a replacement for `SignedExtension` (#2280)" (#3665) This PR reverts #2280 which introduced `TransactionExtension` to replace `SignedExtension`. As a result of the discussion [here](https://github.com/paritytech/polkadot-sdk/pull/3623#issuecomment-1986789700), the changes will be reverted for now with plans to reintroduce the concept in the future. --------- Signed-off-by: georgepisaltu <[email protected]>
-
- Mar 11, 2024
-
-
Tsvetomir Dimitrov authored
Fixes some typos, outdated comments and test asserts. Also uses safe math and `defensive` for arithmetic operations.
-
- Mar 08, 2024
-
-
cuinix authored
Signed-off-by: cuinix <[email protected]> Co-authored-by: Bastian Köcher <[email protected]>
-
- Mar 07, 2024
-
-
Pablo Andrés Dorado Suárez authored
Closes #376 --------- Co-authored-by: command-bot <>
-
Muharem Ismailov authored
Deprecate the `xcm::body::TREASURER_INDEX` constant and use the standard Treasury variant from the `xcm::BodyId` type instead. To align with the production runtimes: https://github.com/polkadot-fellows/runtimes/pull/149
-
Andrei Sandu authored
Changes the way we perform the random selection of backed candidates when there isn't enough room for all of them. Instead of picking individual backed candidates `apply_weight` now operates on chains of candidates. This is fully backwards compatible and relies on the node side (provisioner/prospective parachains) doing the heavy lifting and providing the candidates in the order they form a chain. The same approach can be implemented for bitfields random selection once https://github.com/paritytech/polkadot-sdk/pull/3479 is merged. The approach taken in this PR aims for reduced additional complexity at the cost of being less fair wrt how many backed candidates from each chain are picked. It favors elastic scaling parachains vs parachains not using elastic scaling, but from my perspective it should be fine as this should happen under exceptional circumstances like dispute storms. Note: to make things more fair we can consider specializing `random_sel` such that it will try to pick candidates one by one in the order provided by the provisioner such that non elastic scaling parachains have the same chance of getting a candidate backed. --------- Signed-off-by: Andrei Sandu <[email protected]>
-
- Mar 04, 2024
-
-
Gavin Wood authored
Closes #2160 First part of [Extrinsic Horizon](https://github.com/paritytech/polkadot-sdk/issues/2415) Introduces a new trait `TransactionExtension` to replace `SignedExtension`. Introduce the idea of transactions which obey the runtime's extensions and have according Extension data (né Extra data) yet do not have hard-coded signatures. Deprecate the terminology of "Unsigned" when used for transactions/extrinsics owing to there now being "proper" unsigned transactions which obey the extension framework and "old-style" unsigned which do not. Instead we have __*General*__ for the former and __*Bare*__ for the latter. (Ultimately, the latter will be phased out as a type of transaction, and Bare will only be used for Inherents.) Types of extrinsic are now therefore: - Bare (no hardcoded signature, no Extra data; used to be known as "Unsigned") - Bare transactions (deprecated): Gossiped, validated with `ValidateUnsigned` (deprecated) and the `_bare_compat` bits of `TransactionExtension` (deprecated). - Inherents: Not gossiped, validated with `ProvideInherent`. - Extended (Extra data): Gossiped, validated via `TransactionExtension`. - Signed transactions (with a hardcoded signature). - General transactions (without a hardcoded signature). `TransactionExtension` differs from `SignedExtension` because: - A signature on the underlying transaction may validly not be present. - It may alter the origin during validation. - `pre_dispatch` is renamed to `prepare` and need not contain the checks present in `validate`. - `validate` and `prepare` is passed an `Origin` rather than a `AccountId`. - `validate` may pass arbitrary information into `prepare` via a new user-specifiable type `Val`. - `AdditionalSigned`/`additional_signed` is renamed to `Implicit`/`implicit`. It is encoded *for the entire transaction* and passed in to each extension as a new argument to `validate`. This facilitates the ability of extensions to acts as underlying crypto. There is a new `DispatchTransaction` trait which contains only default function impls and is impl'ed for any `TransactionExtension` impler. It provides several utility functions which reduce some of the tedium from using `TransactionExtension` (indeed, none of its regular functions should now need to be called directly). Three transaction version discriminator ("versions") are now permissible: - 0b000000100: Bare (used to be called "Unsigned"): contains Signature or Extra (extension data). After bare transactions are no longer supported, this will strictly identify an Inherents only. - 0b100000100: Old-school "Signed" Transaction: contains Signature and Extra (extension data). - 0b010000100: New-school "General" Transaction: contains Extra (extension data), but no Signature. For the New-school General Transaction, it becomes trivial for authors to publish extensions to the mechanism for authorizing an Origin, e.g. through new kinds of key-signing schemes, ZK proofs, pallet state, mutations over pre-authenticated origins or any combination of the above. ## Code Migration ### NOW: Getting it to build Wrap your `SignedExtension`s in `AsTransactionExtension`. This should be accompanied by renaming your aggregate type in line with the new terminology. E.g. Before: ```rust /// The SignedExtension to the basic transaction logic. pub type SignedExtra = ( /* snip */ MySpecialSignedExtension, ); /// Unchecked extrinsic type as expected by this runtime. 
pub type UncheckedExtrinsic = generic::UncheckedExtrinsic<Address, RuntimeCall, Signature, SignedExtra>; ``` After: ```rust /// The extension to the basic transaction logic. pub type TxExtension = ( /* snip */ AsTransactionExtension<MySpecialSignedExtension>, ); /// Unchecked extrinsic type as expected by this runtime. pub type UncheckedExtrinsic = generic::UncheckedExtrinsic<Address, RuntimeCall, Signature, TxExtension>; ``` You'll also need to alter any transaction building logic to add a `.into()` to make the conversion happen. E.g. Before: ```rust fn construct_extrinsic( /* snip */ ) -> UncheckedExtrinsic { let extra: SignedExtra = ( /* snip */ MySpecialSignedExtension::new(/* snip */), ); let payload = SignedPayload::new(call.clone(), extra.clone()).unwrap(); let signature = payload.using_encoded(|e| sender.sign(e)); UncheckedExtrinsic::new_signed( /* snip */ Signature::Sr25519(signature), extra, ) } ``` After: ```rust fn construct_extrinsic( /* snip */ ) -> UncheckedExtrinsic { let tx_ext: TxExtension = ( /* snip */ MySpecialSignedExtension::new(/* snip */).into(), ); let payload = SignedPayload::new(call.clone(), tx_ext.clone()).unwrap(); let signature = payload.using_encoded(|e| sender.sign(e)); UncheckedExtrinsic::new_signed( /* snip */ Signature::Sr25519(signature), tx_ext, ) } ``` ### SOON: Migrating to `TransactionExtension` Most `SignedExtension`s can be trivially converted to become a `TransactionExtension`. There are a few things to know. - Instead of a single trait like `SignedExtension`, you should now implement two traits individually: `TransactionExtensionBase` and `TransactionExtension`. - Weights are now a thing and must be provided via the new function `fn weight`. #### `TransactionExtensionBase` This trait takes care of anything which is not dependent on types specific to your runtime, most notably `Call`. - `AdditionalSigned`/`additional_signed` is renamed to `Implicit`/`implicit`. - Weight must be returned by implementing the `weight` function. If your extension is associated with a pallet, you'll probably want to do this via the pallet's existing benchmarking infrastructure. #### `TransactionExtension` Generally: - `pre_dispatch` is now `prepare` and you *should not reexecute the `validate` functionality in there*! - You don't get an account ID any more; you get an origin instead. If you need to presume an account ID, then you can use the trait function `AsSystemOriginSigner::as_system_origin_signer`. - You get an additional ticket, similar to `Pre`, called `Val`. This defines data which is passed from `validate` into `prepare`. This is important since you should not be duplicating logic from `validate` to `prepare`, you need a way of passing your working from the former into the latter. This is it. - This trait takes two type parameters: `Call` and `Context`. `Call` is the runtime call type which used to be an associated type; you can just move it to become a type parameter for your trait impl. `Context` is not currently used and you can safely implement over it as an unbounded type. - There's no `AccountId` associated type any more. Just remove it. Regarding `validate`: - You get three new parameters in `validate`; all can be ignored when migrating from `SignedExtension`. - `validate` returns a tuple on success; the second item in the tuple is the new ticket type `Self::Val` which gets passed in to `prepare`. 
If you use any information extracted during `validate` (off-chain and on-chain, non-mutating) in `prepare` (on-chain, mutating) then you can pass it through with this. For the tuple's last item, just return the `origin` argument. Regarding `prepare`: - This is renamed from `pre_dispatch`, but there is one change: - FUNCTIONALITY TO VALIDATE THE TRANSACTION NEED NOT BE DUPLICATED FROM `validate`!! - (This is different to `SignedExtension` which was required to run the same checks in `pre_dispatch` as in `validate`.) Regarding `post_dispatch`: - Since there are no unsigned transactions handled by `TransactionExtension`, `Pre` is always defined, so the first parameter is `Self::Pre` rather than `Option<Self::Pre>`. If you make use of `SignedExtension::validate_unsigned` or `SignedExtension::pre_dispatch_unsigned`, then: - Just use the regular versions of these functions instead. - Have your logic execute in the case that the `origin` is `None`. - Ensure your transaction creation logic creates a General Transaction rather than a Bare Transaction; this means having to include all `TransactionExtension`s' data. - `ValidateUnsigned` can still be used (for now) if you need to be able to construct transactions which contain none of the extension data, however these will be phased out in stage 2 of the Transactions Horizon, so you should consider moving to an extension-centric design. ## TODO - [x] Introduce `CheckSignature` impl of `TransactionExtension` to ensure it's possible to have crypto be done wholly in a `TransactionExtension`. - [x] Deprecate `SignedExtension` and move all uses in codebase to `TransactionExtension`. - [x] `ChargeTransactionPayment` - [x] `DummyExtension` - [x] `ChargeAssetTxPayment` (asset-tx-payment) - [x] `ChargeAssetTxPayment` (asset-conversion-tx-payment) - [x] `CheckWeight` - [x] `CheckTxVersion` - [x] `CheckSpecVersion` - [x] `CheckNonce` - [x] `CheckNonZeroSender` - [x] `CheckMortality` - [x] `CheckGenesis` - [x] `CheckOnlySudoAccount` - [x] `WatchDummy` - [x] `PrevalidateAttests` - [x] `GenericSignedExtension` - [x] `SignedExtension` (chain-polkadot-bulletin) - [x] `RefundSignedExtensionAdapter` - [x] Implement `fn weight` across the board. - [ ] Go through all pre-existing extensions which assume an account signer and explicitly handle the possibility of another kind of origin. - [x] `CheckNonce` should probably succeed in the case of a non-account origin. - [x] `CheckNonZeroSender` should succeed in the case of a non-account origin. - [x] `ChargeTransactionPayment` and family should fail in the case of a non-account origin. - [ ] - [x] Fix any broken tests. --------- Signed-off-by: georgepisaltu <[email protected]> Signed-off-by: Alexandru Vasile <[email protected]> Signed-off-by: dependabot[bot] <[email protected]> Signed-off-by: Oliver Tale-Yazdi <[email protected]> Signed-off-by: Alexandru Gheorghe <[email protected]> Signed-off-by: Andrei Sandu <[email protected]> Co-authored-by: Nikhil Gupta <[email protected]> Co-authored-by: georgepisaltu <[email protected]> Co-authored-by: Chevdor <[email protected]> Co-authored-by: Bastian Köcher <[email protected]> Co-authored-by: Maciej <[email protected]> Co-authored-by: Javier Viola <[email protected]> Co-authored-by: Marcin S. 
<[email protected]> Co-authored-by: Tsvetomir Dimitrov <[email protected]> Co-authored-by: Javier Bullrich <[email protected]> Co-authored-by: Koute <[email protected]> Co-authored-by: Adrian Catangiu <[email protected]> Co-authored-by: Vladimir Istyufeev <[email protected]> Co-authored-by: Ross Bulat <[email protected]> Co-authored-by: Gonçalo Pestana <[email protected]> Co-authored-by: Liam Aharon <[email protected]> Co-authored-by: Svyatoslav Nikolsky <[email protected]> Co-authored-by: André Silva <[email protected]> Co-authored-by: Oliver Tale-Yazdi <[email protected]> Co-authored-by: s0me0ne-unkn0wn <[email protected]> Co-authored-by: ordian <[email protected]> Co-authored-by: Sebastian Kunert <[email protected]> Co-authored-by: Aaro Altonen <[email protected]> Co-authored-by: Dmitry Markin <[email protected]> Co-authored-by: Alexandru Vasile <[email protected]> Co-authored-by: Alexander Samusev <[email protected]> Co-authored-by: Julian Eager <[email protected]> Co-authored-by: Michal Kucharczyk <[email protected]> Co-authored-by: Davide Galassi <[email protected]> Co-authored-by: Dónal Murray <[email protected]> Co-authored-by: yjh <[email protected]> Co-authored-by: Tom Mi <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: Will | Paradox | ParaNodes.io <[email protected]> Co-authored-by: Bastian Köcher <[email protected]> Co-authored-by: Joshy Orndorff <[email protected]> Co-authored-by: Joshy Orndorff <[email protected]> Co-authored-by: PG Herveou <[email protected]> Co-authored-by: Alexander Theißen <[email protected]> Co-authored-by: Kian Paimani <[email protected]> Co-authored-by: Juan Girini <[email protected]> Co-authored-by: bader y <[email protected]> Co-authored-by: James Wilson <[email protected]> Co-authored-by: joe petrowski <[email protected]> Co-authored-by: asynchronous rob <[email protected]> Co-authored-by: Parth <[email protected]> Co-authored-by: Andrew Jones <[email protected]> Co-authored-by: Jonathan Udd <[email protected]> Co-authored-by: Serban Iorga <[email protected]> Co-authored-by: Egor_P <[email protected]> Co-authored-by: Branislav Kontur <[email protected]> Co-authored-by: Evgeny Snitko <[email protected]> Co-authored-by: Just van Stam <[email protected]> Co-authored-by: Francisco Aguirre <[email protected]> Co-authored-by: gupnik <[email protected]> Co-authored-by: dzmitry-lahoda <[email protected]> Co-authored-by: zhiqiangxu <[email protected]> Co-authored-by: Nazar Mokrynskyi <[email protected]> Co-authored-by: Anwesh <[email protected]> Co-authored-by: cheme <[email protected]> Co-authored-by: Sam Johnson <[email protected]> Co-authored-by: kianenigma <[email protected]> Co-authored-by: Jegor Sidorenko <[email protected]> Co-authored-by: Muharem <[email protected]> Co-authored-by: joepetrowski <[email protected]> Co-authored-by: Alexandru Gheorghe <[email protected]> Co-authored-by: Gabriel Facco de Arruda <[email protected]> Co-authored-by: Squirrel <[email protected]> Co-authored-by: Andrei Sandu <[email protected]> Co-authored-by: georgepisaltu <[email protected]> Co-authored-by: command-bot <>
-
- Mar 01, 2024
-
-
Egor_P authored
This PR backports the node version and `spec_version` bumps to `1.8.0` from the latest release and reorders the prdoc files related to it.
-
Francisco Aguirre authored
If an XCM execution fails or ends with leftover assets, these will be trapped. In order to claim them, a custom XCM has to be executed, with the `ClaimAsset` instruction. However, arbitrary XCM execution is not allowed everywhere yet and XCM itself is still not easy enough to use for users out there with trapped assets. This new extrinsic in `pallet-xcm` will allow these users to easily claim their assets, without concerning themselves with writing arbitrary XCMs. Part of fixing https://github.com/paritytech/polkadot-sdk/issues/3495 --------- Co-authored-by: command-bot <> Co-authored-by: Adrian Catangiu <[email protected]>
-
- Feb 29, 2024
-
-
Andrei Sandu authored
Signed-off-by: Andrei Sandu <[email protected]>
-
Tsvetomir Dimitrov authored
This PR removes `AssignmentProviderConfig` and uses the corresponding on-demand parameters from `HostConfiguration` instead. Additionally, `scheduling_lookahead` and all coretime/on-demand related parameters are extracted into a separate struct - `SchedulerParams`. The most relevant commit from the PR is [this one](https://github.com/paritytech/polkadot-sdk/pull/3181/commits/830bc0f5). Fixes https://github.com/paritytech/polkadot-sdk/issues/2268 --------- Co-authored-by: command-bot <>
-
- Feb 28, 2024
-
-
Oliver Tale-Yazdi authored
This MR is the merge of https://github.com/paritytech/substrate/pull/14414 and https://github.com/paritytech/substrate/pull/14275. It implements [RFC#13](https://github.com/polkadot-fellows/RFCs/pull/13), closes https://github.com/paritytech/polkadot-sdk/issues/198. ----- This Merge request introduces three major topicals: 1. Multi-Block-Migrations 1. New pallet `poll` hook for periodic service work 1. Replacement hooks for `on_initialize` and `on_finalize` in cases where `poll` cannot be used and some more general changes to FRAME. The changes for each topical span over multiple crates. They are listed in topical order below. # 1.) Multi-Block-Migrations Multi-Block-Migrations are facilitated by creating `pallet_migrations` and configuring `System::Config::MultiBlockMigrator` to point to it. Executive picks this up and triggers one step of the migrations pallet per block. The chain is in lockdown mode for as long as an MBM is ongoing. Executive does this by polling `MultiBlockMigrator::ongoing` and not allowing any transaction in a block, if true. A MBM is defined through trait `SteppedMigration`. A condensed version looks like this: ```rust /// A migration that can proceed in multiple steps. pub trait SteppedMigration { type Cursor: FullCodec + MaxEncodedLen; type Identifier: FullCodec + MaxEncodedLen; fn id() -> Self::Identifier; fn max_steps() -> Option<u32>; fn step( cursor: Option<Self::Cursor>, meter: &mut WeightMeter, ) -> Result<Option<Self::Cursor>, SteppedMigrationError>; } ``` `pallet_migrations` can be configured with an aggregated tuple of these migrations. It then starts to migrate them one-by-one on the next runtime upgrade. Two things are important here: - 1. Doing another runtime upgrade while MBMs are ongoing is not a good idea and can lead to messed up state. - 2. **Pallet Migrations MUST BE CONFIGURED IN `System::Config`, otherwise it is not used.** The pallet supports an `UpgradeStatusHandler` that can be used to notify external logic of upgrade start/finish (for example to pause XCM dispatch). Error recovery is very limited in the case that a migration errors or times out (exceeds its `max_steps`). Currently the runtime dev can decide in `FailedMigrationHandler::failed` how to handle this. One follow-up would be to pair this with the `SafeMode` pallet and enact safe mode when an upgrade fails, to allow governance to rescue the chain. This is currently not possible, since governance is not `Mandatory`. ## Runtime API - `Core`: `initialize_block` now returns `ExtrinsicInclusionMode` to inform the Block Author whether they can push transactions. ### Integration Add it to your runtime implementation of `Core` and `BlockBuilder`: ```patch diff --git a/runtime/src/lib.rs b/runtime/src/lib.rs @@ impl_runtime_apis! { impl sp_block_builder::Core<Block> for Runtime { - fn initialize_block(header: &<Block as BlockT>::Header) { + fn initialize_block(header: &<Block as BlockT>::Header) -> RuntimeExecutiveMode { Executive::initialize_block(header) } ... } ``` # 2.) `poll` hook A new pallet hook is introduced: `poll`. `Poll` is intended to replace mostly all usage of `on_initialize`. The reason for this is that any code that can be called from `on_initialize` cannot be migrated through an MBM. Currently there is no way to statically check this; the implication is to use `on_initialize` as rarely as possible. Failing to do so can result in broken storage invariants. The implementation of the poll hook depends on the `Runtime API` changes that are explained above. # 3.) 
Hard-Deadline callbacks Three new callbacks are introduced and configured on `System::Config`: `PreInherents`, `PostInherents` and `PostTransactions`. These hooks are meant as replacement for `on_initialize` and `on_finalize` in cases where the code that runs cannot be moved to `poll`. The reason for this is to make the usage of HD-code (hard deadline) more explicit - again to prevent broken invariants by MBMs. # 4.) FRAME (general changes) ## `frame_system` pallet A new memorize storage item `InherentsApplied` is added. It is used by executive to track whether inherents have already been applied. Executive and can then execute the MBMs directly between inherents and transactions. The `Config` gets five new items: - `SingleBlockMigrations` this is the new way of configuring migrations that run in a single block. Previously they were defined as last generic argument of `Executive`. This shift is brings all central configuration about migrations closer into view of the developer (migrations that are configured in `Executive` will still work for now but is deprecated). - `MultiBlockMigrator` this can be configured to an engine that drives MBMs. One example would be the `pallet_migrations`. Note that this is only the engine; the exact MBMs are injected into the engine. - `PreInherents` a callback that executes after `on_initialize` but before inherents. - `PostInherents` a callback that executes after all inherents ran (including MBMs and `poll`). - `PostTransactions` in symmetry to `PreInherents`, this one is called before `on_finalize` but after all transactions. A sane default is to set all of these to `()`. Example diff suitable for any chain: ```patch @@ impl frame_system::Config for Test { type MaxConsumers = ConstU32<16>; + type SingleBlockMigrations = (); + type MultiBlockMigrator = (); + type PreInherents = (); + type PostInherents = (); + type PostTransactions = (); } ``` An overview of how the block execution now looks like is here. The same graph is also in the rust doc. <details><summary>Block Execution Flow</summary> <p> ![Screenshot 2023-12-04 at 19 11 29](https://github.com/paritytech/polkadot-sdk/assets/10380170/e88a80c4-ef11-4faa-8df5-8b33a724c054) </p> </details> ## Inherent Order Moved to https://github.com/paritytech/polkadot-sdk/pull/2154 --------------- ## TODO - [ ] Check that `try-runtime` still works - [ ] Ensure backwards compatibility with old Runtime APIs - [x] Consume weight correctly - [x] Cleanup --------- Signed-off-by: Oliver Tale-Yazdi <[email protected]> Co-authored-by: Liam Aharon <[email protected]> Co-authored-by: Juan Girini <[email protected]> Co-authored-by: command-bot <> Co-authored-by: Francisco Aguirre <[email protected]> Co-authored-by: Gavin Wood <[email protected]> Co-authored-by: Bastian Köcher <[email protected]>
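To make the stepped-migration flow concrete, here is a small hypothetical migration written against the condensed `SteppedMigration` trait shown above (the `migrate_single_key` helper and all names are illustrative, and the real trait in `frame_support` carries a bit more detail than this condensed form):

```rust
use frame_support::weights::{Weight, WeightMeter};

/// Illustrative only: migrate one storage key per weight-metered chunk,
/// remembering the last processed key in the cursor.
pub struct MyPalletMigration;

impl SteppedMigration for MyPalletMigration {
    // Whatever is needed to resume work; here, the next key to migrate.
    type Cursor = u32;
    type Identifier = [u8; 16];

    fn id() -> Self::Identifier {
        *b"my-pallet-v1->v2"
    }

    fn max_steps() -> Option<u32> {
        // Give up (and let `FailedMigrationHandler::failed` decide what to
        // do) if the migration has not finished after this many steps.
        Some(1_000)
    }

    fn step(
        cursor: Option<Self::Cursor>,
        meter: &mut WeightMeter,
    ) -> Result<Option<Self::Cursor>, SteppedMigrationError> {
        // Assume each key costs a fixed weight to migrate.
        let per_key = Weight::from_parts(10_000, 0);
        let mut next = cursor.unwrap_or(0);
        while meter.try_consume(per_key).is_ok() {
            match migrate_single_key(next) {
                Some(migrated) => next = migrated + 1,
                // Nothing left to do: `None` signals completion.
                None => return Ok(None),
            }
        }
        // Out of weight for this block; resume from `next` on the next one.
        Ok(Some(next))
    }
}
```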
-
Liam Aharon authored
Closes https://github.com/paritytech/polkadot-sdk-docs/issues/55 - Changes 'current storage version' terminology to less ambiguous 'in-code storage version' (suggestion by @ggwpez ) - Adds a new example pallet `pallet-example-single-block-migrations` - Adds a new reference doc to replace https://docs.substrate.io/maintain/runtime-upgrades/ (temporarily living in the pallet while we wait for developer hub PR to merge) - Adds documentation for the `storage_alias` macro - Improves `trait Hooks` docs - Improves `trait GetStorageVersion` docs - Update the suggested patterns for using `VersionedMigration`, so that version unchecked migrations are never exported - Prevents accidental usage of version unchecked migrations in runtimes https://github.com/paritytech/substrate/pull/14421#discussion_r1255467895 - Unversioned migration code is kept inside `mod version_unchecked`, versioned code is kept in `pub mod versioned` - It is necessary to use modules to limit visibility because the inner migration must be `pub`. See https://github.com/rust-lang/rust/issues/30905 and https://internals.rust-lang.org/t/lang-team-minutes-private-in-public-rules/4504/40 for more. ### todo - [x] move to reference docs to proper place within sdk-docs (now that https://github.com/paritytech/polkadot-sdk/pull/2102 is merged) - [x] prdoc --------- Co-authored-by: Kian Paimani <[email protected]> Co-authored-by: Juan <[email protected]> Co-authored-by: Oliver Tale-Yazdi <[email protected]> Co-authored-by: command-bot <> Co-authored-by: gupnik <[email protected]>
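For the `VersionedMigration` pattern mentioned above, a sketch of the suggested module layout (the type names and the `crate::Config` / `crate::Pallet` references are illustrative; the point is that only the version-checked wrapper is exported):

```rust
// Unversioned migration logic stays private to this module, so runtimes can
// only ever reference the version-checked wrapper exported below.
mod version_unchecked {
    use frame_support::{traits::OnRuntimeUpgrade, weights::Weight};

    pub struct MigrateV0ToV1<T>(core::marker::PhantomData<T>);

    impl<T: crate::Config> OnRuntimeUpgrade for MigrateV0ToV1<T> {
        fn on_runtime_upgrade() -> Weight {
            // ... translate storage from the V0 layout to the V1 layout ...
            Weight::zero()
        }
    }
}

pub mod versioned {
    /// The only exported item: `VersionedMigration` checks the on-chain
    /// storage version, runs the inner migration once, then bumps it.
    pub type MigrateV0ToV1<T> = frame_support::migrations::VersionedMigration<
        0, // run only if the on-chain storage version is 0
        1, // bump the storage version to 1 afterwards
        super::version_unchecked::MigrateV0ToV1<T>,
        crate::Pallet<T>,
        <T as frame_system::Config>::DbWeight,
    >;
}
```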
-
- Feb 26, 2024
-
-
brenzi authored
With the deprecation of Rococo, Encointer needs a new staging environment. Paseo will be Polkadot-focused and Westend Kusama-focused, so we propose to use Westend.
-
Branislav Kontur authored
## Problem During the bumping of the `polkadot-fellows` repository to `[email protected]`, I encountered a situation where the benchmarks `teleport_assets` and `reserve_transfer_assets` in AssetHubKusama started to fail. This issue arose due to a decreased ED balance for AssetHubs introduced [here](https://github.com/polkadot-fellows/runtimes/pull/158/files#diff-80668ff8e793b64f36a9a3ec512df5cbca4ad448c157a5d81abda1b15f35f1daR213), and also because of a [missing CI pipeline](https://github.com/polkadot-fellows/runtimes/issues/197) to check the benchmarks, which went unnoticed. These benchmarks expect the `caller` to have enough: 1. balance to transfer (BTT) 2. balance for paying delivery (BFPD). So the initial balance was calculated as `ED * 100`, which seems reasonable: ``` const ED_MULTIPLIER: u32 = 100; let balance = existential_deposit.saturating_mul(ED_MULTIPLIER.into()); ``` The problem arises when the price for delivery is 100 times higher than the existential deposit. In other words, when `ED * 100` does not cover `BTT` + `BFPD`. I checked AHR/AHW/AHK/AHP and only AssetHubKusama has this problem: ``` ED: 3333333 calculated price to parent delivery: 1031666634 (from xcm logs from the benchmark) --- 3333333 * 100 - BTT(3333333) - BFPD(1031666634) = −701666667 ``` which results in the error: ``` 2024-02-23 09:19:42 Unable to charge fee with error Module(ModuleError { index: 31, error: [17, 0, 0, 0], message: Some("FeesNotMet") }) Error: Input("Benchmark pallet_xcm::reserve_transfer_assets failed: FeesNotMet") ``` ## Solution The benchmarks `teleport_assets` and `reserve_transfer_assets` were fixed by removing `ED * 100` and replacing it with `DeliveryHelper` logic, which calculates the (almost real) price for delivery and sets it along with the existential deposit as the initial balance for the account used in the benchmark. ## TODO - [ ] patch for 1.6 - https://github.com/paritytech/polkadot-sdk/pull/3466 - [ ] patch for 1.7 - https://github.com/paritytech/polkadot-sdk/pull/3465 - [ ] patch for 1.8 - TODO: PR --------- Co-authored-by: Francisco Aguirre <[email protected]>
-
eskimor authored
from Westend and Rococo. --------- Co-authored-by: eskimor <[email protected]> Co-authored-by: command-bot <>
-
- Feb 23, 2024
-
-
Andrei Sandu authored
Fixes https://github.com/paritytech/polkadot-sdk/issues/3144 Builds on top of https://github.com/paritytech/polkadot-sdk/pull/3229 ### Summary Some preparations for the runtime to support elastic scaling, guarded by the node features config bit `FeatureIndex::ElasticScalingMVP`. This PR introduces a per-candidate `CoreIndex`, but does it in a hacky way to avoid changing the `CandidateCommitments` and `CandidateReceipts` primitives and the networking protocols. #### Including `CoreIndex` in `BackedCandidate` If the `ElasticScalingMVP` feature bit is enabled, then `BackedCandidate::validator_indices` is extended by 8 bits. The value stored in these bits represents the assumed core index for the candidate. It is a temporary solution which works by creating a mapping from `BackedCandidate` to `CoreIndex`, assuming the `CoreIndex` can be discovered by checking which validator group the validator that signed the statement is in. TODO: - [x] fix tests - [x] add new tests - [x] Bump runtime API for Kusama, so we have that node features thing! -> https://github.com/polkadot-fellows/runtimes/pull/194 --------- Signed-off-by: Andrei Sandu <[email protected]> Signed-off-by: alindima <[email protected]> Co-authored-by: alindima <[email protected]>
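A hypothetical sketch of the decoding side of that trick, to make the bit layout concrete; the function name, the `bitvec` plumbing, and the bit ordering are assumptions for illustration, not the exact polkadot-sdk code:

```rust
use bitvec::{order::Lsb0, slice::BitSlice};

/// Illustrative only: when the elastic-scaling feature bit is enabled, the
/// trailing 8 bits of `validator_indices` carry the candidate's assumed core index.
fn assumed_core_index(validator_indices: &BitSlice<u8, Lsb0>, elastic_scaling_enabled: bool) -> Option<u32> {
    if !elastic_scaling_enabled || validator_indices.len() < 8 {
        return None;
    }
    // Split off the trailing 8 bits; the front part remains the usual bitfield
    // of validators within the backing group.
    let (_validators, core_bits) = validator_indices.split_at(validator_indices.len() - 8);
    let mut core_index: u32 = 0;
    for (i, bit) in core_bits.iter().enumerate() {
        if *bit {
            core_index |= 1 << i;
        }
    }
    Some(core_index)
}
```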
-
Ignacio Palacios authored
The `fee` should be calculated with the reanchored asset, otherwise it could lead to a failure where the set-aside fee ends up not being enough. @acatangiu --------- Co-authored-by: Adrian Catangiu <[email protected]>
-