- Feb 28, 2024
-
-
Oliver Tale-Yazdi authored
This MR is the merge of https://github.com/paritytech/substrate/pull/14414 and https://github.com/paritytech/substrate/pull/14275. It implements [RFC#13](https://github.com/polkadot-fellows/RFCs/pull/13) and closes https://github.com/paritytech/polkadot-sdk/issues/198.

-----

This merge request introduces three major topics:

1. Multi-Block-Migrations
2. A new pallet `poll` hook for periodic service work
3. Replacement hooks for `on_initialize` and `on_finalize` in cases where `poll` cannot be used, plus some more general changes to FRAME.

The changes for each topic span multiple crates. They are listed in topical order below.

# 1.) Multi-Block-Migrations

Multi-Block-Migrations (MBMs) are facilitated by creating `pallet_migrations` and configuring `System::Config::MultiBlockMigrator` to point to it. Executive picks this up and triggers one step of the migrations pallet per block. The chain is in lockdown mode for as long as an MBM is ongoing: Executive polls `MultiBlockMigrator::ongoing` and does not allow any transaction in a block while that returns true.

An MBM is defined through the trait `SteppedMigration`. A condensed version looks like this:

```rust
/// A migration that can proceed in multiple steps.
pub trait SteppedMigration {
	type Cursor: FullCodec + MaxEncodedLen;
	type Identifier: FullCodec + MaxEncodedLen;

	fn id() -> Self::Identifier;

	fn max_steps() -> Option<u32>;

	fn step(
		cursor: Option<Self::Cursor>,
		meter: &mut WeightMeter,
	) -> Result<Option<Self::Cursor>, SteppedMigrationError>;
}
```

`pallet_migrations` can be configured with an aggregated tuple of these migrations. It then starts to migrate them one-by-one on the next runtime upgrade. Two things are important here:

1. Doing another runtime upgrade while MBMs are ongoing is not a good idea and can lead to messed up state.
2. **Pallet Migrations MUST BE CONFIGURED IN `System::Config`, otherwise it is not used.**

The pallet supports an `UpgradeStatusHandler` that can be used to notify external logic of upgrade start/finish (for example to pause XCM dispatch).

Error recovery is very limited in the case that a migration errors or times out (exceeds its `max_steps`). Currently the runtime dev can decide in `FailedMigrationHandler::failed` how to handle this. One follow-up would be to pair this with the `SafeMode` pallet and enact safe mode when an upgrade fails, to allow governance to rescue the chain. This is currently not possible, since governance is not `Mandatory`.

## Runtime API

- `Core`: `initialize_block` now returns `ExtrinsicInclusionMode` to inform the block author whether they can push transactions.

### Integration

Add it to your runtime implementation of `Core` and `BlockBuilder`:

```patch
diff --git a/runtime/src/lib.rs b/runtime/src/lib.rs
@@ impl_runtime_apis! {
 	impl sp_api::Core<Block> for Runtime {
-		fn initialize_block(header: &<Block as BlockT>::Header) {
+		fn initialize_block(header: &<Block as BlockT>::Header) -> ExtrinsicInclusionMode {
 			Executive::initialize_block(header)
 		}
 		...
```

# 2.) `poll` hook

A new pallet hook is introduced: `poll`. `Poll` is intended to replace almost all usage of `on_initialize`. The reason for this is that any code that can be called from `on_initialize` cannot be migrated through an MBM. Currently there is no way to statically check this; the implication is to use `on_initialize` as rarely as possible. Failing to do so can result in broken storage invariants. The implementation of the `poll` hook depends on the Runtime API changes explained above.

# 3.) Hard-Deadline callbacks

Three new callbacks are introduced and configured on `System::Config`: `PreInherents`, `PostInherents` and `PostTransactions`. These hooks are meant as replacements for `on_initialize` and `on_finalize` in cases where the code that runs cannot be moved to `poll`. The reason for this is to make the usage of hard-deadline (HD) code more explicit - again to prevent broken invariants by MBMs.

# 4.) FRAME (general changes)

## `frame_system` pallet

A new storage item `InherentsApplied` is added. It is used by Executive to track whether inherents have already been applied. Executive can then execute the MBMs directly between inherents and transactions.

The `Config` gets five new items:

- `SingleBlockMigrations`: the new way of configuring migrations that run in a single block. Previously they were defined as the last generic argument of `Executive`. This shift brings all central configuration about migrations closer into view of the developer (migrations configured in `Executive` will still work for now but are deprecated).
- `MultiBlockMigrator`: can be configured to an engine that drives MBMs. One example would be `pallet_migrations`. Note that this is only the engine; the exact MBMs are injected into the engine.
- `PreInherents`: a callback that executes after `on_initialize` but before inherents.
- `PostInherents`: a callback that executes after all inherents ran (including MBMs and `poll`).
- `PostTransactions`: in symmetry to `PreInherents`, this one is called before `on_finalize` but after all transactions.

A sane default is to set all of these to `()`. Example diff suitable for any chain:

```patch
@@ impl frame_system::Config for Test {
 	type MaxConsumers = ConstU32<16>;
+	type SingleBlockMigrations = ();
+	type MultiBlockMigrator = ();
+	type PreInherents = ();
+	type PostInherents = ();
+	type PostTransactions = ();
 }
```

An overview of how block execution now looks is here. The same graph is also in the Rust docs.

<details><summary>Block Execution Flow</summary>
<p>

![Screenshot 2023-12-04 at 19 11 29](https://github.com/paritytech/polkadot-sdk/assets/10380170/e88a80c4-ef11-4faa-8df5-8b33a724c054)

</p>
</details>

## Inherent Order

Moved to https://github.com/paritytech/polkadot-sdk/pull/2154

---------------

## TODO

- [ ] Check that `try-runtime` still works
- [ ] Ensure backwards compatibility with old Runtime APIs
- [x] Consume weight correctly
- [x] Cleanup

---------

Signed-off-by: Oliver Tale-Yazdi <[email protected]>
Co-authored-by: Liam Aharon <[email protected]>
Co-authored-by: Juan Girini <[email protected]>
Co-authored-by: command-bot <>
Co-authored-by: Francisco Aguirre <[email protected]>
Co-authored-by: Gavin Wood <[email protected]>
Co-authored-by: Bastian Köcher <[email protected]>
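To make the shape of such a migration more concrete, here is a self-contained toy sketch: it re-declares the condensed `SteppedMigration` trait from the entry above with simplified bounds and implements it for a hypothetical `RenumberItems` migration. This is an illustration only, not the actual `frame_support` API (the real trait uses `FullCodec`/`MaxEncodedLen` bounds and FRAME's `WeightMeter` and `SteppedMigrationError`).

```rust
/// Stand-in for FRAME's `WeightMeter`, counting abstract "steps" of weight.
pub struct WeightMeter {
	remaining_steps: u32,
}

impl WeightMeter {
	/// Try to consume the weight of one step; `false` means we are out of weight.
	pub fn try_consume_one(&mut self) -> bool {
		if self.remaining_steps == 0 {
			return false;
		}
		self.remaining_steps -= 1;
		true
	}
}

pub enum SteppedMigrationError {
	InsufficientWeight,
}

/// Local copy of the condensed trait from the entry above (bounds simplified).
pub trait SteppedMigration {
	type Cursor;
	type Identifier;

	fn id() -> Self::Identifier;
	fn max_steps() -> Option<u32>;
	fn step(
		cursor: Option<Self::Cursor>,
		meter: &mut WeightMeter,
	) -> Result<Option<Self::Cursor>, SteppedMigrationError>;
}

/// Hypothetical migration that lazily translates 1000 storage items, one per step.
pub struct RenumberItems;

impl SteppedMigration for RenumberItems {
	// The cursor remembers the index of the next item to translate.
	type Cursor = u32;
	type Identifier = [u8; 8];

	fn id() -> Self::Identifier {
		*b"renum_v1"
	}

	// Treat the migration as failed (handled by `FailedMigrationHandler::failed`)
	// if it has not finished after 2000 steps.
	fn max_steps() -> Option<u32> {
		Some(2_000)
	}

	fn step(
		cursor: Option<Self::Cursor>,
		meter: &mut WeightMeter,
	) -> Result<Option<Self::Cursor>, SteppedMigrationError> {
		if !meter.try_consume_one() {
			return Err(SteppedMigrationError::InsufficientWeight);
		}
		let index = cursor.unwrap_or(0);
		// ... translate storage item `index` here ...
		if index + 1 >= 1_000 {
			Ok(None) // Returning `None` signals that the migration is complete.
		} else {
			Ok(Some(index + 1))
		}
	}
}
```

In a real runtime, such a migration would then be listed in the aggregated tuple that `pallet_migrations` is configured with, so that one `step` runs per block until the cursor returns `None`.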
-
Clara van Staden authored
While adding runtime tests to https://github.com/polkadot-fellows/runtimes/pull/130, I noticed the Ethereum chain ID was hardcoded. For Kusama + Polkadot, the Ethereum chain ID should be 1 (Mainnet), whereas on Rococo it is 11155111 (Sepolia). This PR also updates the Snowbridge crate versions to the current versions on crates.io. --------- Co-authored-by: claravanstaden <Cats 4 life!>
-
maksimryndin authored
resolve https://github.com/paritytech/polkadot-sdk/issues/3139

- [x] use a distinguishable error for `execute_artifact`
- [x] remove the artifact in case of a `RuntimeConstruction` error during the execution
- [x] augment `validate_candidate_with_retry` of `ValidationBackend` with the case of a retriable `RuntimeConstruction` error during the execution
- [x] update the book (https://paritytech.github.io/polkadot-sdk/book/node/utility/pvf-host-and-workers.html#retrying-execution-requests)
- [x] add a test
- [x] run zombienet tests

---------

Co-authored-by: s0me0ne-unkn0wn <[email protected]>
-
Kian Paimani authored
- deprecation companion: https://github.com/substrate-developer-hub/substrate-docs/pull/2136
- inspired by https://substrate.stackexchange.com/questions/11058/how-can-i-create-ocw-that-wont-activates-every-block-but-will-activates-only-w/11060#11060

---------

Co-authored-by: Sergej Sakac <[email protected]>
-
Oliver Tale-Yazdi authored
Changes:
- Add an optional `bump` field to the crates in a prdoc.
- Explain the cargo semver interpretation for <1 versions in the release doc.

---------

Signed-off-by: Oliver Tale-Yazdi <[email protected]>
-
Alexandru Vasile authored
This PR adds tests for the `transaction_broadcast` method. The testing needs to coordinate the following components:

- The `TestApi` marks transactions as invalid and implements `ChainApi::validate_transaction` - this is what dictates whether a transaction is valid or not and is called from within the `BasicPool`
- The `BasicPool` maintains the transactions and implements `submit_and_watch`, needed by the tx broadcast to submit the transaction
- The status of the transaction pool is exposed by mocking the `BasicPool`
- The `ChainHeadMockClient` mocks the `BlockchainEvents::import_notification_stream`, needed by the tx broadcast to know to which blocks the transaction is submitted

The following changes have been added to the substrate testing to accommodate this:

- `TestApi` gets `remove_invalid`, counterpart to `add_invalid`, to ensure an invalid transaction can become valid again; as well as a priority setter for extrinsics
- The `BasicPool` test constructor is extended with options for the `PoolRotator` - this mechanism is needed because transactions are banned for 30 mins (default) after they are declared invalid - testing bypasses this by providing a `Duration::ZERO`

### Testing Scenarios

- Capture the status of the transaction as it is normally broadcast
- `transaction_stop` is valid while the transaction is in progress
- A future transaction is handled when the dependencies are completed
- Try to resubmit the transaction at a later block (currently invalid)
- An invalid transaction status is propagated; the transaction is marked as temporarily banned; then the ban expires and the transaction is resubmitted

This builds on top of: https://github.com/paritytech/polkadot-sdk/pull/3079

Part of: https://github.com/paritytech/polkadot-sdk/issues/3084

cc @paritytech/subxt-team

---------

Signed-off-by: Alexandru Vasile <[email protected]>
Co-authored-by: James Wilson <[email protected]>
-
Liam Aharon authored
Closes https://github.com/paritytech/polkadot-sdk-docs/issues/55

- Changes 'current storage version' terminology to the less ambiguous 'in-code storage version' (suggestion by @ggwpez)
- Adds a new example pallet `pallet-example-single-block-migrations`
- Adds a new reference doc to replace https://docs.substrate.io/maintain/runtime-upgrades/ (temporarily living in the pallet while we wait for the developer hub PR to merge)
- Adds documentation for the `storage_alias` macro
- Improves `trait Hooks` docs
- Improves `trait GetStorageVersion` docs
- Updates the suggested patterns for using `VersionedMigration`, so that version-unchecked migrations are never exported
  - Prevents accidental usage of version-unchecked migrations in runtimes (https://github.com/paritytech/substrate/pull/14421#discussion_r1255467895)
  - Unversioned migration code is kept inside `mod version_unchecked`, versioned code is kept in `pub mod versioned`
  - It is necessary to use modules to limit visibility because the inner migration must be `pub`. See https://github.com/rust-lang/rust/issues/30905 and https://internals.rust-lang.org/t/lang-team-minutes-private-in-public-rules/4504/40 for more. A sketch of this layout follows below.

### todo

- [x] move the reference docs to their proper place within sdk-docs (now that https://github.com/paritytech/polkadot-sdk/pull/2102 is merged)
- [x] prdoc

---------

Co-authored-by: Kian Paimani <[email protected]>
Co-authored-by: Juan <[email protected]>
Co-authored-by: Oliver Tale-Yazdi <[email protected]>
Co-authored-by: command-bot <>
Co-authored-by: gupnik <[email protected]>
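As a rough sketch of the visibility pattern mentioned above (module and type names here are illustrative placeholders, not the exact items from the example pallet):

```rust
// Illustrative layout only: the inner migration must be `pub` so the wrapper
// can reference it, but the surrounding module is private, so runtimes can
// only ever import the version-checked item from `versioned`.
mod version_unchecked {
	/// Raw migration logic without any storage-version guard (placeholder).
	pub struct MigrateV0ToV1;
	// impl frame_support::traits::OnRuntimeUpgrade for MigrateV0ToV1 { ... }
}

pub mod versioned {
	/// The only exported item: a version-checked wrapper. In FRAME this would
	/// typically be a `VersionedMigration` type alias over the inner type.
	pub struct MigrateV0ToV1;
}
```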
-
maksimryndin authored
resolve https://github.com/paritytech/polkadot-sdk/issues/3116

A follow-up on https://github.com/paritytech/polkadot-sdk/pull/3061#pullrequestreview-1847530265:

- [x] reuse the collator overseer builder for polkadot-node and collator
- [x] run zombienet test (0001-parachains-smoke-test.toml)
- [x] make wasm build errors more user-friendly for easier problem detection when using different Rust toolchains

---------

Co-authored-by: ordian <[email protected]>
Co-authored-by: s0me0ne-unkn0wn <[email protected]>
-
Liam Aharon authored
Introduce storage attr macro `#[disable_try_decode_storage]` and set it on `System::Events` and `ParachainSystem::HostConfiguration` (#3454)

Closes https://github.com/paritytech/polkadot-sdk/issues/2560

Allows marking storage items with `#[disable_try_decode_storage]`, and uses it with `System::Events`.

Question: what's the recommended way to write a test for this? I couldn't find a test for the similar existing macro `#[whitelist_storage]`.
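A rough sketch of how this would be applied to a storage item, assuming the attribute is used inside the pallet macro analogously to `#[pallet::whitelist_storage]`; the pallet below is a made-up minimal example, not `frame_system`:

```rust
#[frame_support::pallet]
pub mod pallet {
	use frame_support::pallet_prelude::*;

	#[pallet::pallet]
	pub struct Pallet<T>(_);

	#[pallet::config]
	pub trait Config: frame_system::Config {}

	// A storage item that is expensive or impossible to decode during
	// try-runtime's "decode all storage" checks, so decoding is skipped.
	#[pallet::storage]
	#[pallet::disable_try_decode_storage]
	pub type ExpensiveToDecode<T: Config> = StorageValue<_, u32, ValueQuery>;
}
```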
-
- Feb 27, 2024
-
-
Petr Mensik authored
Hey everyone, this PR will replace the existing Polkadotters bootnodes for Polkadot, Kusama and Westend and add a Paseo bootnode to the relay chain suite. At the same time, it will add new bootnodes for all the system parachains, including People on Westend. This PR is a part of our membership in the IBP, meaning that all the bootnodes are hosted on our hardware housed in the data center in Christchurch, New Zealand.

All the bootnodes were tested with an empty chain spec file, with each command yielding 1 peer. The test commands used are as follows:

```
./polkadot --base-path /tmp/node --reserved-only --chain paseo --reserved-nodes "/dns/paseo.bootnodes.polkadotters.com/tcp/30540/wss/p2p/12D3KooWPbbFy4TefEGTRF5eTYhq8LEzc4VAHdNUVCbY4nAnhqPP"
./polkadot --base-path /tmp/node --reserved-only --chain westend --reserved-nodes "/dns/westend.bootnodes.polkadotters.com/tcp/30310/wss/p2p/12D3KooWHPHb64jXMtSRJDrYFATWeLnvChL8NtWVttY67DCH1eC5"
./polkadot --base-path /tmp/node --reserved-only --chain kusama --reserved-nodes "/dns/kusama.bootnodes.polkadotters.com/tcp/30313/wss/p2p/12D3KooWHB5rTeNkQdXNJ9ynvGz8Lpnmsctt7Tvp7mrYv6bcwbPG"
./polkadot --base-path /tmp/node --no-hardware-benchmarks --reserved-only --chain polkadot --reserved-nodes "/dns/polkadot.bootnodes.polkadotters.com/tcp/30316/wss/p2p/12D3KooWPAVUgBaBk6n8SztLrMk8ESByncbAfRKUdxY1nygb9zG3"
./polkadot-parachain --base-path /tmp/node --reserved-only --chain asset-hub-kusama --reserved-nodes "/dns/asset-hub-kusama.bootnodes.polkadotters.com/tcp/30513/wss/p2p/12D3KooWDpk7wVH7RgjErEvbvAZ2kY5VeaAwRJP5ojmn1e8b8UbU"
./polkadot-parachain --base-path /tmp/node --reserved-only --chain asset-hub-polkadot --reserved-nodes "/dns/asset-hub-polkadot.bootnodes.polkadotters.com/tcp/30510/wss/p2p/12D3KooWKbfY9a9oywxMJKiALmt7yhrdQkjXMtvxhhDDN23vG93R"
./polkadot-parachain --base-path /tmp/node --reserved-only --chain asset-hub-westend --reserved-nodes "/dns/asset-hub-westend.bootnodes.polkadotters.com/tcp/30516/wss/p2p/12D3KooWNFYysCqmojxqjjaTfD2VkWBNngfyUKWjcR4WFixfHNTk"
./polkadot-parachain --base-path /tmp/node --reserved-only --chain bridge-hub-kusama --reserved-nodes "/dns/bridge-hub-kusama.bootnodes.polkadotters.com/tcp/30522/wss/p2p/12D3KooWH3pucezRRS5esoYyzZsUkKWcPSByQxEvmM819QL1HPLV"
./polkadot-parachain --base-path /tmp/node --reserved-only --chain bridge-hub-westend --reserved-nodes "/dns/bridge-hub-westend.bootnodes.polkadotters.com/tcp/30525/wss/p2p/12D3KooWPkwgJofp4GeeRwNgXqkp2aFwdLkCWv3qodpBJLwK43Jj"
./polkadot-parachain --base-path /tmp/node --reserved-only --chain collectives-polkadot --reserved-nodes "/dns/collectives-polkadot.bootnodes.polkadotters.com/tcp/30528/wss/p2p/12D3KooWNohUjvJtGKUa8Vhy8C1ZBB5N8JATB6e7rdLVCioeb3ff"
./polkadot-parachain --base-path /tmp/node --reserved-only --chain collectives-westend --reserved-nodes "/dns/collectives-westend.bootnodes.polkadotters.com/tcp/30531/wss/p2p/12D3KooWAFkXNSBfyPduZVgfS7pj5NuVpbU8Ee5gHeF8wvos7Yqn"
./polkadot-parachain --base-path /tmp/node --reserved-only --chain people-westend --reserved-nodes "/dns/identity-westend.bootnodes.polkadotters.com/tcp/30534/wss/p2p/12D3KooWKr9San6KTM7REJ95cBaDoiciGcWnW8TTftEJgxGF5Ehb"
```

Best regards, Petr, Polkadotters
-
Kian Paimani authored
Does the following:

- Add a reference doc page named `frame_runtime_types`, which explains what types like `RuntimeOrigin`, `RuntimeCall` etc. are.
- On top of it, add a reference doc page called `frame_origin` which explains a few important patterns that we use around origins.
- Finally, brush up the `#[frame::origin]` docs.
- Update the theme, sidebar and favicon to look like:

<img width="1728" alt="Screenshot 2024-02-20 at 12 16 00" src="https://github.com/paritytech/polkadot-sdk/assets/5588131/6d60a16b-2081-411b-8869-43b91920cca9">

All of this was inspired by https://substrate.stackexchange.com/questions/10992/how-do-you-find-the-public-key-for-the-medium-spender-track-origin/10993

closes https://github.com/paritytech/polkadot-sdk-docs/issues/45
closes https://github.com/paritytech/polkadot-sdk-docs/issues/43

contributes / overlaps with https://github.com/paritytech/polkadot-sdk/pull/2638 cc @liamaharon

deprecation companion: https://github.com/substrate-developer-hub/substrate-docs/pull/2131
pba-content companion: https://github.com/Polkadot-Blockchain-Academy/pba-content/pull/977

---------

Co-authored-by: Radha <[email protected]>
Co-authored-by: Sebastian Kunert <[email protected]>
Co-authored-by: Gonçalo Pestana <[email protected]>
Co-authored-by: Liam Aharon <[email protected]>
-
Serban Iorga authored
Related to https://github.com/paritytech/polkadot-sdk/issues/3475
-
Clara van Staden authored
When running `cargo test -p bridge-hub-rococo-runtime --features runtime-benchmarks`, two of the Snowbridge benchmark tests fail. The reason is that when the runtime-benchmarks feature is enabled, the `NoopMessageProcessor` message processor is used. The Snowbridge tests rely on the outbound messages being processed using the message queue, so that we can check the expected nonce and block digest logs. This PR changes the conditional compilation to only use `NoopMessageProcessor` when compiling the executable to run benchmarks against, not when running tests. --------- Co-authored-by: claravanstaden <Cats 4 life!>
-
Javier Viola authored
Fix timeouts when downloading the artifacts (e.g. https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/5246040). Thx! --------- Co-authored-by: Bastian Köcher <[email protected]>
-
- Feb 26, 2024
-
-
philoniare authored
# Description

Refactors `String::from_utf8` usage in the pallet benchmarking.

Fixes #389

---------

Co-authored-by: command-bot <>
-
Oliver Tale-Yazdi authored
Changes:
- Add a CI script to check that the crate names mentioned in prdocs are valid. We can extend it later on to also validate the correct SemVer bumps as introduced in https://github.com/paritytech/polkadot-sdk/pull/3441.

Example output:

```pre
$ python3 .github/scripts/check-prdoc.py Cargo.toml prdoc/*.prdoc

🔎 Reading workspace polkadot-sdk/Cargo.toml.
📦 Checking 36 prdocs against 494 crates.
✅ All prdocs are valid.
```

Note that not all old prdocs pass the check since crates have been renamed:

```pre
$ python3 .github/scripts/check-prdoc.py Cargo.toml prdoc/**/*.prdoc

🔎 Reading workspace polkadot-sdk/Cargo.toml.
📦 Checking 186 prdocs against 494 crates.
❌ Some prdocs are invalid.
💥 prdoc/1.4.0/pr_1926.prdoc lists invalid crate: node-cli
💥 prdoc/1.4.0/pr_2086.prdoc lists invalid crate: xcm-executor
💥 prdoc/1.4.0/pr_2107.prdoc lists invalid crate: xcm
💥 prdoc/1.6.0/pr_2684.prdoc lists invalid crate: xcm-builder
```

---------

Signed-off-by: Oliver Tale-Yazdi <[email protected]>

-
Vladimir Istyufeev authored
Tests run as part of the `test-linux-stable` jobs could hang as shown [here](https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/5341393). This PR adds a failsafe for such cases.
-
Sebastian Kunert authored
While investigating some pruning issues I found some room for improvement in the notification pin handling.

**Problem:** It was not possible to define an upper limit on notification pins. The block pinning cache has a limit, but only handles bodies and justifications.

After this PR, bookkeeping for notifications is managed in the pinning worker, and a limit can be defined in the worker. If that limit is crossed, blocks that were pinned for that notification are unpinned, which now affects the state as well as bodies and justifications. The pinned blocks cache still has a limit, but it should never be hit.

closes #19

---------

Co-authored-by: Bastian Köcher <[email protected]>
Co-authored-by: André Silva <[email protected]>
-
Bastian Köcher authored
Instead of only generating the error, we now generate the actual code and the error. This produces fewer errors in total and helps the user identify the actual problem without being confronted with tons of errors.
-
brenzi authored
With the deprecation of Rococo, Encointer needs a new staging environment. Paseo will be Polkadot-focused and Westend Kusama-focused, so we propose to use Westend.
-
Branislav Kontur authored
## Problem

During the bumping of the `polkadot-fellows` repository to `[email protected]`, I encountered a situation where the benchmarks `teleport_assets` and `reserve_transfer_assets` in AssetHubKusama started to fail. This issue arose due to a decreased ED balance for AssetHubs introduced [here](https://github.com/polkadot-fellows/runtimes/pull/158/files#diff-80668ff8e793b64f36a9a3ec512df5cbca4ad448c157a5d81abda1b15f35f1daR213), and also because of a [missing CI pipeline](https://github.com/polkadot-fellows/runtimes/issues/197) to check the benchmarks, which went unnoticed.

These benchmarks expect the `caller` to have enough:
1. balance to transfer (BTT)
2. balance for paying delivery (BFPD)

So the initial balance was calculated as `ED * 100`, which seems reasonable:

```
const ED_MULTIPLIER: u32 = 100;
let balance = existential_deposit.saturating_mul(ED_MULTIPLIER.into());
```

The problem arises when the price for delivery is 100 times higher than the existential deposit. In other words, when `ED * 100` does not cover `BTT + BFPD`.

I checked AHR/AHW/AHK/AHP and only AssetHubKusama has this problem:

```
ED: 3333333
calculated price to parent delivery: 1031666634 (from the xcm logs of the benchmark)
---
3333333 * 100 - BTT(3333333) - BFPD(1031666634) = −701666667
```

which results in the error:

```
2024-02-23 09:19:42 Unable to charge fee with error Module(ModuleError { index: 31, error: [17, 0, 0, 0], message: Some("FeesNotMet") })
Error: Input("Benchmark pallet_xcm::reserve_transfer_assets failed: FeesNotMet")
```

## Solution

The benchmarks `teleport_assets` and `reserve_transfer_assets` were fixed by removing `ED * 100` and replacing it with `DeliveryHelper` logic, which calculates the (almost real) price for delivery and sets it, along with the existential deposit, as the initial balance for the account used in the benchmark.

## TODO

- [ ] patch for 1.6 - https://github.com/paritytech/polkadot-sdk/pull/3466
- [ ] patch for 1.7 - https://github.com/paritytech/polkadot-sdk/pull/3465
- [ ] patch for 1.8 - TODO: PR

---------

Co-authored-by: Francisco Aguirre <[email protected]>
-
Alexandru Gheorghe authored
Add more debug logs to understand if statement-distribution is in a bad state; should be useful for debugging https://github.com/paritytech/polkadot-sdk/issues/3314 on production networks.

Additionally, increase the number of parallel requests the subsystem makes, since I noticed that requests take around 100ms on Kusama, and the value of 5 parallel requests was picked mostly at random - there is no reason why we can't do more than that.

---------

Signed-off-by: Alexandru Gheorghe <[email protected]>
Co-authored-by: ordian <[email protected]>
-
eskimor authored
from Westend and Rococo. --------- Co-authored-by: eskimor <[email protected]> Co-authored-by: command-bot <>
-
- Feb 24, 2024
-
-
tmpolaczyk authored
Changes the runtime hash algorithm used in `resolve_state_version_from_wasm` from `DefaultHasher` to a caller-provided one (usually `HashingFor<Block>`), to match the one used elsewhere. This fixes an issue where the runtime wasm is compiled 3 times when starting the `tanssi-node` with `--dev`. With this fix, the runtime wasm is only compiled 2 times. The other redundant compilation is caused by the `GenesisConfigBuilderRuntimeCaller` struct, which ignores the runtime cache. --------- Co-authored-by: Bastian Köcher <[email protected]>
-
- Feb 23, 2024
-
-
Andrei Sandu authored
Fixes https://github.com/paritytech/polkadot-sdk/issues/3144

Builds on top of https://github.com/paritytech/polkadot-sdk/pull/3229

### Summary

Some preparations for the Runtime to support elastic scaling, guarded by the config node features bit `FeatureIndex::ElasticScalingMVP`. This PR introduces a per-candidate `CoreIndex` but does it in a hacky way to avoid changing the `CandidateCommitments` and `CandidateReceipt` primitives and the networking protocols.

#### Including `CoreIndex` in `BackedCandidate`

If the `ElasticScalingMVP` feature bit is enabled then `BackedCandidate::validator_indices` is extended by 8 bits. The value stored in these bits represents the assumed core index for the candidate. It is a temporary solution which works by creating a mapping from `BackedCandidate` to `CoreIndex` by assuming the `CoreIndex` can be discovered by checking in which validator group the validator that signed the statement is.

TODO:
- [x] fix tests
- [x] add new tests
- [x] Bump runtime API for Kusama, so we have that node features thing! -> https://github.com/polkadot-fellows/runtimes/pull/194

---------

Signed-off-by: Andrei Sandu <[email protected]>
Signed-off-by: alindima <[email protected]>
Co-authored-by: alindima <[email protected]>
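To illustrate the bit-packing trick described above, here is a toy sketch that appends the assumed core index as 8 trailing bits of the validator bitfield and reads it back; it uses a plain `Vec<bool>` and made-up function names, not the actual `BackedCandidate` types or encoding.

```rust
// Toy sketch: when the elastic-scaling feature bit is on, the last 8 bits of
// `validator_indices` carry the assumed core index for the candidate.
fn inject_core_index(mut validator_indices: Vec<bool>, core_index: u8) -> Vec<bool> {
    for bit in 0..8 {
        validator_indices.push((core_index >> bit) & 1 == 1);
    }
    validator_indices
}

fn extract_core_index(validator_indices: &[bool]) -> (u8, &[bool]) {
    // Split off the trailing 8 bits and reassemble them into a core index.
    let split = validator_indices.len() - 8;
    let (votes, core_bits) = validator_indices.split_at(split);
    let mut core_index = 0u8;
    for (bit, set) in core_bits.iter().enumerate() {
        if *set {
            core_index |= 1 << bit;
        }
    }
    (core_index, votes)
}

fn main() {
    let votes = vec![true, false, true]; // which validators in the group signed
    let extended = inject_core_index(votes.clone(), 2);
    let (core, original_votes) = extract_core_index(&extended);
    assert_eq!(core, 2);
    assert_eq!(original_votes, votes.as_slice());
}
```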
-
Serban Iorga authored
Related to https://github.com/paritytech/polkadot-sdk/issues/3400

Moving all bridges testing "framework" files under one folder in order to be able to download the entire folder when we want to add tests in other repos.

No significant functional changes.
-
Sebastian Kunert authored
# Runtime side for PoV Reclaim

## Implementation Overview

- A hostfunction to fetch the storage proof size has been added to the PVF. It uses the size-tracking recorder that was introduced in my previous PR.
- Mechanisms to use the reclaim hostfunction have been introduced:
  1. A SignedExtension that checks the node-reported proof size before and after application of an extrinsic, then reclaims the difference.
  2. A manual helper to make reclaiming easier when manual interaction is required, for example in `on_idle` or other hooks.
- In order to utilize the manual reclaiming, I modified `WeightMeter` to support the reduction of consumed weight, at least for storage proof size.

## How to use

To enable the general functionality for a parachain:

1. Add the SignedExtension to your parachain runtime.
2. Provide the HostFunction to the node.
3. Enable proof recording during block import.

A schematic of the reclaim arithmetic is sketched below.

## TODO

- [x] PRDoc

---------

Co-authored-by: Dmitry Markin <[email protected]>
Co-authored-by: Davide Galassi <[email protected]>
Co-authored-by: Bastian Köcher <[email protected]>
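A minimal sketch of the reclaim arithmetic described in the overview above; the proof-size values would come from the new host function and the extrinsic's benchmarked weight, and the function and variable names here are hypothetical rather than the real cumulus API.

```rust
// Schematic only: returns how much proof size can be handed back to the
// block's weight meter after an extrinsic was applied.
fn reclaim_after_dispatch(
    benchmarked_proof_size: u64,
    proof_size_before: u64,
    proof_size_after: u64,
) -> u64 {
    // How much proof size the extrinsic actually consumed on this node.
    let actually_used = proof_size_after.saturating_sub(proof_size_before);
    // If the benchmark over-estimated, the difference is reclaimable; if it
    // under-estimated, nothing is reclaimed.
    benchmarked_proof_size.saturating_sub(actually_used)
}

fn main() {
    // Benchmark charged 10 KiB of proof size, but only 2 KiB were recorded.
    let reclaimable = reclaim_after_dispatch(10 * 1024, 100 * 1024, 102 * 1024);
    assert_eq!(reclaimable, 8 * 1024);
}
```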
-
Dino Pačandi authored
## Summary

- Use benchmarked weights instead of hardcoded ones for `pallet-membership`
- Rename the benchmark to match the extrinsic name
- Remove an unnecessary dependency from `clear_prime`

---------

Signed-off-by: Oliver Tale-Yazdi <[email protected]>
Co-authored-by: Oliver Tale-Yazdi <[email protected]>
-
PG Herveou authored
Add an `ApiVersion` constant to the pallet-contracts `Config` to communicate to developers the current state of the host functions exposed by the pallet.
-
Sebastian Kunert authored
By passing `RUST_LOG=info` to the check command, we will be able to see the exact problem with a given prdoc.

Before:

```
PR #3243 -> ERR
```

After:

```
[2024-02-23T12:53:55Z INFO prdoclib::commands::check] Checking directory prdoc
[2024-02-23T12:53:55Z INFO prdoclib::commands::check] Using schema: /Users/sebastian/work/repos/polkadot-sdk/prdoc/schema_user.json
[2024-02-23T12:53:55Z WARN prdoclib::schema] validation_result: false
[2024-02-23T12:53:55Z WARN prdoclib::schema] validation_result_strict: false
[2024-02-23T12:53:55Z WARN prdoclib::schema] errors: [
        Required {
            path: "/title",
        },
    ]
[2024-02-23T12:53:55Z WARN prdoclib::schema] missing: []
[2024-02-23T12:53:55Z ERROR prdoclib::commands::check] Loading the schema failed:
[2024-02-23T12:53:55Z ERROR prdoclib::commands::check] ValidationErrors(ValidationState { errors: [Required { path: "/title" }], missing: [], replacement: None, evaluated: {"/doc/0/description", "/crates/0/name", "/doc/0", "/crates", "/crates/0", "", "/doc", "/doc/0/audience"} })
PR #3243 -> ERR
```
-
Ignacio Palacios authored
The `fee` should be calculated with the reanchored asset, otherwise it could lead to a failure where the fee set aside ends up not being enough. @acatangiu --------- Co-authored-by: Adrian Catangiu <[email protected]>
-
Serban Iorga authored
Related to https://github.com/paritytech/polkadot-sdk/issues/3400

Extracting small parts of https://github.com/paritytech/polkadot-sdk/pull/3429 into a separate PR:
- Add support for BHP local and BHK local
- Increase the timeout for the bridge zombienet tests
-
- Feb 22, 2024
-
-
Bastian Köcher authored
This introduces a check to ensure that the parachain code matches the validation code stored in the relay chain state. If not, it will print a warning. This should mainly be useful for parachain builders to make sure they have set up everything correctly.
-
Adrian Catangiu authored
As part of BEEFY worker/voter initialization, the task waits for certain chain and backend conditions to be fulfilled:
- BEEFY consensus enabled on-chain & GRANDPA best finalized higher than the on-chain BEEFY genesis block,
- backend has synced headers for BEEFY mandatory blocks between best BEEFY and best GRANDPA.

During this waiting time, any messages gossiped on the BEEFY topic for the current chain get enqueued in the gossip engine, leading to RAM bloating and output warning/error messages when the wait time is non-negligible (like during a clean sync).

This PR adds logic to pump the gossip engine while waiting for other things, to make sure gossiped messages get consumed (practically discarded until the worker is fully initialized).

Also raises the warning threshold for enqueued messages from 10k to 100k. This is in line with the other gossip protocols on the node.

Fixes https://github.com/paritytech/polkadot-sdk/issues/3390

---------

Signed-off-by: Adrian Catangiu <[email protected]>
-
Koute authored
This PR fixes a subtle bug in `wasm-builder` first introduced in https://github.com/paritytech/polkadot-sdk/pull/1851 (sorry, my bad! I should have caught this during review) where the status code of the `cargo` subprocess is not properly checked, which results in builds silently succeeding when they shouldn't (that is: if we successfully build a runtime blob, and then modify the code so that it won't compile, and recompile it again, then the build will succeed and silently use the *old* blob). cc @athei This is the bug you were seeing. [edit]Also fixes a similar PolkaVM-specific bug where I accidentally used the wrong comparison operator.[/edit]
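For readers unfamiliar with this class of bug, the sketch below shows the general pattern of spawning `cargo` and (the fix) checking its exit status; it is a generic illustration, not the actual `wasm-builder` code.

```rust
use std::process::Command;

// Generic illustration: if the exit status of the `cargo` subprocess is never
// checked, a failed compilation goes unnoticed and a stale runtime blob from a
// previous successful build keeps being used.
fn build_runtime() -> Result<(), String> {
    let status = Command::new("cargo")
        .args(["build", "--release"])
        .status()
        .map_err(|e| format!("failed to spawn cargo: {e}"))?;

    // The fix: propagate a non-zero exit status instead of silently ignoring it.
    if !status.success() {
        return Err(format!("cargo exited with {status}"));
    }
    Ok(())
}

fn main() {
    if let Err(err) = build_runtime() {
        eprintln!("{err}");
    }
}
```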
-
Egor_P authored
This PR adds two more public Matrix channels where the release announcements should go.
-
Andrei Sandu authored
First step in implementing https://github.com/paritytech/polkadot-sdk/issues/3144

### Summary of changes

- switch the statement `Table` candidate mapping from `ParaId` to `CoreIndex`
- introduce the experimental `InjectCoreIndex` node feature
- determine and assume a `CoreIndex` for a candidate based on the statement validator index. If the signature is valid, it means the validator controls that validator index and we can easily map it to a validator group/core.
- introduce a temporary provisioner fix until we fully enable elastic scaling in the subsystem. The fix ensures we don't fetch the same backable candidate when calling `get_backable_candidate` for each core.

TODO:
- [x] fix backing tests
- [x] fix statement table tests
- [x] add new test

---------

Signed-off-by: Andrei Sandu <[email protected]>
Signed-off-by: alindima <[email protected]>
Co-authored-by: alindima <[email protected]>
-
Oliver Tale-Yazdi authored
Closes https://github.com/paritytech/polkadot-sdk/issues/2713 --------- Signed-off-by: Oliver Tale-Yazdi <[email protected]> Co-authored-by: André Silva <[email protected]>
-
- Feb 21, 2024
-
-
Matteo Muraca authored
part of #3326 @ggwpez @Kianenigma @shawntabrizi --------- Signed-off-by: Matteo Muraca <[email protected]> Co-authored-by: Kian Paimani <[email protected]>
-
tmpolaczyk authored
In Tanssi, we need a way to stop the collator code and then start it again. This is to support rotating the same collator between different runtimes. Currently, this works very well, except for the proposer metrics, because they only get registered the first time they are started. Afterwards, we see this warning log: > Failed to register proposer prometheus metrics: Duplicate metrics collector registration attempted ~~So this PR adds a method to set metrics, to allow us to register metrics manually before creating the `ProposerFactory`, and then clone the same metrics every time we need to start the collator.~~ Implemented Clone instead
-