- Feb 28, 2024
-
-
Liam Aharon authored
Introduce storage attr macro `#[disable_try_decode_storage]` and set it on `System::Events` and `ParachainSystem::HostConfiguration` (#3454) Closes https://github.com/paritytech/polkadot-sdk/issues/2560 Allows marking storage items with `#[disable_try_decode_storage]`, and uses it with `System::Events`. Question: what's the recommended way to write a test for this? I couldn't find a test for the similar existing macro `#[whitelist_storage]`.
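For illustration, a minimal sketch of how such an attribute is applied to a storage item; the pallet layout is a generic FRAME skeleton and the `#[pallet::disable_try_decode_storage]` attribute path mirrors existing storage attributes like `#[pallet::whitelist_storage]`, so treat the details as assumptions rather than the actual System pallet source:
```rust
// Sketch: opting a storage item out of try-decode checks during try-runtime runs.
#[frame_support::pallet]
pub mod pallet {
    use frame_support::pallet_prelude::*;

    #[pallet::pallet]
    pub struct Pallet<T>(_);

    #[pallet::config]
    pub trait Config: frame_system::Config {}

    // In the real use cases this holds the events vector or the host configuration,
    // which are either expensive or pointless to decode during try-state checks.
    #[pallet::storage]
    #[pallet::disable_try_decode_storage]
    pub type SomeLargeValue<T: Config> = StorageValue<_, u32, ValueQuery>;
}
```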
-
- Feb 27, 2024
-
-
Petr Mensik authored
Hey everyone, this PR will replace the existing Polkadotters bootnodes for Polkadot, Kusama and Westend and add a Paseo bootnode to the relay chain suite. At the same time, it will add new bootnodes for all the system parachains, including People on Westend. This PR is a part of our membership in the IBP, meaning that all the bootnodes are hosted on our hardware housed in the data center in Christchurch, New Zealand. All the bootnodes were tested with an empty chain spec file, with each of the following commands yielding 1 peer:
```
./polkadot --base-path /tmp/node --reserved-only --chain paseo --reserved-nodes "/dns/paseo.bootnodes.polkadotters.com/tcp/30540/wss/p2p/12D3KooWPbbFy4TefEGTRF5eTYhq8LEzc4VAHdNUVCbY4nAnhqPP"
./polkadot --base-path /tmp/node --reserved-only --chain westend --reserved-nodes "/dns/westend.bootnodes.polkadotters.com/tcp/30310/wss/p2p/12D3KooWHPHb64jXMtSRJDrYFATWeLnvChL8NtWVttY67DCH1eC5"
./polkadot --base-path /tmp/node --reserved-only --chain kusama --reserved-nodes "/dns/kusama.bootnodes.polkadotters.com/tcp/30313/wss/p2p/12D3KooWHB5rTeNkQdXNJ9ynvGz8Lpnmsctt7Tvp7mrYv6bcwbPG"
./polkadot --base-path /tmp/node --no-hardware-benchmarks --reserved-only --chain polkadot --reserved-nodes "/dns/polkadot.bootnodes.polkadotters.com/tcp/30316/wss/p2p/12D3KooWPAVUgBaBk6n8SztLrMk8ESByncbAfRKUdxY1nygb9zG3"
./polkadot-parachain --base-path /tmp/node --reserved-only --chain asset-hub-kusama --reserved-nodes "/dns/asset-hub-kusama.bootnodes.polkadotters.com/tcp/30513/wss/p2p/12D3KooWDpk7wVH7RgjErEvbvAZ2kY5VeaAwRJP5ojmn1e8b8UbU"
./polkadot-parachain --base-path /tmp/node --reserved-only --chain asset-hub-polkadot --reserved-nodes "/dns/asset-hub-polkadot.bootnodes.polkadotters.com/tcp/30510/wss/p2p/12D3KooWKbfY9a9oywxMJKiALmt7yhrdQkjXMtvxhhDDN23vG93R"
./polkadot-parachain --base-path /tmp/node --reserved-only --chain asset-hub-westend --reserved-nodes "/dns/asset-hub-westend.bootnodes.polkadotters.com/tcp/30516/wss/p2p/12D3KooWNFYysCqmojxqjjaTfD2VkWBNngfyUKWjcR4WFixfHNTk"
./polkadot-parachain --base-path /tmp/node --reserved-only --chain bridge-hub-kusama --reserved-nodes "/dns/bridge-hub-kusama.bootnodes.polkadotters.com/tcp/30522/wss/p2p/12D3KooWH3pucezRRS5esoYyzZsUkKWcPSByQxEvmM819QL1HPLV"
./polkadot-parachain --base-path /tmp/node --reserved-only --chain bridge-hub-westend --reserved-nodes "/dns/bridge-hub-westend.bootnodes.polkadotters.com/tcp/30525/wss/p2p/12D3KooWPkwgJofp4GeeRwNgXqkp2aFwdLkCWv3qodpBJLwK43Jj"
./polkadot-parachain --base-path /tmp/node --reserved-only --chain collectives-polkadot --reserved-nodes "/dns/collectives-polkadot.bootnodes.polkadotters.com/tcp/30528/wss/p2p/12D3KooWNohUjvJtGKUa8Vhy8C1ZBB5N8JATB6e7rdLVCioeb3ff"
./polkadot-parachain --base-path /tmp/node --reserved-only --chain collectives-westend --reserved-nodes "/dns/collectives-westend.bootnodes.polkadotters.com/tcp/30531/wss/p2p/12D3KooWAFkXNSBfyPduZVgfS7pj5NuVpbU8Ee5gHeF8wvos7Yqn"
./polkadot-parachain --base-path /tmp/node --reserved-only --chain people-westend --reserved-nodes "/dns/identity-westend.bootnodes.polkadotters.com/tcp/30534/wss/p2p/12D3KooWKr9San6KTM7REJ95cBaDoiciGcWnW8TTftEJgxGF5Ehb"
```
Best regards, Petr, Polkadotters
-
Kian Paimani authored
Does the following:
- Adds a reference doc page named `frame_runtime_types`, which explains what types like `RuntimeOrigin`, `RuntimeCall` etc. are.
- On top of it, adds a reference doc page called `frame_origin` which explains a few important patterns that we use around origins.
- Brushes up the `#[frame::origin]` docs.
- Updates the theme, sidebar and favicon to look like: <img width="1728" alt="Screenshot 2024-02-20 at 12 16 00" src="https://github.com/paritytech/polkadot-sdk/assets/5588131/6d60a16b-2081-411b-8869-43b91920cca9">

All of this was inspired by https://substrate.stackexchange.com/questions/10992/how-do-you-find-the-public-key-for-the-medium-spender-track-origin/10993 closes https://github.com/paritytech/polkadot-sdk-docs/issues/45 closes https://github.com/paritytech/polkadot-sdk-docs/issues/43 contributes / overlaps with https://github.com/paritytech/polkadot-sdk/pull/2638 cc @liamaharon deprecation companion: https://github.com/substrate-developer-hub/substrate-docs/pull/2131 pba-content companion: https://github.com/Polkadot-Blockchain-Academy/pba-content/pull/977 --------- Co-authored-by: Radha <[email protected]> Co-authored-by: Sebastian Kunert <[email protected]> Co-authored-by: Gonçalo Pestana <[email protected]> Co-authored-by: Liam Aharon <[email protected]>
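For readers new to these types, a minimal hand-written sketch of the shapes the new reference doc is about; these are ordinary enums written for illustration, not what `construct_runtime!` actually generates:
```rust
// Per-pallet call enums, as each pallet would define them.
enum SystemCall {
    Remark { remark: Vec<u8> },
}
enum BalancesCall {
    Transfer { dest: u64, value: u128 },
}

// The outer `RuntimeCall` has one variant per pallet, wrapping that pallet's `Call`.
enum RuntimeCall {
    System(SystemCall),
    Balances(BalancesCall),
}

// The outer `RuntimeOrigin` describes where a dispatch comes from.
enum RuntimeOrigin {
    Root,
    Signed(u64), // account id
    None,
}

fn main() {
    let call = RuntimeCall::Balances(BalancesCall::Transfer { dest: 7, value: 100 });
    let origin = RuntimeOrigin::Signed(42);
    if let (RuntimeOrigin::Signed(who), RuntimeCall::Balances(_)) = (&origin, &call) {
        println!("account {who} dispatches a Balances call");
    }
    // Construct the remaining variants so this sketch exercises the whole shape.
    let _ = (
        RuntimeCall::System(SystemCall::Remark { remark: vec![] }),
        RuntimeOrigin::Root,
        RuntimeOrigin::None,
    );
}
```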
-
Serban Iorga authored
Related to https://github.com/paritytech/polkadot-sdk/issues/3475
-
Clara van Staden authored
When running `cargo test -p bridge-hub-rococo-runtime --features runtime-benchmarks`, two of the Snowbridge benchmark tests fail. The reason is that when the runtime-benchmarks feature is enabled, the `NoopMessageProcessor` message processor is used. The Snowbridge tests rely on the outbound messages being processed using the message queue, so that we can check the expected nonce and block digest logs. This PR changes the conditional compilation to only use `NoopMessageProcessor` when compiling the executable to run benchmarks against, not when running tests. --------- Co-authored-by: claravanstaden <Cats 4 life!>
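A minimal sketch of the cfg pattern described, with placeholder processor types rather than the actual bridge-hub runtime types:
```rust
// Placeholder processors standing in for the runtime's real message processors.
struct NoopProcessor;
struct QueueBackedProcessor;

// Only select the no-op processor when building the benchmarking executable itself...
#[cfg(all(feature = "runtime-benchmarks", not(test)))]
type MessageProcessor = NoopProcessor;

// ...and keep the queue-backed processor everywhere else, including
// `cargo test --features runtime-benchmarks`.
#[cfg(not(all(feature = "runtime-benchmarks", not(test))))]
type MessageProcessor = QueueBackedProcessor;

fn main() {
    // The alias resolves differently depending on how the crate is compiled.
    println!("selected processor: {}", std::any::type_name::<MessageProcessor>());
}
```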
-
Javier Viola authored
Fix timeouts downloading the artifacts (e.g. https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/5246040). Thx! --------- Co-authored-by: Bastian Köcher <[email protected]>
-
- Feb 26, 2024
-
-
philoniare authored
# Description Refactors `String::from_utf8` usage in the pallet benchmarking. Fixes #389 --------- Co-authored-by: command-bot <>
-
Oliver Tale-Yazdi authored
Changes:
- Add a CI script to check that the `crate` names that are mentioned in prdocs are valid. We can extend it later on to also validate the correct SemVer bumps as introduced in https://github.com/paritytech/polkadot-sdk/pull/3441.

Example output:
```pre
$ python3 .github/scripts/check-prdoc.py Cargo.toml prdoc/*.prdoc
🔎 Reading workspace polkadot-sdk/Cargo.toml.
📦 Checking 36 prdocs against 494 crates.
✅ All prdocs are valid.
```
Note that not all old prdocs pass the check since crates have been renamed:
```pre
$ python3 .github/scripts/check-prdoc.py Cargo.toml prdoc/**/*.prdoc
🔎 Reading workspace polkadot-sdk/Cargo.toml.
📦 Checking 186 prdocs against 494 crates.
❌ Some prdocs are invalid.
💥 prdoc/1.4.0/pr_1926.prdoc lists invalid crate: node-cli
💥 prdoc/1.4.0/pr_2086.prdoc lists invalid crate: xcm-executor
💥 prdoc/1.4.0/pr_2107.prdoc lists invalid crate: xcm
💥 prdoc/1.6.0/pr_2684.prdoc lists invalid crate: xcm-builder
```
--------- Signed-off-by: Oliver Tale-Yazdi <[email protected]>
-
Vladimir Istyufeev authored
Tests run as part of the `test-linux-stable` jobs could hang as shown [here](https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/5341393). This PR adds a failsafe for such cases.
-
Sebastian Kunert authored
While investigating some pruning issues I found some room for improvement in the notification pin handling. **Problem:** It was not possible to define an upper limit on notification pins. The block pinning cache has a limit, but only handles bodies and justifications. After this PR, bookkeeping for notifications is managed in the pinning worker. A limit can be defined in the worker. If that limit is crossed, blocks that were pinned for that notification are unpinned, which now affects the state as well as bodies and justifications. The pinned blocks cache still has a limit, but should never be hit. closes #19 --------- Co-authored-by: Bastian Köcher <[email protected]> Co-authored-by: André Silva <[email protected]>
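A minimal sketch of the bounded bookkeeping idea, using assumed data structures rather than the actual pinning worker code:
```rust
use std::collections::VecDeque;

// Bookkeeping lives in the worker: every notification-pinned block is recorded,
// and once the configured limit is exceeded, the oldest entry is unpinned again.
struct NotificationPinWorker {
    limit: usize,
    pinned: VecDeque<u64>, // block numbers pinned on behalf of notifications
}

impl NotificationPinWorker {
    fn pin_for_notification(&mut self, block: u64) {
        self.pinned.push_back(block);
        if self.pinned.len() > self.limit {
            if let Some(evicted) = self.pinned.pop_front() {
                // In the real worker this releases the backend pin, which covers
                // state as well as bodies and justifications.
                println!("limit crossed, unpinning block {evicted}");
            }
        }
    }
}

fn main() {
    let mut worker = NotificationPinWorker { limit: 2, pinned: VecDeque::new() };
    for block in 1..=4 {
        worker.pin_for_notification(block);
    }
}
```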
-
Bastian Köcher authored
Instead of only generating the error, we now generate the actual code and the error. This produces fewer errors in total and helps the user identify the actual problem instead of being confronted with tons of errors.
-
brenzi authored
With the deprecation of Rococo, Encointer needs a new staging environment. Paseo will be Polkadot-focused and Westend Kusama-focused, so we propose to use Westend.
-
Branislav Kontur authored
## Problem
During the bumping of the `polkadot-fellows` repository to `[email protected]`, I encountered a situation where the benchmarks `teleport_assets` and `reserve_transfer_assets` in AssetHubKusama started to fail. This issue arose due to a decreased ED balance for AssetHubs introduced [here](https://github.com/polkadot-fellows/runtimes/pull/158/files#diff-80668ff8e793b64f36a9a3ec512df5cbca4ad448c157a5d81abda1b15f35f1daR213), and also because of a [missing CI pipeline](https://github.com/polkadot-fellows/runtimes/issues/197) to check the benchmarks, which went unnoticed.

These benchmarks expect the `caller` to have enough:
1. balance to transfer (BTT)
2. balance for paying delivery (BFPD)

So the initial balance was calculated as `ED * 100`, which seems reasonable:
```
const ED_MULTIPLIER: u32 = 100;
let balance = existential_deposit.saturating_mul(ED_MULTIPLIER.into());
```
The problem arises when the price for delivery is 100 times higher than the existential deposit, in other words, when `ED * 100` does not cover `BTT + BFPD`. I checked AHR/AHW/AHK/AHP and only AssetHubKusama has this problem:
```
ED: 3333333
calculated price of delivery to the parent: 1031666634 (from the XCM logs of the benchmark)
---
3333333 * 100 - BTT(3333333) - BFPD(1031666634) = −701666667
```
which results in the error:
```
2024-02-23 09:19:42 Unable to charge fee with error Module(ModuleError { index: 31, error: [17, 0, 0, 0], message: Some("FeesNotMet") })
Error: Input("Benchmark pallet_xcm::reserve_transfer_assets failed: FeesNotMet")
```

## Solution
The benchmarks `teleport_assets` and `reserve_transfer_assets` were fixed by removing `ED * 100` and replacing it with `DeliveryHelper` logic, which calculates the (almost real) price of delivery and sets it, along with the existential deposit, as the initial balance for the account used in the benchmark.

## TODO
- [ ] patch for 1.6 - https://github.com/paritytech/polkadot-sdk/pull/3466
- [ ] patch for 1.7 - https://github.com/paritytech/polkadot-sdk/pull/3465
- [ ] patch for 1.8 - TODO: PR

--------- Co-authored-by: Francisco Aguirre <[email protected]>
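A minimal sketch of the funding change; `estimate_delivery_fee` is a hypothetical stand-in for the `DeliveryHelper` logic, and the numbers are the ones quoted above:
```rust
/// Hypothetical stand-in for the `DeliveryHelper` price calculation.
fn estimate_delivery_fee() -> u128 {
    1_031_666_634 // value taken from the benchmark's XCM logs above
}

/// Fund the benchmark caller with ED + amount-to-transfer + estimated delivery fee,
/// instead of the fixed `ED * 100` heuristic that broke on AssetHubKusama.
fn initial_benchmark_balance(existential_deposit: u128, transfer_amount: u128) -> u128 {
    existential_deposit
        .saturating_add(transfer_amount)
        .saturating_add(estimate_delivery_fee())
}

fn main() {
    let ed = 3_333_333u128; // AssetHubKusama ED from the description
    let balance = initial_benchmark_balance(ed, ed);
    assert!(balance >= ed + ed + estimate_delivery_fee());
    println!("initial balance: {balance}");
}
```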
-
Alexandru Gheorghe authored
Add more debug logs to understand if statement-distribution is in a bad state; should be useful for debugging https://github.com/paritytech/polkadot-sdk/issues/3314 on production networks. Additionally, increase the number of parallel requests the subsystem should make, since I noticed that requests take around 100ms on Kusama and the limit of 5 parallel requests was picked mostly at random; there is no reason why we can't do more than that. --------- Signed-off-by: Alexandru Gheorghe <[email protected]> Co-authored-by: ordian <[email protected]>
-
eskimor authored
from Westend and Rococo. --------- Co-authored-by: eskimor <[email protected]> Co-authored-by: command-bot <>
-
- Feb 24, 2024
-
-
tmpolaczyk authored
Changes the runtime hash algorithm used in `resolve_state_version_from_wasm` from `DefaultHasher` to a caller-provided one (usually `HashingFor<Block>`), to match the one used elsewhere. This fixes an issue where the runtime wasm is compiled 3 times when starting the `tanssi-node` with `--dev`. With this fix, the runtime wasm is only compiled 2 times. The other redundant compilation is caused by the `GenesisConfigBuilderRuntimeCaller` struct, which ignores the runtime cache. --------- Co-authored-by: Bastian Köcher <[email protected]>
-
- Feb 23, 2024
-
-
Andrei Sandu authored
Fixes https://github.com/paritytech/polkadot-sdk/issues/3144 Builds on top of https://github.com/paritytech/polkadot-sdk/pull/3229

### Summary
Some preparations for the runtime to support elastic scaling, guarded by the config node features bit `FeatureIndex::ElasticScalingMVP`. This PR introduces a per-candidate `CoreIndex`, but does it in a hacky way to avoid changing the `CandidateCommitments` and `CandidateReceipts` primitives and the networking protocols.

#### Including `CoreIndex` in `BackedCandidate`
If the `ElasticScalingMVP` feature bit is enabled, then `BackedCandidate::validator_indices` is extended by 8 bits. The value stored in these bits represents the assumed core index for the candidate. It is a temporary solution which works by creating a mapping from `BackedCandidate` to `CoreIndex`, assuming the `CoreIndex` can be discovered by checking which validator group the validator that signed the statement belongs to.

TODO:
- [x] fix tests
- [x] add new tests
- [x] Bump runtime API for Kusama, so we have that node features thing! -> https://github.com/polkadot-fellows/runtimes/pull/194

--------- Signed-off-by: Andrei Sandu <[email protected]> Signed-off-by: alindima <[email protected]> Co-authored-by: alindima <[email protected]>
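A minimal sketch of the bit-packing trick on a plain bit vector; the real code operates on the `BackedCandidate` bitfield, so the names and types here are assumptions:
```rust
// Append the assumed core index as 8 extra trailing bits of the backing bitfield...
fn inject_core_index(validator_indices: &mut Vec<bool>, core_index: u8) {
    for bit in 0..8 {
        validator_indices.push((core_index >> bit) & 1 == 1);
    }
}

// ...and strip them off again on read, reconstructing the core index.
fn extract_core_index(validator_indices: &mut Vec<bool>) -> u8 {
    let split = validator_indices.len() - 8;
    let tail = validator_indices.split_off(split);
    tail.iter()
        .enumerate()
        .fold(0u8, |acc, (i, bit)| acc | ((*bit as u8) << i))
}

fn main() {
    let mut bits = vec![true, false, true]; // backing votes
    inject_core_index(&mut bits, 5);
    assert_eq!(extract_core_index(&mut bits), 5);
    assert_eq!(bits, vec![true, false, true]); // original bitfield is intact
}
```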
-
Serban Iorga authored
Related to https://github.com/paritytech/polkadot-sdk/issues/3400 Moving all bridges testing "framework" files under one folder in order to be able to download the entire folder when we want to add tests in other repos. No significant functional changes.
-
Sebastian Kunert authored
# Runtime side for PoV Reclaim

## Implementation Overview
- A host function to fetch the storage proof size has been added to the PVF. It uses the size-tracking recorder that was introduced in my previous PR.
- Mechanisms to use the reclaim host function have been introduced:
  1. A SignedExtension that checks the node-reported proof size before and after application of an extrinsic, and then reclaims the difference.
  2. A manual helper to make reclaiming easier when manual interaction is required, for example in `on_idle` or other hooks.
- In order to utilize the manual reclaiming, I modified `WeightMeter` to support the reduction of consumed weight, at least for storage proof size.

## How to use
To enable the general functionality for a parachain:
1. Add the SignedExtension to your parachain runtime.
2. Provide the host function to the node.
3. Enable proof recording during block import.

## TODO
- [x] PRDoc

--------- Co-authored-by: Dmitry Markin <[email protected]> Co-authored-by: Davide Galassi <[email protected]> Co-authored-by: Bastian Köcher <[email protected]>
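A minimal sketch of the reclaim arithmetic, with an assumed helper standing in for the new host function rather than the real SignedExtension plumbing:
```rust
/// Stand-in for the new host function that reads the node's size-tracking recorder.
fn reported_proof_size() -> u64 {
    42 * 1024
}

/// Amount of proof-size weight to refund: the benchmarked proof size minus what
/// the node actually recorded while the extrinsic was applied.
fn reclaim_proof_size(benchmarked: u64, before: u64, after: u64) -> u64 {
    let actually_used = after.saturating_sub(before);
    benchmarked.saturating_sub(actually_used)
}

fn main() {
    let before = reported_proof_size();
    // ... extrinsic is applied here ...
    let after = before + 10 * 1024;
    println!("reclaim {} bytes of proof size", reclaim_proof_size(64 * 1024, before, after));
}
```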
-
Dino Pačandi authored
## Summary
- Use benchmarked weights instead of hardcoded ones for `pallet-membership`
- Rename the benchmark to match the extrinsic name
- Remove an unnecessary dependency from `clear_prime`

--------- Signed-off-by: Oliver Tale-Yazdi <[email protected]> Co-authored-by: Oliver Tale-Yazdi <[email protected]>
-
PG Herveou authored
Add an `ApiVersion` constant to the pallet-contracts `Config` to communicate to developers the current state of the host functions exposed by the pallet.
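A minimal sketch of the idea; the trait shape and the constant's name and type are simplified assumptions, not the actual pallet-contracts `Config`:
```rust
/// A runtime declares which revision of the contracts host-function API it exposes.
pub trait Config {
    /// Current state of the host functions exposed by the pallet.
    const API_VERSION: u16;
}

struct Runtime;

impl Config for Runtime {
    const API_VERSION: u16 = 1;
}

fn main() {
    println!("contracts host-function API version: {}", <Runtime as Config>::API_VERSION);
}
```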
-
Sebastian Kunert authored
By passing `RUST_LOG=info` to the check command, we will be able to see the exact problem with a given prdoc.

Before:
```
PR #3243 -> ERR
```
After:
```
[2024-02-23T12:53:55Z INFO  prdoclib::commands::check] Checking directory prdoc
[2024-02-23T12:53:55Z INFO  prdoclib::commands::check] Using schema: /Users/sebastian/work/repos/polkadot-sdk/prdoc/schema_user.json
[2024-02-23T12:53:55Z WARN  prdoclib::schema] validation_result: false
[2024-02-23T12:53:55Z WARN  prdoclib::schema] validation_result_strict: false
[2024-02-23T12:53:55Z WARN  prdoclib::schema] errors: [
    Required {
        path: "/title",
    },
]
[2024-02-23T12:53:55Z WARN  prdoclib::schema] missing: []
[2024-02-23T12:53:55Z ERROR prdoclib::commands::check] Loading the schema failed:
[2024-02-23T12:53:55Z ERROR prdoclib::commands::check] ValidationErrors(ValidationState { errors: [Required { path: "/title" }], missing: [], replacement: None, evaluated: {"/doc/0/description", "/crates/0/name", "/doc/0", "/crates", "/crates/0", "", "/doc", "/doc/0/audience"} })
PR #3243 -> ERR
```
-
Ignacio Palacios authored
The `fee` should be calculated with the reanchored asset, otherwise it could lead to a failure where the set-aside fee ends up not being enough. @acatangiu --------- Co-authored-by: Adrian Catangiu <[email protected]>
-
Serban Iorga authored
Related to https://github.com/paritytech/polkadot-sdk/issues/3400 Extracting small parts of https://github.com/paritytech/polkadot-sdk/pull/3429 into a separate PR:
- Add support for BHP local and BHK local
- Increase the timeout for the bridge zombienet tests
-
- Feb 22, 2024
-
-
Bastian Köcher authored
This introduces a check to ensure that the parachain code matches the validation code stored in the relay chain state. If not, it will print a warning. This should mainly be useful for parachain builders to make sure they have set up everything correctly.
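A minimal sketch of such a check, assuming it reduces to comparing the locally built code with the validation code read from relay chain state; the function and its signature are illustrative, not the actual node code:
```rust
/// Compare the parachain's local runtime code with the validation code registered
/// on the relay chain and warn if they differ.
fn warn_if_code_mismatch(local_code: &[u8], registered_validation_code: &[u8]) {
    if local_code != registered_validation_code {
        eprintln!(
            "WARNING: the local parachain code does not match the validation code \
             stored in the relay chain state; check your setup."
        );
    }
}

fn main() {
    warn_if_code_mismatch(b"wasm-v2", b"wasm-v1");
}
```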
-
Adrian Catangiu authored
As part of BEEFY worker/voter initialization, the task waits for certain chain and backend conditions to be fulfilled:
- BEEFY consensus enabled on-chain & GRANDPA best finalized higher than the on-chain BEEFY genesis block,
- backend has synced headers for BEEFY mandatory blocks between best BEEFY and best GRANDPA.

During this waiting time, any messages gossiped on the BEEFY topic for the current chain get enqueued in the gossip engine, leading to RAM bloating and output warning/error messages when the wait time is non-negligible (like during a clean sync). This PR adds logic to pump the gossip engine while waiting for other things, to make sure gossiped messages get consumed (practically discarded until the worker is fully initialized). Also raises the warning threshold for enqueued messages from 10k to 100k, which is in line with the other gossip protocols on the node. Fixes https://github.com/paritytech/polkadot-sdk/issues/3390 --------- Signed-off-by: Adrian Catangiu <[email protected]>
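A minimal sketch of the "pump the gossip engine while waiting" pattern using plain `futures` primitives; the task bodies and names are placeholders, not the actual sc-consensus-beefy types:
```rust
use futures::{future::FutureExt, pin_mut, select};

// Stand-ins for the real waiting logic and the gossip engine driver.
async fn wait_for_beefy_initialization() { /* wait for on-chain genesis, synced headers, ... */ }
async fn drive_gossip_engine() { /* continuously consume (and discard) incoming gossip */ }

async fn initialize_voter() {
    let wait = wait_for_beefy_initialization().fuse();
    let gossip = drive_gossip_engine().fuse();
    pin_mut!(wait, gossip);
    select! {
        // Conditions fulfilled: proceed to run the fully initialized worker.
        _ = wait => println!("initialization conditions met"),
        // The gossip engine terminated: shut the voter down.
        _ = gossip => println!("gossip engine terminated"),
    }
}

fn main() {
    futures::executor::block_on(initialize_voter());
}
```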
-
Koute authored
This PR fixes a subtle bug in `wasm-builder` first introduced in https://github.com/paritytech/polkadot-sdk/pull/1851 (sorry, my bad! I should have caught this during review) where the status code of the `cargo` subprocess is not properly checked, which results in builds silently succeeding when they shouldn't (that is: if we successfully build a runtime blob, and then modify the code so that it won't compile, and recompile it again, then the build will succeed and silently use the *old* blob). cc @athei This is the bug you were seeing. [edit]Also fixes a similar PolkaVM-specific bug where I accidentally used the wrong comparison operator.[/edit]
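The class of bug is generic to spawning subprocesses; a minimal sketch (not the actual wasm-builder code) of checking the exit status so a failed rebuild cannot silently pass:
```rust
use std::process::Command;

/// Spawn `cargo` and propagate a failure instead of ignoring the exit status,
/// which would otherwise leave a stale artifact in place unnoticed.
fn build_runtime() -> Result<(), String> {
    let status = Command::new("cargo")
        .args(["build", "--release"])
        .status()
        .map_err(|e| format!("failed to spawn cargo: {e}"))?;
    if !status.success() {
        return Err(format!("cargo failed with {status}"));
    }
    Ok(())
}

fn main() {
    if let Err(e) = build_runtime() {
        eprintln!("{e}");
    }
}
```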
-
Egor_P authored
This PR adds two more public Matrix channels where the release announcements should go.
-
Andrei Sandu authored
First step in implementing https://github.com/paritytech/polkadot-sdk/issues/3144

### Summary of changes
- switch the statement `Table` candidate mapping from `ParaId` to `CoreIndex`
- introduce an experimental `InjectCoreIndex` node feature
- determine and assume a `CoreIndex` for a candidate based on the statement validator index. If the signature is valid, it means the signer controls the validator at that index, and we can easily map it to a validator group/core.
- introduce a temporary provisioner fix until we fully enable elastic scaling in the subsystem. The fix ensures we don't fetch the same backable candidate when calling `get_backable_candidate` for each core.

TODO:
- [x] fix backing tests
- [x] fix statement table tests
- [x] add new test

--------- Signed-off-by: Andrei Sandu <[email protected]> Signed-off-by: alindima <[email protected]> Co-authored-by: alindima <[email protected]>
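A minimal sketch of the validator-index-to-core mapping described above, with an assumed data layout rather than the real runtime types:
```rust
/// Find the validator group that contains the signer of a statement and use the
/// group's position as the assumed core index.
fn core_index_for_validator(groups: &[Vec<u32>], validator_index: u32) -> Option<u32> {
    groups
        .iter()
        .position(|group| group.contains(&validator_index))
        .map(|group_index| group_index as u32)
}

fn main() {
    // Three validator groups, one per core.
    let groups = vec![vec![0, 1, 2], vec![3, 4, 5], vec![6, 7, 8]];
    assert_eq!(core_index_for_validator(&groups, 4), Some(1));
    assert_eq!(core_index_for_validator(&groups, 9), None);
}
```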
-
Oliver Tale-Yazdi authored
Closes https://github.com/paritytech/polkadot-sdk/issues/2713 --------- Signed-off-by: Oliver Tale-Yazdi <[email protected]> Co-authored-by: André Silva <[email protected]>
-
- Feb 21, 2024
-
-
Matteo Muraca authored
part of #3326 @ggwpez @Kianenigma @shawntabrizi --------- Signed-off-by: Matteo Muraca <[email protected]> Co-authored-by: Kian Paimani <[email protected]>
-
tmpolaczyk authored
In Tanssi, we need a way to stop the collator code and then start it again. This is to support rotating the same collator between different runtimes. Currently, this works very well, except for the proposer metrics, because they only get registered the first time they are started. Afterwards, we see this warning log: > Failed to register proposer prometheus metrics: Duplicate metrics collector registration attempted ~~So this PR adds a method to set metrics, to allow us to register metrics manually before creating the `ProposerFactory`, and then clone the same metrics every time we need to start the collator.~~ Implemented Clone instead
-
Alexandru Vasile authored
This PR backports the changes from the rpc-v2 spec: https://github.com/paritytech/json-rpc-interface-spec/pull/134
The `Broadcasted` event has been removed:
- it is hard to enforce a `Dropped { broadcasted: bool }` event in cases of a load-balancer being placed in front of an RPC server
- when the server exits, it is impossible to guarantee this field if the server did not previously send a `Broadcasted` event
- the number of peers reported by this event does not guarantee that peers are unique; the same peer can disconnect and reconnect, increasing this metric number
- the number of peers that receive this transaction offers no guarantee about the transaction being included in the chain at a later time

cc @paritytech/subxt-team --------- Signed-off-by: Alexandru Vasile <[email protected]> Co-authored-by: James Wilson <[email protected]>
-
Dastan authored
closes #2844
- adds `list-pallets` option which prints all unique available pallets for benchmarking
```bash
./target/release/node benchmark pallet --list=pallets
```
- adds `all` option which runs benchmarks for all available pallets and extrinsics (equivalent to `--pallet * --extrinsic *`)
```bash
./target/release/node benchmark pallet --all
```
- use the `list=pallets` syntax in `run_all_benchmarks.sh` script

cc ggwpez --------- Co-authored-by: Bastian Köcher <[email protected]>
-
Clara van Staden authored
- Adds a test to check the correct digest for Snowbridge outbound messages. For the correct digest to be in the block, the MessageQueue pallet should be configured after the EthereumOutbound queue pallet. The added test fails if the EthereumOutbound queue is configured after the MessageQueue pallet.
- Adds a helper method `run_to_block_with_finalize` to simulate block finalization. The existing `run_to_block` method does not finalize, and so it cannot successfully test this condition.

Closes: https://github.com/paritytech/polkadot-sdk/issues/3208 --------- Co-authored-by: claravanstaden <Cats 4 life!>
-
Egor_P authored
This PR should fix the issue with the failing GHA which sends notifications to the different Matrix channels when a new release is published.
-
Adrian Catangiu authored
fixes https://github.com/paritytech/polkadot-sdk/issues/3407
-
Alexander Theißen authored
This PR fixes a bug in the sync mechanism between wasmi and pallet-contracts. This bug led to essentially double charging all the gas that was used during the execution of a host function. When the `call` host function is used for recursion, this leads to a quadratic amount of gas consumption with respect to the nesting depth. We also took the chance to refactor the code in question and improve the rust docs. The bug was caused by not updating `GasMeter::executor_consumed` (previously `engine_consumed`) when leaving the host function. This led to the value being stale (too low) when entering another host function. --------- Co-authored-by: PG Herveou <[email protected]>
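A minimal sketch of the bookkeeping fix with simplified types, not the actual `GasMeter`:
```rust
struct GasMeter {
    executor_consumed: u64, // engine fuel already accounted for
    charged: u64,
}

impl GasMeter {
    /// Called when entering/leaving a host function with the engine's total fuel usage.
    fn sync_from_executor(&mut self, engine_total_consumed: u64) {
        let delta = engine_total_consumed - self.executor_consumed;
        self.charged += delta;
        // The bug: forgetting this update left `executor_consumed` stale (too low),
        // so the same fuel was charged again on the next sync (double charging).
        self.executor_consumed = engine_total_consumed;
    }
}

fn main() {
    let mut meter = GasMeter { executor_consumed: 0, charged: 0 };
    meter.sync_from_executor(100);
    meter.sync_from_executor(150);
    assert_eq!(meter.charged, 150); // not 250, which the stale value would have produced
}
```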
-
- Feb 20, 2024
-
-
girazoki authored
I don't think this bound is required by anything else, and it now allows this struct to be used with runtimes that have 20-byte accounts. --------- Co-authored-by: Bastian Köcher <[email protected]>
-
Oliver Tale-Yazdi authored
Did not realize that https://github.com/paritytech/polkadot-sdk/pull/3370 was indeed breaking. cc @muraca --------- Signed-off-by: Oliver Tale-Yazdi <[email protected]>
-