- Feb 13, 2024
-
-
Bastian Köcher authored
We need to bump `ahash` to make it compile again. Closes: https://github.com/paritytech/polkadot-sdk/issues/3269
-
Alexander Samusev authored
The PR removes `pull_request_target` from the gitspiegel trigger because it breaks the logic. With `pull_request_target` the action runs in any case, even for first-time contributors. cc @mutantcornholio
-
Bruno Galvao authored
Refactor in accordance with https://github.com/paritytech/polkadot-sdk/issues/2245#issuecomment-1937025951 Prior to this PR, the `remote_tests` test module would use either `TEST_WS` or `DEFAULT_HTTP_ENDPOINT`. With the PR, `TEST_WS` is the default for the `remote_tests` test module and the fallback is `DEFAULT_HTTP_ENDPOINT`. The only downside I see to this PR is that a particular test in the `remote_tests` module might want to use a different HTTP endpoint; in that case, the endpoint would have to be hardcoded manually for that test. Note: the `TEST_WS` node should fulfill the requirements of all test cases, e.g. include child tries. Give it a _try_: ``` TEST_WS=wss://rococo-try-runtime-node.parity-chains.parity.io:443 cargo test --features=remote-test -p frame-remote-externalities -- --nocapture ``` --------- Co-authored-by: Oliver Tale-Yazdi <[email protected]>
-
- Feb 12, 2024
-
-
Javier Viola authored
Change `parityDb` test assertions after a quick check with @alexggh in order to resolve failures in `zombienet-polkadot-misc-0001-parachains-paritydb` (e.g. https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/5186277). Thx!
-
Radha authored
This error suggests using either the `unvote` or `reap_vote` calls, which are unavailable in the pallet. The only available call for this is `remove_vote`. EDIT: Please ignore my earlier write-up. I was able to delegate with conviction after calling `remove_vote` on all decided proposals. --------- Co-authored-by: command-bot <>
-
Serban Iorga authored
Related to https://github.com/paritytech/polkadot-sdk/issues/3176 This PR only adds the first bridge zombienet test back to the CI after fixing it, reverting https://github.com/paritytech/polkadot-sdk/pull/3071. Credits to @svyatonik for building all the CI infrastructure around this.
-
Alexandru Vasile authored
This PR implements the [transaction_unstable_broadcast](https://github.com/paritytech/json-rpc-interface-spec/blob/main/src/api/transaction_unstable_broadcast.md) and [transaction_unstable_stop](https://github.com/paritytech/json-rpc-interface-spec/blob/main/src/api/transaction_unstable_stop.md) methods. The [transaction_unstable_broadcast](https://github.com/paritytech/json-rpc-interface-spec/blob/main/src/api/transaction_unstable_broadcast.md) submits the provided transaction at the best block of the chain. If the transaction is dropped or declared invalid, the API tries to resubmit the transaction at the next available best block. ### Broadcasting The broadcasting operation continues until either: - the user calls `transaction_unstable_stop` with the operation ID that identifies the broadcasting operation - the transaction state is one of the following: - Finalized: the transaction is part of the chain - FinalityTimeout: we have waited for 256 finalized blocks and timed out - Usurped: the transaction has been replaced in the tx pool The broadcasting operation retries submitting the transaction when the transaction state is: - Invalid: the transaction might become valid at a later time - Dropped: the transaction pool's capacity is full at the moment, but might clear when other transactions are finalized/dropped ### Stopping The `transaction_unstable_broadcast` spawns an abortable future and tracks the abort handler. When [transaction_unstable_stop](https://github.com/paritytech/json-rpc-interface-spec/blob/main/src/api/transaction_unstable_stop.md) is called with a valid operation ID, the abort handler of the corresponding `transaction_unstable_broadcast` future is called. This behavior ensures the broadcast future finishes on the next poll. When `transaction_unstable_stop` is called with an invalid operation ID, a JSON-RPC-specific error object signaling the invalid operation ID is returned. ### Testing This PR adds a testing harness for the transaction API and validates two basic scenarios: - the transaction enters and exits the transaction pool - transaction stop returns appropriate values when called with valid and invalid operation IDs Closes: https://github.com/paritytech/polkadot-sdk/issues/3039 Note that the API should be enabled after: https://github.com/paritytech/polkadot-sdk/issues/3084. cc @paritytech/subxt-team --------- Signed-off-by: Alexandru Vasile <[email protected]> Co-authored-by: Sebastian Kunert <[email protected]>
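For illustration, a rough sketch of how a client might drive these two methods over WebSocket, assuming a `jsonrpsee` client; the endpoint, the placeholder extrinsic bytes and the result shapes follow the linked spec rather than this PR's code:
```rust
use jsonrpsee::{core::client::ClientT, rpc_params, ws_client::WsClientBuilder};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = WsClientBuilder::default().build("ws://127.0.0.1:9944").await?;

    // Submit a hex-encoded, SCALE-encoded extrinsic ("0xdeadbeef" is a placeholder).
    // Per the spec, this returns an opaque operation ID, or null if the node
    // cannot take on more broadcasting operations.
    let operation_id: Option<String> = client
        .request("transaction_unstable_broadcast", rpc_params!["0xdeadbeef"])
        .await?;

    if let Some(id) = operation_id {
        // Abort the broadcasting operation identified by the returned ID.
        let _: () = client.request("transaction_unstable_stop", rpc_params![id]).await?;
    }
    Ok(())
}
```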
-
Dónal Murray authored
Leases can be force set, but since Leases is a StorageValue, if a lease misses its sale rotation in which it should expire, it can never be cleared. This can happen if a lease is added with an until timeslice that lies in a region whose sale has already started or has passed, even if the timeslice itself hasn't passed. Trappist is currently trapped in a lease that will never end, so this will remove it at the next sale rotation. A fix was introduced in https://github.com/paritytech/polkadot-sdk/pull/3213 but this missed the 1.7 release. This PR bumps the `coretime-rococo` version to get these changes on Rococo.
-
Oliver Tale-Yazdi authored
Changes (partial https://github.com/paritytech/polkadot-sdk/issues/994): - Set `log` to `0.4.20` everywhere - Lift `log` to the workspace Starting with a simpler one after seeing https://github.com/paritytech/polkadot-sdk/pull/2065 from @jsdw. This sets `default-features` to `false` in the root and then overwrites that in each crate to its original value. This is necessary since otherwise the `default` features are additive and it's impossible to disable them in the crate again once they are enabled in the workspace. I am using a tool to do this, so it's mostly a test to see that it works as expected. --------- Signed-off-by: Oliver Tale-Yazdi <[email protected]>
-
Alexandru Gheorghe authored
On grid distribution, messages have two paths of reaching a node, so there is the possibility of a race when two peers send each other the same statement around the same time. The statement's local_knowledge will tell us that the peer should not have sent the statement because we already sent it to them. Fix it by also keeping track of the statements we received from a given peer and penalizing the peer only if it sends us the same statement more than once. Fixes: https://github.com/paritytech/polkadot-sdk/issues/2346 Additionally, use different Cost labels for different paths to make it easier to debug things. --------- Signed-off-by: Alexandru Gheorghe <[email protected]>
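A minimal, self-contained sketch of that bookkeeping idea (illustrative only, with stand-in peer and statement identifiers rather than the real statement-distribution types):
```rust
use std::collections::{HashMap, HashSet};

/// Per-peer record of statements received over the grid, so a peer is punished
/// only when it sends us the *same* statement more than once.
#[derive(Default)]
struct PeerKnowledge {
    received: HashSet<u64>, // fingerprints of statements received from this peer
}

#[derive(Default)]
struct GridTracker {
    peers: HashMap<u32, PeerKnowledge>, // hypothetical peer-id -> knowledge
}

impl GridTracker {
    /// Returns true if this exact statement was already received from `peer`,
    /// i.e. a duplicate cost should now be applied.
    fn note_received(&mut self, peer: u32, statement: u64) -> bool {
        !self.peers.entry(peer).or_default().received.insert(statement)
    }
}

fn main() {
    let mut tracker = GridTracker::default();
    assert!(!tracker.note_received(1, 42)); // first time: fine (may be a benign race)
    assert!(tracker.note_received(1, 42)); // repeat from the same peer: penalize
}
```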
-
Alexander Samusev authored
The PR adds a condition to ignore the master branch for the prdoc and labels GHAs. This option doesn't work because all PRs target master, thus the actions won't start: ```yml on: pull_request: branches-ignore: - master ``` This option doesn't work because the actions don't see the PR number and [break](https://github.com/paritytech/polkadot-sdk/actions/runs/7827272667/job/21354764953): ```yml on: push: branches-ignore: - master ``` cc https://github.com/paritytech/ci_cd/issues/940 cc https://github.com/paritytech/polkadot-sdk/issues/3240
-
Serban Iorga authored
Related to https://github.com/paritytech/polkadot-sdk/issues/3242 Reorganizing the bridge zombienet tests in order to: - separate the environment spawning from the actual tests - offer better control over the tests and some possibility to orchestrate them as opposed to running everything from the zndsl file Only rewrote the asset transfer test using this new "framework". The old logic and old tests weren't functionally modified or deleted. The plan is to get feedback on this approach first and if this is agreed upon, migrate the other 2 tests later in separate PRs and also do other improvements later.
-
Alexandru Vasile authored
This PR improves the transaction status documentation. - Added doc references describing the main states - Extra comment wrt the pool's ready / future queues - `FinalityTimeout` no longer describes a lagging finality gadget; it signals that the maximum number of finality watchers has been reached A few helper methods are added to indicate when: - a final event is generated by the transaction pool for a given event - a final event is provided, although the transaction might become valid at a later time and could be re-submitted The helper methods are taken from https://github.com/paritytech/polkadot-sdk/pull/3079 and used here to help us better keep the two in sync. cc @paritytech/subxt-team --------- Signed-off-by: Alexandru Vasile <[email protected]>
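In that spirit, a hedged sketch of what such helpers could look like (the actual method names and placement in the PR may differ); `Invalid` and `Dropped` are the final events after which a re-submission can still make sense:
```rust
use sc_transaction_pool_api::TransactionStatus;

/// Illustrative only: does this event end the status stream for the transaction?
fn is_final<H, BH>(status: &TransactionStatus<H, BH>) -> bool {
    matches!(
        status,
        TransactionStatus::Usurped(_)
            | TransactionStatus::Finalized(_)
            | TransactionStatus::FinalityTimeout(_)
            | TransactionStatus::Invalid
            | TransactionStatus::Dropped
    )
}

/// Illustrative only: final events after which the transaction might become valid
/// again later and could be re-submitted.
fn is_retriable<H, BH>(status: &TransactionStatus<H, BH>) -> bool {
    matches!(status, TransactionStatus::Invalid | TransactionStatus::Dropped)
}

fn main() {
    assert!(is_final::<u32, u32>(&TransactionStatus::Dropped));
    assert!(is_retriable::<u32, u32>(&TransactionStatus::Dropped));
}
```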
-
Andrei Eres authored
-
- Feb 11, 2024
-
-
maksimryndin authored
Resolves https://github.com/paritytech/polkadot-sdk/issues/2321 - [x] refactor the `security` module into a conditionally compiled one - [x] rename `amd64` to `x86-64` for consistency with conditional compilation guards and remove the reference to a particular vendor - [x] run unit tests and zombienet --------- Co-authored-by: s0me0ne-unkn0wn <[email protected]>
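As a rough illustration of the conditional-compilation shape this describes (module contents and the exact guard set are hypothetical, not the real PVF worker code):
```rust
// Only compile the security machinery where it can actually work.
#[cfg(all(target_os = "linux", target_arch = "x86_64"))]
mod security {
    /// Placeholder for the platform-specific hardening checks.
    pub fn check() -> bool {
        true
    }
}

// On every other target the module degrades to a stub.
#[cfg(not(all(target_os = "linux", target_arch = "x86_64")))]
mod security {
    /// The checks are skipped entirely on unsupported targets.
    pub fn check() -> bool {
        false
    }
}

fn main() {
    println!("secure execution possible: {}", security::check());
}
```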
-
- Feb 09, 2024
-
-
Eugen Snitko authored
Add [forklift caching](https://gitlab.parity.io/parity/infrastructure/ci_cd/forklift/forklift) to the remaining jobs driven by .sh and .py scripts: - cargo-check-each-crate x6 (`.gitlab/check-each-crate.py`) - build-linux-stable (`polkadot/scripts/build-only-wasm.sh`) and, via before_script: - build-linux-substrate - build-subkey-linux (with the `.build-subkey` job) - cargo-check-benches x2 **To disable the feature, set the FORKLIFT_BYPASS variable to true in the [project settings in GitLab](https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/settings/ci_cd)** (forklift now handles FORKLIFT_BYPASS by itself)
-
Egor_P authored
This PR backports version bumps from `1.7.0` release branch and moves related prdoc files to the appropriate folder.
-
- Feb 08, 2024
-
-
Oliver Tale-Yazdi authored
Closes #169 Fork of the `orml-parameters-pallet` as introduced by https://github.com/open-web3-stack/open-runtime-module-library/pull/927 (cc @xlc) It greatly changes how the macros work, but keeps the pallet the same. The downside of my code is that it now only supports constant keys in the form of types, not value-bearing keys. I think this is an acceptable trade off, given that it can be used by *any* pallet without any changes. The pallet allows dynamically setting parameters that can be used in pallet configs while also restricting the updating on a per-key basis. The rust-docs contain a complete example. Changes: - Add `parameters-pallet` - Use it in the kitchensink as a demonstration - Add an experimental attribute to define dynamic params in the runtime. - Add a bunch of traits to `frame_support::traits::dynamic_params` that can be re-used by the ORML macros ## Example First, define the parameters in the runtime file. The syntax is very explicit about the codec index and errors if there is none. ```rust #[dynamic_params(RuntimeParameters, pallet_parameters::Parameters::<Runtime>)] pub mod dynamic_params { use super::*; #[dynamic_pallet_params] #[codec(index = 0)] pub mod storage { /// Configures the base deposit of storing some data. #[codec(index = 0)] pub static BaseDeposit: Balance = 1 * DOLLARS; /// Configures the per-byte deposit of storing some data. #[codec(index = 1)] pub static ByteDeposit: Balance = 1 * CENTS; } #[dynamic_pallet_params] #[codec(index = 1)] pub mod contracts { #[codec(index = 0)] pub static DepositPerItem: Balance = deposit(1, 0); #[codec(index = 1)] pub static DepositPerByte: Balance = deposit(0, 1); } } ``` Then the pallet is configured with the aggregate: ```rust impl pallet_parameters::Config for Runtime { type AggregratedKeyValue = RuntimeParameters; type AdminOrigin = EnsureRootWithSuccess<AccountId, ConstBool<true>>; ... } ``` And then the parameters can be used in a pallet config: ```rust impl pallet_preimage::Config for Runtime { type DepositBase = dynamic_params::storage::BaseDeposit; } ``` A custom origin can be defined like this: ```rust pub struct DynamicParametersManagerOrigin; impl EnsureOriginWithArg<RuntimeOrigin, RuntimeParametersKey> for DynamicParametersManagerOrigin { type Success = (); fn try_origin( origin: RuntimeOrigin, key: &RuntimeParametersKey, ) -> Result<Self::Success, RuntimeOrigin> { match key { RuntimeParametersKey::Storage(_) => { frame_system::ensure_root(origin.clone()).map_err(|_| origin)?; return Ok(()) }, RuntimeParametersKey::Contract(_) => { frame_system::ensure_root(origin.clone()).map_err(|_| origin)?; return Ok(()) }, } } #[cfg(feature = "runtime-benchmarks")] fn try_successful_origin(_key: &RuntimeParametersKey) -> Result<RuntimeOrigin, ()> { Ok(RuntimeOrigin::Root) } } ``` --------- Signed-off-by: Oliver Tale-Yazdi <[email protected]> Co-authored-by: Nikhil Gupta <[email protected]> Co-authored-by: Kian Paimani <[email protected]> Co-authored-by: command-bot <>
-
Gonçalo Pestana authored
The `TotalValueLocked` storage value in the nomination pools pallet may get out of sync if the staking pallet does an implicit withdrawal of unlocking chunks belonging to a bonded pool stash. This fix is based on a new method in the `OnStakingUpdate` trait, `on_withdraw`, which allows the nomination pools pallet to adjust `TotalValueLocked` every time there is an implicit or explicit withdrawal from a bonded pool's stash (see the sketch below). This PR also adds a migration that checks and updates the on-chain TVL if it got out of sync due to the bug this PR fixes. **Changes to `trait OnStakingUpdate`** In order for staking to notify the nomination pools pallet that chunks were withdrawn, we add a new method, `on_withdraw`, to the `OnStakingUpdate` trait. The nomination pools pallet filters the withdrawals related to bonded pool accounts and updates `TotalValueLocked` accordingly. **Others** - Adds try-state checks to the EPM/staking e2e tests - Adds tests for auto withdrawing in the context of nomination pools **To-do** - [x] check if we need a migration to fix the current `TotalValueLocked` (run try-runtime) - [x] migrations to fix the current on-chain TVL value ✅ **Kusama**: ``` TotalValueLocked: 99.4559 kKSM TotalValueLocked (calculated) 99.4559 kKSM ``` ⚠️ **Westend**: ``` TotalValueLocked: 18.4060 kWND TotalValueLocked (calculated) 18.4050 kWND ``` **Polkadot**: TVL not released yet. Closes https://github.com/paritytech/polkadot-sdk/issues/3055 --------- Co-authored-by: command-bot <> Co-authored-by: Ross Bulat <[email protected]> Co-authored-by: Dónal Murray <[email protected]>
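A rough sketch of the hook's shape under these assumptions (the real trait is `OnStakingUpdate` in `sp-staking`; the exact signature and the pools-side logic are simplified here):
```rust
/// Illustrative stand-in for the staking-side callback trait.
trait OnStakingUpdate<AccountId, Balance> {
    /// Called whenever `amount` is withdrawn from `stash`, explicitly or implicitly.
    fn on_withdraw(_stash: &AccountId, _amount: Balance) {}
}

struct Pools;

impl OnStakingUpdate<u64, u128> for Pools {
    fn on_withdraw(stash: &u64, amount: u128) {
        // The nomination-pools side would filter for bonded-pool accounts and
        // decrement `TotalValueLocked` accordingly; here we just log the event.
        println!("withdrawn {amount} from {stash}");
    }
}

fn main() {
    Pools::on_withdraw(&42, 1_000);
}
```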
-
Oliver Tale-Yazdi authored
Preparation for https://github.com/paritytech/polkadot-sdk/issues/2664 Changes: - Only require `Hash` instead of `Block` for the benchmarking - Refactor DB types to do the same ## Integration This breaking change can easily be integrated into your node via: ```patch - cmd.run::<Block, ()>(config) + cmd.run::<HashingFor<Block>, ()>(config) ``` Status: waiting for CI checks --------- Signed-off-by: Oliver Tale-Yazdi <[email protected]> Co-authored-by: Bastian Köcher <[email protected]> Co-authored-by: cheme <[email protected]>
-
Radha authored
Co-authored-by: Liam Aharon <[email protected]> Co-authored-by: Bastian Köcher <[email protected]>
-
drskalman authored
This is a significant step toward making the BEEFY client able to handle both ECDSA and (ECDSA, BLS) type signatures. The idea is that having the BEEFY client generic over crypto types makes migration to new types smoother. This makes the BEEFY Keystore generic over AuthorityId and extends its tests to cover the case when the AuthorityId is of type (ECDSA, BLS12-377). --------- Co-authored-by: Davide Galassi <[email protected]> Co-authored-by: Robert Hambrock <[email protected]>
-
PG Herveou authored
Adding Rust metadata for docs.rs; see https://docs.rs/about/metadata --------- Co-authored-by: Alexander Theißen <[email protected]>
-
Alexander Theißen authored
When switching from the instrumented gas metering to the wasmi gas metering we also removed all imposed limits regarding Wasm module internals. All those things do not interact with the host and have to be handled by wasmi. For example, wasmi charges additional gas for the parameters of each function because they incur some overhead. Back then we took the opportunity to remove the dependency on the deprecated `parity-wasm`, which was used to enforce those limits. This PR merely removes them from the `Schedule`; they haven't been enforced for a while.
-
Alexander Theißen authored
Those were used for some ad-hoc comparison of solang vs ink! with regards to ERC20 transfers. They haven't been used for a while. Benchmarking is done here now: [smart-bench](https://github.com/paritytech/smart-bench): weight-based benchmark to test how many transactions actually fit into a block with the current Weights [schlau](https://github.com/ascjones/schlau): time-based benchmarks to compare performance
-
Alexander Theißen authored
When doing a cross contract call you can supply an optional Weight limit for that call. If one doesn't specify the limit (setting it to 0), the sub call will have all the remaining gas available. If one does specify the limit, we subtract that amount eagerly from the Weight meter and fail fast if not enough `Weight` is available. This is quite annoying because setting a fixed limit will set the `gas_required` in the gas estimation according to the specified limit, even if in that dry-run the actual call didn't consume that whole amount. It effectively discards the more precise measurement it should have from the dry-run. This PR changes the behaviour so that the supplied limit is an actual limit: we do the cross contract call even if the limit is higher than the remaining `Weight`. We then fail and roll back in the sub call in case there is not enough weight. This makes the weight estimation in the dry-run no longer dependent on the weight limit supplied when doing a cross contract call. --------- Co-authored-by: PG Herveou <[email protected]>
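To illustrate the new semantics, a toy sketch with simplified numbers and types (not the actual `pallet-contracts` internals): the supplied limit only caps the sub call, and only the weight actually consumed is charged.
```rust
#[derive(Clone, Copy)]
struct WeightMeter {
    remaining: u64,
}

impl WeightMeter {
    /// The limit only caps the sub call; it is not charged to the caller up front.
    fn run_sub_call(&mut self, limit: u64, actually_used: u64) -> Result<(), &'static str> {
        let cap = limit.min(self.remaining);
        if actually_used > cap {
            // Only the sub call fails and is rolled back; the caller keeps going.
            return Err("sub-call ran out of weight");
        }
        self.remaining -= actually_used;
        Ok(())
    }
}

fn main() {
    let mut meter = WeightMeter { remaining: 100 };
    // A generous limit no longer inflates the estimate; only real usage is charged.
    assert!(meter.run_sub_call(1_000, 40).is_ok());
    assert_eq!(meter.remaining, 60);
}
```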
-
dharjeezy authored
Part of: paritytech/polkadot-sdk#239 Polkadot address: 12GyGD3QhT4i2JJpNzvMf96sxxBLWymz4RdGCxRH5Rj5agKW --------- Co-authored-by: Liam Aharon <[email protected]>
-
Andrei Eres authored
This PR removes the configuration of subsystem benchmarks via CLI arguments. After this, we keep configurations only in yaml files. It removes unnecessary code duplication.
-
Louis Merlin authored
This adds `try_state()` and `integrity_test()` to the four runtimes of the XCM-simulator fuzzer. With this, we are able to stress-test [message-queue's try_state](https://github.com/paritytech/polkadot-sdk/blob/7df1ae3b/substrate/frame/message-queue/src/lib.rs#L1245-L1347). This also adds the `Transact` block-listing from #2424 to avoid false-positives. Thank you @ggwpez for the help with the runtime configurations.
-
Dónal Murray authored
Leases can be force set, but since `Leases` is a `StorageValue`, if a lease misses its sale rotation in which it should expire, it can never be cleared. This can happen if a lease is added with an `until` timeslice that lies in a region whose sale has already started or has passed, even if the timeslice itself hasn't passed. This solves that issue in a minimal way, with all expired leases being cleaned up in each sale rotation, not just the ones that are expiring in the coming region. TODO: - [x] Write test
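For illustration, a minimal sketch of the cleanup idea using simplified, stand-in types (not the actual broker pallet storage): on each sale rotation, drop every lease whose `until` timeslice has already passed.
```rust
struct Lease {
    task: u32,
    until: u32, // timeslice at which the lease expires
}

/// Remove every lease that has already expired, regardless of which region it falls in.
fn drop_expired_leases(leases: &mut Vec<Lease>, now: u32) {
    leases.retain(|lease| lease.until > now);
}

fn main() {
    let mut leases = vec![Lease { task: 1, until: 10 }, Lease { task: 2, until: 30 }];
    drop_expired_leases(&mut leases, 20);
    assert_eq!(leases.len(), 1);
    assert_eq!(leases[0].task, 2);
}
```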
-
Alexander Samusev authored
In order to make the action `Required`, it should always run. cc @ggwpez
-
- Feb 06, 2024
-
-
Andrei Eres authored
1. Benchmark results are collected in a single struct. 2. The output of the results is prettified. 3. The result struct is used to save the output as YAML and store it in artifacts in a CI job. ``` $ cargo run -p polkadot-subsystem-bench --release -- test-sequence --path polkadot/node/subsystem-bench/examples/availability_read.yaml | tee output.txt $ cat output.txt polkadot/node/subsystem-bench/examples/availability_read.yaml #1 Network usage, KiB total per block Received from peers 510796.000 170265.333 Sent to peers 221.000 73.667 CPU usage, s total per block availability-recovery 38.671 12.890 Test environment 0.255 0.085 polkadot/node/subsystem-bench/examples/availability_read.yaml #2 Network usage, KiB total per block Received from peers 413633.000 137877.667 Sent to peers 353.000 117.667 CPU usage, s total per block availability-recovery 52.630 17.543 Test environment 0.271 0.090 polkadot/node/subsystem-bench/examples/availability_read.yaml #3 Network usage, KiB total per block Received from peers 424379.000 141459.667 Sent to peers 703.000 234.333 CPU usage, s total per block availability-recovery 51.128 17.043 Test environment 0.502 0.167 ``` ``` $ cargo run -p polkadot-subsystem-bench --release -- --ci test-sequence --path polkadot/node/subsystem-bench/examples/availability_read.yaml | tee output.txt $ cat output.txt - benchmark_name: 'polkadot/node/subsystem-bench/examples/availability_read.yaml #1' network: - resource: Received from peers total: 509011.0 per_block: 169670.33333333334 - resource: Sent to peers total: 220.0 per_block: 73.33333333333333 cpu: - resource: availability-recovery total: 31.845848445 per_block: 10.615282815 - resource: Test environment total: 0.23582828799999941 per_block: 0.07860942933333313 - benchmark_name: 'polkadot/node/subsystem-bench/examples/availability_read.yaml #2' network: - resource: Received from peers total: 411738.0 per_block: 137246.0 - resource: Sent to peers total: 351.0 per_block: 117.0 cpu: - resource: availability-recovery total: 18.93596025099999 per_block: 6.31198675033333 - resource: Test environment total: 0.2541994199999979 per_block: 0.0847331399999993 - benchmark_name: 'polkadot/node/subsystem-bench/examples/availability_read.yaml #3' network: - resource: Received from peers total: 424548.0 per_block: 141516.0 - resource: Sent to peers total: 703.0 per_block: 234.33333333333334 cpu: - resource: availability-recovery total: 16.54178526900001 per_block: 5.513928423000003 - resource: Test environment total: 0.43960946299999537 per_block: 0.14653648766666513 ``` --------- Co-authored-by: Andrei Sandu <[email protected]>
-
Branislav Kontur authored
Relates to: https://github.com/paritytech/polkadot-sdk/issues/3214 ## TODO - [ ] backport to the `1.7.0` release
-
Koute authored
This PR improves compatibility with RISC-V and PolkaVM, allowing more runtimes to successfully compile. In particular, it makes the following changes: - The `sp-mmr-primitives` and `sp-consensus-beefy` crates unconditionally required an `std`-only dependency; now they only require those dependencies when the `std` feature is actually enabled. (Our RISC-V target is, unlike WASM, a true `no_std` target where you can't accidentally use stuff from `std` anymore.) - One of our dependencies (the `bitvec` crate) uses a crate called `radium` which doesn't compile under RISC-V due to incomplete autodetection logic in their `build.rs` file. The good news is that this is already fixed in the newest upstream version of `radium`, and the newest version of `bitvec` uses it. The bad news is that the newest version of `bitvec` is not currently released on crates.io, so we can't use it. I've [created an issue](https://github.com/ferrilab/ferrilab/issues/5) asking for a new release, but in the meantime I forked the currently used `radium` 0.7, [fixed the faulty logic](https://github.com/paritytech/radium-0.7-fork/commit/ed66c8a294b138c67f93499644051d97d4c7fbda) and used cargo's patching capabilities to use it for the RISC-V runtime builds. This might be a little hacky, but it is the least intrusive way to fix the problem, doesn't affect WASM builds at all, and we can trivially remove it once a new `bitvec` is released. - The new runtimes are added to the CI to make sure their compilation doesn't break.
-
Svyatoslav Nikolsky authored
backport of https://github.com/paritytech/parity-bridges-common/pull/2821 (see detailed description there)
-
Squirrel authored
First in a series of PRs that reduce our use of sp-std with a view to deprecating it. This one just looks at /substrate and moves some of the references from `sp-std` to `core`. These particular changes should be uncontroversial. Where macros are used, `::core` should be used to remove any ambiguity. Part of https://github.com/paritytech/polkadot-sdk/issues/2101
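A minimal before/after sketch of the kind of mechanical change involved (the wrapper type here is hypothetical):
```rust
// Before (sp-std re-export):
// use sp_std::marker::PhantomData;
//
// After (the `core` original; inside macros, `::core` avoids any ambiguity):
use core::marker::PhantomData;

struct Marker<T>(PhantomData<T>);

fn main() {
    let _marker: Marker<u32> = Marker(PhantomData);
}
```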
-
Oliver Tale-Yazdi authored
Supersedes https://github.com/paritytech/polkadot-sdk/pull/1245 This PR is a migration of https://github.com/paritytech/substrate/pull/14577. The PR added associated types (`AddOrigin` & `RemoveOrigin`) to `Config`. It allows you to decouple types and areas of responsibility, since at the moment the same types are responsible for adding and promoting (removing and demoting). This will improve the flexibility of the pallet configuration. ``` /// The origin required to add a member. type AddOrigin: EnsureOrigin<Self::RuntimeOrigin, Success = ()>; /// The origin required to remove a member. The success value indicates the /// maximum rank *from which* the removal may be. type RemoveOrigin: EnsureOrigin<Self::RuntimeOrigin, Success = Rank>; ``` To achieve backward compatibility, users of the pallet can use the old type via the new morph: ``` type AddOrigin = MapSuccess<Self::PromoteOrigin, Ignore>; type RemoveOrigin = Self::DemoteOrigin; ``` --------- Signed-off-by: Oliver Tale-Yazdi <[email protected]> Co-authored-by: PraetorP <[email protected]> Co-authored-by: Pavel Orlov <[email protected]>
-
Alin Dima authored
Fixes https://github.com/paritytech/polkadot-sdk/issues/3129
-
- Feb 05, 2024
-
-
dependabot[bot] authored
Bumps [indicatif](https://github.com/console-rs/indicatif) from 0.17.6 to 0.17.7. Commits are viewable in the [compare view](https://github.com/console-rs/indicatif/compare/0.17.6...0.17.7). [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=indicatif&package-manager=cargo&previous-version=0.17.6&new-version=0.17.7)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
-
Alexandru Gheorghe authored
## Summary Built on top of the tooling and ideas introduced in https://github.com/paritytech/polkadot-sdk/pull/2528, this PR introduces a synthetic benchmark for measuring and assessing the performance characteristics of the approval-voting and approval-distribution subsystems. Currently this allows us to simulate the behaviour of these subsystems based on the following dimensions: ``` TestConfiguration: # Test 1 - objective: !ApprovalsTest last_considered_tranche: 89 min_coalesce: 1 max_coalesce: 6 enable_assignments_v2: true send_till_tranche: 60 stop_when_approved: false coalesce_tranche_diff: 12 workdir_prefix: "/tmp" num_no_shows_per_candidate: 0 approval_distribution_expected_tof: 6.0 approval_distribution_cpu_ms: 3.0 approval_voting_cpu_ms: 4.30 n_validators: 500 n_cores: 100 n_included_candidates: 100 min_pov_size: 1120 max_pov_size: 5120 peer_bandwidth: 524288000000 bandwidth: 524288000000 latency: min_latency: secs: 0 nanos: 1000000 max_latency: secs: 0 nanos: 100000000 error: 0 num_blocks: 10 ``` ## The approach 1. We build a real overseer with the real implementations of the approval-voting and approval-distribution subsystems. 2. For a given network size, for each validator we pre-compute all potential assignments and approvals it would send; because this is a computation-heavy operation, the result is cached in a file on disk and re-used if the generation parameters don't change. 3. The messages are sent according to the configured parameters, and those are split into 3 main benchmarking scenarios. ## Benchmarking scenarios ### Best case scenario *approvals_throughput_best_case.yaml* It sends to the approval-distribution only the minimum required tranches to gather the needed_approvals, so that a candidate is approved. ### Behaviour in the presence of no-shows *approvals_no_shows.yaml* It sends the tranches needed to approve a candidate when we have a maximum of *num_no_shows_per_candidate* tranches with no-shows for each candidate. ### Maximum throughput *approvals_throughput.yaml* It sends all the tranches for each block and measures the CPU usage and network bandwidth required by the approval-voting and approval-distribution subsystems. ## How to run it ``` cargo run -p polkadot-subsystem-bench --release -- test-sequence --path polkadot/node/subsystem-bench/examples/approvals_throughput.yaml ``` ## Evaluating performance ### Use the real subsystems metrics If you follow the steps in https://github.com/paritytech/polkadot-sdk/tree/master/polkadot/node/subsystem-bench#install-grafana for installing Prometheus and Grafana locally, all real metrics for the `approval-distribution`, `approval-voting` and overseer are available. E.g: <img width="2149" alt="Screenshot 2023-12-05 at 11 07 46" src="https://github.com/paritytech/polkadot-sdk/assets/49718502/cb8ae2dd-178b-4922-bfa4-dc37e572ed38"> <img width="2551" alt="Screenshot 2023-12-05 at 11 09 42" src="https://github.com/paritytech/polkadot-sdk/assets/49718502/8b4542ba-88b9-46f9-9b70-cc345366081b"> <img width="2154" alt="Screenshot 2023-12-05 at 11 10 15" src="https://github.com/paritytech/polkadot-sdk/assets/49718502/b8874d8d-632e-443a-9840-14ad8e90c54f"> <img width="2535" alt="Screenshot 2023-12-05 at 11 10 52" src="https://github.com/paritytech/polkadot-sdk/assets/49718502/779a439f-fd18-4985-bb80-85d5afad78e2"> ### Profile with pyroscope
1. Set up pyroscope following the steps in https://github.com/paritytech/polkadot-sdk/tree/master/polkadot/node/subsystem-bench#install-pyroscope, then run any of the benchmark scenarios with `--profile` as the argument. 2. Open the pyroscope dashboard in grafana, e.g: <img width="2544" alt="Screenshot 2024-01-09 at 17 09 58" src="https://github.com/paritytech/polkadot-sdk/assets/49718502/58f50c99-a910-4d20-951a-8b16639303d9"> ### Useful logs 1. Network bandwidth requirements: ``` Payload bytes received from peers: 503993 KiB total, 50399 KiB/block Payload bytes sent to peers: 629971 KiB total, 62997 KiB/block ``` 2. CPU usage by the approval-distribution/approval-voting subsystems: ``` approval-distribution CPU usage 84.061s approval-distribution CPU usage per block 8.406s approval-voting CPU usage 96.532s approval-voting CPU usage per block 9.653s ``` 3. Time passed until a given block is approved: ``` Chain selection approved after 3500 ms hash=0x0101010101010101010101010101010101010101010101010101010101010101 Chain selection approved after 4500 ms hash=0x0202020202020202020202020202020202020202020202020202020202020202 ``` ### Using the benchmark to quantify improvements from https://github.com/paritytech/polkadot-sdk/pull/1178 + https://github.com/paritytech/polkadot-sdk/pull/1191 Using a versi-node we compare the scenario where all new optimisations are disabled with a scenario where tranche0 assignments are sent in a single message and a conservative simulation where the coalescing of approvals gives us just a 50% reduction in the number of messages we send. Overall, what we see is a speedup of around 30-40% in the time it takes to process the necessary messages and a 30-40% reduction in the necessary bandwidth. #### Best case scenario comparison (minimum required tranches sent). Unoptimised ``` Number of blocks: 10 Payload bytes received from peers: 53289 KiB total, 5328 KiB/block Payload bytes sent to peers: 52489 KiB total, 5248 KiB/block approval-distribution CPU usage 6.732s approval-distribution CPU usage per block 0.673s approval-voting CPU usage 9.523s approval-voting CPU usage per block 0.952s ``` vs Optimisation enabled ``` Number of blocks: 10 Payload bytes received from peers: 32141 KiB total, 3214 KiB/block Payload bytes sent to peers: 37314 KiB total, 3731 KiB/block approval-distribution CPU usage 4.658s approval-distribution CPU usage per block 0.466s approval-voting CPU usage 6.236s approval-voting CPU usage per block 0.624s ``` #### Worst case: all tranches sent (very unlikely, happens when sharding breaks). Unoptimised ``` Number of blocks: 10 Payload bytes received from peers: 746393 KiB total, 74639 KiB/block Payload bytes sent to peers: 729151 KiB total, 72915 KiB/block approval-distribution CPU usage 118.681s approval-distribution CPU usage per block 11.868s approval-voting CPU usage 124.118s approval-voting CPU usage per block 12.412s ``` vs optimised ``` Number of blocks: 10 Payload bytes received from peers: 503993 KiB total, 50399 KiB/block Payload bytes sent to peers: 629971 KiB total, 62997 KiB/block approval-distribution CPU usage 84.061s approval-distribution CPU usage per block 8.406s approval-voting CPU usage 96.532s approval-voting CPU usage per block 9.653s ``` ## TODOs [x] Polish implementation. [x] Use what we have so far to evaluate https://github.com/paritytech/polkadot-sdk/pull/1191 before merging. [x] List of features and additional dimensions we want to use for benchmarking. [x] Run benchmark on hardware similar to versi and kusama nodes.
[ ] Add benchmark to be run in CI for catching regression in performance. [ ] Rebase on latest changes for network emulation. --------- Signed-off-by: Andrei Sandu <[email protected]> Signed-off-by: Alexandru Gheorghe <[email protected]> Co-authored-by: Andrei Sandu <[email protected]> Co-authored-by: Andrei Sandu <[email protected]>
-