- Aug 13, 2024
-
-
Maksym H authored
Closes https://github.com/paritytech/ci_cd/issues/1014. Adds subsystem-benchmarking in GHA (currently triggered only with a temporary label).
-
dependabot[bot] authored
Bumps [libp2p-identity](https://github.com/libp2p/rust-libp2p) from 0.2.8 to 0.2.9.

Release notes, sourced from [libp2p-identity's releases](https://github.com/libp2p/rust-libp2p/releases):

**libp2p-v0.53.2** and **libp2p-v0.53.1**: see the individual [changelogs](https://github.com/libp2p/rust-libp2p/blob/HEAD/CHANGELOG.md) for details.

**libp2p-v0.53.0**: The most ergonomic version of rust-libp2p yet! We've been busy again, with over [250](https://github.com/libp2p/rust-libp2p/compare/libp2p-v0.52.0...master) PRs merged into `master` since `v0.52.0` (excluding dependency updates). Numerous improvements landed as patch releases since the `v0.52.0` release, for example a new, type-safe [`SwarmBuilder`](https://redirect.github.com/libp2p/rust-libp2p/pull/4120) that also encompasses the most common transport protocols:

```rust
let mut swarm = libp2p::SwarmBuilder::with_new_identity()
    .with_tokio()
    .with_tcp(
        tcp::Config::default().port_reuse(true).nodelay(true),
        noise::Config::new,
        yamux::Config::default,
    )?
    .with_quic()
    .with_dns()?
    .with_relay_client(noise::Config::new, yamux::Config::default)?
    .with_behaviour(|keypair, relay_client| Behaviour {
        relay_client,
        ping: ping::Behaviour::default(),
        dcutr: dcutr::Behaviour::new(keypair.public().to_peer_id()),
    })?
    .build();
```

The new builder makes heavy use of the type system to guide you towards a correct composition of all transports. For example, it is important to compose the DNS transport as a wrapper around all other transports but before the relay transport. Luckily, you no longer need to worry about these details, as the builder takes care of that for you! Have a look yourself if you dare [here](https://github.com/libp2p/rust-libp2p/tree/master/libp2p/src/builder), but be warned, the internals are a bit wild :)

Some more features shipped in `v0.52.X` patch releases include:

- [stable QUIC implementation](https://redirect.github.com/libp2p/rust-libp2p/pull/4325)
- for rust-libp2p compiled to WASM running in the browser: [WebTransport support](https://redirect.github.com/libp2p/rust-libp2p/pull/4015) and [WebRTC support](https://redirect.github.com/libp2p/rust-libp2p/pull/4248)
- [UPnP implementation to automatically configure port-forwarding with one's gateway](https://redirect.github.com/libp2p/rust-libp2p/pull/4156)
- [option to limit connections based on available memory](https://redirect.github.com/libp2p/rust-libp2p/pull/4281)

We always try to ship as many features as possible in a backwards-compatible way to get them to you faster. Often, these come with deprecations to give you a heads-up about what will change in a future version. We advise updating to each intermediate version rather than skipping directly to the most recent one, to avoid missing any crucial deprecation warnings. We highly recommend you stay up-to-date with the latest version to make upgrades as smooth as possible. Some improvements we unfortunately cannot ship in a way that Rust considers a non-breaking change, but with every release we attempt to smoothen the way for future upgrades. (Release notes truncated; the full commit list is in the [compare view](https://github.com/libp2p/rust-libp2p/commits).)

Signed-off-by:
dependabot[bot] <support@github.com> Co-authored-by:
dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
-
- Aug 12, 2024
-
-
eskimor authored
Should be safe on all production networks. I noticed that Paseo needs to be updated; it is lagging behind in a couple of things. Execution environment parameters should be updated to those of Polkadot:
```
[
  { MaxMemoryPages: 8,192 }
  { PvfExecTimeout: [ Backing 2,500 ] }
  { PvfExecTimeout: [ Approval 15,000 ] }
]
```
--------- Co-authored-by:
eskimor <eskimor@no-such-url.com>
-
Elias Rad authored
Hello, I found several spelling errors. Br, Elias.
-
dependabot[bot] authored
Bumps the known_good_semver group with 2 updates in the / directory: [serde](https://github.com/serde-rs/serde) and [serde_json](https://github.com/serde-rs/json).

Updates `serde` and `serde_derive` from 1.0.204 to 1.0.206. Release notes, sourced from [serde's releases](https://github.com/serde-rs/serde/releases):

- **v1.0.206**: Improve support for the `flatten` attribute inside of enums ([#2567](https://redirect.github.com/serde-rs/serde/issues/2567), thanks [@Mingun](https://github.com/Mingun))
- **v1.0.205**: Use serialize_entry instead of serialize_key + serialize_value when serializing flattened newtype enum variants ([#2785](https://redirect.github.com/serde-rs/serde/issues/2785), thanks [@Mingun](https://github.com/Mingun)); avoid triggering a collection_is_never_read lint in the deserialization of enums containing flattened fields ([#2791](https://redirect.github.com/serde-rs/serde/issues/2791))

Commits are viewable in the [compare view](https://github.com/serde-rs/serde/compare/v1.0.204...v1.0.206).

Updates `serde_json` from 1.0.121 to 1.0.124. Release notes, sourced from [serde_json's releases](https://github.com/serde-rs/json/releases):

- **v1.0.124**: Fix a bug in processing string escapes on big-endian architectures ([#1173](https://redirect.github.com/serde-rs/json/issues/1173), thanks [@purplesyringa](https://github.com/purplesyringa))
- **v1.0.123**: Optimize string parsing by applying SIMD-within-a-register: 30.3% improvement on [twitter.json](https://github.com/miloyip/nativejson-benchmark/blob/v1.0.0/data/twitter.json) from 613 MB/s to 799 MB/s ([#1161](https://redirect.github.com/serde-rs/json/issues/1161), thanks [@purplesyringa](https://github.com/purplesyringa))
- **v1.0.122**: Support using `json!` in no-std crates ([#1166](https://redirect.github.com/serde-rs/json/issues/1166))

Commits are viewable in the [compare view](https://github.com/serde-rs/json/compare/v1.0.121...v1.0.124).

Signed-off-by:
dependabot[bot] <support@github.com> Co-authored-by:
dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by:
Bastian Köcher <git@kchr.de>
-
Alexander Theißen authored
This crate only uses `tempfile` on Linux but includes it unconditionally in its `Cargo.toml`. It also sets `#![deny(unused_crate_dependencies)]`. This leads to a hard error on anything that is not Linux. This PR fixes that error (a sketch of the kind of fix is below). I am wondering why CI didn't catch it. Shouldn't the test at least be compiled (but not run) on macOS?
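A hedged sketch of the kind of `Cargo.toml` change this describes, assuming the fix is to declare the dependency only for Linux targets (the version number is illustrative):

```toml
# Before: `tempfile` was an unconditional dependency, so on non-Linux targets
# it was never used and `#![deny(unused_crate_dependencies)]` turned that
# into a hard error.
# After: declare it only where it is actually used.
[target.'cfg(target_os = "linux")'.dependencies]
tempfile = "3"
```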
-
Javyer authored
Paused the action which comments on every command starting with `bot ` until we can fix all the commands which are not working.
-
Nazar Mokrynskyi authored
Trivial leftover from https://github.com/paritytech/polkadot-sdk/pull/4844 Co-authored-by:
Adrian Catangiu <adrian@parity.io> Co-authored-by:
Bastian Köcher <git@kchr.de>
-
Michal Kucharczyk authored
Added minor clarification on the genesis config patch ([link](https://substrate.stackexchange.com/questions/11813/in-the-genesis-config-what-does-the-patch-key-do/11825#11825)); an illustrative patch is sketched below. --------- Co-authored-by: command-bot <>
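For context, a hedged illustration of a genesis config patch, assuming the modern `runtimeGenesis` chain-spec layout; the endowed account is Alice's well-known development address and the balance is arbitrary. Only the keys present in the patch override the runtime's default genesis config:

```json
{
  "genesis": {
    "runtimeGenesis": {
      "patch": {
        "balances": {
          "balances": [
            ["5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY", 1000000000000]
          ]
        }
      }
    }
  }
}
```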
-
Alin Dima authored
Fixes https://github.com/paritytech/polkadot-sdk/issues/5258
-
Dónal Murray authored
The polkadot.network website was recently refreshed and the `favicon-32x32.png` was removed. It was linked in some docs and so the docs have been updated to point to a working favicon on the new website. Previously the lychee link checker was failing on all PRs.
-
jserrat authored
Closes #4242. XCM programs that deposit assets to some new (empty) account will now succeed if at least one of the deposited assets satisfies the ED. Before this change, the requirement was that the _first_ asset had to satisfy the ED, but asset order can change during reanchoring, so it is not reliable. With this PR, ordering doesn't matter: any one of the assets can satisfy the ED for the whole deposit to work (see the sketch below). Kusama address: FkB6QEo8VnV3oifugNj5NeVG3Mvq1zFbrUu4P5YwRoe5mQN --------- Co-authored-by:
Adrian Catangiu <adrian@parity.io> Co-authored-by:
Francisco Aguirre <franciscoaguirreperez@gmail.com> Co-authored-by: command-bot <>
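A minimal sketch of the rule change, with hypothetical simplified types (the real logic lives in the XCM executor's deposit handling):

```rust
// Old rule: only the *first* deposited asset was checked against the
// existential deposit (ED). New rule: a deposit to an empty account
// succeeds when *any* deposited asset satisfies the ED.
struct Asset {
    amount: u128,
    existential_deposit: u128,
}

fn deposit_creates_account(assets: &[Asset]) -> bool {
    // Before: assets.first().map_or(false, |a| a.amount >= a.existential_deposit)
    assets.iter().any(|a| a.amount >= a.existential_deposit)
}

fn main() {
    let assets = [
        Asset { amount: 1, existential_deposit: 10 },  // too small on its own
        Asset { amount: 50, existential_deposit: 10 }, // satisfies ED
    ];
    // Reanchoring may reorder assets, but order no longer matters.
    assert!(deposit_creates_account(&assets));
}
```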
-
Alin Dima authored
Resolves https://github.com/paritytech/polkadot-sdk/issues/4800

# Problem

In https://github.com/paritytech/polkadot-sdk/pull/4035, we removed support for parachain forks and cycles and added support for backing unconnected candidates (candidates for which we don't yet know the full path to the latest included block), which is useful for elastic scaling (parachains using multiple cores). Removing support for backing forks turned out to be a bad idea, as there are legitimate cases for a parachain to fork (if they have another consensus mechanism, for example, like BABE or PoW). This leads to validators getting lower backing rewards (depending on whether they back the winning fork or not) and higher pressure on only half of the backing group (during availability-distribution, for example). Since we don't yet have approval-voting rewards, backing rewards are a pretty big deal (which may change in the future).

# Description

A backing group is now allowed to back forks. Once a candidate becomes backed (has the minimum backing votes), we don't accept new forks unless they adhere to the new fork selection rule (have a lower candidate hash; a minimal sketch of this rule follows the entry). This helps keep the implementation simpler, since forks will only be taken into account for candidates which are not backed yet (only seconded). Having this fork selection rule also helps reduce the work backing validators need to do, since they have a shared way of picking the winning fork. Once they see a candidate backed, they can all decide to back a fork and not accept new ones. But they still accept new ones during the seconding phase (until the backing quorum is reached). Therefore, a block author which is not part of the backing group will likely not even see the forks (only the winning one). Just as before, a parachain producing forks will still not be able to leverage elastic scaling but will still work with a single core. Also, cycles are still not accepted.

## Some implementation details

`CandidateStorage` is no longer a subsystem-wide construct. It was previously holding candidates from all relay chain forks and complicated the code. Each fragment chain now holds its own candidate chain and its potential candidates. This should not increase storage consumption, since the heavy candidate data is already wrapped in an Arc and shared. It does, however, allow for great simplification and increased readability.

`FragmentChain`s now only create a chain with backed candidates and the fork selection rule. As said before, `FragmentChain`s are now also responsible for maintaining their own potential candidate storage. Since we no longer have the subsystem-wide `CandidateStorage`, when getting a new leaf update, we use the storage of our latest ancestor, which may contain candidates seconded/backed that are still in scope.

When a candidate is backed, the fragment chains which hold it are recreated (due to the fork selection rule, this could trigger a "reorg" of the fragment chain).

I generally tried to simplify the subsystem and not introduce unnecessary optimisations that would otherwise complicate the code and not gain us much (fragment chains wouldn't realistically ever hold many candidates).

TODO:
- [x] update metrics
- [x] update docs and comments
- [x] fix and add unit tests
- [x] tested with fork-producing parachain
- [x] tested with cycle-producing parachain
- [x] versi test
- [x] prdoc
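A minimal sketch of the fork selection rule described above, with a hypothetical stand-in type (the real rule operates on polkadot's `CandidateHash`):

```rust
/// Hypothetical stand-in for polkadot's `CandidateHash` (a 32-byte hash).
type CandidateHash = [u8; 32];

/// Deterministic tie-breaker shared by all backers: among competing forks,
/// a new fork is preferred only if its candidate hash is strictly lower.
/// Forks are only considered for candidates that are seconded but not yet
/// backed; once a candidate is backed, no new forks are accepted for it.
fn prefer_new_fork(new: &CandidateHash, existing: &CandidateHash) -> bool {
    new < existing
}

fn main() {
    let a = [1u8; 32];
    let b = [2u8; 32];
    assert!(prefer_new_fork(&a, &b)); // the lower hash wins
}
```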
-
- Aug 09, 2024
-
-
Przemek Rzad authored
Corrects the issue we had [here](https://github.com/paritytech/polkadot-sdk-parachain-template/pull/10), in which `cargo build --release` worked but `cargo build --package parachain-template-node --release` failed with missing features. The command has been added to CI to make sure it works, but at the same time we're changing it in the readme to just `cargo build --release` for simplification. Labeling silent because those packages are unpublished as part of the regular release process. --------- Signed-off-by:
Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> Co-authored-by:
Shawn Tabrizi <shawntabrizi@gmail.com> Co-authored-by:
Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
-
s0me0ne-unkn0wn authored
Closes #5071

This PR aims to:
* Move all the blocking decompression from the candidate validation subsystem to the PVF host workers;
* Run the candidate validation subsystem on the non-blocking pool again.

Upsides: no blocking operations in the subsystem's main loop, and PVF throughput is no longer limited by the subsystem's ability to decompress a lot of stuff. Correctness and homogeneity improve, as artifacts used to be identified by the hash of the decompressed code and are now identified by the hash of the compressed code, which coincides with the on-chain `ValidationCodeHash` (see the sketch below). Downsides: the PVF code decompression is now accounted for in the PVF preparation timeout (be it pre-checking or actual preparation). Taking into account that the decompression duration is on the order of milliseconds and the preparation timeout is on the order of seconds, I believe this is negligible.
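A self-contained sketch of the identity change, with a stand-in hash function (the real artifact ID uses the on-chain `ValidationCodeHash`, a hash of the compressed code blob):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in hash for illustration only.
fn hash_of(bytes: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    h.finish()
}

// Before: the artifact was keyed by the hash of the *decompressed* code,
// so the subsystem had to decompress just to identify an artifact.
// After: it is keyed by the hash of the *compressed* code, matching the
// on-chain ValidationCodeHash, and decompression moves to the PVF workers.
fn artifact_id(compressed_code: &[u8]) -> u64 {
    hash_of(compressed_code)
}

fn main() {
    let code = b"\x28\xb5\x2f\xfd"; // compressed blob (truncated, illustrative)
    println!("artifact id: {:x}", artifact_id(code));
}
```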
-
Serban Iorga authored
Fixes https://github.com/paritytech/polkadot-sdk/issues/5296
-
thiolliere authored
The code reduces or increases the weight by comparing `benchmarked_weight` and `consumed_weight`. But `benchmarked_weight` is the pre-dispatch weight, not the post-dispatch weight that is actually written into the block weight by `CheckWeight`. So in the case where `pre-dispatch weight > consumed weight > post-dispatch weight`, the reclaim code was reducing the block weight instead of increasing it (see the worked example below). Might explain this issue even better: https://github.com/paritytech/polkadot-sdk/issues/5229 @skunert @s0me0ne-unkn0wn
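A worked example with made-up numbers showing why comparing against the pre-dispatch weight flips the sign of the adjustment:

```rust
fn main() {
    // Illustrative values only.
    let pre_dispatch: i64 = 100; // benchmarked, pre-dispatch weight
    let post_dispatch: i64 = 60; // what CheckWeight actually records in the block
    let consumed: i64 = 80;      // weight actually consumed, as measured

    // Buggy: comparing against the pre-dispatch weight yields a *reduction*...
    let buggy = consumed - pre_dispatch; // -20: block weight wrongly decreased
    // ...while the recorded post-dispatch weight shows the block weight must
    // be *increased* to account for the real consumption.
    let fixed = consumed - post_dispatch; // +20: block weight correctly increased

    assert_eq!(buggy, -20);
    assert_eq!(fixed, 20);
}
```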
-
Alexander Samusev authored
Closes https://github.com/paritytech/ci_cd/issues/1012
-
Adrian Catangiu authored
In the real world, not all assets are sufficient. This aligns our emulated networks with that reality. Only DOT and USDT are sufficient "by default".
-
Serban Iorga authored
Updating the BHR (Bridge Hub Rococo) and BHW (Bridge Hub Westend) runtime versions as a result of the changes in https://github.com/paritytech/polkadot-sdk/pull/5074/
-
Przemek Rzad authored
Despite what we had in the [original request](https://github.com/paritytech/polkadot-sdk/issues/3155#issuecomment-1979037109), I'm proposing a change to open a PR to the destination template repositories instead of pushing the code directly. This gives the code a chance to run through the destination CI before changes are made, and allows setting stricter branch protection in the destination repos.
-
Egor_P authored
This PR adds the possibility to set the docker stable release tag as an input parameter on the produced docker images, so that it matches the release version.
-
- Aug 08, 2024
-
-
Alexander Samusev authored
This PR adds a GitHub Action for the test-linux-stable-oldkernel jobs. The PR waits for the latest release of forklift. cc https://github.com/paritytech/ci_cd/issues/939 cc https://github.com/paritytech/ci_cd/issues/1006 --------- Co-authored-by:
Maksym H <1177472+mordamax@users.noreply.github.com>
-
joe petrowski authored
https://github.com/paritytech/polkadot-sdk/pull/4527/files#r1706673828
-
- Aug 07, 2024
-
-
Maksym H authored
- Part of https://github.com/paritytech/ci_cd/issues/1006
- Closes: https://github.com/paritytech/ci_cd/issues/1010
- Related: https://github.com/paritytech/polkadot-sdk/pull/4405
- Possibly affecting how frame-omni-bencher works on different runtimes: https://github.com/paritytech/polkadot-sdk/pull/5083

Currently runs in parallel with the GitLab short benchmarks. Triggered only by adding the `GHA-migration` label to ensure a smooth transition (a kind of feature flag). Later, once tested on random PRs, we'll remove the GitLab jobs and turn these tests on by default. --------- Co-authored-by:
Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
-
Oliver Tale-Yazdi authored
Uses custom metadata to exclude chain-specific crates. The only concern is that devs who want to use chain-specific crates still need to select matching version numbers. This could possibly be addressed with chain-specific umbrella crates, but for now it should be possible to use [psvm](https://github.com/paritytech/psvm). --------- Signed-off-by:
Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
-
Alexandru Gheorghe authored
Since `May 2023`, after the https://github.com/paritytech/substrate/pull/13548 optimization, `Blake2256` is about 30% faster. That means there is a difference of ~30% between the benchmark values we ask validators to run against and the machine we use for generating the weights. So if all validators just barely pass the benchmarks, our weights are potentially underestimated by about ~20% (a machine at 1.3x the reference minimum implies weights roughly 1 - 1/1.3 ≈ 23% too low), so let's bring these two in sync. The same thing happened when we merged https://github.com/paritytech/polkadot-sdk/pull/2524 in `Nov 2023`: SR25519-Verify became about 10-15% faster.

## Results

Generated on the machine from here: https://github.com/paritytech/devops/pull/3210

```
+----------+----------------+-------------+-------------+----------------+
| Category | Function       | Score       | Minimum     | Result         |
+==========+================+=============+=============+================+
| CPU      | BLAKE2-256     | 1.00 GiBs   | 783.27 MiBs | Pass (130.7 %) |
|----------+----------------+-------------+-------------+----------------|
| CPU      | SR25519-Verify | 637.62 KiBs | 560.67 KiBs | Pass (113.7 %) |
|----------+----------------+-------------+-------------+----------------|
| Memory   | Copy           | 12.19 GiBs  | 11.49 GiBs  | Pass (106.1 %) |
+----------+----------------+-------------+-------------+----------------+
```

Discovered and discussed here: https://github.com/paritytech/polkadot-sdk/pull/5127#issuecomment-2258423469

## Downsides

Machines that barely passed the benchmark will suddenly find themselves below it, but since that is just a warning and everything else continues as before, it shouldn't be too impactful. It should give validators the information they need to become compliant, since they actually aren't when compared with the weights in use. --------- Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
-
Ron authored
### Context

Since Rococo is now deprecated, we need another testnet to detect bleeding-edge changes to the Substrate, Polkadot, and BEEFY consensus protocols that could brick the bridge. This is the mirror PR of https://github.com/Snowfork/polkadot-sdk/pull/157, which has been reviewed internally by the Snowbridge team. Synced with @acatangiu about this in the channel https://matrix.to/#/!gxqZwOyvhLstCgPJHO:matrix.parity.io/$N0CvTfDSl3cOQLEJeZBh-wlKJUXx7EDHAuNN5HuYHY4?via=matrix.parity.io&via=parity.io&via=matrix.org --------- Co-authored-by:
Clara van Staden <claravanstaden64@gmail.com>
-
Lulu authored
Co-authored-by:
Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
-
Nazar Mokrynskyi authored
Follow-up to https://github.com/paritytech/polkadot-sdk/pull/4457; it looks like more things were missing. --------- Co-authored-by:
Niklas Adolfsson <niklasadolfsson1@gmail.com>
-
Przemek Rzad authored
Addresses https://github.com/paritytech/polkadot-sdk/pull/5085#issuecomment-2265725858. Luckily, in the rest of the script, the GitHub API allows (or forces?) us to read the state of PRs the same way we read the state of issues, so it works without any further changes.
-
Alexandru Vasile authored
This PR shows a warning when `--public-addr` is not provided for validators (an illustrative invocation is below). In the future, we'll turn this warning into a hard failure. Validators are encouraged to provide this parameter for better availability over the network. cc @paritytech/networking --------- Signed-off-by:
Alexandru Vasile <alexandru.vasile@parity.io>
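An illustrative validator invocation with an explicit public address; the node name and the multiaddr are placeholders:

```
polkadot --validator --name my-validator --public-addr /ip4/203.0.113.7/tcp/30333
```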
-
Sebastian Kunert authored
I propose to have `ProofSizeExt` available during benchmarking so we can improve the accuracy of extensions using it. Another thing we could do is also enable recording for the timing benchmark here: https://github.com/paritytech/polkadot-sdk/blob/035211d7/substrate/utils/frame/benchmarking-cli/src/pallet/command.rs#L232. Parachains will need to have recording enabled during import for reclaim, so we could enable it here and provide a flag `--disable-proof-recording` for scenarios where one does not want it. Happy to hear opinions about this.
-
Pablo Andrés Dorado Suárez authored
Closes #4517 Polkadot address: 12gMhxHw8QjEwLQvnqsmMVY1z5gFa54vND74aMUbhhwN6mJR --------- Co-authored-by:
joe petrowski <25483142+joepetrowski@users.noreply.github.com>
-
- Aug 06, 2024
-
-
Oliver Tale-Yazdi authored
The test is currently failing, so this change improves it to include a file from the same crate, in order not to trip up the caching. R0 silent since this is only modifying unpublished crates. --------- Signed-off-by:
Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> Co-authored-by:
Dónal Murray <donal.murray@parity.io>
-
Sebastian Miasojed authored
[pallet_contracts] Increase the weight of the deposit_event host function to limit the memory used by events. (#4973) This PR updates the weight of the `deposit_event` host function by adding a fixed ref_time of 60,000 picoseconds per byte. Given a block time of 2 seconds and this specified ref_time, the total allocation size is 32MB (see the check below). --------- Co-authored-by:
Alexander Theißen <alex.theissen@me.com>
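A back-of-the-envelope check of the 32 MB figure, using the numbers from the description above:

```rust
fn main() {
    let block_time_ps: u64 = 2_000_000_000_000; // 2-second block time, in picoseconds
    let ref_time_per_byte: u64 = 60_000;        // fixed ref_time charged per event byte

    // A full block's ref_time budget spent purely on event bytes:
    let max_bytes = block_time_ps / ref_time_per_byte;
    println!("max event bytes per block: {max_bytes}"); // ~33.3 million bytes, i.e. roughly 32 MB
}
```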
-
Pavel Suprunyuk authored
This workflow is not supposed to run in the paritytech/polkadot-sdk repo; it is supposed to run only in forks of the repo, specifically in `paritytech-release/polkadot-sdk`, to automatically keep that critical fork synced with the upstream. It should therefore always stay disabled in the paritytech/polkadot-sdk repo.
-
- Aug 05, 2024
-
-
Przemek Rzad authored
- Progresses https://github.com/paritytech/polkadot-sdk/issues/5226

There is no actual `try-runtime` or `runtime-benchmarks` functionality in the minimal template at the moment.
-
Alexandru Gheorghe authored
The errors on polkadot-parachain are not printed with their full display context (what is marked with `#[error(`), because main returns a plain Result and the error is shown in its Debug format. That's not consistent with how the polkadot binary behaves, and it is not user-friendly, since it does not tell users why they got the error. Fix it by using `color_eyre`, as polkadot already does (see the sketch below). Fixes: https://github.com/paritytech/polkadot-sdk/issues/5211

## Output before
```
Error: NetworkKeyNotFound("/acala/data/Collator2/chains/mandala-tc9/network/secret_ed25519")
```

## Output after
```
Error:
0: Starting an authorithy without network key in /home/alexggh/.local/share/polkadot-parachain/chains/asset-hub-kusama/network/secret_ed25519.
This is not a safe operation because other authorities in the network may depend on your node having a stable identity.
Otherwise these other authorities may not being able to reach you.
If it is the first time running your node you could use one of the following methods:
1. [Preferred] Separately generate the key with: <NODE_BINARY> key generate-node-key --base-path <YOUR_BASE_PATH>
2. [Preferred] Separately generate the key with: <NODE_BINARY> key generate-node-key --file <YOUR_PATH_TO_NODE_KEY>
3. [Preferred] Separately generate the key with: <NODE_BINARY> key generate-node-key --default-base-path
4. [Unsafe] Pass --unsafe-force-node-key-generation and make sure you remove it for subsequent node restarts
```
--------- Signed-off-by:
Alexandru Gheorghe <alexandru.gheorghe@parity.io>
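A minimal sketch of the approach, assuming a `color_eyre` dependency; `run_node` is a hypothetical stand-in for the real entry point:

```rust
// Returning color_eyre's Result from main makes errors render with their full
// `#[error("...")]` display chain instead of the terse Debug of a plain Result.
use color_eyre::eyre::Result;

fn main() -> Result<()> {
    color_eyre::install()?; // set up the pretty error/panic report handlers
    run_node()?;            // any error bubbling up is now nicely formatted
    Ok(())
}

// Stand-in for the real node entry point.
fn run_node() -> std::io::Result<()> {
    Ok(())
}
```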
-
Sergej Sakac authored
This PR adds functionality that allows tasks to enable auto-renewal. Each task eligible for renewal can enable auto-renewal. A new storage value is added to track all the cores with auto-renewal enabled and the associated task running on each core. The `BoundedVec` is sorted by `CoreIndex` to make disabling auto-renewal more efficient (see the sketch after this entry). Cores are renewed at the start of a new bulk sale. If auto-renewal fails (e.g. because the sovereign account of the task does not hold sufficient balance), an event will be emitted, and the renewal will continue for the other cores.

The two added extrinsics are:
- `enable_auto_renew`: Extrinsic for enabling auto-renewal.
- `disable_auto_renew`: Extrinsic for disabling auto-renewal.

TODOs:
- [x] Write benchmarks for the newly added extrinsics.

Closes: #4351 --------- Co-authored-by:
Dónal Murray <donalm@seadanda.dev>
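A hedged sketch, with hypothetical types, of why keeping the records sorted by `CoreIndex` makes disabling auto-renewal efficient (the pallet uses a `BoundedVec`; a plain `Vec` stands in here):

```rust
type CoreIndex = u16;
type TaskId = u32;

struct AutoRenewalRecord {
    core: CoreIndex,
    task: TaskId,
}

fn enable_auto_renew(records: &mut Vec<AutoRenewalRecord>, core: CoreIndex, task: TaskId) {
    match records.binary_search_by_key(&core, |r| r.core) {
        Ok(_) => {} // auto-renewal already enabled for this core
        Err(pos) => records.insert(pos, AutoRenewalRecord { core, task }),
    }
}

fn disable_auto_renew(records: &mut Vec<AutoRenewalRecord>, core: CoreIndex) {
    // Sorted order makes this a cheap binary search + remove.
    if let Ok(pos) = records.binary_search_by_key(&core, |r| r.core) {
        records.remove(pos);
    }
}

fn main() {
    let mut records = Vec::new();
    enable_auto_renew(&mut records, 7, 2000);
    enable_auto_renew(&mut records, 3, 1000);
    assert_eq!(records[0].core, 3); // kept sorted by core index
    disable_auto_renew(&mut records, 3);
    assert_eq!(records.len(), 1);
}
```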
-