- Oct 15, 2023
-
-
Daan van der Plas authored
The runtime code of a parachain can be replaced on the relay-chain via: [cumulus]: [enact_authorized_upgrade](https://github.com/paritytech/polkadot-sdk/blob/1a38d6d6/cumulus/pallets/parachain-system/src/lib.rs#L661); this is used for a runtime upgrade when a parachain is not bricked. [polkadot] (these are used when a parachain is bricked): - [force_set_current_code](https://github.com/paritytech/polkadot-sdk/blob/1a38d6d6/polkadot/runtime/parachains/src/paras/mod.rs#L823): immediately changes the runtime code of a given para without a pvf check (root). - [force_schedule_code_upgrade](https://github.com/paritytech/polkadot-sdk/blob/1a38d6d6/polkadot/runtime/parachains/src/paras/mod.rs#L864): schedules a change to the runtime code of a given para including a pvf check of the new code (root). - [schedule_code_upgrade](https://github.com/paritytech/polkadot-sdk/blob/1a38d6d6/polkadot/runtime/common/src/paras_registrar.rs#L395): schedules a change to the runtime code of a given para including a pvf check of the new code. Besides root, the parachain or parachain manager can call this extrinsic given that the parachain is unlocked. Polkadot signals a parachain to be ready for a runtime upgrade through the [GoAhead](https://github.com/paritytech/polkadot-sdk/blob/e4949344 /polkadot/primitives/src/v5/mod.rs#L1229) signal. When in cumulus `enact_authorized_upgrade` is executed, the same underlying helper function of `force_schedule_code_upgrade` & `schedule_code_upgrade`: [schedule_code_upgrade](https://github.com/paritytech/polkadot/blob/09b61286da11921a3dda0a8e4015ceb9ef9cffca/runtime/parachains/src/paras/mod.rs#L1778), is called on the relay-chain, which sets the `GoAhead` signal (if the pvf is accepted). If Cumulus receives the `GoAhead` signal from polkadot without having the `PendingValidationCode` ready, it will panic ([ref](https://github.com/paritytech/polkadot/pull/7412)). For `enact_authorized_upgrade` we know for sure the `PendingValidationCode` is set. On the contrary, for `force_schedule_code_upgrade` & `schedule_code_upgrade` this is not the case. This PR includes a flag such that the `GoAhead` signal will only be set when a runtime upgrade is enacted by the parachain (`enact_authorized_upgrade`). additional info: https://github.com/paritytech/polkadot/pull/7412 Closes #641 --------- Co-authored-by: Bastian Köcher <[email protected]> Co-authored-by: Bastian Köcher <[email protected]>
-
Gonçalo Pestana authored
This PR refactors the staking ledger logic to encapsulate all reads and mutations of `Ledger`, `Bonded`, `Payee` and stake locks within the `StakingLedger` struct implementation. With these changes, all reads and mutations of the `Ledger`, `Payee` and `Bonded` storage maps should be done through the methods exposed by `StakingLedger` to ensure the data and lock consistency of the operations.

The newly introduced methods that mutate and read the ledger are:
- `ledger.update()`: inserts/updates a staking ledger in storage and updates the staking locks accordingly (`ledger.bond()` is syntactic sugar for `ledger.update()`);
- `ledger.kill()`: removes all `Bonded` and `StakingLedger` related data for a given ledger and updates the staking locks accordingly;
- `StakingLedger::get(account)`: queries both the `Bonded` and `Ledger` storages and returns an `Option<StakingLedger>`. The pallet impl exposes `fn ledger(account)` as syntactic sugar for `StakingLedger::get(account)`.

Retrieving a ledger with `StakingLedger::get()` can be done by providing either a stash or controller account. The input must be wrapped in a `StakingAccount` variant (`Stash` or `Controller`), which is treated accordingly. This simplifies the caller API but will eventually be deprecated once we completely get rid of the controller account in staking. However, this refactor will help with the work necessary when completely removing the controller.

Other goals:
- No logical changes have been introduced in this PR;
- No breaking changes or updates in wallets required;
- No new storage items or need to perform storage migrations;
- Centralise the changes to bonds and ledger updates to simplify the `OnStakingUpdate` updates to the target list (related to https://github.com/paritytech/polkadot-sdk/issues/443).

Note: it would be great to prevent, or at least raise a warning, if the `Ledger<T>`, `Payee<T>` and `Bonded<T>` storage types are accessed outside the `StakingLedger` implementation. This PR should not get blocked by that feature, but there's a tracking issue here: https://github.com/paritytech/polkadot-sdk/issues/149

Related and a step towards https://github.com/paritytech/polkadot-sdk/issues/443
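A toy, self-contained sketch of the encapsulation idea (not the pallet's real types or signatures): every read and write goes through one struct, so the storage map and the lock can never drift apart.

```rust
use std::collections::HashMap;

#[derive(Clone)]
struct StakingLedger {
    stash: u64,
    total: u128,
}

#[derive(Default)]
struct Storage {
    ledger: HashMap<u64, StakingLedger>, // keyed by controller in the real pallet
    locks: HashMap<u64, u128>,
}

impl StakingLedger {
    // The single read path.
    fn get(storage: &Storage, stash: u64) -> Option<StakingLedger> {
        storage.ledger.get(&stash).cloned()
    }

    // The single write path: writes the ledger *and* the lock together.
    fn update(self, storage: &mut Storage) {
        storage.locks.insert(self.stash, self.total);
        storage.ledger.insert(self.stash, self);
    }

    // Removes everything related to the ledger in one place.
    fn kill(storage: &mut Storage, stash: u64) {
        storage.ledger.remove(&stash);
        storage.locks.remove(&stash);
    }
}

fn main() {
    let mut storage = Storage::default();
    StakingLedger { stash: 1, total: 100 }.update(&mut storage);
    assert_eq!(StakingLedger::get(&storage, 1).map(|l| l.total), Some(100));
    StakingLedger::kill(&mut storage, 1);
    assert!(storage.locks.is_empty()); // lock removed together with the ledger
}
```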
-
drskalman authored
BEEFY needs two cryptographic keys at the same time. Validators should sign the BEEFY payload using both an ECDSA and a BLS key. The network will gossip a payload which contains a valid ECDSA signature. The prover nodes aggregate the BLS signatures; if the aggregation fails to verify, a validator which provided a valid ECDSA signature but an invalid BLS signature is subject to slashing. As such, a BEEFY session should be initiated with both keys. Currently there is no straightforward way of doing so, besides having a session whose `RuntimeApp` corresponds to a crypto scheme that contains both keys. This pull request implements a generic paired_crypto scheme as well as an implementation of it for the (ECDSA, BLS) pair.

---------

Co-authored-by: Davide Galassi <[email protected]>
Co-authored-by: Robert Hambrock <[email protected]>
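A toy sketch of the paired-crypto idea (illustrative types only, not the actual sdk implementation): a public key and a signature are simply pairs of the two underlying schemes, and verification succeeds only if both halves verify.

```rust
// A "half" of the pair: any scheme that can verify its own signature type.
trait VerifyHalf {
    type Signature;
    fn verify(&self, msg: &[u8], sig: &Self::Signature) -> bool;
}

struct PairedPublic<A, B>(A, B);
struct PairedSignature<SigA, SigB>(SigA, SigB);

impl<A: VerifyHalf, B: VerifyHalf> PairedPublic<A, B> {
    fn verify(&self, msg: &[u8], sig: &PairedSignature<A::Signature, B::Signature>) -> bool {
        // Both halves (e.g. ECDSA and BLS) must verify for the pair to verify.
        self.0.verify(msg, &sig.0) && self.1.verify(msg, &sig.1)
    }
}

// Dummy scheme that accepts everything, just to exercise the pairing.
struct Always;
impl VerifyHalf for Always {
    type Signature = ();
    fn verify(&self, _msg: &[u8], _sig: &()) -> bool {
        true
    }
}

fn main() {
    let public = PairedPublic(Always, Always);
    assert!(public.verify(b"payload", &PairedSignature((), ())));
}
```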
-
Julian Eager authored
Closes #695. Could potentially be helpful for preserving caches when applicable, as discussed in #685. Kusama address: FvpsvV1GQAAbwqX6oyRjemgdKV11QU5bXsMg9xsonD1FLGK
-
S E R A Y A authored
# Description
- What does this PR do?
  - link added
- Why are these changes needed?
  - improve docs
- How were these changes implemented and what do they affect?
  - only concerns docs
-
- Oct 14, 2023
-
-
Julian Eager authored
Closes #622

Pros:
* simpler interface, just functions: `create_runtime_from_artifact_bytes()` and `execute_artifact()`

Cons:
* extra overhead of constructing executor semantics each time

I could make it a combination of:
* `create_runtime_config(params)` (such that we could clone the constructed semantics)
* `create_runtime(blob, config)`
* `execute_artifact(blob, config, params)`

Not sure if it's worth it though.

---------

Co-authored-by: Bastian Köcher <[email protected]>
-
juangirini authored
-
- Oct 13, 2023
-
-
Dónal Murray authored
The lint `clippy::clone_double_ref` was renamed to `suspicious_double_ref_op` in [this commit](https://github.com/rust-lang/rust/commit/5c99175a9efcaa3d65712c119f361add22e3a859) (thanks @liamaharon), and the old name now generates a lot of noise in the terminal when clippy runs both in CI and locally with 1.73. This PR renames the lint accordingly, but does not change functionality. It may cause issues for people running earlier versions locally; @altaua, requesting review to get your opinion on this.
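A small illustration of the pattern this lint catches; the `allow` below uses the new rustc name (on older toolchains the equivalent was `clippy::clone_double_ref`):

```rust
#![allow(suspicious_double_ref_op)] // was: #![allow(clippy::clone_double_ref)]

fn main() {
    let s = &&String::from("hello");
    // Cloning a `&&String` clones the inner `&String`, not the `String` itself;
    // that is exactly the suspicious pattern the lint flags.
    let _inner_ref: &String = s.clone();
}
```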
-
0xmovses authored
- This PR refactors `alliance/src/benchmarking.rs` to use benchmarking v2. These changes are needed to improve the readability and maintainability of the benchmarking code.
- No known issue to backlink.

## Local Testing
1. `cargo build --features runtime-benchmarks`
2. `cargo run --locked --release -p node-cli --bin substrate-node --features runtime-benchmarks -- benchmark pallet --execution wasm --wasm-execution compiled --chain dev --pallet "*" --extrinsic "*" --steps 2 --repeat 1`
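For context, this is roughly the benchmarking v2 shape such a refactor moves to. It is a schematic only (it assumes a pallet with a `join_alliance` extrinsic and is not compilable on its own), not the actual alliance benchmark code:

```rust
#[benchmarks]
mod benchmarks {
    use super::*;

    #[benchmark]
    fn join_alliance() {
        // Setup: a whitelisted caller for the call under measurement.
        let caller: T::AccountId = whitelisted_caller();

        // The call being measured; `_` expands to the extrinsic named like the function.
        #[extrinsic_call]
        _(RawOrigin::Signed(caller));

        // Verification assertions would follow here.
    }
}
```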
-
Julian Eager authored
Co-authored-by: Marcin S <[email protected]>
-
Adrian Catangiu authored
Part of #171
-
Adrian Catangiu authored
- Remove the cached messages used for deduplication in `GossipValidator`, since they're already deduplicated in the upper `NetworkGossip` layer.
- Add a cache for "justified rounds" to quickly discard any further (even if potentially different) justifications at the gossip level, once a valid one (for the respective round) has been submitted to the worker.
- Add a short-circuit in the worker's `finalize()` method to not attempt to finalize the same block multiple times (for example, when we get justifications for the same block from multiple components like block-import, gossip or on-demand).
- Change a test which had A LOT of latency in syncing blocks for some weird reason and would only run after ~150 seconds. It now runs instantly.

Fixes https://github.com/paritytech/polkadot-sdk/issues/1728
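A minimal, self-contained sketch of the `finalize()` short-circuit described above (toy types, not the worker's real code):

```rust
struct Worker {
    best_beefy_block: u64,
}

impl Worker {
    fn finalize(&mut self, block: u64) {
        if block <= self.best_beefy_block {
            // The same block may arrive via block-import, gossip and on-demand;
            // it is already finalized, so do nothing.
            return;
        }
        self.best_beefy_block = block;
        // ... persist the justification and notify listeners ...
    }
}

fn main() {
    let mut worker = Worker { best_beefy_block: 10 };
    worker.finalize(10); // duplicate: ignored
    worker.finalize(12); // new best
    assert_eq!(worker.best_beefy_block, 12);
}
```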
-
gupnik authored
Fixes https://github.com/paritytech/polkadot-sdk/issues/1839

Currently, `composite_enum`s do not support pallet instances. This PR allows the following:

```rust
#[pallet::composite_enum]
pub enum HoldReason<I: 'static = ()> {
    SomeHoldReason,
}
```

### Todo
- [x] UI Test
-
- Oct 12, 2023
-
-
Anton Vilhelm Ásgeirsson authored
# Description
In a couple of cases, there were links pointing to the w3f github pages domain. In other instances, there were links pointing to the old polkadot repo's github pages. Both of these are now pointing to the relevant links in https://paritytech.github.io/polkadot-sdk/book/index.html. These changes were made specifically because the w3f github pages returns a 404, and while fixing the links, the old polkadot repo links were touched up as well even if they do redirect properly. This shouldn't affect anything as these are documentation link changes only.
-
Tsvetomir Dimitrov authored
Exposes disabled validators list via a runtime API. --------- Co-authored-by: ordian <[email protected]> Co-authored-by: ordian <[email protected]>
-
Kevin Krone authored
This PR adds a `try_state` hook for the `Treasury` pallet. Part of #239.
-
Sam Elamin authored
This PR resolves https://github.com/paritytech/polkadot-sdk/issues/1428. *Added only to Kusama for now.*

I did raise it [here](https://github.com/polkadot-fellows/runtimes/pull/19) and we discussed creating a chopsticks test to run an end-to-end test. However, to do that I will need a build agent/custom runner that is powerful enough to run the build, so I will be doing that separately, as I still think having chopsticks test your runtime with each commit will be very powerful and extremely useful for the ecosystem.

For now I have used the XCM simulator and replicated what the other reserve tests do.

---------

Co-authored-by: Gavin Wood <[email protected]>
Co-authored-by: Francisco Aguirre <[email protected]>
Co-authored-by: Branislav Kontur <[email protected]>
Co-authored-by: Bastian Köcher <[email protected]>
-
- Oct 11, 2023
-
-
Mira Ressel authored
-
Mira Ressel authored
Co-authored-by: command-bot <>
-
0xmovses authored
This PR refactors `identity/benchmarking.rs` to use benchmarking v2. These changes are needed to improve the readability and maintainability of the benchmarking code. Changes were implemented using [this](https://github.com/paritytech/polkadot-sdk/commit/9ec80090) commit as a guide. The logic of the benchmarks remains the same. No known issue to backlink.

## Local Testing
To test the new benchmarks:
1. `cargo build --features runtime-benchmarks`
2. `./target/debug/polkadot benchmark pallet --steps=5 --repeat=2 --pallet=pallet_identity --extrinsic='*'`

---------

Co-authored-by: Richard Melkonian <[email protected]>
Co-authored-by: joe petrowski <[email protected]>
Co-authored-by: command-bot <>
Co-authored-by: Francisco Aguirre <[email protected]>
Co-authored-by: Oliver Tale-Yazdi <[email protected]>
-
Marcin S. authored
-
Branislav Kontur authored
# Description

## Summary
This PR introduces several nits and tweaks to the xcm emulator tests for system parachains.

## Explanation
**Deduplicate `XcmPallet::send(` with root origin code**
- Introduced `send_transact_to_parachain`, which can easily be reused for scenarios like _governance call from relay chain to parachain_.

**Refactor `send_transact_sudo_from_relay_to_system_para_works`**
- The test covered just one use-case, which was moved to `do_force_create_asset_from_relay_to_system_para`, so now we can extend this test with more _governance-like_ scenarios.
- Renamed to `send_transact_as_superuser_from_relay_to_system_para_works`.

**Remove `send_transact_native_from_relay_to_system_para_fails` test**
- This test and/or its description is kind of misleading, because system paras support Native from the relay chain via `RelayChainAsNative` with the correct xcm origin.
- It tested only sending on the relay chain, which should go directly into the relay chain unit tests (it does not even need to be at the xcm emulator level).

## Future directions
Check the restructure parachains integration tests [issue](https://github.com/paritytech/polkadot-sdk/issues/1389) and the [PR with more TODOs](https://github.com/paritytech/polkadot-sdk/pull/1693).

---------

Co-authored-by: Ignacio Palacios <[email protected]>
-
gupnik authored
Needs https://github.com/sam0x17/macro_magic/pull/13

The associated PR allows the export of tokens from macro_magic at the specified path. This fixes the path issue in derive-impl. Now, we can import the default config using the standard rust syntax:

```rust
use frame_system::config_preludes::TestDefaultConfig;

#[derive_impl(TestDefaultConfig as frame_system::DefaultConfig)]
impl frame_system::DefaultConfig for Test {
    //....
}
```
-
- Oct 10, 2023
-
-
Sam Johnson authored
# Description
Upgrades `macro_magic` to 0.4.3, which introduces the ability to have `export_tokens` use the same name as the underlying item for its auto-generated macro name. Ultimately this will allow for better dev ux in our derive_impl feature.
-
Keith Yeung authored
Co-authored-by: Francisco Aguirre <[email protected]>
-
Liam Aharon authored
Original PR https://github.com/paritytech/substrate/pull/14746

---

## Fixing stall

### Introduction
I experienced an apparent stall downloading state from `https://rococo-try-runtime-node.parity-chains.parity.io:443`, which was having networking difficulties, only responding to my JSONRPC requests with 50-200KB/s of bandwidth. This PR fixes the issue causing the stall, and generally improves the performance of remote-ext when it downloads state by greatly reducing the chances of a timeout occurring.

### Description
Introduces a new `REQUEST_DURATION_TARGET` constant and modifies `get_storage_data_dynamic_batch_size` to
- increase or decrease the batch size of the next request depending on whether the elapsed time of the last request was greater or less than the target;
- reset the batch size to 1 if the request times out.

This fixes an issue on slow connections that can otherwise cause multiple timeouts and a stalled download when:

1. The batch size increases rapidly as remote-ext downloads keys with small associated storage values.
2. remote-ext tries to process a large series of subsequent keys all with extremely large associated storage values (Rococo has a series of keys 1-5MB large).
3. The huge storage values download for 5 minutes until the request times out.
4. The partially downloaded keys are thrown out and remote-ext tries again with a smaller batch size, but the batch size is still far too large and takes 5 minutes to be reduced again.
5. The download is essentially stalled for many hours while the above step cycles.

After this PR, the request size will
- not grow as large to begin with, as it is regulated downwards as the request duration exceeds the target;
- drop immediately to 1 if the request times out. A timeout indicates the keys next in line to download have extremely large storage values compared to previously downloaded keys, and we need to reset the batch size to figure out what our new ideal batch size is. By not resetting down to 1, we risk the next request timing out again.

## Reducing memory
As suggested by @bkchr, I adjusted `get_storage_data_dynamic_batch_size` from being recursive to a loop, which allows removing a bunch of clones that were chewing through a lot of memory. I noticed it was actually using up to 50GB of swap previously when downloading Polkadot keys on a slow connection, because it needed to recurse and clone a lot. After this change it uses only ~1.5GB of memory.
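A toy model of the batch-size controller described above; the constants and the halving/doubling steps are assumptions for illustration, not the exact values used by remote-ext:

```rust
use std::time::Duration;

const REQUEST_DURATION_TARGET: Duration = Duration::from_secs(15); // assumed value
const TIMEOUT: Duration = Duration::from_secs(300);

fn next_batch_size(current: usize, elapsed: Duration) -> usize {
    if elapsed >= TIMEOUT {
        // The keys ahead have huge values; restart discovery of a workable size.
        return 1;
    }
    if elapsed > REQUEST_DURATION_TARGET {
        // Too slow: shrink, but never below 1.
        (current / 2).max(1)
    } else {
        // Fast enough: keep growing.
        current * 2
    }
}

fn main() {
    assert_eq!(next_batch_size(64, Duration::from_secs(400)), 1);
    assert_eq!(next_batch_size(64, Duration::from_secs(30)), 32);
    assert_eq!(next_batch_size(64, Duration::from_secs(5)), 128);
}
```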
-
Bulat Saifullin authored
# Description
Update the DNS names of the bootnodes to unify the deployment. Each bootnode has 3 ports exposed: `30333, 30334, 443`. Before, we had different DNS names for the `30333, 30334` and `443` ports. This may confuse people and give the impression that it is two different nodes. Fix this by using a single domain for all ports.
-
Oliver Tale-Yazdi authored
Adds a warning to FRAME pallets when a function argument that starts with `_` is used in the weight formula. This is in most cases an error, since the weight witness needs to be checked.

Example:

```rust
#[pallet::call_index(0)]
#[pallet::weight(T::SystemWeightInfo::remark(_remark.len() as u32))]
pub fn remark(_origin: OriginFor<T>, _remark: Vec<u8>) -> DispatchResultWithPostInfo {
    Ok(().into())
}
```

Produces this warning:

```pre
warning: use of deprecated constant `pallet::warnings::UncheckedWeightWitness_0::_w`:
         It is deprecated to not check weight witness data.
         Please instead ensure that all witness data for weight calculation is checked before usage.
         For more info see:
             <https://github.com/paritytech/polkadot-sdk/pull/1818>
   --> substrate/frame/system/src/lib.rs:424:40
    |
424 | pub fn remark(_origin: OriginFor<T>, _remark: Vec<u8>) -> DispatchResultWithPostInfo {
    |                                      ^^^^^^^
    |
    = note: `#[warn(deprecated)]` on by default
```

Can be suppressed like this, since in this case it is legit:

```rust
#[pallet::call_index(0)]
#[pallet::weight(T::SystemWeightInfo::remark(remark.len() as u32))]
pub fn remark(_origin: OriginFor<T>, remark: Vec<u8>) -> DispatchResultWithPostInfo {
    let _ = remark; // We don't need to check the weight witness.
    Ok(().into())
}
```

Changes:
- Add warning on unchecked weight witness
- Respect `subkeys` limit in `System::kill_prefix`
- Fix HRMP pallet and other warnings
- Update `proc_macro_warning` dependency
- Delete random folder `substrate/src/src` 🙈
- Adding Prdoc

---------

Signed-off-by: Oliver Tale-Yazdi <[email protected]>
Co-authored-by: joe petrowski <[email protected]>
-
Branislav Kontur authored
[xcm] Use `Weight::MAX` for `reserve_asset_deposited`, `receive_teleported_asset` benchmarks (#1726)

# Description

## Summary
Previously, the `pallet_xcm::do_reserve_transfer_assets` and `pallet_xcm::do_teleport_assets` functions relied on weight estimation for remote chain execution, which was based on guesswork derived from the local chain. This approach led to complications for runtimes that did not provide or support specific [XCM configurations](https://github.com/paritytech/polkadot-sdk/blob/7cbe0c76/polkadot/xcm/xcm-executor/src/config.rs#L43-L47) for `IsReserve` or `IsTeleporter`. Consequently, such runtimes had to resort to implementing hard-coded weights for XCM instructions like `reserve_asset_deposited` or `receive_teleported_asset` to support extrinsics such as `pallet_xcm::reserve_transfer_assets` and `pallet_xcm::teleport_assets`, which depended on remote weight estimation. The issue of remote weight estimation was addressed and resolved by [Pull Request #1645](https://github.com/paritytech/polkadot-sdk/pull/1645), which removed the need for remote weight estimation.

## Solution
As a continuation of this improvement, the current PR proposes further cleanup by removing unnecessary hard-coded values and rectifying benchmark results with `Weight::MAX`, which previously used `T::BlockWeights::get().max_block` as an override for unsupported XCM instructions like `ReserveAssetDeposited` and `ReceiveTeleportedAsset`.

## Questions
- [x] Can we now also remove `Hardcoded till the XCM pallet is fixed` for `deposit_asset`? E.g. for AssetHubKusama [here](https://github.com/paritytech/polkadot-sdk/blob/7cbe0c76/cumulus/parachains/runtimes/assets/asset-hub-kusama/src/weights/xcm/mod.rs#L129-L134)
- [x] Are comments like [this](https://github.com/paritytech/polkadot-sdk/blob/7cbe0c76/polkadot/runtime/kusama/src/weights/xcm/mod.rs#L94) `// Kusama doesn't support ReserveAssetDeposited, so this benchmark has a default weight` still relevant? Shouldn't they be removed/changed?

## TODO
- [x] `bench bot` regenerate xcm weights for all runtimes
- [x] remove hard-coded stuff from system parachain weight files
- [ ] when merged, open `polkadot-fellow/runtimes` PR

## References
Fixes #1132
Closes #1132
Old polkadot repo [PR](https://github.com/paritytech/polkadot/pull/7546)

---------

Co-authored-by: command-bot <>
-
Svyatoslav Nikolsky authored
-
Rahul Subramaniyam authored
When retrieving the ready blocks, verify that the parent of the first ready block is on chain. If the parent is not on chain, we are downloading from a fork. In this case, keep downloading until we have a parent on chain (common ancestor). Resolves https://github.com/paritytech/polkadot-sdk/issues/493. --------- Co-authored-by: Aaro Altonen <[email protected]>
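A toy model of the check described above (not the real sync types): importing only starts once the parent of the first ready block is already on our chain; otherwise the download continues towards a common ancestor.

```rust
use std::collections::HashSet;

fn can_import_ready_blocks(first_ready_parent: &str, on_chain: &HashSet<&str>) -> bool {
    // If the parent is unknown, the ready blocks are from a fork and we must
    // keep downloading backwards until a common ancestor is found.
    on_chain.contains(first_ready_parent)
}

fn main() {
    let on_chain: HashSet<&str> = ["genesis", "a1", "a2"].into_iter().collect();
    // Ready blocks descend from "b2", which is not on our chain: keep downloading.
    assert!(!can_import_ready_blocks("b2", &on_chain));
    // Once the download reaches "a2" (the common ancestor), importing can start.
    assert!(can_import_ready_blocks("a2", &on_chain));
}
```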
-
David Emett authored
See #1453. Co-authored-by: Bastian Köcher <[email protected]>
-
- Oct 09, 2023
-
-
georgepisaltu authored
-
David Emett authored
See #1345, <https://github.com/paritytech/substrate/pull/14207>.

This adds all the necessary mixnet components, and puts them together in the "kitchen-sink" node/runtime. The components added are:

- A pallet (`frame/mixnet`). This is responsible for determining the current mixnet session and phase, and the mixnodes to use in each session. It provides a function that validators can call to register a mixnode for the next session. The logic of this pallet is very similar to that of the `im-online` pallet.
- A service (`client/mixnet`). This implements the core mixnet logic, building on the `mixnet` crate. The service communicates with other nodes using notifications sent over the "mixnet" protocol.
- An RPC interface. This currently only supports sending transactions over the mixnet.

---------

Co-authored-by: David Emett <[email protected]>
Co-authored-by: Javier Viola <[email protected]>
-
Ignacio Palacios authored
Closes: #1381 Originally from: https://github.com/paritytech/cumulus/pull/3037 --------- Co-authored-by: joe petrowski <[email protected]>
-
- Oct 07, 2023
-
-
Muharem Ismailov authored
### Summary
This PR introduces new dispatchables to the treasury pallet, allowing spends of various asset types. The enhanced features of the treasury pallet, in conjunction with the asset-rate pallet, are set up and enabled for Westend and Rococo.

### Westend and Rococo runtimes
The Polkadot/Kusama/Rococo Treasury can accept proposals for `spends` of various asset kinds by specifying the asset's location and ID.

#### Treasury Instance
New dispatchables:
- `spend(AssetKind, AssetBalance, Beneficiary, Option<ValidFrom>)` - propose and approve a spend;
- `payout(SpendIndex)` - payout an approved spend or retry a failed payout;
- `check_payment(SpendIndex)` - check the status of a payout;
- `void_spend(SpendIndex)` - void a previously approved spend;
> the existing spend dispatchable is renamed to spend_local

In this context, the `AssetKind` parameter contains the asset's location and its corresponding `asset_id`, for example `USDT` on `AssetHub`:

```rust
location = MultiLocation(0, X1(Parachain(1000)))
asset_id = MultiLocation(0, X2(PalletInstance(50), GeneralIndex(1984)))
```

The `Beneficiary` parameter is a `MultiLocation` in the context of the asset's location, for example:

```rust
// the Fellowship salary pallet's location / account
FellowshipSalaryPallet = MultiLocation(1, X2(Parachain(1001), PalletInstance(64)))
// or custom `AccountId`
Alice = MultiLocation(0, AccountId32(network: None, id: [1,...]))
```

The `AssetBalance` represents the amount of the `AssetKind` to be transferred to the `Beneficiary`. For permission checks, the asset amount is converted to the native amount and compared against the maximum spendable amount determined by the commanding spend origin.

The `spend` dispatchable allows for batching spends with different `ValidFrom` arguments, enabling milestone-based spending. If the expectations tied to an approved spend are not met, it is possible to void the spend later using the `void_spend` dispatchable.

The Asset Rate Pallet provides the conversion rate from the `AssetKind` to the native balance.

#### Asset Rate Instance
Dispatchables:
- `create(AssetKind, Rate)` - initialize a conversion rate to the native balance for the given asset;
- `update(AssetKind, Rate)` - update the conversion rate to the native balance for the given asset;
- `remove(AssetKind)` - remove an existing conversion rate to the native balance for the given asset.

The pallet's dispatchables can be executed by the Root or Treasurer origins.

### Treasury Pallet
The Treasury Pallet can accept proposals for `spends` of various asset kinds and pay them out through an implementation of the `Pay` trait.

New dispatchables:
- `spend(Config::AssetKind, AssetBalance, Config::Beneficiary, Option<ValidFrom>)` - propose and approve a spend;
- `payout(SpendIndex)` - payout an approved spend or retry a failed payout;
- `check_payment(SpendIndex)` - check the status of a payout;
- `void_spend(SpendIndex)` - void a previously approved spend;
> the existing spend dispatchable is renamed to spend_local

The parameter types of the `spend` dispatchable are exposed via the pallet's `Config` and allow proposing and accepting a spend of a certain amount. An approved spend can be claimed via `payout` within the `Config::SpendPeriod`. Clients provide an implementation of the `Pay` trait which can pay an asset of the `AssetKind` to the `Beneficiary` in `AssetBalance` units. The implementation of the `Pay` trait might not have an immediate final payment status, for example if implemented over `XCM` and the actual transfer happens on a remote chain. The `check_status` dispatchable can be executed to update the spend's payment state and retry the `payout` if the payment has failed.

---------

Co-authored-by: joe petrowski <[email protected]>
Co-authored-by: command-bot <>
-
Javier Viola authored
Bump zombienet version. This version includes the fixes needed for `mixnet`. Thx!
-
Dmitry Borodin authored
Moving the babe and authorship pallets to the latest and greatest derive_impl. Part of https://github.com/paritytech/polkadot-sdk/issues/171 --------- Co-authored-by: Kian Paimani <[email protected]>
-
- Oct 06, 2023
-
-
dependabot[bot] authored
Bumps the known_good_semver group with 1 update: [syn](https://github.com/dtolnay/syn) from 2.0.37 to 2.0.38.

Release notes for 2.0.38: fix the *"method 'peek' has an incompatible type for trait"* error when defining `bool` as a custom keyword ([dtolnay/syn#1518](https://redirect.github.com/dtolnay/syn/issues/1518), thanks [@Vanille-N](https://github.com/Vanille-N)). Full changes: <https://github.com/dtolnay/syn/compare/2.0.37...2.0.38>

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
-
Oliver Tale-Yazdi authored
Closes https://github.com/paritytech/polkadot-sdk/issues/1450

Bringing back the Substrate crate that was forgotten in the monorepo import 😅. It is a doc-only crate. Version number is set to `1.0.0` and publishing is enabled (so that we can link to docs.rs).

---------

Signed-off-by: Oliver Tale-Yazdi <[email protected]>
Co-authored-by: Bastian Köcher <[email protected]>
Co-authored-by: Kian Paimani <[email protected]>
-