- Nov 02, 2023
-
-
Oliver Tale-Yazdi authored
(imported from https://github.com/paritytech/cumulus/pull/2157) ## Changes This MR refactors the XCMP, Parachains System and DMP pallets to use the [MessageQueue](https://github.com/paritytech/substrate/pull/12485) for delayed execution of incoming messages. The DMP pallet is entirely replaced by the MQ and thereby removed. This allows for PoV-bounded execution and resolves a number of issues that stem from the current work-around. All System Parachains adopt this change. The most important changes are in `primitives/core/src/lib.rs`, `parachains/common/src/process_xcm_message.rs`, `pallets/parachain-system/src/lib.rs`, `pallets/xcmp-queue/src/lib.rs` and the runtime configs. ### DMP Queue Pallet The pallet got removed and its logic refactored into parachain-system. Overweight message management can be done directly through the MQ pallet. Final undeployment migrations are provided by `cumulus_pallet_dmp_queue::UndeployDmpQueue` and `DeleteDmpQueue` that can be configured with an aux config trait like: ```rust parameter_types! { pub const DmpQueuePalletName: &'static str = "DmpQueue" < CHANGE ME; pub const RelayOrigin: AggregateMessageOrigin = AggregateMessageOrigin::Parent; } impl cumulus_pallet_dmp_queue::MigrationConfig for Runtime { type PalletName = DmpQueuePalletName; type DmpHandler = frame_support::traits::EnqueueWithOrigin<MessageQueue, RelayOrigin>; type DbWeight = <Runtime as frame_system::Config>::DbWeight; } // And adding them to your Migrations tuple: pub type Migrations = ( ... cumulus_pallet_dmp_queue::UndeployDmpQueue<Runtime>, cumulus_pallet_dmp_queue::DeleteDmpQueue<Runtime>, ); ``` ### XCMP Queue pallet Removed all dispatch queue functionality. Incoming XCMP messages are now either immediately handled if they are Signals, or enqueued into the MQ pallet otherwise. New config items for the XCMP queue pallet: ```rust /// The actual queue implementation that retains the messages for later processing. type XcmpQueue: EnqueueMessage<ParaId>; /// How an XCM over HRMP from a sibling parachain should be processed. type XcmpProcessor: ProcessMessage<Origin = ParaId>; /// The maximal number of suspended XCMP channels at the same time. #[pallet::constant] type MaxInboundSuspended: Get<u32>; ``` How to configure those: ```rust // Use the MessageQueue pallet to store messages for later processing. The `TransformOrigin` is needed since // the MQ pallet itself operates on `AggregateMessageOrigin` but we want to enqueue `ParaId`s. type XcmpQueue = TransformOrigin<MessageQueue, AggregateMessageOrigin, ParaId, ParaIdToSibling>; // Process XCMP messages from siblings. This is type-safe to only accept `ParaId`s. They will be dispatched // with origin `Junction::Sibling(…)`. type XcmpProcessor = ProcessFromSibling< ProcessXcmMessage< AggregateMessageOrigin, xcm_executor::XcmExecutor<xcm_config::XcmConfig>, RuntimeCall, >, >; // Not really important what to choose here. Just something larger than the maximal number of channels. type MaxInboundSuspended = sp_core::ConstU32<1_000>; ``` The `InboundXcmpStatus` storage item was replaced by `InboundXcmpSuspended`, since it now only tracks inbound queue suspension and no message indices anymore. The pallet now only sends the most recent channel `Signals`, as all prior ones are outdated anyway. ### Parachain System pallet `DMP` messages are no longer forwarded to the `DMP` pallet; they are now pushed to the configured `DmpQueue`. The message processing that was previously triggered in `set_validation_data` is now done by the MQ pallet in `on_initialize`. 
XCMP messages are still handed off to the `XcmpMessageHandler` (XCMP-Queue pallet) - no change here. New config items for the parachain system pallet: ```rust /// Queues inbound downward messages for delayed processing. /// /// Analogous to the `XcmpQueue` of the XCMP queue pallet. type DmpQueue: EnqueueMessage<AggregateMessageOrigin>; ``` How to configure: ```rust /// Use the MQ pallet to store DMP messages for delayed processing. type DmpQueue = MessageQueue; ``` ## Message Flow The flow of messages on the parachain side. Messages come in from the left via the `Validation Data` and finally end up at the `Xcm Executor` on the right. ![Untitled (1)](https://github.com/paritytech/cumulus/assets/10380170/6cf8b377-88c9-4aed-96df-baace266e04d) ## Further changes - Bumped the default suspension, drop and resume thresholds in `QueueConfigData::default()`. - `XcmpQueue::{suspend_xcm_execution, resume_xcm_execution}` now error when they would be a no-op. - Properly validate the `QueueConfigData` before setting it. - Marked weight files as auto-generated so they won't auto-expand in the MR files view. - Moved the `hypothetical` asserts to `frame_support` under the name `experimental_hypothetically` Questions: - [ ] What about the ugly `#[cfg(feature = "runtime-benchmarks")]` in the runtimes? Not sure how best to fix this. Just having them like this makes tests that rely on the real message processor fail when the feature is enabled. - [ ] Need a good weight for `MessageQueueServiceWeight`. The scheduler already takes 80% so I put it to 10%, but that is quite low. TODO: - [x] Remove c&p code after https://github.com/paritytech/polkadot/pull/6271 - [x] Use `HandleMessage` once it is public in Substrate - [x] fix `runtime-benchmarks` feature https://github.com/paritytech/polkadot/pull/6966 - [x] Benchmarks - [x] Tests - [ ] Migrate `InboundXcmpStatus` to `InboundXcmpSuspended` - [x] Possibly cleanup Migrations (DMP+XCMP) - [x] optional: create `TransformProcessMessageOrigin` in Substrate and replace `ProcessFromSibling` - [ ] Rerun weights on ref HW --------- Signed-off-by: Oliver Tale-Yazdi <[email protected]> Co-authored-by: Liam Aharon <[email protected]> Co-authored-by: joe petrowski <[email protected]> Co-authored-by: Kian Paimani <[email protected]> Co-authored-by: command-bot <>
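A minimal sketch of the overweight-message handling mentioned above ("Overweight message management can be done directly through the MQ pallet"), assuming the `pallet_message_queue::execute_overweight` extrinsic and the `AggregateMessageOrigin` from this PR; the page/index values are placeholders that would normally come from the queue's events:

```rust
// Hedged sketch: retrying an overweight downward message directly through the
// MessageQueue pallet, now that the DMP Queue pallet's own overweight handling is gone.
// `page` and `index` are placeholders; real values come from the MQ pallet's
// `OverweightEnqueued` event. Exact paths and types may differ per runtime.
use cumulus_primitives_core::AggregateMessageOrigin;

let call = pallet_message_queue::Call::<Runtime>::execute_overweight {
    message_origin: AggregateMessageOrigin::Parent, // downward messages come from the relay chain
    page: 0,
    index: 0,
    weight_limit: Weight::from_parts(10_000_000_000, 65_536),
};
```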
-
Davide Galassi authored
Closes https://github.com/paritytech/polkadot-sdk/issues/2013
-
- Nov 01, 2023
-
-
Branislav Kontur authored
## Summary Asset bridging support for AssetHub**Rococo** <-> AssetHub**Wococo** was added [here](https://github.com/paritytech/polkadot-sdk/pull/1215), so now we aim to bridge AssetHub**Rococo** and AssetHub**Westend**. (And perhaps retire AssetHubWococo and the Wococo chains). ## Solution **bridge-hub-westend-runtime** - added new runtime as a copy of `bridge-hub-rococo-runtime` - added support for bridging to `BridgeHubRococo` - added tests and benchmarks **bridge-hub-rococo-runtime** - added support for bridging to `BridgeHubWestend` - added tests and benchmarks - internal refactoring by splitting bridge configuration per network, e.g., `bridge_to_whatevernetwork_config.rs`. **asset-hub-rococo-runtime** - added support for asset bridging to `AssetHubWestend` (allows receiving only WNDs) - added new xcm router for `Westend` - added tests and benchmarks **asset-hub-westend-runtime** - added support for asset bridging to `AssetHubRococo` (allows receiving only ROCs) - added new xcm router for `Rococo` - added tests and benchmarks ## Deployment All changes will be deployed as a part of https://github.com/paritytech/polkadot-sdk/issues/1988. ## TODO - [x] benchmarks for all pallet instances - [x] integration tests - [x] local run scripts Relates to: https://github.com/paritytech/parity-bridges-common/issues/2602 Relates to: https://github.com/paritytech/polkadot-sdk/issues/1988 --------- Co-authored-by: command-bot <> Co-authored-by: Adrian Catangiu <[email protected]> Co-authored-by: joe petrowski <[email protected]>
-
jserrat authored
Co-authored-by: Marcin S <[email protected]>
-
Ankan authored
helps https://github.com/paritytech/polkadot-sdk/issues/439. closes https://github.com/paritytech/polkadot-sdk/issues/473. PR link in the older substrate repository: https://github.com/paritytech/substrate/pull/13498. # Context Rewards payout is processed today in a single block and limited to `MaxNominatorRewardedPerValidator`. This number is currently 512 on both Kusama and Polkadot. This PR tries to scale the nominator payout to an unlimited count in a multi-block fashion. Exposures are stored in pages, with each page capped to a certain number (`MaxExposurePageSize`). Starting out, this number would be the same as `MaxNominatorRewardedPerValidator`, but eventually, this number can be lowered through new runtime upgrades to limit the rewardable nominators per dispatched call instruction. The changes in the PR are backward compatible. ## How payouts work after this change Staking exposes two calls: 1) the existing `payout_stakers` and 2) `payout_stakers_by_page`. ### payout_stakers This remains backward compatible with no signature change. If for a given era a validator has multiple pages, they can call `payout_stakers` multiple times. The pages are executed in an ascending sequence and the runtime takes care of preventing double claims. ### payout_stakers_by_page Very similar to `payout_stakers` but also accepts an extra param `page_index`. An account can choose to pay out rewards only for an explicitly passed `page_index`. **Let's look at an example scenario** Suppose an active validator on Kusama has 1100 nominators and `MaxExposurePageSize` is set to 512 for era e. In order to pay out rewards to all nominators, the caller would need to call `payout_stakers` 3 times. - `payout_stakers(origin, stash, e)` => will pay the first 512 nominators. - `payout_stakers(origin, stash, e)` => will pay the second set of 512 nominators. - `payout_stakers(origin, stash, e)` => will pay the last set of 76 nominators. ... - `payout_stakers(origin, stash, e)` => calling it the 4th time would return an error `InvalidPage`. The above calls can also be replaced by `payout_stakers_by_page`, passing a `page_index` explicitly. ## Commission note Validator commission is paid out in chunks across all the pages, where each commission chunk is proportional to the total stake of the current page. This implies that the higher the total stake of a page, the higher the commission. If all the pages of a validator's single era are paid out, the sum of commission paid to the validator across all pages should be equal to what the commission would have been if we had a non-paged exposure. ### Migration Note Strictly speaking, we did not need to bump our storage version since there is no migration of storage in this PR. But it is still useful to mark a storage upgrade for the following reasons: - New storage items are introduced in this PR while some older storage items are deprecated. - For the next `HistoryDepth` eras, the exposure would be incrementally migrated to its corresponding paged storage item. - Runtimes using the staking pallet would strictly need to wait at least `HistoryDepth` eras with the current upgraded version (14) for the migration to complete. At some era `E` such that `E > era_at_which_V14_gets_into_effect + HistoryDepth`, we will upgrade to version X which will remove the deprecated storage items. 
In other words, it is a strict requirement that E<sub>x</sub> - E<sub>14</sub> > `HistoryDepth`, where E<sub>x</sub> = Era at which deprecated storages are removed from runtime, E<sub>14</sub> = Era at which runtime is upgraded to version 14. - For Polkadot and Kusama, there is a [tracker ticket](https://github.com/paritytech/polkadot-sdk/issues/433) to clean up the deprecated storage items. ### Storage Changes #### Added - ErasStakersOverview - ClaimedRewards - ErasStakersPaged #### Deprecated The following can be cleaned up after 84 eras which is tracked [here](https://github.com/paritytech/polkadot-sdk/issues/433). - ErasStakers. - ErasStakersClipped. - StakingLedger.claimed_rewards, renamed to StakingLedger.legacy_claimed_rewards. ### Config Changes - Renamed MaxNominatorRewardedPerValidator to MaxExposurePageSize. ### TODO - [x] Tracker ticket for cleaning up the old code after 84 eras. - [x] Add companion. - [x] Redo benchmarks before merge. - [x] Add Changelog for pallet_staking. - [x] Pallet should be configurable to enable/disable paged rewards. - [x] Commission payouts are distributed across pages. - [x] Review documentation thoroughly. - [x] Rename `MaxNominatorRewardedPerValidator` -> `MaxExposurePageSize`. - [x] NMap for `ErasStakersPaged`. - [x] Deprecate ErasStakers. - [x] Integrity tests. ### Followup issues [Runtime api for deprecated ErasStakers storage item](https://github.com/paritytech/polkadot-sdk/issues/426) --------- Co-authored-by: Javier Viola <[email protected]> Co-authored-by: Ross Bulat <[email protected]> Co-authored-by: command-bot <>
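As a rough sketch of the example scenario above (1100 nominators, page size 512), the paged calls could be composed like this; the call field names and the `Runtime`/`stash`/`era` bindings are assumptions for illustration:

```rust
// Hedged sketch: pay out era `era` for `stash` page by page, mirroring the example above.
use pallet_staking::Call as StakingCall;

let pages = [0u32, 1, 2]; // 512 + 512 + 76 nominators
let calls: Vec<StakingCall<Runtime>> = pages
    .into_iter()
    .map(|page| StakingCall::payout_stakers_by_page {
        validator_stash: stash.clone(),
        era,
        page,
    })
    .collect();
// A fourth page (page 3) would fail with the `InvalidPage` error.
```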
-
Dmitry Markin authored
This PR moves syncing-related code from `sc-network-common` to `sc-network-sync`. Unfortunately, some parts are tightly integrated with networking, so they were left in `sc-network-common` for now: 1. `SyncMode` in `common/src/sync.rs` (used in `NetworkConfiguration`). 2. `BlockAnnouncesHandshake`, `BlockRequest`, `BlockResponse`, etc. in `common/src/sync/message.rs` (used in `src/protocol.rs` and `src/protocol/message.rs`). More substantial refactoring is needed to decouple syncing and networking completely, including getting rid of the hardcoded sync protocol. ## Release notes Move syncing-related code from `sc-network-common` to `sc-network-sync`. Delete `ChainSync` trait as it's never used (the only implementation is accessed directly from `SyncingEngine` and exposes a lot of public methods that are not part of the trait). Some new trait(s) for syncing will likely be introduced as part of Sync 2.0 refactoring to represent syncing strategies.
-
Davide Galassi authored
-
- Oct 31, 2023
-
-
Lulu authored
Co-authored-by: Sergejs Kostjucenko <[email protected]> Co-authored-by: Bastian Köcher <[email protected]>
-
Adel Arja authored
# Description The `trigger_defensive` call has been added to the `root-testing` pallet. The idea is to have this pallet running on `Rococo/Westend` and use it to verify if the runtime monitoring works end-to-end. To accomplish this, `trigger_defensive` dispatches an event when it is called. Closes #1953 # Checklist - [x] My PR includes a detailed description as outlined in the "Description" section above - [ ] My PR follows the [labeling requirements](CONTRIBUTING.md#Process) of this project (at minimum one label for `T` required) - [ ] I have made corresponding changes to the documentation (if applicable) - [ ] I have added tests that prove my fix is effective or that my feature works (if applicable) --------- Signed-off-by: Oliver Tale-Yazdi <[email protected]> Co-authored-by: Oliver Tale-Yazdi <[email protected]>
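A hedged sketch of what such a dispatchable could look like (illustration only, not the pallet's actual code; the event name and weight are assumptions):

```rust
// Illustration: a root-only call that fires a defensive trigger and emits an event,
// so runtime monitoring can be verified end-to-end.
#[pallet::call_index(0)]
#[pallet::weight(Weight::zero())]
pub fn trigger_defensive(origin: OriginFor<T>) -> DispatchResult {
    ensure_root(origin)?;
    frame_support::defensive!("root-testing: trigger_defensive was called");
    Self::deposit_event(Event::DefensiveTestCall); // hypothetical event name
    Ok(())
}
```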
-
Davide Galassi authored
- Use the newly published [arkworks-extensions](https://github.com/paritytech/arkworks-extensions) crates. Hooks are internally defined to jump into the proper host functions. - Conditional compilation of each curve (gated by a feature with the curve name) - Separation into smaller host function sets, divided by curve (fits nicely with the previous point)
-
Rahul Subramaniyam authored
The change adds a test to show the failure scenario that caused #1812 to be rolled back (more context: https://github.com/paritytech/polkadot-sdk/issues/493#issuecomment-1772009924) Summary of the scenario: 1. The node has finished downloading up to block 1000 from the peers, from the canonical chain. 2. Peers are undergoing a re-org around this time. One of the peers has switched to a non-canonical chain and announces block 1001 from that chain. 3. The node downloads 1001 from the peer and tries to import it, which fails (as we don't have the parent block 1000 from the other chain). --------- Co-authored-by: Dmitry Markin <[email protected]>
-
Marcin S. authored
-
- Oct 30, 2023
-
-
Michal Kucharczyk authored
Switch from: https://crates.io/crates/tiny-bip39 to: https://crates.io/crates/bip39 Required for: https://github.com/paritytech/polkadot-sdk/pull/2044
-
- Oct 29, 2023
-
-
Davide Galassi authored
Currently the CLI `-h/--help` command output is almost unreadable as (for some commands) it: - doesn't provide a short brief of what the command does. - doesn't separate the options description into smaller paragraphs. - doesn't use a smart wrap strategy for lines longer than the number of columns in the terminal. Below are some pics taken with a 100-column-wide terminal. ## Short help (./node -h) ### Before ![20231028-174531-grim](https://github.com/paritytech/polkadot-sdk/assets/8143589/11b62c3c-dcd5-43f4-ac58-f1b299e3f4b9) ### After ![20231028-175041-grim](https://github.com/paritytech/polkadot-sdk/assets/8143589/dc08f6fd-b287-40fb-8b33-71a185922104) ## Long help (./node --help) ### Before ![20231028-175257-grim](https://github.com/paritytech/polkadot-sdk/assets/8143589/9ebdc0ae-54ee-4760-b873-a7e813523cb6) ### After ![20231028-175155-grim](https://github.com/paritytech/polkadot-sdk/assets/8143589/69cbe5cb-eb2f-46a5-8ebf-76c0cf8c4bad) --------- Co-authored-by: command-bot <>
-
- Oct 27, 2023
-
-
Sam Johnson authored
Updates `docify` to 0.2.6, which fixes a bug that prevented nesting `#[docify::export]` within sub-items of items that already have `#[docify::export]` attached to them. Release notes here: https://github.com/sam0x17/docify/releases/tag/v0.2.6 cc @ggwpez @Kianenigma
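For illustration, the kind of nesting that 0.2.6 fixes might look roughly like this (item names are made up):

```rust
// Hedged example: an exported item nested inside another item that already has
// `#[docify::export]` attached. This pattern did not work properly before 0.2.6.
#[docify::export]
mod outer_snippet {
    #[docify::export]
    pub fn inner_snippet() {
        // this body can now be embedded into docs independently of `outer_snippet`
    }
}
```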
-
juangirini authored
### Original PR https://github.com/paritytech/substrate/pull/14137 This PR brings in the first version of the "_`frame` umbrella crate_". This crate is intended to serve two purposes: 1. documentation 2. easier development with frame. Ideally, we want most users to be able to build a frame-based pallet and runtime using just `frame` (plus `scale-codec` and `scale-info`). The crate is not finalized and is not yet intended for external use. Therefore, the version is set to `0.0.1-dev`, this PR is `silent`, and the entire crate is hidden behind the `experimental` flag. The main intention in merging it early on is to be able to iterate on it in the rest of [`developer-hub`](https://github.com/paritytech/polkadot-sdk-docs/) efforts. The public API of the `frame` crate is at the moment as follows: ``` pub mod frame pub use frame::log pub use frame::pallet pub mod frame::arithmetic pub use frame::arithmetic::<<sp_arithmetic::*>> pub use frame::arithmetic::<<sp_arithmetic::traits::*>> pub mod frame::deps pub use frame::deps::codec pub use frame::deps::frame_executive pub use frame::deps::frame_support pub use frame::deps::frame_system pub use frame::deps::scale_info pub use frame::deps::sp_api pub use frame::deps::sp_arithmetic pub use frame::deps::sp_block_builder pub use frame::deps::sp_consensus_aura pub use frame::deps::sp_consensus_grandpa pub use frame::deps::sp_core pub use frame::deps::sp_inherents pub use frame::deps::sp_io pub use frame::deps::sp_offchain pub use frame::deps::sp_runtime pub use frame::deps::sp_std pub use frame::deps::sp_version pub mod frame::derive pub use frame::derive::CloneNoBound pub use frame::derive::Debug pub use frame::derive::Debug pub use frame::derive::DebugNoBound pub use frame::derive::Decode pub use frame::derive::Decode pub use frame::derive::DefaultNoBound pub use frame::derive::Encode pub use frame::derive::Encode pub use frame::derive::EqNoBound pub use frame::derive::PartialEqNoBound pub use frame::derive::RuntimeDebug pub use frame::derive::RuntimeDebugNoBound pub use frame::derive::TypeInfo pub use frame::derive::TypeInfo pub mod frame::prelude pub use frame::prelude::<<frame_support::pallet_prelude::*>> pub use frame::prelude::<<frame_system::pallet_prelude::*>> pub use frame::prelude::<<sp_std::prelude::*>> pub use frame::prelude::CloneNoBound pub use frame::prelude::Debug pub use frame::prelude::Debug pub use frame::prelude::DebugNoBound pub use frame::prelude::Decode pub use frame::prelude::Decode pub use frame::prelude::DefaultNoBound pub use frame::prelude::Encode pub use frame::prelude::Encode pub use frame::prelude::EqNoBound pub use frame::prelude::PartialEqNoBound pub use frame::prelude::RuntimeDebug pub use frame::prelude::RuntimeDebugNoBound pub use frame::prelude::TypeInfo pub use frame::prelude::TypeInfo pub use frame::prelude::frame_system pub mod frame::primitives pub use frame::primitives::BlakeTwo256 pub use frame::primitives::H160 pub use frame::primitives::H256 pub use frame::primitives::H512 pub use frame::primitives::Hash pub use frame::primitives::Keccak256 pub use frame::primitives::U256 pub use frame::primitives::U512 pub mod frame::runtime pub mod frame::runtime::apis pub use frame::runtime::apis::<<frame_system_rpc_runtime_api::*>> pub use frame::runtime::apis::<<sp_api::*>> pub use frame::runtime::apis::<<sp_block_builder::*>> pub use frame::runtime::apis::<<sp_consensus_aura::*>> pub use frame::runtime::apis::<<sp_consensus_grandpa::*>> pub use frame::runtime::apis::<<sp_offchain::*>> pub use 
frame::runtime::apis::<<sp_session::runtime_api::*>> pub use frame::runtime::apis::<<sp_transaction_pool::runtime_api::*>> pub use frame::runtime::apis::ApplyExtrinsicResult pub use frame::runtime::apis::CheckInherentsResult pub use frame::runtime::apis::InherentData pub use frame::runtime::apis::OpaqueMetadata pub use frame::runtime::apis::impl_runtime_apis pub use frame::runtime::apis::sp_api pub mod frame::runtime::prelude pub use frame::runtime::prelude::<<frame_executive::*>> pub use frame::runtime::prelude::ConstBool pub use frame::runtime::prelude::ConstI128 pub use frame::runtime::prelude::ConstI16 pub use frame::runtime::prelude::ConstI32 pub use frame::runtime::prelude::ConstI64 pub use frame::runtime::prelude::ConstI8 pub use frame::runtime::prelude::ConstU128 pub use frame::runtime::prelude::ConstU16 pub use frame::runtime::prelude::ConstU32 pub use frame::runtime::prelude::ConstU64 pub use frame::runtime::prelude::ConstU8 pub use frame::runtime::prelude::NativeVersion pub use frame::runtime::prelude::RuntimeVersion pub use frame::runtime::prelude::construct_runtime pub use frame::runtime::prelude::create_runtime_str pub use frame::runtime::prelude::derive_impl pub use frame::runtime::prelude::frame_support pub use frame::runtime::prelude::ord_parameter_types pub use frame::runtime::prelude::parameter_types pub use frame::runtime::prelude::runtime_version pub mod frame::runtime::testing_prelude pub use frame::runtime::testing_prelude::BuildStorage pub use frame::runtime::testing_prelude::Storage pub mod frame::runtime::types_common pub type frame::runtime::types_common::AccountId = <<frame::runtime::types_common::Signature as sp_runtime::traits::Verify>::Signer as sp_runtime::traits::IdentifyAccount>::AccountId pub type frame::runtime::types_common::BlockNumber = u32 pub type frame::runtime::types_common::BlockOf<T, Extra> = sp_runtime::generic::block::Block<sp_runtime::generic::header::Header<frame::runtime::types_common::BlockNumber, sp_runtime::traits::BlakeTwo256>, sp_runtime::generic::unchecked_extrinsic::UncheckedExtrinsic<sp_runtime::multiaddress::MultiAddress<frame::runtime::types_common::AccountId, ()>, <T as frame_system::pallet::Config>::RuntimeCall, frame::runtime::types_common::Signature, Extra>> pub type frame::runtime::types_common::OpaqueBlock = sp_runtime::generic::block::Block<sp_runtime::generic::header::Header<frame::runtime::types_common::BlockNumber, sp_runtime::traits::BlakeTwo256>, sp_runtime::OpaqueExtrinsic> pub type frame::runtime::types_common::Signature = sp_runtime::MultiSignature pub type frame::runtime::types_common::SystemSignedExtensionsOf<T> = (frame_system::extensions::check_non_zero_sender::CheckNonZeroSender<T>, frame_system::extensions::check_spec_version::CheckSpecVersion<T>, frame_system::extensions::check_tx_version::CheckTxVersion<T>, frame_system::extensions::check_genesis::CheckGenesis<T>, frame_system::extensions::check_mortality::CheckMortality<T>, frame_system::extensions::check_nonce::CheckNonce<T>, frame_system::extensions::check_weight::CheckWeight<T>) pub mod frame::testing_prelude pub use frame::testing_prelude::<<frame_executive::*>> pub use frame::testing_prelude::<<frame_system::mocking::*>> pub use frame::testing_prelude::BuildStorage pub use frame::testing_prelude::ConstBool pub use frame::testing_prelude::ConstI128 pub use frame::testing_prelude::ConstI16 pub use frame::testing_prelude::ConstI32 pub use frame::testing_prelude::ConstI64 pub use frame::testing_prelude::ConstI8 pub use frame::testing_prelude::ConstU128 pub 
use frame::testing_prelude::ConstU16 pub use frame::testing_prelude::ConstU32 pub use frame::testing_prelude::ConstU64 pub use frame::testing_prelude::ConstU8 pub use frame::testing_prelude::NativeVersion pub use frame::testing_prelude::RuntimeVersion pub use frame::testing_prelude::Storage pub use frame::testing_prelude::TestState pub use frame::testing_prelude::assert_err pub use frame::testing_prelude::assert_err_ignore_postinfo pub use frame::testing_prelude::assert_error_encoded_size pub use frame::testing_prelude::assert_noop pub use frame::testing_prelude::assert_ok pub use frame::testing_prelude::assert_storage_noop pub use frame::testing_prelude::construct_runtime pub use frame::testing_prelude::create_runtime_str pub use frame::testing_prelude::derive_impl pub use frame::testing_prelude::frame_support pub use frame::testing_prelude::frame_system pub use frame::testing_prelude::if_std pub use frame::testing_prelude::ord_parameter_types pub use frame::testing_prelude::parameter_types pub use frame::testing_prelude::runtime_version pub use frame::testing_prelude::storage_alias pub mod frame::traits pub use frame::traits::<<frame_support::traits::*>> pub use frame::traits::<<sp_runtime::traits::*>> ``` --- The road to full stabilization is - [ ] https://github.com/paritytech/polkadot-sdk/issues/127 - [ ] have a more intentional version bump, as opposed to the current bi weekly force-major-bump - [ ] revise the internal API of `frame`, especially what goes into the `prelude`s. - [ ] migrate all internal pallets and runtime to use `frame` --------- Co-authored-by: kianenigma <[email protected]> Co-authored-by: Kian Paimani <[email protected]> Co-authored-by: Oliver Tale-Yazdi <[email protected]> Co-authored-by: Francisco Aguirre <[email protected]>
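As a hedged sketch of the intended developer experience (the crate is experimental, so exact paths may change), a minimal pallet pulling everything from the umbrella crate could look roughly like this:

```rust
// Illustration only: building a bare pallet with just the `frame` umbrella crate.
#[frame::pallet]
pub mod pallet {
    use frame::prelude::*;

    #[pallet::pallet]
    pub struct Pallet<T>(_);

    #[pallet::config]
    pub trait Config: frame_system::Config {}
}
```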
-
Sam Johnson authored
Updates `docify` to 0.2.5, which fixes some indentation bugs and adds the new `#[docify::export_content]` attribute which can be used like regular `#[docify::export]` but will only export the _underlying contents_ of the item it is attached to, if applicable (otherwise it just behaves exactly like `#[docify::export]`). Release notes here: https://github.com/sam0x17/docify/releases/tag/v0.2.5 cc @Kianenigma
-
- Oct 25, 2023
-
-
Joshy Orndorff authored
This PR does not make any functional changes to the code. Rather, it restructures the dependency graph. Before this PR, the crate `polkadot-parachain-primitives` depended directly on the crate `frame-support`. This is wrong in principle because a parachain does not necessarily have anything to do with frame. This dependency was only for the `Weight` type, which was just a re-export from `sp-weights` anyway. So this PR changes the dependency to be directly on the much lighter `sp-weights`. --------- Co-authored-by: Joshy Orndorff <[email protected]> Co-authored-by: command-bot <>
-
- Oct 24, 2023
-
-
Marcin S. authored
-
Brian Anderson authored
Just keeping wasm-opt up to date. I don't see anything in the [binaryen changelog](https://github.com/WebAssembly/binaryen/blob/main/CHANGELOG.md) that should affect substrate. This release includes dwarf passes that were accidentally omitted from previous versions of the wasm-opt crate. I suspect this will not affect substrate as their omission hasn't been noticed until recently.
-
- Oct 23, 2023
-
-
ordian authored
Fixes #768.
-
Branislav Kontur authored
## Problem This PR addresses the issue with testnet AssetHub builds, which was discovered during the execution of `bot bench`. https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/4038738 ``` Compiling asset-hub-rococo-runtime-wasm v1.0.0 (/builds/parity/mirrors/polkadot-sdk/target/production/wbuild/asset-hub-rococo-runtime) warning: Linking globals named 'Core_version': symbol multiply defined! error: failed to load bitcode of module "rococo_runtime-8799ee884447805a.rococo_runtime.0bc572b8-cgu.0.rcgu.o": warning: `asset-hub-rococo-runtime-wasm` (lib) generated 1 warning error: could not compile `asset-hub-rococo-runtime-wasm` (lib) due to previous error; 1 warning emitted ``` https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/4038739 ``` Compiling asset-hub-westend-runtime-wasm v1.0.0 (/builds/parity/mirrors/polkadot-sdk/target/production/wbuild/asset-hub-westend-runtime) warning: Linking globals named 'Core_version': symbol multiply defined! error: failed to load bitcode of module "westend_runtime-86d7844430f97d5c.westend_runtime.b7678d03-cgu.0.rcgu.o": warning: `asset-hub-westend-runtime-wasm` (lib) generated 1 warning error: could not compile `asset-hub-westend-runtime-wasm` (lib) due to previous error; 1 warning emitted ``` ## Solution - Removed dependencies on `rococo-runtime` and `westend-runtime` introduced by [this PR](https://github.com/paritytech/polkadot-sdk/pull/1234/files#diff-a86375df98e04ca3cce1ea35c40257a222e2d5087f5f528ff33307678b78dc2dR534-R550). - Replaced `<rococo_runtime::Treasury as PalletInfoAccess>::index()` with `rococo_runtime_constants::TREASURY_PALLET_ID`. - Added `check_treasury_pallet_id` to the relay runtimes to ensure that the constant is aligned with the pallet id. - Added "Rococo Treasury" to the waived locations (that will not be charged fees in the executor) for `BridgeHubRococo` (to be aligned with AssetHubs). ## References [Full element discussion here](https://matrix.to/#/!JUeaZUiYbdrvzvtwSL:parity.io/$2PnjYMsWRjR7M3oOfGuRI0XkjdoqJLtRcAPVcDLuLVg?via=parity.io&via=web3.foundation). --------- Co-authored-by: command-bot <>
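A hedged sketch of the kind of `check_treasury_pallet_id` guard described above (the actual test in the runtime may differ):

```rust
// Illustration: assert the hard-coded constant stays in sync with the real pallet
// index, so dropping the runtime dependency cannot let the two silently drift apart.
#[test]
fn check_treasury_pallet_id() {
    use frame_support::traits::PalletInfoAccess;
    assert_eq!(
        <Treasury as PalletInfoAccess>::index() as u8,
        rococo_runtime_constants::TREASURY_PALLET_ID,
    );
}
```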
-
- Oct 22, 2023
-
-
Bastian Köcher authored
Changes the maximum instances count for `wasmtime` to `64`. It also only allows passing in a maximum of `32` for `--max-runtime-instances`, as `256` was way too big. With `64` instances in total and `32` configurable at maximum, there should be enough space to accommodate extra instances that may be required to be allocated ad hoc. --------- Co-authored-by: Koute <[email protected]>
-
- Oct 21, 2023
-
-
dependabot[bot] authored
Bumps [aes-gcm](https://github.com/RustCrypto/AEADs) from 0.10.2 to 0.10.3. <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/RustCrypto/AEADs/commit/7e82b01cd4901f6a35b5153536f11b87f5e4e622"><code>7e82b01</code></a> aes-gcm v0.10.3 (<a href="https://redirect.github.com/RustCrypto/AEADs/issues/552">#552</a>)</li> <li><a href="https://github.com/RustCrypto/AEADs/commit/b587b27270cc300d39c496a1ab06be80d72ac107"><code>b587b27</code></a> aes-gcm: avoid exposing plaintext on tag verification failure (<a href="https://redirect.github.com/RustCrypto/AEADs/issues/551">#551</a>)</li> <li><a href="https://github.com/RustCrypto/AEADs/commit/2209bcaa9edc65e9a60498e7ece5b50e66f32ebf"><code>2209bca</code></a> build(deps): bump actions/checkout from 3 to 4 (<a href="https://redirect.github.com/RustCrypto/AEADs/issues/548">#548</a>)</li> <li><a href="https://github.com/RustCrypto/AEADs/commit/035ec25362886735a0f44098f85ba0501a9b4038"><code>035ec25</code></a> build(deps): bump ascon from 0.3.1 to 0.4.0 (<a href="https://redirect.github.com/RustCrypto/AEADs/issues/545">#545</a>)</li> <li><a href="https://github.com/RustCrypto/AEADs/commit/e94ba5ab9fb2c7f3a18c92fb9dc8df14ac36f06b"><code>e94ba5a</code></a> xsalsa20poly1305: remove source code (<a href="https://redirect.github.com/RustCrypto/AEADs/issues/543">#543</a>)</li> <li><a href="https://github.com/RustCrypto/AEADs/commit/31240c1285144aeabef3e80eb9a1b4137dc2b43f"><code>31240c1</code></a> Update Cargo.lock</li> <li><a href="https://github.com/RustCrypto/AEADs/commit/40240c4a852df21048830de4eed4782c0fbddaef"><code>40240c4</code></a> Update Cargo.lock</li> <li><a href="https://github.com/RustCrypto/AEADs/commit/be4ea6fd3bcc1c8a5a23974a43e0fc35104d8cba"><code>be4ea6f</code></a> Update Cargo.lock</li> <li><a href="https://github.com/RustCrypto/AEADs/commit/2aef39e90d39c247cc89ccc31628468c9a9f60de"><code>2aef39e</code></a> Update Clippy version (<a href="https://redirect.github.com/RustCrypto/AEADs/issues/534">#534</a>)</li> <li><a href="https://github.com/RustCrypto/AEADs/commit/50710da0cbd47a4614b6d37119877f206c207e95"><code>50710da</code></a> Update Cargo.lock</li> <li>Additional commits viewable in <a href="https://github.com/RustCrypto/AEADs/compare/aes-gcm-v0.10.2...aes-gcm-v0.10.3">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=aes-gcm&package-manager=cargo&previous-version=0.10.2&new-version=0.10.3)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
-
- Oct 20, 2023
-
-
Bastian Köcher authored
The `xcm` crate was renamed to `staging-xcm` to be able to publish it to crates.io, as someone has squatted `xcm`. The problem with this rename is that the `TypeInfo` includes the crate name, which ultimately lands in the metadata. The metadata is consumed by downstream users like `polkadot-js` or people building on top of `polkadot-js`. These people are using the entire `path` to find the type in the type registry. Thus, their code would break as the type path would now be [`staging_xcm`, `VersionedXcm`] instead of [`xcm`, `VersionedXcm`]. This pull request fixes this by renaming the path segment `staging_xcm` to `xcm`. This requires: https://github.com/paritytech/scale-info/pull/197 --------- Co-authored-by: Francisco Aguirre <[email protected]>
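For illustration, the fix (relying on the `replace_segment` support added in the linked scale-info PR) might be applied roughly like this; the derive shown is a simplified stand-in for the real `VersionedXcm` definition:

```rust
// Hedged sketch: rewrite the crate-name path segment that ends up in the metadata,
// so downstream type registries keep seeing `xcm::VersionedXcm`.
use scale_info::TypeInfo;

#[derive(TypeInfo)]
#[scale_info(replace_segment("staging_xcm", "xcm"))]
pub enum VersionedXcm {
    // variants elided for the example
}
```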
-
cheme authored
Use a more secure seed for the cache's hash sets.
-
- Oct 19, 2023
-
-
Sebastian Kunert authored
Until now, Prometheus metrics were not exposed by the minimal relay chain node. Also includes slight cleanups.
-
- Oct 18, 2023
-
-
Keith Yeung authored
Combination of paritytech/polkadot#7005, its addon PR paritytech/polkadot#7585 and its companion paritytech/cumulus#2433. This PR introduces a new XcmFeesToAccount struct which implements the `FeeManager` trait, and assigns this struct as the `FeeManager` in the XCM config for all runtimes. The struct simply deposits all fees handled by the XCM executor to a specified account. In all runtimes, the specified account is configured as the treasury account. XCM __delivery__ fees are now being introduced (unless the root origin is sending a message to a system parachain on behalf of the originating chain). # Note for reviewers Most file changes are tests that had to be modified to account for the new fees. Main changes are in: - cumulus/pallets/xcmp-queue/src/lib.rs <- To make it track the delivery fees exponential factor - polkadot/xcm/xcm-builder/src/fee_handling.rs <- Added. Has the FeeManager implementation - All runtime xcm_config files <- To add the FeeManager to the XCM configuration # Important note After this change, instructions that create and send a new XCM (Query*, Report*, ExportMessage, InitiateReserveWithdraw, InitiateTeleport, DepositReserveAsset, TransferReserveAsset, LockAsset and RequestUnlock) will require the corresponding origin account in the origin register to pay for transport delivery fees, and the onward message will fail to be sent if the origin account does not have the required amount. This delivery fee is on top of what we already collect as tx fees in pallet-xcm and XCM BuyExecution fees! Wallet UIs that want to expose the new delivery fee can do so using the formula: ``` delivery_fee_factor * (base_fee + encoded_msg_len * per_byte_fee) ``` where the delivery fee factor can be obtained from the corresponding pallet based on which transport you are using (UMP, HRMP or bridges), the base fee is a constant, the encoded message length from the message itself and the per byte fee is the same as the configured per byte fee for txs (i.e. `TransactionByteFee`). --------- Co-authored-by: Branislav Kontur <[email protected]> Co-authored-by: joe petrowski <[email protected]> Co-authored-by: Giles Cope <[email protected]> Co-authored-by: command-bot <> Co-authored-by: Francisco Aguirre <[email protected]> Co-authored-by: Liam Aharon <[email protected]> Co-authored-by: Kian Paimani <[email protected]>
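A minimal sketch of the quoted wallet-side formula, with all inputs as plain placeholder integers (on-chain the factor is a fixed-point multiplier and the fees come from the transport's configuration):

```rust
// Hedged sketch of: delivery_fee_factor * (base_fee + encoded_msg_len * per_byte_fee)
fn estimate_delivery_fee(
    delivery_fee_factor: u128, // from the transport pallet (UMP, HRMP or bridge)
    base_fee: u128,            // constant configured in the runtime
    per_byte_fee: u128,        // same as the configured `TransactionByteFee`
    encoded_msg_len: u128,     // length of the encoded XCM
) -> u128 {
    delivery_fee_factor * (base_fee + encoded_msg_len * per_byte_fee)
}
```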
-
Adrian Catangiu authored
cumulus: add asset-hub-rococo runtime based on asset-hub-kusama and add asset-bridging support to it (#1215) This commit adds a dedicated Rococo Asset Hub runtime so we can test new features here before merging them into Kusama Asset Hub. It also adds one such feature: asset transfer over a bridge (Rococo AssetHub <> Wococo AssetHub) - clone `asset-hub-kusama-runtime` -> `asset-hub-rococo-runtime` - make it use Rococo primitives, names, assets, constants, etc - add asset-transfer-over-bridge support to Rococo AssetHub <> Wococo AssetHub Fixes #1128 --------- Co-authored-by: Branislav Kontur <[email protected]> Co-authored-by: joe petrowski <[email protected]> Co-authored-by: Francisco Aguirre <[email protected]>
-
- Oct 16, 2023
-
-
Ignacio Palacios authored
Make System Parachains trusted teleporters of each other. Migration of https://github.com/paritytech/cumulus/pull/2842 --------- Co-authored-by: command-bot <> Co-authored-by: joe petrowski <[email protected]>
-
Davide Galassi authored
- Removal of Arkworks unit tests. These tests were just testing the upstream arkworks implementation, which should be assumed correct. This is not the place to test well-known dependencies. - Removal of some over-engineering. We just store the calls to Arkworks in one file. Per-curve sources are not required. - Docs formatting --- I also took the opportunity to bump the `bandersnatch-vrfs` crate revision, internally providing some new shiny stuff.
-
- Oct 15, 2023
-
-
Julian Eager authored
closes #695 Could potentially be helpful for preserving caches when applicable, as discussed in #685 kusama address: FvpsvV1GQAAbwqX6oyRjemgdKV11QU5bXsMg9xsonD1FLGK
-
- Oct 14, 2023
-
-
juangirini authored
-
- Oct 11, 2023
-
-
Marcin S. authored
-
- Oct 10, 2023
-
-
Sam Johnson authored
# Description Upgrades `macro_magic` to 0.4.3, which introduces the ability to have `export_tokens` use the same name as the underlying item for its auto-generated macro name. Ultimately this will allow for better dev ux in our derive_impl feature.
-
Liam Aharon authored
Original PR https://github.com/paritytech/substrate/pull/14746 --- ## Fixing stall ### Introduction I experienced an apparent stall downloading state from `https://rococo-try-runtime-node.parity-chains.parity.io:443`, which was having networking difficulties and only responding to my JSONRPC requests with 50-200KB/s of bandwidth. This PR fixes the issue causing the stall, and generally improves the performance of remote-ext when it downloads state by greatly reducing the chances of a timeout occurring. ### Description Introduces a new `REQUEST_DURATION_TARGET` constant and modifies `get_storage_data_dynamic_batch_size` to - Increase or decrease the batch size of the next request depending on whether the elapsed time of the last request was greater or less than the target - Reset the batch size to 1 if the request times out This fixes an issue on slow connections that can otherwise cause multiple timeouts and a stalled download when: 1. The batch size increases rapidly as remote-ext downloads keys with small associated storage values 2. remote-ext tries to process a large series of subsequent keys all with extremely large associated storage values (Rococo has a series of keys 1-5MB large) 3. The huge storage values download for 5 minutes until the request times out 4. The partially downloaded keys are thrown out and remote-ext tries again with a smaller batch size, but the batch size is still far too large and takes 5 minutes to be reduced again 5. The download will be essentially stalled for many hours while the above step cycles After this PR, the request size will - Not grow as large to begin with, as it is regulated downwards as the request duration exceeds the target - Drop immediately to 1 if the request times out. A timeout indicates the keys next in line to download have extremely large storage values compared to previously downloaded keys, and we need to reset the batch size to figure out what our new ideal batch size is. By not resetting down to 1, we risk the next request timing out again. ## Reducing memory As suggested by @bkchr, I adjusted `get_storage_data_dynamic_batch_size` from being recursive to a loop, which allows removing a bunch of clones that were chewing through a lot of memory. I noticed it was actually using up to 50GB of swap previously when downloading Polkadot keys on a slow connection, because it needed to recurse and clone a lot. After this change it uses only ~1.5GB memory.
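The adjustment logic described above as a standalone hedged sketch (the real `get_storage_data_dynamic_batch_size` differs in details such as the scaling factors and the target value):

```rust
// Hedged sketch of the batch-size adaptation: grow when the last request beat the
// duration target, shrink when it exceeded it, and reset to 1 on timeout.
use std::time::Duration;

const REQUEST_DURATION_TARGET: Duration = Duration::from_secs(15); // assumed value

fn next_batch_size(current: usize, last_request: Option<Duration>) -> usize {
    match last_request {
        // Timed out: the upcoming keys likely hold very large values, start small again.
        None => 1,
        // Too slow: shrink the next batch.
        Some(elapsed) if elapsed > REQUEST_DURATION_TARGET => (current / 2).max(1),
        // Faster than the target: grow the next batch.
        Some(_) => current.saturating_mul(2),
    }
}
```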
-
Oliver Tale-Yazdi authored
Adds a warning to FRAME pallets when a function argument that starts with `_` is used in the weight formula. In most cases this is an error, since the weight witness needs to be checked. Example: ```rust #[pallet::call_index(0)] #[pallet::weight(T::SystemWeightInfo::remark(_remark.len() as u32))] pub fn remark(_origin: OriginFor<T>, _remark: Vec<u8>) -> DispatchResultWithPostInfo { Ok(().into()) } ``` Produces this warning: ```pre warning: use of deprecated constant `pallet::warnings::UncheckedWeightWitness_0::_w`: It is deprecated to not check weight witness data. Please instead ensure that all witness data for weight calculation is checked before usage. For more info see: <https://github.com/paritytech/polkadot-sdk/pull/1818> --> substrate/frame/system/src/lib.rs:424:40 | 424 | pub fn remark(_origin: OriginFor<T>, _remark: Vec<u8>) -> DispatchResultWithPostInfo { | ^^^^^^^ | = note: `#[warn(deprecated)]` on by default ``` Can be suppressed like this, since in this case it is legit: ```rust #[pallet::call_index(0)] #[pallet::weight(T::SystemWeightInfo::remark(remark.len() as u32))] pub fn remark(_origin: OriginFor<T>, remark: Vec<u8>) -> DispatchResultWithPostInfo { let _ = remark; // We don't need to check the weight witness. Ok(().into()) } ``` Changes: - Add warning on unchecked weight witness - Respect `subkeys` limit in `System::kill_prefix` - Fix HRMP pallet and other warnings - Update `proc_macro_warning` dependency - Delete random folder `substrate/src/src`
🙈 - Adding Prdoc --------- Signed-off-by: Oliver Tale-Yazdi <[email protected]> Co-authored-by: joe petrowski <[email protected]>
-
- Oct 09, 2023
-
-
David Emett authored
See #1345, <https://github.com/paritytech/substrate/pull/14207>. This adds all the necessary mixnet components, and puts them together in the "kitchen-sink" node/runtime. The components added are: - A pallet (`frame/mixnet`). This is responsible for determining the current mixnet session and phase, and the mixnodes to use in each session. It provides a function that validators can call to register a mixnode for the next session. The logic of this pallet is very similar to that of the `im-online` pallet. - A service (`client/mixnet`). This implements the core mixnet logic, building on the `mixnet` crate. The service communicates with other nodes using notifications sent over the "mixnet" protocol. - An RPC interface. This currently only supports sending transactions over the mixnet. --------- Co-authored-by: David Emett <[email protected]> Co-authored-by: Javier Viola <[email protected]>
-
- Oct 07, 2023
-
-
Muharem Ismailov authored
### Summary This PR introduces new dispatchables to the treasury pallet, allowing spends of various asset types. The enhanced features of the treasury pallet, in conjunction with the asset-rate pallet, are set up and enabled for Westend and Rococo. ### Westend and Rococo runtimes The Polkadot/Kusama/Rococo Treasury can accept proposals for `spends` of various asset kinds by specifying the asset's location and ID. #### Treasury Instance New Dispatchables: - `spend(AssetKind, AssetBalance, Beneficiary, Option<ValidFrom>)` - propose and approve a spend; - `payout(SpendIndex)` - pay out an approved spend or retry a failed payout; - `check_payment(SpendIndex)` - check the status of a payout; - `void_spend(SpendIndex)` - void a previously approved spend; > existing spend dispatchable renamed to spend_local In this context, the `AssetKind` parameter contains the asset's location and its corresponding `asset_id`, for example: `USDT` on `AssetHub`, ``` rust location = MultiLocation(0, X1(Parachain(1000))) asset_id = MultiLocation(0, X2(PalletInstance(50), GeneralIndex(1984))) ``` The `Beneficiary` parameter is a `MultiLocation` in the context of the asset's location, for example ``` rust // the Fellowship salary pallet's location / account FellowshipSalaryPallet = MultiLocation(1, X2(Parachain(1001), PalletInstance(64))) // or custom `AccountId` Alice = MultiLocation(0, AccountId32(network: None, id: [1,...])) ``` The `AssetBalance` represents the amount of the `AssetKind` to be transferred to the `Beneficiary`. For permission checks, the asset amount is converted to the native amount and compared against the maximum spendable amount determined by the commanding spend origin. The `spend` dispatchable allows for batching spends with different `ValidFrom` arguments, enabling milestone-based spending. If the expectations tied to an approved spend are not met, it is possible to void the spend later using the `void_spend` dispatchable. The Asset Rate pallet provides the conversion rate from the `AssetKind` to the native balance. #### Asset Rate Instance Dispatchables: - `create(AssetKind, Rate)` - initialize a conversion rate to the native balance for the given asset - `update(AssetKind, Rate)` - update the conversion rate to the native balance for the given asset - `remove(AssetKind)` - remove an existing conversion rate to the native balance for the given asset The pallet's dispatchables can be executed by the Root or Treasurer origins. ### Treasury Pallet The Treasury pallet can accept proposals for `spends` of various asset kinds and pay them out through the implementation of the `Pay` trait. New Dispatchables: - `spend(Config::AssetKind, AssetBalance, Config::Beneficiary, Option<ValidFrom>)` - propose and approve a spend; - `payout(SpendIndex)` - pay out an approved spend or retry a failed payout; - `check_payment(SpendIndex)` - check the status of a payout; - `void_spend(SpendIndex)` - void a previously approved spend; > existing spend dispatchable renamed to spend_local The parameter types of the `spend` dispatchable are exposed via the pallet's `Config` and allow proposing and accepting a spend of a certain amount. An approved spend can be claimed via `payout` within the `Config::SpendPeriod`. Clients provide an implementation of the `Pay` trait which can pay an asset of the `AssetKind` to the `Beneficiary` in `AssetBalance` units. The implementation of the Pay trait might not have an immediate final payment status, for example if implemented over `XCM` and the actual transfer happens on a remote chain. 
The `check_status` dispatchable can be executed to update the spend's payment state and retry the `payout` if the payment has failed. --------- Co-authored-by: joe petrowski <[email protected]> Co-authored-by: command-bot <>
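Putting the pieces above together, a hedged sketch of composing a `spend` for the USDT-on-AssetHub example (the tuple used for `AssetKind`, the boxing and the call paths are assumptions for illustration; the concrete types depend on the runtime's `Config`):

```rust
// Illustration only: propose a spend of 10 USDT (6 decimals) on AssetHub to a plain
// AccountId32 beneficiary, claimable immediately (no `valid_from`).
use xcm::v3::prelude::*;

let asset_kind = (
    MultiLocation::new(0, X1(Parachain(1000))),                        // asset location: AssetHub
    MultiLocation::new(0, X2(PalletInstance(50), GeneralIndex(1984))), // asset id: USDT
);
let beneficiary = MultiLocation::new(0, X1(AccountId32 { network: None, id: [1u8; 32] }));

let call = pallet_treasury::Call::<Runtime>::spend {
    asset_kind: Box::new(asset_kind),   // concrete type given by `Config::AssetKind`
    amount: 10_000_000,
    beneficiary: Box::new(beneficiary), // concrete type given by `Config::Beneficiary`
    valid_from: None,
};
```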
-
- Oct 06, 2023
-
-
dependabot[bot] authored
Bumps the known_good_semver group with 1 update: [syn](https://github.com/dtolnay/syn). <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/dtolnay/syn/releases">syn's releases</a>.</em></p> <blockquote> <h2>2.0.38</h2> <ul> <li>Fix <em>"method 'peek' has an incompatible type for trait"</em> error when defining <code>bool</code> as a custom keyword (<a href="https://redirect.github.com/dtolnay/syn/issues/1518">#1518</a>, thanks <a href="https://github.com/Vanille-N"><code>@Vanille-N</code></a>)</li> </ul> </blockquote> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/dtolnay/syn/commit/43632bfb6c78ee1f952645a268ab1ac4af162977"><code>43632bf</code></a> Release 2.0.38</li> <li><a href="https://github.com/dtolnay/syn/commit/abd2c214b44da64a5e420d72919308300eebc23d"><code>abd2c21</code></a> Merge pull request <a href="https://redirect.github.com/dtolnay/syn/issues/1518">#1518</a> from Vanille-N/master</li> <li><a href="https://github.com/dtolnay/syn/commit/6701e6077e15013ef34b15e3ffdae2657e499d83"><code>6701e60</code></a> Absolute path to <code>bool</code> in <code>custom_punctuation.rs</code></li> <li><a href="https://github.com/dtolnay/syn/commit/7313d242398111423f046386aa0a75548f63d236"><code>7313d24</code></a> Resolve single_match_else pedantic clippy lint in code generator</li> <li><a href="https://github.com/dtolnay/syn/commit/67ab64f3c09e17b23493c7cda498e7edb8830f21"><code>67ab64f</code></a> Include unexpected token in the test failure message</li> <li><a href="https://github.com/dtolnay/syn/commit/137ae33486de3f2652487f8f64436ad1429df496"><code>137ae33</code></a> Check no remaining token after the first literal</li> <li><a href="https://github.com/dtolnay/syn/commit/258e9e8a11d188c1ee1ffb2b069819239999f9ac"><code>258e9e8</code></a> Ignore single_match_else pedantic clippy lint in test</li> <li><a href="https://github.com/dtolnay/syn/commit/92fd50ee8cb52968d9c66fbe6d67638c1f838e26"><code>92fd50e</code></a> Test docs.rs documentation build in CI</li> <li>See full diff in <a href="https://github.com/dtolnay/syn/compare/2.0.37...2.0.38">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=syn&package-manager=cargo&previous-version=2.0.37&new-version=2.0.38)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
-