- Apr 17, 2024
-
-
Oliver Tale-Yazdi authored
Preparation for https://github.com/paritytech/polkadot-sdk/pull/3935

Changes:
- Add some `default-features = false` for cases where a crate and its dependency both support no-std builds.
- Shuffle files around in some benchmarking-only crates. These conditionally disabled the `cfg_attr` for no-std and pulled in libstd. Example [here](https://github.com/ggwpez/zepter/pull/95). The actual logic is moved into an `inner.rs` to preserve the no-std capability of the crate in case the benchmarking feature is disabled.
- Add some `use sp_std::vec` where needed.
- Remove some `optional = true` in cases where the dependency was not optional.
- Removed one superfluous `cfg_attr(not(feature = "std"), no_std..`.

All in all this should be a logical no-op.

---------

Signed-off-by: Oliver Tale-Yazdi <[email protected]>
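A minimal sketch of the `inner.rs` pattern described above (file and feature names are illustrative, not taken from a specific crate): the crate root keeps the unconditional `no_std` attribute, and only the benchmarking feature pulls in the actual logic.

```rust
// lib.rs of a hypothetical benchmarking-only crate.
// The `no_std` attribute is no longer tied to the benchmarking feature,
// so disabling benchmarks does not pull libstd back in.
#![cfg_attr(not(feature = "std"), no_std)]

// The actual logic lives in `inner.rs` and is only compiled when the
// benchmarking feature is enabled.
#[cfg(feature = "runtime-benchmarks")]
mod inner;

#[cfg(feature = "runtime-benchmarks")]
pub use inner::*;
```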
-
Alexandru Vasile authored
This PR ensures that the reported pruned blocks are unique. While at it, ensure that the best block event is properly generated when the last best block is a fork that will be pruned in the future. To achieve this, the chainHead keeps an LRU set of reported pruned blocks to ensure the following are not reported twice:

```bash
finalized -> block 1 -> block 2 -> block 3
                     -> block 2 -> block 4 -> block 5
          -> block 1 -> block 2_f -> block 6 -> block 7 -> block 8
```

When block 7 is finalized, the branch [block 2; block 3] is reported as pruned. When block 8 is finalized, the branch [block 2; block 4; block 5] should be reported as pruned; however, block 2 was already reported as pruned at the previous step. This is a side-effect of the pruned blocks being reported at level N - 1. For example, if all pruned forks were reported at their first encounter (when block 6 is finalized we know that block 3 and block 5 are stale), we would not need the LRU cache.

cc @paritytech/subxt-team

Closes https://github.com/paritytech/polkadot-sdk/issues/3658

---------

Signed-off-by: Alexandru Vasile <[email protected]>
Co-authored-by: Sebastian Kunert <[email protected]>
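A minimal sketch of the de-duplication idea (illustrative only, not the actual chainHead implementation): keep a bounded set of already-reported pruned hashes and only emit the ones not seen before.

```rust
use std::collections::{HashSet, VecDeque};

/// Bounded "already reported" set: the oldest entries are evicted once
/// `capacity` is exceeded, roughly what an LRU set provides.
struct ReportedPruned<H> {
    capacity: usize,
    order: VecDeque<H>,
    seen: HashSet<H>,
}

impl<H: std::hash::Hash + Eq + Clone> ReportedPruned<H> {
    fn new(capacity: usize) -> Self {
        Self { capacity, order: VecDeque::new(), seen: HashSet::new() }
    }

    /// Keep only the pruned blocks that were not reported before.
    fn filter_unreported(&mut self, pruned: impl IntoIterator<Item = H>) -> Vec<H> {
        let mut fresh = Vec::new();
        for hash in pruned {
            // `insert` returns false if the hash was already reported.
            if self.seen.insert(hash.clone()) {
                self.order.push_back(hash.clone());
                fresh.push(hash);
                if self.order.len() > self.capacity {
                    if let Some(oldest) = self.order.pop_front() {
                        self.seen.remove(&oldest);
                    }
                }
            }
        }
        fresh
    }
}
```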
-
thiolliere authored
Improve doc:
* the pallet macro is actually referring to 2 places, the module and the struct placeholder, but doesn't really clarify it (I should have named the latter just `pallet_struct` or something, but it is a bit late)
* The doc of `with_default` is a bit confusing too IMO.

CC @Kianenigma

---------

Co-authored-by: Liam Aharon <[email protected]>
-
Muharem Ismailov authored
Introduce `PalletId` as an additional seed parameter for the pool's account id derivation. The PR also introduces the `pallet_asset_conversion_ops` pallet with a call to migrate a given pool to the new account. Additionally, it introduces the `fungibles::lifetime::ResetTeam` and `fungible::lifetime::Refund` traits to facilitate the migration of pools. --------- Co-authored-by: command-bot <>
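A hedged sketch of the kind of derivation an extra `PalletId` seed enables (the concrete id value and the pool id type are made up for illustration, not the pallet's actual scheme):

```rust
use frame_support::PalletId;
use sp_runtime::{traits::AccountIdConversion, AccountId32};

/// Illustrative pallet id; in practice this is a runtime configuration item.
const POOLS_PALLET_ID: PalletId = PalletId(*b"py/ascon");

/// Derive a deterministic pool account from the pallet id plus a pool id seed,
/// so different pallet ids yield disjoint sets of pool accounts.
fn pool_account(pool_id: u32) -> AccountId32 {
    POOLS_PALLET_ID.into_sub_account_truncating(pool_id)
}
```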
-
Sergej Sakac authored
This PR introduces changes enabling the transfer of coretime regions via XCM. TL;DR: There are two primary issues that are resolved in this PR:

1. The `mint` and `burn` functions were not implemented for coretime regions. These operations are essential for moving assets to and from the XCM holding register.
2. The transfer of non-fungible assets through XCM was previously disallowed. This was due to incorrectly benchmarking non-fungible asset transfers via XCM, which led to assigning it a weight of `Weight::Max`, effectively preventing its execution.

### `mint_into` and `burn` implementation

This PR addresses the issue with cross-chain transferring regions back to the Coretime chain. Remote reserve transfers are performed by withdrawing and depositing the asset to and from the holding registry. This requires the asset to support burning and minting functionality.

This PR adds burning and minting; however, they work a bit differently than usual so that the associated region record is not lost when burning. Instead of removing all the data, burning will set the owner of the region to `None`, and when minting it back, it will set it to an actual value. So, when cross-chain transferring, withdrawing into the registry will remove the region from its original owner, and depositing it from the registry will set its owner to another account.

This was originally implemented in PR #3455, however we decided to move all of it into this single PR (https://github.com/paritytech/polkadot-sdk/pull/3455#discussion_r1547324892).

### Fixes made in this PR

- Update the `XcmReserveTransferFilter` on the Coretime chain, since it is meant as a reserve chain for coretime regions.
- Update the XCM benchmark to use `AssetTransactor` instead of assuming `pallet-balances` for fungible transfers.
- Update the XCM benchmark to properly measure weight consumption for non-fungible reserve asset transfers. At the moment, reserve transfers via the extrinsic do not work since their weight is set to `Weight::max()`.

Closes: https://github.com/paritytech/polkadot-sdk/issues/865

---------

Co-authored-by: Branislav Kontur <[email protected]>
Co-authored-by: Francisco Aguirre <[email protected]>
Co-authored-by: Dónal Murray <[email protected]>
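A simplified sketch of the burn/mint behaviour described above (the types and storage here are stand-ins, not the pallet-broker implementation): burning only clears the owner, so the region record survives while the asset sits in the XCM holding register.

```rust
use std::collections::HashMap;

/// Stand-in for a coretime region record; only the owner is cleared on burn.
#[derive(Clone)]
struct RegionRecord {
    /// `None` while the region is held by XCM (i.e. "burned" locally).
    owner: Option<u64>,
    /// Other metadata (e.g. the region's end) must survive the burn.
    end: u32,
}

struct Regions(HashMap<u128, RegionRecord>);

impl Regions {
    /// "Burn": keep the record, just detach it from its owner.
    fn burn(&mut self, region_id: u128) {
        if let Some(record) = self.0.get_mut(&region_id) {
            record.owner = None;
        }
    }

    /// "Mint": re-attach the surviving record to the new owner on deposit.
    fn mint_into(&mut self, region_id: u128, who: u64) {
        if let Some(record) = self.0.get_mut(&region_id) {
            record.owner = Some(who);
        }
    }
}
```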
-
Branislav Kontur authored
This PR adjusts `xcm-bridge-hub-router` to be usable in the chain of routers when a `NotApplicable` error occurs. Closes: https://github.com/paritytech/polkadot-sdk/issues/4133

## TODO

- [ ] backport to polkadot-sdk 1.10.0 crates.io release
-
Alexandru Vasile authored
This PR sends the GrandpaNeighbor packet to light clients in the same way it is sent to full nodes. Previously, a light client would receive a GrandpaNeighbor packet only when the voter set changed. Related to: https://github.com/paritytech/polkadot-sdk/issues/4120

Next steps:
- [ ] check with lightclient

---------

Signed-off-by: Alexandru Vasile <[email protected]>
-
- Apr 16, 2024
-
-
PG Herveou authored
Add verify statements to ensure that benchmarked calls do not revert. Also updated [benchmarks](https://weights.tasty.limo/compare?unit=time&ignore_errors=true&threshold=10&method=asymptotic&repo=polkadot-sdk&old=master&new=pg/verify-benchmarks&path_pattern=substrate%2Fframe%2Fcontracts%2Fsrc%2Fweights.rs) --------- Co-authored-by: command-bot <>
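A generic sketch of what such a `verify` block looks like in FRAME's benchmarking macro (the `do_something` call and `Something` storage item are hypothetical; this is not the pallet-contracts code):

```rust
use frame_benchmarking::{benchmarks, whitelisted_caller};
use frame_system::RawOrigin;

benchmarks! {
    do_something {
        let caller: T::AccountId = whitelisted_caller();
    }: _(RawOrigin::Signed(caller), 42u32)
    verify {
        // If the call reverted, this state change would be missing and the
        // benchmark would fail instead of silently recording a bogus weight.
        assert_eq!(Something::<T>::get(), Some(42u32));
    }
}
```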
-
Muharem Ismailov authored
Introduce types to define 1:1 balance conversion for different relative asset ids/locations of the native asset.

Example: the native asset on Asset Hub, presented as the `VersionedLocatableAsset` type in the context of the Relay Chain, is

```
{
  location: (0, Parachain(1000)),
  asset_id: (1, Here),
}
```

and its balance should be converted 1:1 by implementations of the `ConversionToAssetBalance` trait.

---------

Co-authored-by: Branislav Kontur <[email protected]>
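A rough sketch of the 1:1 idea (the trait below is a local stand-in mirroring the shape of a `ConversionToAssetBalance`-style API, and the asset id enum is illustrative): whichever relative representation of the native asset is used, the amount passes through unchanged.

```rust
/// Local stand-in for a "convert a native balance into an asset balance" trait.
trait ToAssetBalance<Balance, AssetId> {
    type Error;
    fn to_asset_balance(balance: Balance, asset_id: AssetId) -> Result<Balance, Self::Error>;
}

/// The relative ways the native asset may be addressed (illustrative).
enum NativeAssetId {
    /// e.g. `(1, Here)` as seen from a parachain such as Asset Hub.
    FromParachain,
    /// e.g. `(0, Here)` as seen from the Relay Chain itself.
    Local,
}

/// 1:1 conversion: the native asset's amount is identical no matter which
/// relative location/id was used to refer to it.
struct UnityConversion;

impl ToAssetBalance<u128, NativeAssetId> for UnityConversion {
    type Error = ();
    fn to_asset_balance(balance: u128, _asset_id: NativeAssetId) -> Result<u128, ()> {
        Ok(balance)
    }
}
```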
-
Oliver Tale-Yazdi authored
Updating the prdoc doc file to be a bit more useful for new contributors and adding a SemVer section. --------- Signed-off-by: Oliver Tale-Yazdi <[email protected]>
-
Maksym H authored
Follow-up after https://github.com/paritytech/polkadot-sdk/pull/3431. Per https://stackoverflow.com/questions/63188674/github-actions-detect-author-association and https://michaelheap.com/github-actions-check-permission/, it looks like only checking that the author is NOT a MEMBER is not enough; a check that they are also not a CONTRIBUTOR should be included.
-
Javyer authored
Follow-up to #3431. Added an API check to verify that there are pre-existing approvals in the PR before dismissing reviews and posting a message.
-
Oliver Tale-Yazdi authored
Changes:
- Saturate in the input validation of the drop history function of pallet-broker.

---------

Signed-off-by: Oliver Tale-Yazdi <[email protected]>
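An illustrative example of the saturating pattern (hypothetical helper, not the pallet-broker code): caller-supplied input must not be able to underflow the validation arithmetic.

```rust
/// `when` comes straight from the caller, so the subtraction saturates instead
/// of wrapping (release) or panicking (debug) on nonsensical input.
fn old_enough_to_drop(current_timeslice: u32, when: u32, retention_period: u32) -> bool {
    current_timeslice.saturating_sub(when) >= retention_period
}
```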
-
Alin Dima authored
Will make https://github.com/paritytech/polkadot-sdk/pull/4035 easier to review (the mentioned PR already does this move so the diff will be clearer). Also called out as part of: https://github.com/paritytech/polkadot-sdk/pull/3233#discussion_r1490867383
-
- Apr 15, 2024
-
-
thiolliere authored
`LiveAsset` is an error to be returned when an asset is not supposed to be live, and `AssetNotLive` is an error to be returned when an asset is supposed to be live. I don't think frozen qualifies as live.
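An illustrative status check (hypothetical types, not the pallet-assets code) to make the distinction concrete:

```rust
enum AssetStatus { Live, Frozen, Destroying }

/// A frozen asset does not count as live, so the matching error is
/// `AssetNotLive` rather than `LiveAsset`.
fn ensure_live(status: &AssetStatus) -> Result<(), &'static str> {
    match status {
        AssetStatus::Live => Ok(()),
        _ => Err("AssetNotLive"),
    }
}
```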
-
Dónal Murray authored
The first test proves that parachains who were migrated over on a legacy lease can renew without downtime. The exception is if their lease expires in period 0 - aka within `region_length` timeslices after `start_sales` is called. The second test is designed such that it passes if the issue exists and should be fixed. This will require an intervention on Kusama to add these renewals to storage as it is too tight to schedule a runtime upgrade before the start_sales call. All leases will still have at least two full regions of coretime.
-
Alexandru Vasile authored
This PR ensures the proper logging target (i.e. `libp2p_tcp` or `beefy`) is displayed. The issue was introduced in https://github.com/paritytech/polkadot-sdk/pull/4059, which removed the normalized metadata of logs. From the [documentation](https://docs.rs/tracing-log/latest/tracing_log/trait.NormalizeEvent.html#tymethod.normalized_metadata):

> In tracing-log, an Event produced by a log (through [AsTrace](https://docs.rs/tracing-log/latest/tracing_log/trait.AsTrace.html)) has a hard-coded "log" target
> [normalized_metadata](https://docs.rs/tracing-log/latest/tracing_log/trait.NormalizeEvent.html#tymethod.normalized_metadata): If this Event comes from a log, this method provides a new normalized Metadata which has all available attributes from the original log, including file, line, module_path and target

This has low implications: if a version containing the mentioned pull request was deployed, we would lose the ability to distinguish between log targets.

### Before this PR

```
2024-04-15 12:45:40.327 INFO main log: Parity Polkadot
2024-04-15 12:45:40.328 INFO main log: ✌️  version 1.10.0-d1b0ef76
2024-04-15 12:45:40.328 INFO main log: ❤️  by Parity Technologies <[email protected]>, 2017-2024
2024-04-15 12:45:40.328 INFO main log: 📋 Chain specification: Development
2024-04-15 12:45:40.328 INFO main log: 🏷  Node name: yellow-eyes-2963
2024-04-15 12:45:40.328 INFO main log: 👤 Role: AUTHORITY
2024-04-15 12:45:40.328 INFO main log: 💾 Database: RocksDb at /tmp/substrated39i9J/chains/rococo_dev/db/full
2024-04-15 12:45:44.508 WARN main log: Took active validators from set with wrong size
...
2024-04-15 12:45:45.805 INFO main log: 👶 Starting BABE Authorship worker
2024-04-15 12:45:45.806 INFO tokio-runtime-worker log: 🥩 BEEFY gadget waiting for BEEFY pallet to become available...
2024-04-15 12:45:45.806 DEBUG tokio-runtime-worker log: New listen address: /ip6/::1/tcp/30333
2024-04-15 12:45:45.806 DEBUG tokio-runtime-worker log: New listen address: /ip4/127.0.0.1/tcp/30333
```

### After this PR

```
2024-04-15 12:59:45.623 INFO main sc_cli::runner: Parity Polkadot
2024-04-15 12:59:45.623 INFO main sc_cli::runner: ✌️  version 1.10.0-d1b0ef76
2024-04-15 12:59:45.623 INFO main sc_cli::runner: ❤️  by Parity Technologies <[email protected]>, 2017-2024
2024-04-15 12:59:45.623 INFO main sc_cli::runner: 📋 Chain specification: Development
2024-04-15 12:59:45.623 INFO main sc_cli::runner: 🏷  Node name: helpless-lizards-0550
2024-04-15 12:59:45.623 INFO main sc_cli::runner: 👤 Role: AUTHORITY
...
2024-04-15 12:59:50.204 INFO tokio-runtime-worker beefy: 🥩 BEEFY gadget waiting for BEEFY pallet to become available...
2024-04-15 12:59:50.204 DEBUG tokio-runtime-worker libp2p_tcp: New listen address: /ip6/::1/tcp/30333
2024-04-15 12:59:50.204 DEBUG tokio-runtime-worker libp2p_tcp: New listen address: /ip4/127.0.0.1/tcp/30333
```

Signed-off-by: Alexandru Vasile <[email protected]>
-
Javyer authored
Closes https://github.com/paritytech/opstooling/issues/174

Added a new step in the action that triggers review-bot to stop approvals from new pushes. This step works in the following way:

- If the **author of the PR**, who **is not** a member of the org, pushed a new commit, then Review-Trigger requests new reviews from the reviewers and fails. It *does not dismiss reviews*; it simply requests them again, but they will still be available. This way, if the author changed something in the code, they will still need to have this latest change approved, stopping them from uploading malicious code.

The requested issue is linked to this PR (it is from a private repo so I can't link it here).
-
Bastian Köcher authored
Part of: https://github.com/paritytech/polkadot-sdk/issues/4107
-
Bastian Köcher authored
While `sp-api-proc-macro` isn't used directly, it should still have the same features enabled as `sp-api`. However, I have seen issues around `frame-metadata` not being enabled for `sp-api`, but being enabled for `sp-api-proc-macro`. This can be prevented by using the `frame_metadata_enabled` macro from `sp-api` that ensures we have the same feature set between both crates.
-
Svyatoslav Nikolsky authored
Extracted to a separate PR as requested here: https://github.com/paritytech/parity-bridges-common/pull/2873#discussion_r1562459573
-
Alexandru Gheorghe authored
As discovered during the investigation of https://github.com/paritytech/polkadot-sdk/issues/3314 and https://github.com/paritytech/polkadot-sdk/issues/3673, there are active validators which might accidentally change their network key during a restart. That is not a safe operation while in the active set, because of the distributed nature of the DHT: the old records would still exist in the network until they expire (36 hours). So, unless they have a good reason, validators should avoid changing their key when they restart their nodes.

There is a parallel effort to improve this situation, https://github.com/paritytech/polkadot-sdk/pull/3786, but those changes are way more intrusive and will need more rigorous testing. Additionally, they will reduce the time to less than 36h, but the propagation won't be instant anyway, so not changing your network key during restart remains the safest way to run your node, unless you have a really good reason to change it.

## Proposal

1. Do not auto-generate the network key if the key file does not exist in the provided path. Nodes where the key file does not exist will get the following error:

```
Error:
  0: Starting an authority without network key in /home/alexggh/.local/share/polkadot/chains/ksmcc3/network/secret_ed25519.
     This is not a safe operation because the old identity still lives in the dht for 36 hours.
     Because of it your node might suffer from not being properly connected to other nodes for validation purposes.
     If it is the first time running your node you could use one of the following methods.
     1. Pass --unsafe-force-node-key-generation and make sure you remove it for subsequent node restarts
     2. Separately generate the key with: polkadot key generate-node-key --file <YOUR_PATH_TO_NODE_KEY>
```

2. Add an explicit parameter, `--unsafe-force-node-key-generation`, for nodes that do want to change their network key despite the warnings, or that run the node for the first time.

3. For `polkadot key generate-node-key`, add two new mutually exclusive parameters, `base_path` and `default_base_path`, to help with generating the key in the same path the polkadot main command would expect it.

4. Modify the installation scripts to auto-generate a key in the default path if one was not already present there; this should help with making the executable work out of the box after an installation.

## Notes

Nodes that do not already have the key persisted will fail to start after this change. However, I consider that better than the current situation, where they start but silently hide that they might not be properly connected to their peers.

## TODO

- [x] Make sure only nodes that are authorities on production chains will be affected by this restriction.
- [x] Proper PRDoc, to make sure node operators are aware this is coming.

---------

Signed-off-by: Alexandru Gheorghe <[email protected]>
Co-authored-by: Dmitry Markin <[email protected]>
Co-authored-by: s0me0ne-unkn0wn <[email protected]>
Co-authored-by: Bastian Köcher <[email protected]>
-
- Apr 14, 2024
-
-
Bastian Köcher authored
Co-authored-by: Liam Aharon <[email protected]>
-
Jonathan Udd authored
Verified by running a node using `--reserved-only` and `--reserved-nodes`.
-
- Apr 13, 2024
-
-
Oliver Tale-Yazdi authored
Changes:
- Add pallet-parameters to Rococo to configure the NIS and preimage pallet.
- Fix names of expanded dynamic params. Apparently, `to_class_case` removes suffix `s`, and `Nis` becomes `Ni` 😑. Now using `to_pascal_case`.

---------

Signed-off-by: Oliver Tale-Yazdi <[email protected]>
Co-authored-by: Alessandro Siniscalchi <[email protected]>
Co-authored-by: Kian Paimani <[email protected]>
Co-authored-by: command-bot <>
-
Przemek Rzad authored
-
Przemek Rzad authored
This workflow will automatically add issues related to async backing to the Parachain team board, updating a custom "meta" field. Requested by @the-right-joyce
-
Bastian Köcher authored
Closes: https://github.com/paritytech/polkadot-sdk/issues/4100
-
Serban Iorga authored
Fixes https://github.com/paritytech/polkadot-sdk/issues/3999 --------- Co-authored-by: Branislav Kontur <[email protected]>
-
- Apr 12, 2024
-
-
Vedhavyas Singareddi authored
This PR introduces `BlockHashProvider` into `pallet_mmr::Config`. This type is used to get the `block_hash` for a given `block_number`, rather than directly using `frame_system::Pallet::block_hash`. The `DefaultBlockHashProvider` uses `frame_system::Pallet::block_hash` to get the `block_hash`. Closes: #4062
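A hedged sketch of the shape such a provider can take (trait and impl names follow the PR description, but the exact merged API may differ):

```rust
use frame_system::pallet_prelude::BlockNumberFor;

/// Something that can return the hash of a block given its number.
pub trait BlockHashProvider<BlockNumber, Hash> {
    fn block_hash(block_number: BlockNumber) -> Hash;
}

/// Default provider backed by `frame_system`'s block hash storage.
pub struct DefaultBlockHashProvider<T>(core::marker::PhantomData<T>);

impl<T: frame_system::Config> BlockHashProvider<BlockNumberFor<T>, T::Hash>
    for DefaultBlockHashProvider<T>
{
    fn block_hash(block_number: BlockNumberFor<T>) -> T::Hash {
        frame_system::Pallet::<T>::block_hash(block_number)
    }
}
```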
-
Branislav Kontur authored
This PR mainly removes `xcm::v3` stuff from `assets-common` to make it more generic and facilitate the transition to newer XCM versions. Some of the implementations here used a hard-coded `xcm::v3::Location`, but now it's up to the runtime to configure it according to its needs.

Additional/consequent changes:
- The `penpal` runtime now uses `xcm::latest::Location` for `pallet_assets` as `AssetId`, because we don't care about migrations here.
- It pretty much simplifies xcm-emulator integration tests, where we no longer need a lot of boilerplate conversions:
  ```
  v3::Location::try_from(...).expect("conversion works")
  ```
- xcm-emulator tests: split the macro `impl_assets_helpers_for_parachain` into `impl_assets_helpers_for_parachain` and `impl_foreign_assets_helpers_for_parachain` (avoids using the hard-coded `xcm::v3::Location`).
-
Squirrel authored
1. The `CustomFmtContext::ContextWithFormatFields` enum arm isn't actually used, and thus we don't need the enum anymore.
2. We don't do anything with most of the normalized metadata that's created by calling `event.normalized_metadata();` - the `target` we can get from `event.metadata().target()` and the level from `event.metadata().level()` - so let's just call them directly to simplify things (`event.metadata()` is just a field access under the hood).

Changelog: No functional changes, might run a tad faster with lots of logging turned on.

---------

Co-authored-by: Bastian Köcher <[email protected]>
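A small sketch of the "call them directly" point (a custom formatter is assumed; this is not the logger's actual code): both values are plain accessors on the event's static metadata, so no normalization pass is needed.

```rust
use tracing::Event;

/// Build the `LEVEL target` prefix straight from the event metadata.
fn prefix(event: &Event<'_>) -> String {
    let meta = event.metadata();
    format!("{} {}", meta.level(), meta.target())
}
```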
-
Przemek Rzad authored
- Progresses https://github.com/paritytech/polkadot-sdk/issues/3155

### What's inside

A job that will take each of the three [templates](https://github.com/paritytech/polkadot-sdk/tree/master/templates), yank them out of the monorepo workspace, and push them to individual repositories ([1](https://github.com/paritytech/polkadot-sdk-minimal-template), [2](https://github.com/paritytech/polkadot-sdk-parachain-template), [3](https://github.com/paritytech/polkadot-sdk-solochain-template)). In case the build/test does not succeed, a PR such as [this one](https://github.com/paritytech-stg/polkadot-sdk-solochain-template/pull/2) gets created instead.

I'm proposing a manual dispatch trigger for now - so we can test and iterate faster - and changing it to fully automatic, triggered by releases, later. The manual trigger looks like this:

<img width="340px" src="https://github.com/paritytech/polkadot-sdk/assets/12039224/e87e0fda-23a3-4735-9035-af801e8417fc"/>

### How it works

The job replaces dependencies [referenced by git](https://github.com/paritytech/polkadot-sdk/blob/d733c77e/templates/minimal/pallets/template/Cargo.toml#L25) with references to released crates using [psvm](https://github.com/paritytech/psvm). It creates a new workspace for the template, and adapts what's needed from the `polkadot-sdk` workspace.

### See the results

The action has been tried out in staging, and the results can be observed here:
- [minimal stg](https://github.com/paritytech-stg/polkadot-sdk-minimal-template/)
- [parachain stg](https://github.com/paritytech-stg/polkadot-sdk-parachain-template/)
- [solochain stg](https://github.com/paritytech-stg/polkadot-sdk-solochain-template/)

These are based on the `1.9.0` release (using the `release-crates-io-v1.9.0` branch).
-
wersfeds authored
-
Adrian Catangiu authored
# Description

Add `transfer_assets_using()` for transferring assets from the local chain to a destination chain using explicit XCM transfer types such as:
- `TransferType::LocalReserve`: transfer assets to the sovereign account of the destination chain and forward a notification XCM to `dest` to mint and deposit reserve-based assets to `beneficiary`.
- `TransferType::DestinationReserve`: burn local assets and forward a notification to the `dest` chain to withdraw the reserve assets from this chain's sovereign account and deposit them to `beneficiary`.
- `TransferType::RemoteReserve(reserve)`: burn local assets, forward XCM to the `reserve` chain to move reserves from this chain's SA to the `dest` chain's SA, and forward another XCM to `dest` to mint and deposit reserve-based assets to `beneficiary`. Typically the remote `reserve` is Asset Hub.
- `TransferType::Teleport`: burn local assets and forward XCM to the `dest` chain to mint/teleport assets and deposit them to `beneficiary`.

By default, an asset's reserve is its origin chain. But sometimes we may want to explicitly use another chain as reserve (as long as it is allowed by the runtime's `IsReserve` filter). This is very helpful for transferring assets with multiple configured reserves (such as Asset Hub ForeignAssets), when the transfer strictly depends on the used reserve. E.g. for transferring Foreign Assets over a bridge, Asset Hub must be used as the reserve location. A compact restatement of the transfer types as code follows after this description.

# Example usage scenarios

## Transfer bridged ethereum ERC20-tokenX between ecosystem parachains

ERC20-tokenX is registered on AssetHub as a ForeignAsset by the Polkadot<>Ethereum bridge (Snowbridge). Its asset_id is something like `(parents:2, (GlobalConsensus(Ethereum), Address(tokenX_contract)))`. Its _original_ reserve is Ethereum (only we can't use Ethereum as a reserve in local transfers); but, since tokenX is also registered on AssetHub as a ForeignAsset, we can use AssetHub as a reserve. With this PR we can transfer tokenX from ParaA to ParaB while using AssetHub as a reserve.

## Transfer AssetHub ForeignAssets between parachains

AssetA created on ParaA but also registered as a foreign asset on Asset Hub can use AssetHub as a reserve. And all of the above can be done while still controlling the transfer type for `fees`, so mixing assets in the same transfer is supported.

# Tests

Added integration tests showcasing:
- transferring local (not bridged) assets from a parachain over a bridge using the local Asset Hub reserve,
- transferring foreign assets from a parachain to Asset Hub,
- transferring foreign assets from Asset Hub to a parachain,
- transferring foreign assets from parachain to parachain using the local Asset Hub reserve.

---------

Co-authored-by: Branislav Kontur <[email protected]>
Co-authored-by: command-bot <>
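A compact restatement of the four transfer types as code (a local, illustrative enum mirroring the list in the description above, not the exact type exported by the XCM crates):

```rust
/// `Location` stands in for the XCM location type.
enum TransferType<Location> {
    /// Reserve stays local: move assets into `dest`'s sovereign account here,
    /// then notify `dest` to mint derivatives to the beneficiary.
    LocalReserve,
    /// `dest` holds the reserves: burn locally, then tell `dest` to withdraw
    /// from this chain's sovereign account and deposit to the beneficiary.
    DestinationReserve,
    /// A third chain (typically Asset Hub) holds the reserves: burn locally,
    /// move reserves between sovereign accounts on `reserve`, then notify `dest`.
    RemoteReserve(Location),
    /// Burn locally and teleport: `dest` mints the assets to the beneficiary.
    Teleport,
}
```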
-
Andrei Sandu authored
Fixes https://github.com/paritytech/polkadot-sdk/issues/3576 Required by elastic scaling collators. Deprecates old API: `candidate_pending_availability`. TODO: - [x] PRDoc --------- Signed-off-by: Andrei Sandu <[email protected]>
-
Svyatoslav Nikolsky authored
As requested in https://github.com/paritytech/parity-bridges-common/pull/2873#discussion_r1558974215
-
eskimor authored
Small adjustments which should make understanding what is going on much easier for future readers. Initialization is a bit messy, the very least we should do is adding documentation to make it harder to use wrongly. I was thinking about calling `request_core_count` right from `start_sales`, but as explained in the docs, this is not necessarily what you want. --------- Co-authored-by: eskimor <[email protected]> Co-authored-by: Bastian Köcher <[email protected]> Co-authored-by: Dónal Murray <[email protected]>
-
Xiliang Chen authored
Implements https://github.com/polkadot-fellows/RFCs/issues/82

Allow any parachain to have a bidirectional channel with any system parachain. Looking for initial feedback before continuing to finish it.

TODOs:
- [x] docs
- [x] benchmarks
- [x] tests

---------

Co-authored-by: joe petrowski <[email protected]>
Co-authored-by: Adrian Catangiu <[email protected]>
-
Andrei Sandu authored
One more try to make this test robust from a resource perspective. --------- Signed-off-by: Andrei Sandu <[email protected]> Co-authored-by: Javier Viola <[email protected]>
-