- Jan 20, 2025
-
-
Benjamin Gallois authored
## Description

The `frame-benchmarking-cli` crate has not been buildable without the `rocksdb` feature since version 1.17.0.

**Error:**

```rust
self.database()?.unwrap_or(Database::RocksDb),
                 ^^^^^^^ variant or associated item not found in `Database`
```

This issue is also related to the `rocksdb` feature bleeding (#3793), where the `rocksdb` feature was always activated even when compiling this crate with `--no-default-features`.

**Fix:**
- Resolved the error by choosing `paritydb` as the default database when compiled without the `rocksdb` feature.
- Fixed the issue where the `sc-cli` crate's `rocksdb` feature was always active, even when compiling `frame-benchmarking-cli` with `--no-default-features`.

## Review Notes

Fixes building the crate without rocksdb; it is not intended to solve #3793.

--------- Co-authored-by: command-bot <>
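A minimal sketch of the feature-gated default, using a toy `Database` enum rather than the real `sc-cli` type and assuming a crate that declares an optional `rocksdb` Cargo feature:

```rust
// Toy sketch; the real logic lives in frame-benchmarking-cli / sc-cli.
#[allow(dead_code)]
#[derive(Debug)]
enum Database {
    ParityDb,
    #[cfg(feature = "rocksdb")]
    RocksDb,
}

// With the feature enabled the old default is kept...
#[cfg(feature = "rocksdb")]
fn default_database() -> Database {
    Database::RocksDb
}

// ...while `--no-default-features` builds now fall back to ParityDb instead
// of referencing a variant that does not exist.
#[cfg(not(feature = "rocksdb"))]
fn default_database() -> Database {
    Database::ParityDb
}

fn main() {
    println!("default backend: {:?}", default_database());
}
```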
-
Branislav Kontur authored
This PR addresses a few minor issues found while working on the polkadot-fellows PR [https://github.com/polkadot-fellows/runtimes/pull/522](https://github.com/polkadot-fellows/runtimes/pull/522):
- Incorrect generic type for `InboundLaneData` in `check_message_lane_weights`.
- Renaming leftovers: `assigner_on_demand` -> `on_demand`.
-
Sebastian Kunert authored
Saw this test flake a few times, last time [here](https://github.com/paritytech/polkadot-sdk/actions/runs/12834432188/job/35791830215). We first fetch all processes in the test, then query `/proc/<pid>/stat` for every one of them. When the file was not found, we would error. Now we tolerate not finding this file. Ran it 200 times locally without error; before the fix it would fail a few times, probably depending on process fluctuation (which I expect to be high on CI runners).
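A standalone illustration of the tolerance described above (not the test's actual code): a `NotFound` error while reading `/proc/<pid>/stat` is treated as the process having exited, and only other I/O errors are propagated.

```rust
use std::{fs, io};

// Read /proc/<pid>/stat, returning Ok(None) if the process disappeared
// between enumeration and the read, and propagating any other error.
fn read_proc_stat(pid: u32) -> io::Result<Option<String>> {
    match fs::read_to_string(format!("/proc/{pid}/stat")) {
        Ok(contents) => Ok(Some(contents)),
        Err(e) if e.kind() == io::ErrorKind::NotFound => Ok(None),
        Err(e) => Err(e),
    }
}

fn main() -> io::Result<()> {
    // On a typical Linux system PID 1 exists; an absurdly large PID does not.
    assert!(read_proc_stat(1)?.is_some());
    assert!(read_proc_stat(u32::MAX)?.is_none());
    Ok(())
}
```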
-
- Jan 17, 2025
-
-
Santi Balaguer authored
This adds a new proxy type to the Westend runtime called `ParaRegistration`. This is related to: https://github.com/polkadot-fellows/runtimes/pull/520. This new proxy allows:
1. Reserving a paraID
2. Registering a parachain
3. Using the Utility pallet
4. Removing the proxy
--------- Co-authored-by: command-bot <> Co-authored-by:
Dónal Murray <donal.murray@parity.io>
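Conceptually the proxy type is a call filter. A toy, self-contained sketch of the whitelist described above, with placeholder call names rather than the actual Westend `RuntimeCall` variants:

```rust
// Placeholder call/proxy enums; the real runtime implements
// `InstanceFilter<RuntimeCall>` for its `ProxyType`.
#[allow(dead_code)]
enum Call { ReserveParaId, RegisterParachain, UtilityBatch, RemoveProxy, TransferBalance }

enum ProxyType { Any, ParaRegistration }

fn is_allowed(proxy: &ProxyType, call: &Call) -> bool {
    match proxy {
        ProxyType::Any => true,
        ProxyType::ParaRegistration => matches!(
            call,
            Call::ReserveParaId | Call::RegisterParachain | Call::UtilityBatch | Call::RemoveProxy
        ),
    }
}

fn main() {
    assert!(is_allowed(&ProxyType::Any, &Call::TransferBalance));
    assert!(is_allowed(&ProxyType::ParaRegistration, &Call::RegisterParachain));
    // Anything outside the whitelist, e.g. a balance transfer, is filtered out.
    assert!(!is_allowed(&ProxyType::ParaRegistration, &Call::TransferBalance));
}
```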
-
thiolliere authored
We already use it for lots of pallets. Keeping it feature-gated behind `experimental` means we lose the information of which pallets were using `experimental` before the migration to the frame crate. We can consider the `polkadot-sdk-frame` crate unstable, but let's not use the `experimental` feature. --------- Co-authored-by: command-bot <>
-
- Jan 16, 2025
-
-
Ankan authored
Migrate staking currency from `traits::LockableCurrency` to `traits::fungible::holds`. Resolves part of https://github.com/paritytech/polkadot-sdk/issues/226.

## Changes

### Nomination Pool
`TransferStake` is now incompatible with the fungible migration, as old pools were not meant to have additional ED. Since they are deprecated anyway, its usage was removed from all test runtimes.

### Staking
- Config: `Currency` becomes of type `Fungible`, while `OldCurrency` is the `LockableCurrency` used before.
- Lazy migration of accounts. Any ledger update will create a new hold with no extra reads/writes. A permissionless extrinsic `migrate_currency()` releases the old `lock` along with some housekeeping.
- Staking now requires the ED to be left free. It also adds no consumer to staking accounts.
- If the hold cannot be applied to all stake, the un-holdable part is force-withdrawn from the ledger.

### Delegated Staking
The pallet does not add a provider for agents anymore.

## Migration stats

### Polkadot
- Total accounts that can be migrated: 59564
- Accounts failing to migrate: 0
- Accounts with stake force withdrawn greater than ED: 59
- Total force withdrawal: 29591.26 DOT

### Kusama
- Total accounts that can be migrated: 26311
- Accounts failing to migrate: 0
- Accounts with stake force withdrawn greater than ED: 48
- Total force withdrawal: 1036.05 KSM

[Full logs here](https://hackmd.io/@ak0n/BklDuFra0).

## Note about locks (freeze) vs holds
With locks or freezes, staking could use the total balance of an account. But with holds, the account needs to be left with at least the existential deposit in free balance. This also affects nomination pools, which until now have been able to stake all funds contributed to them. An alternate version of this PR is https://github.com/paritytech/polkadot-sdk/pull/5658, where the staking pallet does not add any provider, but that means the pools and delegated-staking pallets have to provide for these accounts, which makes the end-to-end logic (of provider and consumer refs) a lot less intuitive and prone to bugs. This PR instead introduces a requirement for stakers to maintain ED in their free balance, which helps remove the bug-prone incrementing and decrementing of consumers and providers.

## TODO
- [x] Test: Vesting + governance locked funds can be staked.
- [ ] Can `Call::restore_ledger` be removed? @gpestana
- [x] Ensure unclaimed withdrawals are not affected by no provider for pool accounts.
- [x] Investigate Kusama accounts with balance between 0 and ED.
- [x] Permissionless call to release lock.
- [x] Migration of consumer (dec) and provider (inc) for direct stakers.
- [x] Force unstake if hold cannot be applied to all stake.
- [x] Fix try-state checks (it thinks nothing is staked for unmigrated ledgers).
- [x] Bench `migrate_currency`.
- [x] Virtual Staker migration test.
- [x] Ensure total issuance is up to date when minting rewards.

## Followup
- https://github.com/paritytech/polkadot-sdk/issues/5742

--------- Co-authored-by: command-bot <>
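A toy numeric sketch of the force-withdrawal rule described above (a simplification, not the pallet's code): the part of the stake that cannot be covered by the holdable balance, i.e. free balance minus the existential deposit, is force-withdrawn from the ledger.

```rust
// Returns (amount placed on hold, amount force-withdrawn from the ledger).
fn migrate_stake(free_balance: u128, staked: u128, existential_deposit: u128) -> (u128, u128) {
    let holdable = free_balance.saturating_sub(existential_deposit);
    let held = staked.min(holdable);
    (held, staked - held)
}

fn main() {
    // An account with 100 units free, all 100 staked, and an ED of 1:
    // under locks the full 100 could be staked; under holds only 99 can be
    // held and 1 is force-withdrawn so the ED stays free.
    assert_eq!(migrate_stake(100, 100, 1), (99, 1));
}
```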
-
Liam Aharon authored
Closes #3149

## Description
This PR introduces `pallet-asset-rewards`, which allows accounts to be rewarded for freezing `fungible` tokens. The motivation for creating this pallet is to allow incentivising LPs. See the pallet docs for more info about the pallet.

## Runtime changes
The pallet has been added to
- `asset-hub-rococo`
- `asset-hub-westend`

The `NativeAndAssets` `fungibles` Union did not contain `PoolAssets`, so it has been renamed `NativeAndNonPoolAssets`. A new `fungibles` Union `NativeAndAllAssets` was created to encompass all assets and the native token.

## TODO
- [x] Emulation tests
- [x] Fill in Freeze logic (blocked https://github.com/paritytech/polkadot-sdk/issues/3342) and re-run benchmarks

--------- Co-authored-by: command-bot <> Co-authored-by:
Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> Co-authored-by:
muharem <ismailov.m.h@gmail.com> Co-authored-by:
Guillaume Thiolliere <gui.thiolliere@gmail.com>
-
- Jan 15, 2025
-
-
Xavier Lau authored
Introduce `frame_system::Pallet::run_to_block`, `frame_system::Pallet::run_to_block_with`, and `frame_system::RunToBlockHooks` to establish a generic `run_to_block` mechanism for mock tests, minimizing redundant implementations across various pallets. Closes #299. --- Polkadot address: 156HGo9setPcU2qhFMVWLkcmtCEGySLwNqa3DaEiYSWtte4Y --------- Signed-off-by:
Xavier Lau <x@acg.box> Co-authored-by:
Bastian Köcher <git@kchr.de> Co-authored-by: command-bot <> Co-authored-by:
Guillaume Thiolliere <guillaume.thiolliere@parity.io> Co-authored-by:
Guillaume Thiolliere <gui.thiolliere@gmail.com>
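A hedged usage sketch; `Test` and `AllPalletsWithSystem` are the usual mock-runtime aliases and are assumed here, so check the `frame_system` docs for the exact signatures:

```rust
// Hedged sketch of a mock-test helper; `Test` and `AllPalletsWithSystem` are
// assumed mock-runtime aliases, not part of the new API itself.
fn run_to_block(n: u64) {
    // Drives on_initialize/on_finalize for every pallet up to block `n`,
    // replacing the hand-rolled while-loops most pallet tests carried.
    frame_system::Pallet::<Test>::run_to_block::<AllPalletsWithSystem>(n);
    // For per-block setup or assertions, `run_to_block_with` additionally
    // takes a `frame_system::RunToBlockHooks` value with per-block closures.
}
```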
-
Alexandru Gheorghe authored
Normally, approval-voting wouldn't receive duplicate assignments because approval-distribution makes sure of it. However, in the situation where we restart, we might receive the same assignment again, and since approval-voting already persisted it, we will end up inserting it twice in `ApprovalEntry.tranches.assignments`, because that's an array. Fix this by making sure duplicate assignments are a no-op if the validator already had an assignment imported at the same tranche. --------- Signed-off-by:
Alexandru Gheorghe <alexandru.gheorghe@parity.io> Co-authored-by:
ordian <write@reusable.software>
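A self-contained sketch of the no-op-on-duplicate idea with simplified stand-in types (the real check operates on `ApprovalEntry.tranches.assignments`):

```rust
type ValidatorIndex = u32;
type Tick = u64;

// Assignments imported for a single tranche; after a restart the same
// assignment can be offered again, so importing is made idempotent.
fn import_assignment(
    assignments: &mut Vec<(ValidatorIndex, Tick)>,
    validator: ValidatorIndex,
    tick: Tick,
) {
    if assignments.iter().any(|(v, _)| *v == validator) {
        return; // duplicate for this validator at this tranche: no-op
    }
    assignments.push((validator, tick));
}

fn main() {
    let mut tranche = Vec::new();
    import_assignment(&mut tranche, 7, 100);
    import_assignment(&mut tranche, 7, 100); // replayed after restart
    assert_eq!(tranche.len(), 1);
}
```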
-
- Jan 14, 2025
-
-
Carlo Sala authored
Port #6459 changes to relays as well, which were probably forgotten in that PR. Thanks! --------- Co-authored-by:
Francisco Aguirre <franciscoaguirreperez@gmail.com> Co-authored-by: command-bot <>
-
Alexandru Gheorghe authored
There is a problem on restart where nodes will not trigger their needed assignment if they were offline while the time of the assignment passed. That happens because after restart we will hit this condition https://github.com/paritytech/polkadot-sdk/blob/4e805ca0/polkadot/node/core/approval-voting/src/lib.rs#L2495 and `considered` will be `tick_now`, which is already higher than the tick of our assignment. The fix is to schedule a wakeup for untriggered assignments at restart and let the logic of processing a wakeup decide whether it needs to trigger the assignment or not. One thing we need to be careful about here is to make sure we don't schedule the wakeup immediately after restart because the node would still be behind with all the assignments it should have received and might wrongly decide it needs to trigger its assignment, so I added a `RESTART_WAKEUP_DELAY: Tick = 12` which should be more t...
-
Alexandru Gheorghe authored
Recovering the PoV can fail in situations where the node has just restarted and the DHT topology wasn't fully discovered yet, so the current node can't connect to most of its peers. This is bad because gossiping the assignment only requires being connected to a few peers, so we can't approve the candidate and other nodes will see this as a no-show. This becomes worse in the scenario where a lot of nodes restart at the same time, so you end up having a lot of no-shows in the network that are never covered. In that case it makes sense for nodes to actually retry approving the candidate at a later point in time, and to retry several times if the block containing the candidate wasn't approved.

## TODO
- [x] Add a subsystem test.

--------- Signed-off-by:
Alexandru Gheorghe <alexandru.gheorghe@parity.io>
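A toy sketch of such a retry policy; the delay and retry cap below are invented for illustration and are not the subsystem's actual constants:

```rust
// Invented constants for illustration only.
const MAX_RETRIES: u32 = 3;

// Returns the tick at which to retry approving, or None once we give up.
fn next_retry_tick(now: u64, attempt: u32, retry_delay: u64) -> Option<u64> {
    (attempt < MAX_RETRIES).then(|| now + retry_delay * u64::from(attempt + 1))
}

fn main() {
    assert_eq!(next_retry_tick(100, 0, 12), Some(112));
    assert_eq!(next_retry_tick(100, 3, 12), None); // stop retrying after the cap
}
```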
-
Alin Dima authored
-
- Jan 13, 2025
-
-
polka.dom authored
As per #3326, removes pallet::getter macro usage from pallet-grandpa. The syntax `StorageItem::<T, I>::get()` should be used instead. cc @muraca --------- Co-authored-by:
Bastian Köcher <git@kchr.de>
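For illustration (hedged: `CurrentSetId` is one of the pallet's storage items and `Runtime` is a placeholder), a call site changes roughly like this:

```rust
// Before (via the removed #[pallet::getter] accessor):
// let set_id = Grandpa::current_set_id();

// After: read the storage item directly.
let set_id = pallet_grandpa::CurrentSetId::<Runtime>::get();
```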
-
Alexandru Gheorghe authored
Reference hardware requirements have been bumped to at least 8 cores so we can now allocate 50% of that capacity to PVF execution. --------- Signed-off-by:
Alexandru Gheorghe <alexandru.gheorghe@parity.io>
-
Branislav Kontur authored
## Description This PR deprecates `UnpaidLocalExporter` in favor of the new `LocalExporter`. First, the name is misleading, as it can be used in both paid and unpaid scenarios. Second, it contains a hard-coded channel 0, whereas `LocalExporter` uses the same algorithm as `xcm-exporter`. ## Future Improvements Remove the `channel` argument and slightly modify the `ExportXcm::validate` signature as part of [this issue](https://github.com/orgs/paritytech/projects/145/views/8?pane=issue&itemId=84899273). --------- Co-authored-by: command-bot <>
-
- Jan 09, 2025
-
-
wmjae authored
Co-authored-by:
Dónal Murray <donalm@seadanda.dev>
-
- Jan 07, 2025
-
-
Alin Dima authored
Will fix: https://github.com/paritytech/polkadot-sdk/issues/6574 https://github.com/paritytech/polkadot-sdk/issues/6644 https://github.com/paritytech/polkadot-sdk/issues/6062 --------- Co-authored-by:
Javier Viola <javier@parity.io>
-
- Jan 06, 2025
-
-
Alin Dima authored
Fix this zombienet test. It was failing because in https://github.com/paritytech/polkadot-sdk/pull/6452 I enabled the v2 receipts for testnet genesis, so the collators started sending v2 receipts with zeroed collator signatures to old validators that were still checking those signatures (which led to disputes, since new validators considered the candidates valid). The fix is to also use an old image for collators, so that we don't create v2 receipts. We cannot remove this test yet because collators also perform chunk recovery, so until all collators are upgraded, we need to maintain this compatibility with the old protocol version (which is also why systematic recovery was not yet enabled).
-
taozui472 authored
Co-authored-by:
Dónal Murray <donal.murray@parity.io>
-
- Jan 05, 2025
-
-
thiolliere authored
Implement cumulus StorageWeightReclaim as a wrapping transaction extension + frame system ReclaimWeight (#6140) (rebasing of https://github.com/paritytech/polkadot-sdk/pull/5234)

## Issues:
* Transaction extensions have weights and refund weight, so the reclaiming of unused weight must happen last in the transaction extension pipeline. Currently it is inside `CheckWeight`.
* The cumulus storage weight reclaim transaction extension misses the proof size of logic happening prior to itself.

## Done:
* A new storage item `ExtrinsicWeightReclaimed` in frame-system. Any logic which attempts to do some reclaim must use this storage to avoid double reclaim.
* A new function `reclaim_weight` in the frame-system pallet: it takes info and post info as arguments, reads the already-reclaimed weight, calculates the new unused weight from info and post info, and does the more accurate reclaim if higher.
* `CheckWeight` is unchanged and still reclaims the weight in post dispatch.
* `ReclaimWeight` is a new transaction extension in frame system. For s...
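A toy arithmetic sketch of the 'reclaim only if higher' rule with plain integers instead of `Weight` (a simplification of the behaviour described above):

```rust
// `already_reclaimed` mirrors the role of the new `ExtrinsicWeightReclaimed`
// storage item: a later, more accurate reclaim only adds the difference,
// avoiding double reclaiming when both CheckWeight and ReclaimWeight run.
fn reclaim_weight(charged: u64, actually_used: u64, already_reclaimed: &mut u64) -> u64 {
    let unused = charged.saturating_sub(actually_used);
    if unused > *already_reclaimed {
        let extra = unused - *already_reclaimed;
        *already_reclaimed = unused;
        extra
    } else {
        0
    }
}

fn main() {
    let mut reclaimed = 0;
    // CheckWeight reclaims based on post-dispatch info first...
    assert_eq!(reclaim_weight(1_000, 700, &mut reclaimed), 300);
    // ...then ReclaimWeight, knowing the real proof size, reclaims only the extra.
    assert_eq!(reclaim_weight(1_000, 600, &mut reclaimed), 100);
}
```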
-
- Dec 29, 2024
-
-
Xavier Lau authored
Migrate inclusion benchmark to v2. --- Polkadot address: 156HGo9setPcU2qhFMVWLkcmtCEGySLwNqa3DaEiYSWtte4Y --------- Co-authored-by:
GitHub Action <action@github.com> Co-authored-by:
Bastian Köcher <git@kchr.de>
-
- Dec 27, 2024
-
-
Bastian Köcher authored
This PR improves the error reporting in the paras registrar when an owner wants to access a locked parachain. Closes: https://github.com/paritytech/polkadot-sdk/issues/6745 --------- Co-authored-by: command-bot <>
-
- Dec 22, 2024
-
-
Xavier Lau authored
Make pallet-recovery support `BlockNumberProvider`. Part of #6297. --- Polkadot address: 156HGo9setPcU2qhFMVWLkcmtCEGySLwNqa3DaEiYSWtte4Y --------- Co-authored-by:
Guillaume Thiolliere <guillaume.thiolliere@parity.io> Co-authored-by:
GitHub Action <action@github.com>
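A hedged configuration sketch; the associated type follows the `BlockNumberProvider` pattern from #6297, and pointing it at `System` keeps the previous local-block-number behaviour:

```rust
impl pallet_recovery::Config for Runtime {
    // ... other associated types unchanged ...
    // A parachain would typically point this at a relay-chain block number
    // provider instead of the local `System` pallet.
    type BlockNumberProvider = System;
}
```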
-
- Dec 20, 2024
-
-
Xavier Lau authored
It doesn't make sense to only reorder the features array. For example, this makes it hard for me to compare the dependencies and features, especially since some crates have a really, really long dependencies list:

```toml
[dependencies]
c = "*"
a = "*"
b = "*"

[features]
std = [
  "a",
  "b",
  "c",
]
```

This makes my life easier:

```toml
[dependencies]
a = "*"
b = "*"
c = "*"

[features]
std = [
  "a",
  "b",
  "c",
]
```

--------- Co-authored-by:
Bastian Köcher <git@kchr.de> Co-authored-by: command-bot <>
-
- Dec 19, 2024
-
-
Egor_P authored
This PR includes a backport of the regular version bumps and `prdocs` reordering from the `stable2412` branch back to master. --------- Co-authored-by:
ParityReleases <release-team@parity.io> Co-authored-by: command-bot <>
-
clangenb authored
[polkadot-runtime-parachains] Migrate disputes and disputes/slashing benchmarking to bench v2 syntax (#6577). Part of: * #6202 --------- Co-authored-by:
Giuseppe Re <giuseppe.re@parity.io> Co-authored-by:
Bastian Köcher <git@kchr.de>
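For reference, a skeletal illustration of the benchmarking v2 syntax these migrations target; the pallet, extrinsic, and parameter names are placeholders, not the migrated benchmarks:

```rust
#[benchmarks]
mod benchmarks {
    use super::*;

    #[benchmark]
    fn some_extrinsic(n: Linear<1, 100>) {
        // Setup code goes here, parameterised over the component `n`.
        #[extrinsic_call]
        _(RawOrigin::Root, n);

        // Post-conditions can be asserted after the call.
    }
}
```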
-
Ludovic_Domingues authored
Linked to issue #590. Extracted code from mod.rs into new tests, mock and benchmarking files.
-
Ludovic_Domingues authored
# Description Linked to issue #590. I moved the tests and benchmarking to their own separate files to reduce the bloat inside auctions.rs. Co-authored-by:
Shawn Tabrizi <shawntabrizi@gmail.com>
-
Ludovic_Domingues authored
Linked to issue #590. I moved the mod, tests, mock and benchmarking to their own separate files to reduce the bloat inside purchase.rs. --------- Co-authored-by:
Guillaume Thiolliere <gui.thiolliere@gmail.com> Co-authored-by:
Shawn Tabrizi <shawntabrizi@gmail.com>
-
- Dec 18, 2024
-
-
Branislav Kontur authored
Relates to: https://github.com/paritytech/polkadot-sdk/issues/6918 --------- Co-authored-by: command-bot <>
-
Ludovic_Domingues authored
Linked to issue #590. I moved the mod, tests, mock and benchmarking to their own separate files to reduce the bloat inside claims.rs. --------- Co-authored-by:
Guillaume Thiolliere <gui.thiolliere@gmail.com>
-
clangenb authored
Migrates pallet-xcm benchmarks to benchmark v2 syntax * Part of #6202
-
Alin Dima authored
Adds a new zombienet-sdk test which verifies that elastic scaling works correctly both with the MVP and the new RFC 103 implementation which sends the core selector as a UMP signal. Also enables the V2 receipts node feature for testnet genesis config. Part of https://github.com/paritytech/polkadot-sdk/issues/5049 --------- Co-authored-by:
Javier Viola <javier@parity.io> Co-authored-by:
Javier Viola <363911+pepoviola@users.noreply.github.com>
-
- Dec 14, 2024
-
-
Jarkko Sakkinen authored
Bump `polkavm` to 0.18.0, and update `sc-polkavm-executor` to be compatible with the API changes. In addition, bump also `polkavm-derive` and `polkavm-linker` in order to make sure that all parts of the Polkadot SDK use the exact same ABI for `.polkavm` binaries. Purely relying on the RV32E/RV64E ABI is not possible, as PolkaVM uses a RISC-V-like ISA which is derived from RV32E/RV64E but is still its own microarchitecture, i.e. not fully binary compatible. --------- Signed-off-by:
Jarkko Sakkinen <jarkko@parity.io> Co-authored-by:
Koute <koute@users.noreply.github.com> Co-authored-by:
Alexander Theißen <alex.theissen@me.com>
-
- Dec 13, 2024
-
-
Alexandru Gheorghe authored
The approval-voting canonicalize is off by one, which means that if we are finalizing blocks one by one, approval-voting cleans up only every other block. For example:
- With blocks 1, 2, 3, 4, 5, 6 created, the stored range would be StoredBlockRange(1,7)
- When block 3 is finalized the canonicalize works and StoredBlockRange is (4,7)
- When block 4 is finalized the canonicalize exits early because of the `if range.0 > canon_number` break clause, so blocks are not cleaned up.
- When block 5 is finalized the canonicalize works and StoredBlockRange becomes (6,7) and both block 4 and 5 are cleaned up.

The consequence of this is that sometimes we keep block entries around after they are finalized, so at restart we consider these blocks and send them to approval-distribution. In most cases this is not a problem, but in the case when finality is lagging on restart, approval-distribution will receive 4 as being the oldest block it needs to work on, and since BlockFinalize...
-
Tsvetomir Dimitrov authored
Related to https://github.com/paritytech/polkadot-sdk/issues/1797

# The problem
When fetching collations on the collator protocol validator side we need to ensure that each parachain gets a fair core time share depending on its assignments in the claim queue. This means that the number of collations fetched per parachain should ideally be equal to (but definitely not bigger than) the number of claims for the particular parachain in the claim queue.

# Why the current implementation is not good enough
The current implementation doesn't guarantee such fairness. For each relay parent there is a `waiting_queue` (PerRelayParent -> Collations -> waiting_queue) which holds any unfetched collations advertised to the validator. The collations are fetched on a first-in-first-out principle, which means that if two parachains share a core and one of the parachains is more aggressive it might starve the second parachain. How? At each relay parent up to `max_candidate_depth` candidates are accepted (enforced in `fn is_seconded_limit_reached`), so if one of the parachains is quick enough to fill in the queue with its advertisements, the validator will never fetch anything from the rest of the parachains despite them being scheduled. This doesn't mean that the aggressive parachain will occupy all the core time (this is guaranteed by the runtime), but it will deny the rest of the parachains sharing the same core the chance to have collations backed.

# How to fix it
The solution I am proposing is to limit fetches and advertisements based on the state of the claim queue. At each relay parent the claim queue for the core assigned to the validator is fetched. For each parachain a fetch limit is calculated (equal to the number of entries in the claim queue). Advertisements are not fetched for a parachain which has exceeded its claims in the claim queue. This solves the problem with aggressive parachains advertising too many collations. The second part is in the collation fetching logic. The validator will keep track of which collations it has fetched so far. When a new collation needs to be fetched, instead of popping the first entry from the `waiting_queue` the validator examines the claim queue and looks for the earliest claim which hasn't got a corresponding fetch. This way the validator will always try to prioritise the most urgent entries.

## How is the 'fair share of coretime' for each parachain determined?
Thanks to async backing we can accept more than one candidate per relay parent (with some constraints). We also have got the claim queue which gives us a hint about which parachain will be scheduled next on each core. So thanks to the claim queue we can determine the maximum number of claims per parachain. For example, if the claim queue is [A A A] at relay parent X, we know that at relay parent X we can accept three candidates for parachain A. There are two things to consider though:
1. If we accept more than one candidate at relay parent X we are claiming the slot of a future relay parent. So accepting two candidates for relay parent X means that we are claiming the slot at rp X+1 or rp X+2.
2. At the same time the slot at relay parent X could have been claimed by a previous relay parent(s). This means that we need to accept fewer candidates at X or even no candidates.

There are a few cases worth considering:
1. Slot claimed by a previous relay parent.
   CQ @ rp X: [A A A]
   Advertisements at X-1 for para A: 2
   Advertisements at X-2 for para A: 2
   Outcome: at rp X we can accept only 1 advertisement since our slots were already claimed.
2. Slot in our claim queue already claimed at a future relay parent.
   CQ @ rp X: [A A A]
   Advertisements at X+1 for para A: 1
   Advertisements at X+2 for para A: 1
   Outcome: at rp X we can accept only 1 advertisement since the slots in our relay parents were already claimed.

The situation becomes more complicated with multiple leaves (forks). Imagine we have got a fork at rp X:
```
CQ @ rp X: [A A A]
(rp X) -> (rp X+1) -> (rp X+2)
       \-> (rp X+1')
```
Now when we examine the claim queue at rp X we need to consider both forks. This means that accepting a candidate at X means that we should have a slot for it in *BOTH* leaves. If for example there are three candidates accepted at rp X+1', we can't accept any candidates at rp X because there will be no slot for it in one of the leaves.

## How the claims are counted
There are two solutions for counting the claims at relay parent X:
1. Keep a state for the claim queue (number of claims and which of them are claimed) and look it up when accepting a collation. With this approach we need to keep the state up to date with each new advertisement and each new leaf update.
2. Calculate the state of the claim queue on the fly at each advertisement. This way we rebuild the state of the claim queue at each advertisement.

Solution 1 is hard to implement with forks. There are too many variants to keep track of (a different state for each leaf) and at the same time we might never need to use them. So I decided to go with option 2 - building the claim queue state on the fly. To achieve this I've extended `View` from backing_implicit_view to keep track of the outer leaves. I've also added a method which accepts a relay parent and returns all paths from an outer leaf to it. Let's call it `paths_to_relay_parent`. So how does the counting work for relay parent X? First we examine the number of seconded and pending advertisements (more on pending in a second) from relay parent X to relay parent X-N (inclusive), where N is the length of the claim queue. Then we use `paths_to_relay_parent` to obtain all paths from outer leaves to relay parent X. We calculate the claims at relay parents X+1 to X+N (inclusive) for each leaf and get the maximum value. This way we guarantee that the candidate at rp X can be included in each leaf. This is the state of the claim queue which we use to decide if we can fetch one more advertisement at rp X or not.

## What is a pending advertisement?
I mentioned that we count seconded and pending advertisements at relay parent X. A pending advertisement is:
1. An advertisement which is being fetched right now.
2. An advertisement pending validation at the backing subsystem.
3. An advertisement blocked for seconding by backing because we don't know one of its parent heads.

Any of these is considered a 'pending fetch' and a slot for it is kept. All of them are already tracked in `State`.

--------- Co-authored-by:
Maciej <maciej.zyszkiewicz@parity.io> Co-authored-by: command-bot <> Co-authored-by:
Alin Dima <alin@parity.io>
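A toy, self-contained sketch of the counting rule described above, simplified to ignore forks and the pending-state details: a new advertisement for a parachain is accepted only while its seconded-plus-pending count stays below its number of claims in the claim queue.

```rust
use std::collections::HashMap;

type ParaId = u32;

// `claim_queue`: e.g. [A, A, B] for the core assigned to this validator.
// `fetched`: collations already seconded or pending per parachain.
fn can_fetch(claim_queue: &[ParaId], fetched: &HashMap<ParaId, usize>, para: ParaId) -> bool {
    let claims = claim_queue.iter().filter(|p| **p == para).count();
    let used = fetched.get(&para).copied().unwrap_or(0);
    used < claims
}

fn main() {
    let cq = vec![1, 1, 2]; // two claims for para 1, one for para 2
    let mut fetched = HashMap::new();
    fetched.insert(1, 2);
    // Para 1 has exhausted its claims, para 2 has not, so an aggressive
    // para 1 can no longer starve para 2 on the shared core.
    assert!(!can_fetch(&cq, &fetched, 1));
    assert!(can_fetch(&cq, &fetched, 2));
}
```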
-
- Dec 12, 2024
-
-
clangenb authored
[polkadot-runtime-parachains] migrate paras module to benchmarking v2 syntax Part of: * #6202 --------- Co-authored-by:
Bastian Köcher <git@kchr.de>
-
Bastian Köcher authored
Co-authored-by:
GitHub Action <action@github.com> Co-authored-by:
Branislav Kontur <bkontur@gmail.com> Co-authored-by: command-bot <>
-
Kazunobu Ndong authored
# Description Issue #6476. Collation-generation is not needed for validator nodes and should be removed. ## Implementation Use a `DummySubsystem` for `collation_generation`. --------- Co-authored-by:
Bastian Köcher <git@kchr.de> Co-authored-by: command-bot <> Co-authored-by:
Dmitry Markin <dmitry@markin.tech> Co-authored-by:
Alexandru Vasile <60601340+lexnv@users.noreply.github.com>
-