- Jan 17, 2025
-
-
thiolliere authored
We already use it for lots of pallets. Keeping it gated behind the `experimental` feature means we lose track of which pallets were already using `experimental` before migrating to the frame crate. We can consider the `polkadot-sdk-frame` crate unstable, but let's not use the `experimental` feature. --------- Co-authored-by: command-bot <>
-
- Jan 16, 2025
-
-
Ankan authored
Migrate staking currency from `traits::LockableCurrency` to `traits::fungible::holds`. Resolves part of https://github.com/paritytech/polkadot-sdk/issues/226.

## Changes

### Nomination Pool
`TransferStake` is now incompatible with the fungible migration, as old pools were not meant to carry additional ED. Since it is deprecated anyway, its usage has been removed from all test runtimes.

### Staking
- Config: `Currency` becomes of type `Fungible`, while `OldCurrency` is the `LockableCurrency` used before.
- Lazy migration of accounts. Any ledger update will create a new hold with no extra reads/writes. A permissionless extrinsic `migrate_currency()` releases the old `lock` along with some housekeeping.
- Staking now requires ED to be left free. It also adds no consumer to staking accounts.
- If the hold cannot be applied to all stake, the un-holdable part is force withdrawn from the ledger.

### Delegated Staking
The pallet does not add a provider for agents anymore.

## Migration stats

### Polkadot
Total accounts that can be migrated: 59564. Accounts failing to migrate: 0. Accounts with stake force withdrawn greater than ED: 59. Total force withdrawal: 29591.26 DOT.

### Kusama
Total accounts that can be migrated: 26311. Accounts failing to migrate: 0. Accounts with stake force withdrawn greater than ED: 48. Total force withdrawal: 1036.05 KSM.

[Full logs here](https://hackmd.io/@ak0n/BklDuFra0).

## Note about locks (freeze) vs holds
With locks or freezes, staking could use the total balance of an account. With holds, the account needs to be left with at least the Existential Deposit in free balance. This also affects nomination pools, which until now have been able to stake all funds contributed to them. An alternate version of this PR is https://github.com/paritytech/polkadot-sdk/pull/5658, where the staking pallet does not add any provider, but that means the pools and delegated-staking pallets have to provide for these accounts, which makes the end-to-end logic (of provider and consumer refs) a lot less intuitive and more prone to bugs. This PR instead introduces the requirement for stakers to maintain ED in their free balance, which removes the bug-prone incrementing and decrementing of consumers and providers.

## TODO
- [x] Test: Vesting + governance locked funds can be staked.
- [ ] Can `Call::restore_ledger` be removed? @gpestana
- [x] Ensure unclaimed withdrawals are not affected by pool accounts having no provider.
- [x] Investigate Kusama accounts with balance between 0 and ED.
- [x] Permissionless call to release the lock.
- [x] Migration of consumer (dec) and provider (inc) for direct stakers.
- [x] Force unstake if the hold cannot be applied to all stake.
- [x] Fix try-state checks (they think nothing is staked for unmigrated ledgers).
- [x] Bench `migrate_currency`.
- [x] Virtual staker migration test.
- [x] Ensure total issuance is up to date when minting rewards.

## Followup
- https://github.com/paritytech/polkadot-sdk/issues/5742

--------- Co-authored-by: command-bot <>
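For intuition, here is a runnable toy model of the lock-to-hold semantics described above; `Account`, `ED`, and `migrate` are invented for the illustration and are not pallet-staking code.

```rust
/// Toy model: unlike a lock, a hold cannot cover the existential deposit,
/// so any stake that cannot be held is force-withdrawn from the ledger.
struct Account {
	free: u128,        // locks overlap with free balance in Substrate
	staked_lock: u128, // amount locked for staking under the old scheme
}

const ED: u128 = 1; // existential deposit (toy value)

fn migrate(acc: &mut Account) -> (u128, u128) {
	let staked = acc.staked_lock;
	acc.staked_lock = 0; // release the old lock first
	// A hold must leave at least ED in free balance.
	let holdable = staked.min(acc.free.saturating_sub(ED));
	let force_withdrawn = staked - holdable;
	(holdable, force_withdrawn)
}

fn main() {
	// 10 units free, all 10 locked for staking: 1 unit must stay free for ED.
	let mut acc = Account { free: 10, staked_lock: 10 };
	assert_eq!(migrate(&mut acc), (9, 1));
}
```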
-
Liam Aharon authored
Closes #3149

## Description
This PR introduces `pallet-asset-rewards`, which allows accounts to be rewarded for freezing `fungible` tokens. The motivation for creating this pallet is to allow incentivising LPs. See the pallet docs for more info about the pallet.

## Runtime changes
The pallet has been added to:
- `asset-hub-rococo`
- `asset-hub-westend`

The `NativeAndAssets` `fungibles` union did not contain `PoolAssets`, so it has been renamed `NativeAndNonPoolAssets`. A new `fungibles` union `NativeAndAllAssets` was created to encompass all assets and the native token.

## TODO
- [x] Emulation tests
- [x] Fill in freeze logic (blocked on https://github.com/paritytech/polkadot-sdk/issues/3342) and re-run benchmarks

--------- Co-authored-by: command-bot <> Co-authored-by:
Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> Co-authored-by:
muharem <ismailov.m.h@gmail.com> Co-authored-by:
Guillaume Thiolliere <gui.thiolliere@gmail.com>
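A rough sketch of how such `fungibles` unions are typically composed; it assumes frame_support's `UnionOf` adapters, and `LocalAndForeignAssets`, `PoolAssets`, the criterion types, `AssetKind`, and `AccountId` are placeholders rather than the actual runtime definitions.

```rust
// Sketch only: the real definitions live in the asset-hub runtimes.
use frame_support::traits::{fungible, fungibles};

/// Native token + trust-backed and foreign assets, but no pool assets.
pub type NativeAndNonPoolAssets =
	fungible::UnionOf<Balances, LocalAndForeignAssets, NativeFromLeft, AssetKind, AccountId>;

/// Everything, including pool assets, so rewards can be paid in any asset.
pub type NativeAndAllAssets =
	fungibles::UnionOf<NativeAndNonPoolAssets, PoolAssets, PoolAssetsFromRight, AssetKind, AccountId>;
```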
-
- Jan 15, 2025
-
-
Xavier Lau authored
Introduce `frame_system::Pallet::run_to_block`, `frame_system::Pallet::run_to_block_with`, and `frame_system::RunToBlockHooks` to establish a generic `run_to_block` mechanism for mock tests, minimizing redundant implementations across various pallets. Closes #299. --- Polkadot address: 156HGo9setPcU2qhFMVWLkcmtCEGySLwNqa3DaEiYSWtte4Y --------- Signed-off-by:
Xavier Lau <x@acg.box> Co-authored-by:
Bastian Köcher <git@kchr.de> Co-authored-by: command-bot <> Co-authored-by:
Guillaume Thiolliere <guillaume.thiolliere@parity.io> Co-authored-by:
Guillaume Thiolliere <gui.thiolliere@gmail.com>
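A sketch of how the new helpers might look in a pallet mock test, assuming the usual mock scaffolding (`new_test_ext`, `System`, `AllPalletsWithSystem`); the hook-builder method names follow the summary above and may differ in detail.

```rust
// Sketch of mock-test usage; hook names per the PR summary, details may differ.
#[test]
fn run_to_block_example() {
	new_test_ext().execute_with(|| {
		// Plain variant: runs the block lifecycle for every block up to 10.
		System::run_to_block::<AllPalletsWithSystem>(10);

		// Variant with hooks injected around each block.
		System::run_to_block_with::<AllPalletsWithSystem>(
			20,
			frame_system::RunToBlockHooks::default()
				.before_initialize(|n| println!("initializing block {n}"))
				.after_finalize(|n| println!("finalized block {n}")),
		);
	});
}
```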
-
Alexandru Gheorghe authored
Normally, approval-voting wouldn't receive duplicate assignments because approval-distribution makes sure of it. However, on restart we might receive the same assignment again, and since approval-voting already persisted it, we would end up inserting it twice into `ApprovalEntry.tranches.assignments`, because that is a plain array. Fix this by making duplicate assignments a no-op if the validator already has an assignment imported at the same tranche. --------- Signed-off-by:
Alexandru Gheorghe <alexandru.gheorghe@parity.io> Co-authored-by:
ordian <write@reusable.software>
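A runnable toy model of the fix; the types are simplified stand-ins for the real approval-voting structures.

```rust
type ValidatorIndex = u32;
type Tick = u64;

struct TrancheEntry {
	assignments: Vec<(ValidatorIndex, Tick)>,
}

fn import_assignment(entry: &mut TrancheEntry, validator: ValidatorIndex, tick: Tick) {
	// Pre-fix behaviour pushed unconditionally, so a restart (which replays
	// persisted assignments) could insert the same validator twice.
	if entry.assignments.iter().any(|(v, _)| *v == validator) {
		return; // duplicate assignment at this tranche is a no-op
	}
	entry.assignments.push((validator, tick));
}

fn main() {
	let mut entry = TrancheEntry { assignments: vec![] };
	import_assignment(&mut entry, 7, 100);
	import_assignment(&mut entry, 7, 100); // replayed after restart
	assert_eq!(entry.assignments.len(), 1);
}
```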
-
- Jan 14, 2025
-
-
Carlo Sala authored
Port #6459 changes to relays as well, which were probably forgotten in that PR. Thanks! --------- Co-authored-by:
Francisco Aguirre <franciscoaguirreperez@gmail.com> Co-authored-by: command-bot <>
-
Alexandru Gheorghe authored
There is a problem on restart where nodes will not trigger their needed assignment if they were offline while the time of the assignment passed. That happens because after restart we hit this condition https://github.com/paritytech/polkadot-sdk/blob/4e805ca0/polkadot/node/core/approval-voting/src/lib.rs#L2495 and `considered` will be `tick_now`, which is already higher than the tick of our assignment. The fix is to schedule a wakeup for untriggered assignments at restart and let the wakeup-processing logic decide whether the assignment needs to be triggered or not. One thing we need to be careful about here is to make sure we don't schedule the wakeup immediately after restart, because the node would still be behind with all the assignments it should have received and might wrongfully decide it needs to trigger its own assignment; so I added a `RESTART_WAKEUP_DELAY: Tick = 12`, which should be more than enough for the node to catch up. --------- Signed-off-by:
Alexandru Gheorghe <alexandru.gheorghe@parity.io> Co-authored-by:
ordian <write@reusable.software> Co-authored-by:
Andrei Eres <eresav@me.com>
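A runnable toy sketch of the scheduling idea, with simplified tick types; only `RESTART_WAKEUP_DELAY = 12` is taken from the description above.

```rust
const RESTART_WAKEUP_DELAY: u64 = 12; // in ticks, per the PR description

/// For every untriggered assignment found at startup, schedule a wakeup no
/// earlier than `tick_now + RESTART_WAKEUP_DELAY`, so the node first catches
/// up on the assignments it missed while offline.
fn restart_wakeup_ticks(assignment_ticks: &[u64], tick_now: u64) -> Vec<u64> {
	assignment_ticks
		.iter()
		.map(|tick| (*tick).max(tick_now + RESTART_WAKEUP_DELAY))
		.collect()
}

fn main() {
	// An assignment whose tick already passed gets re-examined after the delay.
	assert_eq!(restart_wakeup_ticks(&[90, 205], 200), vec![212, 212]);
}
```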
-
Alexandru Gheorghe authored
Recovering the POV can fail in situations where the node has just restarted and the DHT topology wasn't fully discovered yet, so the current node can't connect to most of its peers. This is bad because gossiping the assignment only requires being connected to a few peers, so the node distributes its assignment but can't approve the candidate, and other nodes will see this as a no-show. This becomes bad in the scenario where a lot of nodes restart at the same time: you end up with a lot of no-shows in the network that are never covered. In that case it makes sense for nodes to actually retry approving the candidate at a later point in time, and to retry several times if the block containing the candidate wasn't approved.

## TODO
- [x] Add a subsystem test.

--------- Signed-off-by:
Alexandru Gheorghe <alexandru.gheorghe@parity.io>
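A runnable toy sketch of the retry idea; the real code is an async subsystem task, and the function and parameters here are invented for the illustration.

```rust
use std::time::Duration;

/// Toy sketch: if POV recovery fails, e.g. right after a restart when few
/// peers are connected yet, try again later instead of becoming a no-show.
fn approve_with_retries(
	mut recover_pov: impl FnMut() -> bool,
	max_retries: u32,
	backoff: Duration,
) -> bool {
	for _ in 0..=max_retries {
		if recover_pov() {
			return true; // POV recovered; the candidate can be approved
		}
		std::thread::sleep(backoff); // wait for more peer connections
	}
	false // give up; a later retry for the unapproved block may still succeed
}

fn main() {
	let mut attempts = 0;
	let recovered = approve_with_retries(
		|| { attempts += 1; attempts >= 3 }, // succeeds on the third try
		5,
		Duration::from_millis(1),
	);
	assert!(recovered);
}
```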
-
Alin Dima authored
-
- Jan 13, 2025
-
-
polka.dom authored
As per #3326, removes the `pallet::getter` macro usage from `pallet-grandpa`. The syntax `StorageItem::<T, I>::get()` should be used instead. cc @muraca --------- Co-authored-by:
Bastian Köcher <git@kchr.de>
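An illustrative before/after, using pallet-grandpa's `CurrentSetId` as the example storage item:

```rust
// Before: an accessor generated by #[pallet::getter(fn current_set_id)].
let set_id = Grandpa::current_set_id();

// After: read the storage item directly (same value, no macro magic).
let set_id = pallet_grandpa::CurrentSetId::<Runtime>::get();
```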
-
Alexandru Gheorghe authored
Reference hardware requirements have been bumped to at least 8 cores, so we can now allocate 50% of that capacity to PVF execution. --------- Signed-off-by:
Alexandru Gheorghe <alexandru.gheorghe@parity.io>
-
Branislav Kontur authored
## Description
This PR deprecates `UnpaidLocalExporter` in favor of the new `LocalExporter`. First, the old name is misleading, as it can be used in both paid and unpaid scenarios. Second, it contains a hard-coded channel 0, whereas `LocalExporter` uses the same algorithm as `xcm-exporter`.

## Future Improvements
Remove the `channel` argument and slightly modify the `ExportXcm::validate` signature as part of [this issue](https://github.com/orgs/paritytech/projects/145/views/8?pane=issue&itemId=84899273).

--------- Co-authored-by: command-bot <>
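A hypothetical before/after of the router wiring; the generic parameters are assumed to mirror the deprecated type and may not match the real signature:

```rust
// Hypothetical swap in a runtime's XCM config; parameters assumed.
// Before: name suggests unpaid-only, channel hard-coded to 0.
// pub type Exporter = UnpaidLocalExporter<BridgeExporter, UniversalLocation>;

// After: same role for paid and unpaid flows, channel derived as in xcm-exporter.
pub type Exporter = LocalExporter<BridgeExporter, UniversalLocation>;
```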
-
- Jan 09, 2025
-
-
wmjae authored
Co-authored-by:
Dónal Murray <donalm@seadanda.dev>
-
- Jan 07, 2025
-
-
Alin Dima authored
Will fix: https://github.com/paritytech/polkadot-sdk/issues/6574 https://github.com/paritytech/polkadot-sdk/issues/6644 https://github.com/paritytech/polkadot-sdk/issues/6062 --------- Co-authored-by:
Javier Viola <javier@parity.io>
-
- Jan 06, 2025
-
-
Alin Dima authored
Fix this zombienet test. It was failing because in https://github.com/paritytech/polkadot-sdk/pull/6452 I enabled the v2 receipts for testnet genesis, so the collators started sending v2 receipts with zeroed collator signatures to old validators that were still checking those signatures (which led to disputes, since new validators considered the candidates valid). The fix is to also use an old image for collators, so that we don't create v2 receipts. We cannot remove this test yet because collators also perform chunk recovery, so until all collators are upgraded, we need to maintain this compatibility with the old protocol version (which is also why systematic recovery was not yet enabled).
-
taozui472 authored
Co-authored-by:
Dónal Murray <donal.murray@parity.io>
-
- Jan 05, 2025
-
-
thiolliere authored
Implement cumulus StorageWeightReclaim as wrapping transaction extension + frame system ReclaimWeight (#6140) (rebasing of https://github.com/paritytech/polkadot-sdk/pull/5234)

## Issues:
* Transaction extensions have weights and refund weight, so the reclaiming of unused weight must happen last in the transaction extension pipeline. Currently it happens inside `CheckWeight`.
* The cumulus storage weight reclaim transaction extension misses the proof size of logic happening prior to itself.

## Done:
* A new storage item `ExtrinsicWeightReclaimed` in frame-system. Any logic which attempts to do some reclaim must use this storage to avoid double reclaim.
* A new function `reclaim_weight` in the frame-system pallet: it takes info and post info as arguments, reads the already-reclaimed weight, calculates the new unused weight from info and post info, and does the more accurate reclaim if it is higher.
* `CheckWeight` is unchanged and still reclaims the weight in post dispatch.
* `ReclaimWeight` is a new transaction extension in frame system. For solo chains it must be used last in the transaction extension pipeline. It does the final, most accurate reclaim.
* `StorageWeightReclaim` is moved from cumulus primitives into its own pallet (in order to define benchmarks) and is changed into a wrapping transaction extension. It records the proof size and does the reclaim using this recording and the info and post info. So parachains don't need to use `ReclaimWeight`; but if they also use it, there is no bug.

```rust
/// The TransactionExtension to the basic transaction logic.
pub type TxExtension = cumulus_pallet_weight_reclaim::StorageWeightReclaim<
	Runtime,
	(
		frame_system::CheckNonZeroSender<Runtime>,
		frame_system::CheckSpecVersion<Runtime>,
		frame_system::CheckTxVersion<Runtime>,
		frame_system::CheckGenesis<Runtime>,
		frame_system::CheckEra<Runtime>,
		frame_system::CheckNonce<Runtime>,
		frame_system::CheckWeight<Runtime>,
		pallet_transaction_payment::ChargeTransactionPayment<Runtime>,
		BridgeRejectObsoleteHeadersAndMessages,
		(bridge_to_rococo_config::OnBridgeHubWestendRefundBridgeHubRococoMessages,),
		frame_metadata_hash_extension::CheckMetadataHash<Runtime>,
	),
>;
```

--------- Co-authored-by:
GitHub Action <action@github.com> Co-authored-by:
georgepisaltu <52418509+georgepisaltu@users.noreply.github.com> Co-authored-by:
Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> Co-authored-by:
Sebastian Kunert <skunert49@gmail.com> Co-authored-by: command-bot <>
-
- Dec 29, 2024
-
-
Xavier Lau authored
Migrate inclusion benchmark to v2. --- Polkadot address: 156HGo9setPcU2qhFMVWLkcmtCEGySLwNqa3DaEiYSWtte4Y --------- Co-authored-by:
GitHub Action <action@github.com> Co-authored-by:
Bastian Köcher <git@kchr.de>
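For reference, this is the general shape of the benchmarking v2 syntax that migrations like this one target; the pallet, call, and storage names (`do_something`, `Something`) are illustrative, not the actual inclusion benchmark.

```rust
use frame_benchmarking::v2::*;
use frame_system::RawOrigin;

#[benchmarks]
mod benches {
	use super::*;

	#[benchmark]
	fn do_something(x: Linear<1, 100>) {
		let caller: T::AccountId = whitelisted_caller();

		// The call under measurement; `_` reuses the benchmark function's name.
		#[extrinsic_call]
		_(RawOrigin::Signed(caller), x);

		// Post-conditions are plain assertions after the call.
		assert!(Something::<T>::get().is_some());
	}

	impl_benchmark_test_suite!(Pallet, crate::mock::new_test_ext(), crate::mock::Test);
}
```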
-
- Dec 27, 2024
-
-
Bastian Köcher authored
This PR improves the error reporting by the paras registrar when an owner wants to access a locked parachain. Closes: https://github.com/paritytech/polkadot-sdk/issues/6745 --------- Co-authored-by: command-bot <>
-
- Dec 22, 2024
-
-
Xavier Lau authored
Make pallet-recovery support `BlockNumberProvider`. Part of #6297. --- Polkadot address: 156HGo9setPcU2qhFMVWLkcmtCEGySLwNqa3DaEiYSWtte4Y --------- Co-authored-by:
Guillaume Thiolliere <guillaume.thiolliere@parity.io> Co-authored-by:
GitHub Action <action@github.com>
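A sketch of the assumed config surface; the associated-type name follows the PR title, and the exact bounds live in the pallet:

```rust
// Assumed shape of the new config item (not the verbatim runtime code):
impl pallet_recovery::Config for Runtime {
	// ...existing associated types elided...

	// Solo chains can keep using frame_system's block numbers; parachains
	// may instead plug in a provider that follows the relay chain.
	type BlockNumberProvider = System;
}
```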
-
- Dec 20, 2024
-
-
Xavier Lau authored
It doesn't make sense to only reorder the features array. For example, this makes it hard for me to compare the dependencies and features, especially since some crates have really long dependency lists:

```toml
[dependencies]
c = "*"
a = "*"
b = "*"

[features]
std = [
	"a",
	"b",
	"c",
]
```

This makes my life easier:

```toml
[dependencies]
a = "*"
b = "*"
c = "*"

[features]
std = [
	"a",
	"b",
	"c",
]
```

--------- Co-authored-by:
Bastian Köcher <git@kchr.de> Co-authored-by: command-bot <>
-
- Dec 19, 2024
-
-
Egor_P authored
This PR includes a backport of the regular version bumps and `prdocs` reordering from the `stable2412` branch back to master. --------- Co-authored-by:
ParityReleases <release-team@parity.io> Co-authored-by: command-bot <>
-
clangenb authored
[polkadot-runtime-parachains] Migrate disputes and disputes/slashing benchmarks to bench v2 syntax (#6577). Part of: * #6202 --------- Co-authored-by:
Giuseppe Re <giuseppe.re@parity.io> Co-authored-by:
Bastian Köcher <git@kchr.de>
-
Ludovic_Domingues authored
Linked to issue #590. Extracted code from mod.rs into new tests, mock and benchmarking files.
-
Ludovic_Domingues authored
# Description Linked to issue #590. I moved the tests and benchmarking to their own separate files to reduce the bloat inside auctions.rs. Co-authored-by:
Shawn Tabrizi <shawntabrizi@gmail.com>
-
Ludovic_Domingues authored
Linked to issue #590. I moved the mod, tests, mock and benchmarking to their own separate files to reduce the bloat inside purchase.rs. --------- Co-authored-by:
Guillaume Thiolliere <gui.thiolliere@gmail.com> Co-authored-by:
Shawn Tabrizi <shawntabrizi@gmail.com>
-
- Dec 18, 2024
-
-
Branislav Kontur authored
Relates to: https://github.com/paritytech/polkadot-sdk/issues/6918 --------- Co-authored-by: command-bot <>
-
Ludovic_Domingues authored
Linked to issue #590. I moved the mod, tests, mock and benchmarking to their own separate files to reduce the bloat inside claims.rs. --------- Co-authored-by:
Guillaume Thiolliere <gui.thiolliere@gmail.com>
-
clangenb authored
Migrates pallet-xcm benchmarks to benchmark v2 syntax * Part of #6202
-
Alin Dima authored
Adds a new zombienet-sdk test which verifies that elastic scaling works correctly both with the MVP and the new RFC 103 implementation which sends the core selector as a UMP signal. Also enables the V2 receipts node feature for testnet genesis config. Part of https://github.com/paritytech/polkadot-sdk/issues/5049 --------- Co-authored-by:
Javier Viola <javier@parity.io> Co-authored-by:
Javier Viola <363911+pepoviola@users.noreply.github.com>
-
- Dec 14, 2024
-
-
Jarkko Sakkinen authored
Bump `polkavm` to 0.18.0, and update `sc-polkavm-executor` to be compatible with the API changes. In addition, bump also `polkavm-derive` and `polkavm-linker` in order to make sure that all parts of the Polkadot SDK use the exact same ABI for `.polkavm` binaries. Purely relying on the RV32E/RV64E ABI is not possible, as PolkaVM uses a RISC-V-like ISA, which is derived from RV32E/RV64E but is still its own microarchitecture, i.e. not fully binary compatible. --------- Signed-off-by:
Jarkko Sakkinen <jarkko@parity.io> Co-authored-by:
Koute <koute@users.noreply.github.com> Co-authored-by:
Alexander Theißen <alex.theissen@me.com>
-
- Dec 13, 2024
-
-
Alexandru Gheorghe authored
Approval-voting canonicalize is off by one, which means that if we are finalizing blocks one by one, approval-voting cleans up only every other block. For example:
- With blocks 1, 2, 3, 4, 5, 6 created, the stored range would be StoredBlockRange(1, 7).
- When block 3 is finalized, canonicalize works and StoredBlockRange is (4, 7).
- When block 4 is finalized, canonicalize exits early because of the `if range.0 > canon_number` break clause, so blocks are not cleaned up.
- When block 5 is finalized, canonicalize works and StoredBlockRange becomes (6, 7), and both blocks 4 and 5 are cleaned up.

The consequence of this is that sometimes we keep block entries around after they are finalized, so at restart we consider these blocks and send them to approval-distribution. In most cases this is not a problem, but in the case when finality is lagging on restart, approval-distribution will receive 4 as the oldest block it needs to work on, and since BlockFinalized is never resent for block 4 after restart, it won't get the opportunity to clean it up. Therefore it will end up running approval-distribution aggression on block 4, because that is the oldest block it received from approval-voting for which it did not see a BlockFinalized signal. --------- Signed-off-by:
Alexandru Gheorghe <alexandru.gheorghe@parity.io>
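A runnable toy model of the post-fix semantics, using a simplified stand-in for `StoredBlockRange`; the real logic lives in approval-voting's `canonicalize`.

```rust
/// Simplified stand-in for StoredBlockRange(start, end): blocks in [start, end).
fn canonicalize(range: &mut (u32, u32), canon_number: u32) {
	// Post-fix semantics: everything up to and including the finalized block
	// is cleaned up, leaving the range to start right after it.
	if range.0 <= canon_number {
		range.0 = (canon_number + 1).min(range.1);
	}
}

fn main() {
	let mut range = (1, 7); // blocks 1..=6 stored
	canonicalize(&mut range, 3);
	assert_eq!(range, (4, 7));
	canonicalize(&mut range, 4); // previously a no-op due to the off-by-one
	assert_eq!(range, (5, 7));
}
```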
-
Tsvetomir Dimitrov authored
Related to https://github.com/paritytech/polkadot-sdk/issues/1797

# The problem
When fetching collations on the collator protocol validator side, we need to ensure that each parachain gets a fair core time share depending on its assignments in the claim queue. This means that the number of collations fetched per parachain should ideally be equal to (but definitely not bigger than) the number of claims for the particular parachain in the claim queue.

# Why the current implementation is not good enough
The current implementation doesn't guarantee such fairness. For each relay parent there is a `waiting_queue` (PerRelayParent -> Collations -> waiting_queue) which holds any unfetched collations advertised to the validator. The collations are fetched on a first in, first out principle, which means that if two parachains share a core and one of the parachains is more aggressive, it might starve the second parachain. How? At each relay parent up to `max_candidate_depth` candidates are accepted (enforced in `fn is_seconded_limit_reached`), so if one of the parachains is quick enough to fill in the queue with its advertisements, the validator will never fetch anything from the rest of the parachains despite them being scheduled. This doesn't mean that the aggressive parachain will occupy all the core time (this is guaranteed by the runtime), but it will deny the rest of the parachains sharing the same core the chance to have collations backed.

# How to fix it
The solution I am proposing is to limit fetches and advertisements based on the state of the claim queue. At each relay parent the claim queue for the core assigned to the validator is fetched. For each parachain a fetch limit is calculated (equal to the number of entries in the claim queue). Advertisements are not fetched for a parachain which has exceeded its claims in the claim queue. This solves the problem with aggressive parachains advertising too many collations.

The second part is in the collation fetching logic. The validator will keep track of which collations it has fetched so far. When a new collation needs to be fetched, instead of popping the first entry from the `waiting_queue`, the validator examines the claim queue and looks for the earliest claim which hasn't got a corresponding fetch. This way the validator will always try to prioritise the most urgent entries.

## How is the 'fair share of coretime' for each parachain determined?
Thanks to async backing we can accept more than one candidate per relay parent (with some constraints). We have also got the claim queue, which gives us a hint which parachain will be scheduled next on each core. So thanks to the claim queue we can determine the maximum number of claims per parachain. For example, if the claim queue is [A A A] at relay parent X, we know that at relay parent X we can accept three candidates for parachain A. There are two things to consider though:
1. If we accept more than one candidate at relay parent X, we are claiming the slot of a future relay parent. So accepting two candidates for relay parent X means that we are claiming the slot at rp X+1 or rp X+2.
2. At the same time, the slot at relay parent X could have been claimed by a previous relay parent(s). This means that we need to accept fewer candidates at X, or even no candidates.

There are a few cases worth considering:
1. Slot claimed by a previous relay parent. CQ @ rp X: [A A A]. Advertisements at X-1 for para A: 2. Advertisements at X-2 for para A: 2. Outcome: at rp X we can accept only 1 advertisement, since our slots were already claimed.
2. Slot in our claim queue already claimed at a future relay parent. CQ @ rp X: [A A A]. Advertisements at X+1 for para A: 1. Advertisements at X+2 for para A: 1. Outcome: at rp X we can accept only 1 advertisement, since the slots in our relay parents were already claimed.

The situation becomes more complicated with multiple leaves (forks). Imagine we have got a fork at rp X:
```
CQ @ rp X: [A A A]
(rp X) -> (rp X+1) -> rp(X+2)
      \-> (rp X+1')
```
Now when we examine the claim queue at rp X we need to consider both forks. This means that accepting a candidate at X means that we should have a slot for it in *BOTH* leaves. If for example there are three candidates accepted at rp X+1', we can't accept any candidates at rp X because there will be no slot for them in one of the leaves.

## How the claims are counted
There are two solutions for counting the claims at relay parent X:
1. Keep a state for the claim queue (number of claims and which of them are claimed) and look it up when accepting a collation. With this approach we need to keep the state up to date with each new advertisement and each new leaf update.
2. Calculate the state of the claim queue on the fly at each advertisement. This way we rebuild the state of the claim queue at each advertisement.

Solution 1 is hard to implement with forks. There are too many variants to keep track of (a different state for each leaf) and at the same time we might never need to use them. So I decided to go with option 2: building the claim queue state on the fly.

To achieve this I've extended `View` from backing_implicit_view to keep track of the outer leaves. I've also added a method which accepts a relay parent and returns all paths from an outer leaf to it. Let's call it `paths_to_relay_parent`.

So how does the counting work for relay parent X? First we examine the number of seconded and pending advertisements (more on pending in a second) from relay parent X to relay parent X-N (inclusive), where N is the length of the claim queue. Then we use `paths_to_relay_parent` to obtain all paths from outer leaves to relay parent X. We calculate the claims at relay parents X+1 to X+N (inclusive) for each leaf and take the maximum value. This way we guarantee that the candidate at rp X can be included in each leaf. This is the state of the claim queue which we use to decide if we can fetch one more advertisement at rp X or not.

## What is a pending advertisement
I mentioned that we count seconded and pending advertisements at relay parent X. A pending advertisement is:
1. An advertisement which is being fetched right now.
2. An advertisement pending validation at the backing subsystem.
3. An advertisement blocked from seconding by backing because we don't know one of its parent heads.

Any of these is considered a 'pending fetch' and a slot for it is kept. All of them are already tracked in `State`.

--------- Co-authored-by:
Maciej <maciej.zyszkiewicz@parity.io> Co-authored-by: command-bot <> Co-authored-by:
Alin Dima <alin@parity.io>
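A runnable toy sketch of the per-parachain fetch limit described above; the types are simplified stand-ins for the collator-protocol ones.

```rust
type ParaId = u32;

/// A parachain may not have more fetched-plus-pending collations than it has
/// claims in the claim queue for the core assigned to this validator.
fn can_fetch_more(claim_queue: &[ParaId], para: ParaId, seconded_and_pending: usize) -> bool {
	let claims = claim_queue.iter().filter(|p| **p == para).count();
	seconded_and_pending < claims
}

fn main() {
	let cq = [1, 1, 2]; // para 1 has two claims, para 2 has one
	assert!(can_fetch_more(&cq, 1, 1));
	assert!(!can_fetch_more(&cq, 1, 2)); // para 1 exhausted its claims
	assert!(can_fetch_more(&cq, 2, 0));
}
```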
-
- Dec 12, 2024
-
-
clangenb authored
[polkadot-runtime-parachains] migrate paras module to benchmarking v2 syntax Part of: * #6202 --------- Co-authored-by:
Bastian Köcher <git@kchr.de>
-
Bastian Köcher authored
Co-authored-by:
GitHub Action <action@github.com> Co-authored-by:
Branislav Kontur <bkontur@gmail.com> Co-authored-by: command-bot <>
-
Kazunobu Ndong authored
# Description Issue #6476. Collation generation is not needed on validator nodes and should be removed. ## Implementation Use a `DummySubsystem` for `collation_generation`. --------- Co-authored-by:
Bastian Köcher <git@kchr.de> Co-authored-by: command-bot <> Co-authored-by:
Dmitry Markin <dmitry@markin.tech> Co-authored-by:
Alexandru Vasile <60601340+lexnv@users.noreply.github.com>
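A sketch of the wiring; `DummySubsystem` is the real polkadot-overseer placeholder that discards messages, but the builder call shown is assumed and may differ from the actual polkadot-service code:

```rust
// Sketch only: validators never produce collations, so the real
// collation-generation subsystem is replaced by a no-op one.
use polkadot_overseer::DummySubsystem;

let overseer_builder = overseer_builder.collation_generation(DummySubsystem);
```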
-
- Dec 11, 2024
-
-
Francisco Aguirre authored
`InitiateTransfer`, the new instruction introduced in XCMv5, allows preserving the origin after a cross-chain transfer via the usage of the `AliasOrigin` instruction. The receiving chain needs to be configured to allow this instruction to have its intended effect and not just throw an error.

In this PR, I add the alias rules specified in the [RFC for origin preservation](https://github.com/polkadot-fellows/RFCs/blob/main/text/0122-alias-origin-on-asset-transfers.md) to westend chains so we can test these scenarios in the testnet. The new scenarios include:
- Sending a cross-chain transfer from one system chain to another and doing a Transact on the same message (1 hop)
- Sending a reserve asset transfer from one chain to another going through asset hub and doing a Transact on the same message (2 hops)

The updated chains are:
- Relay: added `AliasChildLocation`
- Collectives: added `AliasChildLocation` and `AliasOriginRootUsingFilter<AssetHubLocation, Everything>`
- People: added `AliasChildLocation` and `AliasOriginRootUsingFilter<AssetHubLocation, Everything>`
- Coretime: added `AliasChildLocation` and `AliasOriginRootUsingFilter<AssetHubLocation, Everything>`

AssetHub already has `AliasChildLocation` and doesn't need the other config item. BridgeHub is not intended to be used by end users, so I didn't add any config item. Only `AliasChildLocation` was added to the relay, since we intend for it to be used less.

--------- Co-authored-by:
GitHub Action <action@github.com> Co-authored-by: command-bot <>
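The shape of the added alias rules, per the list above; the exact crate hosting these types and the runtime wiring are assumed:

```rust
// Sketch of the `Aliasers` tuple described in the entry above.
pub type Aliasers = (
	// A location may alias into its own child locations.
	AliasChildLocation,
	// Asset Hub's root may alias other origins, here unrestricted (`Everything`).
	AliasOriginRootUsingFilter<AssetHubLocation, Everything>,
);
```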
-
Alexandru Gheorghe authored
After finality started lagging on Kusama around `2024-11-25 15:55:40`, nodes started being overloaded with messages and some restarted with:
```
Subsystem approval-distribution-subsystem appears unresponsive when sending a message of type polkadot_node_subsystem_types::messages::ApprovalDistributionMessage. origin=polkadot_service::relay_chain_selection::SelectRelayChainInner<sc_client_db::Backend<sp_runtime::generic::block::Block<sp_runtime::generic::header::Header<u32, sp_runtime::traits::BlakeTwo256>, sp_runtime::OpaqueExtrinsic>>, polkadot_overseer::Handle>
```
I think this happened because our aggression in its current form is way too spammy and creates problems in situations where we have already constructed blocks with a load of candidates to check, which is what happened around `#25933682`, before and after. However, aggression does help in the nightmare scenario where the network is segmented and sparsely connected, so I tend to think we shouldn't completely remove it.

The current configuration is:
```
l1_threshold: Some(16),
l2_threshold: Some(28),
resend_unfinalized_period: Some(8),
```
The way aggression works right now:
1. After L1 is triggered, all nodes send all messages they created to all the other nodes, on top of the messages they had already sent according to the topology.
2. Because of resend_unfinalized_period, the messages from step 1) are re-sent for each block every 8 blocks. For example, with blocks 1 to 24 unfinalized, at block 25 all messages for blocks 1 and 9 are resent, and consequently at block 26 all messages for blocks 2 and 10 are resent. This becomes worse as more blocks are created if backing backpressure has not kicked in yet. In total this logic makes each node receive 3 * total_number_of_messages_per_block.
3. L2 aggression is way too spammy: when it is enabled, all nodes send all messages of a block on GridXY, which means every message is received and sent by each node at least 2*sqrt(num_validators) times, so on Kusama that would be 66 * NUM_MESSAGES_AT_FIRST_UNFINALIZED_BLOCK. Even with a reasonable number of messages like 10K, which you can have if you escalated because of no-shows, you end up sending and receiving ~660k messages at once. I think that's what makes approval-distribution appear unresponsive on some nodes.
4. Duplicate messages are received by the nodes, which in turn mark the sender as banned, which may create more no-shows.

## Proposed improvements:
1. Make L2 trigger way later, at 64 blocks instead of 28; it should literally be the last resort. Until then we should let the approval-voting escalation mechanism do its thing and cover the no-shows.
2. On L1 aggression, don't send messages for blocks too far from the first unfinalized one: there is no point in sending the messages for block 20 if block 1 is still unfinalized.
3. On L1 aggression, send messages, then back off for 3 * resend_unfinalized_period to give everyone time to clear up their queues.
4. If aggression is enabled, accept duplicate messages from validators and don't punish them by reducing their reputation, which may create more no-shows.

--------- Signed-off-by:
Alexandru Gheorghe <alexandru.gheorghe@parity.io> Co-authored-by:
Andrei Sandu <54316454+sandreim@users.noreply.github.com>
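A runnable toy sketch of proposed improvements 2) and 3); all names and the `RESEND_WINDOW` value are invented for the illustration.

```rust
const L1_THRESHOLD: u64 = 16;
const RESEND_UNFINALIZED_PERIOD: u64 = 8;
const RESEND_WINDOW: u64 = 2; // only re-send blocks near the first unfinalized one

fn should_resend(
	block_age: u64,
	distance_from_first_unfinalized: u64,
	ticks_since_last_resend: u64,
) -> bool {
	block_age >= L1_THRESHOLD
		// 2) don't spam messages for blocks far from the first unfinalized one
		&& distance_from_first_unfinalized <= RESEND_WINDOW
		// 3) back off so peers get time to drain their queues
		&& ticks_since_last_resend >= 3 * RESEND_UNFINALIZED_PERIOD
}

fn main() {
	// Old enough and near the unfinalized head, but we resent recently: skip.
	assert!(!should_resend(20, 1, 10));
	// After the back-off window has passed, resending is allowed again.
	assert!(should_resend(20, 1, 24));
}
```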
-
Ludovic_Domingues authored
# Description Migrated polkadot-runtime-common auctions benchmarking to the new benchmarking syntax v2. This is part of #6202 --------- Co-authored-by:
Giuseppe Re <giuseppe.re@parity.io> Co-authored-by:
Bastian Köcher <git@kchr.de>
-
- Dec 10, 2024
-
-
Ron authored
## Description
Our smoke test that transfers `WETH` from Sepolia to Westend-AssetHub breaks: it tries to reregister `WETH` on AH but fails as follows:
https://bridgehub-westend.subscan.io/xcm_message/westend-4796d6b3600aca32ef63b9953acf6a456cfd2fbe
https://assethub-westend.subscan.io/extrinsic/9731267-0?event=9731267-2

The reason is that the transact call encoded on BH to register the asset (https://github.com/paritytech/polkadot-sdk/blob/a77940ba/bridges/snowbridge/primitives/router/src/inbound/mod.rs#L282-L289)
```
0x3500020209079edaa8020300fff9976782d46cc05630d1f6ebab18b2324d6b1400ce796ae65569a670d0c1cc1ac12515a3ce21b5fbf729d63d7b289baad070139d01000000000000000000000000000000
```
contains an `asset_id`, which is the XCM location, that can't be decoded on AH in V5.

Issue initial post in https://matrix.to/#/!qUtSTcfMJzBdPmpFKa:parity.io/$RNMAxIIOKGtBAqkgwiFuQf4eNaYpmOK-Pfw4d6vv1aU?via=parity.io&via=matrix.org&via=web3.foundation

--------- Co-authored-by:
Adrian Catangiu <adrian@parity.io> Co-authored-by:
Francisco Aguirre <franciscoaguirreperez@gmail.com>
-