- Dec 19, 2024
-
-
Egor_P authored
This PR includes a backport of the regular version bumps and `prdocs` reordering from the `stable2412` branch back to master --------- Co-authored-by:
ParityReleases <release-team@parity.io> Co-authored-by: command-bot <>
-
clangenb authored
[polkadot-runtime-parachains] Migrate disputes and disputes/slashing benchmarking to bench v2 syntax (#6577) Part of: * #6202 --------- Co-authored-by:
Giuseppe Re <giuseppe.re@parity.io> Co-authored-by:
Bastian Köcher <git@kchr.de>
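Several entries in this log are migrations to the benchmarking v2 syntax. As a rough orientation (not the actual disputes/slashing benchmarks), a v2 benchmark module looks like the sketch below; the pallet call, storage item and mock paths are hypothetical.
```rust
// Hypothetical pallet; only the macro structure mirrors the frame_benchmarking v2 syntax.
#[cfg(feature = "runtime-benchmarks")]
mod benchmarking {
    use super::*;
    use frame_benchmarking::v2::*;
    use frame_system::RawOrigin;

    #[benchmarks]
    mod benchmarks {
        use super::*;

        #[benchmark]
        fn do_something(x: Linear<1, 100>) {
            // Setup code runs outside the measured section.
            let caller: T::AccountId = whitelisted_caller();

            // The `#[extrinsic_call]` part is what gets measured; `_` refers to the
            // extrinsic with the same name as the benchmark function.
            #[extrinsic_call]
            _(RawOrigin::Signed(caller), x);

            // Optional verification after the measured call.
            assert!(SomeStorage::<T>::get().is_some());
        }

        impl_benchmark_test_suite!(Pallet, crate::mock::new_test_ext(), crate::mock::Test);
    }
}
```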
-
Ludovic_Domingues authored
Linked to issue #590. Extracted code from mod.rs into new tests, mock and benchmarking files.
-
Ludovic_Domingues authored
# Description Linked to issue #590. I moved the tests and benchmarking to their own separate files to reduce the bloat inside auctions.rs Co-authored-by:
Shawn Tabrizi <shawntabrizi@gmail.com>
-
Ludovic_Domingues authored
Linked to issue #590. I moved the mod, tests, mock and benchmarking to their own separate files to reduce the bloat inside purchase.rs --------- Co-authored-by:
Guillaume Thiolliere <gui.thiolliere@gmail.com> Co-authored-by:
Shawn Tabrizi <shawntabrizi@gmail.com>
-
- Dec 18, 2024
-
-
Branislav Kontur authored
Relates to: https://github.com/paritytech/polkadot-sdk/issues/6918 --------- Co-authored-by: command-bot <>
-
Ludovic_Domingues authored
Linked to issue #590. I moved the mod, tests, mock and benchmarking to their own separate files to reduce the bloat inside claims.rs --------- Co-authored-by:
Guillaume Thiolliere <gui.thiolliere@gmail.com>
-
clangenb authored
Migrates pallet-xcm benchmarks to benchmark v2 syntax * Part of #6202
-
Alin Dima authored
Adds a new zombienet-sdk test which verifies that elastic scaling works correctly both with the MVP and the new RFC 103 implementation which sends the core selector as a UMP signal. Also enables the V2 receipts node feature for testnet genesis config. Part of https://github.com/paritytech/polkadot-sdk/issues/5049 --------- Co-authored-by:
Javier Viola <javier@parity.io> Co-authored-by:
Javier Viola <363911+pepoviola@users.noreply.github.com>
-
- Dec 14, 2024
-
-
Jarkko Sakkinen authored
Bump `polkavm` to 0.18.0, and update `sc-polkavm-executor` to be compatible with the API changes. In addition, also bump `polkavm-derive` and `polkavm-linker` in order to make sure that all parts of the Polkadot SDK use the exact same ABI for `.polkavm` binaries. Purely relying on the RV32E/RV64E ABI is not possible, as PolkaVM uses a RISC-V-like ISA which is derived from RV32E/RV64E but is still its own microarchitecture, i.e. not fully binary compatible. --------- Signed-off-by:
Jarkko Sakkinen <jarkko@parity.io> Co-authored-by:
Koute <koute@users.noreply.github.com> Co-authored-by:
Alexander Theißen <alex.theissen@me.com>
-
- Dec 13, 2024
-
-
Alexandru Gheorghe authored
Approval-voting canonicalize is off by one, which means that if we are finalizing blocks one by one, approval-voting only cleans up every other block. For example: - With blocks 1, 2, 3, 4, 5, 6 created, the stored range would be StoredBlockRange(1,7) - When block 3 is finalized, canonicalize works and StoredBlockRange becomes (4,7) - When block 4 is finalized, canonicalize exits early because of the `if range.0 > canon_number` break clause, so blocks are not cleaned up. - When block 5 is finalized, canonicalize works, StoredBlockRange becomes (6,7) and both blocks 4 and 5 are cleaned up. The consequence of this is that we sometimes keep block entries around after they are finalized, so at restart we consider these blocks and send them to approval-distribution. In most cases this is not a problem, but when finality is lagging on restart, approval-distribution will receive 4 as the oldest block it needs to work on, and since BlockFinalized is never resent for block 4 after the restart it won't get the opportunity to clean that up. Therefore it will end up running approval-distribution aggression on block 4, because that is the oldest block it received from approval-voting for which it did not see a BlockFinalized signal. --------- Signed-off-by:
Alexandru Gheorghe <alexandru.gheorghe@parity.io>
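A minimal, self-contained model of the pruning invariant described above, with illustrative names rather than the actual approval-voting types: after finalizing height `canon_number`, no entry at or below that height should survive, even when finality advances one block at a time.
```rust
// Toy model: `stored` holds the heights of stored block entries, `range` mirrors
// StoredBlockRange as a half-open [first_stored, first_unstored) interval.
fn canonicalize(stored: &mut Vec<u32>, range: &mut (u32, u32), canon_number: u32) {
    // Everything at or below the finalized height must be cleaned up, including the
    // case where the range already starts exactly at `canon_number`.
    stored.retain(|height| *height > canon_number);
    range.0 = range.0.max(canon_number + 1);
}

fn main() {
    // Blocks 1..=6 created, block 3 already finalized: StoredBlockRange(4, 7).
    let mut stored = vec![4, 5, 6];
    let mut range = (4u32, 7u32);

    // Finalizing block 4 must prune it right away, not wait for block 5.
    canonicalize(&mut stored, &mut range, 4);
    assert_eq!(stored, vec![5, 6]);
    assert_eq!(range, (5, 7));
}
```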
-
Tsvetomir Dimitrov authored
Related to https://github.com/paritytech/polkadot-sdk/issues/1797 # The problem When fetching collations in the collator protocol/validator side we need to ensure that each parachain gets a fair core time share depending on its assignments in the claim queue. This means that the number of collations fetched per parachain should ideally be equal to (but definitely not bigger than) the number of claims for the particular parachain in the claim queue. # Why the current implementation is not good enough The current implementation doesn't guarantee such fairness. For each relay parent there is a `waiting_queue` (PerRelayParent -> Collations -> waiting_queue) which holds any unfetched collations advertised to the validator. The collations are fetched on a first-in, first-out principle, which means that if two parachains share a core and one of the parachains is more aggressive it might starve the second parachain. How? At each relay parent up to `max_candidate_depth` candidates are accepted (enforced in `fn is_seconded_limit_reached`), so if one of the parachains is quick enough to fill the queue with its advertisements, the validator will never fetch anything from the rest of the parachains even though they are scheduled. This doesn't mean that the aggressive parachain will occupy all the core time (this is guaranteed by the runtime) but it will deny the rest of the parachains sharing the same core the opportunity to have collations backed. # How to fix it The solution I am proposing is to limit fetches and advertisements based on the state of the claim queue. At each relay parent the claim queue for the core assigned to the validator is fetched. For each parachain a fetch limit is calculated (equal to the number of entries in the claim queue). Advertisements are not fetched for a parachain which has exceeded its claims in the claim queue. This solves the problem of aggressive parachains advertising too many collations. The second part is in the collation fetching logic. The validator will keep track of which collations it has fetched so far. When a new collation needs to be fetched, instead of popping the first entry from the `waiting_queue` the validator examines the claim queue and looks for the earliest claim which hasn't got a corresponding fetch. This way the validator will always try to prioritise the most urgent entries. ## How is the 'fair share of coretime' for each parachain determined? Thanks to async backing we can accept more than one candidate per relay parent (with some constraints). We also have got the claim queue which gives us a hint which parachain will be scheduled next on each core. So thanks to the claim queue we can determine the maximum number of claims per parachain. For example, if the claim queue is [A A A] at relay parent X, we know that at relay parent X we can accept three candidates for parachain A. There are two things to consider though: 1. If we accept more than one candidate at relay parent X we are claiming the slot of a future relay parent. So accepting two candidates for relay parent X means that we are claiming the slot at rp X+1 or rp X+2. 2. At the same time the slot at relay parent X could have been claimed by a previous relay parent(s). This means that we need to accept fewer candidates at X or even no candidates. There are a few cases worth considering: 1. Slot claimed by previous relay parent. CQ @ rp X: [A A A] Advertisements at X-1 for para A: 2 Advertisements at X-2 for para A: 2 Outcome - at rp X we can accept only 1 advertisement since our slots were already claimed. 2. Slot in our claim queue already claimed at a future relay parent. CQ @ rp X: [A A A] Advertisements at X+1 for para A: 1 Advertisements at X+2 for para A: 1 Outcome: at rp X we can accept only 1 advertisement since the slots in our relay parents were already claimed. The situation becomes more complicated with multiple leaves (forks). Imagine we have got a fork at rp X: ``` CQ @ rp X: [A A A] (rp X) -> (rp X+1) -> rp(X+2) \-> (rp X+1') ``` Now when we examine the claim queue at rp X we need to consider both forks. This means that accepting a candidate at X means that we should have a slot for it in *BOTH* leaves. If for example there are three candidates accepted at rp X+1' we can't accept any candidates at rp X because there will be no slot for it in one of the leaves. ## How the claims are counted There are two solutions for counting the claims at relay parent X: 1. Keep a state for the claim queue (number of claims and which of them are claimed) and look it up when accepting a collation. With this approach we need to keep the state up to date with each new advertisement and each new leaf update. 2. Calculate the state of the claim queue on the fly at each advertisement. This way we rebuild the state of the claim queue at each advertisement. Solution 1 is hard to implement with forks. There are too many variants to keep track of (a different state for each leaf) and at the same time we might never need to use them. So I decided to go with option 2 - building the claim queue state on the fly. To achieve this I've extended `View` from backing_implicit_view to keep track of the outer leaves. I've also added a method which accepts a relay parent and returns all paths from an outer leaf to it. Let's call it `paths_to_relay_parent`. So how does the counting work for relay parent X? First we examine the number of seconded and pending advertisements (more on pending in a second) from relay parent X to relay parent X-N (inclusive) where N is the length of the claim queue. Then we use `paths_to_relay_parent` to obtain all paths from the outer leaves to relay parent X. We calculate the claims at relay parents X+1 to X+N (inclusive) for each leaf and get the maximum value. This way we guarantee that the candidate at rp X can be included in each leaf. This is the state of the claim queue which we use to decide if we can fetch one more advertisement at rp X or not. ## What is a pending advertisement I mentioned that we count seconded and pending advertisements at relay parent X. A pending advertisement is: 1. An advertisement which is being fetched right now. 2. An advertisement pending validation at the backing subsystem. 3. An advertisement blocked for seconding by backing because we don't know one of its parent heads. Any of these is considered a 'pending fetch' and a slot for it is kept. All of them are already tracked in `State`. --------- Co-authored-by:
Maciej <maciej.zyszkiewicz@parity.io> Co-authored-by: command-bot <> Co-authored-by:
Alin Dima <alin@parity.io>
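A self-contained sketch of the per-parachain fetch limit described above, with made-up types and names: a parachain's seconded plus pending collations at a relay parent must stay below its number of claims in the claim queue of the assigned core.
```rust
use std::collections::HashMap;

type ParaId = u32;

/// Number of claims each para has in the claim queue of the assigned core.
fn claims_per_para(claim_queue: &[ParaId]) -> HashMap<ParaId, usize> {
    let mut claims = HashMap::new();
    for para in claim_queue {
        *claims.entry(*para).or_insert(0) += 1;
    }
    claims
}

/// A new advertisement for `para` is only fetched while it still has unclaimed slots.
fn can_fetch(claim_queue: &[ParaId], seconded_and_pending: &HashMap<ParaId, usize>, para: ParaId) -> bool {
    let limit = claims_per_para(claim_queue).get(&para).copied().unwrap_or(0);
    seconded_and_pending.get(&para).copied().unwrap_or(0) < limit
}

fn main() {
    // Paras A (=1) and B (=2) share a core, claim queue [A, A, B].
    let claim_queue = [1, 1, 2];
    let mut counted = HashMap::new();
    counted.insert(1, 2); // para A already has two seconded/pending collations

    assert!(!can_fetch(&claim_queue, &counted, 1)); // A exhausted its claims
    assert!(can_fetch(&claim_queue, &counted, 2)); // B still has a free claim
}
```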
-
- Dec 12, 2024
-
-
clangenb authored
[polkadot-runtime-parachains] migrate paras module to benchmarking v2 syntax Part of: * #6202 --------- Co-authored-by:
Bastian Köcher <git@kchr.de>
-
Bastian Köcher authored
Co-authored-by:
GitHub Action <action@github.com> Co-authored-by:
Branislav Kontur <bkontur@gmail.com> Co-authored-by: command-bot <>
-
Kazunobu Ndong authored
# Description Issue #6476 Collation-generation is not needed for validator nodes and should be removed. ## Implementation Use a `DummySubsystem` for `collation_generation` --------- Co-authored-by:
Bastian Köcher <git@kchr.de> Co-authored-by: command-bot <> Co-authored-by:
Dmitry Markin <dmitry@markin.tech> Co-authored-by:
Alexandru Vasile <60601340+lexnv@users.noreply.github.com>
-
- Dec 11, 2024
-
-
Francisco Aguirre authored
`InitiateTransfer`, the new instruction introduced in XCMv5, allows preserving the origin after a cross-chain transfer via the usage of the `AliasOrigin` instruction. The receiving chain needs to be configured to allow this instruction to have its intended effect and not just throw an error. In this PR, I add the alias rules specified in the [RFC for origin preservation](https://github.com/polkadot-fellows/RFCs/blob/main/text/0122-alias-origin-on-asset-transfers.md) to the Westend chains so we can test these scenarios in the testnet. The new scenarios include: - Sending a cross-chain transfer from one system chain to another and doing a Transact in the same message (1 hop) - Sending a reserve asset transfer from one chain to another going through Asset Hub and doing a Transact in the same message (2 hops) The updated chains are: - Relay: added `AliasChildLocation` - Collectives: added `AliasChildLocation` and `AliasOriginRootUsingFilter<AssetHubLocation, Everything>` - People: added `AliasChildLocation` and `AliasOriginRootUsingFilter<AssetHubLocation, Everything>` - Coretime: added `AliasChildLocation` and `AliasOriginRootUsingFilter<AssetHubLocation, Everything>` AssetHub already has `AliasChildLocation` and doesn't need the other config item. BridgeHub is not intended to be used by end users so I didn't add any config item. Only `AliasChildLocation` was added to the relay since we intend for it to be used less. --------- Co-authored-by:
GitHub Action <action@github.com> Co-authored-by: command-bot <>
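For orientation, a rough sketch (not the literal runtime diff) of the kind of aliasing configuration the listed chains gain; `AssetHubLocation` and the imported filter types are assumed to come from the runtime's XCM configuration and the XCM builder utilities.
```rust
// Sketch only; `AliasChildLocation`, `AliasOriginRootUsingFilter` and `Everything` are
// assumed to be in scope from the XCM builder / frame-support preludes, and
// `AssetHubLocation` is a `Location` constant defined elsewhere in the runtime.
pub type TrustedAliasers = (
    AliasChildLocation,
    AliasOriginRootUsingFilter<AssetHubLocation, Everything>,
);

// Wired into the chain's XCM executor configuration, roughly:
//     impl xcm_executor::Config for XcmConfig {
//         // ...
//         type Aliasers = TrustedAliasers;
//         // ...
//     }
```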
-
Alexandru Gheorghe authored
After finality started lagging on Kusama around `2024-11-25 15:55:40` nodes started being overloaded with messages and some restarted with ``` Subsystem approval-distribution-subsystem appears unresponsive when sending a message of type polkadot_node_subsystem_types::messages::ApprovalDistributionMessage. origin=polkadot_service::relay_chain_selection::SelectRelayChainInner<sc_client_db::Backend<sp_runtime::generic::block::Block<sp_runtime::generic::header::Header<u32, sp_runtime::traits::BlakeTwo256>, sp_runtime::OpaqueExtrinsic>>, polkadot_overseer::Handle> ``` I think this happened because our aggression in its current form is way too spammy and creates problems in situations where we have already constructed blocks with a load of candidates to check, which is what happened around `#25933682` before and after. However, aggression does help in the nightmare scenario where the network is segmented and sparsely connected, so I tend to think we shouldn't completely remove it. The current configuration is: ``` l1_threshold: Some(16), l2_threshold: Some(28), resend_unfinalized_period: Some(8), ``` The way aggression works right now: 1. After L1 is triggered, all nodes send all the messages they created to all the other nodes, on top of the messages they had already sent according to the topology. 2. Because of resend_unfinalized_period, the messages from step 1 are resent for each block every 8 blocks; for example, if we have blocks 1 to 24 unfinalized, then at block 25 all messages for blocks 1 and 9 will be resent, and consequently at block 26 all messages for blocks 2 and 10 will be resent, and this gets worse as more blocks are created if backing backpressure has not kicked in yet. In total this logic means that each node receives about 3 * total_number_of_messages_per_block. 3. L2 aggression is way too spammy: when L2 aggression is enabled all nodes send all messages of a block on GridXY, which means that every message is sent and received by each node at least 2*sqrt(num_validators) times, so on Kusama that would be 66 * NUM_MESSAGES_AT_FIRST_UNFINALIZED_BLOCK. Even with a reasonable number of messages like 10K, which you can have if you escalated because of no-shows, you end up sending and receiving ~660k messages at once; I think that's what makes approval-distribution appear unresponsive on some nodes. 4. Duplicate messages are received by the nodes, which in turn mark the sender as banned, which may create more no-shows. ## Proposed improvements: 1. Make L2 trigger way later, 28 blocks instead of 64; this should literally be the last resort, and until then we should let the approval-voting escalation mechanism do its thing and cover the no-shows. 2. On L1 aggression, don't send messages for blocks too far from the first_unfinalized one; there is no point in sending the messages for block 20 if block 1 is still unfinalized. 3. On L1 aggression, send messages then back off for 3 * resend_unfinalized_period to give everyone time to clear up their queues. 4. If aggression is enabled, accept duplicate messages from validators and don't punish them by reducing their reputation, which may create more no-shows. --------- Signed-off-by:
Alexandru Gheorghe <alexandru.gheorghe@parity.io> Co-authored-by:
Andrei Sandu <54316454+sandreim@users.noreply.github.com>
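A self-contained toy model of two of the proposed L1 changes (only acting on blocks near the first unfinalized one, and backing off between resends); the thresholds mirror the configuration quoted above, while `MAX_DISTANCE_FROM_FIRST_UNFINALIZED` is an assumed value for the sketch.
```rust
const L1_THRESHOLD: u32 = 16;
const RESEND_PERIOD: u32 = 8;
const BACKOFF_FACTOR: u32 = 3;
const MAX_DISTANCE_FROM_FIRST_UNFINALIZED: u32 = 4; // assumed value, only for the sketch

fn should_resend(
    block_number: u32,
    first_unfinalized: u32,
    highest_seen: u32,
    last_resend_at: Option<u32>,
) -> bool {
    let lag = highest_seen.saturating_sub(first_unfinalized);
    if lag < L1_THRESHOLD {
        return false; // aggression not triggered yet
    }
    // Don't bother with blocks far above the finality stall point.
    if block_number.saturating_sub(first_unfinalized) > MAX_DISTANCE_FROM_FIRST_UNFINALIZED {
        return false;
    }
    // Back off for several resend periods so peers can drain their queues.
    match last_resend_at {
        Some(at) => highest_seen.saturating_sub(at) >= BACKOFF_FACTOR * RESEND_PERIOD,
        None => true,
    }
}

fn main() {
    assert!(should_resend(101, 100, 120, None));
    assert!(!should_resend(110, 100, 120, None)); // too far from the first unfinalized block
    assert!(!should_resend(101, 100, 120, Some(110))); // still backing off
}
```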
-
Ludovic_Domingues authored
# Description Migrated polkadot-runtime-common auctions benchmarking to the new benchmarking syntax v2. This is part of #6202 --------- Co-authored-by:
Giuseppe Re <giuseppe.re@parity.io> Co-authored-by:
Bastian Köcher <git@kchr.de>
-
- Dec 10, 2024
-
-
Ron authored
## Description Our smoke test that transfers `WETH` from Sepolia to Westend-AssetHub breaks: it tries to re-register `WETH` on AH but fails as follows: https://bridgehub-westend.subscan.io/xcm_message/westend-4796d6b3600aca32ef63b9953acf6a456cfd2fbe https://assethub-westend.subscan.io/extrinsic/9731267-0?event=9731267-2 The reason is that in the transact call encoded on BH to register the asset https://github.com/paritytech/polkadot-sdk/blob/a77940ba/bridges/snowbridge/primitives/router/src/inbound/mod.rs#L282-L289 ``` 0x3500020209079edaa8020300fff9976782d46cc05630d1f6ebab18b2324d6b1400ce796ae65569a670d0c1cc1ac12515a3ce21b5fbf729d63d7b289baad070139d01000000000000000000000000000000 ``` the `asset_id`, which is the XCM location, can't be decoded on AH in V5. Issue initial post in https://matrix.to/#/!qUtSTcfMJzBdPmpFKa:parity.io/$RNMAxIIOKGtBAqkgwiFuQf4eNaYpmOK-Pfw4d6vv1aU?via=parity.io&via=matrix.org&via=web3.foundation --------- Co-authored-by:
Adrian Catangiu <adrian@parity.io> Co-authored-by:
Francisco Aguirre <franciscoaguirreperez@gmail.com>
-
Alexandru Gheorghe authored
The way we build the messages we need to send to approval-distribution can result in a situation where, if we have multiple assignments covered by a coalesced approval, the messages are sent in this order: ASSIGNMENT1, APPROVAL, ASSIGNMENT2. This happens because we iterate over each candidate and add both the assignment and the approval for that candidate to the queue of messages, so when the approval reaches the approval-distribution subsystem it won't be imported and gossiped because one of the assignments for it is not known yet. So in a network where a lot of nodes are restarting at the same time we could end up in a situation where one set of nodes correctly received the assignments and approvals before the restart, approve their blocks and don't trigger their assignments. The other set of nodes should receive the assignments and approvals after the restart, but because the approvals never get broadcast anymore because of this bug, the only way they could approve is if other nodes start broadcasting their assignments. I think this bug contributed to the reason the network did not recover on `2024-11-25 15:55:40` after the restarts. Tested this scenario with a `zombienet` where nodes are finalising blocks because of aggression and all nodes are restarted at once, and confirmed that the network lags and doesn't recover before the fix and it does after. --------- Signed-off-by:
Alexandru Gheorghe <alexandru.gheorghe@parity.io>
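A minimal illustration of the fix described above, with made-up message types: when one coalesced approval covers several candidates, every assignment is queued before the approval, so approval-distribution can import and gossip it.
```rust
#[derive(Debug)]
enum Message {
    Assignment(u32),    // candidate index
    Approval(Vec<u32>), // candidate indices covered by a coalesced approval
}

fn queue_messages(candidates: &[u32]) -> Vec<Message> {
    let mut out = Vec::new();
    // Queue every assignment first...
    for c in candidates {
        out.push(Message::Assignment(*c));
    }
    // ...and only then the approval that covers all of them, instead of
    // interleaving ASSIGNMENT1, APPROVAL, ASSIGNMENT2.
    out.push(Message::Approval(candidates.to_vec()));
    out
}

fn main() {
    for msg in queue_messages(&[1, 2]) {
        println!("{msg:?}");
    }
}
```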
-
Joseph Zhao authored
Close: #5858 --------- Co-authored-by:
Bastian Köcher <git@kchr.de>
-
Branislav Kontur authored
Co-authored-by:
Francisco Aguirre <franciscoaguirreperez@gmail.com>
-
- Dec 09, 2024
-
-
Adrian Catangiu authored
# Description Sending XCM messages to other chains requires paying a "transport fee". This can be paid either: - from the `origin` local account if `jit_withdraw = true`, - taken from the Holding register otherwise. This currently works for the following hops/scenarios: 1. On the destination no transport fee is needed (only sending costs, not receiving), 2. Local/originating chain: just set JIT=true and the fee will be paid from the signed account, 3. Intermediary hops - only if the intermediary is acting as reserve between two untrusted chains (aka only for the `DepositReserveAsset` instruction) - this was fixed in https://github.com/paritytech/polkadot-sdk/pull/3142 But now we're seeing more complex asset transfers that are mixing reserve transfers with teleports depending on the involved chains. # Example E.g. transferring DOT between the Relay and a parachain, but through AH (using AH instead of the Relay chain as the parachain's DOT reserve). In the `Parachain --1--> AssetHub --2--> Relay` scenario, DOT has to be reserve-withdrawn in leg `1`, then teleported in leg `2`. On the intermediary hop (AssetHub), `InitiateTeleport` fails to send the onward message because of missing transport fees. We also can't rely on `jit_withdraw` because the original origin is lost on the way, and even if it weren't we can't rely on the user having funded accounts on each hop along the way. # Solution/Changes - Charge the transport fee in the executor from the transferred assets (if available), - Only charge from transferred assets if JIT_WITHDRAW was not set, - Only charge from transferred assets unless using XCMv5 `PayFees`, where we do not have this problem. # Testing Added regression tests in emulated transfers. Fixes https://github.com/paritytech/polkadot-sdk/issues/4832 Fixes https://github.com/paritytech/polkadot-sdk/issues/6637 --------- Signed-off-by:
Adrian Catangiu <adrian@parity.io> Co-authored-by:
Francisco Aguirre <franciscoaguirreperez@gmail.com>
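A toy model of the fee-sourcing rule, illustrative only and not the executor code: delivery fees come out of the transferred assets in the Holding register only when `jit_withdraw` is off and the program does not use XCMv5 `PayFees`.
```rust
fn pay_fees_from_transferred_assets(jit_withdraw: bool, uses_pay_fees: bool) -> bool {
    // JIT withdrawal pays from the origin's account; PayFees already sets fees aside.
    !jit_withdraw && !uses_pay_fees
}

fn main() {
    assert!(pay_fees_from_transferred_assets(false, false)); // intermediary-hop case fixed here
    assert!(!pay_fees_from_transferred_assets(true, false)); // fees come from the origin account
    assert!(!pay_fees_from_transferred_assets(false, true)); // PayFees covers delivery fees
}
```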
-
Alexandru Gheorghe authored
After finality started lagging on Kusama around 2024-11-25 15:55:40, validators started occasionally seeing this log when importing votes covering more than one assignment. ``` Possible bug: Vote import failed ``` That happens because the assumption that assignments from the same validator would have the same required routing doesn't hold after aggression is enabled, so you might end up receiving the first assignment, then modifying the routing for it in `enable_aggression`, then receiving the second assignment and the vote covering both assignments, so the routing for the first and second assignment wouldn't match and we would fail to import the vote. From the logs I've seen, I don't think this is the reason the network didn't fully recover until the failsafe kicked in, because the votes had already been imported in approval-voting before this error. --------- Signed-off-by:
Alexandru Gheorghe <alexandru.gheorghe@parity.io>
-
Maksym H authored
- change bench to default to old CLI - fix profile to production --------- Co-authored-by:
GitHub Action <action@github.com> Co-authored-by: command-bot <>
-
- Dec 08, 2024
-
-
Francisco Aguirre authored
The last feature we wanted for V5: changing `SetAssetClaimer` to be just one of many possible "hints" that you can specify at the beginning of your program to change its behaviour. This makes it easier to add new hints in the future and have barriers accept them. --------- Co-authored-by:
GitHub Action <action@github.com>
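A self-contained sketch of the design shift, with toy types standing in for the real XCM v5 ones: a single hint-setting instruction carries a list of hints, of which the asset claimer is just one kind.
```rust
// Toy types only; the real V5 instruction and hint enums live in the xcm crate.
#[derive(Debug, Clone)]
enum Hint {
    AssetClaimer { location: String }, // stand-in for an XCM `Location`
    // New hint kinds can be added here without introducing new instructions.
}

#[derive(Debug, Clone)]
enum Instruction {
    SetHints { hints: Vec<Hint> },
    // ... the rest of the instruction set
}

fn main() {
    // Hints go at the beginning of the program so barriers can accept them up front.
    let program = vec![Instruction::SetHints {
        hints: vec![Hint::AssetClaimer { location: "../Account(claimer)".into() }],
    }];
    println!("{program:?}");
}
```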
-
- Dec 06, 2024
-
-
Maksym H authored
Co-authored-by: command-bot <>
-
- Dec 05, 2024
-
-
Francisco Aguirre authored
Closes: https://github.com/paritytech/polkadot-sdk/issues/6585 Removing the `require_weight_at_most` parameter in V5 Transact had only one problem: converting a message from V5 to V4 to send to chains that haven't upgraded yet. The conversion would not know what weight to give to the Transact, since V4 and below require it. To fix this, I added back the weight in the form of an `Option<Weight>` called `fallback_max_weight`. This can be set to `None` if you don't intend to deal with a chain that hasn't upgraded yet. If you set it to `Some(_)`, the behaviour is the same. The plan is to totally remove this in V6 since there will be a good conversion path from V6 to V5. --------- Co-authored-by:
GitHub Action <action@github.com> Co-authored-by:
Adrian Catangiu <adrian@parity.io>
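A rough sketch of the idea with made-up minimal types: a V5 `Transact` carries an optional `fallback_max_weight`, and the V5-to-V4 conversion uses it to fill in the mandatory V4 weight. Whether the real conversion errors out or substitutes a default when the fallback is `None` is not stated here; this sketch simply fails.
```rust
#[derive(Clone, Copy, Debug)]
struct Weight { ref_time: u64, proof_size: u64 }

struct TransactV5 { fallback_max_weight: Option<Weight>, call: Vec<u8> }
struct TransactV4 { require_weight_at_most: Weight, call: Vec<u8> }

// Illustrative downgrade: V4 needs a concrete weight, so the optional fallback fills it in.
fn downgrade(v5: TransactV5) -> Result<TransactV4, &'static str> {
    let require_weight_at_most = v5
        .fallback_max_weight
        .ok_or("no fallback_max_weight set; cannot target a pre-V5 chain")?;
    Ok(TransactV4 { require_weight_at_most, call: v5.call })
}

fn main() {
    let ok = downgrade(TransactV5 {
        fallback_max_weight: Some(Weight { ref_time: 1_000_000, proof_size: 10_000 }),
        call: vec![],
    });
    assert!(ok.is_ok());
    assert!(downgrade(TransactV5 { fallback_max_weight: None, call: vec![] }).is_err());
}
```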
-
- Dec 03, 2024
-
-
Lulu authored
-
- Nov 29, 2024
-
-
eskimor authored
This might actually happen in non-malicious cases. Co-authored-by:
eskimor <eskimor@no-such-url.com>
-
- Nov 28, 2024
-
-
Ludovic_Domingues authored
# Description Migrated pallet-xcm-benchmarks to benchmarking syntax V2 This is part of #6202 --------- Co-authored-by:
Giuseppe Re <giuseppe.re@parity.io>
-
- Nov 26, 2024
-
-
Francisco Aguirre authored
The `query_weight_to_asset_fee` function was trying to convert versions by using `try_as`; this function [doesn't convert from a versioned to a concrete type](https://github.com/paritytech/polkadot-sdk/blob/0156ca8f/polkadot/xcm/src/lib.rs#L131). This would cause all calls with a lower version to fail. The correct function to use is the good old [try_into](https://github.com/paritytech/polkadot-sdk/blob/0156ca8f/polkadot/xcm/src/lib.rs#L184). Now those calls work :) --------- Co-authored-by: command-bot <> Co-authored-by:
Branislav Kontur <bkontur@gmail.com> Co-authored-by:
GitHub Action <action@github.com>
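A self-contained illustration of the difference that caused the bug, using toy types rather than the real `VersionedAssetId`: a `try_as`-style accessor only succeeds when the stored version already matches, while a `TryFrom`/`try_into` conversion also upgrades older versions.
```rust
#[derive(Debug, Clone)]
enum VersionedId {
    V4(String),
    V5(String),
}

#[derive(Debug, PartialEq)]
struct V5Id(String);

impl VersionedId {
    // `try_as`-style accessor: no conversion, just a match on the current version.
    fn try_as_v5(&self) -> Option<&String> {
        match self {
            VersionedId::V5(id) => Some(id),
            _ => None,
        }
    }
}

impl TryFrom<VersionedId> for V5Id {
    type Error = ();
    fn try_from(value: VersionedId) -> Result<Self, ()> {
        match value {
            // Converts older versions too, which is what the fix relies on.
            VersionedId::V4(id) | VersionedId::V5(id) => Ok(V5Id(id)),
        }
    }
}

fn main() {
    let older = VersionedId::V4("DOT".into());
    assert!(older.try_as_v5().is_none()); // what the buggy code did: fails for V4 input
    assert_eq!(V5Id::try_from(older), Ok(V5Id("DOT".into()))); // what the fix does
}
```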
-
Branislav Kontur authored
This PR addresses two small fixes: 1. Fixed a typo ("as as") found on the way. 2. Resolved a bug in the `local/remote exporters` used for bridging. Previously, they consumed `dest` and `msg` without returning them when inner routers/exporters failed with `NotApplicable`. This PR ensures compliance with the [`SendXcm`](https://github.com/paritytech/polkadot-sdk/blob/master/polkadot/xcm/src/v5/traits.rs#L449-L450) and [`ExportXcm`](https://github.com/paritytech/polkadot-sdk/blob/master/polkadot/xcm/xcm-executor/src/traits/export.rs#L44-L45) traits. --------- Co-authored-by:
GitHub Action <action@github.com>
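A toy model of the `SendXcm`/`ExportXcm` contract mentioned above, with simplified types: a router that answers `NotApplicable` must leave `dest` and `msg` in place so the next router in the tuple can try them.
```rust
#[derive(Debug, PartialEq)]
enum SendError { NotApplicable }

fn validate(
    dest: &mut Option<String>,
    msg: &mut Option<Vec<u8>>,
    handles_dest: impl Fn(&str) -> bool,
) -> Result<(String, Vec<u8>), SendError> {
    // Peek before taking: a router that is not responsible for this destination must
    // leave `dest` and `msg` untouched so a later router can still consume them.
    let applicable =
        matches!((dest.as_deref(), msg.as_deref()), (Some(d), Some(_)) if handles_dest(d));
    if !applicable {
        return Err(SendError::NotApplicable);
    }
    Ok((dest.take().expect("checked above"), msg.take().expect("checked above")))
}

fn main() {
    let mut dest = Some("Ethereum".to_string());
    let mut msg = Some(vec![1, 2, 3]);

    // A router that only handles "Polkadot" declines and leaves the inputs intact.
    assert_eq!(validate(&mut dest, &mut msg, |d| d == "Polkadot"), Err(SendError::NotApplicable));
    assert!(dest.is_some() && msg.is_some());

    // A router that does handle the destination consumes them.
    assert!(validate(&mut dest, &mut msg, |d| d == "Ethereum").is_ok());
    assert!(dest.is_none() && msg.is_none());
}
```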
-
- Nov 25, 2024
-
-
jpserrat authored
Closes #6415 # Description Remove the unused message `ReportCollator` and the test related to this message on the collator protocol validator side. cc: @tdimitrov --------- Co-authored-by:
Tsvetomir Dimitrov <tsvetomir@parity.io> Co-authored-by: command-bot <>
-
Branislav Kontur authored
-
- Nov 22, 2024
-
-
eskimor authored
Co-authored-by:
Robert <robert@gonimo.com> Co-authored-by:
ordian <write@reusable.software>
-
gupnik authored
Step in https://github.com/paritytech/polkadot-sdk/issues/3268 This PR adds the ability for these pallets to specify their source of the block number. This is useful when these pallets are migrated from the relay chain to a parachain and vice versa. This change is backwards compatible: 1. if the `BlockNumberProvider` continues to use the system pallet's block number, or 2. when a pallet deployed on the relay chain is moved to a parachain but still uses the relay chain's block number. However, we would need migrations if the deployed pallets are upgraded on an existing parachain and the `BlockNumberProvider` uses the relay chain block number. --------- Co-authored-by:
Kian Paimani <5588131+kianenigma@users.noreply.github.com>
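A hedged sketch of the pattern, not the exact diff from this PR: the pallet reads its block number from a configurable provider instead of hard-coding the system pallet's block number; on a parachain the provider can be pointed at relay-chain data.
```rust
#[frame_support::pallet]
pub mod pallet {
    use frame_support::pallet_prelude::*;
    use sp_runtime::traits::BlockNumberProvider;

    #[pallet::config]
    pub trait Config: frame_system::Config {
        /// Source of the block number: typically the system pallet on a solo or relay
        /// chain, or the relay-chain data provider on a parachain.
        type BlockNumberProvider: BlockNumberProvider;
    }

    #[pallet::pallet]
    pub struct Pallet<T>(_);

    impl<T: Config> Pallet<T> {
        /// Internal logic asks the configured provider rather than `frame_system` directly.
        fn current_block_number() -> <T::BlockNumberProvider as BlockNumberProvider>::BlockNumber {
            T::BlockNumberProvider::current_block_number()
        }
    }
}
```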
-
- Nov 19, 2024
-
-
Bastian Köcher authored
This pull request forwards all the logging directives given to the node via `RUST_LOG` or `-l` to the workers, instead of only forwarding `RUST_LOG`. --------- Co-authored-by:
GitHub Action <action@github.com>
-
Maciej authored
Aims to implement Stage 3 of Validator Disabling as outlined here: https://github.com/paritytech/polkadot-sdk/issues/4359 Features: - [x] New Disabling Strategy (Staking level) - [x] Re-enabling logic (Session level) - [x] More generic disabling decision output - [x] New Disabling Events Testing & Security: - [x] Unit tests - [x] Mock tests - [x] Try-runtime checks - [x] Try-runtime tested on westend snap - [x] Try-runtime CI tests - [ ] Re-enabling Zombienet Test (?) - [ ] SRLabs Audit Closes #4745 Closes #2418 --------- Co-authored-by:
ordian <write@reusable.software> Co-authored-by:
Ankan <10196091+Ank4n@users.noreply.github.com> Co-authored-by:
Tsvetomir Dimitrov <tsvetomir@parity.io>
-
- Nov 18, 2024
-
-
Tsvetomir Dimitrov authored
Since the async backing parameters runtime API is released on all networks, the code in the backing subsystem can be simplified by removing the usages of `ProspectiveParachainsMode` and keeping only the branches of the code under `ProspectiveParachainsMode::Enabled`. The PR does that and reworks the tests in mod.rs to use async backing. It's a preparation for https://github.com/paritytech/polkadot-sdk/issues/5079 --------- Co-authored-by:
Alin Dima <alin@parity.io> Co-authored-by: command-bot <>
-